QEMU's AddressSpace matches vm_memory::GuestAddressSpace very well,
so it is straightforward to implement the vm_memory::GuestAddressSpace
trait for the AddressSpace structure.
Since QEMU's memory accesses go almost entirely through AddressSpace,
also provide high-level memory write/read/store/load interfaces for
use on the Rust side.
Additionally, provide a safe binding for address_space_memory.
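For illustration, a device could then access guest memory roughly like
this (a sketch only, not part of the patch):

    // Write four bytes into guest memory through the global address
    // space, then load them back as a native-endian u32.
    let buf = [0x12u8, 0x34, 0x56, 0x78];
    ADDRESS_SPACE_MEMORY.write(&buf, GuestAddress(0x1000))?;
    let v: u32 = ADDRESS_SPACE_MEMORY.load(GuestAddress(0x1000))?;
    ADDRESS_SPACE_MEMORY.store(GuestAddress(0x1004), v.wrapping_add(1))?;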
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/qemu-api/src/memory.rs | 149 +++++++++++++++++++++++++++++++++---
1 file changed, 140 insertions(+), 9 deletions(-)
diff --git a/rust/qemu-api/src/memory.rs b/rust/qemu-api/src/memory.rs
index 23347f35e5da..42bba23cf3f8 100644
--- a/rust/qemu-api/src/memory.rs
+++ b/rust/qemu-api/src/memory.rs
@@ -3,7 +3,7 @@
// SPDX-License-Identifier: GPL-2.0-or-later
//! Bindings for `MemoryRegion`, `MemoryRegionOps`, `MemTxAttrs`
-//! `MemoryRegionSection` and `FlatView`.
+//! `MemoryRegionSection`, `FlatView` and `AddressSpace`.
use std::{
ffi::{c_uint, c_void, CStr, CString},
@@ -11,7 +11,7 @@
marker::PhantomData,
mem::size_of,
ops::Deref,
- ptr::NonNull,
+ ptr::{addr_of, NonNull},
sync::atomic::Ordering,
};
@@ -19,21 +19,25 @@
pub use bindings::{hwaddr, MemTxAttrs};
pub use vm_memory::GuestAddress;
use vm_memory::{
- bitmap::BS, Address, AtomicAccess, Bytes, GuestMemory, GuestMemoryError, GuestMemoryRegion,
- GuestMemoryResult, GuestUsize, MemoryRegionAddress, ReadVolatile, VolatileSlice, WriteVolatile,
+ bitmap::BS, Address, AtomicAccess, Bytes, GuestAddressSpace, GuestMemory, GuestMemoryError,
+ GuestMemoryRegion, GuestMemoryResult, GuestUsize, MemoryRegionAddress, ReadVolatile,
+ VolatileSlice, WriteVolatile,
};
use crate::{
bindings::{
- self, address_space_lookup_section, device_endian, flatview_ref,
- flatview_translate_section, flatview_unref, memory_region_init_io, section_access_allowed,
- section_covers_region_addr, section_fuzz_dma_read, section_get_host_addr,
- section_rust_load, section_rust_read_continue_step, section_rust_store,
- section_rust_write_continue_step, MEMTX_OK,
+ self, address_space_lookup_section, address_space_memory, address_space_to_flatview,
+ device_endian, flatview_ref, flatview_translate_section, flatview_unref,
+ memory_region_init_io, section_access_allowed, section_covers_region_addr,
+ section_fuzz_dma_read, section_get_host_addr, section_rust_load,
+ section_rust_read_continue_step, section_rust_store, section_rust_write_continue_step,
+ MEMTX_OK,
},
callbacks::FnCall,
cell::Opaque,
+ error::{Error, Result},
prelude::*,
+ rcu::{rcu_read_lock, rcu_read_unlock},
uninit::MaybeUninitField,
zeroable::Zeroable,
};
@@ -1016,3 +1020,130 @@ fn clone(&self) -> Self {
)
}
}
+
+/// A safe wrapper around [`bindings::AddressSpace`].
+///
+/// [`AddressSpace`] is the address space abstraction in QEMU, which
+/// provides access to the guest memory that it manages.
+#[repr(transparent)]
+#[derive(qemu_api_macros::Wrapper)]
+pub struct AddressSpace(Opaque<bindings::AddressSpace>);
+
+unsafe impl Send for AddressSpace {}
+unsafe impl Sync for AddressSpace {}
+
+impl GuestAddressSpace for AddressSpace {
+ type M = FlatView;
+ type T = FlatViewRefGuard;
+
+ /// Get the memory of the [`AddressSpace`].
+ ///
+    /// This function retrieves the [`FlatView`] for the current
+    /// [`AddressSpace`]. It must be called from an RCU critical
+    /// section, and the returned [`FlatView`] is meant for
+    /// short-term memory access.
+    ///
+    /// Note that this method may **panic** if the [`FlatView`] is
+    /// being destroyed. For that case, consider providing a more
+    /// robust binding based on [`bindings::address_space_get_flatview`].
+ fn memory(&self) -> Self::T {
+ let flatp = unsafe { address_space_to_flatview(self.0.as_mut_ptr()) };
+ FlatViewRefGuard::new(unsafe { Self::M::from_raw(flatp) }).expect(
+ "Failed to clone FlatViewRefGuard: the FlatView may have been destroyed concurrently.",
+ )
+ }
+}
+
+/// A helper to convert [`vm_memory::GuestMemoryError`] into
+/// [`crate::error::Error`].
+#[track_caller]
+fn guest_mem_err_to_qemu_err(err: GuestMemoryError) -> Error {
+ match err {
+ GuestMemoryError::InvalidGuestAddress(addr) => {
+ Error::from(format!("Invalid guest address: {:#x}", addr.raw_value()))
+ }
+ GuestMemoryError::InvalidBackendAddress => Error::from("Invalid backend memory address"),
+ GuestMemoryError::GuestAddressOverflow => {
+ Error::from("Guest address addition resulted in an overflow")
+ }
+ GuestMemoryError::CallbackOutOfRange => {
+ Error::from("Callback accessed memory out of range")
+ }
+ GuestMemoryError::IOError(io_err) => Error::with_error("Guest memory I/O error", io_err),
+ other_err => Error::with_error("An unexpected guest memory error occurred", other_err),
+ }
+}
+
+impl AddressSpace {
+ /// The write interface of `AddressSpace`.
+ ///
+    /// This function is similar to `address_space_write` on the C side.
+    ///
+    /// It assumes the memory attributes are `MEMTXATTRS_UNSPECIFIED`.
+ pub fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
+ rcu_read_lock();
+ let r = self.memory().deref().write(buf, addr);
+ rcu_read_unlock();
+ r.map_err(guest_mem_err_to_qemu_err)
+ }
+
+ /// The read interface of `AddressSpace`.
+ ///
+    /// This function is similar to `address_space_read_full` on the C side.
+    ///
+    /// It assumes the memory attributes are `MEMTXATTRS_UNSPECIFIED`.
+    ///
+    /// Note that, unlike `address_space_read` on the C side, this
+    /// function does not implement the fast path.
+ pub fn read(&self, buf: &mut [u8], addr: GuestAddress) -> Result<usize> {
+ rcu_read_lock();
+ let r = self.memory().deref().read(buf, addr);
+ rcu_read_unlock();
+ r.map_err(guest_mem_err_to_qemu_err)
+ }
+
+ /// The store interface of `AddressSpace`.
+ ///
+    /// This function is similar to `address_space_st{size}` on the C side.
+    ///
+    /// It assumes `val` is in target endianness, so make sure the
+    /// endianness of `val` matches the target before calling this method.
+    ///
+    /// It also assumes the memory attributes are `MEMTXATTRS_UNSPECIFIED`.
+ pub fn store<T: AtomicAccess>(&self, addr: GuestAddress, val: T) -> Result<()> {
+ rcu_read_lock();
+ let r = self.memory().deref().store(val, addr, Ordering::Relaxed);
+ rcu_read_unlock();
+ r.map_err(guest_mem_err_to_qemu_err)
+ }
+
+ /// The load interface of `AddressSpace`.
+ ///
+    /// This function is similar to `address_space_ld{size}` on the C side.
+    ///
+    /// It only supports target endianness: the returned value is in
+    /// target endianness.
+    ///
+    /// It also assumes the memory attributes are `MEMTXATTRS_UNSPECIFIED`.
+ pub fn load<T: AtomicAccess>(&self, addr: GuestAddress) -> Result<T> {
+ rcu_read_lock();
+ let r = self.memory().deref().load(addr, Ordering::Relaxed);
+ rcu_read_unlock();
+ r.map_err(guest_mem_err_to_qemu_err)
+ }
+}
+
+/// The safe binding around [`bindings::address_space_memory`].
+///
+/// `ADDRESS_SPACE_MEMORY` provides the complete address space
+/// abstraction covering the whole guest memory.
+pub static ADDRESS_SPACE_MEMORY: &AddressSpace = unsafe {
+ let ptr: *const bindings::AddressSpace = addr_of!(address_space_memory);
+
+ // SAFETY: AddressSpace is #[repr(transparent)].
+ let wrapper_ptr: *const AddressSpace = ptr.cast();
+
+    // SAFETY: the `address_space_memory` structure on the C side is
+    // valid for the whole lifetime of QEMU.
+ &*wrapper_ptr
+};
--
2.34.1
On 8/7/25 14:30, Zhao Liu wrote:
> +impl GuestAddressSpace for AddressSpace {
> + type M = FlatView;
> + type T = FlatViewRefGuard;
> +
> + /// Get the memory of the [`AddressSpace`].
> + ///
> +    /// This function retrieves the [`FlatView`] for the current
> +    /// [`AddressSpace`]. It must be called from an RCU critical
> +    /// section, and the returned [`FlatView`] is meant for
> +    /// short-term memory access.
> +    ///
> +    /// Note that this method may **panic** if the [`FlatView`] is
> +    /// being destroyed. For that case, consider providing a more
> +    /// robust binding based on [`bindings::address_space_get_flatview`].
> + fn memory(&self) -> Self::T {
> + let flatp = unsafe { address_space_to_flatview(self.0.as_mut_ptr()) };
> + FlatViewRefGuard::new(unsafe { Self::M::from_raw(flatp) }).expect(
> + "Failed to clone FlatViewRefGuard: the FlatView may have been destroyed concurrently.",
> + )
This is essentially address_space_get_flatview(). You can call it
directly, or you need to loop if FlatViewRefGuard finds a zero reference
count.
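Something like this sketch, assuming FlatViewRefGuard::new returns
None when the reference count has already dropped to zero:

    fn memory(&self) -> Self::T {
        loop {
            let flatp = unsafe { address_space_to_flatview(self.0.as_mut_ptr()) };
            // Retry if the FlatView was destroyed concurrently between
            // reading the pointer and taking a reference to it.
            if let Some(guard) = FlatViewRefGuard::new(unsafe { Self::M::from_raw(flatp) }) {
                return guard;
            }
        }
    }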
> + }
> +}
> +
> +impl AddressSpace {
> + /// The write interface of `AddressSpace`.
> + ///
> +    /// This function is similar to `address_space_write` on the C side.
> +    ///
> +    /// It assumes the memory attributes are `MEMTXATTRS_UNSPECIFIED`.
> + pub fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
> + rcu_read_lock();
> + let r = self.memory().deref().write(buf, addr);
> + rcu_read_unlock();
self.memory() should not need rcu_read_lock()/rcu_read_unlock() around
it; they should be called by the memory() function itself.
> + r.map_err(guest_mem_err_to_qemu_err)
> + }
I think it's ok to return the vm-memory error. Ultimately, the error
will be either ignored or turned into a device error condition, but I
don't think it's ever going to become an Error**.
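Putting the two together, write() could shrink to roughly:

    pub fn write(&self, buf: &[u8], addr: GuestAddress) -> GuestMemoryResult<usize> {
        // memory() enters and leaves the RCU critical section itself
        // and returns a reference-counted FlatView guard, so no
        // explicit rcu_read_lock()/rcu_read_unlock() is needed here.
        self.memory().deref().write(buf, addr)
    }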
> + /// The store interface of `AddressSpace`.
> + ///
> +    /// This function is similar to `address_space_st{size}` on the C side.
> +    ///
> +    /// It assumes `val` is in target endianness, so make sure the
> +    /// endianness of `val` matches the target before calling this method.
QEMU is trying to get rid of target endianness. We should use the
vm-memory BeNN and LeNN as much as possible. It would be great if you
could write either
ADDRESS_SPACE_MEMORY.store::<Le32>(addr, 42);
or
let n = Le32(42);
ADDRESS_SPACE_MEMORY.store(addr, n);
but not
ADDRESS_SPACE_MEMORY.store(addr, 42);
(Also I've not looked at the patches closely enough, but wouldn't
store() use *host* endianness? Same in patch 23).
Paolo
> +    /// It also assumes the memory attributes are `MEMTXATTRS_UNSPECIFIED`.
> + pub fn store<T: AtomicAccess>(&self, addr: GuestAddress, val: T) -> Result<()> {
> + rcu_read_lock();
> + let r = self.memory().deref().store(val, addr, Ordering::Relaxed);
> + rcu_read_unlock();
> + r.map_err(guest_mem_err_to_qemu_err)
> + }
> +
> + /// The load interface of `AddressSpace`.
> + ///
> +    /// This function is similar to `address_space_ld{size}` on the C side.
> +    ///
> +    /// It only supports target endianness: the returned value is in
> +    /// target endianness.
> +    ///
> +    /// It also assumes the memory attributes are `MEMTXATTRS_UNSPECIFIED`.
> + pub fn load<T: AtomicAccess>(&self, addr: GuestAddress) -> Result<T> {
> + rcu_read_lock();
> + let r = self.memory().deref().load(addr, Ordering::Relaxed);
> + rcu_read_unlock();
> + r.map_err(guest_mem_err_to_qemu_err)
> + }
> +}
> +
> +/// The safe binding around [`bindings::address_space_memory`].
> +///
> +/// `ADDRESS_SPACE_MEMORY` provides the complete address space
> +/// abstraction covering the whole guest memory.
> +pub static ADDRESS_SPACE_MEMORY: &AddressSpace = unsafe {
> + let ptr: *const bindings::AddressSpace = addr_of!(address_space_memory);
> +
> + // SAFETY: AddressSpace is #[repr(transparent)].
> + let wrapper_ptr: *const AddressSpace = ptr.cast();
> +
> +    // SAFETY: the `address_space_memory` structure on the C side is
> +    // valid for the whole lifetime of QEMU.
> + &*wrapper_ptr
> +};
On Thu, Aug 07, 2025 at 03:50:45PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:50:45 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 24/26] rust/memory: Provide AddressSpace bindings
>
> On 8/7/25 14:30, Zhao Liu wrote:
> > +impl GuestAddressSpace for AddressSpace {
> > + type M = FlatView;
> > + type T = FlatViewRefGuard;
> > +
> > + /// Get the memory of the [`AddressSpace`].
> > + ///
> > +    /// This function retrieves the [`FlatView`] for the current
> > +    /// [`AddressSpace`]. It must be called from an RCU critical
> > +    /// section, and the returned [`FlatView`] is meant for
> > +    /// short-term memory access.
> > +    ///
> > +    /// Note that this method may **panic** if the [`FlatView`] is
> > +    /// being destroyed. For that case, consider providing a more
> > +    /// robust binding based on [`bindings::address_space_get_flatview`].
> > + fn memory(&self) -> Self::T {
> > + let flatp = unsafe { address_space_to_flatview(self.0.as_mut_ptr()) };
> > + FlatViewRefGuard::new(unsafe { Self::M::from_raw(flatp) }).expect(
> > + "Failed to clone FlatViewRefGuard: the FlatView may have been destroyed concurrently.",
> > + )
>
> This is essentially address_space_get_flatview(). You can call it directly,
> or you need to loop if FlatViewRefGuard finds a zero reference count.
Yes. Here address_space_get_flatview() is better.
> > + }
> > +}
> > +
> > +impl AddressSpace {
> > + /// The write interface of `AddressSpace`.
> > + ///
> > +    /// This function is similar to `address_space_write` on the C side.
> > +    ///
> > +    /// It assumes the memory attributes are `MEMTXATTRS_UNSPECIFIED`.
> > + pub fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
> > + rcu_read_lock();
> > + let r = self.memory().deref().write(buf, addr);
> > + rcu_read_unlock();
>
> self.memory() should not need rcu_read_lock()/rcu_read_unlock() around
> it; they should be called by the memory() function itself.
Ah, then RCU just ensures the &FlatView is valid, since we increment
its ref count during the RCU critical section.
But RCU will no longer cover the entire write process!
Combining this RcuGuard proposal in the reply of patch 13:
https://lore.kernel.org/qemu-devel/aJsX9HH%2FJwblZEYO@intel.com/
impl AddressSpace {
    pub fn get_flatview<'g>(&self, rcu: &'g RcuGuard) -> &'g FlatView {
        let flatp = RcuCell::new(self.deref().current_map);
        unsafe { FlatView::from_raw(flatp.get(rcu).unwrap()) }
    }
}

impl GuestAddressSpace for AddressSpace {
    fn memory(&self) -> Self::T {
        let rcu = RcuGuard::new();
        FlatViewRefGuard::new(self.get_flatview(&rcu)).unwrap()
    }
}
`rcu` is dropped at the end of memory(), so `&'g RcuGuard` is not
enough for this case.
> > + r.map_err(guest_mem_err_to_qemu_err)
> > + }
>
> I think it's ok to return the vm-memory error. Ultimately, the error will
> be either ignored or turned into a device error condition, but I don't think
> it's ever going to become an Error**.
Sure, and for HPET, the error isn't handled except by panicking...
> > + /// The store interface of `AddressSpace`.
> > + ///
> > +    /// This function is similar to `address_space_st{size}` on the C side.
> > +    ///
> > +    /// It assumes `val` is in target endianness, so make sure the
> > +    /// endianness of `val` matches the target before calling this method.
>
> QEMU is trying to get rid of target endianness. We should use the vm-memory
> BeNN and LeNN as much as possible. It would be great if you could write
> either
Yes, this is the ideal way.
This will involve changes in both vm-memory and QEMU:
* vm-memory: we need to implement the AtomicAccess trait for BeNN and
  LeNN in vm-memory (but this is not a big deal).
* For QEMU: to handle AtomicAccess, I've abstracted a uniform C store()
  binding in patch 21:
MemTxResult section_rust_store(MemoryRegionSection *section,
hwaddr mr_offset, const uint8_t *buf,
MemTxAttrs attrs, hwaddr len);
If you haven't looked at this, you can see the comments in:
impl Bytes<MemoryRegionAddress> for MemoryRegionSection {
    fn store<T: AtomicAccess>(
        &self,
        val: T,
        addr: MemoryRegionAddress,
        _order: Ordering,
    ) -> GuestMemoryResult<()> {}
}
section_rust_store() supports target endianness by default, like
address_space_st(). If we want to add LE & BE support, I think we have
2 options:
1) Add another endianness argument to section_rust_store(), but this
also requires passing endianness information through the Bytes trait.
Either we need to implement Bytes<(MemoryRegionAddress, device_endian)>,
or we need to add endianness info to AtomicAccess.
2) Simplify section_rust_store() further: ignore endianness entirely
and just store the data from *buf to MMIO/RAM. For this approach, we
need to make adjust_endianness() do nothing:
section_rust_store()
-> memory_region_dispatch_write()
-> adjust_endianness()
However, adjust_endianness() is still very useful, especially since in
QEMU the caller of store() doesn't know whether the access targets
MMIO or RAM.
So I prefer 1) for now, and maybe it's better to add the endianness
info to AtomicAccess.
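As a rough sketch of 1) -- all names below are made up for
illustration, not vm-memory's actual API:

    use vm_memory::{Be32, Le32};

    /// Hypothetical marker for the on-the-wire endianness of an access
    /// type; section_rust_store() could take this as an extra argument.
    pub enum WireEndian {
        Little,
        Big,
        Native, // "same endianness as guest CPU"
    }

    /// Hypothetical companion trait: every storable type declares its
    /// endianness, so store::<Le32>() can forward it to the C side.
    pub trait EndianInfo {
        const ENDIAN: WireEndian;
    }

    impl EndianInfo for Le32 {
        const ENDIAN: WireEndian = WireEndian::Little;
    }

    impl EndianInfo for Be32 {
        const ENDIAN: WireEndian = WireEndian::Big;
    }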
> ADDRESS_SPACE_MEMORY.store::<Le32>(addr, 42);
>
> or
>
> let n = Le32(42);
> ADDRESS_SPACE_MEMORY.store(addr, n);
>
> but not
>
> ADDRESS_SPACE_MEMORY.store(addr, 42);
Yes, this is similar to my previous attempt... but I didn't know the
best way to handle LE/BE, so this RFC just omitted those cases and
chose the simplest one: native endianness.
> (Also I've not looked at the patches closely enough, but wouldn't store()
> use *host* endianness? Same in patch 23).
It seems QEMU's interfaces don't use *host* endianness?
I'm referring to address_space_ld*() & address_space_st*(), whose doc
says:
/* address_space_ld*: load from an address space
* address_space_st*: store to an address space
*
* These functions perform a load or store of the byte, word,
* longword or quad to the specified address within the AddressSpace.
* The _le suffixed functions treat the data as little endian;
* _be indicates big endian; no suffix indicates "same endianness
* as guest CPU".
*
* The "guest CPU endianness" accessors are deprecated for use outside
* target-* code; devices should be CPU-agnostic and use either the LE
* or the BE accessors.
*/
I also considered host endianness. But host endianness doesn't align
with the C side... the C side only supports little/big/native
(target/guest) endianness.
So, do you think the Rust side should consider host endianness? Or
maybe we could add DEVICE_HOST_ENDIAN to device_endian. But is there a
way to check what the host's endianness is?
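(On that last point, Rust can tell us at compile time, e.g.:

    const HOST_IS_LE: bool = cfg!(target_endian = "little");

since the qemu-api crate is compiled for the host that QEMU runs on.)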
Thanks,
Zhao