Date: Mon, 11 Mar 2024 10:47:13 +0000
In-Reply-To: <20240311-alice-mm-v3-0-cdf7b3a2049c@google.com>
Message-ID: <20240311-alice-mm-v3-1-cdf7b3a2049c@google.com>
Subject: [PATCH v3 1/4] rust: uaccess: add userspace pointers
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl, Christian Brauner

From: Wedson Almeida Filho

A pointer to an area in userspace memory, which can be either read-only
or read-write.

All methods on this struct are safe: invalid pointers return `EFAULT`.
Concurrent access, *including data races to/from userspace memory*, is
permitted, because fundamentally another userspace thread/process could
always be modifying memory at the same time (in the same way that
userspace Rust's `std::io` permits data races with the contents of files
on disk). In the presence of a race, the exact byte values read/written
are unspecified but the operation is well-defined. Kernelspace code
should validate its copy of data after completing a read, and not expect
that multiple reads of the same address will return the same value.

These APIs are designed to make it difficult to accidentally write TOCTOU
bugs. Every time you read from a memory location, the pointer is advanced
by the length so that you cannot use that reader to read the same memory
location twice. Preventing double-fetches avoids TOCTOU bugs.
This is accomplished by taking `self` by value to prevent obtaining
multiple readers on a given `UserSlice`, and by having the readers only
permit forward reads. If double-fetching a memory location is necessary
for some reason, then that is done by creating multiple readers to the
same memory location.

Constructing a `UserSlice` performs no checks on the provided address and
length; it can safely be constructed inside a kernel thread with no
current userspace process. Reads and writes wrap the kernel APIs
`copy_from_user` and `copy_to_user`, which check the memory map of the
current process and enforce that the address range is within the user
range (no additional calls to `access_ok` are needed).

This code is based on something that was originally written by Wedson on
the old rust branch. It was modified by Alice by removing the
`IoBufferReader` and `IoBufferWriter` traits, and various other changes.

Signed-off-by: Wedson Almeida Filho
Co-developed-by: Alice Ryhl
Signed-off-by: Alice Ryhl
Reviewed-by: Benno Lossin
---
 rust/helpers.c         |  14 +++
 rust/kernel/lib.rs     |   1 +
 rust/kernel/uaccess.rs | 315 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 330 insertions(+)

diff --git a/rust/helpers.c b/rust/helpers.c
index 70e59efd92bc..312b6fcb49d5 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -38,6 +38,20 @@ __noreturn void rust_helper_BUG(void)
 }
 EXPORT_SYMBOL_GPL(rust_helper_BUG);
 
+unsigned long rust_helper_copy_from_user(void *to, const void __user *from,
+					 unsigned long n)
+{
+	return copy_from_user(to, from, n);
+}
+EXPORT_SYMBOL_GPL(rust_helper_copy_from_user);
+
+unsigned long rust_helper_copy_to_user(void __user *to, const void *from,
+				       unsigned long n)
+{
+	return copy_to_user(to, from, n);
+}
+EXPORT_SYMBOL_GPL(rust_helper_copy_to_user);
+
 void rust_helper_mutex_lock(struct mutex *lock)
 {
 	mutex_lock(lock);
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index be68d5e567b1..37f84223b83f 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -49,6 +49,7 @@
 pub mod task;
 pub mod time;
 pub mod types;
+pub mod uaccess;
 pub mod workqueue;
 
 #[doc(hidden)]
diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
new file mode 100644
index 000000000000..020f3847683f
--- /dev/null
+++ b/rust/kernel/uaccess.rs
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! User pointers.
+//!
+//! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h)
+
+use crate::{bindings, error::code::*, error::Result};
+use alloc::vec::Vec;
+use core::ffi::{c_ulong, c_void};
+
+/// A pointer to an area in userspace memory, which can be either read-only or
+/// read-write.
+///
+/// All methods on this struct are safe: attempting to read or write invalid
+/// pointers will return `EFAULT`. Concurrent access, *including data races
+/// to/from userspace memory*, is permitted, because fundamentally another
+/// userspace thread/process could always be modifying memory at the same time
+/// (in the same way that userspace Rust's [`std::io`] permits data races with
+/// the contents of files on disk). In the presence of a race, the exact byte
+/// values read/written are unspecified but the operation is well-defined.
+/// Kernelspace code should validate its copy of data after completing a read,
+/// and not expect that multiple reads of the same address will return the same
+/// value.
+///
+/// These APIs are designed to make it difficult to accidentally write TOCTOU
+/// (time-of-check to time-of-use) bugs.
Every time a memory location is r= ead, +/// the reader's position is advanced by the read length and the next read= will +/// start from there. This helps prevent accidentally reading the same loc= ation +/// twice and causing a TOCTOU bug. +/// +/// Creating a [`UserSliceReader`] and/or [`UserSliceWriter`] consumes the +/// `UserSlice`, helping ensure that there aren't multiple readers or writ= ers to +/// the same location. +/// +/// If double-fetching a memory location is necessary for some reason, the= n that +/// is done by creating multiple readers to the same memory location, e.g.= using +/// [`clone_reader`]. +/// +/// # Examples +/// +/// Takes a region of userspace memory from the current process, and modif= y it +/// by adding one to every byte in the region. +/// +/// ```no_run +/// use alloc::vec::Vec; +/// use core::ffi::c_void; +/// use kernel::error::Result; +/// use kernel::uaccess::UserSlice; +/// +/// pub fn bytes_add_one(uptr: *mut c_void, len: usize) -> Result<()> { +/// let (read, mut write) =3D UserSlice::new(uptr, len).reader_writer(= ); +/// +/// let mut buf =3D Vec::new(); +/// read.read_all(&mut buf)?; +/// +/// for b in &mut buf { +/// *b =3D b.wrapping_add(1); +/// } +/// +/// write.write_slice(&buf)?; +/// Ok(()) +/// } +/// ``` +/// +/// Example illustrating a TOCTOU (time-of-check to time-of-use) bug. +/// +/// ```no_run +/// use alloc::vec::Vec; +/// use core::ffi::c_void; +/// use kernel::error::{code::EINVAL, Result}; +/// use kernel::uaccess::UserSlice; +/// +/// /// Returns whether the data in this region is valid. +/// fn is_valid(uptr: *mut c_void, len: usize) -> Result { +/// let read =3D UserSlice::new(uptr, len).reader(); +/// +/// let mut buf =3D Vec::new(); +/// read.read_all(&mut buf)?; +/// +/// todo!() +/// } +/// +/// /// Returns the bytes behind this user pointer if they are valid. +/// pub fn get_bytes_if_valid(uptr: *mut c_void, len: usize) -> Result> { +/// if !is_valid(uptr, len)? { +/// return Err(EINVAL); +/// } +/// +/// let read =3D UserSlice::new(uptr, len).reader(); +/// +/// let mut buf =3D Vec::new(); +/// read.read_all(&mut buf)?; +/// +/// // THIS IS A BUG! The bytes could have changed since we checked th= em. +/// // +/// // To avoid this kind of bug, don't call `UserSlice::new` multiple +/// // times with the same address. +/// Ok(buf) +/// } +/// ``` +/// +/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html +/// [`clone_reader`]: UserSliceReader::clone_reader +pub struct UserSlice { + ptr: *mut c_void, + length: usize, +} + +impl UserSlice { + /// Constructs a user slice from a raw pointer and a length in bytes. + /// + /// Constructing a [`UserSlice`] performs no checks on the provided ad= dress + /// and length, it can safely be constructed inside a kernel thread wi= th no + /// current userspace process. Reads and writes wrap the kernel APIs + /// `copy_from_user` and `copy_to_user`, which check the memory map of= the + /// current process and enforce that the address range is within the u= ser + /// range (no additional calls to `access_ok` are needed). + /// + /// Callers must be careful to avoid time-of-check-time-of-use + /// (TOCTOU) issues. The simplest way is to create a single instance of + /// [`UserSlice`] per user memory block as it reads each byte at + /// most once. + pub fn new(ptr: *mut c_void, length: usize) -> Self { + UserSlice { ptr, length } + } + + /// Reads the entirety of the user slice, appending it to the end of t= he + /// provided buffer. 
+ /// + /// Fails with `EFAULT` if the read encounters a page fault. + pub fn read_all(self, buf: &mut Vec) -> Result { + self.reader().read_all(buf) + } + + /// Constructs a [`UserSliceReader`]. + pub fn reader(self) -> UserSliceReader { + UserSliceReader { + ptr: self.ptr, + length: self.length, + } + } + + /// Constructs a [`UserSliceWriter`]. + pub fn writer(self) -> UserSliceWriter { + UserSliceWriter { + ptr: self.ptr, + length: self.length, + } + } + + /// Constructs both a [`UserSliceReader`] and a [`UserSliceWriter`]. + /// + /// Usually when this is used, you will first read the data, and then + /// overwrite it afterwards. + pub fn reader_writer(self) -> (UserSliceReader, UserSliceWriter) { + ( + UserSliceReader { + ptr: self.ptr, + length: self.length, + }, + UserSliceWriter { + ptr: self.ptr, + length: self.length, + }, + ) + } +} + +/// A reader for [`UserSlice`]. +/// +/// Used to incrementally read from the user slice. +pub struct UserSliceReader { + ptr: *mut c_void, + length: usize, +} + +impl UserSliceReader { + /// Skip the provided number of bytes. + /// + /// Returns an error if skipping more than the length of the buffer. + pub fn skip(&mut self, num_skip: usize) -> Result { + // Update `self.length` first since that's the fallible part of th= is + // operation. + self.length =3D self.length.checked_sub(num_skip).ok_or(EFAULT)?; + self.ptr =3D self.ptr.wrapping_byte_add(num_skip); + Ok(()) + } + + /// Create a reader that can access the same range of data. + /// + /// Reading from the clone does not advance the current reader. + /// + /// The caller should take care to not introduce TOCTOU issues, as des= cribed + /// in the documentation for [`UserSlice`]. + pub fn clone_reader(&self) -> UserSliceReader { + UserSliceReader { + ptr: self.ptr, + length: self.length, + } + } + + /// Returns the number of bytes left to be read from this reader. + /// + /// Note that even reading less than this number of bytes may fail. + pub fn len(&self) -> usize { + self.length + } + + /// Returns `true` if no data is available in the io buffer. + pub fn is_empty(&self) -> bool { + self.length =3D=3D 0 + } + + /// Reads raw data from the user slice into a raw kernel buffer. + /// + /// Fails with `EFAULT` if the read encounters a page fault. + /// + /// # Safety + /// + /// The `out` pointer must be valid for writing `len` bytes. + pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result { + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) =3D c_ulong::try_from(len) else { + return Err(EFAULT); + }; + // SAFETY: The caller promises that `out` is valid for writing `le= n` bytes. + let res =3D unsafe { bindings::copy_from_user(out.cast::()= , self.ptr, len_ulong) }; + if res !=3D 0 { + return Err(EFAULT); + } + // Userspace pointers are not directly dereferencable by the kerne= l, so + // we cannot use `add`, which has C-style rules for defined behavi= or. + self.ptr =3D self.ptr.wrapping_byte_add(len); + self.length -=3D len; + Ok(()) + } + + /// Reads the entirety of the user slice, appending it to the end of t= he + /// provided buffer. + /// + /// Fails with `EFAULT` if the read encounters a page fault. + pub fn read_all(mut self, buf: &mut Vec) -> Result { + let len =3D self.length; + buf.try_reserve(len)?; + + // SAFETY: The call to `try_reserve` was successful, so the spare + // capacity is at least `len` bytes long. + unsafe { self.read_raw(buf.spare_capacity_mut().as_mut_ptr().cast(= ), len)? 
}; + + // SAFETY: Since the call to `read_raw` was successful, so the next + // `len` bytes of the vector have been initialized. + unsafe { buf.set_len(buf.len() + len) }; + Ok(()) + } +} + +/// A writer for [`UserSlice`]. +/// +/// Used to incrementally write into the user slice. +pub struct UserSliceWriter { + ptr: *mut c_void, + length: usize, +} + +impl UserSliceWriter { + /// Returns the amount of space remaining in this buffer. + /// + /// Note that even writing less than this number of bytes may fail. + pub fn len(&self) -> usize { + self.length + } + + /// Returns `true` if no more data can be written to this buffer. + pub fn is_empty(&self) -> bool { + self.length =3D=3D 0 + } + + /// Writes raw data to this user pointer from a raw kernel buffer. + /// + /// Fails with `EFAULT` if the write encounters a page fault. + /// + /// # Safety + /// + /// The `data` pointer must be valid for reading `len` bytes. + pub unsafe fn write_raw(&mut self, data: *const u8, len: usize) -> Res= ult { + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) =3D c_ulong::try_from(len) else { + return Err(EFAULT); + }; + let res =3D unsafe { bindings::copy_to_user(self.ptr, data.cast::<= c_void>(), len_ulong) }; + if res !=3D 0 { + return Err(EFAULT); + } + // Userspace pointers are not directly dereferencable by the kerne= l, so + // we cannot use `add`, which has C-style rules for defined behavi= or. + self.ptr =3D self.ptr.wrapping_byte_add(len); + self.length -=3D len; + Ok(()) + } + + /// Writes the provided slice to this user pointer. + /// + /// Fails with `EFAULT` if the write encounters a page fault. + pub fn write_slice(&mut self, data: &[u8]) -> Result { + let len =3D data.len(); + let ptr =3D data.as_ptr(); + // SAFETY: The pointer originates from a reference to a slice of l= ength + // `len`, so the pointer is valid for reading `len` bytes. 
+        unsafe { self.write_raw(ptr, len) }
+    }
+}
--
2.44.0.278.ge034bb2e1d-goog
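A minimal sketch of the deliberate double-fetch pattern mentioned in the
documentation above: re-reading a range is only possible by explicitly
cloning the reader first. The `read_twice` helper is hypothetical and not
part of the patch; the two copies must be treated as potentially
different, since userspace may race with the reads.

```rust
use alloc::vec::Vec;
use core::ffi::c_void;
use kernel::error::Result;
use kernel::uaccess::UserSlice;

fn read_twice(uptr: *mut c_void, len: usize) -> Result<(Vec<u8>, Vec<u8>)> {
    let reader = UserSlice::new(uptr, len).reader();
    // Take a second reader over the same range before the first is consumed.
    let second = reader.clone_reader();

    let mut first_copy = Vec::new();
    reader.read_all(&mut first_copy)?;

    let mut second_copy = Vec::new();
    second.read_all(&mut second_copy)?;

    // The two buffers may differ if userspace modified the memory in between.
    Ok((first_copy, second_copy))
}
```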
Date: Mon, 11 Mar 2024 10:47:14 +0000
In-Reply-To: <20240311-alice-mm-v3-0-cdf7b3a2049c@google.com>
Message-ID: <20240311-alice-mm-v3-2-cdf7b3a2049c@google.com>
Subject: [PATCH v3 2/4] uaccess: always export _copy_[from|to]_user with CONFIG_RUST
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl, Christian Brauner

From: Arnd Bergmann

Rust code needs to be able to access _copy_from_user and _copy_to_user so
that it can skip the check_copy_size check in cases where the length is
known at compile time, mirroring the logic for when C code will skip
check_copy_size. To do this, we ensure that exported versions of these
functions are available when CONFIG_RUST is enabled.

Alice has verified that this patch passes the CONFIG_TEST_USER_COPY test
on x86 using the Android cuttlefish emulator.
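To make the motivation concrete, here is a rough sketch of the fast path
this export enables, written as it would look inside the `kernel` crate
(it condenses what the typed accessor in the next patch does; the
`read_u64_from_user` helper is illustrative only):

```rust
use crate::{bindings, error::code::EFAULT, error::Result};
use core::ffi::{c_ulong, c_void};
use core::mem::{size_of, MaybeUninit};

fn read_u64_from_user(uptr: *const c_void) -> Result<u64> {
    let mut out = MaybeUninit::<u64>::uninit();
    // The length is a compile-time constant, so check_object_size has nothing
    // useful to verify here; call the exported _copy_from_user directly.
    let len = size_of::<u64>() as c_ulong;
    // SAFETY: `out` is valid for writing `size_of::<u64>()` bytes.
    let res = unsafe { bindings::_copy_from_user(out.as_mut_ptr().cast::<c_void>(), uptr, len) };
    if res != 0 {
        return Err(EFAULT);
    }
    // SAFETY: the successful copy initialized every byte of `out`.
    Ok(unsafe { out.assume_init() })
}
```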
Signed-off-by: Arnd Bergmann Tested-by: Alice Ryhl Signed-off-by: Alice Ryhl Reviewed-by: Boqun Feng --- include/linux/uaccess.h | 38 ++++++++++++++++++++++++-------------- lib/usercopy.c | 30 ++++-------------------------- 2 files changed, 28 insertions(+), 40 deletions(-) diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h index 3064314f4832..2ebfce98b5cc 100644 --- a/include/linux/uaccess.h +++ b/include/linux/uaccess.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include =20 @@ -138,13 +139,18 @@ __copy_to_user(void __user *to, const void *from, uns= igned long n) return raw_copy_to_user(to, from, n); } =20 -#ifdef INLINE_COPY_FROM_USER static inline __must_check unsigned long -_copy_from_user(void *to, const void __user *from, unsigned long n) +_inline_copy_from_user(void *to, const void __user *from, unsigned long n) { unsigned long res =3D n; might_fault(); if (!should_fail_usercopy() && likely(access_ok(from, n))) { + /* + * Ensure that bad access_ok() speculation will not + * lead to nasty side effects *after* the copy is + * finished: + */ + barrier_nospec(); instrument_copy_from_user_before(to, from, n); res =3D raw_copy_from_user(to, from, n); instrument_copy_from_user_after(to, from, n, res); @@ -153,14 +159,11 @@ _copy_from_user(void *to, const void __user *from, un= signed long n) memset(to + (n - res), 0, res); return res; } -#else extern __must_check unsigned long _copy_from_user(void *, const void __user *, unsigned long); -#endif =20 -#ifdef INLINE_COPY_TO_USER static inline __must_check unsigned long -_copy_to_user(void __user *to, const void *from, unsigned long n) +_inline_copy_to_user(void __user *to, const void *from, unsigned long n) { might_fault(); if (should_fail_usercopy()) @@ -171,25 +174,32 @@ _copy_to_user(void __user *to, const void *from, unsi= gned long n) } return n; } -#else extern __must_check unsigned long _copy_to_user(void __user *, const void *, unsigned long); -#endif =20 static __always_inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n) { - if (check_copy_size(to, n, false)) - n =3D _copy_from_user(to, from, n); - return n; + if (!check_copy_size(to, n, false)) + return n; +#ifdef INLINE_COPY_FROM_USER + return _inline_copy_from_user(to, from, n); +#else + return _copy_from_user(to, from, n); +#endif } =20 static __always_inline unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n) { - if (check_copy_size(from, n, true)) - n =3D _copy_to_user(to, from, n); - return n; + if (!check_copy_size(from, n, true)) + return n; + +#ifdef INLINE_COPY_TO_USER + return _inline_copy_to_user(to, from, n); +#else + return _copy_to_user(to, from, n); +#endif } =20 #ifndef copy_mc_to_kernel diff --git a/lib/usercopy.c b/lib/usercopy.c index d29fe29c6849..de7f30618293 100644 --- a/lib/usercopy.c +++ b/lib/usercopy.c @@ -7,40 +7,18 @@ =20 /* out-of-line parts */ =20 -#ifndef INLINE_COPY_FROM_USER +#if !defined(INLINE_COPY_FROM_USER) || defined(CONFIG_RUST) unsigned long _copy_from_user(void *to, const void __user *from, unsigned = long n) { - unsigned long res =3D n; - might_fault(); - if (!should_fail_usercopy() && likely(access_ok(from, n))) { - /* - * Ensure that bad access_ok() speculation will not - * lead to nasty side effects *after* the copy is - * finished: - */ - barrier_nospec(); - instrument_copy_from_user_before(to, from, n); - res =3D raw_copy_from_user(to, from, n); - instrument_copy_from_user_after(to, from, n, res); - } - if 
(unlikely(res))
-		memset(to + (n - res), 0, res);
-	return res;
+	return _inline_copy_from_user(to, from, n);
 }
 EXPORT_SYMBOL(_copy_from_user);
 #endif
 
-#ifndef INLINE_COPY_TO_USER
+#if !defined(INLINE_COPY_TO_USER) || defined(CONFIG_RUST)
 unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	might_fault();
-	if (should_fail_usercopy())
-		return n;
-	if (likely(access_ok(to, n))) {
-		instrument_copy_to_user(to, from, n);
-		n = raw_copy_to_user(to, from, n);
-	}
-	return n;
+	return _inline_copy_to_user(to, from, n);
 }
 EXPORT_SYMBOL(_copy_to_user);
 #endif
--
2.44.0.278.ge034bb2e1d-goog
Date: Mon, 11 Mar 2024 10:47:15 +0000
In-Reply-To: <20240311-alice-mm-v3-0-cdf7b3a2049c@google.com>
Message-ID: <20240311-alice-mm-v3-3-cdf7b3a2049c@google.com>
Subject: [PATCH v3 3/4] rust: uaccess: add typed accessors for userspace pointers
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl, Christian Brauner

Add safe methods for reading and writing Rust values to and from
userspace pointers.

The C functions for copying to/from userspace use a function called
`check_object_size` to verify that the kernel pointer is not dangling.
However, this check is skipped when the length is a compile-time
constant, with the assumption that such cases trivially have a correct
kernel pointer.
In this patch, we apply the same optimization to the typed accessors. For both methods, the size of the operation is known at compile time to be size_of of the type being read or written. Since the C side doesn't provide a variant that skips only this check, we create custom helpers for this purpose. The majority of reads and writes to userspace pointers in the Rust Binder driver uses these accessor methods. Benchmarking has found that skipping the `check_object_size` check makes a big difference for the cases being skipped here. (And that the check doesn't make a difference for the cases that use the raw read/write methods.) This code is based on something that was originally written by Wedson on the old rust branch. It was modified by Alice to skip the `check_object_size` check, and to update various comments, including the notes about kernel pointers in `WritableToBytes`. Co-developed-by: Wedson Almeida Filho Signed-off-by: Wedson Almeida Filho Signed-off-by: Alice Ryhl Reviewed-by: Benno Lossin Reviewed-by: Boqun Feng --- rust/kernel/types.rs | 67 ++++++++++++++++++++++++++++++++++++++++++++ rust/kernel/uaccess.rs | 75 ++++++++++++++++++++++++++++++++++++++++++++++= +++- 2 files changed, 141 insertions(+), 1 deletion(-) diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs index aa77bad9bce4..f72b82efdbfa 100644 --- a/rust/kernel/types.rs +++ b/rust/kernel/types.rs @@ -409,3 +409,70 @@ pub enum Either { /// Constructs an instance of [`Either`] containing a value of type `R= `. Right(R), } + +/// Types for which any bit pattern is valid. +/// +/// Not all types are valid for all values. For example, a `bool` must be = either +/// zero or one, so reading arbitrary bytes into something that contains a +/// `bool` is not okay. +/// +/// It's okay for the type to have padding, as initializing those bytes ha= s no +/// effect. +/// +/// # Safety +/// +/// All bit-patterns must be valid for this type. +pub unsafe trait FromBytes {} + +// SAFETY: All bit patterns are acceptable values of the types below. +unsafe impl FromBytes for u8 {} +unsafe impl FromBytes for u16 {} +unsafe impl FromBytes for u32 {} +unsafe impl FromBytes for u64 {} +unsafe impl FromBytes for usize {} +unsafe impl FromBytes for i8 {} +unsafe impl FromBytes for i16 {} +unsafe impl FromBytes for i32 {} +unsafe impl FromBytes for i64 {} +unsafe impl FromBytes for isize {} +// SAFETY: If all bit patterns are acceptable for individual values in an = array, +// then all bit patterns are also acceptable for arrays of that type. +unsafe impl FromBytes for [T] {} +unsafe impl FromBytes for [T; N] {} + +/// Types that can be viewed as an immutable slice of initialized bytes. +/// +/// If a struct implements this trait, then it is okay to copy it byte-for= -byte +/// to userspace. This means that it should not have any padding, as paddi= ng +/// bytes are uninitialized. Reading uninitialized memory is not just unde= fined +/// behavior, it may even lead to leaking sensitive information on the sta= ck to +/// userspace. +/// +/// The struct should also not hold kernel pointers, as kernel pointer add= resses +/// are also considered sensitive. However, leaking kernel pointers is not +/// considered undefined behavior by Rust, so this is a correctness requir= ement, +/// but not a safety requirement. +/// +/// # Safety +/// +/// Values of this type may not contain any uninitialized bytes. +pub unsafe trait AsBytes {} + +// SAFETY: Instances of the following types have no uninitialized portions. 
+unsafe impl AsBytes for u8 {} +unsafe impl AsBytes for u16 {} +unsafe impl AsBytes for u32 {} +unsafe impl AsBytes for u64 {} +unsafe impl AsBytes for usize {} +unsafe impl AsBytes for i8 {} +unsafe impl AsBytes for i16 {} +unsafe impl AsBytes for i32 {} +unsafe impl AsBytes for i64 {} +unsafe impl AsBytes for isize {} +unsafe impl AsBytes for bool {} +unsafe impl AsBytes for char {} +unsafe impl AsBytes for str {} +// SAFETY: If individual values in an array have no uninitialized portions= , then +// the array itself does not have any uninitialized portions either. +unsafe impl AsBytes for [T] {} +unsafe impl AsBytes for [T; N] {} diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs index 020f3847683f..72d55b2b33c9 100644 --- a/rust/kernel/uaccess.rs +++ b/rust/kernel/uaccess.rs @@ -4,9 +4,15 @@ //! //! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h) =20 -use crate::{bindings, error::code::*, error::Result}; +use crate::{ + bindings, + error::code::*, + error::Result, + types::{AsBytes, FromBytes}, +}; use alloc::vec::Vec; use core::ffi::{c_ulong, c_void}; +use core::mem::{size_of, MaybeUninit}; =20 /// A pointer to an area in userspace memory, which can be either read-onl= y or /// read-write. @@ -237,6 +243,41 @@ pub unsafe fn read_raw(&mut self, out: *mut u8, len: u= size) -> Result { Ok(()) } =20 + /// Reads a value of the specified type. + /// + /// Fails with `EFAULT` if the read encounters a page fault. + pub fn read(&mut self) -> Result { + let len =3D size_of::(); + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) =3D c_ulong::try_from(len) else { + return Err(EFAULT); + }; + let mut out: MaybeUninit =3D MaybeUninit::uninit(); + // SAFETY: The local variable `out` is valid for writing `size_of:= :()` bytes. + // + // By using the _copy_from_user variant, we skip the check_object_= size + // check that verifies the kernel pointer. This mirrors the logic = on the + // C side that skips the check when the length is a compile-time + // constant. + let res =3D unsafe { + bindings::_copy_from_user(out.as_mut_ptr().cast::(), s= elf.ptr, len_ulong) + }; + if res !=3D 0 { + return Err(EFAULT); + } + // Since this is not a pointer to a valid object in our program, + // we cannot use `add`, which has C-style rules for defined + // behavior. + self.ptr =3D self.ptr.wrapping_byte_add(len); + self.length -=3D len; + // SAFETY: The read above has initialized all bytes in `out`, and = since + // `T` implements `FromBytes`, any bit-pattern is a valid value fo= r this + // type. + Ok(unsafe { out.assume_init() }) + } + /// Reads the entirety of the user slice, appending it to the end of t= he /// provided buffer. /// @@ -312,4 +353,36 @@ pub fn write_slice(&mut self, data: &[u8]) -> Result { // `len`, so the pointer is valid for reading `len` bytes. unsafe { self.write_raw(ptr, len) } } + + /// Writes the provided Rust value to this userspace pointer. + /// + /// Fails with `EFAULT` if the write encounters a page fault. + pub fn write(&mut self, value: &T) -> Result { + let len =3D size_of::(); + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) =3D c_ulong::try_from(len) else { + return Err(EFAULT); + }; + // SAFETY: The reference points to a value of type `T`, so it is v= alid + // for reading `size_of::()` bytes. + // + // By using the _copy_to_user variant, we skip the check_object_si= ze + // check that verifies the kernel pointer. 
This mirrors the logic on the
+        // C side that skips the check when the length is a compile-time
+        // constant.
+        let res = unsafe {
+            bindings::_copy_to_user(self.ptr, (value as *const T).cast::<c_void>(), len_ulong)
+        };
+        if res != 0 {
+            return Err(EFAULT);
+        }
+        // Since this is not a pointer to a valid object in our program,
+        // we cannot use `add`, which has C-style rules for defined
+        // behavior.
+        self.ptr = self.ptr.wrapping_byte_add(len);
+        self.length -= len;
+        Ok(())
+    }
 }
--
2.44.0.278.ge034bb2e1d-goog
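A short usage sketch of the typed accessors added above, assuming a
driver-defined plain-data struct. The `UserRequest` type and
`handle_request` function are hypothetical and only illustrate the
intended calling pattern: one typed fetch, a modification, and one typed
write-back.

```rust
use core::ffi::c_void;
use core::mem::size_of;
use kernel::error::Result;
use kernel::types::{AsBytes, FromBytes};
use kernel::uaccess::UserSlice;

/// Hypothetical ioctl-style argument: integers only, no padding.
#[repr(C)]
struct UserRequest {
    id: u64,
    flags: u32,
    len: u32,
}

// SAFETY: `UserRequest` contains only integers, so any bit pattern is valid.
unsafe impl FromBytes for UserRequest {}
// SAFETY: `UserRequest` is `repr(C)` with no padding, so all of its bytes are
// initialized and may be copied to userspace.
unsafe impl AsBytes for UserRequest {}

fn handle_request(uptr: *mut c_void) -> Result {
    let (mut reader, mut writer) = UserSlice::new(uptr, size_of::<UserRequest>()).reader_writer();

    // One typed fetch; the reader advances past the struct, so the same
    // bytes cannot accidentally be fetched a second time.
    let mut req: UserRequest = reader.read()?;

    // Work on the kernel-side copy only (illustrative).
    req.flags |= 1;

    // Write the updated value back to the same user memory.
    writer.write(&req)
}
```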
Date: Mon, 11 Mar 2024 10:47:16 +0000
In-Reply-To: <20240311-alice-mm-v3-0-cdf7b3a2049c@google.com>
Message-ID: <20240311-alice-mm-v3-4-cdf7b3a2049c@google.com>
Subject: [PATCH v3 4/4] rust: add abstraction for `struct page`
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl, Christian Brauner

Adds a new struct called `Page` that wraps a pointer to `struct page`.
This struct is assumed to hold ownership over the page, so that Rust code
can allocate and manage pages directly.

The page type has various methods for reading and writing into the page.
These methods will temporarily map the page to allow the operation. All
of these methods use a helper that takes an offset and length, performs
bounds checks, and returns a pointer to the given offset in the page.
This patch only adds support for pages of order zero, as that is all Rust Binder needs. However, it is written to make it easy to add support for higher-order pages in the future. To do that, you would add a const generic parameter to `Page` that specifies the order. Most of the methods do not need to be adjusted, as the logic for dealing with mapping multiple pages at once can be isolated to just the `with_pointer_into_page` method. Finally, the struct can be renamed to `Pages`, and the type alias `Page =3D Pages<0>` can be introduced. Rust Binder needs to manage pages directly as that is how transactions are delivered: Each process has an mmap'd region for incoming transactions. When an incoming transaction arrives, the Binder driver will choose a region in the mmap, allocate and map the relevant pages manually, and copy the incoming transaction directly into the page. This architecture allows the driver to copy transactions directly from the address space of one process to another, without an intermediate copy to a kernel buffer. This code is based on Wedson's page abstractions from the old rust branch, but it has been modified by Alice by removing the incomplete support for higher-order pages, by introducing the `with_*` helpers to consolidate the bounds checking logic into a single place, and by introducing gfp flags. Co-developed-by: Wedson Almeida Filho Signed-off-by: Wedson Almeida Filho Signed-off-by: Alice Ryhl --- rust/bindings/bindings_helper.h | 3 + rust/helpers.c | 20 ++++ rust/kernel/lib.rs | 1 + rust/kernel/page.rs | 223 ++++++++++++++++++++++++++++++++++++= ++++ 4 files changed, 247 insertions(+) diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helpe= r.h index 65b98831b975..1073005ca449 100644 --- a/rust/bindings/bindings_helper.h +++ b/rust/bindings/bindings_helper.h @@ -20,5 +20,8 @@ =20 /* `bindgen` gets confused at certain things. 
*/ const size_t RUST_CONST_HELPER_ARCH_SLAB_MINALIGN =3D ARCH_SLAB_MINALIGN; +const size_t RUST_CONST_HELPER_PAGE_SIZE =3D PAGE_SIZE; +const size_t RUST_CONST_HELPER_PAGE_MASK =3D PAGE_MASK; const gfp_t RUST_CONST_HELPER_GFP_KERNEL =3D GFP_KERNEL; const gfp_t RUST_CONST_HELPER___GFP_ZERO =3D __GFP_ZERO; +const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM =3D ___GFP_HIGHMEM; diff --git a/rust/helpers.c b/rust/helpers.c index 312b6fcb49d5..298d2ee16e61 100644 --- a/rust/helpers.c +++ b/rust/helpers.c @@ -25,6 +25,8 @@ #include #include #include +#include +#include #include #include #include @@ -93,6 +95,24 @@ int rust_helper_signal_pending(struct task_struct *t) } EXPORT_SYMBOL_GPL(rust_helper_signal_pending); =20 +struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order) +{ + return alloc_pages(gfp_mask, order); +} +EXPORT_SYMBOL_GPL(rust_helper_alloc_pages); + +void *rust_helper_kmap_local_page(struct page *page) +{ + return kmap_local_page(page); +} +EXPORT_SYMBOL_GPL(rust_helper_kmap_local_page); + +void rust_helper_kunmap_local(const void *addr) +{ + kunmap_local(addr); +} +EXPORT_SYMBOL_GPL(rust_helper_kunmap_local); + refcount_t rust_helper_REFCOUNT_INIT(int n) { return (refcount_t)REFCOUNT_INIT(n); diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs index 37f84223b83f..667fc67fa24f 100644 --- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -39,6 +39,7 @@ pub mod kunit; #[cfg(CONFIG_NET)] pub mod net; +pub mod page; pub mod prelude; pub mod print; mod static_assert; diff --git a/rust/kernel/page.rs b/rust/kernel/page.rs new file mode 100644 index 000000000000..02d25b142fc8 --- /dev/null +++ b/rust/kernel/page.rs @@ -0,0 +1,223 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Kernel page allocation and management. + +use crate::{bindings, error::code::*, error::Result, uaccess::UserSliceRea= der}; +use core::{ + alloc::AllocError, + ptr::{self, NonNull}, +}; + +/// A bitwise shift for the page size. +pub const PAGE_SHIFT: usize =3D bindings::PAGE_SHIFT as usize; +/// The number of bytes in a page. +pub const PAGE_SIZE: usize =3D bindings::PAGE_SIZE as usize; +/// A bitmask that can be used to get the page containing a given address = by masking away the lower +/// bits. +pub const PAGE_MASK: usize =3D bindings::PAGE_MASK as usize; + +/// Flags for the "get free page" function that underlies all memory alloc= ations. +pub mod flags { + pub type gfp_t =3D bindings::gfp_t; + + /// `GFP_KERNEL` is typical for kernel-internal allocations. The calle= r requires `ZONE_NORMAL` + /// or a lower zone for direct access but can direct reclaim. + pub const GFP_KERNEL: gfp_t =3D bindings::GFP_KERNEL; + /// `GFP_ZERO` returns a zeroed page on success. + pub const __GFP_ZERO: gfp_t =3D bindings::__GFP_ZERO; + /// `GFP_HIGHMEM` indicates that the allocated memory may be located i= n high memory. + pub const __GFP_HIGHMEM: gfp_t =3D bindings::__GFP_HIGHMEM; +} + +/// A pointer to a page that owns the page allocation. +/// +/// # Invariants +/// +/// The pointer points at a page, and has ownership over the page. +pub struct Page { + page: NonNull, +} + +// SAFETY: It is safe to transfer page allocations between threads. +unsafe impl Send for Page {} + +// SAFETY: As long as the safety requirements for `&self` methods on this = type +// are followed, there is no problem with calling them in parallel. +unsafe impl Sync for Page {} + +impl Page { + /// Allocates a new page. + pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result= { + // SAFETY: The specified order is zero and we want one page. 
+ let page =3D unsafe { bindings::alloc_pages(gfp_flags, 0) }; + let page =3D NonNull::new(page).ok_or(AllocError)?; + // INVARIANT: We checked that the allocation succeeded. + Ok(Self { page }) + } + + /// Returns a raw pointer to the page. + pub fn as_ptr(&self) -> *mut bindings::page { + self.page.as_ptr() + } + + /// Runs a piece of code with this page mapped to an address. + /// + /// The page is unmapped when this call returns. + /// + /// It is up to the caller to use the provided raw pointer correctly. + pub fn with_page_mapped(&self, f: impl FnOnce(*mut u8) -> T) -> T { + // SAFETY: `page` is valid due to the type invariants on `Page`. + let mapped_addr =3D unsafe { bindings::kmap_local_page(self.as_ptr= ()) }; + + let res =3D f(mapped_addr.cast()); + + // SAFETY: This unmaps the page mapped above. + // + // Since this API takes the user code as a closure, it can only be= used + // in a manner where the pages are unmapped in reverse order. This= is as + // required by `kunmap_local`. + // + // In other words, if this call to `kunmap_local` happens when a + // different page should be unmapped first, then there must necess= arily + // be a call to `kmap_local_page` other than the call just above in + // `with_page_mapped` that made that possible. In this case, it is= the + // unsafe block that wraps that other call that is incorrect. + unsafe { bindings::kunmap_local(mapped_addr) }; + + res + } + + /// Runs a piece of code with a raw pointer to a slice of this page, w= ith + /// bounds checking. + /// + /// If `f` is called, then it will be called with a pointer that point= s at + /// `off` bytes into the page, and the pointer will be valid for at le= ast + /// `len` bytes. The pointer is only valid on this task, as this metho= d uses + /// a local mapping. + /// + /// If `off` and `len` refers to a region outside of this page, then t= his + /// method returns `EINVAL` and does not call `f`. + /// + /// It is up to the caller to use the provided raw pointer correctly. + pub fn with_pointer_into_page( + &self, + off: usize, + len: usize, + f: impl FnOnce(*mut u8) -> Result, + ) -> Result { + let bounds_ok =3D off <=3D PAGE_SIZE && len <=3D PAGE_SIZE && (off= + len) <=3D PAGE_SIZE; + + if bounds_ok { + self.with_page_mapped(move |page_addr| { + // SAFETY: The `off` integer is at most `PAGE_SIZE`, so th= is pointer offset will + // result in a pointer that is in bounds or one off the en= d of the page. + f(unsafe { page_addr.add(off) }) + }) + } else { + Err(EINVAL) + } + } + + /// Maps the page and reads from it into the given buffer. + /// + /// This method will perform bounds checks on the page offset. If `off= set .. + /// offset+len` goes outside ot the page, then this call returns `EINV= AL`. + /// + /// # Safety + /// + /// * Callers must ensure that `dst` is valid for writing `len` bytes. + /// * Callers must ensure that this call does not race with a write to= the + /// same page that overlaps with this read. + pub unsafe fn read_raw(&self, dst: *mut u8, offset: usize, len: usize)= -> Result { + self.with_pointer_into_page(offset, len, move |src| { + // SAFETY: If `with_pointer_into_page` calls into this closure= , then + // it has performed a bounds check and guarantees that `src` is + // valid for `len` bytes. + // + // There caller guarantees that there is no data race. + unsafe { ptr::copy_nonoverlapping(src, dst, len) }; + Ok(()) + }) + } + + /// Maps the page and writes into it from the given buffer. + /// + /// This method will perform bounds checks on the page offset. 
If `off= set .. + /// offset+len` goes outside ot the page, then this call returns `EINV= AL`. + /// + /// # Safety + /// + /// * Callers must ensure that `src` is valid for reading `len` bytes. + /// * Callers must ensure that this call does not race with a read or = write + /// to the same page that overlaps with this write. + pub unsafe fn write_raw(&self, src: *const u8, offset: usize, len: usi= ze) -> Result { + self.with_pointer_into_page(offset, len, move |dst| { + // SAFETY: If `with_pointer_into_page` calls into this closure= , then + // it has performed a bounds check and guarantees that `dst` is + // valid for `len` bytes. + // + // There caller guarantees that there is no data race. + unsafe { ptr::copy_nonoverlapping(src, dst, len) }; + Ok(()) + }) + } + + /// Maps the page and zeroes the given slice. + /// + /// This method will perform bounds checks on the page offset. If `off= set .. + /// offset+len` goes outside ot the page, then this call returns `EINV= AL`. + /// + /// # Safety + /// + /// Callers must ensure that this call does not race with a read or wr= ite to + /// the same page that overlaps with this write. + pub unsafe fn fill_zero(&self, offset: usize, len: usize) -> Result { + self.with_pointer_into_page(offset, len, move |dst| { + // SAFETY: If `with_pointer_into_page` calls into this closure= , then + // it has performed a bounds check and guarantees that `dst` is + // valid for `len` bytes. + // + // There caller guarantees that there is no data race. + unsafe { ptr::write_bytes(dst, 0u8, len) }; + Ok(()) + }) + } + + /// Copies data from userspace into this page. + /// + /// This method will perform bounds checks on the page offset. If `off= set .. + /// offset+len` goes outside ot the page, then this call returns `EINV= AL`. + /// + /// Like the other `UserSliceReader` methods, data races are allowed o= n the + /// userspace address. However, they are not allowed on the page you a= re + /// copying into. + /// + /// # Safety + /// + /// Callers must ensure that this call does not race with a read or wr= ite to + /// the same page that overlaps with this write. + pub unsafe fn copy_from_user_slice( + &self, + reader: &mut UserSliceReader, + offset: usize, + len: usize, + ) -> Result { + self.with_pointer_into_page(offset, len, move |dst| { + // SAFETY: If `with_pointer_into_page` calls into this closure= , then + // it has performed a bounds check and guarantees that `dst` is + // valid for `len` bytes. + // + // There caller guarantees that there is no data race when wri= ting + // to `dst`. + unsafe { reader.read_raw(dst, len) } + }) + } +} + +impl Drop for Page { + fn drop(&mut self) { + // SAFETY: By the type invariants, we have ownership of the page a= nd can + // free it. + unsafe { bindings::__free_pages(self.page.as_ptr(), 0) }; + } +} --=20 2.44.0.278.ge034bb2e1d-goog
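To round off the series, a brief sketch of how the pieces fit together:
allocate a zeroed page and fill part of it from userspace through a
`UserSliceReader`. The `page_from_user` helper is hypothetical; it relies
on the page being freshly allocated and not yet shared, which is what
satisfies the no-data-race safety requirement of `copy_from_user_slice`.

```rust
use core::ffi::c_void;
use kernel::error::Result;
use kernel::page::{flags, Page};
use kernel::uaccess::UserSlice;

fn page_from_user(uptr: *mut c_void, len: usize) -> Result<Page> {
    // A single order-0 page; lengths beyond PAGE_SIZE are rejected by the
    // bounds check inside `copy_from_user_slice`.
    let page = Page::alloc_page(flags::GFP_KERNEL | flags::__GFP_ZERO)?;

    let mut reader = UserSlice::new(uptr, len).reader();

    // SAFETY: `page` was just allocated and is not shared with any other
    // thread, so nothing can race with this write into the page.
    unsafe { page.copy_from_user_slice(&mut reader, 0, len)? };

    Ok(page)
}
```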