[for-linus][PATCH 0/4] tracing: Cleanup and simplify the persistent ring buffer for 6.15

Persistent buffer cleanups and simplifications for v6.15:

It was mistakenly assumed that the physical memory returned from "reserve_mem"
had to be vmap()'d to be accessed through a virtual address. But the memory
set aside by reserve_mem is already part of the kernel's direct mapping, so a
simple phys_to_virt() is enough to get the virtual address of the physical
memory returned by "reserve_mem". With this newfound knowledge, the code
can be cleaned up and simplified.
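
For illustration (a minimal sketch, not code from the series itself;
"phys_start" is just a stand-in for the address reserve_mem hands back):

        /* reserve_mem memory is covered by the kernel's direct map */
        void *vaddr = phys_to_virt(phys_start);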

- Enforce that the persistent memory is page aligned

  As the buffers using the persistent memory are all going to be
  mapped via pages, make sure that the memory given to the tracing
  infrastructure is page aligned. If it is not, the tracing code prints
  a warning and refuses to map the buffer.
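
  Roughly, the check amounts to the following (a sketch of the idea, not
  the exact trace.c hunk; the warning text is illustrative):

        if (!PAGE_ALIGNED(start) || !PAGE_ALIGNED(size)) {
                pr_warn("tracing: persistent memory is not page aligned\n");
                return 0;       /* refuse to map the buffer */
        }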

- Use phys_to_virt() to get the virtual address from reserve_mem

  Instead of calling vmap() on the physical memory returned from
  "reserve_mem", use phys_to_virt().

  Memory returned by "memmap" or any other means where a raw physical
  address is handed to the tracing infrastructure still needs to be
  vmap()'d. Since that memory can never be returned to the buddy
  allocator, nor should it ever be memory mapped to user space, flag
  this buffer and bump its ref count. The ref count will keep it from
  ever being freed, and the flag will prevent it from ever being memory
  mapped to user space.
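
  Put together, the reserve_mem path now looks roughly like this (a
  sketch only; the "trace" region name is just an example and error
  handling is trimmed):

        phys_addr_t start, size;

        /* Find the region set up by the reserve_mem= kernel parameter */
        if (reserve_mem_find_by_name("trace", &start, &size)) {
                /* Already in the direct map, no vmap() required */
                void *vaddr = phys_to_virt(start);
                /* hand vaddr to the ring buffer code */
        }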

- Use vmap_page_range() for memmap virtual address mapping

  For the memmap buffer, instead of allocating an array of struct pages,
  pointing them at the contiguous physical memory, and passing that array
  to vmap(), use vmap_page_range() to map the physical range directly.
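
  In rough terms the mapping becomes (a sketch; "start" and "size" are
  the physical base and length of the memmap region):

        struct vm_struct *area;
        unsigned long ptr;

        /* Reserve a range of kernel virtual address space */
        area = get_vm_area(size, VM_IOREMAP);
        if (!area)
                return NULL;

        /* Map the contiguous physical range straight into it */
        ptr = (unsigned long)area->addr;
        vmap_page_range(ptr, ptr + size, start, pgprot_nx(PAGE_KERNEL));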

- Replace flush_dcache_folio() with flush_kernel_vmap_range()

  Instead of calling virt_to_folio() and passing the folio to
  flush_dcache_folio(), just call flush_kernel_vmap_range() directly.
  This also fixes a bug where, if a subbuffer was bigger than PAGE_SIZE,
  only the first PAGE_SIZE portion would be flushed.
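
  The change boils down to the following (a sketch; "subbuf" and
  "subbuf_size" stand in for the ring buffer's subbuffer pointer and
  size):

        /* Before: flushes a single folio, even for larger subbuffers */
        flush_dcache_folio(virt_to_folio(subbuf));

        /* After: flushes the whole subbuffer range */
        flush_kernel_vmap_range(subbuf, subbuf_size);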


  git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
ring-buffer/fixes

Head SHA1: e4d4b8670c44cdd22212cab3c576e2d317efa67c


Steven Rostedt (4):
      tracing: Enforce the persistent ring buffer to be page aligned
      tracing: Have reserve_mem use phys_to_virt() and separate from memmap buffer
      tracing: Use vmap_page_range() to map memmap ring buffer
      ring-buffer: Use flush_kernel_vmap_range() over flush_dcache_folio()

----
 Documentation/admin-guide/kernel-parameters.txt |  2 +
 Documentation/trace/debugging.rst               |  2 +
 kernel/trace/ring_buffer.c                      |  5 +-
 kernel/trace/trace.c                            | 66 ++++++++++++++++---------
 kernel/trace/trace.h                            |  1 +
 5 files changed, 50 insertions(+), 26 deletions(-)