Update documentation about the design, implementation and API usage
of page_frag.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
Documentation/mm/page_frags.rst | 176 +++++++++++++++++++++-
include/linux/page_frag_cache.h | 250 +++++++++++++++++++++++++++++++-
mm/page_frag_cache.c | 26 +++-
3 files changed, 441 insertions(+), 11 deletions(-)
diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..7fd9398aca4e 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
==============
Page fragments
==============
@@ -40,4 +42,176 @@ page via a single call. The advantage to doing this is that it allows for
cleaning up the multiple references that were added to a page in order to
avoid calling get_page per allocation.
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+ +----------------------+
+ | page_frag API caller |
+ +----------------------+
+ |
+ |
+ v
+ +------------------------------------------------------------------+
+ | request page fragment |
+ +------------------------------------------------------------------+
+ | | |
+ | | |
+ | Cache not enough |
+ | | |
+ | +-----------------+ |
+ | | reuse old cache |--Usable-->|
+ | +-----------------+ |
+ | | |
+ | Not usable |
+ | | |
+ | v |
+ Cache empty +-----------------+ |
+ | | drain old cache | |
+ | +-----------------+ |
+ | | |
+ v_________________________________v |
+ | |
+ | |
+ _________________v_______________ |
+ | | Cache is enough
+ | | |
+ PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE | |
+ | | |
+ | PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE |
+ v | |
+ +----------------------------------+ | |
+ | refill cache with order > 0 page | | |
+ +----------------------------------+ | |
+ | | | |
+ | | | |
+ | Refill failed | |
+ | | | |
+ | v v |
+ | +------------------------------------+ |
+ | | refill cache with order 0 page | |
+   |     +------------------------------------+         |
+ | | |
+ Refill succeed | |
+ | Refill succeed |
+ | | |
+ v v v
+ +------------------------------------------------------------------+
+ | allocate fragment from cache |
+ +------------------------------------------------------------------+
+
+API interface
+=============
+As the design and implementation of the page_frag API implies, the allocation
+side does not allow concurrent calls. The caller must ensure there are no
+concurrent allocation calls to the same page_frag_cache instance, either by
+using its own lock or by relying on a lockless guarantee such as running in
+NAPI softirq context.
+
+Depending on the alignment requirement, the page_frag API caller may call
+page_frag_*_align*() to ensure the returned virtual address or page offset is
+aligned according to the 'align/alignment' parameter. Note that the size of
+the allocated fragment is not aligned; the caller needs to provide an aligned
+fragsz if the size of the fragment has an alignment requirement.
+
+Depending on the use case, callers expecting to deal with the virtual address,
+the page, or both may call the page_frag_alloc, page_frag_refill, or
+page_frag_alloc_refill API accordingly.
+
+There is also a use case that needs a minimum amount of memory to make forward
+progress, but performs better if more memory is available. Using the
+page_frag_*_prepare() and page_frag_commit*() related APIs, the caller requests
+the minimum memory it needs and the prepare API returns the maximum size of the
+fragment available. The caller then either calls the commit API to report how
+much memory it actually used, or skips the commit if it uses no memory.
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+ :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+ __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
+ __page_frag_refill_align page_frag_refill_align
+ page_frag_refill __page_frag_refill_prepare_align
+ page_frag_refill_prepare_align page_frag_refill_prepare
+ __page_frag_alloc_refill_prepare_align
+ page_frag_alloc_refill_prepare_align
+ page_frag_alloc_refill_prepare page_frag_alloc_refill_probe
+ page_frag_refill_probe page_frag_commit
+ page_frag_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+ :identifiers: page_frag_cache_drain page_frag_free
+ __page_frag_alloc_refill_probe_align
+
+Coding examples
+===============
+
+Initialization and draining API
+-------------------------------
+
+.. code-block:: c
+
+ page_frag_cache_init(nc);
+ ...
+ page_frag_cache_drain(nc);
+
+
+Allocation & freeing API
+------------------------
+
+.. code-block:: c
+
+ void *va;
+
+ va = page_frag_alloc_align(nc, size, gfp, align);
+ if (!va)
+ goto do_error;
+
+ err = do_something(va, size);
+ if (err) {
+        page_frag_alloc_abort(nc, size);
+ goto do_error;
+ }
+
+ ...
+
+ page_frag_free(va);
+
+
+Preparation & committing API
+----------------------------
+
+.. code-block:: c
+
+ struct page_frag page_frag, *pfrag;
+ bool merge = true;
+ void *va;
+
+ pfrag = &page_frag;
+ va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+ if (!va)
+ goto wait_for_space;
+
+ copy = min_t(unsigned int, copy, pfrag->size);
+ if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+ if (i >= max_skb_frags)
+ goto new_segment;
+
+ merge = false;
+ }
+
+ copy = mem_schedule(copy);
+ if (!copy)
+ goto wait_for_space;
+
+ err = copy_from_iter_full_nocache(va, copy, iter);
+ if (err)
+ goto do_error;
+
+ if (merge) {
+ skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+ page_frag_commit_noref(nc, pfrag, copy);
+ } else {
+ skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+ page_frag_commit(nc, pfrag, copy);
+ }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 1c0c11250b66..806d4b8d4bed 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -28,11 +28,29 @@ static inline bool encoded_page_decode_pfmemalloc(unsigned long encoded_page)
return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
}
+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache to be initialized
+ *
+ * Inline helper to initialize the page_frag cache.
+ */
static inline void page_frag_cache_init(struct page_frag_cache *nc)
{
nc->encoded_page = 0;
}
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache to be checked
+ *
+ * Check if the current page in page_frag cache is allocated from the
+ * pfmemalloc reserves. It has the same calling context expectation as the
+ * allocation API.
+ *
+ * Return:
+ * true if the current page in page_frag cache is allocated from the pfmemalloc
+ * reserves, otherwise return false.
+ */
static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
{
return encoded_page_decode_pfmemalloc(nc->encoded_page);
@@ -61,6 +79,19 @@ static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
}
+/**
+ * __page_frag_alloc_align() - Alloc a page fragment with aligning
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the 'va'
+ *
+ * Allocate a page fragment from page_frag cache with aligning requirement.
+ *
+ * Return:
+ * Virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
@@ -78,6 +109,19 @@ static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
return va;
}
+/**
+ * page_frag_alloc_align() - Allocate a page fragment with aligning requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * page_frag cache with aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align)
@@ -86,12 +130,36 @@ static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
}
+/**
+ * page_frag_alloc() - Allocate a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Allocate a page fragment from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask)
{
return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
}
+/**
+ * __page_frag_refill_align() - Refill a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Refill a page_frag from page_frag cache with aligning requirement.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -106,6 +174,20 @@ static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
return true;
}
+/**
+ * page_frag_refill_align() - Refill a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before refilling a page_frag from
+ * page_frag cache with aligning requirement.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
static inline bool page_frag_refill_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -115,6 +197,18 @@ static inline bool page_frag_refill_align(struct page_frag_cache *nc,
return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
}
+/**
+ * page_frag_refill() - Refill a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Refill a page_frag from page_frag cache.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
static inline bool page_frag_refill(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag, gfp_t gfp_mask)
@@ -122,6 +216,20 @@ static inline bool page_frag_refill(struct page_frag_cache *nc,
return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
}
+/**
+ * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with
+ * aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Prepare refilling a page_frag from page_frag cache with aligning requirement.
+ *
+ * Return:
+ * True if prepare refilling succeeds, otherwise return false.
+ */
static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -132,6 +240,21 @@ static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
align_mask);
}
+/**
+ * page_frag_refill_prepare_align() - Prepare refilling a page_frag with
+ * aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before prepare refilling a page_frag from
+ * page_frag cache with aligning requirement.
+ *
+ * Return:
+ * True if prepare refilling succeeds, otherwise return false.
+ */
static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -143,6 +266,18 @@ static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
-align);
}
+/**
+ * page_frag_refill_prepare() - Prepare refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Prepare refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * True if prepare refilling succeeds, otherwise return false.
+ */
static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -152,6 +287,20 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
~0u);
}
+/**
+ * __page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment.
+ *
+ * Prepare allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -161,6 +310,21 @@ static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cach
return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
}
+/**
+ * page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment.
+ *
+ * WARN_ON_ONCE() checking for @align before prepare allocating a fragment and
+ * refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -172,6 +336,19 @@ static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache
gfp_mask, -align);
}
+/**
+ * page_frag_alloc_refill_prepare() - Prepare allocating a fragment and
+ * refilling a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Prepare allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -181,6 +358,18 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
gfp_mask, ~0u);
}
+/**
+ * page_frag_alloc_refill_probe() - Probe allocating a fragment and refilling
+ * a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag)
@@ -188,6 +377,17 @@ static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
}
+/**
+ * page_frag_refill_probe() - Probe refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag)
@@ -195,20 +395,54 @@ static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
}
-static inline void page_frag_commit(struct page_frag_cache *nc,
- struct page_frag *pfrag,
- unsigned int used_sz)
+/**
+ * page_frag_commit() - Commit a prepared page fragment.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared
+ * or probed.
+ *
+ * Return:
+ * The true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz)
{
- __page_frag_cache_commit(nc, pfrag, used_sz);
+ return __page_frag_cache_commit(nc, pfrag, used_sz);
}
-static inline void page_frag_commit_noref(struct page_frag_cache *nc,
- struct page_frag *pfrag,
- unsigned int used_sz)
+/**
+ * page_frag_commit_noref() - Commit a prepared page fragment without taking
+ * page refcount.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the prepared or probed fragment by passing the actual used size, but
+ * without taking a refcount. Mostly used for the fragment coalescing case when
+ * the current fragment can share the same refcount as the previous fragment.
+ *
+ * Return:
+ * The true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit_noref(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz)
{
- __page_frag_cache_commit_noref(nc, pfrag, used_sz);
+ return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
}
+/**
+ * page_frag_alloc_abort() - Abort the page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the allocation API.
+ * Mostly used for error handling cases where the fragment is no longer needed.
+ */
static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
unsigned int fragsz)
{
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 5ea4b663ab8e..51f4eb4b2169 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -70,6 +70,10 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
return page;
}
+/**
+ * page_frag_cache_drain() - Drain the current page from page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
void page_frag_cache_drain(struct page_frag_cache *nc)
{
if (!nc->encoded_page)
@@ -112,6 +116,20 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
}
EXPORT_SYMBOL(__page_frag_cache_commit_noref);
+/**
+ * __page_frag_alloc_refill_probe_align() - Probe allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @align_mask: the requested aligning requirement for the fragment.
+ *
+ * Probe allocating a fragment and refilling a page_frag from page_frag cache with
+ * aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -203,8 +221,12 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
}
EXPORT_SYMBOL(__page_frag_cache_prepare);
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free() - Free a page fragment.
+ * @addr: va of page fragment to be freed
+ *
+ * Free a page fragment allocated out of either a compound or order 0 page by
+ * virtual address.
*/
void page_frag_free(void *addr)
{
--
2.33.0
On Fri, Oct 18, 2024 at 06:53:50PM +0800, Yunsheng Lin wrote:

> <snipped>...

Looks good.

> +/**
> + * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
> + * @nc: page_frag cache from which to check
> + *
> + * Used to check if the current page in page_frag cache is allocated from the

"Check if ..."

> <snipped>...

> + * Alloc a page fragment from page_frag cache.

"Allocate a page fragment ..."

> <snipped>...

> + * Prepare refill a page_frag from page_frag cache with aligning requirement.

"Prepare refilling ..."

> <snipped>...

> + * Probe allocing a fragment and refilling a page_frag from page_frag cache with

"Probe allocating..."

Thanks.

-- 
An old man doll... just what I always wanted! - Clara
On 2024/10/20 18:02, Bagas Sanjaya wrote:

Thanks, will try my best to not miss any 'alloc' typo for doc patch next
version:(

> <snipped>...
>> +/**
>> + * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with
>> + * aligning requirement.
>> + * @nc: page_frag cache from which to refill
>> + * @fragsz: the requested fragment size
>> + * @pfrag: the page_frag to be refilled.
>> + * @gfp_mask: the allocation gfp to use when cache need to be refilled
>> + * @align_mask: the requested aligning requirement for the fragment
>> + *
>> + * Prepare refill a page_frag from page_frag cache with aligning requirement.
>
> "Prepare refilling ..."
>
>> + *
>> + * Return:
>> + * True if prepare refilling succeeds, otherwise return false.
>> + */
>> <snipped>...
>> +/**
>> + * __page_frag_alloc_refill_probe_align() - Probe allocing a fragment and
>> + * refilling a page_frag with aligning requirement.
>> + * @nc: page_frag cache from which to allocate and refill
>> + * @fragsz: the requested fragment size
>> + * @pfrag: the page_frag to be refilled.
>> + * @align_mask: the requested aligning requirement for the fragment.
>> + *
>> + * Probe allocing a fragment and refilling a page_frag from page_frag cache with
>
> "Probe allocating..."
>
>> + * aligning requirement.
>> + *
>> + * Return:
>> + * virtual address of the page fragment, otherwise return NULL.
>> + */
>
> Thanks.
>