From: Yunsheng Lin
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton, Linux-MM, Jonathan Corbet
Subject: [PATCH net-next v2 06/10] mm: page_frag: introduce alloc_refill prepare & commit API
Date: Fri, 6 Dec 2024 20:25:29 +0800
Message-ID: <20241206122533.3589947-7-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>

Currently the alloc related API returns the virtual address of the
allocated fragment, while the refill related API returns the page info of
the allocated fragment through 'struct page_frag'. There are use cases
that need both the virtual address and the page info of the allocated
fragment. Introduce the alloc_refill API for those use cases.
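To make that concrete, below is a minimal illustration of what a caller
sees from the combined API introduced here. It is not part of the patch;
frag_prepare_example() is a made-up helper and the fragment size is
arbitrary.

static bool frag_prepare_example(struct page_frag_cache *nc,
				 struct page_frag *pfrag)
{
	void *va;

	/* One call yields both the virtual address and the page info. */
	va = page_frag_alloc_refill_prepare(nc, 64U, pfrag, GFP_KERNEL);
	if (!va)
		return false;

	/*
	 * 'va' can be written to directly, while pfrag->page, pfrag->offset
	 * and pfrag->size describe the same fragment for page-based users
	 * such as skb frags.  Nothing is consumed from the cache until a
	 * commit call such as page_frag_refill_commit() is made.
	 */
	return true;
}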
CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst | 45 +++++++++++++++++++++
 include/linux/page_frag_cache.h | 71 +++++++++++++++++++++++++++++++++
 2 files changed, 116 insertions(+)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 4cfdbe7db55a..1c98f7090d92 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -111,6 +111,9 @@ page is aligned according to the 'align/alignment' parameter. Note the size of
 the allocated fragment is not aligned, the caller needs to provide an aligned
 fragsz if there is an alignment requirement for the size of the fragment.
 
+Depending on different use cases, callers expecting to deal with va, page or
+both va and page may call alloc, refill or alloc_refill API accordingly.
+
 There is a use case that needs minimum memory in order for forward progress, but
 more performant if more memory is available. By using the prepare and commit
 related API, the caller calls prepare API to requests the minimum memory it
@@ -123,6 +126,9 @@ uses, or not do so if deciding to not use any memory.
                  __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
                  page_frag_alloc_abort __page_frag_refill_prepare_align
                  page_frag_refill_prepare_align page_frag_refill_prepare
+                 __page_frag_alloc_refill_prepare_align
+                 page_frag_alloc_refill_prepare_align
+                 page_frag_alloc_refill_prepare
 
 .. kernel-doc:: mm/page_frag_cache.c
    :identifiers: page_frag_cache_drain page_frag_free page_frag_alloc_abort_ref
@@ -193,3 +199,42 @@ Refill Preparation & committing API
        skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
        page_frag_refill_commit(nc, pfrag, copy);
     }
+
+
+Alloc_Refill Preparation & committing API
+-----------------------------------------
+
+.. code-block:: c
+
+    struct page_frag page_frag, *pfrag;
+    bool merge = true;
+    void *va;
+
+    pfrag = &page_frag;
+    va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+    if (!va)
+        goto wait_for_space;
+
+    copy = min_t(unsigned int, copy, pfrag->size);
+    if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+        if (i >= max_skb_frags)
+            goto new_segment;
+
+        merge = false;
+    }
+
+    copy = mem_schedule(copy);
+    if (!copy)
+        goto wait_for_space;
+
+    err = copy_from_iter_full_nocache(va, copy, iter);
+    if (err)
+        goto do_error;
+
+    if (merge) {
+        skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+        page_frag_refill_commit_noref(nc, pfrag, copy);
+    } else {
+        skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+        page_frag_refill_commit(nc, pfrag, copy);
+    }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 1e699334646a..329390afbe78 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -211,6 +211,77 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
 						~0u);
 }
 
+/**
+ * __page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache need to be refilled
+ * @align_mask: the requested aligning requirement for the fragment.
+ *
+ * Prepare allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
+static inline void
+*__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+					unsigned int fragsz,
+					struct page_frag *pfrag,
+					gfp_t gfp_mask, unsigned int align_mask)
+{
+	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+/**
+ * page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache need to be refilled
+ * @align: the requested aligning requirement for the fragment.
+ *
+ * WARN_ON_ONCE() checking for @align before prepare allocating a fragment and
+ * refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
+static inline void
+*page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+				      unsigned int fragsz,
+				      struct page_frag *pfrag, gfp_t gfp_mask,
+				      unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, -align);
+}
+
+/**
+ * page_frag_alloc_refill_prepare() - Prepare allocating a fragment and
+ * refilling a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache need to be refilled
+ *
+ * Prepare allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+						   unsigned int fragsz,
+						   struct page_frag *pfrag,
+						   gfp_t gfp_mask)
+{
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, ~0u);
+}
+
 /**
  * page_frag_refill_commit - Commit a prepare refilling.
  * @nc: page_frag cache from which to commit
-- 
2.33.0
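
For completeness, here is a short, self-contained sketch of the intended
prepare/commit flow outside of the skb example in the documentation above:
reserve at least the requested bytes, write through the returned virtual
address, then commit only what was actually used. It is not part of the
patch; copy_to_frag() is a hypothetical helper.

#include <linux/gfp.h>
#include <linux/page_frag_cache.h>
#include <linux/string.h>

/*
 * Hypothetical helper: copy @len bytes into a freshly prepared fragment
 * and report back how much of the prepared memory was consumed.
 */
static void *copy_to_frag(struct page_frag_cache *nc, struct page_frag *pfrag,
			  const void *src, unsigned int len)
{
	void *va;

	/* Reserve at least @len bytes; pfrag describes the backing page. */
	va = page_frag_alloc_refill_prepare(nc, len, pfrag, GFP_KERNEL);
	if (!va)
		return NULL;

	memcpy(va, src, len);

	/* Advance the cache by the bytes actually used. */
	page_frag_refill_commit(nc, pfrag, len);

	return va;
}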