From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v12 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align()
Date: Wed, 31 Jul 2024 20:44:53 +0800
Message-ID: <20240731124505.2903877-4-linyunsheng@huawei.com>
In-Reply-To: <20240731124505.2903877-1-linyunsheng@huawei.com>
References: <20240731124505.2903877-1-linyunsheng@huawei.com>

We are about to use the page_frag_alloc_*() API not only to allocate
memory for skb->data, but also to allocate the memory for skb frags.

Currently the page_frag implementation in the mm subsystem runs the
offset as a countdown rather than a count-up value. There may be several
advantages to that, as mentioned in [1], but it also has disadvantages:
for example, it may prevent skb frag coalescing and more effective cache
prefetching.

We have a trade-off to make in order to have a unified implementation
and API for page_frag, so use an initial zero offset in this patch; a
following patch will optimize to avoid those disadvantages as much as
possible.
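To make the trade-off concrete, the two bookkeeping styles differ roughly
as below. This is a simplified sketch for illustration only, not code from
this patch; alignment handling is omitted and the refill path is reduced
to a placeholder label:

	/* countdown: 'offset' walks from the end of the page towards 0,
	 * so consecutive fragments are handed out at decreasing
	 * addresses, which defeats skb frag coalescing.
	 */
	offset = nc->offset - fragsz;
	if (offset < 0)
		goto refill;
	nc->offset = offset;
	return nc->va + offset;

	/* count-up: 'offset' walks from 0 towards the end of the page,
	 * so consecutive fragments are laid out at increasing addresses
	 * and a new fragment can directly extend the previous one.
	 */
	if (nc->offset + fragsz > size)
		goto refill;
	va = nc->va + nc->offset;
	nc->offset += fragsz;
	return va;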
Rename 'offset' to 'remaining' to retain the countdown behavior as a
'remaining countdown' instead of an 'offset countdown'. The renaming also
enables us to do a single 'fragsz > remaining' check for the case of the
cache not being big enough, which should be the fast path if we ensure
'remaining' is zero when 'va' == NULL, by memset'ing
'struct page_frag_cache' in page_frag_cache_init() and
page_frag_cache_drain(). (A caller-side usage sketch follows after the
patch below.)

1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/mm_types_task.h |  4 +--
 mm/page_frag_cache.c          | 52 +++++++++++++++++------------------
 2 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index cdc1e3696439..b1c54b2b9308 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -52,10 +52,10 @@ struct page_frag {
 struct page_frag_cache {
 	void *va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
+	__u16 remaining;
 	__u16 size;
 #else
-	__u32 offset;
+	__u32 remaining;
 #endif
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 609a485cd02a..c5bc72cf018a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			      unsigned int fragsz, gfp_t gfp_mask,
 			      unsigned int align_mask)
 {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	unsigned int size = nc->size;
+#else
 	unsigned int size = PAGE_SIZE;
+#endif
+	unsigned int remaining;
 	struct page *page;
-	int offset;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -82,14 +86,27 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 		 */
 		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
 
-		/* reset page count bias and offset to start of new frag */
+		/* reset page count bias and remaining to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->remaining = size;
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+	remaining = nc->remaining & align_mask;
+	if (unlikely(remaining < fragsz)) {
+		if (unlikely(fragsz > PAGE_SIZE)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+
 		page = virt_to_page(nc->va);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -100,35 +117,18 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
-		/* reset page count bias and offset to start of new frag */
+		/* reset page count bias and remaining to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
-			/*
-			 * The caller is trying to allocate a fragment
-			 * with fragsz > PAGE_SIZE but the cache isn't big
-			 * enough to satisfy the request, this may
-			 * happen in low memory conditions.
-			 * We don't release the cache page because
-			 * it could make memory pressure worse
-			 * so we simply return NULL here.
-			 */
-			return NULL;
-		}
+		remaining = size;
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
+	nc->remaining = remaining - fragsz;
 
-	return nc->va + offset;
+	return nc->va + (size - remaining);
 }
 EXPORT_SYMBOL(__page_frag_alloc_align);
 
-- 
2.33.0
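For reference, here is a minimal caller-side sketch of the cache after this
change. It is illustrative only: 'my_cache' and 'buf' are made-up names,
error handling is reduced to a placeholder, and the init/drain calls follow
the page_frag_cache_init()/page_frag_cache_drain() helpers referred to
above; __page_frag_alloc_align() takes an alignment *mask*, i.e. -align for
a power-of-two align:

	struct page_frag_cache my_cache;
	void *buf;

	/* memset()s the struct, so 'va' is NULL and 'remaining' is 0;
	 * keeping 'remaining' at zero whenever 'va' is NULL is what lets
	 * the single 'remaining < fragsz' check cover both "no page yet"
	 * and "page exhausted".
	 */
	page_frag_cache_init(&my_cache);

	/* 256-byte, cacheline-aligned fragment */
	buf = __page_frag_alloc_align(&my_cache, 256, GFP_KERNEL,
				      -SMP_CACHE_BYTES);
	if (!buf)
		/* handle allocation failure */;

	/* ... use the fragment; further allocations reuse the same page
	 * at increasing offsets until 'remaining' runs out ...
	 */

	page_frag_cache_drain(&my_cache);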