From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v2 05/15] mm: page_frag: use initial zero offset for page_frag_alloc_align()
Date: Mon, 15 Apr 2024 21:19:30 +0800
Message-ID: <20240415131941.51153-6-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>

We are about to use the page_frag_alloc_*() API not only to allocate
memory for skb->data, but also to do the memory allocation for skb
frags. Currently the page_frag implementation in the mm subsystem runs
the offset as a countdown rather than a count-up value. There may be
several advantages to that, as mentioned in [1], but it also has some
disadvantages: for example, it may prevent skb frag coalescing and more
effective cache prefetching.

There is a trade-off to make in order to have a unified implementation
and API for page_frag, so use an initial zero offset in this patch; the
following patch will try to optimize away the disadvantages as much as
possible.

1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/
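To see why a count-up offset is friendlier to skb frag coalescing,
consider a minimal userspace sketch of the two schemes (the frag_cache
struct and alloc_up() helper below are made up for illustration, they
are not the kernel API):

#include <assert.h>

#define CACHE_SIZE 4096u

struct frag_cache {
	unsigned int offset;	/* count-up: next free byte in the page */
};

/* Count-up allocation: each fragment starts exactly where the previous
 * one ended, so a new fragment is contiguous with the last one and the
 * two can be merged into a single skb frag.
 */
static unsigned int alloc_up(struct frag_cache *c, unsigned int fragsz)
{
	unsigned int offset = c->offset;

	assert(offset + fragsz <= CACHE_SIZE);
	c->offset = offset + fragsz;
	return offset;
}

int main(void)
{
	struct frag_cache c = { .offset = 0 };
	unsigned int a = alloc_up(&c, 100);
	unsigned int b = alloc_up(&c, 200);

	/* b starts where a ends, so the two fragments can coalesce. */
	assert(b == a + 100);

	/* With the countdown scheme the fragments would be laid out in
	 * the opposite order (a = 4096 - 100 = 3996, then b = 3996 -
	 * 200 = 3796), so b could never be appended to a.
	 */
	return 0;
}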
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 mm/page_frag_cache.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 64993b5d1243..dc864ee09536 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -65,9 +65,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			      unsigned int fragsz, gfp_t gfp_mask,
 			      unsigned int align_mask)
 {
-	unsigned int size = PAGE_SIZE;
+	unsigned int size, offset;
 	struct page *page;
-	int offset;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -75,10 +74,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 		if (!page)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
@@ -87,11 +82,18 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = 0;
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* if size can vary use size else just use PAGE_SIZE */
+	size = nc->size;
+#else
+	size = PAGE_SIZE;
+#endif
+
+	offset = ALIGN(nc->offset, -align_mask);
+	if (unlikely(offset + fragsz > size)) {
 		page = virt_to_page(nc->va);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -102,17 +104,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
+		offset = 0;
+		if (unlikely(fragsz > size)) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -127,8 +125,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
+	nc->offset = offset + fragsz;
 
 	return nc->va + offset;
 }
-- 
2.33.0
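For reference, the alignment in the new code path relies on align_mask
being the negation of a power-of-two align (the page_frag_alloc_align()
wrapper passes -align down, so -align_mask recovers align). A small
userspace check of that identity, assuming the kernel's ALIGN()
definition:

#include <assert.h>

/* kernel-style ALIGN(): round x up to a multiple of the power-of-two a */
#define ALIGN(x, a)	(((x) + ((a) - 1)) & ~((a) - 1))

int main(void)
{
	unsigned int align, offset;

	for (align = 1; align <= 4096; align <<= 1) {
		/* what the page_frag_alloc_align() wrapper passes down */
		unsigned int align_mask = -align;

		for (offset = 0; offset < 4096; offset++)
			assert(ALIGN(offset, -align_mask) ==
			       ALIGN(offset, align));
	}

	return 0;
}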