From: Adrián Larumbe <adrian.larumbe@collabora.com>
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Andrew Morton
Cc: kernel@collabora.com, Adrián Larumbe <adrian.larumbe@collabora.com>
Subject: [RFC PATCH 2/7] lib/scatterlist.c: Support constructing sgt from page xarray
Date: Tue, 18 Feb 2025 23:25:32 +0000
Message-ID: <20250218232552.3450939-3-adrian.larumbe@collabora.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>
References:
	<20250218232552.3450939-1-adrian.larumbe@collabora.com>

In preparation for a future commit that will introduce sparse allocation
of pages in DRM shmem, introduce a scatterlist function that knows how to
build an sg table from an xarray collection of memory pages.

Because the new function is otherwise identical to the existing one that
handles a page array, also introduce a page_array abstraction, which hides
the way pages are retrieved from a collection.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 include/linux/scatterlist.h |  47 +++++++++++++
 lib/scatterlist.c           | 128 ++++++++++++++++++++++++++++++++++++
 2 files changed, 175 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index d836e7440ee8..0045df9c374f 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -48,6 +48,39 @@ struct sg_append_table {
 	unsigned int total_nents;	/* Total entries in the table */
 };
 
+struct page_array {
+	union {
+		struct page **array;
+		struct xarray *xarray;
+	};
+
+	struct page *(*get_page)(struct page_array, unsigned int);
+};
+
+static inline struct page *page_array_get_page(struct page_array a,
+					       unsigned int index)
+{
+	return a.array[index];
+}
+
+static inline struct page *page_xarray_get_page(struct page_array a,
+						unsigned int index)
+{
+	return xa_load(a.xarray, index);
+}
+
+#define PAGE_ARRAY(pages)				\
+	((struct page_array) {				\
+		.array = pages,				\
+		.get_page = page_array_get_page,	\
+	})
+
+#define PAGE_XARRAY(pages)				\
+	((struct page_array) {				\
+		.xarray = pages,			\
+		.get_page = page_xarray_get_page,	\
+	})
+
 /*
  * Notes on SG table design.
  *
@@ -448,6 +481,20 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
				      unsigned long size,
				      unsigned int max_segment, gfp_t gfp_mask);
 
+int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
+			unsigned int idx, unsigned int n_pages, unsigned int offset,
+			unsigned long size, unsigned int max_segment, gfp_t gfp_mask);
+
+static inline int sg_alloc_table_from_page_xarray(struct sg_table *sgt, struct xarray *pages,
+			unsigned int idx, unsigned int n_pages, unsigned int offset,
+			unsigned long size, gfp_t gfp_mask)
+{
+	struct page_array parray = PAGE_XARRAY(pages);
+
+	return sg_alloc_table_from_page_array_segment(sgt, parray, idx, n_pages, offset,
+						      size, UINT_MAX, gfp_mask);
+}
+
 /**
  * sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			       an array of pages
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 5bb6b8aff232..669ebd23e4ad 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -553,6 +553,115 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
 }
 EXPORT_SYMBOL(sg_alloc_append_table_from_pages);
 
+static inline int
+sg_alloc_append_table_from_page_array(struct sg_append_table *sgt_append,
+				      struct page_array pages,
+				      unsigned int first_page,
+				      unsigned int n_pages,
+				      unsigned int offset, unsigned long size,
+				      unsigned int max_segment,
+				      unsigned int left_pages, gfp_t gfp_mask)
+{
+	unsigned int chunks, seg_len, i, prv_len = 0;
+	unsigned int added_nents = 0;
+	struct scatterlist *s = sgt_append->prv;
+	unsigned int cur_pg_index = first_page;
+	unsigned int last_pg_index = first_page + n_pages - 1;
+	struct page *last_pg;
+
+	/*
+	 * The algorithm below requires max_segment to be aligned to PAGE_SIZE
+	 * otherwise it can overshoot.
+	 */
+	max_segment = ALIGN_DOWN(max_segment, PAGE_SIZE);
+	if (WARN_ON(max_segment < PAGE_SIZE))
+		return -EINVAL;
+
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && sgt_append->prv)
+		return -EOPNOTSUPP;
+
+	if (sgt_append->prv) {
+		unsigned long next_pfn;
+		struct page *page;
+
+		if (WARN_ON(offset))
+			return -EINVAL;
+
+		/* Merge contiguous pages into the last SG */
+		page = pages.get_page(pages, cur_pg_index);
+		prv_len = sgt_append->prv->length;
+		next_pfn = (sg_phys(sgt_append->prv) + prv_len) / PAGE_SIZE;
+		if (page_to_pfn(page) == next_pfn) {
+			last_pg = pfn_to_page(next_pfn - 1);
+			while (cur_pg_index <= last_pg_index &&
+			       pages_are_mergeable(pages.get_page(pages, cur_pg_index), last_pg)) {
+				if (sgt_append->prv->length + PAGE_SIZE > max_segment)
+					break;
+				sgt_append->prv->length += PAGE_SIZE;
+				last_pg = pages.get_page(pages, cur_pg_index);
+				cur_pg_index++;
+			}
+			if (cur_pg_index > last_pg_index)
+				goto out;
+		}
+	}
+
+	/* compute number of contiguous chunks */
+	chunks = 1;
+	seg_len = 0;
+	for (i = cur_pg_index + 1; i <= last_pg_index; i++) {
+		seg_len += PAGE_SIZE;
+		if (seg_len >= max_segment ||
+		    !pages_are_mergeable(pages.get_page(pages, i),
+					 pages.get_page(pages, i - 1))) {
+			chunks++;
+			seg_len = 0;
+		}
+	}
+
+	/* merging chunks and putting them into the scatterlist */
+	for (i = 0; i < chunks; i++) {
+		unsigned int j, chunk_size;
+
+		/* look for the end of the current chunk */
+		seg_len = 0;
+		for (j = cur_pg_index + 1; j <= last_pg_index; j++) {
+			seg_len += PAGE_SIZE;
+			if (seg_len >= max_segment ||
+			    !pages_are_mergeable(pages.get_page(pages, j),
+						 pages.get_page(pages, j - 1)))
+				break;
+		}
+
+		/* Pass how many chunks might be left */
+		s = get_next_sg(sgt_append, s, chunks - i + left_pages,
+				gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Adjust entry length to be as before function was
+			 * called.
+			 */
+			if (sgt_append->prv)
+				sgt_append->prv->length = prv_len;
+			return PTR_ERR(s);
+		}
+		chunk_size = ((j - cur_pg_index) << PAGE_SHIFT) - offset;
+		sg_set_page(s, pages.get_page(pages, cur_pg_index),
+			    min_t(unsigned long, size, chunk_size), offset);
+		added_nents++;
+		size -= chunk_size;
+		offset = 0;
+		cur_pg_index = j;
+	}
+	sgt_append->sgt.nents += added_nents;
+	sgt_append->sgt.orig_nents = sgt_append->sgt.nents;
+	sgt_append->prv = s;
+out:
+	if (!left_pages)
+		sg_mark_end(s);
+	return 0;
+}
+
 /**
  * sg_alloc_table_from_pages_segment - Allocate and initialize an sg table from
  *					an array of pages and given maximum
@@ -596,6 +705,25 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages_segment);
 
+int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
+			unsigned int idx, unsigned int n_pages, unsigned int offset,
+			unsigned long size, unsigned int max_segment, gfp_t gfp_mask)
+{
+	struct sg_append_table append = {};
+	int err;
+
+	err = sg_alloc_append_table_from_page_array(&append, pages, idx, n_pages, offset,
+						    size, max_segment, 0, gfp_mask);
+	if (err) {
+		sg_free_append_table(&append);
+		return err;
+	}
+	memcpy(sgt, &append.sgt, sizeof(*sgt));
+	WARN_ON(append.total_nents != sgt->orig_nents);
+	return 0;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_page_array_segment);
+
 #ifdef CONFIG_SGL_ALLOC
 
 /**
-- 
2.47.1