From nobody Fri Dec 19 06:22:31 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Hugh Dickins
Cc: kernel@collabora.com, Adrián Larumbe, linux-mm@kvack.org
Subject: [RFC PATCH 1/7] shmem: Introduce non-blocking allocation of shmem pages
Date: Tue, 18 Feb 2025 23:25:31 +0000
Message-ID: <20250218232552.3450939-2-adrian.larumbe@collabora.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>
References: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

With the future goal of preventing deadlocks with the shrinker when reclaiming
GEM-allocated memory, introduce a variant of shmem_read_mapping_page_gfp()
that does not sleep when not enough memory is available, and therefore cannot
trigger the shrinker of the very driver doing the allocation.

Signed-off-by: Adrián Larumbe
---
 include/linux/shmem_fs.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 0b273a7b9f01..5735728aeda2 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -167,6 +167,13 @@ static inline struct page *shmem_read_mapping_page(
 					mapping_gfp_mask(mapping));
 }
 
+static inline struct page *shmem_read_mapping_page_nonblocking(
+				struct address_space *mapping, pgoff_t index)
+{
+	return shmem_read_mapping_page_gfp(mapping, index,
+					   mapping_gfp_mask(mapping) | GFP_NOWAIT);
+}
+
 static inline bool shmem_file(struct file *file)
 {
 	if (!IS_ENABLED(CONFIG_SHMEM))
-- 
2.47.1

From nobody Fri Dec 19 06:22:31 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Andrew Morton
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 2/7] lib/scatterlist.c: Support constructing sgt from page xarray
Date: Tue, 18 Feb 2025 23:25:32 +0000
Message-ID: <20250218232552.3450939-3-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>
References: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

In preparation for a future commit that will introduce sparse allocation of
pages in DRM shmem, introduce a scatterlist function that knows how to deal
with an xarray collection of memory pages.

Because the new function is otherwise identical to the existing one that
deals with a page array, also introduce the page_array abstraction, which
hides the way pages are retrieved from a collection.
Signed-off-by: Adrián Larumbe
---
 include/linux/scatterlist.h |  47 +++++++++++++
 lib/scatterlist.c           | 128 ++++++++++++++++++++++++++++++++++++
 2 files changed, 175 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index d836e7440ee8..0045df9c374f 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -48,6 +48,39 @@ struct sg_append_table {
 	unsigned int total_nents;	/* Total entries in the table */
 };
 
+struct page_array {
+	union {
+		struct page **array;
+		struct xarray *xarray;
+	};
+
+	struct page *(*get_page)(struct page_array, unsigned int);
+};
+
+static inline struct page *page_array_get_page(struct page_array a,
+					       unsigned int index)
+{
+	return a.array[index];
+}
+
+static inline struct page *page_xarray_get_page(struct page_array a,
+						unsigned int index)
+{
+	return xa_load(a.xarray, index);
+}
+
+#define PAGE_ARRAY(pages)				\
+	((struct page_array) {				\
+		.array = pages,				\
+		.get_page = page_array_get_page,	\
+	})
+
+#define PAGE_XARRAY(pages)				\
+	((struct page_array) {				\
+		.xarray = pages,			\
+		.get_page = page_xarray_get_page,	\
+	})
+
 /*
  * Notes on SG table design.
 *
@@ -448,6 +481,20 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
 				      unsigned long size,
 				      unsigned int max_segment, gfp_t gfp_mask);
 
+int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
+		unsigned int idx, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment, gfp_t gfp_mask);
+
+static inline int sg_alloc_table_from_page_xarray(struct sg_table *sgt, struct xarray *pages,
+		unsigned int idx, unsigned int n_pages, unsigned int offset,
+		unsigned long size, gfp_t gfp_mask)
+{
+	struct page_array parray = PAGE_XARRAY(pages);
+
+	return sg_alloc_table_from_page_array_segment(sgt, parray, idx, n_pages, offset,
+						      size, UINT_MAX, gfp_mask);
+}
+
 /**
  * sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			       an array of pages
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 5bb6b8aff232..669ebd23e4ad 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -553,6 +553,115 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
 }
 EXPORT_SYMBOL(sg_alloc_append_table_from_pages);
 
+static inline int
+sg_alloc_append_table_from_page_array(struct sg_append_table *sgt_append,
+				      struct page_array pages,
+				      unsigned int first_page,
+				      unsigned int n_pages,
+				      unsigned int offset, unsigned long size,
+				      unsigned int max_segment,
+				      unsigned int left_pages, gfp_t gfp_mask)
+{
+	unsigned int chunks, seg_len, i, prv_len = 0;
+	unsigned int added_nents = 0;
+	struct scatterlist *s = sgt_append->prv;
+	unsigned int cur_pg_index = first_page;
+	unsigned int last_pg_index = first_page + n_pages - 1;
+	struct page *last_pg;
+
+	/*
+	 * The algorithm below requires max_segment to be aligned to PAGE_SIZE
+	 * otherwise it can overshoot.
+	 */
+	max_segment = ALIGN_DOWN(max_segment, PAGE_SIZE);
+	if (WARN_ON(max_segment < PAGE_SIZE))
+		return -EINVAL;
+
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && sgt_append->prv)
+		return -EOPNOTSUPP;
+
+	if (sgt_append->prv) {
+		unsigned long next_pfn;
+		struct page *page;
+
+		if (WARN_ON(offset))
+			return -EINVAL;
+
+		/* Merge contiguous pages into the last SG */
+		page = pages.get_page(pages, cur_pg_index);
+		prv_len = sgt_append->prv->length;
+		next_pfn = (sg_phys(sgt_append->prv) + prv_len) / PAGE_SIZE;
+		if (page_to_pfn(page) == next_pfn) {
+			last_pg = pfn_to_page(next_pfn - 1);
+			while (cur_pg_index <= last_pg_index &&
+			       pages_are_mergeable(page, last_pg)) {
+				if (sgt_append->prv->length + PAGE_SIZE > max_segment)
+					break;
+				sgt_append->prv->length += PAGE_SIZE;
+				last_pg = page;
+				cur_pg_index++;
+			}
+			if (cur_pg_index > last_pg_index)
+				goto out;
+		}
+	}
+
+	/* compute number of contiguous chunks */
+	chunks = 1;
+	seg_len = 0;
+	for (i = cur_pg_index + 1; i <= last_pg_index; i++) {
+		seg_len += PAGE_SIZE;
+		if (seg_len >= max_segment ||
+		    !pages_are_mergeable(pages.get_page(pages, i),
+					 pages.get_page(pages, i - 1))) {
+			chunks++;
+			seg_len = 0;
+		}
+	}
+
+	/* merging chunks and putting them into the scatterlist */
+	for (i = 0; i < chunks; i++) {
+		unsigned int j, chunk_size;
+
+		/* look for the end of the current chunk */
+		seg_len = 0;
+		for (j = cur_pg_index + 1; j <= last_pg_index; j++) {
+			seg_len += PAGE_SIZE;
+			if (seg_len >= max_segment ||
+			    !pages_are_mergeable(pages.get_page(pages, j),
+						 pages.get_page(pages, j - 1)))
+				break;
+		}
+
+		/* Pass how many chunks might be left */
+		s = get_next_sg(sgt_append, s, chunks - i + left_pages,
+				gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Adjust entry length to be as before function was
+			 * called.
+			 */
+			if (sgt_append->prv)
+				sgt_append->prv->length = prv_len;
+			return PTR_ERR(s);
+		}
+		chunk_size = ((j - cur_pg_index) << PAGE_SHIFT) - offset;
+		sg_set_page(s, pages.get_page(pages, cur_pg_index),
+			    min_t(unsigned long, size, chunk_size), offset);
+		added_nents++;
+		size -= chunk_size;
+		offset = 0;
+		cur_pg_index = j;
+	}
+	sgt_append->sgt.nents += added_nents;
+	sgt_append->sgt.orig_nents = sgt_append->sgt.nents;
+	sgt_append->prv = s;
+out:
+	if (!left_pages)
+		sg_mark_end(s);
+	return 0;
+}
+
 /**
  * sg_alloc_table_from_pages_segment - Allocate and initialize an sg table from
  *				       an array of pages and given maximum
@@ -596,6 +705,25 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages_segment);
 
+int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
+		unsigned int idx, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment, gfp_t gfp_mask)
+{
+	struct sg_append_table append = {};
+	int err;
+
+	err = sg_alloc_append_table_from_page_array(&append, pages, idx, n_pages, offset,
+						    size, max_segment, 0, gfp_mask);
+	if (err) {
+		sg_free_append_table(&append);
+		return err;
+	}
+	memcpy(sgt, &append.sgt, sizeof(*sgt));
+	WARN_ON(append.total_nents != sgt->orig_nents);
+	return 0;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_page_array_segment);
+
 #ifdef CONFIG_SGL_ALLOC
 
 /**
-- 
2.47.1

From nobody Fri Dec 19 06:22:31 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 3/7] drm/prime: Let drm_prime_pages_to_sg use the page_array interface
Date: Tue, 18 Feb 2025 23:25:33 +0000
Message-ID: <20250218232552.3450939-4-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>
References: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Switch to sg_alloc_table_from_page_array_segment() when generating an
sgtable from an array of pages. This is functionally equivalent, but a
future commit will also let us do the same from a memory page xarray.
Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/drm_prime.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 32a8781cfd67..1549733d3833 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -837,6 +837,7 @@ struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
 				       struct page **pages, unsigned int nr_pages)
 {
 	struct sg_table *sg;
+	struct page_array parray = PAGE_ARRAY(pages);
 	size_t max_segment = 0;
 	int err;
 
@@ -848,9 +849,9 @@ struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
 		max_segment = dma_max_mapping_size(dev->dev);
 	if (max_segment == 0)
 		max_segment = UINT_MAX;
-	err = sg_alloc_table_from_pages_segment(sg, pages, nr_pages, 0,
-						(unsigned long)nr_pages << PAGE_SHIFT,
-						max_segment, GFP_KERNEL);
+	err = sg_alloc_table_from_page_array_segment(sg, parray, 0, nr_pages, 0,
+						     (unsigned long)nr_pages << PAGE_SHIFT,
+						     max_segment, GFP_KERNEL);
 	if (err) {
 		kfree(sg);
 		sg = ERR_PTR(err);
-- 
2.47.1

From nobody Fri Dec 19 06:22:31 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 4/7] drm/shmem: Introduce the notion of sparse objects
Date: Tue, 18 Feb 2025 23:25:34 +0000
Message-ID: <20250218232552.3450939-5-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>
References: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Sparse DRM objects will store their backing pages in an xarray, to avoid
the overhead of preallocating a huge struct page pointer array when only
a very small range of indices might be assigned.

For now, only the definition of a sparse object as a union alternative to
a 'dense' object is provided; functions that exploit it are part of later
commits.
Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 42 +++++++++++++++++++++++---
 include/drm/drm_gem_shmem_helper.h     | 18 ++++++++++-
 2 files changed, 54 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5ab351409312..d63e42be2d72 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_X86
 #include
@@ -50,7 +51,7 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
 
 static struct drm_gem_shmem_object *
 __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
-		       struct vfsmount *gemfs)
+		       bool sparse, struct vfsmount *gemfs)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
@@ -90,6 +91,11 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
 
 	INIT_LIST_HEAD(&shmem->madv_list);
 
+	if (unlikely(sparse))
+		xa_init_flags(&shmem->xapages, XA_FLAGS_ALLOC);
+
+	shmem->sparse = sparse;
+
 	if (!private) {
 		/*
 		 * Our buffers are kept pinned, so allocating them
@@ -124,10 +130,16 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
  */
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
 {
-	return __drm_gem_shmem_create(dev, size, false, NULL);
+	return __drm_gem_shmem_create(dev, size, false, false, NULL);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false, true, NULL);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_create_sparse);
+
 /**
  * drm_gem_shmem_create_with_mnt - Allocate an object with the given size in a
  * given mountpoint
@@ -145,7 +157,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *dev,
							    size_t size,
							    struct vfsmount *gemfs)
 {
-	return __drm_gem_shmem_create(dev, size, false, gemfs);
+	return __drm_gem_shmem_create(dev, size, false, false, gemfs);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt);
 
@@ -173,7 +185,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		sg_free_table(shmem->sgt);
 		kfree(shmem->sgt);
 	}
-	if (shmem->pages)
+
+	if ((!shmem->sparse && shmem->pages) ||
+	    (shmem->sparse && !xa_empty(&shmem->xapages)))
 		drm_gem_shmem_put_pages(shmem);
 
 	drm_WARN_ON(obj->dev, shmem->pages_use_count);
@@ -191,11 +205,19 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
+	if (drm_WARN_ON(obj->dev, shmem->sparse))
+		return -EINVAL;
+
 	dma_resv_assert_held(shmem->base.resv);
 
 	if (shmem->pages_use_count++ > 0)
 		return 0;
 
+	/* We only allow increasing the user count in the case of
+	   sparse shmem objects with some backed pages for now */
+	if (shmem->sparse && xa_empty(&shmem->xapages))
+		return -EINVAL;
+
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
@@ -541,6 +563,8 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	struct page *page;
 	pgoff_t page_offset;
 
+	drm_WARN_ON(obj->dev, shmem->sparse);
+
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
@@ -567,6 +591,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
+	drm_WARN_ON(obj->dev, shmem->sparse);
 
 	dma_resv_lock(shmem->base.resv, NULL);
 
@@ -666,6 +691,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	if (shmem->base.import_attach)
 		return;
 
+	if (drm_WARN_ON(shmem->base.dev, shmem->sparse))
+		return;
+
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
@@ -691,6 +719,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 	struct drm_gem_object *obj = &shmem->base;
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
+	drm_WARN_ON(obj->dev, shmem->sparse);
 
 	return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
 }
@@ -702,6 +731,9 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)
 	int ret;
 	struct sg_table *sgt;
 
+	if (drm_WARN_ON(obj->dev, shmem->sparse))
+		return ERR_PTR(-EINVAL);
+
 	if (shmem->sgt)
 		return shmem->sgt;
 
@@ -787,7 +819,7 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
 	struct drm_gem_shmem_object *shmem;
 
-	shmem = __drm_gem_shmem_create(dev, size, true, NULL);
+	shmem = __drm_gem_shmem_create(dev, size, true, false, NULL);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);
 
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d22e3fb53631..902039cfc4ce 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -29,7 +30,11 @@ struct drm_gem_shmem_object {
 	/**
 	 * @pages: Page table
 	 */
-	struct page **pages;
+	union {
+		struct page **pages;
+		struct xarray xapages;
+	};
 
 	/**
 	 * @pages_use_count:
@@ -91,6 +96,11 @@ struct drm_gem_shmem_object {
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
 	bool map_wc : 1;
+
+	/**
+	 * @sparse: the object's virtual memory space is only partially backed by pages
+	 */
+	bool sparse : 1;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
@@ -229,6 +239,9 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	if (shmem->sparse)
+		return -EACCES;
+
 	return drm_gem_shmem_vmap(shmem, map);
 }
 
@@ -263,6 +276,9 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	if (shmem->sparse)
+		return -EACCES;
+
 	return drm_gem_shmem_mmap(shmem, vma);
 }
 
-- 
2.47.1

From nobody Fri Dec 19 06:22:31 2025
header.d=collabora.com header.i=adrian.larumbe@collabora.com header.b=goQi3QnU; arc=pass smtp.client-ip=136.143.188.112 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=collabora.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=collabora.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=collabora.com header.i=adrian.larumbe@collabora.com header.b="goQi3QnU" ARC-Seal: i=1; a=rsa-sha256; t=1739921344; cv=none; d=zohomail.com; s=zohoarc; b=RuyA0qYHE59i9UeK6jSfX+vKYF2pBJDcnJVG7va5kUfSTIk5jFN9ElBtkdnbN64Nzesvxv3whjj4eHuC2152ZxI5/8TTQi7uO79sxq6zPsY4M+DLbZTeZaYIM0tAtnNWUEgOoAWoKlVwHppUYthA44QDE9QNVf5Obm93RJQKSXU= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1739921344; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; bh=lxIiOljX+8RCkQFRb8Qc1CK/237y/rJX69Kr+P0hkwg=; b=TGwkodcGf1i/Jv04xgiPBhLO9NrPPS/XDKsYKoPkXGpam+bF7ZAYsnwVzq/s8fSPK163/Gu3Dq66CT098bKyYlxzWU46dylQc4BnhX8zB5/DKBys6r4nj157DVzx4ae3Fe2uCBpW5nwFT4rdsLUQcwuR9jA3FcKlJl2Tju8KZMY= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass header.i=collabora.com; spf=pass smtp.mailfrom=adrian.larumbe@collabora.com; dmarc=pass header.from= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1739921344; s=zohomail; d=collabora.com; i=adrian.larumbe@collabora.com; h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-ID:In-Reply-To:References:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To; bh=lxIiOljX+8RCkQFRb8Qc1CK/237y/rJX69Kr+P0hkwg=; b=goQi3QnU9PXH3+xd3RZvDZ+f09wsssdxlkz4goU4WmQefz985Zww0D1TKAQP8qMG czdr9dc96AlszkwAyLxfjMr95dEiDI3b2HoQhQI5dqIvMw3dz+C07NbQOVca5zopDer oS15tMIH2nyDW3pLDqKp+XU7AyIZtaRZCncxKWR8= Received: by mx.zohomail.com with SMTPS id 1739921342252847.3662481768877; Tue, 18 Feb 2025 15:29:02 -0800 
(PST) From: =?UTF-8?q?Adri=C3=A1n=20Larumbe?= To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Boris Brezillon , Steven Price , Rob Herring , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter Cc: kernel@collabora.com, =?UTF-8?q?Adri=C3=A1n=20Larumbe?= Subject: [RFC PATCH 5/7] drm/shmem: Implement sparse allocation of pages for shmem objects Date: Tue, 18 Feb 2025 23:25:35 +0000 Message-ID: <20250218232552.3450939-6-adrian.larumbe@collabora.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com> References: <20250218232552.3450939-1-adrian.larumbe@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Add a new function that lets drivers allocate pages for a subset of the shm= em object's virtual address range. Expand the shmem object's definition to inc= lude an RSS field, since it's different from the base GEM object's virtual size. Add also new function for putting the pages of a sparse page array. There is refactorisation potential with drm_gem_put_pages, but it is yet to be decid= ed what this should look like. 
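For illustration, the bookkeeping contract this patch introduces (populate a sub-range of page offsets, refuse a range whose first page is already backed, grow the RSS accounting on success) can be sketched as a userspace C analogue. `sparse_bo_populate`, `TOTAL_PAGES` and the malloc-backed page slots are hypothetical stand-ins for the xarray-backed kernel implementation, not the real API:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL
#define TOTAL_PAGES 512 /* hypothetical 2 MiB virtual range */

/* Userspace stand-in for the xarray-backed sparse page index. */
struct sparse_bo {
	void *pages[TOTAL_PAGES]; /* NULL = offset not backed yet */
	size_t rss_size;          /* bytes actually backed; may be < object size */
};

/* Mirrors the shape of drm_gem_shmem_get_sparse_pages_sgt(): back n_pages
 * starting at page_offset, bailing out like the xa_load() check does when
 * the first page of the range is already populated. */
static int sparse_bo_populate(struct sparse_bo *bo, unsigned long page_offset,
			      unsigned int n_pages)
{
	unsigned int i;

	if (page_offset + n_pages > TOTAL_PAGES)
		return -EINVAL;
	if (bo->pages[page_offset]) /* range already (partially) backed */
		return -EEXIST;

	for (i = 0; i < n_pages; i++) {
		bo->pages[page_offset + i] = malloc(PAGE_SIZE);
		if (!bo->pages[page_offset + i]) {
			while (i--) { /* unwind the partial allocation */
				free(bo->pages[page_offset + i]);
				bo->pages[page_offset + i] = NULL;
			}
			return -ENOMEM;
		}
	}

	bo->rss_size += (size_t)n_pages * PAGE_SIZE;
	return 0;
}
```

The -EEXIST case matters to callers: a GPU fault handler can treat "already backed" as success rather than an error, which is exactly what the Panfrost patch later in this series does.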
Signed-off-by: Adri=C3=A1n Larumbe --- drivers/gpu/drm/drm_gem.c | 32 +++++++ drivers/gpu/drm/drm_gem_shmem_helper.c | 123 ++++++++++++++++++++++++- include/drm/drm_gem.h | 3 + include/drm/drm_gem_shmem_helper.h | 12 +++ 4 files changed, 165 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index ee811764c3df..930c5219e1e9 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -679,6 +679,38 @@ void drm_gem_put_pages(struct drm_gem_object *obj, str= uct page **pages, } EXPORT_SYMBOL(drm_gem_put_pages); =20 +void drm_gem_put_sparse_xarray(struct xarray *pa, unsigned long idx, + unsigned int npages, bool dirty, bool accessed) +{ + struct folio_batch fbatch; + struct page *page; + + folio_batch_init(&fbatch); + + xa_for_each(pa, idx, page) { + struct folio *folio =3D page_folio(page); + + if (dirty) + folio_mark_dirty(folio); + if (accessed) + folio_mark_accessed(folio); + + /* Undo the reference we took when populating the table */ + if (!folio_batch_add(&fbatch, folio)) + drm_gem_check_release_batch(&fbatch); + + xa_erase(pa, idx); + + idx +=3D folio_nr_pages(folio) - 1; + } + + if (folio_batch_count(&fbatch)) + drm_gem_check_release_batch(&fbatch); + + WARN_ON((idx+1) !=3D npages); +} +EXPORT_SYMBOL(drm_gem_put_sparse_xarray); + static int objects_lookup(struct drm_file *filp, u32 *handle, int count, struct drm_gem_object **objs) { diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_g= em_shmem_helper.c index d63e42be2d72..40f7f6812195 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -10,7 +10,6 @@ #include #include #include -#include =20 #ifdef CONFIG_X86 #include @@ -161,6 +160,18 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with= _mnt(struct drm_device *de } EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt); =20 +static void drm_gem_shmem_put_pages_sparse(struct drm_gem_shmem_object *sh= mem) +{ + unsigned int n_pages 
=3D shmem->rss_size / PAGE_SIZE; + + drm_WARN_ON(shmem->base.dev, (shmem->rss_size & (PAGE_SIZE - 1)) !=3D 0); + drm_WARN_ON(shmem->base.dev, !shmem->sparse); + + drm_gem_put_sparse_xarray(&shmem->xapages, 0, n_pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); +} + /** * drm_gem_shmem_free - Free resources associated with a shmem GEM object * @shmem: shmem GEM object to free @@ -264,10 +275,15 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_obj= ect *shmem) set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); #endif =20 - drm_gem_put_pages(obj, shmem->pages, - shmem->pages_mark_dirty_on_put, - shmem->pages_mark_accessed_on_put); - shmem->pages =3D NULL; + if (!shmem->sparse) { + drm_gem_put_pages(obj, shmem->pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + shmem->pages =3D NULL; + } else { + drm_gem_shmem_put_pages_sparse(shmem); + xa_destroy(&shmem->xapages); + } } EXPORT_SYMBOL(drm_gem_shmem_put_pages); =20 @@ -765,6 +781,81 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_lo= cked(struct drm_gem_shmem_ return ERR_PTR(ret); } =20 +static struct sg_table *drm_gem_shmem_get_sparse_pages_locked(struct drm_g= em_shmem_object *shmem, + unsigned int n_pages, + pgoff_t page_offset) +{ + struct drm_gem_object *obj =3D &shmem->base; + gfp_t mask =3D GFP_KERNEL | GFP_NOWAIT; + size_t size =3D n_pages * PAGE_SIZE; + struct address_space *mapping; + struct sg_table *sgt; + struct page *page; + bool first_alloc; + int ret, i; + + if (!shmem->sparse) + return ERR_PTR(-EINVAL); + + /* If the mapping exists, then bail out immediately */ + if (xa_load(&shmem->xapages, page_offset) !=3D NULL) + return ERR_PTR(-EEXIST); + + dma_resv_assert_held(shmem->base.resv); + + first_alloc =3D xa_empty(&shmem->xapages); + + mapping =3D shmem->base.filp->f_mapping; + mapping_set_unevictable(mapping); + + for (i =3D 0; i < n_pages; i++) { + page =3D shmem_read_mapping_page_nonblocking(mapping, page_offset + i); + if 
(IS_ERR(page)) {
+			ret = PTR_ERR(page);
+			goto err_free_pages;
+		}
+
+		/* Add the page into the xarray */
+		ret = xa_err(xa_store(&shmem->xapages, page_offset + i, page, mask));
+		if (ret) {
+			put_page(page);
+			goto err_free_pages;
+		}
+	}
+
+	sgt = kzalloc(sizeof(*sgt), mask);
+	if (!sgt) {
+		ret = -ENOMEM;
+		goto err_free_pages;
+	}
+
+	ret = sg_alloc_table_from_page_xarray(sgt, &shmem->xapages, page_offset, n_pages, 0, size, mask);
+	if (ret)
+		goto err_free_sgtable;
+
+	ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
+	if (ret)
+		goto err_free_sgtable;
+
+	if (first_alloc)
+		shmem->pages_use_count = 1;
+
+	shmem->rss_size += size;
+
+	return sgt;
+
+err_free_sgtable:
+	kfree(sgt);
+err_free_pages:
+	while (i--) {
+		page = xa_erase(&shmem->xapages, page_offset + i);
+		if (drm_WARN_ON(obj->dev, !page))
+			continue;
+		put_page(page);
+	}
+	return ERR_PTR(ret);
+}
+
 /**
  * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
  * scatter/gather table for a shmem GEM object.
@@ -796,6 +887,28 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
 
+struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem,
+						    unsigned int n_pages, pgoff_t page_offset)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct sg_table *sgt;
+	int ret;
+
+	if (drm_WARN_ON(obj->dev, !shmem->sparse))
+		return ERR_PTR(-EINVAL);
+
+	ret = dma_resv_lock(shmem->base.resv, NULL);
+	if (ret)
+		return ERR_PTR(ret);
+
+	sgt = drm_gem_shmem_get_sparse_pages_locked(shmem, n_pages, page_offset);
+
+	dma_resv_unlock(shmem->base.resv);
+
+	return sgt;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sparse_pages_sgt);
+
 /**
  * drm_gem_shmem_prime_import_sg_table - Produce a shmem GEM object from
  * another driver's scatter/gather table of pinned pages
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index fdae947682cd..4fd45169a3af 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -532,6 +533,8 @@ int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		       bool dirty, bool accessed);
+void drm_gem_put_sparse_xarray(struct xarray *pa, unsigned long idx,
+			       unsigned int npages, bool dirty, bool accessed);
 
 void drm_gem_lock(struct drm_gem_object *obj);
 void drm_gem_unlock(struct drm_gem_object *obj);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 902039cfc4ce..fcd84c8cf8e7 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -44,6 +44,14 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int pages_use_count;
 
+	/**
+	 * @rss_size:
+	 *
+	 * Size of the object RSS, in bytes. It can grow over the object's
+	 * lifetime.
+	 */
+	size_t rss_size;
+
 	/**
 	 * @madv: State for madvise
 	 *
@@ -107,6 +115,7 @@ struct drm_gem_shmem_object {
 	container_of(obj, struct drm_gem_shmem_object, base)
 
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
+struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size);
 struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *dev,
 							   size_t size,
 							   struct vfsmount *gemfs);
@@ -138,6 +147,9 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
 
+struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem,
+						    unsigned int n_pages, pgoff_t page_offset);
+
 void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 			      struct drm_printer *p, unsigned int indent);
 
-- 
2.47.1
From nobody Fri Dec 19 06:22:31 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 6/7] drm/panfrost: Use shmem sparse allocation for heap BOs
Date: Tue, 18 Feb 2025 23:25:36 +0000
Message-ID: <20250218232552.3450939-7-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>
References: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Panfrost heap BOs grow on demand when the GPU triggers a page fault after
accessing an address within the BO's virtual range.

We still store the sgts we get back from the shmem sparse allocation
function, since it was decided that management of sparse-memory SGTs should
be done by client drivers rather than the shmem subsystem.
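The fault-path arithmetic the driver keeps (round the faulting GPU address down to its 2 MiB section, convert that to a page offset inside the BO, then pick the per-section sg_table slot) can be sketched in plain C, assuming 4 KiB pages; the helper names below are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define SZ_2M (2UL * 1024 * 1024)
#define PAGE_SIZE (1UL << PAGE_SHIFT)

/* Round the faulting GPU address down to the 2 MiB section it falls in,
 * as the fault handler does before growing the heap ("Assume 2MB
 * alignment and size multiple"). */
static uint64_t fault_section_base(uint64_t addr)
{
	return addr & ~(SZ_2M - 1);
}

/* Page offset of the fault inside the BO, given the BO's first page
 * number (bomapping->mmnode.start in the driver). */
static uint64_t fault_page_offset(uint64_t addr, uint64_t bo_start_page)
{
	return (fault_section_base(addr) >> PAGE_SHIFT) - bo_start_page;
}

/* Index into bo->sgts: one sg_table pointer per 2 MiB section. */
static unsigned int sgt_slot(uint64_t page_offset)
{
	return (unsigned int)(page_offset / (SZ_2M / PAGE_SIZE));
}
```

For example, a BO starting at page 0x1000 (GPU VA 16 MiB) faulting at 0x1234567 resolves to section base 0x1200000, page offset 0x200, and sgt slot 1, i.e. the second 2 MiB chunk of the heap.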
Signed-off-by: Adri=C3=A1n Larumbe --- drivers/gpu/drm/panfrost/panfrost_gem.c | 12 ++-- drivers/gpu/drm/panfrost/panfrost_gem.h | 2 +- drivers/gpu/drm/panfrost/panfrost_mmu.c | 85 +++++-------------------- 3 files changed, 25 insertions(+), 74 deletions(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panf= rost/panfrost_gem.c index 8e0ff3efede7..0cda2c4e524f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -40,10 +40,10 @@ static void panfrost_gem_free_object(struct drm_gem_obj= ect *obj) int n_sgt =3D bo->base.base.size / SZ_2M; =20 for (i =3D 0; i < n_sgt; i++) { - if (bo->sgts[i].sgl) { - dma_unmap_sgtable(pfdev->dev, &bo->sgts[i], + if (bo->sgts[i]) { + dma_unmap_sgtable(pfdev->dev, bo->sgts[i], DMA_BIDIRECTIONAL, 0); - sg_free_table(&bo->sgts[i]); + sg_free_table(bo->sgts[i]); } } kvfree(bo->sgts); @@ -274,7 +274,11 @@ panfrost_gem_create(struct drm_device *dev, size_t siz= e, u32 flags) if (flags & PANFROST_BO_HEAP) size =3D roundup(size, SZ_2M); =20 - shmem =3D drm_gem_shmem_create(dev, size); + if (flags & PANFROST_BO_HEAP) + shmem =3D drm_gem_shmem_create_sparse(dev, size); + else + shmem =3D drm_gem_shmem_create(dev, size); + if (IS_ERR(shmem)) return ERR_CAST(shmem); =20 diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panf= rost/panfrost_gem.h index 7516b7ecf7fe..2a8d0752011e 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.h +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h @@ -11,7 +11,7 @@ struct panfrost_mmu; =20 struct panfrost_gem_object { struct drm_gem_shmem_object base; - struct sg_table *sgts; + struct sg_table **sgts; =20 /* * Use a list for now. 
If searching a mapping ever becomes the diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panf= rost/panfrost_mmu.c index b91019cd5acb..4a78ff9ca293 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -441,14 +441,11 @@ addr_to_mapping(struct panfrost_device *pfdev, int as= , u64 addr) static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int = as, u64 addr) { - int ret, i; struct panfrost_gem_mapping *bomapping; struct panfrost_gem_object *bo; - struct address_space *mapping; - struct drm_gem_object *obj; pgoff_t page_offset; struct sg_table *sgt; - struct page **pages; + int ret =3D 0; =20 bomapping =3D addr_to_mapping(pfdev, as, addr); if (!bomapping) @@ -459,94 +456,44 @@ static int panfrost_mmu_map_fault_addr(struct panfros= t_device *pfdev, int as, dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA =3D %llx)", bomapping->mmnode.start << PAGE_SHIFT); ret =3D -EINVAL; - goto err_bo; + goto fault_out; } WARN_ON(bomapping->mmu->as !=3D as); =20 /* Assume 2MB alignment and size multiple */ addr &=3D ~((u64)SZ_2M - 1); - page_offset =3D addr >> PAGE_SHIFT; - page_offset -=3D bomapping->mmnode.start; + page_offset =3D (addr >> PAGE_SHIFT) - bomapping->mmnode.start; =20 - obj =3D &bo->base.base; - - dma_resv_lock(obj->resv, NULL); - - if (!bo->base.pages) { + if (!bo->sgts) { bo->sgts =3D kvmalloc_array(bo->base.base.size / SZ_2M, - sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO); + sizeof(struct sg_table *), GFP_KERNEL | __GFP_ZERO); if (!bo->sgts) { ret =3D -ENOMEM; - goto err_unlock; - } - - pages =3D kvmalloc_array(bo->base.base.size >> PAGE_SHIFT, - sizeof(struct page *), GFP_KERNEL | __GFP_ZERO); - if (!pages) { - kvfree(bo->sgts); - bo->sgts =3D NULL; - ret =3D -ENOMEM; - goto err_unlock; - } - bo->base.pages =3D pages; - bo->base.pages_use_count =3D 1; - } else { - pages =3D bo->base.pages; - if (pages[page_offset]) { - /* Pages are already mapped, bail out. 
*/
-			goto out;
+			goto fault_out;
 		}
 	}
 
-	mapping = bo->base.base.filp->f_mapping;
-	mapping_set_unevictable(mapping);
+	sgt = drm_gem_shmem_get_sparse_pages_sgt(&bo->base, NUM_FAULT_PAGES, page_offset);
+	if (IS_ERR(sgt)) {
+		if (WARN_ON(PTR_ERR(sgt) != -EEXIST))
+			ret = PTR_ERR(sgt);
+		else
+			ret = 0;
 
-	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
-		/* Can happen if the last fault only partially filled this
-		 * section of the pages array before failing. In that case
-		 * we skip already filled pages.
-		 */
-		if (pages[i])
-			continue;
-
-		pages[i] = shmem_read_mapping_page(mapping, i);
-		if (IS_ERR(pages[i])) {
-			ret = PTR_ERR(pages[i]);
-			pages[i] = NULL;
-			goto err_unlock;
-		}
+		goto fault_out;
 	}
 
-	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
-	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
-					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
-	if (ret)
-		goto err_unlock;
-
-	ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
-	if (ret)
-		goto err_map;
-
 	mmu_map_sg(pfdev, bomapping->mmu, addr,
 		   IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
 
+	bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)] = sgt;
+
 	bomapping->active = true;
 	bo->heap_rss_size += SZ_2M;
 
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
 
-out:
-	dma_resv_unlock(obj->resv);
-
-	panfrost_gem_mapping_put(bomapping);
-
-	return 0;
-
-err_map:
-	sg_free_table(sgt);
-err_unlock:
-	dma_resv_unlock(obj->resv);
-err_bo:
+fault_out:
 	panfrost_gem_mapping_put(bomapping);
 	return ret;
 }
-- 
2.47.1
From nobody Fri Dec 19 06:22:31 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Liviu Dudau
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 7/7] drm/panfrost/panthor: Take sparse objects into account for fdinfo
Date: Tue, 18 Feb 2025 23:25:37 +0000
Message-ID: <20250218232552.3450939-8-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>
References: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Because of the alternative definition of the 'pages' field in shmem after
adding support for sparse allocations, the logic for deciding
whether pages are available must be expanded.

Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 4 +++-
 drivers/gpu/drm/panthor/panthor_gem.c   | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 0cda2c4e524f..ced2fdee74ab 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -200,7 +200,9 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	enum drm_gem_object_status res = 0;
 
-	if (bo->base.base.import_attach || bo->base.pages)
+	if (bo->base.base.import_attach ||
+	    (!bo->base.sparse && bo->base.pages) ||
+	    (bo->base.sparse && !xa_empty(&bo->base.xapages)))
 		res |= DRM_GEM_OBJECT_RESIDENT;
 
 	if (bo->base.madv == PANFROST_MADV_DONTNEED)
diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 8244a4e6c2a2..8dbaf766bd79 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -155,7 +155,9 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
 	struct panthor_gem_object *bo = to_panthor_bo(obj);
 	enum drm_gem_object_status res = 0;
 
-	if (bo->base.base.import_attach || bo->base.pages)
+	if (bo->base.base.import_attach ||
+	    (!bo->base.sparse && bo->base.pages) ||
+	    (bo->base.sparse && !xa_empty(&bo->base.xapages)))
 		res |= DRM_GEM_OBJECT_RESIDENT;
 
 	return res;
-- 
2.47.1
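The expanded residency check in both drivers reduces to a small predicate; here is a userspace sketch of that logic, with booleans standing in for the pointer and xarray state and a hypothetical `shmem_state` struct abbreviating the fields the drivers actually inspect:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for the fields panfrost_gem_status() and
 * panthor_gem_status() now look at. */
struct shmem_state {
	bool import_attach;  /* buffer imported via dma-buf */
	bool sparse;         /* object uses the xarray page index */
	bool pages;          /* non-sparse case: shmem->pages != NULL */
	bool xapages_empty;  /* sparse case: xa_empty(&shmem->xapages) */
};

/* A BO counts as resident for fdinfo if it is imported, or if its page
 * storage (dense pointer array, or sparse xarray) holds at least one
 * page. */
static bool gem_is_resident(const struct shmem_state *s)
{
	return s->import_attach ||
	       (!s->sparse && s->pages) ||
	       (s->sparse && !s->xapages_empty);
}
```

The union of `pages` and `xapages` in the shmem object is why the `sparse` flag has to gate which field is consulted: reading `pages` on a sparse object would reinterpret the xarray head as a pointer.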