From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch, airlied@linux.ie, maarten.lankhorst@linux.intel.com,
	mripard@kernel.org, inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com,
	krzysztof.kozlowski@canonical.com, oleksandr_andrushchenko@epam.com
Cc: dri-devel@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, xen-devel@lists.xenproject.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH 2/3] drm/xen: Implement mmap as GEM object function
Date: Mon, 8 Nov 2021 11:28:45 +0100
Message-Id: <20211108102846.309-3-tzimmermann@suse.de>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211108102846.309-1-tzimmermann@suse.de>
References: <20211108102846.309-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Moving the driver-specific mmap code into a GEM object function allows
the driver to use the DRM helpers for the various mmap callbacks. The
respective Xen functions are removed. The file_operations instance
xen_drm_dev_fops is now created by the helper macro
DEFINE_DRM_GEM_FOPS().
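For reference, DEFINE_DRM_GEM_FOPS() lives in include/drm/drm_gem.h and
expands to a file_operations instance wired entirely to the generic DRM
helpers. The sketch below approximates that expansion and is not taken
from this patch (members other than .mmap may differ slightly between
kernel versions); the point that matters here is that .mmap becomes
drm_gem_mmap(), which looks up the GEM object behind the fake mmap
offset and dispatches to its drm_gem_object_funcs.mmap callback.

/*
 * Approximate expansion of DEFINE_DRM_GEM_FOPS(xen_drm_dev_fops).
 * See include/drm/drm_gem.h for the authoritative definition; the
 * member list shown here is a sketch and may vary by kernel version.
 */
static const struct file_operations xen_drm_dev_fops = {
	.owner          = THIS_MODULE,
	.open           = drm_open,
	.release        = drm_release,
	.unlocked_ioctl = drm_ioctl,
	.compat_ioctl   = drm_compat_ioctl,
	.poll           = drm_poll,
	.read           = drm_read,
	.llseek         = noop_llseek,
	.mmap           = drm_gem_mmap, /* dispatches to drm_gem_object_funcs.mmap */
};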
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 drivers/gpu/drm/xen/xen_drm_front.c     |  16 +---
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 108 +++++++++---------------
 drivers/gpu/drm/xen/xen_drm_front_gem.h |   7 --
 3 files changed, 44 insertions(+), 87 deletions(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 9f14d99c763c..434064c820e8 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -469,19 +469,7 @@ static void xen_drm_drv_release(struct drm_device *dev)
 	kfree(drm_info);
 }
 
-static const struct file_operations xen_drm_dev_fops = {
-	.owner = THIS_MODULE,
-	.open = drm_open,
-	.release = drm_release,
-	.unlocked_ioctl = drm_ioctl,
-#ifdef CONFIG_COMPAT
-	.compat_ioctl = drm_compat_ioctl,
-#endif
-	.poll = drm_poll,
-	.read = drm_read,
-	.llseek = no_llseek,
-	.mmap = xen_drm_front_gem_mmap,
-};
+DEFINE_DRM_GEM_FOPS(xen_drm_dev_fops);
 
 static const struct drm_driver xen_drm_driver = {
 	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
@@ -489,7 +477,7 @@ static const struct drm_driver xen_drm_driver = {
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_import_sg_table = xen_drm_front_gem_import_sg_table,
-	.gem_prime_mmap = xen_drm_front_gem_prime_mmap,
+	.gem_prime_mmap = drm_gem_prime_mmap,
 	.dumb_create = xen_drm_drv_dumb_create,
 	.fops = &xen_drm_dev_fops,
 	.name = "xendrm-du",
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index b293c67230ef..dd358ba2bf8e 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -57,6 +57,47 @@ static void gem_free_pages_array(struct xen_gem_object *xen_obj)
 	xen_obj->pages = NULL;
 }
 
+static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
+					 struct vm_area_struct *vma)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	int ret;
+
+	vma->vm_ops = gem_obj->funcs->vm_ops;
+
+	/*
+	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
+	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
+	 * the whole buffer.
+	 */
+	vma->vm_flags &= ~VM_PFNMAP;
+	vma->vm_flags |= VM_MIXEDMAP;
+	vma->vm_pgoff = 0;
+
+	/*
+	 * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
+	 * all memory which is shared with other entities in the system
+	 * (including the hypervisor and other guests) must reside in memory
+	 * which is mapped as Normal Inner Write-Back Outer Write-Back
+	 * Inner-Shareable.
+	 */
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+	/*
+	 * vm_operations_struct.fault handler will be called if CPU access
+	 * to VM is here. For GPUs this isn't the case, because CPU doesn't
+	 * touch the memory. Insert pages now, so both CPU and GPU are happy.
+	 *
+	 * FIXME: as we insert all the pages now then no .fault handler must
+	 * be called, so don't provide one
+	 */
+	ret = vm_map_pages(vma, xen_obj->pages, xen_obj->num_pages);
+	if (ret < 0)
+		DRM_ERROR("Failed to map pages into vma: %d\n", ret);
+
+	return ret;
+}
+
 static const struct vm_operations_struct xen_drm_drv_vm_ops = {
 	.open = drm_gem_vm_open,
 	.close = drm_gem_vm_close,
@@ -67,6 +108,7 @@ static const struct drm_gem_object_funcs xen_drm_front_gem_object_funcs = {
 	.get_sg_table = xen_drm_front_gem_get_sg_table,
 	.vmap = xen_drm_front_gem_prime_vmap,
 	.vunmap = xen_drm_front_gem_prime_vunmap,
+	.mmap = xen_drm_front_gem_object_mmap,
 	.vm_ops = &xen_drm_drv_vm_ops,
 };
 
@@ -238,58 +280,6 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
 	return &xen_obj->base;
 }
 
-static int gem_mmap_obj(struct xen_gem_object *xen_obj,
-			struct vm_area_struct *vma)
-{
-	int ret;
-
-	/*
-	 * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
-	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-	 * the whole buffer.
-	 */
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP;
-	vma->vm_pgoff = 0;
-	/*
-	 * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
-	 * all memory which is shared with other entities in the system
-	 * (including the hypervisor and other guests) must reside in memory
-	 * which is mapped as Normal Inner Write-Back Outer Write-Back
-	 * Inner-Shareable.
-	 */
-	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
-
-	/*
-	 * vm_operations_struct.fault handler will be called if CPU access
-	 * to VM is here. For GPUs this isn't the case, because CPU
-	 * doesn't touch the memory. Insert pages now, so both CPU and GPU are
-	 * happy.
-	 * FIXME: as we insert all the pages now then no .fault handler must
-	 * be called, so don't provide one
-	 */
-	ret = vm_map_pages(vma, xen_obj->pages, xen_obj->num_pages);
-	if (ret < 0)
-		DRM_ERROR("Failed to map pages into vma: %d\n", ret);
-
-	return ret;
-}
-
-int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct xen_gem_object *xen_obj;
-	struct drm_gem_object *gem_obj;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret < 0)
-		return ret;
-
-	gem_obj = vma->vm_private_data;
-	xen_obj = to_xen_gem_obj(gem_obj);
-	return gem_mmap_obj(xen_obj, vma);
-}
-
 int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
@@ -313,17 +303,3 @@ void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 {
 	vunmap(map->vaddr);
 }
-
-int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
-				 struct vm_area_struct *vma)
-{
-	struct xen_gem_object *xen_obj;
-	int ret;
-
-	ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
-	if (ret < 0)
-		return ret;
-
-	xen_obj = to_xen_gem_obj(gem_obj);
-	return gem_mmap_obj(xen_obj, vma);
-}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a4e67d0a149c..eaea470f7001 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -15,9 +15,7 @@ struct dma_buf_attachment;
 struct dma_buf_map;
 struct drm_device;
 struct drm_gem_object;
-struct file;
 struct sg_table;
-struct vm_area_struct;
 
 struct drm_gem_object *xen_drm_front_gem_create(struct drm_device *dev,
 						size_t size);
@@ -33,15 +31,10 @@ struct page **xen_drm_front_gem_get_pages(struct drm_gem_object *obj);
 
 void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
-int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-
 int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
 				 struct dma_buf_map *map);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
 				    struct dma_buf_map *map);
 
-int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
-				 struct vm_area_struct *vma);
-
 #endif /* __XEN_DRM_FRONT_GEM_H */
-- 
2.33.1
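
Usage note, not part of the patch: the path reworked above is the one
taken when userspace mmaps a dumb buffer on the DRM file descriptor. A
minimal, hypothetical userspace sketch of that flow is shown below; it
assumes the kernel UAPI DRM headers are available as <drm/drm.h>, uses
/dev/dri/card0 as an arbitrary device node, and omits error handling.

/* Hypothetical userspace sketch: exercises drm_gem_mmap() and, with this
 * patch, the driver's new drm_gem_object_funcs.mmap callback. Device
 * path and buffer size are illustrative only. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);	/* assumed device node */

	struct drm_mode_create_dumb create = {
		.width = 640, .height = 480, .bpp = 32,	/* arbitrary buffer */
	};
	ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);	/* allocate GEM object */

	struct drm_mode_map_dumb map = { .handle = create.handle };
	ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);	/* get fake mmap offset */

	/* mmap() on the DRM fd enters drm_gem_mmap(), which now dispatches
	 * to the GEM object's .mmap callback instead of a driver fops hook. */
	void *ptr = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, map.offset);
	if (ptr != MAP_FAILED)
		memset(ptr, 0, create.size);		/* touch the mapping */

	close(fd);
	return 0;
}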