From: Ryosuke Yasuoka <ryasuoka@redhat.com>
To: airlied@redhat.com, kraxel@redhat.com, gurchetansingh@chromium.org, olvaffe@gmail.com, maarten.lankhorst@linux.intel.com, mripard@kernel.org, tzimmermann@suse.de, daniel@ffwll.ch, jfalempe@redhat.com, dmitry.osipenko@collabora.com
Cc: Ryosuke Yasuoka <ryasuoka@redhat.com>, virtualization@lists.linux.dev, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v6] drm/virtio: Add drm_panic support
Date: Thu, 30 Jan 2025 18:05:15 +0900
Message-ID: <20250130090517.201356-1-ryasuoka@redhat.com>

Virtio-gpu supports the drm_panic module, which displays a message on
the screen when a kernel panic occurs.

Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com>
Signed-off-by: Ryosuke Yasuoka <ryasuoka@redhat.com>
---
v6: Based on Dmitry's comments, fix the following:
- Reject external dmabufs backing the GEM object
- Allocate vbuf with kmem_cache_zalloc(..., GFP_ATOMIC) instead of
  drmm_kzalloc().
v5: https://lore.kernel.org/all/CAHpthZrZ6DjsCQ4baQ80b2vOTdkR=vHDx=10W7DTS4ohxb6=pg@mail.gmail.com/
Based on Dmitry's comments, fix the following:
- Rename virtio_panic_buffer to panic_vbuf
- Remove some unnecessary dummy ret variables and return directly.
- Reject the bo if it is a VRAM BO
- Remove virtio_gpu_panic_put_vbuf() before notify
- Add a description of the panic buffer allocation
- Remove virtio_gpu_panic_object_array and use
  virtio_gpu_panic_array_alloc() to allocate objs instead of static
  allocation on the stack.

v4: https://lore.kernel.org/all/ec721548-0d47-4c40-9e9d-59f58e2181ae@redhat.com/
- As per Dmitry's comment, make virtio_panic_buffer private to
  virtio_gpu_device.

v3: https://lore.kernel.org/all/09d9815c-9d5b-464b-9362-5b8232d36de1@collabora.com/
- As per Jocelyn's comment, add a finite 500 usec timeout in
  virtio_gpu_panic_put_vbuf() to avoid an infinite loop

v2: https://lore.kernel.org/all/d885913e-e81c-488e-8db8-e3f7fae13b2c@redhat.com/
- Remove unnecessary virtio_gpu_vbuffer_inline
- Remove reclaim_list and just call drm_gem_object_put() if there is an obj
- Don't wait for an event in virtio_gpu_panic_queue_ctrl_sgs and just
  return -ENOMEM. Also add error handlers for this error.
- Use virtio_gpu_panic_queue_fenced_ctrl_buffer() in
  virtio_gpu_panic_cmd_resource_flush
- Remove the fence and objs arguments because these are always NULL in
  the panic handler.
- Rename virtio_gpu_panic_queue_fenced_ctrl_buffer to ..._queue_ctrl_buffer
- Rename virtio_gpu_panic_alloc_cmd to ..._panic_init_cmd

v1: https://lore.kernel.org/all/c7a4a4cd-ce84-4e87-924d-c1c001fc5d28@redhat.com/

 drivers/gpu/drm/virtio/virtgpu_drv.h   |  17 +++
 drivers/gpu/drm/virtio/virtgpu_gem.c   |  14 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 103 ++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 157 ++++++++++++++++++++++++-
 4 files changed, 285 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index f42ca9d8ed10..44511f316851 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -126,6 +126,12 @@ struct virtio_gpu_object_array {
 	struct drm_gem_object *objs[] __counted_by(total);
 };
 
+#define MAX_INLINE_CMD_SIZE 96
+#define MAX_INLINE_RESP_SIZE 24
+#define VBUFFER_SIZE (sizeof(struct virtio_gpu_vbuffer) \
+		      + MAX_INLINE_CMD_SIZE \
+		      + MAX_INLINE_RESP_SIZE)
+
 struct virtio_gpu_vbuffer;
 struct virtio_gpu_device;
 
@@ -310,6 +316,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 				struct drm_device *dev,
 				struct drm_mode_create_dumb *args);
 
+struct virtio_gpu_object_array *virtio_gpu_panic_array_alloc(void);
 struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents);
 struct virtio_gpu_object_array*
 virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents);
@@ -334,12 +341,21 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 				   struct virtio_gpu_object *bo);
+int virtio_gpu_panic_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
+					     uint64_t offset,
+					     uint32_t width, uint32_t height,
+					     uint32_t x, uint32_t y,
+					     struct virtio_gpu_object_array *objs);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					uint64_t offset,
 					uint32_t width, uint32_t height,
 					uint32_t x, uint32_t y,
 					struct virtio_gpu_object_array *objs,
 					struct virtio_gpu_fence *fence);
+void virtio_gpu_panic_cmd_resource_flush(struct virtio_gpu_device *vgdev,
+					 uint32_t resource_id,
+					 uint32_t x, uint32_t y,
+					 uint32_t width, uint32_t height);
 void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
 				   uint32_t resource_id,
 				   uint32_t x, uint32_t y,
@@ -408,6 +424,7 @@ void virtio_gpu_ctrl_ack(struct virtqueue *vq);
 void virtio_gpu_cursor_ack(struct virtqueue *vq);
 void virtio_gpu_dequeue_ctrl_func(struct work_struct *work);
 void virtio_gpu_dequeue_cursor_func(struct work_struct *work);
+void virtio_gpu_panic_notify(struct virtio_gpu_device *vgdev);
 void virtio_gpu_notify(struct virtio_gpu_device *vgdev);
 
 int
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 5aab588fc400..dde8fc1a3689 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -148,6 +148,20 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 	virtio_gpu_notify(vgdev);
 }
 
+/* For drm panic */
+struct virtio_gpu_object_array *virtio_gpu_panic_array_alloc(void)
+{
+	struct virtio_gpu_object_array *objs;
+
+	objs = kmalloc(sizeof(struct virtio_gpu_object_array), GFP_ATOMIC);
+	if (!objs)
+		return NULL;
+
+	objs->nents = 0;
+	objs->total = 1;
+	return objs;
+}
+
 struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents)
 {
 	struct virtio_gpu_object_array *objs;
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 42aa554eca9f..b48b3816b241 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -28,6 +28,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include "virtgpu_drv.h"
 
@@ -127,6 +129,30 @@ static int virtio_gpu_plane_atomic_check(struct drm_plane *plane,
 	return ret;
 }
 
+/* For drm panic */
+static int virtio_gpu_panic_update_dumb_bo(struct virtio_gpu_device *vgdev,
+					   struct drm_plane_state *state,
+					   struct drm_rect *rect)
+{
+	struct virtio_gpu_object *bo =
+		gem_to_virtio_gpu_obj(state->fb->obj[0]);
+	struct virtio_gpu_object_array *objs;
+	uint32_t w = rect->x2 - rect->x1;
+	uint32_t h = rect->y2 - rect->y1;
+	uint32_t x = rect->x1;
+	uint32_t y = rect->y1;
+	uint32_t off = x * state->fb->format->cpp[0] +
+		y * state->fb->pitches[0];
+
+	objs = virtio_gpu_panic_array_alloc();
+	if (!objs)
+		return -ENOMEM;
+	virtio_gpu_array_add_obj(objs, &bo->base.base);
+
+	return virtio_gpu_panic_cmd_transfer_to_host_2d(vgdev, off, w, h, x, y,
+							objs);
+}
+
 static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev,
 				      struct drm_plane_state *state,
 				      struct drm_rect *rect)
@@ -150,6 +176,24 @@ static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev,
 					   objs, NULL);
 }
 
+/* For drm_panic */
+static void virtio_gpu_panic_resource_flush(struct drm_plane *plane,
+					    uint32_t x, uint32_t y,
+					    uint32_t width, uint32_t height)
+{
+	struct drm_device *dev = plane->dev;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_object *bo;
+
+	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
+	virtio_gpu_panic_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
+					    width, height);
+	virtio_gpu_panic_notify(vgdev);
+}
+
 static void virtio_gpu_resource_flush(struct drm_plane *plane,
 				      uint32_t x, uint32_t y,
 				      uint32_t width, uint32_t height)
@@ -446,11 +490,70 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 	virtio_gpu_cursor_ping(vgdev, output);
 }
 
+static int virtio_drm_get_scanout_buffer(struct drm_plane *plane,
+					 struct drm_scanout_buffer *sb)
+{
+	struct virtio_gpu_object *bo;
+
+	if (!plane->state || !plane->state->fb || !plane->state->visible)
+		return -ENODEV;
+
+	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
+	if (virtio_gpu_is_vram(bo) ||
+	    bo->base.base.import_attach)
+		return -ENODEV;
+
+	/* try to vmap it if possible */
+	if (!bo->base.vaddr) {
+		int ret;
+
+		ret = drm_gem_shmem_vmap(&bo->base, &sb->map[0]);
+		if (ret)
+			return ret;
+	} else {
+		iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);
+	}
+
+	sb->format = plane->state->fb->format;
+	sb->height = plane->state->fb->height;
+	sb->width = plane->state->fb->width;
+	sb->pitch[0] = plane->state->fb->pitches[0];
+	return 0;
+}
+
+static void virtio_panic_flush(struct drm_plane *plane)
+{
+	struct virtio_gpu_object *bo;
+	struct drm_device *dev = plane->dev;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct drm_rect rect;
+
+	rect.x1 = 0;
+	rect.y1 = 0;
+	rect.x2 = plane->state->fb->width;
+	rect.y2 = plane->state->fb->height;
+
+	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
+
+	if (bo->dumb) {
+		if (virtio_gpu_panic_update_dumb_bo(vgdev, plane->state,
+						    &rect))
+			return;
+	}
+
+	virtio_gpu_panic_resource_flush(plane,
+					plane->state->src_x >> 16,
+					plane->state->src_y >> 16,
+					plane->state->src_w >> 16,
+					plane->state->src_h >> 16);
+}
+
 static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
 	.prepare_fb		= virtio_gpu_plane_prepare_fb,
 	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
 	.atomic_check		= virtio_gpu_plane_atomic_check,
 	.atomic_update		= virtio_gpu_primary_plane_update,
+	.get_scanout_buffer	= virtio_drm_get_scanout_buffer,
+	.panic_flush		= virtio_panic_flush,
 };
 
 static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index ad91624df42d..a5e9adadb149 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -36,12 +36,6 @@
 #include "virtgpu_drv.h"
 #include "virtgpu_trace.h"
 
-#define MAX_INLINE_CMD_SIZE 96
-#define MAX_INLINE_RESP_SIZE 24
-#define VBUFFER_SIZE (sizeof(struct virtio_gpu_vbuffer) \
-		      + MAX_INLINE_CMD_SIZE \
-		      + MAX_INLINE_RESP_SIZE)
-
 static void convert_to_hw_box(struct virtio_gpu_box *dst,
 			      const struct drm_virtgpu_3d_box *src)
 {
@@ -86,6 +80,22 @@ void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev)
 	vgdev->vbufs = NULL;
 }
 
+/* For drm_panic */
+static struct virtio_gpu_vbuffer*
+virtio_gpu_panic_get_vbuf(struct virtio_gpu_device *vgdev, int size)
+{
+	struct virtio_gpu_vbuffer *vbuf;
+
+	vbuf = kmem_cache_zalloc(vgdev->vbufs, GFP_ATOMIC);
+
+	vbuf->buf = (void *)vbuf + sizeof(*vbuf);
+	vbuf->size = size;
+	vbuf->resp_cb = NULL;
+	vbuf->resp_size = sizeof(struct virtio_gpu_ctrl_hdr);
+	vbuf->resp_buf = (void *)vbuf->buf + size;
+	return vbuf;
+}
+
 static struct virtio_gpu_vbuffer*
 virtio_gpu_get_vbuf(struct virtio_gpu_device *vgdev,
 		    int size, int resp_size, void *resp_buf,
@@ -137,6 +147,18 @@ virtio_gpu_alloc_cursor(struct virtio_gpu_device *vgdev,
 	return (struct virtio_gpu_update_cursor *)vbuf->buf;
 }
 
+/* For drm_panic */
+static void *virtio_gpu_panic_alloc_cmd_resp(struct virtio_gpu_device *vgdev,
+					     struct virtio_gpu_vbuffer **vbuffer_p,
+					     int cmd_size)
+{
+	struct virtio_gpu_vbuffer *vbuf;
+
+	vbuf = virtio_gpu_panic_get_vbuf(vgdev, cmd_size);
+	*vbuffer_p = vbuf;
+	return (struct virtio_gpu_command *)vbuf->buf;
+}
+
 static void *virtio_gpu_alloc_cmd_resp(struct virtio_gpu_device *vgdev,
 				       virtio_gpu_resp_cb cb,
 				       struct virtio_gpu_vbuffer **vbuffer_p,
@@ -311,6 +333,34 @@ static struct sg_table *vmalloc_to_sgt(char *data, uint32_t size, int *sg_ents)
 	return sgt;
 }
 
+/* For drm_panic */
+static int virtio_gpu_panic_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
+					   struct virtio_gpu_vbuffer *vbuf,
+					   int elemcnt,
+					   struct scatterlist **sgs,
+					   int outcnt,
+					   int incnt)
+{
+	struct virtqueue *vq = vgdev->ctrlq.vq;
+	int ret;
+
+	if (vgdev->has_indirect)
+		elemcnt = 1;
+
+	if (vq->num_free < elemcnt)
+		return -ENOMEM;
+
+	ret = virtqueue_add_sgs(vq, sgs, outcnt, incnt, vbuf, GFP_ATOMIC);
+	WARN_ON(ret);
+
+	vbuf->seqno = ++vgdev->ctrlq.seqno;
+	trace_virtio_gpu_cmd_queue(vq, virtio_gpu_vbuf_ctrl_hdr(vbuf), vbuf->seqno);
+
+	atomic_inc(&vgdev->pending_commands);
+
+	return 0;
+}
+
 static int virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 				     struct virtio_gpu_vbuffer *vbuf,
 				     struct virtio_gpu_fence *fence,
@@ -368,6 +418,32 @@ static int virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 	return 0;
 }
 
+/* For drm_panic */
+static int virtio_gpu_panic_queue_ctrl_buffer(struct virtio_gpu_device *vgdev,
+					      struct virtio_gpu_vbuffer *vbuf)
+{
+	struct scatterlist *sgs[3], vcmd, vresp;
+	int elemcnt = 0, outcnt = 0, incnt = 0;
+
+	/* set up vcmd */
+	sg_init_one(&vcmd, vbuf->buf, vbuf->size);
+	elemcnt++;
+	sgs[outcnt] = &vcmd;
+	outcnt++;
+
+	/* set up vresp */
+	if (vbuf->resp_size) {
+		sg_init_one(&vresp, vbuf->resp_buf, vbuf->resp_size);
+		elemcnt++;
+		sgs[outcnt + incnt] = &vresp;
+		incnt++;
+	}
+
+	return virtio_gpu_panic_queue_ctrl_sgs(vgdev, vbuf,
+					       elemcnt, sgs,
+					       outcnt, incnt);
+}
+
 static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
 					       struct virtio_gpu_vbuffer *vbuf,
 					       struct virtio_gpu_fence *fence)
@@ -422,6 +498,21 @@ static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
 	return ret;
 }
 
+/* For drm_panic */
+void virtio_gpu_panic_notify(struct virtio_gpu_device *vgdev)
+{
+	bool notify;
+
+	if (!atomic_read(&vgdev->pending_commands))
+		return;
+
+	atomic_set(&vgdev->pending_commands, 0);
+	notify = virtqueue_kick_prepare(vgdev->ctrlq.vq);
+
+	if (notify)
+		virtqueue_notify(vgdev->ctrlq.vq);
+}
+
 void virtio_gpu_notify(struct virtio_gpu_device *vgdev)
 {
 	bool notify;
@@ -567,6 +658,29 @@ void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev,
 	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
 }
 
+/* For drm_panic */
+void virtio_gpu_panic_cmd_resource_flush(struct virtio_gpu_device *vgdev,
+					 uint32_t resource_id,
+					 uint32_t x, uint32_t y,
+					 uint32_t width, uint32_t height)
+{
+	struct virtio_gpu_resource_flush *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_panic_alloc_cmd_resp(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+	vbuf->objs = NULL;
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_FLUSH);
+	cmd_p->resource_id = cpu_to_le32(resource_id);
+	cmd_p->r.width = cpu_to_le32(width);
+	cmd_p->r.height = cpu_to_le32(height);
+	cmd_p->r.x = cpu_to_le32(x);
+	cmd_p->r.y = cpu_to_le32(y);
+
+	virtio_gpu_panic_queue_ctrl_buffer(vgdev, vbuf);
+}
+
 void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
 				   uint32_t resource_id,
 				   uint32_t x, uint32_t y,
@@ -591,6 +705,37 @@ void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
 	virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
 }
 
+/* For drm_panic */
+int virtio_gpu_panic_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
+					     uint64_t offset,
+					     uint32_t width, uint32_t height,
+					     uint32_t x, uint32_t y,
+					     struct virtio_gpu_object_array *objs)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
+	struct virtio_gpu_transfer_to_host_2d *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
+
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
+		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
+					    bo->base.sgt, DMA_TO_DEVICE);
+
+	cmd_p = virtio_gpu_panic_alloc_cmd_resp(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+	vbuf->objs = objs;
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+	cmd_p->offset = cpu_to_le64(offset);
+	cmd_p->r.width = cpu_to_le32(width);
+	cmd_p->r.height = cpu_to_le32(height);
+	cmd_p->r.x = cpu_to_le32(x);
+	cmd_p->r.y = cpu_to_le32(y);
+
+	return virtio_gpu_panic_queue_ctrl_buffer(vgdev, vbuf);
+}
+
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					uint64_t offset,
 					uint32_t width, uint32_t height,

base-commit: 64179a1416e1420a34226ab3beb5f84710953d16
-- 
2.47.1