From: Ryosuke Yasuoka
To: airlied@redhat.com, kraxel@redhat.com, gurchetansingh@chromium.org,
	olvaffe@gmail.com, maarten.lankhorst@linux.intel.com,
	mripard@kernel.org, tzimmermann@suse.de, simona@ffwll.ch
Cc: Jocelyn Falempe, dri-devel@lists.freedesktop.org,
	virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
	Ryosuke Yasuoka
Subject: [PATCH v3] drm/virtio: Add drm_panic support
Date: Fri, 8 Nov 2024 12:26:02 +0900
Message-ID: <20241108032603.3164570-1-ryasuoka@redhat.com>

From: Jocelyn Falempe

Virtio-gpu supports the drm_panic module, which displays a message on
the screen when a kernel panic occurs.

Signed-off-by: Ryosuke Yasuoka
Signed-off-by: Jocelyn Falempe
---
v3:
  - As per Jocelyn's comment, add a finite 500 usec timeout in
    virtio_gpu_panic_put_vbuf() to avoid an infinite loop

v2: https://lkml.org/lkml/2024/11/6/668
  - Remove the unnecessary virtio_gpu_vbuffer_inline
  - Remove reclaim_list and just call drm_gem_object_put() if there is
    an obj
  - Don't wait for an event in virtio_gpu_panic_queue_ctrl_sgs() and
    just return -ENOMEM. Also add error handling for this case.
  - Use virtio_gpu_panic_queue_fenced_ctrl_buffer() in
    virtio_gpu_panic_cmd_resource_flush()
  - Remove the fence and objs arguments because they are always NULL
    in the panic handler.
  - Rename virtio_gpu_panic_queue_fenced_ctrl_buffer to
    ..._queue_ctrl_buffer
  - Rename virtio_gpu_panic_alloc_cmd to ..._panic_init_cmd

v1: https://lkml.org/lkml/2024/10/31/154

 drivers/gpu/drm/virtio/virtgpu_drv.h   |  18 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 171 +++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 148 ++++++++++++++++++++-
 3 files changed, 331 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 64c236169db8..5387e3fd9dee 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -125,6 +125,12 @@ struct virtio_gpu_object_array {
 	struct drm_gem_object *objs[] __counted_by(total);
 };
 
+#define MAX_INLINE_CMD_SIZE 96
+#define MAX_INLINE_RESP_SIZE 24
+#define VBUFFER_SIZE (sizeof(struct virtio_gpu_vbuffer) \
+		      + MAX_INLINE_CMD_SIZE \
+		      + MAX_INLINE_RESP_SIZE)
+
 struct virtio_gpu_vbuffer;
 struct virtio_gpu_device;
 
@@ -329,12 +335,23 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 				   struct virtio_gpu_object *bo);
+int virtio_gpu_panic_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
+					     uint64_t offset,
+					     uint32_t width, uint32_t height,
+					     uint32_t x, uint32_t y,
+					     struct virtio_gpu_object_array *objs,
+					     struct virtio_gpu_vbuffer *vbuf);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					uint64_t offset,
 					uint32_t width, uint32_t height,
 					uint32_t x, uint32_t y,
 					struct virtio_gpu_object_array *objs,
 					struct virtio_gpu_fence *fence);
+int virtio_gpu_panic_cmd_resource_flush(struct virtio_gpu_device *vgdev,
+					struct virtio_gpu_vbuffer *vbuf,
+					uint32_t resource_id,
+					uint32_t x, uint32_t y,
+					uint32_t width, uint32_t height);
 void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
 				   uint32_t resource_id,
 				   uint32_t x, uint32_t y,
@@ -399,6 +416,7 @@ void virtio_gpu_ctrl_ack(struct virtqueue *vq);
 void virtio_gpu_cursor_ack(struct virtqueue *vq);
 void virtio_gpu_dequeue_ctrl_func(struct work_struct *work);
 void virtio_gpu_dequeue_cursor_func(struct work_struct *work);
+void virtio_gpu_panic_notify(struct virtio_gpu_device *vgdev);
 void virtio_gpu_notify(struct virtio_gpu_device *vgdev);
 
 int
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a72a2dbda031..0098bbf487d7 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -26,6 +26,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 
 #include "virtgpu_drv.h"
 
@@ -108,6 +111,30 @@ static int virtio_gpu_plane_atomic_check(struct drm_plane *plane,
 	return ret;
 }
 
+/* For drm panic */
+static int virtio_gpu_panic_update_dumb_bo(struct virtio_gpu_device *vgdev,
+					   struct drm_plane_state *state,
+					   struct drm_rect *rect,
+					   struct virtio_gpu_object_array *objs,
+					   struct virtio_gpu_vbuffer *vbuf)
+{
+	int ret;
+	struct virtio_gpu_object *bo =
+		gem_to_virtio_gpu_obj(state->fb->obj[0]);
+	uint32_t w = rect->x2 - rect->x1;
+	uint32_t h = rect->y2 - rect->y1;
+	uint32_t x = rect->x1;
+	uint32_t y = rect->y1;
+	uint32_t off = x * state->fb->format->cpp[0] +
+		y * state->fb->pitches[0];
+
+	virtio_gpu_array_add_obj(objs, &bo->base.base);
+
+	ret = virtio_gpu_panic_cmd_transfer_to_host_2d(vgdev, off, w, h, x, y,
+						       objs, vbuf);
+	return ret;
+}
+
 static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev,
 				      struct drm_plane_state *state,
 				      struct drm_rect *rect)
@@ -131,6 +158,26 @@ static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev,
 					   objs, NULL);
 }
 
+/* For drm_panic */
+static int virtio_gpu_panic_resource_flush(struct drm_plane *plane,
+					   struct virtio_gpu_vbuffer *vbuf,
+					   uint32_t x, uint32_t y,
+					   uint32_t width, uint32_t height)
+{
+	int ret;
+	struct drm_device *dev = plane->dev;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_object *bo;
+
+	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
+	ret = virtio_gpu_panic_cmd_resource_flush(vgdev, vbuf, bo->hw_res_handle, x, y,
+						  width, height);
+	return ret;
+}
+
 static void virtio_gpu_resource_flush(struct drm_plane *plane,
 				      uint32_t x, uint32_t y,
 				      uint32_t width, uint32_t height)
@@ -359,11 +406,128 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 	virtio_gpu_cursor_ping(vgdev, output);
 }
 
+static int virtio_drm_get_scanout_buffer(struct drm_plane *plane,
+					 struct drm_scanout_buffer *sb)
+{
+	struct virtio_gpu_object *bo;
+
+	if (!plane->state || !plane->state->fb || !plane->state->visible)
+		return -ENODEV;
+
+	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
+
+	/* try to vmap it if possible */
+	if (!bo->base.vaddr) {
+		int ret;
+
+		ret = drm_gem_shmem_vmap(&bo->base, &sb->map[0]);
+		if (ret)
+			return ret;
+	} else {
+		iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);
+	}
+
+	sb->format = plane->state->fb->format;
+	sb->height = plane->state->fb->height;
+	sb->width = plane->state->fb->width;
+	sb->pitch[0] = plane->state->fb->pitches[0];
+	return 0;
+}
+
+struct virtio_gpu_panic_object_array {
+	struct ww_acquire_ctx ticket;
+	struct list_head next;
+	u32 nents, total;
+	struct drm_gem_object *objs;
+};
+
+static void *virtio_panic_buffer;
+
+static void virtio_gpu_panic_put_vbuf(struct virtqueue *vq,
+				      struct virtio_gpu_vbuffer *vbuf,
+				      struct virtio_gpu_object_array *objs)
+{
+	unsigned int len;
+	int i;
+
+	/* waiting vbuf to be used */
+	for (i = 0; i < 500; i++) {
+		if (vbuf == virtqueue_get_buf(vq, &len)) {
+			if (objs != NULL && vbuf->objs)
+				drm_gem_object_put(objs->objs[0]);
+			break;
+		}
+		udelay(1);
+	}
+}
+
+static void virtio_panic_flush(struct drm_plane *plane)
+{
+	int ret;
+	struct virtio_gpu_object *bo;
+	struct drm_device *dev = plane->dev;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct drm_rect rect;
+	struct virtio_gpu_vbuffer *vbuf_dumb_bo = virtio_panic_buffer;
+	struct virtio_gpu_vbuffer *vbuf_resource_flush = virtio_panic_buffer + VBUFFER_SIZE;
+
+	rect.x1 = 0;
+	rect.y1 = 0;
+	rect.x2 = plane->state->fb->width;
+	rect.y2 = plane->state->fb->height;
+
+	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
+
+	struct drm_gem_object obj;
+	struct virtio_gpu_panic_object_array objs = {
+		.next = { NULL, NULL },
+		.nents = 0,
+		.total = 1,
+		.objs = &obj
+	};
+
+	if (bo->dumb) {
+		ret = virtio_gpu_panic_update_dumb_bo(vgdev,
+						      plane->state,
+						      &rect,
+						      (struct virtio_gpu_object_array *)&objs,
+						      vbuf_dumb_bo);
+		if (ret) {
+			if (vbuf_dumb_bo->objs)
+				drm_gem_object_put(&objs.objs[0]);
+			return;
+		}
+	}
+
+	ret = virtio_gpu_panic_resource_flush(plane, vbuf_resource_flush,
+					      plane->state->src_x >> 16,
+					      plane->state->src_y >> 16,
+					      plane->state->src_w >> 16,
+					      plane->state->src_h >> 16);
+	if (ret) {
+		virtio_gpu_panic_put_vbuf(vgdev->ctrlq.vq,
+					  vbuf_dumb_bo,
+					  (struct virtio_gpu_object_array *)&objs);
+		return;
+	}
+
+	virtio_gpu_panic_notify(vgdev);
+
+	virtio_gpu_panic_put_vbuf(vgdev->ctrlq.vq,
+				  vbuf_dumb_bo,
+				  (struct virtio_gpu_object_array *)&objs);
+	virtio_gpu_panic_put_vbuf(vgdev->ctrlq.vq,
+				  vbuf_resource_flush,
+				  NULL);
+}
+
 static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
 	.prepare_fb		= virtio_gpu_plane_prepare_fb,
 	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
 	.atomic_check		= virtio_gpu_plane_atomic_check,
 	.atomic_update		= virtio_gpu_primary_plane_update,
+	.get_scanout_buffer	= virtio_drm_get_scanout_buffer,
+	.panic_flush		= virtio_panic_flush,
 };
 
 static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
@@ -383,6 +547,13 @@ struct drm_plane *virtio_gpu_plane_init(struct virtio_gpu_device *vgdev,
 	const uint32_t *formats;
 	int nformats;
 
+	/* allocate panic buffers */
+	if (index == 0 && type == DRM_PLANE_TYPE_PRIMARY) {
+		virtio_panic_buffer = drmm_kzalloc(dev, 2 * VBUFFER_SIZE, GFP_KERNEL);
+		if (!virtio_panic_buffer)
+			return ERR_PTR(-ENOMEM);
+	}
+
 	if (type == DRM_PLANE_TYPE_CURSOR) {
 		formats = virtio_gpu_cursor_formats;
 		nformats = ARRAY_SIZE(virtio_gpu_cursor_formats);
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 0d3d0d09f39b..f6e1655458dd 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -36,12 +36,6 @@
 #include "virtgpu_drv.h"
 #include "virtgpu_trace.h"
 
-#define MAX_INLINE_CMD_SIZE 96
-#define MAX_INLINE_RESP_SIZE 24
-#define VBUFFER_SIZE (sizeof(struct virtio_gpu_vbuffer) \
-		      + MAX_INLINE_CMD_SIZE \
-		      + MAX_INLINE_RESP_SIZE)
-
 static void convert_to_hw_box(struct virtio_gpu_box *dst,
 			      const struct drm_virtgpu_3d_box *src)
 {
@@ -311,6 +305,34 @@ static struct sg_table *vmalloc_to_sgt(char *data, uint32_t size, int *sg_ents)
 	return sgt;
 }
 
+/* For drm_panic */
+static int virtio_gpu_panic_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
+					   struct virtio_gpu_vbuffer *vbuf,
+					   int elemcnt,
+					   struct scatterlist **sgs,
+					   int outcnt,
+					   int incnt)
+{
+	struct virtqueue *vq = vgdev->ctrlq.vq;
+	int ret;
+
+	if (vgdev->has_indirect)
+		elemcnt = 1;
+
+	if (vq->num_free < elemcnt)
+		return -ENOMEM;
+
+	ret = virtqueue_add_sgs(vq, sgs, outcnt, incnt, vbuf, GFP_ATOMIC);
+	WARN_ON(ret);
+
+	vbuf->seqno = ++vgdev->ctrlq.seqno;
+	trace_virtio_gpu_cmd_queue(vq, virtio_gpu_vbuf_ctrl_hdr(vbuf), vbuf->seqno);
+
+	atomic_inc(&vgdev->pending_commands);
+
+	return 0;
+}
+
 static int virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 				     struct virtio_gpu_vbuffer *vbuf,
 				     struct virtio_gpu_fence *fence,
@@ -368,6 +390,33 @@ static int virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 	return 0;
 }
 
+/* For drm_panic */
+static int virtio_gpu_panic_queue_ctrl_buffer(struct virtio_gpu_device *vgdev,
+					      struct virtio_gpu_vbuffer *vbuf)
+{
+	struct scatterlist *sgs[3], vcmd, vresp;
+	int elemcnt = 0, outcnt = 0, incnt = 0, ret;
+
+	/* set up vcmd */
+	sg_init_one(&vcmd, vbuf->buf, vbuf->size);
+	elemcnt++;
+	sgs[outcnt] = &vcmd;
+	outcnt++;
+
+	/* set up vresp */
+	if (vbuf->resp_size) {
+		sg_init_one(&vresp, vbuf->resp_buf, vbuf->resp_size);
+		elemcnt++;
+		sgs[outcnt + incnt] = &vresp;
+		incnt++;
+	}
+
+	ret = virtio_gpu_panic_queue_ctrl_sgs(vgdev, vbuf,
+					      elemcnt, sgs,
+					      outcnt, incnt);
+	return ret;
+}
+
 static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
 					       struct virtio_gpu_vbuffer *vbuf,
 					       struct virtio_gpu_fence *fence)
@@ -422,6 +471,21 @@ static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
 	return ret;
 }
 
+/* For drm_panic */
+void virtio_gpu_panic_notify(struct virtio_gpu_device *vgdev)
+{
+	bool notify;
+
+	if (!atomic_read(&vgdev->pending_commands))
+		return;
+
+	atomic_set(&vgdev->pending_commands, 0);
+	notify = virtqueue_kick_prepare(vgdev->ctrlq.vq);
+
+	if (notify)
+		virtqueue_notify(vgdev->ctrlq.vq);
+}
+
 void virtio_gpu_notify(struct virtio_gpu_device *vgdev)
 {
 	bool notify;
@@ -567,6 +631,44 @@ void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev,
 	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
 }
 
+/* For drm_panic */
+static void virtio_gpu_panic_init_cmd(struct virtio_gpu_device *vgdev,
+				      struct virtio_gpu_vbuffer *vbuf,
+				      int cmd_size)
+{
+	vbuf->buf = (void *)vbuf + sizeof(*vbuf);
+	vbuf->size = cmd_size;
+	vbuf->resp_cb = NULL;
+	vbuf->resp_size = sizeof(struct virtio_gpu_ctrl_hdr);
+	vbuf->resp_buf = (void *)vbuf->buf + cmd_size;
+}
+
+/* For drm_panic */
+int virtio_gpu_panic_cmd_resource_flush(struct virtio_gpu_device *vgdev,
+					struct virtio_gpu_vbuffer *vbuf,
+					uint32_t resource_id,
+					uint32_t x, uint32_t y,
+					uint32_t width, uint32_t height)
+{
+	int ret;
+	struct virtio_gpu_resource_flush *cmd_p;
+
+	virtio_gpu_panic_init_cmd(vgdev, vbuf,
+				  sizeof(struct virtio_gpu_resource_flush));
+	cmd_p = (void *)vbuf->buf;
+	vbuf->objs = NULL;
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_FLUSH);
+	cmd_p->resource_id = cpu_to_le32(resource_id);
+	cmd_p->r.width = cpu_to_le32(width);
+	cmd_p->r.height = cpu_to_le32(height);
+	cmd_p->r.x = cpu_to_le32(x);
+	cmd_p->r.y = cpu_to_le32(y);
+
+	ret = virtio_gpu_panic_queue_ctrl_buffer(vgdev, vbuf);
+	return ret;
+}
+
 void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
 				   uint32_t resource_id,
 				   uint32_t x, uint32_t y,
@@ -591,6 +693,40 @@ void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
 	virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
 }
 
+/* For drm_panic */
+int virtio_gpu_panic_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
+					     uint64_t offset,
+					     uint32_t width, uint32_t height,
+					     uint32_t x, uint32_t y,
+					     struct virtio_gpu_object_array *objs,
+					     struct virtio_gpu_vbuffer *vbuf)
+{
+	int ret;
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
+	struct virtio_gpu_transfer_to_host_2d *cmd_p;
+	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
+
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
+		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
+					    bo->base.sgt, DMA_TO_DEVICE);
+
+	virtio_gpu_panic_init_cmd(vgdev, vbuf,
+				  sizeof(struct virtio_gpu_transfer_to_host_2d));
+	cmd_p = (void *)vbuf->buf;
+	vbuf->objs = objs;
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+	cmd_p->offset = cpu_to_le64(offset);
+	cmd_p->r.width = cpu_to_le32(width);
+	cmd_p->r.height = cpu_to_le32(height);
+	cmd_p->r.x = cpu_to_le32(x);
+	cmd_p->r.y = cpu_to_le32(y);
+
+	ret = virtio_gpu_panic_queue_ctrl_buffer(vgdev, vbuf);
+	return ret;
+}
+
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					uint64_t offset,
 					uint32_t width, uint32_t height,
-- 
2.47.0