From: Ryosuke Yasuoka
To: maarten.lankhorst@linux.intel.com, mripard@kernel.org, tzimmermann@suse.de,
    airlied@gmail.com, simona@ffwll.ch, kraxel@redhat.com,
    gurchetansingh@chromium.org, olvaffe@gmail.com, akpm@linux-foundation.org,
    urezki@gmail.com, hch@infradead.org, dmitry.osipenko@collabora.com,
    jfalempe@redhat.com
Cc: Ryosuke Yasuoka, dri-devel@lists.freedesktop.org,
    linux-kernel@vger.kernel.org, virtualization@lists.linux.dev,
    linux-mm@kvack.org
Subject: [PATCH drm-next 1/2] vmalloc: Add atomic_vmap
Date: Thu, 6 Mar 2025 00:25:53 +0900
Message-ID: <20250305152555.318159-2-ryasuoka@redhat.com>
In-Reply-To: <20250305152555.318159-1-ryasuoka@redhat.com>
References: <20250305152555.318159-1-ryasuoka@redhat.com>

Some drivers could use vmap in their drm_panic handlers; however, vmap
is sleepable and takes locks.
Since drm_panic will vmap in the panic handler, atomic_vmap requests
pages with GFP_ATOMIC and maps KVA without taking locks or sleeping.

Signed-off-by: Ryosuke Yasuoka
---
 include/linux/vmalloc.h |   2 +
 mm/internal.h           |   5 ++
 mm/vmalloc.c            | 105 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 112 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e3..c7a2a9a1976d 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -190,6 +190,8 @@ void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
 
+extern void *atomic_vmap(struct page **pages, unsigned int count,
+			 unsigned long flags, pgprot_t prot);
 extern void *vmap(struct page **pages, unsigned int count,
 		  unsigned long flags, pgprot_t prot);
 void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
diff --git a/mm/internal.h b/mm/internal.h
index 109ef30fee11..134b332bf5b9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1278,6 +1278,11 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 void free_zone_device_folio(struct folio *folio);
 int migrate_device_coherent_folio(struct folio *folio);
 
+struct vm_struct *atomic_get_vm_area_node(unsigned long size, unsigned long align,
+				unsigned long shift, unsigned long flags,
+				unsigned long start, unsigned long end, int node,
+				gfp_t gfp_mask, const void *caller);
+
 struct vm_struct *__get_vm_area_node(unsigned long size, unsigned long align,
 				     unsigned long shift, unsigned long flags,
 				     unsigned long start,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a6e7acebe9ad..f5c93779c60a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1945,6 +1945,57 @@ static inline void setup_vmalloc_vm(struct vm_struct *vm,
 	va->vm = vm;
 }
 
+static struct vmap_area *atomic_alloc_vmap_area(unsigned long size,
+				unsigned long align,
+				unsigned long vstart, unsigned long vend,
+				int node, gfp_t gfp_mask,
+				unsigned long va_flags, struct vm_struct *vm)
+{
+	struct vmap_node *vn;
+	struct vmap_area *va;
+	unsigned long addr;
+
+	if (unlikely(!size || offset_in_page(size) || !is_power_of_2(align)))
+		return ERR_PTR(-EINVAL);
+
+	if (unlikely(!vmap_initialized))
+		return ERR_PTR(-EBUSY);
+
+	va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
+	if (unlikely(!va))
+		return ERR_PTR(-ENOMEM);
+
+	/*
+	 * Only scan the relevant parts containing pointers to other objects
+	 * to avoid false negatives.
+	 */
+	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
+
+	addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
+				 size, align, vstart, vend);
+
+	trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend);
+
+	va->va_start = addr;
+	va->va_end = addr + size;
+	va->vm = NULL;
+	va->flags = va_flags;
+
+	vm->addr = (void *)va->va_start;
+	vm->size = va_size(va);
+	va->vm = vm;
+
+	vn = addr_to_node(va->va_start);
+
+	insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
+
+	BUG_ON(!IS_ALIGNED(va->va_start, align));
+	BUG_ON(va->va_start < vstart);
+	BUG_ON(va->va_end > vend);
+
+	return va;
+}
+
 /*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend. If vm is passed in, the two will also be bound.
@@ -3106,6 +3157,33 @@ static void clear_vm_uninitialized_flag(struct vm_struct *vm)
 	vm->flags &= ~VM_UNINITIALIZED;
 }
 
+struct vm_struct *atomic_get_vm_area_node(unsigned long size, unsigned long align,
+				unsigned long shift, unsigned long flags,
+				unsigned long start, unsigned long end, int node,
+				gfp_t gfp_mask, const void *caller)
+{
+	struct vmap_area *va;
+	struct vm_struct *area;
+
+	size = ALIGN(size, 1ul << shift);
+	if (unlikely(!size))
+		return NULL;
+
+	area = kzalloc_node(sizeof(*area), gfp_mask, node);
+	if (unlikely(!area))
+		return NULL;
+
+	size += PAGE_SIZE;
+	area->flags = flags;
+	area->caller = caller;
+
+	va = atomic_alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area);
+	if (IS_ERR(va))
+		return NULL;
+
+	return area;
+}
+
 struct vm_struct *__get_vm_area_node(unsigned long size, unsigned long align,
 				     unsigned long shift, unsigned long flags,
 				     unsigned long start, unsigned long end, int node,
@@ -3418,6 +3496,33 @@ void vunmap(const void *addr)
 }
 EXPORT_SYMBOL(vunmap);
 
+void *atomic_vmap(struct page **pages, unsigned int count,
+		  unsigned long flags, pgprot_t prot)
+{
+	struct vm_struct *area;
+	unsigned long addr;
+	unsigned long size;		/* In bytes */
+
+	if (count > totalram_pages())
+		return NULL;
+
+	size = (unsigned long)count << PAGE_SHIFT;
+	area = atomic_get_vm_area_node(size, 1, PAGE_SHIFT, flags,
+				       VMALLOC_START, VMALLOC_END,
+				       NUMA_NO_NODE, GFP_ATOMIC,
+				       __builtin_return_address(0));
+	if (!area)
+		return NULL;
+
+	addr = (unsigned long)area->addr;
+	if (vmap_pages_range(addr, addr + size, pgprot_nx(prot),
+			     pages, PAGE_SHIFT) < 0) {
+		return NULL;
+	}
+
+	return area->addr;
+}
+
 /**
  * vmap - map an array of pages into virtually contiguous space
  * @pages: array of page pointers
-- 
2.48.1

From: Ryosuke Yasuoka
To: maarten.lankhorst@linux.intel.com, mripard@kernel.org, tzimmermann@suse.de,
    airlied@gmail.com, simona@ffwll.ch, kraxel@redhat.com,
    gurchetansingh@chromium.org, olvaffe@gmail.com, akpm@linux-foundation.org,
    urezki@gmail.com, hch@infradead.org, dmitry.osipenko@collabora.com,
    jfalempe@redhat.com
Cc: Ryosuke Yasuoka, dri-devel@lists.freedesktop.org,
    linux-kernel@vger.kernel.org, virtualization@lists.linux.dev,
    linux-mm@kvack.org
Subject: [PATCH drm-next 2/2] drm/virtio: Use atomic_vmap to make drm_panic work in GUI
Date: Thu, 6 Mar 2025 00:25:54 +0900
Message-ID: <20250305152555.318159-3-ryasuoka@redhat.com>
In-Reply-To: <20250305152555.318159-1-ryasuoka@redhat.com>
References: <20250305152555.318159-1-ryasuoka@redhat.com>

virtio drm_panic used to support only already-vmapped shmem BOs because
there was no atomic vmap facility. Now that atomic_vmap is available,
drm_panic tries to vmap the BO if it is not yet mapped.
Signed-off-by: Ryosuke Yasuoka
---
 drivers/gpu/drm/drm_gem.c              | 51 ++++++++++++++++++++++++++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 51 ++++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 14 +++++--
 include/drm/drm_gem.h                  |  1 +
 include/drm/drm_gem_shmem_helper.h     |  2 +
 5 files changed, 116 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index ee811764c3df..eebfaef3a52e 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -535,6 +535,57 @@ static void drm_gem_check_release_batch(struct folio_batch *fbatch)
 	cond_resched();
 }
 
+struct page **drm_gem_atomic_get_pages(struct drm_gem_object *obj)
+{
+	struct address_space *mapping;
+	struct page **pages;
+	struct folio *folio;
+	long i, j, npages;
+
+	if (WARN_ON(!obj->filp))
+		return ERR_PTR(-EINVAL);
+
+	/* This is the shared memory object that backs the GEM resource */
+	mapping = obj->filp->f_mapping;
+
+	/* We already BUG_ON() for non-page-aligned sizes in
+	 * drm_gem_object_init(), so we should never hit this unless
+	 * driver author is doing something really wrong:
+	 */
+	WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0);
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	pages = kmalloc_array(npages, sizeof(struct page *), GFP_ATOMIC);
+	if (pages == NULL)
+		return ERR_PTR(-ENOMEM);
+
+	mapping_set_unevictable(mapping);
+
+	i = 0;
+	while (i < npages) {
+		long nr;
+
+		folio = shmem_read_folio_gfp(mapping, i, GFP_ATOMIC);
+		if (IS_ERR(folio))
+			return ERR_PTR(-ENOMEM);
+		nr = min(npages - i, folio_nr_pages(folio));
+		for (j = 0; j < nr; j++, i++)
+			pages[i] = folio_file_page(folio, i);
+
+		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the
+		 * correct region during swapin. Note that this requires
+		 * __GFP_DMA32 to be set in mapping_gfp_mask(inode->i_mapping)
+		 * so shmem can relocate pages during swapin if required.
+		 */
+		BUG_ON(mapping_gfp_constraint(mapping, __GFP_DMA32) &&
+		       (folio_pfn(folio) >= 0x00100000UL));
+	}
+
+	return pages;
+}
+
 /**
  * drm_gem_get_pages - helper to allocate backing pages for a GEM object
  * from shmem
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5ab351409312..789dfd726a36 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -186,6 +186,34 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
+static int drm_gem_shmem_atomic_get_pages(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct page **pages;
+
+	pages = drm_gem_atomic_get_pages(obj);
+	if (IS_ERR(pages)) {
+		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
+			    PTR_ERR(pages));
+		shmem->pages_use_count = 0;
+		return PTR_ERR(pages);
+	}
+
+	/*
+	 * TODO: Allocating WC pages which are correctly flushed is only
+	 * supported on x86. Ideal solution would be a GFP_WC flag, which also
+	 * ttm_pool.c could use.
+	 */
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
+#endif
+
+	shmem->pages = pages;
+
+	return 0;
+}
+
 static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
@@ -317,6 +345,29 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
+int drm_gem_shmem_atomic_vmap(struct drm_gem_shmem_object *shmem,
+			      struct iosys_map *map)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	int ret = 0;
+
+	pgprot_t prot = PAGE_KERNEL;
+
+	ret = drm_gem_shmem_atomic_get_pages(shmem);
+	if (ret)
+		return -ENOMEM;
+
+	if (shmem->map_wc)
+		prot = pgprot_writecombine(prot);
+	shmem->vaddr = atomic_vmap(shmem->pages, obj->size >> PAGE_SHIFT,
+				   VM_MAP, prot);
+	if (!shmem->vaddr)
+		return -ENOMEM;
+	iosys_map_set_vaddr(map, shmem->vaddr);
+
+	return 0;
+}
+
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a6f5a78f436a..2a977c5cf42a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -500,11 +500,19 @@ static int virtio_drm_get_scanout_buffer(struct drm_plane *plane,
 
 	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
 
-	/* Only support mapped shmem bo */
-	if (virtio_gpu_is_vram(bo) || bo->base.base.import_attach || !bo->base.vaddr)
+	if (virtio_gpu_is_vram(bo) || bo->base.base.import_attach)
 		return -ENODEV;
 
-	iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);
+	/* try to vmap it if possible */
+	if (!bo->base.vaddr) {
+		int ret;
+
+		ret = drm_gem_shmem_atomic_vmap(&bo->base, &sb->map[0]);
+		if (ret)
+			return ret;
+	} else {
+		iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);
+	}
 
 	sb->format = plane->state->fb->format;
 	sb->height = plane->state->fb->height;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index fdae947682cd..cfed66bc12ef 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -529,6 +529,7 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 
+struct page **drm_gem_atomic_get_pages(struct drm_gem_object *obj);
 struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		       bool dirty, bool accessed);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d22e3fb53631..86a357945f42 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -105,6 +105,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_atomic_vmap(struct drm_gem_shmem_object *shmem,
+			      struct iosys_map *map);
 int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 		       struct iosys_map *map);
 void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
-- 
2.48.1
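[Editorial note, not part of the series] Taken together, the two patches let a panic handler build a contiguous kernel mapping of a shmem BO without sleeping or taking locks. A minimal sketch of the intended calling pattern in a driver's get_scanout_buffer callback, mirroring the virtgpu_plane.c change above; `my_get_scanout_buffer` and `to_my_shmem_bo` are hypothetical names, and this is a non-compilable kernel fragment, not a tested implementation:

```c
/* Hypothetical panic-path caller; a sketch of the API usage only. */
static int my_get_scanout_buffer(struct drm_plane *plane,
				 struct drm_scanout_buffer *sb)
{
	/* to_my_shmem_bo() is an assumed driver-specific accessor. */
	struct drm_gem_shmem_object *shmem = to_my_shmem_bo(plane);

	if (!shmem->vaddr) {
		/*
		 * Not mapped yet: map it now. Intended to be safe in the
		 * panic handler because drm_gem_shmem_atomic_vmap()
		 * ultimately calls atomic_vmap() (patch 1), which uses
		 * GFP_ATOMIC and avoids locks and sleeping.
		 */
		int ret = drm_gem_shmem_atomic_vmap(shmem, &sb->map[0]);

		if (ret)
			return ret;
	} else {
		/* Already vmapped: reuse the existing kernel address. */
		iosys_map_set_vaddr(&sb->map[0], shmem->vaddr);
	}

	return 0;
}
```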