From nobody Sun Feb 8 08:43:06 2026
From: Abhishek Rajput
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, abhiraj21put@gmail.com
Subject: [PATCH] drm/radeon: Convert legacy DRM logging in evergreen.c to drm_* helpers
Date: Tue, 16 Dec 2025 16:02:38 +0530
Message-ID: <20251216103238.625468-1-abhiraj21put@gmail.com>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Replace DRM_DEBUG(), DRM_DEBUG_KMS(), DRM_ERROR(), and DRM_INFO() calls
in evergreen.c with the corresponding drm_dbg(), drm_dbg_kms(),
drm_err(), and drm_info() helpers.

The drm_*() logging helpers take a struct drm_device * argument,
allowing the DRM core to prefix log messages with the correct device
name and instance. This is required to correctly distinguish log
messages on systems with multiple GPUs.

This change aligns radeon with the DRM TODO item: "Convert logging to
drm_* functions with drm_device parameter".
Signed-off-by: Abhishek Rajput
---

diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index bc4ab71613a5..3cbc6eedbf66 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1630,6 +1630,7 @@ void evergreen_pm_misc(struct radeon_device *rdev)
 	int req_cm_idx = rdev->pm.requested_clock_mode_index;
 	struct radeon_power_state *ps = &rdev->pm.power_state[req_ps_idx];
 	struct radeon_voltage *voltage = &ps->clock_info[req_cm_idx].voltage;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	if (voltage->type == VOLTAGE_SW) {
 		/* 0xff0x are flags rather then an actual voltage */
@@ -1638,7 +1639,7 @@ void evergreen_pm_misc(struct radeon_device *rdev)
 		if (voltage->voltage && (voltage->voltage != rdev->pm.current_vddc)) {
 			radeon_atom_set_voltage(rdev, voltage->voltage, SET_VOLTAGE_TYPE_ASIC_VDDC);
 			rdev->pm.current_vddc = voltage->voltage;
-			DRM_DEBUG("Setting: vddc: %d\n", voltage->voltage);
+			drm_dbg(ddev, "Setting: vddc: %d\n", voltage->voltage);
 		}
 
 		/* starting with BTC, there is one state that is used for both
@@ -1659,7 +1660,7 @@ void evergreen_pm_misc(struct radeon_device *rdev)
 		if (voltage->vddci && (voltage->vddci != rdev->pm.current_vddci)) {
 			radeon_atom_set_voltage(rdev, voltage->vddci, SET_VOLTAGE_TYPE_ASIC_VDDCI);
 			rdev->pm.current_vddci = voltage->vddci;
-			DRM_DEBUG("Setting: vddci: %d\n", voltage->vddci);
+			drm_dbg(ddev, "Setting: vddci: %d\n", voltage->vddci);
 		}
 	}
 }
@@ -2168,6 +2169,7 @@ static void evergreen_program_watermarks(struct radeon_device *rdev,
 	u32 pipe_offset = radeon_crtc->crtc_id * 16;
 	u32 tmp, arb_control3;
 	fixed20_12 a, b, c;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	if (radeon_crtc->base.enabled && num_heads && mode) {
 		active_time = (u32) div_u64((u64)mode->crtc_hdisplay * 1000000,
@@ -2244,14 +2246,14 @@ static void evergreen_program_watermarks(struct radeon_device *rdev,
 		    !evergreen_average_bandwidth_vs_available_bandwidth(&wm_high) ||
 		    !evergreen_check_latency_hiding(&wm_high) ||
 		    (rdev->disp_priority == 2)) {
-			DRM_DEBUG_KMS("force priority a to high\n");
+			drm_dbg_kms(ddev, "force priority a to high\n");
 			priority_a_cnt |= PRIORITY_ALWAYS_ON;
 		}
 		if (!evergreen_average_bandwidth_vs_dram_bandwidth_for_display(&wm_low) ||
 		    !evergreen_average_bandwidth_vs_available_bandwidth(&wm_low) ||
 		    !evergreen_check_latency_hiding(&wm_low) ||
 		    (rdev->disp_priority == 2)) {
-			DRM_DEBUG_KMS("force priority b to high\n");
+			drm_dbg_kms(ddev, "force priority b to high\n");
 			priority_b_cnt |= PRIORITY_ALWAYS_ON;
 		}
 
@@ -2401,6 +2403,7 @@ static int evergreen_pcie_gart_enable(struct radeon_device *rdev)
 {
 	u32 tmp;
 	int r;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	if (rdev->gart.robj == NULL) {
 		dev_err(rdev->dev, "No VRAM object for PCIE GART.\n");
@@ -2448,7 +2451,7 @@ static int evergreen_pcie_gart_enable(struct radeon_device *rdev)
 	WREG32(VM_CONTEXT1_CNTL, 0);
 
 	evergreen_pcie_gart_tlb_flush(rdev);
-	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
+	drm_info(ddev, "PCIE GART of %uM enabled (table at 0x%016llX).\n",
 		 (unsigned)(rdev->mc.gtt_size >> 20),
 		 (unsigned long long)rdev->gart.table_addr);
 	rdev->gart.ready = true;
@@ -2626,16 +2629,17 @@ static void evergreen_blank_dp_output(struct radeon_device *rdev,
 	unsigned stream_ctrl;
 	unsigned fifo_ctrl;
 	unsigned counter = 0;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	if (dig_fe >= ARRAY_SIZE(evergreen_dp_offsets)) {
-		DRM_ERROR("invalid dig_fe %d\n", dig_fe);
+		drm_err(ddev, "invalid dig_fe %d\n", dig_fe);
 		return;
 	}
 
 	stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +
 			     evergreen_dp_offsets[dig_fe]);
 	if (!(stream_ctrl & EVERGREEN_DP_VID_STREAM_CNTL_ENABLE)) {
-		DRM_ERROR("dig %d , should be enable\n", dig_fe);
+		drm_err(ddev, "dig %d , should be enable\n", dig_fe);
 		return;
 	}
 
@@ -2652,7 +2656,7 @@ static void evergreen_blank_dp_output(struct radeon_device *rdev,
 			     evergreen_dp_offsets[dig_fe]);
 	}
 	if (counter >= 32)
-		DRM_ERROR("counter exceeds %d\n", counter);
+		drm_err(ddev, "counter exceeds %d\n", counter);
 
 	fifo_ctrl = RREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe]);
 	fifo_ctrl |= EVERGREEN_DP_STEER_FIFO_RESET;
@@ -2998,10 +3002,11 @@ static int evergreen_cp_start(struct radeon_device *rdev)
 	struct radeon_ring *ring = &rdev->ring[RADEON_RING_TYPE_GFX_INDEX];
 	int r, i;
 	uint32_t cp_me;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	r = radeon_ring_lock(rdev, ring, 7);
 	if (r) {
-		DRM_ERROR("radeon: cp failed to lock ring (%d).\n", r);
+		drm_err(ddev, "radeon: cp failed to lock ring (%d).\n", r);
 		return r;
 	}
 	radeon_ring_write(ring, PACKET3(PACKET3_ME_INITIALIZE, 5));
@@ -3018,7 +3023,7 @@ static int evergreen_cp_start(struct radeon_device *rdev)
 
 	r = radeon_ring_lock(rdev, ring, evergreen_default_size + 19);
 	if (r) {
-		DRM_ERROR("radeon: cp failed to lock ring (%d).\n", r);
+		drm_err(ddev, "radeon: cp failed to lock ring (%d).\n", r);
 		return r;
 	}
 
@@ -3826,6 +3831,7 @@ u32 evergreen_gpu_check_soft_reset(struct radeon_device *rdev)
 {
 	u32 reset_mask = 0;
 	u32 tmp;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	/* GRBM_STATUS */
 	tmp = RREG32(GRBM_STATUS);
@@ -3884,7 +3890,7 @@ u32 evergreen_gpu_check_soft_reset(struct radeon_device *rdev)
 
 	/* Skip MC reset as it's mostly likely not hung, just busy */
 	if (reset_mask & RADEON_RESET_MC) {
-		DRM_DEBUG("MC busy: 0x%08X, clearing.\n", reset_mask);
+		drm_dbg(ddev, "MC busy: 0x%08X, clearing.\n", reset_mask);
 		reset_mask &= ~RADEON_RESET_MC;
 	}
 
@@ -4495,6 +4501,7 @@ int evergreen_irq_set(struct radeon_device *rdev)
 	u32 grbm_int_cntl = 0;
 	u32 dma_cntl, dma_cntl1 = 0;
 	u32 thermal_int = 0;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	if (!rdev->irq.installed) {
 		WARN(1, "Can't enable IRQ/MSI because no handler is installed\n");
@@ -4520,40 +4527,40 @@ int evergreen_irq_set(struct radeon_device *rdev)
 	if (rdev->family >= CHIP_CAYMAN) {
 		/* enable CP interrupts on all rings */
 		if (atomic_read(&rdev->irq.ring_int[RADEON_RING_TYPE_GFX_INDEX])) {
-			DRM_DEBUG("evergreen_irq_set: sw int gfx\n");
+			drm_dbg(ddev, "%s : sw int gfx\n", __func__);
 			cp_int_cntl |= TIME_STAMP_INT_ENABLE;
 		}
 		if (atomic_read(&rdev->irq.ring_int[CAYMAN_RING_TYPE_CP1_INDEX])) {
-			DRM_DEBUG("evergreen_irq_set: sw int cp1\n");
+			drm_dbg(ddev, "%s : sw int cp1\n", __func__);
 			cp_int_cntl1 |= TIME_STAMP_INT_ENABLE;
 		}
 		if (atomic_read(&rdev->irq.ring_int[CAYMAN_RING_TYPE_CP2_INDEX])) {
-			DRM_DEBUG("evergreen_irq_set: sw int cp2\n");
+			drm_dbg(ddev, "%s : sw int cp2\n", __func__);
 			cp_int_cntl2 |= TIME_STAMP_INT_ENABLE;
 		}
 	} else {
 		if (atomic_read(&rdev->irq.ring_int[RADEON_RING_TYPE_GFX_INDEX])) {
-			DRM_DEBUG("evergreen_irq_set: sw int gfx\n");
+			drm_dbg(ddev, "%s : sw int gfx\n", __func__);
 			cp_int_cntl |= RB_INT_ENABLE;
 			cp_int_cntl |= TIME_STAMP_INT_ENABLE;
 		}
 	}
 
 	if (atomic_read(&rdev->irq.ring_int[R600_RING_TYPE_DMA_INDEX])) {
-		DRM_DEBUG("r600_irq_set: sw int dma\n");
+		drm_dbg(ddev, "r600_irq_set: sw int dma\n");
 		dma_cntl |= TRAP_ENABLE;
 	}
 
 	if (rdev->family >= CHIP_CAYMAN) {
 		dma_cntl1 = RREG32(CAYMAN_DMA1_CNTL) & ~TRAP_ENABLE;
 		if (atomic_read(&rdev->irq.ring_int[CAYMAN_RING_TYPE_DMA1_INDEX])) {
-			DRM_DEBUG("r600_irq_set: sw int dma1\n");
+			drm_dbg(ddev, "r600_irq_set: sw int dma1\n");
 			dma_cntl1 |= TRAP_ENABLE;
 		}
 	}
 
 	if (rdev->irq.dpm_thermal) {
-		DRM_DEBUG("dpm thermal\n");
+		drm_dbg(ddev, "dpm thermal\n");
 		thermal_int |= THERM_INT_MASK_HIGH | THERM_INT_MASK_LOW;
 	}
 
@@ -4713,6 +4720,7 @@ int evergreen_irq_process(struct radeon_device *rdev)
 	bool queue_thermal = false;
 	u32 status, addr;
 	const char *event_name;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	if (!rdev->ih.enabled || rdev->shutdown)
 		return IRQ_NONE;
@@ -4725,7 +4733,7 @@ int evergreen_irq_process(struct radeon_device *rdev)
 		return IRQ_NONE;
 
 	rptr = rdev->ih.rptr;
-	DRM_DEBUG("evergreen_irq_process start: rptr %d, wptr %d\n", rptr, wptr);
+	drm_dbg(ddev, "%s start: rptr %d, wptr %d\n", __func__, rptr, wptr);
 
 	/* Order reading of wptr vs. reading of IH ring data */
 	rmb();
@@ -4766,18 +4774,18 @@ int evergreen_irq_process(struct radeon_device *rdev)
 				mask = LB_D1_VLINE_INTERRUPT;
 				event_name = "vline";
 			} else {
-				DRM_DEBUG("Unhandled interrupt: %d %d\n",
+				drm_dbg(ddev, "Unhandled interrupt: %d %d\n",
 					  src_id, src_data);
 				break;
 			}
 
 			if (!(disp_int[crtc_idx] & mask)) {
-				DRM_DEBUG("IH: D%d %s - IH event w/o asserted irq bit?\n",
+				drm_dbg(ddev, "IH: D%d %s - IH event w/o asserted irq bit?\n",
 					  crtc_idx + 1, event_name);
 			}
 
 			disp_int[crtc_idx] &= ~mask;
-			DRM_DEBUG("IH: D%d %s\n", crtc_idx + 1, event_name);
+			drm_dbg(ddev, "IH: D%d %s\n", crtc_idx + 1, event_name);
 
 			break;
 		case 8: /* D1 page flip */
@@ -4786,7 +4794,7 @@ int evergreen_irq_process(struct radeon_device *rdev)
 		case 14: /* D4 page flip */
 		case 16: /* D5 page flip */
 		case 18: /* D6 page flip */
-			DRM_DEBUG("IH: D%d flip\n", ((src_id - 8) >> 1) + 1);
+			drm_dbg(ddev, "IH: D%d flip\n", ((src_id - 8) >> 1) + 1);
 			if (radeon_use_pflipirq > 0)
 				radeon_crtc_handle_flip(rdev, (src_id - 8) >> 1);
 			break;
@@ -4804,39 +4812,39 @@ int evergreen_irq_process(struct radeon_device *rdev)
 				event_name = "HPD_RX";
 
 			} else {
-				DRM_DEBUG("Unhandled interrupt: %d %d\n",
+				drm_dbg(ddev, "Unhandled interrupt: %d %d\n",
 					  src_id, src_data);
 				break;
 			}
 
 			if (!(disp_int[hpd_idx] & mask))
-				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+				drm_dbg(ddev, "IH: IH event w/o asserted irq bit?\n");
 
 			disp_int[hpd_idx] &= ~mask;
-			DRM_DEBUG("IH: %s%d\n", event_name, hpd_idx + 1);
+			drm_dbg(ddev, "IH: %s%d\n", event_name, hpd_idx + 1);
 
 			break;
 		case 44: /* hdmi */
 			afmt_idx = src_data;
 			if (afmt_idx > 5) {
-				DRM_ERROR("Unhandled interrupt: %d %d\n",
+				drm_err(ddev, "Unhandled interrupt: %d %d\n",
 					  src_id, src_data);
 				break;
 			}
 
 			if (!(afmt_status[afmt_idx] & AFMT_AZ_FORMAT_WTRIG))
-				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+				drm_dbg(ddev, "IH: IH event w/o asserted irq bit?\n");
 
 			afmt_status[afmt_idx] &= ~AFMT_AZ_FORMAT_WTRIG;
 			queue_hdmi = true;
-			DRM_DEBUG("IH: HDMI%d\n", afmt_idx + 1);
+			drm_dbg(ddev, "IH: HDMI%d\n", afmt_idx + 1);
 			break;
 		case 96:
-			DRM_ERROR("SRBM_READ_ERROR: 0x%x\n", RREG32(SRBM_READ_ERROR));
+			drm_err(ddev, "SRBM_READ_ERROR: 0x%x\n", RREG32(SRBM_READ_ERROR));
 			WREG32(SRBM_INT_ACK, 0x1);
 			break;
 		case 124: /* UVD */
-			DRM_DEBUG("IH: UVD int: 0x%08x\n", src_data);
+			drm_dbg(ddev, "IH: UVD int: 0x%08x\n", src_data);
 			radeon_fence_process(rdev, R600_RING_TYPE_UVD_INDEX);
 			break;
 		case 146:
@@ -4857,11 +4865,11 @@ int evergreen_irq_process(struct radeon_device *rdev)
 		case 176: /* CP_INT in ring buffer */
 		case 177: /* CP_INT in IB1 */
 		case 178: /* CP_INT in IB2 */
-			DRM_DEBUG("IH: CP int: 0x%08x\n", src_data);
+			drm_dbg(ddev, "IH: CP int: 0x%08x\n", src_data);
 			radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX);
 			break;
 		case 181: /* CP EOP event */
-			DRM_DEBUG("IH: CP EOP\n");
+			drm_dbg(ddev, "IH: CP EOP\n");
 			if (rdev->family >= CHIP_CAYMAN) {
 				switch (src_data) {
 				case 0:
@@ -4878,30 +4886,30 @@ int evergreen_irq_process(struct radeon_device *rdev)
 				radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX);
 			break;
 		case 224: /* DMA trap event */
-			DRM_DEBUG("IH: DMA trap\n");
+			drm_dbg(ddev, "IH: DMA trap\n");
 			radeon_fence_process(rdev, R600_RING_TYPE_DMA_INDEX);
 			break;
 		case 230: /* thermal low to high */
-			DRM_DEBUG("IH: thermal low to high\n");
+			drm_dbg(ddev, "IH: thermal low to high\n");
 			rdev->pm.dpm.thermal.high_to_low = false;
 			queue_thermal = true;
 			break;
 		case 231: /* thermal high to low */
-			DRM_DEBUG("IH: thermal high to low\n");
+			drm_dbg(ddev, "IH: thermal high to low\n");
 			rdev->pm.dpm.thermal.high_to_low = true;
 			queue_thermal = true;
 			break;
 		case 233: /* GUI IDLE */
-			DRM_DEBUG("IH: GUI idle\n");
+			drm_dbg(ddev, "IH: GUI idle\n");
 			break;
 		case 244: /* DMA trap event */
 			if (rdev->family >= CHIP_CAYMAN) {
-				DRM_DEBUG("IH: DMA1 trap\n");
+				drm_dbg(ddev, "IH: DMA1 trap\n");
 				radeon_fence_process(rdev, CAYMAN_RING_TYPE_DMA1_INDEX);
 			}
 			break;
 		default:
-			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+			drm_dbg(ddev, "Unhandled interrupt: %d %d\n", src_id, src_data);
 			break;
 		}
 
@@ -5000,6 +5008,7 @@ static int evergreen_startup(struct radeon_device *rdev)
 {
 	struct radeon_ring *ring;
 	int r;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	/* enable pcie gen2 link */
 	evergreen_pcie_gen2_enable(rdev);
@@ -5016,7 +5025,7 @@ static int evergreen_startup(struct radeon_device *rdev)
 	if (ASIC_IS_DCE5(rdev) && !rdev->pm.dpm_enabled) {
 		r = ni_mc_load_microcode(rdev);
 		if (r) {
-			DRM_ERROR("Failed to load MC firmware!\n");
+			drm_err(ddev, "Failed to load MC firmware!\n");
 			return r;
 		}
 	}
@@ -5038,7 +5047,7 @@ static int evergreen_startup(struct radeon_device *rdev)
 		rdev->rlc.cs_data = evergreen_cs_data;
 		r = sumo_rlc_init(rdev);
 		if (r) {
-			DRM_ERROR("Failed to init rlc BOs!\n");
+			drm_err(ddev, "Failed to init rlc BOs!\n");
 			return r;
 		}
 	}
@@ -5071,7 +5080,7 @@ static int evergreen_startup(struct radeon_device *rdev)
 
 	r = r600_irq_init(rdev);
 	if (r) {
-		DRM_ERROR("radeon: IH init failed (%d).\n", r);
+		drm_err(ddev, "radeon: IH init failed (%d).\n", r);
 		radeon_irq_kms_fini(rdev);
 		return r;
 	}
@@ -5109,7 +5118,7 @@ static int evergreen_startup(struct radeon_device *rdev)
 
 	r = radeon_audio_init(rdev);
 	if (r) {
-		DRM_ERROR("radeon: audio init failed\n");
+		drm_err(ddev, "radeon: audio init failed\n");
 		return r;
 	}
 
@@ -5119,6 +5128,7 @@
 int evergreen_resume(struct radeon_device *rdev)
 {
 	int r;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	/* reset the asic, the gfx blocks are often in a bad state
 	 * after the driver is unloaded or after a resume
@@ -5141,7 +5151,7 @@ int evergreen_resume(struct radeon_device *rdev)
 	rdev->accel_working = true;
 	r = evergreen_startup(rdev);
 	if (r) {
-		DRM_ERROR("evergreen startup failed on resume\n");
+		drm_err(ddev, "evergreen startup failed on resume\n");
 		rdev->accel_working = false;
 		return r;
 	}
@@ -5176,6 +5186,7 @@ int evergreen_suspend(struct radeon_device *rdev)
 int evergreen_init(struct radeon_device *rdev)
 {
 	int r;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	/* Read BIOS */
 	if (!radeon_get_bios(rdev)) {
@@ -5201,7 +5212,7 @@ int evergreen_init(struct radeon_device *rdev)
 			dev_err(rdev->dev, "Card not posted and no BIOS - ignoring\n");
 			return -EINVAL;
 		}
-		DRM_INFO("GPU not posted. posting now...\n");
+		drm_info(ddev, "GPU not posted. posting now...\n");
 		atom_asic_init(rdev->mode_info.atom_context);
 	}
 	/* init golden registers */
@@ -5233,7 +5244,7 @@ int evergreen_init(struct radeon_device *rdev)
 	if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw || !rdev->mc_fw) {
 		r = ni_init_microcode(rdev);
 		if (r) {
-			DRM_ERROR("Failed to load firmware!\n");
+			drm_err(ddev, "Failed to load firmware!\n");
 			return r;
 		}
 	}
@@ -5241,7 +5252,7 @@ int evergreen_init(struct radeon_device *rdev)
 	if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) {
 		r = r600_init_microcode(rdev);
 		if (r) {
-			DRM_ERROR("Failed to load firmware!\n");
+			drm_err(ddev, "Failed to load firmware!\n");
 			return r;
 		}
 	}
@@ -5287,7 +5298,7 @@ int evergreen_init(struct radeon_device *rdev)
 	 */
 	if (ASIC_IS_DCE5(rdev)) {
 		if (!rdev->mc_fw && !(rdev->flags & RADEON_IS_IGP)) {
-			DRM_ERROR("radeon: MC ucode required for NI+.\n");
+			drm_err(ddev, "radeon: MC ucode required for NI+.\n");
 			return -EINVAL;
 		}
 	}
@@ -5323,6 +5334,7 @@ void evergreen_fini(struct radeon_device *rdev)
 void evergreen_pcie_gen2_enable(struct radeon_device *rdev)
 {
 	u32 link_width_cntl, speed_cntl;
+	struct drm_device *ddev = rdev_to_drm(rdev);
 
 	if (radeon_pcie_gen2 == 0)
 		return;
@@ -5343,11 +5355,11 @@ void evergreen_pcie_gen2_enable(struct radeon_device *rdev)
 
 	speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL);
 	if (speed_cntl & LC_CURRENT_DATA_RATE) {
-		DRM_INFO("PCIE gen 2 link speeds already enabled\n");
+		drm_info(ddev, "PCIE gen 2 link speeds already enabled\n");
 		return;
 	}
 
-	DRM_INFO("enabling PCIE gen 2 link speeds, disable with radeon.pcie_gen2=0\n");
+	drm_info(ddev, "enabling PCIE gen 2 link speeds, disable with radeon.pcie_gen2=0\n");
 
 	if ((speed_cntl & LC_OTHER_SIDE_EVER_SENT_GEN2) ||
 	    (speed_cntl & LC_OTHER_SIDE_SUPPORTS_GEN2)) {
-- 
2.43.0