From: James Clark <james.clark@linaro.org>
To: maz@kernel.org, kvmarm@lists.linux.dev, oliver.upton@linux.dev, suzuki.poulose@arm.com, coresight@lists.linaro.org
Cc: James Clark, Joey Gouly, Zenghui Yu, Catalin Marinas, Will Deacon, Mike Leach, Alexander Shishkin, Mark Rutland, Anshuman Khandual, Mark Brown, James Morse, Shiqi Liu, Fuad Tabba, "Rob Herring (Arm)", Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v10 08/10] KVM: arm64: coresight: Give TRBE enabled state to KVM
Date: Tue, 7 Jan 2025 11:32:45 +0000
Message-Id: <20250107113252.260631-9-james.clark@linaro.org>
In-Reply-To: <20250107113252.260631-1-james.clark@linaro.org>
References: <20250107113252.260631-1-james.clark@linaro.org>

Currently in nVHE, KVM has to check whether TRBE is enabled on every
guest switch, even if it was never used. Because it is a debug feature
that is more likely to be unused than used, give KVM the TRBE buffer
status so the hyp can take a much simpler and faster do-nothing path.

Protected mode now disables trace regardless of TRBE (because
trfcr_while_in_guest is always 0), which was not previously done.
However, it continues to flush whenever the buffer is enabled,
regardless of the filter status. This avoids the hypothetical case of a
host that had disabled the filter but not yet flushed, which could be
missed if the flush were only done while the filter was enabled.
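As an illustration (not part of this patch), here is a minimal sketch of
the do-nothing fast path the hyp gains: instead of reading TRBLIMITR_EL1
on every guest switch, it only tests per-CPU flags that the host keeps up
to date. The helper name is made up for the sketch; the flag names match
the hunks below.

static bool trace_switch_needed(unsigned long host_flags)
{
	/*
	 * Sketch only: skip all TRFCR_EL1 save/restore and TSB CSYNC work
	 * unless the host flagged that the TRBE buffer is enabled or that
	 * an EL1 trace filter is configured.
	 */
	return (host_flags & BIT(KVM_HOST_DATA_FLAG_TRBE_ENABLED)) ||
	       (host_flags & BIT(KVM_HOST_DATA_FLAG_EL1_TRACING_CONFIGURED));
}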
Signed-off-by: James Clark <james.clark@linaro.org>
---
 arch/arm64/include/asm/kvm_host.h            |  9 +++
 arch/arm64/kvm/debug.c                       | 32 +++++++++-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c           | 63 ++++++++++++--------
 drivers/hwtracing/coresight/coresight-trbe.c |  3 +
 4 files changed, 79 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fb252d540850..b244ec44bd3a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -612,6 +612,8 @@ struct cpu_sve_state {
 struct kvm_host_data {
 #define KVM_HOST_DATA_FLAG_HAS_SPE	0
 #define KVM_HOST_DATA_FLAG_HAS_TRBE	1
+#define KVM_HOST_DATA_FLAG_TRBE_ENABLED		2
+#define KVM_HOST_DATA_FLAG_EL1_TRACING_CONFIGURED	3
 	unsigned long flags;
 
 	struct kvm_cpu_context host_ctxt;
@@ -657,6 +659,9 @@ struct kvm_host_data {
 		u64 mdcr_el2;
 	} host_debug_state;
 
+	/* Guest trace filter value */
+	u64 trfcr_while_in_guest;
+
 	/* Number of programmable event counters (PMCR_EL0.N) for this CPU */
 	unsigned int nr_event_counters;
 };
@@ -1381,6 +1386,8 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
 void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
 void kvm_clr_pmu_events(u64 clr);
 bool kvm_set_pmuserenr(u64 val);
+void kvm_enable_trbe(void);
+void kvm_disable_trbe(void);
 #else
 static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
 static inline void kvm_clr_pmu_events(u64 clr) {}
@@ -1388,6 +1395,8 @@ static inline bool kvm_set_pmuserenr(u64 val)
 {
 	return false;
 }
+static inline void kvm_enable_trbe(void) {}
+static inline void kvm_disable_trbe(void) {}
 #endif
 
 void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 1ee2fd765b62..e1ac3d2a65be 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -309,7 +309,33 @@ void kvm_init_host_debug_data(void)
 	    !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P))
 		host_data_set_flag(HAS_SPE);
 
-	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceBuffer_SHIFT) &&
-	    !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_P))
-		host_data_set_flag(HAS_TRBE);
+	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceFilt_SHIFT)) {
+		/* Force disable trace in protected mode in case of no TRBE */
+		if (is_protected_kvm_enabled())
+			host_data_set_flag(EL1_TRACING_CONFIGURED);
+
+		if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceBuffer_SHIFT) &&
+		    !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_P))
+			host_data_set_flag(HAS_TRBE);
+	}
+}
+
+void kvm_enable_trbe(void)
+{
+	if (has_vhe() || is_protected_kvm_enabled() ||
+	    WARN_ON_ONCE(preemptible()))
+		return;
+
+	host_data_set_flag(TRBE_ENABLED);
+}
+EXPORT_SYMBOL_GPL(kvm_enable_trbe);
+
+void kvm_disable_trbe(void)
+{
+	if (has_vhe() || is_protected_kvm_enabled() ||
+	    WARN_ON_ONCE(preemptible()))
+		return;
+
+	host_data_clear_flag(TRBE_ENABLED);
 }
+EXPORT_SYMBOL_GPL(kvm_disable_trbe);
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 858bb38e273f..2f4a4f5036bb 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -51,32 +51,45 @@ static void __debug_restore_spe(u64 pmscr_el1)
 	write_sysreg_el1(pmscr_el1, SYS_PMSCR);
 }
 
-static void __debug_save_trace(u64 *trfcr_el1)
+static void __trace_do_switch(u64 *saved_trfcr, u64 new_trfcr)
 {
-	*trfcr_el1 = 0;
+	*saved_trfcr = read_sysreg_el1(SYS_TRFCR);
+	write_sysreg_el1(new_trfcr, SYS_TRFCR);
+}
 
-	/* Check if the TRBE is enabled */
-	if (!(read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_EL1_E))
-		return;
-	/*
-	 * Prohibit trace generation while we are in guest.
-	 * Since access to TRFCR_EL1 is trapped, the guest can't
-	 * modify the filtering set by the host.
-	 */
-	*trfcr_el1 = read_sysreg_el1(SYS_TRFCR);
-	write_sysreg_el1(0, SYS_TRFCR);
-	isb();
-	/* Drain the trace buffer to memory */
-	tsb_csync();
+static bool __trace_needs_drain(void)
+{
+	if (is_protected_kvm_enabled() && host_data_test_flag(HAS_TRBE))
+		return read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_EL1_E;
+
+	return host_data_test_flag(TRBE_ENABLED);
 }
 
-static void __debug_restore_trace(u64 trfcr_el1)
+static bool __trace_needs_switch(void)
 {
-	if (!trfcr_el1)
-		return;
+	return host_data_test_flag(TRBE_ENABLED) ||
+	       host_data_test_flag(EL1_TRACING_CONFIGURED);
+}
+
+static void __trace_switch_to_guest(void)
+{
+	/* Unsupported with TRBE so disable */
+	if (host_data_test_flag(TRBE_ENABLED))
+		*host_data_ptr(trfcr_while_in_guest) = 0;
+
+	__trace_do_switch(host_data_ptr(host_debug_state.trfcr_el1),
+			  *host_data_ptr(trfcr_while_in_guest));
 
-	/* Restore trace filter controls */
-	write_sysreg_el1(trfcr_el1, SYS_TRFCR);
+	if (__trace_needs_drain()) {
+		isb();
+		tsb_csync();
+	}
+}
+
+static void __trace_switch_to_host(void)
+{
+	__trace_do_switch(host_data_ptr(trfcr_while_in_guest),
+			  *host_data_ptr(host_debug_state.trfcr_el1));
 }
 
 void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
@@ -84,9 +97,9 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 	/* Disable and flush SPE data generation */
 	if (host_data_test_flag(HAS_SPE))
 		__debug_save_spe(host_data_ptr(host_debug_state.pmscr_el1));
-	/* Disable and flush Self-Hosted Trace generation */
-	if (host_data_test_flag(HAS_TRBE))
-		__debug_save_trace(host_data_ptr(host_debug_state.trfcr_el1));
+
+	if (__trace_needs_switch())
+		__trace_switch_to_guest();
 }
 
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
@@ -98,8 +111,8 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
 	if (host_data_test_flag(HAS_SPE))
 		__debug_restore_spe(*host_data_ptr(host_debug_state.pmscr_el1));
-	if (host_data_test_flag(HAS_TRBE))
-		__debug_restore_trace(*host_data_ptr(host_debug_state.trfcr_el1));
+	if (__trace_needs_switch())
+		__trace_switch_to_host();
 }
 
 void __debug_switch_to_host(struct kvm_vcpu *vcpu)
diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
index 03d3695ba5aa..a728802d2206 100644
--- a/drivers/hwtracing/coresight/coresight-trbe.c
+++ b/drivers/hwtracing/coresight/coresight-trbe.c
@@ -17,6 +17,7 @@
 
 #include
 #include
+#include
 #include
 
 #include "coresight-self-hosted-trace.h"
@@ -221,6 +222,7 @@ static inline void set_trbe_enabled(struct trbe_cpudata *cpudata, u64 trblimitr)
 	 */
 	trblimitr |= TRBLIMITR_EL1_E;
 	write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+	kvm_enable_trbe();
 
 	/* Synchronize the TRBE enable event */
 	isb();
@@ -239,6 +241,7 @@ static inline void set_trbe_disabled(struct trbe_cpudata *cpudata)
 	 */
 	trblimitr &= ~TRBLIMITR_EL1_E;
 	write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
+	kvm_disable_trbe();
 
 	if (trbe_needs_drain_after_disable(cpudata))
 		trbe_drain_buffer();
-- 
2.34.1
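As a usage illustration (again, not part of the patch), the driver-side
pairing condensed from the coresight-trbe.c hunks above: the hint is given
right after TRBLIMITR_EL1.E is written, and both kvm_enable_trbe() and
kvm_disable_trbe() are no-ops on VHE and in protected mode and warn if
called from a preemptible context. The function names below are made up
for the sketch.

static void trbe_enable_with_hint(u64 trblimitr)
{
	trblimitr |= TRBLIMITR_EL1_E;
	write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
	kvm_enable_trbe();	/* hyp must now switch/drain trace around guests */
	isb();			/* synchronize the TRBE enable event */
}

static void trbe_disable_with_hint(u64 trblimitr)
{
	trblimitr &= ~TRBLIMITR_EL1_E;
	write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1);
	kvm_disable_trbe();	/* buffer is off: hyp may take the do-nothing path */
}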