From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:41 +0000
In-Reply-To:
 <20250324173121.1275209-1-mizhang@google.com>
Mime-Version: 1.0
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-2-mizhang@google.com>
Subject: [PATCH v4 01/38] perf: Support get/put mediated PMU interfaces
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers,
 Adrian Hunter, Kan Liang, "H. Peter Anvin",
 linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang,
 Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das,
 Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Kan Liang

Currently, the guest and host share the PMU resources when a guest is
running. KVM has to create an extra virtual event to simulate the
guest's event, which brings several issues, e.g., high overhead and
poor accuracy.

A new mediated PMU method is proposed to address these issues. It
requires that the PMU resources can be fully occupied by the guest
while it is running. Two new interfaces are implemented to fulfill the
requirement. The hypervisor should invoke them when creating a guest
that wants the mediated PMU capability.

The PMU resources are only temporarily occupied as a whole while a
guest is running. When the guest is out, the PMU resources are still
shared among different users.

The exclude_guest event modifier is used to guarantee the exclusive
occupation of the PMU resources. When creating a guest, the hypervisor
should check whether there are !exclude_guest events in the system.
If there are, the creation should fail, because some PMU resources are
already occupied by other users. If there are none, the PMU resources
can be safely accessed by the guest directly. Perf guarantees that no
new !exclude_guest events are created while a guest is running.

Only the mediated PMU is affected; other PMUs, e.g., uncore and
software PMUs, keep their current behavior. The guest enter/exit
interfaces should only impact the supported PMUs. Add a new
PERF_PMU_CAP_MEDIATED_VPMU flag to indicate the PMUs that support the
feature.

Add nr_include_guest_events to track the !exclude_guest events of PMUs
with PERF_PMU_CAP_MEDIATED_VPMU.

Suggested-by: Sean Christopherson
Signed-off-by: Kan Liang
Signed-off-by: Mingwei Zhang
---
 include/linux/perf_event.h | 11 +++++++
 kernel/events/core.c       | 66 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 77 insertions(+)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 8333f132f4a9..54018dd0b2a4 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -301,6 +301,8 @@ struct perf_event_pmu_context;
 #define PERF_PMU_CAP_AUX_OUTPUT		0x0080
 #define PERF_PMU_CAP_EXTENDED_HW_TYPE	0x0100
 #define PERF_PMU_CAP_AUX_PAUSE		0x0200
+/* Support to pass through the whole PMU resource to the guest */
+#define PERF_PMU_CAP_MEDIATED_VPMU	0x0400
 
 /**
  * pmu::scope
@@ -1811,6 +1813,8 @@ extern void perf_event_task_tick(void);
 extern int perf_event_account_interrupt(struct perf_event *event);
 extern int perf_event_period(struct perf_event *event, u64 value);
 extern u64 perf_event_pause(struct perf_event *event, bool reset);
+int perf_get_mediated_pmu(void);
+void perf_put_mediated_pmu(void);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
 perf_aux_output_begin(struct perf_output_handle *handle,
@@ -1901,6 +1905,13 @@ static inline int perf_exclude_event(struct perf_event *event, struct pt_regs *r
 {
 	return 0;
 }
+
+static inline int perf_get_mediated_pmu(void)
+{
+	return 0;
+}
+
+static inline void perf_put_mediated_pmu(void)	{ }
 #endif
 
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index bcb09e011e9e..be623701dc48 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -431,6 +431,20 @@ static atomic_t nr_bpf_events __read_mostly;
 static atomic_t nr_cgroup_events __read_mostly;
 static atomic_t nr_text_poke_events __read_mostly;
 static atomic_t nr_build_id_events __read_mostly;
+static atomic_t nr_include_guest_events __read_mostly;
+
+static atomic_t nr_mediated_pmu_vms;
+static DEFINE_MUTEX(perf_mediated_pmu_mutex);
+
+/* !exclude_guest event of PMU with PERF_PMU_CAP_MEDIATED_VPMU */
+static inline bool is_include_guest_event(struct perf_event *event)
+{
+	if ((event->pmu->capabilities & PERF_PMU_CAP_MEDIATED_VPMU) &&
+	    !event->attr.exclude_guest)
+		return true;
+
+	return false;
+}
 
 static LIST_HEAD(pmus);
 static DEFINE_MUTEX(pmus_lock);
@@ -5320,6 +5334,9 @@ static void _free_event(struct perf_event *event)
 
 	unaccount_event(event);
 
+	if (is_include_guest_event(event))
+		atomic_dec(&nr_include_guest_events);
+
 	security_perf_event_free(event);
 
 	if (event->rb) {
@@ -5877,6 +5894,36 @@ u64 perf_event_pause(struct perf_event *event, bool reset)
 }
 EXPORT_SYMBOL_GPL(perf_event_pause);
 
+/*
+ * Currently invoked at VM creation to
+ * - Check whether there are existing !exclude_guest events of PMU with
+ *   PERF_PMU_CAP_MEDIATED_VPMU
+ * - Set nr_mediated_pmu_vms to prevent !exclude_guest event creation on
+ *   PMUs with PERF_PMU_CAP_MEDIATED_VPMU
+ *
+ * No impact for the PMU without PERF_PMU_CAP_MEDIATED_VPMU. The perf
+ * still owns all the PMU resources.
+ */
+int perf_get_mediated_pmu(void)
+{
+	guard(mutex)(&perf_mediated_pmu_mutex);
+	if (atomic_inc_not_zero(&nr_mediated_pmu_vms))
+		return 0;
+
+	if (atomic_read(&nr_include_guest_events))
+		return -EBUSY;
+
+	atomic_inc(&nr_mediated_pmu_vms);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(perf_get_mediated_pmu);
+
+void perf_put_mediated_pmu(void)
+{
+	atomic_dec(&nr_mediated_pmu_vms);
+}
+EXPORT_SYMBOL_GPL(perf_put_mediated_pmu);
+
 /*
  * Holding the top-level event's child_mutex means that any
  * descendant process that has inherited this event will block
@@ -12210,6 +12257,17 @@ static void account_event(struct perf_event *event)
 	account_pmu_sb_event(event);
 }
 
+static int perf_account_include_guest_event(void)
+{
+	guard(mutex)(&perf_mediated_pmu_mutex);
+
+	if (atomic_read(&nr_mediated_pmu_vms))
+		return -EOPNOTSUPP;
+
+	atomic_inc(&nr_include_guest_events);
+	return 0;
+}
+
 /*
  * Allocate and initialize an event structure
  */
@@ -12435,11 +12493,19 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 	if (err)
 		goto err_callchain_buffer;
 
+	if (is_include_guest_event(event)) {
+		err = perf_account_include_guest_event();
+		if (err)
+			goto err_security_alloc;
+	}
+
 	/* symmetric to unaccount_event() in _free_event() */
 	account_event(event);
 
 	return event;
 
+err_security_alloc:
+	security_perf_event_free(event);
 err_callchain_buffer:
 	if (!event->parent) {
 		if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
-- 
2.49.0.395.g12beb8f557-goog
From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:42 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Mime-Version: 1.0
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-3-mizhang@google.com>
Subject: [PATCH v4 02/38] perf: Skip pmu_ctx based on event_type
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers,
 Adrian Hunter, Kan Liang, "H. Peter Anvin",
 linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang,
 Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das,
 Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Kan Liang

To optimize the cgroup context switch, the perf_event_pmu_context
iteration skips the PMUs without cgroup events. A bool cgroup was
introduced to indicate the case. It works, but this approach is hard
to extend to other cases, e.g., skipping non-passthrough PMUs, and it
doesn't make sense to keep adding bool variables.

Pass the event_type instead of a specific bool variable. Check both
the event_type and the related pmu_ctx variables to decide whether to
skip a PMU.

Event flags, e.g., EVENT_CGROUP, should be cleared from
ctx->is_active. Add EVENT_FLAGS to cover such event flags.

No functional change.
Signed-off-by: Kan Liang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 kernel/events/core.c | 73 ++++++++++++++++++++++++--------------------
 1 file changed, 40 insertions(+), 33 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index be623701dc48..8d3a0cc59fb4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -163,7 +163,7 @@ enum event_type_t {
 	/* see ctx_resched() for details */
 	EVENT_CPU	= 0x10,
 	EVENT_CGROUP	= 0x20,
-
+	EVENT_FLAGS	= EVENT_CGROUP,
 	/* compound helpers */
 	EVENT_ALL	= EVENT_FLEXIBLE | EVENT_PINNED,
 	EVENT_TIME_FROZEN = EVENT_TIME | EVENT_FROZEN,
@@ -733,27 +733,37 @@ do { \
 	___p; \
 })
 
-#define for_each_epc(_epc, _ctx, _pmu, _cgroup) \
+static bool perf_skip_pmu_ctx(struct perf_event_pmu_context *pmu_ctx,
+			      enum event_type_t event_type)
+{
+	if ((event_type & EVENT_CGROUP) && !pmu_ctx->nr_cgroups)
+		return true;
+	return false;
+}
+
+#define for_each_epc(_epc, _ctx, _pmu, _event_type) \
 	list_for_each_entry(_epc, &((_ctx)->pmu_ctx_list), pmu_ctx_entry) \
-		if (_cgroup && !_epc->nr_cgroups) \
+		if (perf_skip_pmu_ctx(_epc, _event_type)) \
 			continue; \
 		else if (_pmu && _epc->pmu != _pmu) \
 			continue; \
 		else
 
-static void perf_ctx_disable(struct perf_event_context *ctx, bool cgroup)
+static void perf_ctx_disable(struct perf_event_context *ctx,
+			     enum event_type_t event_type)
 {
 	struct perf_event_pmu_context *pmu_ctx;
 
-	for_each_epc(pmu_ctx, ctx, NULL, cgroup)
+	for_each_epc(pmu_ctx, ctx, NULL, event_type)
 		perf_pmu_disable(pmu_ctx->pmu);
 }
 
-static void perf_ctx_enable(struct perf_event_context *ctx, bool cgroup)
+static void perf_ctx_enable(struct perf_event_context *ctx,
+			    enum event_type_t event_type)
 {
 	struct perf_event_pmu_context *pmu_ctx;
 
-	for_each_epc(pmu_ctx, ctx, NULL, cgroup)
+	for_each_epc(pmu_ctx, ctx, NULL, event_type)
 		perf_pmu_enable(pmu_ctx->pmu);
 }
 
@@ -913,7 +923,7 @@ static void perf_cgroup_switch(struct task_struct *task)
 		return;
 
 	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
-	perf_ctx_disable(&cpuctx->ctx, true);
+	perf_ctx_disable(&cpuctx->ctx, EVENT_CGROUP);
 
 	ctx_sched_out(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);
 	/*
@@ -929,7 +939,7 @@ static void perf_cgroup_switch(struct task_struct *task)
 	 */
 	ctx_sched_in(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);
 
-	perf_ctx_enable(&cpuctx->ctx, true);
+	perf_ctx_enable(&cpuctx->ctx, EVENT_CGROUP);
 	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 }
 
@@ -2796,11 +2806,11 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
 
 	event_type &= EVENT_ALL;
 
-	for_each_epc(epc, &cpuctx->ctx, pmu, false)
+	for_each_epc(epc, &cpuctx->ctx, pmu, 0)
 		perf_pmu_disable(epc->pmu);
 
 	if (task_ctx) {
-		for_each_epc(epc, task_ctx, pmu, false)
+		for_each_epc(epc, task_ctx, pmu, 0)
 			perf_pmu_disable(epc->pmu);
 
 		task_ctx_sched_out(task_ctx, pmu, event_type);
@@ -2820,11 +2830,11 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
 
 	perf_event_sched_in(cpuctx, task_ctx, pmu);
 
-	for_each_epc(epc, &cpuctx->ctx, pmu, false)
+	for_each_epc(epc, &cpuctx->ctx, pmu, 0)
 		perf_pmu_enable(epc->pmu);
 
 	if (task_ctx) {
-		for_each_epc(epc, task_ctx, pmu, false)
+		for_each_epc(epc, task_ctx, pmu, 0)
 			perf_pmu_enable(epc->pmu);
 	}
 }
@@ -3374,11 +3384,10 @@ static void
 ctx_sched_out(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t event_type)
 {
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
+	enum event_type_t active_type = event_type & ~EVENT_FLAGS;
 	struct perf_event_pmu_context *pmu_ctx;
 	int is_active = ctx->is_active;
-	bool cgroup = event_type & EVENT_CGROUP;
 
-	event_type &= ~EVENT_CGROUP;
 
 	lockdep_assert_held(&ctx->lock);
 
@@ -3409,7 +3418,7 @@ ctx_sched_out(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 	 * see __load_acquire() in perf_event_time_now()
 	 */
 	barrier();
-	ctx->is_active &= ~event_type;
+	ctx->is_active &= ~active_type;
 
 	if (!(ctx->is_active & EVENT_ALL)) {
 		/*
@@ -3430,7 +3439,7 @@ ctx_sched_out(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 
 	is_active ^= ctx->is_active; /* changed bits */
 
-	for_each_epc(pmu_ctx, ctx, pmu, cgroup)
+	for_each_epc(pmu_ctx, ctx, pmu, event_type)
 		__pmu_ctx_sched_out(pmu_ctx, is_active);
 }
 
@@ -3622,7 +3631,7 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
 	raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING);
 	if (context_equiv(ctx, next_ctx)) {
 
-		perf_ctx_disable(ctx, false);
+		perf_ctx_disable(ctx, 0);
 
 		/* PMIs are disabled; ctx->nr_no_switch_fast is stable. */
 		if (local_read(&ctx->nr_no_switch_fast) ||
@@ -3647,7 +3656,7 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
 		perf_ctx_sched_task_cb(ctx, false);
 		perf_event_swap_task_ctx_data(ctx, next_ctx);
 
-		perf_ctx_enable(ctx, false);
+		perf_ctx_enable(ctx, 0);
 
 		/*
 		 * RCU_INIT_POINTER here is safe because we've not
@@ -3671,13 +3680,13 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
 
 	if (do_switch) {
 		raw_spin_lock(&ctx->lock);
-		perf_ctx_disable(ctx, false);
+		perf_ctx_disable(ctx, 0);
 
 inside_switch:
 		perf_ctx_sched_task_cb(ctx, false);
 		task_ctx_sched_out(ctx, NULL, EVENT_ALL);
 
-		perf_ctx_enable(ctx, false);
+		perf_ctx_enable(ctx, 0);
 		raw_spin_unlock(&ctx->lock);
 	}
 }
@@ -3981,11 +3990,9 @@ static void
 ctx_sched_in(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t event_type)
 {
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
+	enum event_type_t active_type = event_type & ~EVENT_FLAGS;
 	struct perf_event_pmu_context *pmu_ctx;
 	int is_active = ctx->is_active;
-	bool cgroup = event_type & EVENT_CGROUP;
-
-	event_type &= ~EVENT_CGROUP;
 
 	lockdep_assert_held(&ctx->lock);
 
@@ -4003,7 +4010,7 @@ ctx_sched_in(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 		barrier();
 	}
 
-	ctx->is_active |= (event_type | EVENT_TIME);
+	ctx->is_active |= active_type | EVENT_TIME;
 	if (ctx->task) {
 		if (!(is_active & EVENT_ALL))
 			cpuctx->task_ctx = ctx;
@@ -4018,13 +4025,13 @@ ctx_sched_in(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 	 * in order to give them the best chance of going on.
 	 */
 	if (is_active & EVENT_PINNED) {
-		for_each_epc(pmu_ctx, ctx, pmu, cgroup)
+		for_each_epc(pmu_ctx, ctx, pmu, event_type)
 			__pmu_ctx_sched_in(pmu_ctx, EVENT_PINNED);
 	}
 
 	/* Then walk through the lower prio flexible groups */
 	if (is_active & EVENT_FLEXIBLE) {
-		for_each_epc(pmu_ctx, ctx, pmu, cgroup)
+		for_each_epc(pmu_ctx, ctx, pmu, event_type)
 			__pmu_ctx_sched_in(pmu_ctx, EVENT_FLEXIBLE);
 	}
 }
@@ -4041,11 +4048,11 @@ static void perf_event_context_sched_in(struct task_struct *task)
 
 	if (cpuctx->task_ctx == ctx) {
 		perf_ctx_lock(cpuctx, ctx);
-		perf_ctx_disable(ctx, false);
+		perf_ctx_disable(ctx, 0);
 
 		perf_ctx_sched_task_cb(ctx, true);
 
-		perf_ctx_enable(ctx, false);
+		perf_ctx_enable(ctx, 0);
 		perf_ctx_unlock(cpuctx, ctx);
 		goto rcu_unlock;
 	}
@@ -4058,7 +4065,7 @@ static void perf_event_context_sched_in(struct task_struct *task)
 	if (!ctx->nr_events)
 		goto unlock;
 
-	perf_ctx_disable(ctx, false);
+	perf_ctx_disable(ctx, 0);
 	/*
 	 * We want to keep the following priority order:
 	 * cpu pinned (that don't need to move), task pinned,
@@ -4068,7 +4075,7 @@ static void perf_event_context_sched_in(struct task_struct *task)
 	 * events, no need to flip the cpuctx's events around.
 	 */
 	if (!RB_EMPTY_ROOT(&ctx->pinned_groups.tree)) {
-		perf_ctx_disable(&cpuctx->ctx, false);
+		perf_ctx_disable(&cpuctx->ctx, 0);
 		ctx_sched_out(&cpuctx->ctx, NULL, EVENT_FLEXIBLE);
 	}
 
@@ -4077,9 +4084,9 @@ static void perf_event_context_sched_in(struct task_struct *task)
 	perf_ctx_sched_task_cb(cpuctx->task_ctx, true);
 
 	if (!RB_EMPTY_ROOT(&ctx->pinned_groups.tree))
-		perf_ctx_enable(&cpuctx->ctx, false);
+		perf_ctx_enable(&cpuctx->ctx, 0);
 
-	perf_ctx_enable(ctx, false);
+	perf_ctx_enable(ctx, 0);
 
 unlock:
 	perf_ctx_unlock(cpuctx, ctx);
-- 
2.49.0.395.g12beb8f557-goog
From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:43 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Mime-Version: 1.0
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-4-mizhang@google.com>
Subject: [PATCH v4 03/38] perf: Clean up perf ctx time
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
 Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers,
 Adrian Hunter, Kan Liang, "H. Peter Anvin",
 linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang,
 Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das,
 Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Kan Liang

Perf currently tracks two timestamps, one for the normal ctx and one
for the cgroup. The same type of variables and similar code are used
to track them. A following patch will introduce a third timestamp to
track guest time. To avoid code duplication, add a new struct
perf_time_ctx and factor out a generic function update_perf_time_ctx().
No functional change. Suggested-by: Peter Zijlstra (Intel) Signed-off-by: Kan Liang Signed-off-by: Mingwei Zhang --- include/linux/perf_event.h | 13 +++---- kernel/events/core.c | 70 +++++++++++++++++--------------------- 2 files changed, 39 insertions(+), 44 deletions(-) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 54018dd0b2a4..a2fd1bdc955c 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -953,6 +953,11 @@ struct perf_event_groups { u64 index; }; =20 +struct perf_time_ctx { + u64 time; + u64 stamp; + u64 offset; +}; =20 /** * struct perf_event_context - event context structure @@ -992,9 +997,7 @@ struct perf_event_context { /* * Context clock, runs when context enabled. */ - u64 time; - u64 timestamp; - u64 timeoffset; + struct perf_time_ctx time; =20 /* * These fields let us detect when two contexts have both @@ -1085,9 +1088,7 @@ struct bpf_perf_event_data_kern { * This is a per-cpu dynamically allocated data structure. */ struct perf_cgroup_info { - u64 time; - u64 timestamp; - u64 timeoffset; + struct perf_time_ctx time; int active; }; =20 diff --git a/kernel/events/core.c b/kernel/events/core.c index 8d3a0cc59fb4..e38c8b5e8086 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -770,6 +770,24 @@ static void perf_ctx_enable(struct perf_event_context = *ctx, static void ctx_sched_out(struct perf_event_context *ctx, struct pmu *pmu,= enum event_type_t event_type); static void ctx_sched_in(struct perf_event_context *ctx, struct pmu *pmu, = enum event_type_t event_type); =20 +static inline void update_perf_time_ctx(struct perf_time_ctx *time, u64 no= w, bool adv) +{ + if (adv) + time->time +=3D now - time->stamp; + time->stamp =3D now; + + /* + * The above: time' =3D time + (now - timestamp), can be re-arranged + * into: time` =3D now + (time - timestamp), which gives a single value + * offset to compute future time without locks on. 
+	 *
+	 * See perf_event_time_now(), which can be used from NMI context where
+	 * it's (obviously) not possible to acquire ctx->lock in order to read
+	 * both the above values in a consistent manner.
+	 */
+	WRITE_ONCE(time->offset, time->time - time->stamp);
+}
+
 #ifdef CONFIG_CGROUP_PERF
 
 static inline bool
@@ -811,7 +829,7 @@ static inline u64 perf_cgroup_event_time(struct perf_event *event)
 	struct perf_cgroup_info *t;
 
 	t = per_cpu_ptr(event->cgrp->info, event->cpu);
-	return t->time;
+	return t->time.time;
 }
 
 static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
@@ -820,22 +838,11 @@ static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
 
 	t = per_cpu_ptr(event->cgrp->info, event->cpu);
 	if (!__load_acquire(&t->active))
-		return t->time;
-	now += READ_ONCE(t->timeoffset);
+		return t->time.time;
+	now += READ_ONCE(t->time.offset);
 	return now;
 }
 
-static inline void __update_cgrp_time(struct perf_cgroup_info *info, u64 now, bool adv)
-{
-	if (adv)
-		info->time += now - info->timestamp;
-	info->timestamp = now;
-	/*
-	 * see update_context_time()
-	 */
-	WRITE_ONCE(info->timeoffset, info->time - info->timestamp);
-}
-
 static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx, bool final)
 {
 	struct perf_cgroup *cgrp = cpuctx->cgrp;
@@ -849,7 +856,7 @@ static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx,
 		cgrp = container_of(css, struct perf_cgroup, css);
 		info = this_cpu_ptr(cgrp->info);
 
-		__update_cgrp_time(info, now, true);
+		update_perf_time_ctx(&info->time, now, true);
 		if (final)
 			__store_release(&info->active, 0);
 }
@@ -872,7 +879,7 @@ static inline void update_cgrp_time_from_event(struct perf_event *event)
 	 * Do not update time when cgroup is not active
 	 */
 	if (info->active)
-		__update_cgrp_time(info, perf_clock(), true);
+		update_perf_time_ctx(&info->time, perf_clock(), true);
 }
 
 static inline void
@@ -896,7 +903,7 @@ perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx)
 	for (css = &cgrp->css; css; css = css->parent) {
 		cgrp = container_of(css, struct perf_cgroup, css);
 		info = this_cpu_ptr(cgrp->info);
-		__update_cgrp_time(info, ctx->timestamp, false);
+		update_perf_time_ctx(&info->time, ctx->time.stamp, false);
 		__store_release(&info->active, 1);
 	}
 }
@@ -1511,20 +1518,7 @@ static void __update_context_time(struct perf_event_context *ctx, bool adv)
 
 	lockdep_assert_held(&ctx->lock);
 
-	if (adv)
-		ctx->time += now - ctx->timestamp;
-	ctx->timestamp = now;
-
-	/*
-	 * The above: time' = time + (now - timestamp), can be re-arranged
-	 * into: time` = now + (time - timestamp), which gives a single value
-	 * offset to compute future time without locks on.
-	 *
-	 * See perf_event_time_now(), which can be used from NMI context where
-	 * it's (obviously) not possible to acquire ctx->lock in order to read
-	 * both the above values in a consistent manner.
-	 */
-	WRITE_ONCE(ctx->timeoffset, ctx->time - ctx->timestamp);
+	update_perf_time_ctx(&ctx->time, now, adv);
 }
 
 static void update_context_time(struct perf_event_context *ctx)
@@ -1542,7 +1536,7 @@ static u64 perf_event_time(struct perf_event *event)
 	if (is_cgroup_event(event))
 		return perf_cgroup_event_time(event);
 
-	return ctx->time;
+	return ctx->time.time;
 }
 
 static u64 perf_event_time_now(struct perf_event *event, u64 now)
@@ -1556,9 +1550,9 @@ static u64 perf_event_time_now(struct perf_event *event, u64 now)
 		return perf_cgroup_event_time_now(event, now);
 
 	if (!(__load_acquire(&ctx->is_active) & EVENT_TIME))
-		return ctx->time;
+		return ctx->time.time;
 
-	now += READ_ONCE(ctx->timeoffset);
+	now += READ_ONCE(ctx->time.offset);
 	return now;
 }
 
@@ -11533,14 +11527,14 @@ static void task_clock_event_update(struct perf_event *event, u64 now)
 
 static void task_clock_event_start(struct perf_event *event, int flags)
 {
-	local64_set(&event->hw.prev_count, event->ctx->time);
+	local64_set(&event->hw.prev_count, event->ctx->time.time);
 	perf_swevent_start_hrtimer(event);
 }
 
 static void task_clock_event_stop(struct perf_event *event, int flags)
 {
 	perf_swevent_cancel_hrtimer(event);
-	task_clock_event_update(event, event->ctx->time);
+	task_clock_event_update(event, event->ctx->time.time);
 }
 
 static int task_clock_event_add(struct perf_event *event, int flags)
@@ -11560,8 +11554,8 @@ static void task_clock_event_del(struct perf_event *event, int flags)
 static void task_clock_event_read(struct perf_event *event)
 {
 	u64 now = perf_clock();
-	u64 delta = now - event->ctx->timestamp;
-	u64 time = event->ctx->time + delta;
+	u64 delta = now - event->ctx->time.stamp;
+	u64 time = event->ctx->time.time + delta;
 
 	task_clock_event_update(event, time);
 }
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:44 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-5-mizhang@google.com>
Subject: [PATCH v4 04/38] perf: Add an EVENT_GUEST flag
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Kan Liang, "H. Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Kan Liang

perf currently does not explicitly schedule out exclude_guest events
while a guest is running.
This is not a problem for the current emulated vPMU, because perf owns
all the PMU counters: it can mask a counter that is assigned to an
exclude_guest event while a guest is running (the Intel way), or set the
corresponding HOSTONLY bit in the EVENTSEL (the AMD way), so the counter
does not count while a guest is running.

Neither approach works with the introduced passthrough vPMU. A guest
owns all the PMU counters while it is running, so the host must not mask
any counter: the counter may be in use by the guest, and the EVENTSEL
may be overwritten. Instead, perf has to explicitly schedule out all
exclude_guest events to release the PMU resources when entering a guest,
and resume counting when exiting the guest.

An exclude_guest event may also be created while a guest is running;
such an event must not be scheduled in either.

The ctx time is shared among different PMUs and cannot be stopped while
a guest is running, because it is needed to calculate the time of events
from other PMUs, e.g., uncore events. Add timeguest to track the guest
run time. For an exclude_guest event, the elapsed time equals the ctx
time minus the guest time. Cgroups have dedicated times; use the same
method to deduct the guest time from the cgroup time as well.

Co-developed-by: Peter Zijlstra (Intel)
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Kan Liang
Signed-off-by: Mingwei Zhang
---
 include/linux/perf_event.h |   6 ++
 kernel/events/core.c       | 209 +++++++++++++++++++++++++++++--------
 2 files changed, 169 insertions(+), 46 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index a2fd1bdc955c..7bda1e20be12 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -999,6 +999,11 @@ struct perf_event_context {
 	 */
 	struct perf_time_ctx	time;
 
+	/*
+	 * Context clock, runs when in the guest mode.
+	 */
+	struct perf_time_ctx	timeguest;
+
 	/*
 	 * These fields let us detect when two contexts have both
 	 * been cloned (inherited) from a common ancestor.
@@ -1089,6 +1094,7 @@ struct bpf_perf_event_data_kern {
  */
 struct perf_cgroup_info {
 	struct perf_time_ctx	time;
+	struct perf_time_ctx	timeguest;
 	int			active;
 };
 
diff --git a/kernel/events/core.c b/kernel/events/core.c
index e38c8b5e8086..7a2115b2c5c1 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -163,7 +163,8 @@ enum event_type_t {
 	/* see ctx_resched() for details */
 	EVENT_CPU	= 0x10,
 	EVENT_CGROUP	= 0x20,
-	EVENT_FLAGS	= EVENT_CGROUP,
+	EVENT_GUEST	= 0x40,
+	EVENT_FLAGS	= EVENT_CGROUP | EVENT_GUEST,
 	/* compound helpers */
 	EVENT_ALL	= EVENT_FLEXIBLE | EVENT_PINNED,
 	EVENT_TIME_FROZEN = EVENT_TIME | EVENT_FROZEN,
@@ -435,6 +436,7 @@ static atomic_t nr_include_guest_events __read_mostly;
 
 static atomic_t nr_mediated_pmu_vms;
 static DEFINE_MUTEX(perf_mediated_pmu_mutex);
+static DEFINE_PER_CPU(bool, perf_in_guest);
 
 /* !exclude_guest event of PMU with PERF_PMU_CAP_MEDIATED_VPMU */
 static inline bool is_include_guest_event(struct perf_event *event)
@@ -738,6 +740,9 @@ static bool perf_skip_pmu_ctx(struct perf_event_pmu_context *pmu_ctx,
 {
 	if ((event_type & EVENT_CGROUP) && !pmu_ctx->nr_cgroups)
 		return true;
+	if ((event_type & EVENT_GUEST) &&
+	    !(pmu_ctx->pmu->capabilities & PERF_PMU_CAP_MEDIATED_VPMU))
+		return true;
 	return false;
 }
 
@@ -788,6 +793,39 @@ static inline void update_perf_time_ctx(struct perf_time_ctx *time, u64 now, boo
 	WRITE_ONCE(time->offset, time->time - time->stamp);
 }
 
+static_assert(offsetof(struct perf_event_context, timeguest) -
+	      offsetof(struct perf_event_context, time) ==
+	      sizeof(struct perf_time_ctx));
+
+#define T_TOTAL	0
+#define T_GUEST	1
+
+static inline u64 __perf_event_time_ctx(struct perf_event *event,
+					struct perf_time_ctx *times)
+{
+	u64 time = times[T_TOTAL].time;
+
+	if (event->attr.exclude_guest)
+		time -= times[T_GUEST].time;
+
+	return time;
+}
+
+static inline u64 __perf_event_time_ctx_now(struct perf_event *event,
+					    struct perf_time_ctx *times,
+					    u64 now)
+{
+	if (event->attr.exclude_guest && __this_cpu_read(perf_in_guest)) {
+		/*
+		 * (now + times[total].offset) - (now + times[guest].offset) :=
+		 * times[total].offset - times[guest].offset
+		 */
+		return READ_ONCE(times[T_TOTAL].offset) - READ_ONCE(times[T_GUEST].offset);
+	}
+
+	return now + READ_ONCE(times[T_TOTAL].offset);
+}
+
 #ifdef CONFIG_CGROUP_PERF
 
 static inline bool
@@ -824,12 +862,16 @@ static inline int is_cgroup_event(struct perf_event *event)
 	return event->cgrp != NULL;
 }
 
+static_assert(offsetof(struct perf_cgroup_info, timeguest) -
+	      offsetof(struct perf_cgroup_info, time) ==
+	      sizeof(struct perf_time_ctx));
+
 static inline u64 perf_cgroup_event_time(struct perf_event *event)
 {
 	struct perf_cgroup_info *t;
 
 	t = per_cpu_ptr(event->cgrp->info, event->cpu);
-	return t->time.time;
+	return __perf_event_time_ctx(event, &t->time);
 }
 
 static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
@@ -838,9 +880,21 @@ static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
 
 	t = per_cpu_ptr(event->cgrp->info, event->cpu);
 	if (!__load_acquire(&t->active))
-		return t->time.time;
-	now += READ_ONCE(t->time.offset);
-	return now;
+		return __perf_event_time_ctx(event, &t->time);
+
+	return __perf_event_time_ctx_now(event, &t->time, now);
+}
+
+static inline void __update_cgrp_guest_time(struct perf_cgroup_info *info, u64 now, bool adv)
+{
+	update_perf_time_ctx(&info->timeguest, now, adv);
+}
+
+static inline void update_cgrp_time(struct perf_cgroup_info *info, u64 now)
+{
+	update_perf_time_ctx(&info->time, now, true);
+	if (__this_cpu_read(perf_in_guest))
+		__update_cgrp_guest_time(info, now, true);
 }
 
 static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx, bool final)
@@ -856,7 +910,7 @@ static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx,
 		cgrp = container_of(css, struct perf_cgroup, css);
 		info = this_cpu_ptr(cgrp->info);
 
-		update_perf_time_ctx(&info->time, now, true);
+		update_cgrp_time(info, now);
 		if (final)
 			__store_release(&info->active, 0);
 }
@@ -879,11 +933,11 @@ static inline void update_cgrp_time_from_event(struct perf_event *event)
 	 * Do not update time when cgroup is not active
 	 */
 	if (info->active)
-		update_perf_time_ctx(&info->time, perf_clock(), true);
+		update_cgrp_time(info, perf_clock());
 }
 
 static inline void
-perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx)
+perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx, bool guest)
 {
 	struct perf_event_context *ctx = &cpuctx->ctx;
 	struct perf_cgroup *cgrp = cpuctx->cgrp;
@@ -903,8 +957,12 @@ perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx)
 	for (css = &cgrp->css; css; css = css->parent) {
 		cgrp = container_of(css, struct perf_cgroup, css);
 		info = this_cpu_ptr(cgrp->info);
-		update_perf_time_ctx(&info->time, ctx->time.stamp, false);
-		__store_release(&info->active, 1);
+		if (guest) {
+			__update_cgrp_guest_time(info, ctx->time.stamp, false);
+		} else {
+			update_perf_time_ctx(&info->time, ctx->time.stamp, false);
+			__store_release(&info->active, 1);
+		}
 	}
 }
 
@@ -1104,7 +1162,7 @@ static inline int perf_cgroup_connect(pid_t pid, struct perf_event *event,
 }
 
 static inline void
-perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx)
+perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx, bool guest)
 {
 }
 
@@ -1514,16 +1572,24 @@ static void perf_unpin_context(struct perf_event_context *ctx)
  */
 static void __update_context_time(struct perf_event_context *ctx, bool adv)
 {
-	u64 now = perf_clock();
+	lockdep_assert_held(&ctx->lock);
+
+	update_perf_time_ctx(&ctx->time, perf_clock(), adv);
+}
 
+static void __update_context_guest_time(struct perf_event_context *ctx, bool adv)
+{
 	lockdep_assert_held(&ctx->lock);
 
-	update_perf_time_ctx(&ctx->time, now, adv);
+	/* must be called after __update_context_time(); */
+	update_perf_time_ctx(&ctx->timeguest, ctx->time.stamp, adv);
 }
 
 static void update_context_time(struct perf_event_context *ctx)
 {
 	__update_context_time(ctx, true);
+	if (__this_cpu_read(perf_in_guest))
+		__update_context_guest_time(ctx, true);
 }
 
 static u64 perf_event_time(struct perf_event *event)
@@ -1536,7 +1602,7 @@ static u64 perf_event_time(struct perf_event *event)
 	if (is_cgroup_event(event))
 		return perf_cgroup_event_time(event);
 
-	return ctx->time.time;
+	return __perf_event_time_ctx(event, &ctx->time);
 }
 
 static u64 perf_event_time_now(struct perf_event *event, u64 now)
@@ -1550,10 +1616,9 @@ static u64 perf_event_time_now(struct perf_event *event, u64 now)
 		return perf_cgroup_event_time_now(event, now);
 
 	if (!(__load_acquire(&ctx->is_active) & EVENT_TIME))
-		return ctx->time.time;
+		return __perf_event_time_ctx(event, &ctx->time);
 
-	now += READ_ONCE(ctx->time.offset);
-	return now;
+	return __perf_event_time_ctx_now(event, &ctx->time, now);
 }
 
 static enum event_type_t get_event_type(struct perf_event *event)
@@ -2384,20 +2449,23 @@ group_sched_out(struct perf_event *group_event, struct perf_event_context *ctx)
 }
 
 static inline void
-__ctx_time_update(struct perf_cpu_context *cpuctx, struct perf_event_context *ctx, bool final)
+__ctx_time_update(struct perf_cpu_context *cpuctx, struct perf_event_context *ctx,
+		  bool final, enum event_type_t event_type)
 {
 	if (ctx->is_active & EVENT_TIME) {
 		if (ctx->is_active & EVENT_FROZEN)
 			return;
+
 		update_context_time(ctx);
-		update_cgrp_time_from_cpuctx(cpuctx, final);
+		/* vPMU should not stop time */
+		update_cgrp_time_from_cpuctx(cpuctx, !(event_type & EVENT_GUEST) && final);
 	}
 }
 
 static inline void
 ctx_time_update(struct perf_cpu_context *cpuctx, struct perf_event_context *ctx)
 {
-	__ctx_time_update(cpuctx, ctx, false);
+	__ctx_time_update(cpuctx, ctx, false, 0);
 }
 
 /*
@@ -3405,7 +3473,7 @@ ctx_sched_out(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 	 *
 	 * would only update time for the pinned events.
 	 */
-	__ctx_time_update(cpuctx, ctx, ctx == &cpuctx->ctx);
+	__ctx_time_update(cpuctx, ctx, ctx == &cpuctx->ctx, event_type);
 
 	/*
 	 * CPU-release for the below ->is_active store,
@@ -3431,7 +3499,18 @@ ctx_sched_out(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 		cpuctx->task_ctx = NULL;
 	}
 
-	is_active ^= ctx->is_active; /* changed bits */
+	if (event_type & EVENT_GUEST) {
+		/*
+		 * Schedule out all exclude_guest events of PMU
+		 * with PERF_PMU_CAP_MEDIATED_VPMU.
+		 */
+		is_active = EVENT_ALL;
+		__update_context_guest_time(ctx, false);
+		perf_cgroup_set_timestamp(cpuctx, true);
+		barrier();
+	} else {
+		is_active ^= ctx->is_active; /* changed bits */
+	}
 
 	for_each_epc(pmu_ctx, ctx, pmu, event_type)
 		__pmu_ctx_sched_out(pmu_ctx, is_active);
@@ -3926,10 +4005,15 @@ static inline void group_update_userpage(struct perf_event *group_event)
 		event_update_userpage(event);
 }
 
+struct merge_sched_data {
+	int can_add_hw;
+	enum event_type_t event_type;
+};
+
 static int merge_sched_in(struct perf_event *event, void *data)
 {
 	struct perf_event_context *ctx = event->ctx;
-	int *can_add_hw = data;
+	struct merge_sched_data *msd = data;
 
 	if (event->state <= PERF_EVENT_STATE_OFF)
 		return 0;
@@ -3937,13 +4021,22 @@ static int merge_sched_in(struct perf_event *event, void *data)
 	if (!event_filter_match(event))
 		return 0;
 
-	if (group_can_go_on(event, *can_add_hw)) {
+	/*
+	 * Don't schedule in any host events from PMU with
+	 * PERF_PMU_CAP_MEDIATED_VPMU, while a guest is running.
+	 */
+	if (__this_cpu_read(perf_in_guest) &&
+	    event->pmu_ctx->pmu->capabilities & PERF_PMU_CAP_MEDIATED_VPMU &&
+	    !(msd->event_type & EVENT_GUEST))
+		return 0;
+
+	if (group_can_go_on(event, msd->can_add_hw)) {
 		if (!group_sched_in(event, ctx))
 			list_add_tail(&event->active_list, get_event_list(event));
 	}
 
 	if (event->state == PERF_EVENT_STATE_INACTIVE) {
-		*can_add_hw = 0;
+		msd->can_add_hw = 0;
 		if (event->attr.pinned) {
 			perf_cgroup_event_disable(event, ctx);
 			perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
@@ -3962,11 +4055,15 @@ static int merge_sched_in(struct perf_event *event, void *data)
 
 static void pmu_groups_sched_in(struct perf_event_context *ctx,
 				struct perf_event_groups *groups,
-				struct pmu *pmu)
+				struct pmu *pmu,
+				enum event_type_t event_type)
 {
-	int can_add_hw = 1;
+	struct merge_sched_data msd = {
+		.can_add_hw = 1,
+		.event_type = event_type,
+	};
 	visit_groups_merge(ctx, groups, smp_processor_id(), pmu,
-			   merge_sched_in, &can_add_hw);
+			   merge_sched_in, &msd);
 }
 
 static void __pmu_ctx_sched_in(struct perf_event_pmu_context *pmu_ctx,
@@ -3975,9 +4072,9 @@ static void __pmu_ctx_sched_in(struct perf_event_pmu_context *pmu_ctx,
 	struct perf_event_context *ctx = pmu_ctx->ctx;
 
 	if (event_type & EVENT_PINNED)
-		pmu_groups_sched_in(ctx, &ctx->pinned_groups, pmu_ctx->pmu);
+		pmu_groups_sched_in(ctx, &ctx->pinned_groups, pmu_ctx->pmu, event_type);
 	if (event_type & EVENT_FLEXIBLE)
-		pmu_groups_sched_in(ctx, &ctx->flexible_groups, pmu_ctx->pmu);
+		pmu_groups_sched_in(ctx, &ctx->flexible_groups, pmu_ctx->pmu, event_type);
 }
 
 static void
@@ -3994,9 +4091,11 @@ ctx_sched_in(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 		return;
 
 	if (!(is_active & EVENT_TIME)) {
+		/* EVENT_TIME should be active while the guest runs */
+		WARN_ON_ONCE(event_type & EVENT_GUEST);
 		/* start ctx time */
 		__update_context_time(ctx, false);
-		perf_cgroup_set_timestamp(cpuctx);
+		perf_cgroup_set_timestamp(cpuctx, false);
 		/*
 		 * CPU-release for the below ->is_active store,
 		 * see __load_acquire() in perf_event_time_now()
@@ -4012,7 +4111,23 @@ ctx_sched_in(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 			WARN_ON_ONCE(cpuctx->task_ctx != ctx);
 	}
 
-	is_active ^= ctx->is_active; /* changed bits */
+	if (event_type & EVENT_GUEST) {
+		/*
+		 * Schedule in the required exclude_guest events of PMU
+		 * with PERF_PMU_CAP_MEDIATED_VPMU.
+		 */
+		is_active = event_type & EVENT_ALL;
+
+		/*
+		 * Update ctx time to set the new start time for
+		 * the exclude_guest events.
+		 */
+		update_context_time(ctx);
+		update_cgrp_time_from_cpuctx(cpuctx, false);
+		barrier();
+	} else {
+		is_active ^= ctx->is_active; /* changed bits */
+	}
 
 	/*
 	 * First go through the list and put on any pinned groups
@@ -4020,13 +4135,13 @@ ctx_sched_in(struct perf_event_context *ctx, struct pmu *pmu, enum event_type_t
 	 */
 	if (is_active & EVENT_PINNED) {
 		for_each_epc(pmu_ctx, ctx, pmu, event_type)
-			__pmu_ctx_sched_in(pmu_ctx, EVENT_PINNED);
+			__pmu_ctx_sched_in(pmu_ctx, EVENT_PINNED | (event_type & EVENT_GUEST));
 	}
 
 	/* Then walk through the lower prio flexible groups */
 	if (is_active & EVENT_FLEXIBLE) {
 		for_each_epc(pmu_ctx, ctx, pmu, event_type)
-			__pmu_ctx_sched_in(pmu_ctx, EVENT_FLEXIBLE);
+			__pmu_ctx_sched_in(pmu_ctx, EVENT_FLEXIBLE | (event_type & EVENT_GUEST));
 	}
 }
 
@@ -6285,23 +6400,25 @@ void perf_event_update_userpage(struct perf_event *event)
 	if (!rb)
 		goto unlock;
 
-	/*
-	 * compute total_time_enabled, total_time_running
-	 * based on snapshot values taken when the event
-	 * was last scheduled in.
-	 *
-	 * we cannot simply called update_context_time()
-	 * because of locking issue as we can be called in
-	 * NMI context
-	 */
-	calc_timer_values(event, &now, &enabled, &running);
-
-	userpg = rb->user_page;
 	/*
 	 * Disable preemption to guarantee consistent time stamps are stored to
 	 * the user page.
 	 */
 	preempt_disable();
+
+	/*
+	 * compute total_time_enabled, total_time_running
+	 * based on snapshot values taken when the event
+	 * was last scheduled in.
+	 *
+	 * we cannot simply called update_context_time()
+	 * because of locking issue as we can be called in
+	 * NMI context
+	 */
+	calc_timer_values(event, &now, &enabled, &running);
+
+	userpg = rb->user_page;
+
 	++userpg->lock;
 	barrier();
 	userpg->index = perf_event_index(event);
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:45 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-6-mizhang@google.com>
Subject: [PATCH v4 05/38] perf: Add generic exclude_guest support
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Kan Liang, "H. Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Kan Liang

Only KVM knows the exact time when a guest is entering or exiting.
Expose two interfaces to KVM to switch the ownership of the PMU
resources. All the pinned events must be scheduled in first. Extend the
perf_event_sched_in() helper to take an extra flag, e.g., EVENT_GUEST.
Signed-off-by: Kan Liang
Signed-off-by: Mingwei Zhang
---
 include/linux/perf_event.h |  4 ++
 kernel/events/core.c       | 80 ++++++++++++++++++++++++++++++++++----
 2 files changed, 77 insertions(+), 7 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 7bda1e20be12..37187ee8e226 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1822,6 +1822,8 @@ extern int perf_event_period(struct perf_event *event, u64 value);
 extern u64 perf_event_pause(struct perf_event *event, bool reset);
 int perf_get_mediated_pmu(void);
 void perf_put_mediated_pmu(void);
+void perf_guest_enter(void);
+void perf_guest_exit(void);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
 perf_aux_output_begin(struct perf_output_handle *handle,
@@ -1919,6 +1921,8 @@ static inline int perf_get_mediated_pmu(void)
 }
 
 static inline void perf_put_mediated_pmu(void)			{ }
+static inline void perf_guest_enter(void)			{ }
+static inline void perf_guest_exit(void)			{ }
 #endif
 
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 7a2115b2c5c1..d05487d465c9 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2827,14 +2827,15 @@ static void task_ctx_sched_out(struct perf_event_context *ctx,
 
 static void perf_event_sched_in(struct perf_cpu_context *cpuctx,
 				struct perf_event_context *ctx,
-				struct pmu *pmu)
+				struct pmu *pmu,
+				enum event_type_t event_type)
 {
-	ctx_sched_in(&cpuctx->ctx, pmu, EVENT_PINNED);
+	ctx_sched_in(&cpuctx->ctx, pmu, EVENT_PINNED | event_type);
 	if (ctx)
-		ctx_sched_in(ctx, pmu, EVENT_PINNED);
-	ctx_sched_in(&cpuctx->ctx, pmu, EVENT_FLEXIBLE);
+		ctx_sched_in(ctx, pmu, EVENT_PINNED | event_type);
+	ctx_sched_in(&cpuctx->ctx, pmu, EVENT_FLEXIBLE | event_type);
 	if (ctx)
-		ctx_sched_in(ctx, pmu, EVENT_FLEXIBLE);
+		ctx_sched_in(ctx, pmu, EVENT_FLEXIBLE | event_type);
 }
 
 /*
@@ -2890,7 +2891,7 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
*cpuc= tx, else if (event_type & EVENT_PINNED) ctx_sched_out(&cpuctx->ctx, pmu, EVENT_FLEXIBLE); =20 - perf_event_sched_in(cpuctx, task_ctx, pmu); + perf_event_sched_in(cpuctx, task_ctx, pmu, 0); =20 for_each_epc(epc, &cpuctx->ctx, pmu, 0) perf_pmu_enable(epc->pmu); @@ -4188,7 +4189,7 @@ static void perf_event_context_sched_in(struct task_s= truct *task) ctx_sched_out(&cpuctx->ctx, NULL, EVENT_FLEXIBLE); } =20 - perf_event_sched_in(cpuctx, ctx, NULL); + perf_event_sched_in(cpuctx, ctx, NULL, 0); =20 perf_ctx_sched_task_cb(cpuctx->task_ctx, true); =20 @@ -6040,6 +6041,71 @@ void perf_put_mediated_pmu(void) } EXPORT_SYMBOL_GPL(perf_put_mediated_pmu); =20 +static inline void perf_host_exit(struct perf_cpu_context *cpuctx) +{ + perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST); + ctx_sched_out(&cpuctx->ctx, NULL, EVENT_GUEST); + perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST); + if (cpuctx->task_ctx) { + perf_ctx_disable(cpuctx->task_ctx, EVENT_GUEST); + task_ctx_sched_out(cpuctx->task_ctx, NULL, EVENT_GUEST); + perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST); + } +} + +/* When entering a guest, schedule out all exclude_guest events. 
*/ +void perf_guest_enter(void) +{ + struct perf_cpu_context *cpuctx =3D this_cpu_ptr(&perf_cpu_context); + + lockdep_assert_irqs_disabled(); + + perf_ctx_lock(cpuctx, cpuctx->task_ctx); + + if (WARN_ON_ONCE(__this_cpu_read(perf_in_guest))) + goto unlock; + + perf_host_exit(cpuctx); + + __this_cpu_write(perf_in_guest, true); + +unlock: + perf_ctx_unlock(cpuctx, cpuctx->task_ctx); +} +EXPORT_SYMBOL_GPL(perf_guest_enter); + +static inline void perf_host_enter(struct perf_cpu_context *cpuctx) +{ + perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST); + if (cpuctx->task_ctx) + perf_ctx_disable(cpuctx->task_ctx, EVENT_GUEST); + + perf_event_sched_in(cpuctx, cpuctx->task_ctx, NULL, EVENT_GUEST); + + if (cpuctx->task_ctx) + perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST); + perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST); +} + +void perf_guest_exit(void) +{ + struct perf_cpu_context *cpuctx =3D this_cpu_ptr(&perf_cpu_context); + + lockdep_assert_irqs_disabled(); + + perf_ctx_lock(cpuctx, cpuctx->task_ctx); + + if (WARN_ON_ONCE(!__this_cpu_read(perf_in_guest))) + goto unlock; + + perf_host_enter(cpuctx); + + __this_cpu_write(perf_in_guest, false); +unlock: + perf_ctx_unlock(cpuctx, cpuctx->task_ctx); +} +EXPORT_SYMBOL_GPL(perf_guest_exit); + /* * Holding the top-level event's child_mutex means that any * descendant process that has inherited this event will block --=20 2.49.0.395.g12beb8f557-goog From nobody Fri Dec 19 17:37:57 2025 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DB60326562B for ; Mon, 24 Mar 2025 17:33:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742837583; cv=none; 
Date: Mon, 24 Mar 2025 17:30:46 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-7-mizhang@google.com>
Subject: [PATCH v4 06/38] x86/irq: Factor out common code for installing kvm irq handler
From: Mingwei Zhang

From: Xiong Zhang

KVM will register irq handlers for both POSTED_INTR_WAKEUP_VECTOR and
KVM_GUEST_PMI_VECTOR. Rename the existing
kvm_set_posted_intr_wakeup_handler() to x86_set_kvm_irq_handler() and use
the vector input parameter to distinguish POSTED_INTR_WAKEUP_VECTOR from
KVM_GUEST_PMI_VECTOR.

The caller should call x86_set_kvm_irq_handler() once to register a
non-dummy handler for each vector. If a non-dummy handler is already
registered for a vector and the caller tries to register the same or a
different non-dummy handler again, the second call triggers a warning.
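The register-once rule described above can be sketched as a userspace model. This is illustrative only: `sketch_set_handler()`, the two-entry `slots` array, and `real_handler` are hypothetical stand-ins for the kernel's per-vector handler pointers, and the `-1` return stands in for `WARN_ON_ONCE(1)`.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Model of the x86_set_kvm_irq_handler() rules in this patch: a vector's
 * slot may go dummy -> real (fresh registration) or real -> dummy
 * (unregistration); installing a second real handler over an existing
 * real one is rejected.
 */
static void dummy_handler(void) {}
typedef void (*irq_handler_t)(void);
static irq_handler_t slots[2] = { dummy_handler, dummy_handler };

static int sketch_set_handler(int vector, irq_handler_t handler)
{
	if (!handler)
		handler = dummy_handler;	/* NULL means "unregister" */

	/* Allowed only when installing dummy, or when the slot is still dummy. */
	if (handler != dummy_handler && slots[vector] != dummy_handler)
		return -1;			/* kernel: WARN_ON_ONCE(1) */

	slots[vector] = handler;
	return 0;
}

static void real_handler(void) {}
```

The kernel additionally calls `synchronize_rcu()` on unregistration so in-flight interrupts finish before the handler goes away; that detail is omitted here.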
Suggested-by: Sean Christopherson
Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/include/asm/irq.h |  2 +-
 arch/x86/kernel/irq.c      | 18 ++++++++++++------
 arch/x86/kvm/vmx/vmx.c     |  4 ++--
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h
index 194dfff84cb1..050a247b69b4 100644
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -30,7 +30,7 @@ struct irq_desc;
 extern void fixup_irqs(void);
 
 #if IS_ENABLED(CONFIG_KVM)
-extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void));
+void x86_set_kvm_irq_handler(u8 vector, void (*handler)(void));
 #endif
 
 extern void (*x86_platform_ipi_callback)(void);
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 385e3a5fc304..18cd418fe106 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -312,16 +312,22 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_x86_platform_ipi)
 static void dummy_handler(void) {}
 static void (*kvm_posted_intr_wakeup_handler)(void) = dummy_handler;
 
-void kvm_set_posted_intr_wakeup_handler(void (*handler)(void))
+void x86_set_kvm_irq_handler(u8 vector, void (*handler)(void))
 {
-	if (handler)
+	if (!handler)
+		handler = dummy_handler;
+
+	if (vector == POSTED_INTR_WAKEUP_VECTOR &&
+	    (handler == dummy_handler ||
+	     kvm_posted_intr_wakeup_handler == dummy_handler))
 		kvm_posted_intr_wakeup_handler = handler;
-	else {
-		kvm_posted_intr_wakeup_handler = dummy_handler;
+	else
+		WARN_ON_ONCE(1);
+
+	if (handler == dummy_handler)
 		synchronize_rcu();
-	}
 }
-EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wakeup_handler);
+EXPORT_SYMBOL_GPL(x86_set_kvm_irq_handler);
 
 /*
  * Handler for POSTED_INTERRUPT_VECTOR.
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6c56d5235f0f..00ac94535c21 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8279,7 +8279,7 @@ void vmx_migrate_timers(struct kvm_vcpu *vcpu)
 
 void vmx_hardware_unsetup(void)
 {
-	kvm_set_posted_intr_wakeup_handler(NULL);
+	x86_set_kvm_irq_handler(POSTED_INTR_WAKEUP_VECTOR, NULL);
 
 	if (nested)
 		nested_vmx_hardware_unsetup();
@@ -8583,7 +8583,7 @@ __init int vmx_hardware_setup(void)
 	if (r && nested)
 		nested_vmx_hardware_unsetup();
 
-	kvm_set_posted_intr_wakeup_handler(pi_wakeup_handler);
+	x86_set_kvm_irq_handler(POSTED_INTR_WAKEUP_VECTOR, pi_wakeup_handler);
 
 	return r;
 }
-- 
2.49.0.395.g12beb8f557-goog
Date: Mon, 24 Mar 2025 17:30:47 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-8-mizhang@google.com>
Subject: [PATCH v4 07/38] perf: core/x86: Register a new vector for KVM GUEST PMI
From: Mingwei Zhang

From: Xiong Zhang

Create a new vector in the host IDT for kvm guest PMI handling within
the mediated passthrough vPMU. In addition, add guest PMI handler
registration to x86_set_kvm_irq_handler().
This is preparation work so that the mediated passthrough vPMU can handle
kvm guest PMIs without interference from the host PMU's PMI handler.

Signed-off-by: Xiong Zhang
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/include/asm/hardirq.h                |  1 +
 arch/x86/include/asm/idtentry.h               |  1 +
 arch/x86/include/asm/irq_vectors.h            |  5 ++++-
 arch/x86/kernel/idt.c                         |  1 +
 arch/x86/kernel/irq.c                         | 21 +++++++++++++++++++
 .../beauty/arch/x86/include/asm/irq_vectors.h |  5 ++++-
 6 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index 6ffa8b75f4cd..25fac35b9a29 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -19,6 +19,7 @@ typedef struct {
 	unsigned int kvm_posted_intr_ipis;
 	unsigned int kvm_posted_intr_wakeup_ipis;
 	unsigned int kvm_posted_intr_nested_ipis;
+	unsigned int kvm_guest_pmis;
 #endif
 	unsigned int x86_platform_ipis;	/* arch dependent */
 	unsigned int apic_perf_irqs;
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index ad5c68f0509d..b0cb3220e1bb 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -745,6 +745,7 @@ DECLARE_IDTENTRY_SYSVEC(IRQ_WORK_VECTOR, sysvec_irq_work);
 DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_VECTOR, sysvec_kvm_posted_intr_ipi);
 DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_WAKEUP_VECTOR, sysvec_kvm_posted_intr_wakeup_ipi);
 DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_NESTED_VECTOR, sysvec_kvm_posted_intr_nested_ipi);
+DECLARE_IDTENTRY_SYSVEC(KVM_GUEST_PMI_VECTOR, sysvec_kvm_guest_pmi_handler);
 #else
 # define fred_sysvec_kvm_posted_intr_ipi		NULL
 # define fred_sysvec_kvm_posted_intr_wakeup_ipi	NULL
diff --git a/arch/x86/include/asm/irq_vectors.h b/arch/x86/include/asm/irq_vectors.h
index 47051871b436..250cdab11306 100644
--- a/arch/x86/include/asm/irq_vectors.h
+++ b/arch/x86/include/asm/irq_vectors.h
@@ -77,7 +77,10 @@
  */
 #define IRQ_WORK_VECTOR			0xf6
 
-/* 0xf5 - unused, was UV_BAU_MESSAGE */
+#if IS_ENABLED(CONFIG_KVM)
+#define KVM_GUEST_PMI_VECTOR		0xf5
+#endif
+
 #define DEFERRED_ERROR_VECTOR		0xf4
 
 /* Vector on which hypervisor callbacks will be delivered */
diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
index f445bec516a0..0bec4c7e2308 100644
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -157,6 +157,7 @@ static const __initconst struct idt_data apic_idts[] = {
 	INTG(POSTED_INTR_VECTOR,	asm_sysvec_kvm_posted_intr_ipi),
 	INTG(POSTED_INTR_WAKEUP_VECTOR,	asm_sysvec_kvm_posted_intr_wakeup_ipi),
 	INTG(POSTED_INTR_NESTED_VECTOR,	asm_sysvec_kvm_posted_intr_nested_ipi),
+	INTG(KVM_GUEST_PMI_VECTOR,	asm_sysvec_kvm_guest_pmi_handler),
 # endif
 # ifdef CONFIG_IRQ_WORK
 	INTG(IRQ_WORK_VECTOR,		asm_sysvec_irq_work),
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 18cd418fe106..b29714e23fc4 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -183,6 +183,12 @@ int arch_show_interrupts(struct seq_file *p, int prec)
 		seq_printf(p, "%10u ",
 			   irq_stats(j)->kvm_posted_intr_wakeup_ipis);
 	seq_puts(p, "  Posted-interrupt wakeup event\n");
+
+	seq_printf(p, "%*s: ", prec, "VPMU");
+	for_each_online_cpu(j)
+		seq_printf(p, "%10u ",
+			   irq_stats(j)->kvm_guest_pmis);
+	seq_puts(p, "  KVM GUEST PMI\n");
 #endif
 #ifdef CONFIG_X86_POSTED_MSI
 	seq_printf(p, "%*s: ", prec, "PMN");
@@ -311,6 +317,7 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_x86_platform_ipi)
 #if IS_ENABLED(CONFIG_KVM)
 static void dummy_handler(void) {}
 static void (*kvm_posted_intr_wakeup_handler)(void) = dummy_handler;
+static void (*kvm_guest_pmi_handler)(void) = dummy_handler;
 
 void x86_set_kvm_irq_handler(u8 vector, void (*handler)(void))
 {
@@ -321,6 +328,10 @@ void x86_set_kvm_irq_handler(u8 vector, void (*handler)(void))
 	    (handler == dummy_handler ||
 	     kvm_posted_intr_wakeup_handler == dummy_handler))
 		kvm_posted_intr_wakeup_handler = handler;
+	else if (vector == KVM_GUEST_PMI_VECTOR &&
+		 (handler == dummy_handler ||
+		  kvm_guest_pmi_handler == dummy_handler))
+		kvm_guest_pmi_handler = handler;
 	else
 		WARN_ON_ONCE(1);
 
@@ -356,6 +367,16 @@ DEFINE_IDTENTRY_SYSVEC_SIMPLE(sysvec_kvm_posted_intr_nested_ipi)
 	apic_eoi();
 	inc_irq_stat(kvm_posted_intr_nested_ipis);
 }
+
+/*
+ * Handler for KVM_GUEST_PMI_VECTOR.
+ */
+DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_guest_pmi_handler)
+{
+	apic_eoi();
+	inc_irq_stat(kvm_guest_pmis);
+	kvm_guest_pmi_handler();
+}
 #endif
 
 #ifdef CONFIG_X86_POSTED_MSI
diff --git a/tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h b/tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h
index 47051871b436..250cdab11306 100644
--- a/tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h
+++ b/tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h
@@ -77,7 +77,10 @@
  */
 #define IRQ_WORK_VECTOR			0xf6
 
-/* 0xf5 - unused, was UV_BAU_MESSAGE */
+#if IS_ENABLED(CONFIG_KVM)
+#define KVM_GUEST_PMI_VECTOR		0xf5
+#endif
+
 #define DEFERRED_ERROR_VECTOR		0xf4
 
 /* Vector on which hypervisor callbacks will be delivered */
-- 
2.49.0.395.g12beb8f557-goog
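The dispatch path the patch above wires up (IDT entry → EOI → stat bump → registered handler) can be modeled in userspace. This is a sketch, not kernel code: only the vector number 0xf5 comes from the patch; `sketch_sysvec_kvm_guest_pmi()`, `pmi_seen`, and `guest_pmi_cb` are hypothetical stand-ins for the real `DEFINE_IDTENTRY_SYSVEC` handler, the per-CPU `kvm_guest_pmis` stat, and KVM's callback.

```c
#include <assert.h>

/* The vector value matches the patch (previously unused 0xf5). */
#define KVM_GUEST_PMI_VECTOR 0xf5

static unsigned int kvm_guest_pmis;	/* model of the per-CPU irq stat */
static int pmi_seen;

static void pmi_dummy(void) {}
static void (*kvm_guest_pmi_handler)(void) = pmi_dummy;

static void guest_pmi_cb(void) { pmi_seen++; }

/* Model of sysvec_kvm_guest_pmi_handler(): EOI, count, dispatch. */
static void sketch_sysvec_kvm_guest_pmi(void)
{
	/* apic_eoi() would go here in the kernel */
	kvm_guest_pmis++;		/* inc_irq_stat(kvm_guest_pmis) */
	kvm_guest_pmi_handler();	/* dummy until KVM registers */
}
```

Note the handler slot starts out pointing at a dummy, so a stray PMI before KVM loads is counted but otherwise harmless.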
Date: Mon, 24 Mar 2025 17:30:48 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-9-mizhang@google.com>
Subject: [PATCH v4 08/38] KVM: x86/pmu: Register KVM_GUEST_PMI_VECTOR handler
From: Mingwei Zhang

From: Xiong Zhang

Add functions to register/unregister the KVM guest PMI handler at KVM
module initialization and teardown. This allows a host PMU with
passthrough capability enabled to switch the PMI handler at PMU context
switch.

Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/x86.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 02159c967d29..72995952978a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13984,6 +13984,16 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 }
 EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
 
+static void kvm_handle_guest_pmi(void)
+{
+	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+	if (WARN_ON_ONCE(!vcpu))
+		return;
+
+	kvm_make_request(KVM_REQ_PMI, vcpu);
+}
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
@@ -14021,12 +14031,14 @@ static int __init kvm_x86_init(void)
 
 	kvm_mmu_x86_module_init();
 	mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
+	x86_set_kvm_irq_handler(KVM_GUEST_PMI_VECTOR, kvm_handle_guest_pmi);
 	return 0;
 }
 module_init(kvm_x86_init);
 
 static void __exit kvm_x86_exit(void)
 {
+	x86_set_kvm_irq_handler(KVM_GUEST_PMI_VECTOR, NULL);
 	WARN_ON_ONCE(static_branch_unlikely(&kvm_has_noapic_vcpu));
 }
 module_exit(kvm_x86_exit);
-- 
2.49.0.395.g12beb8f557-goog
Date: Mon, 24 Mar 2025 17:30:49 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-10-mizhang@google.com>
Subject: [PATCH v4 09/38] perf: Add switch_guest_ctx() interface
From: Mingwei Zhang

From: Kan Liang

When entering/exiting a guest, some guest contexts have to be switched.
For example, there is a dedicated interrupt vector for guests on Intel
platforms. When the PMI switches to a new guest vector, the guest_lvtpc
value needs to be reflected onto the HW: e.g., when the guest clears the
PMI mask bit, the HW PMI mask bit should be cleared as well, so that PMIs
can keep being generated for the guest. So a guest_lvtpc parameter is
added to perf_guest_enter() and switch_guest_ctx().

Add a dedicated list to track all the PMUs with the PASSTHROUGH cap,
which may require switching the guest context. This avoids walking the
huge pmus list.
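The dedicated-list idea above can be modeled in a short userspace sketch. This is illustrative only: the `switch_guest_ctx(bool enter, void *data)` callback shape and the mediated-PMU list mirror the patch, while `sketch_pmu`, `sketch_register_mediated()`, and `lvtpc_switch` are hypothetical stand-ins (the real list is per-CPU and spinlock-protected).

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Only PMUs with the mediated-vPMU capability are linked, so guest
 * enter/exit walks a short list instead of every registered PMU. */
struct sketch_pmu {
	void (*switch_guest_ctx)(bool enter, void *data);
	struct sketch_pmu *next;
};

static struct sketch_pmu *mediated_pmus;	/* model of the per-CPU list */
static int enters, exits;

static void lvtpc_switch(bool enter, void *data)
{
	(void)data;	/* would carry e.g. the guest_lvtpc value */
	if (enter)
		enters++;	/* e.g. program the guest PMI vector */
	else
		exits++;	/* restore the host PMI vector */
}

static void sketch_register_mediated(struct sketch_pmu *p)
{
	p->next = mediated_pmus;
	mediated_pmus = p;
}

/* Model of the core walking the mediated list on guest enter/exit. */
static void sketch_switch_guest_ctx(bool enter, void *data)
{
	for (struct sketch_pmu *p = mediated_pmus; p; p = p->next)
		p->switch_guest_ctx(enter, data);
}

static struct sketch_pmu core_pmu = { .switch_guest_ctx = lvtpc_switch };
```

The design point is the same as the commit message's: keeping a side list of capable PMUs makes the hot VM-entry/VM-exit path O(mediated PMUs) rather than O(all PMUs).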
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Kan Liang
Signed-off-by: Mingwei Zhang
---
 include/linux/perf_event.h | 17 +++++++++++--
 kernel/events/core.c       | 51 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 37187ee8e226..58c1cf6939bf 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -584,6 +584,11 @@ struct pmu {
 	 * Check period value for PERF_EVENT_IOC_PERIOD ioctl.
 	 */
 	int (*check_period)		(struct perf_event *event, u64 value); /* optional */
+
+	/*
+	 * Switch guest context when a guest enter/exit, e.g., interrupt vectors.
+	 */
+	void (*switch_guest_ctx)	(bool enter, void *data); /* optional */
 };
 
 enum perf_addr_filter_action_t {
@@ -1030,6 +1035,11 @@ struct perf_event_context {
 	local_t				nr_no_switch_fast;
 };
 
+struct mediated_pmus_list {
+	raw_spinlock_t			lock;
+	struct list_head		list;
+};
+
 struct perf_cpu_pmu_context {
 	struct perf_event_pmu_context	epc;
 	struct perf_event_pmu_context	*task_epc;
@@ -1044,6 +1054,9 @@ struct perf_cpu_pmu_context {
 	struct hrtimer			hrtimer;
 	ktime_t				hrtimer_interval;
 	unsigned int			hrtimer_active;
+
+	/* Track the PMU with PERF_PMU_CAP_MEDIATED_VPMU cap */
+	struct list_head		mediated_entry;
 };
 
 /**
@@ -1822,7 +1835,7 @@ extern int perf_event_period(struct perf_event *event, u64 value);
 extern u64 perf_event_pause(struct perf_event *event, bool reset);
 int perf_get_mediated_pmu(void);
 void perf_put_mediated_pmu(void);
-void perf_guest_enter(void);
+void perf_guest_enter(u32 guest_lvtpc);
 void perf_guest_exit(void);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
@@ -1921,7 +1934,7 @@ static inline int perf_get_mediated_pmu(void)
 }
 
 static inline void perf_put_mediated_pmu(void)			{ }
-static inline void perf_guest_enter(void)			{ }
+static inline void perf_guest_enter(u32 guest_lvtpc)		{ }
 static inline void perf_guest_exit(void)			{ }
 #endif
 
diff --git a/kernel/events/core.c
b/kernel/events/core.c
index d05487d465c9..406b86641f02 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -451,6 +451,7 @@ static inline bool is_include_guest_event(struct perf_event *event)
 static LIST_HEAD(pmus);
 static DEFINE_MUTEX(pmus_lock);
 static struct srcu_struct pmus_srcu;
+static DEFINE_PER_CPU(struct mediated_pmus_list, mediated_pmus);
 static cpumask_var_t perf_online_mask;
 static cpumask_var_t perf_online_core_mask;
 static cpumask_var_t perf_online_die_mask;
@@ -6053,8 +6054,26 @@ static inline void perf_host_exit(struct perf_cpu_context *cpuctx)
 	}
 }
 
+static void perf_switch_guest_ctx(bool enter, u32 guest_lvtpc)
+{
+	struct mediated_pmus_list *pmus = this_cpu_ptr(&mediated_pmus);
+	struct perf_cpu_pmu_context *cpc;
+	struct pmu *pmu;
+
+	lockdep_assert_irqs_disabled();
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(cpc, &pmus->list, mediated_entry) {
+		pmu = cpc->epc.pmu;
+
+		if (pmu->switch_guest_ctx)
+			pmu->switch_guest_ctx(enter, (void *)&guest_lvtpc);
+	}
+	rcu_read_unlock();
+}
+
 /*
  * When entering a guest, schedule out all exclude_guest events.
 */
-void perf_guest_enter(void)
+void perf_guest_enter(u32 guest_lvtpc)
 {
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
 
@@ -6067,6 +6086,8 @@ void perf_guest_enter(void)
 
 	perf_host_exit(cpuctx);
 
+	perf_switch_guest_ctx(true, guest_lvtpc);
+
 	__this_cpu_write(perf_in_guest, true);
 
 unlock:
@@ -6098,6 +6119,8 @@ void perf_guest_exit(void)
 	if (WARN_ON_ONCE(!__this_cpu_read(perf_in_guest)))
 		goto unlock;
 
+	perf_switch_guest_ctx(false, 0);
+
 	perf_host_enter(cpuctx);
 
 	__this_cpu_write(perf_in_guest, false);
@@ -12104,6 +12127,15 @@ int perf_pmu_register(struct pmu *pmu, const char *name, int type)
 		cpc = per_cpu_ptr(pmu->cpu_pmu_context, cpu);
 		__perf_init_event_pmu_context(&cpc->epc, pmu);
 		__perf_mux_hrtimer_init(cpc, cpu);
+
+		if (pmu->capabilities & PERF_PMU_CAP_MEDIATED_VPMU) {
+			struct mediated_pmus_list *pmus;
+
+			pmus = per_cpu_ptr(&mediated_pmus, cpu);
+			raw_spin_lock(&pmus->lock);
+			list_add_rcu(&cpc->mediated_entry, &pmus->list);
+			raw_spin_unlock(&pmus->lock);
+		}
 	}
 
 	if (!pmu->start_txn) {
@@ -12162,6 +12194,20 @@ void perf_pmu_unregister(struct pmu *pmu)
 	mutex_lock(&pmus_lock);
 	list_del_rcu(&pmu->entry);
 
+	if (pmu->capabilities & PERF_PMU_CAP_MEDIATED_VPMU) {
+		struct mediated_pmus_list *pmus;
+		struct perf_cpu_pmu_context *cpc;
+		int cpu;
+
+		for_each_possible_cpu(cpu) {
+			cpc = per_cpu_ptr(pmu->cpu_pmu_context, cpu);
+			pmus = per_cpu_ptr(&mediated_pmus, cpu);
+			raw_spin_lock(&pmus->lock);
+			list_del_rcu(&cpc->mediated_entry);
+			raw_spin_unlock(&pmus->lock);
+		}
+	}
+
 	/*
 	 * We dereference the pmu list under both SRCU and regular RCU, so
 	 * synchronize against both of those.
@@ -14252,6 +14298,9 @@ static void __init perf_event_init_all_cpus(void)
 
 		INIT_LIST_HEAD(&per_cpu(sched_cb_list, cpu));
 
+		INIT_LIST_HEAD(&per_cpu(mediated_pmus.list, cpu));
+		raw_spin_lock_init(&per_cpu(mediated_pmus.lock, cpu));
+
 		cpuctx = per_cpu_ptr(&perf_cpu_context, cpu);
 		__perf_event_init_context(&cpuctx->ctx);
 		lockdep_set_class(&cpuctx->ctx.mutex, &cpuctx_mutex);
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:50 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-11-mizhang@google.com>
Subject: [PATCH v4 10/38] perf/x86: Support switch_guest_ctx interface

From: Kan Liang

Implement the switch_guest_ctx interface for the x86 PMU: switch the
PMI to the dedicated KVM_GUEST_PMI_VECTOR at perf guest enter, and
switch the PMI back to NMI at perf guest exit.
Signed-off-by: Xiong Zhang
Signed-off-by: Kan Liang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/events/core.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 8f218ac0d445..28161d6ff26d 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2677,6 +2677,16 @@ static bool x86_pmu_filter(struct pmu *pmu, int cpu)
 	return ret;
 }
 
+static void x86_pmu_switch_guest_ctx(bool enter, void *data)
+{
+	u32 guest_lvtpc = *(u32 *)data;
+
+	if (enter)
+		apic_write(APIC_LVTPC, guest_lvtpc);
+	else
+		apic_write(APIC_LVTPC, APIC_DM_NMI);
+}
+
 static struct pmu pmu = {
 	.pmu_enable		= x86_pmu_enable,
 	.pmu_disable		= x86_pmu_disable,
@@ -2706,6 +2716,8 @@ static struct pmu pmu = {
 	.aux_output_match	= x86_pmu_aux_output_match,
 
 	.filter			= x86_pmu_filter,
+
+	.switch_guest_ctx	= x86_pmu_switch_guest_ctx,
 };
 
 void arch_perf_update_userpage(struct perf_event *event,
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:51 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-12-mizhang@google.com>
Subject: [PATCH v4 11/38] perf/x86: Forbid PMI handler when guest own PMU

If a guest PMI is delivered after VM-exit, the KVM maskable interrupt
will be held pending until EFLAGS.IF is set. In the meantime, if the
logical processor receives an NMI for any reason at all,
perf_event_nmi_handler() will be invoked. If there is any active perf
event anywhere on the system, x86_pmu_handle_irq() will be invoked, and
it will clear IA32_PERF_GLOBAL_STATUS. By the time KVM's PMI handler is
invoked, it will be a mystery which counter(s) overflowed.

When the LVTPC is using the KVM PMI vector, the PMU is owned by the
guest. If a host NMI lets x86_pmu_handle_irq() run, it restores the PMU
vector to NMI and clears IA32_PERF_GLOBAL_STATUS, which breaks the
guest vPMU passthrough environment.

So modify perf_event_nmi_handler() to check the perf_in_guest per-CPU
variable and, if it is set, simply return without calling
x86_pmu_handle_irq().
Suggested-by: Jim Mattson
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 arch/x86/events/core.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 28161d6ff26d..96a173bbbec2 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -54,6 +54,8 @@ DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events) = {
 	.pmu = &pmu,
 };
 
+static DEFINE_PER_CPU(bool, pmi_vector_is_nmi) = true;
+
 DEFINE_STATIC_KEY_FALSE(rdpmc_never_available_key);
 DEFINE_STATIC_KEY_FALSE(rdpmc_always_available_key);
 DEFINE_STATIC_KEY_FALSE(perf_is_hybrid);
@@ -1737,6 +1739,24 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
 	u64 finish_clock;
 	int ret;
 
+	/*
+	 * When the guest PMU context is loaded, this handler must not run,
+	 * because:
+	 * 1. After perf_guest_enter() is called, and before the CPU enters
+	 *    non-root mode, a host non-PMI NMI could happen, and
+	 *    x86_pmu_handle_irq() would restore the PMU to the NMI vector,
+	 *    destroying the KVM PMI vector setting.
+	 * 2. While the VM is running, a host non-PMI NMI causes a VM exit,
+	 *    and KVM calls the host NMI handler (vmx_vcpu_enter_exit())
+	 *    before it saves the guest PMU context
+	 *    (kvm_pmu_put_guest_context()); x86_pmu_handle_irq() would clear
+	 *    the global_status MSR, which now holds guest status, destroying
+	 *    the guest PMU state.
+	 * 3. After VM exit, but before KVM saves the guest PMU context, a
+	 *    host non-PMI NMI could likewise make x86_pmu_handle_irq() clear
+	 *    the global_status MSR and destroy the guest PMU state.
+	 */
+	if (!this_cpu_read(pmi_vector_is_nmi))
+		return NMI_DONE;
+
 	/*
 	 * All PMUs/events that share this PMI handler should make sure to
 	 * increment active_events for their events.
@@ -2681,10 +2701,13 @@ static void x86_pmu_switch_guest_ctx(bool enter, void *data)
 {
 	u32 guest_lvtpc = *(u32 *)data;
 
-	if (enter)
+	if (enter) {
 		apic_write(APIC_LVTPC, guest_lvtpc);
-	else
+		this_cpu_write(pmi_vector_is_nmi, false);
+	} else {
 		apic_write(APIC_LVTPC, APIC_DM_NMI);
+		this_cpu_write(pmi_vector_is_nmi, true);
+	}
 }
 
 static struct pmu pmu = {
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:52 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-13-mizhang@google.com>
Subject: [PATCH v4 12/38] perf/x86/core: Do not set bit width for unavailable counters

From: Sandipan Das

Not all x86 processors have fixed counters. It may also be the case
that a processor has only fixed counters and no general-purpose
counters. Set the bit widths corresponding to each counter type only if
such counters are available.
Fixes: b3d9468a8bd2 ("perf, x86: Expose perf capability to other modules")
Signed-off-by: Sandipan Das
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/events/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 96a173bbbec2..7c852ee3e217 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -3107,8 +3107,8 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
 	cap->version		= x86_pmu.version;
 	cap->num_counters_gp	= x86_pmu_num_counters(NULL);
 	cap->num_counters_fixed	= x86_pmu_num_counters_fixed(NULL);
-	cap->bit_width_gp	= x86_pmu.cntval_bits;
-	cap->bit_width_fixed	= x86_pmu.cntval_bits;
+	cap->bit_width_gp	= cap->num_counters_gp ? x86_pmu.cntval_bits : 0;
+	cap->bit_width_fixed	= cap->num_counters_fixed ? x86_pmu.cntval_bits : 0;
 	cap->events_mask	= (unsigned int)x86_pmu.events_maskl;
 	cap->events_mask_len	= x86_pmu.events_mask_len;
 	cap->pebs_ept		= x86_pmu.pebs_ept;
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:53 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-14-mizhang@google.com>
Subject: [PATCH v4 13/38] perf/x86/core: Plumb mediated PMU capability from
 x86_pmu to x86_pmu_cap

Plumb the mediated PMU capability through to x86_pmu_cap so that any
kernel entity, such as KVM, can know that the host PMU supports the
mediated PMU mode and has the implementation.

Signed-off-by: Mingwei Zhang
---
 arch/x86/events/core.c            | 1 +
 arch/x86/include/asm/perf_event.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 7c852ee3e217..7a792486d9fb 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -3112,6 +3112,7 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
 	cap->events_mask	= (unsigned int)x86_pmu.events_maskl;
 	cap->events_mask_len	= x86_pmu.events_mask_len;
 	cap->pebs_ept		= x86_pmu.pebs_ept;
+	cap->mediated		= !!(pmu.capabilities & PERF_PMU_CAP_MEDIATED_VPMU);
 }
 EXPORT_SYMBOL_GPL(perf_get_x86_pmu_capability);
 
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 0ba8d20f2d1d..3aee76f3316c 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -285,6 +285,7 @@ struct x86_pmu_capability {
 	unsigned int	events_mask;
 	int		events_mask_len;
 	unsigned int	pebs_ept	:1;
+	unsigned int	mediated	:1;
 };
 
 /*
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Date: Mon, 24 Mar 2025 17:30:54 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-15-mizhang@google.com>
Subject: [PATCH v4 14/38] KVM: x86/pmu: Introduce enable_mediated_pmu global parameter
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", "H. Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Eranian Stephane, Shukla Manali, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

Introduce an enable_mediated_pmu global parameter to control whether mediated vPMU can be enabled at the KVM level. Even when enable_mediated_pmu is set to true in KVM, the user space hypervisor still needs to enable mediated vPMU explicitly by calling the KVM_CAP_PMU_CAPABILITY ioctl. This gives the hypervisor the flexibility to enable or disable mediated vPMU for each VM.

Mediated vPMU depends on PMU features that only exist in higher PMU versions, such as the PERF_GLOBAL_STATUS_SET MSR available in v4+ of the Intel PMU. Thus introduce a pmu_ops field, MIN_MEDIATED_PMU_VERSION, to indicate the minimum host PMU version that mediated vPMU requires.

Currently enable_mediated_pmu is not exposed to user space as a module parameter until all mediated vPMU code is in place.
Suggested-by: Sean Christopherson
Co-developed-by: Mingwei Zhang
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/pmu.c              |  3 ++-
 arch/x86/kvm/pmu.h              | 11 +++++++++
 arch/x86/kvm/svm/pmu.c          |  1 +
 arch/x86/kvm/vmx/capabilities.h |  3 ++-
 arch/x86/kvm/vmx/pmu_intel.c    |  5 ++++
 arch/x86/kvm/vmx/vmx.c          |  3 ++-
 arch/x86/kvm/x86.c              | 44 ++++++++++++++++++++++++++++++---
 arch/x86/kvm/x86.h              |  1 +
 8 files changed, 64 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 75e9cfc689f8..4f455afe4009 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -775,7 +775,8 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->pebs_data_cfg_rsvd = ~0ull;
 	bitmap_zero(pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
 
-	if (!vcpu->kvm->arch.enable_pmu)
+	if (!vcpu->kvm->arch.enable_pmu ||
+	    (!lapic_in_kernel(vcpu) && enable_mediated_pmu))
 		return;
 
 	kvm_pmu_call(refresh)(vcpu);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index ad89d0bd6005..dd45a0c6be74 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -45,6 +45,7 @@ struct kvm_pmu_ops {
 	const u64 EVENTSEL_EVENT;
 	const int MAX_NR_GP_COUNTERS;
 	const int MIN_NR_GP_COUNTERS;
+	const int MIN_MEDIATED_PMU_VERSION;
 };
 
 void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops);
@@ -63,6 +64,12 @@ static inline bool kvm_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
 	return pmu->version > 1;
 }
 
+static inline bool kvm_mediated_pmu_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->kvm->arch.enable_pmu &&
+	       enable_mediated_pmu && vcpu_to_pmu(vcpu)->version;
+}
+
 /*
  * KVM tracks all counters in 64-bit bitmaps, with general purpose counters
  * mapped to bits 31:0 and fixed counters mapped to 63:32, e.g. fixed counter 0
@@ -210,6 +217,10 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
 		enable_pmu = false;
 	}
 
+	if (!enable_pmu || !kvm_pmu_cap.mediated ||
+	    pmu_ops->MIN_MEDIATED_PMU_VERSION > kvm_pmu_cap.version)
+		enable_mediated_pmu = false;
+
 	if (!enable_pmu) {
 		memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
 		return;
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 288f7f2a46f2..c8b9fd9b5350 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -239,4 +239,5 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_MAX_NR_AMD_GP_COUNTERS,
 	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,
+	.MIN_MEDIATED_PMU_VERSION = 2,
 };
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index cb6588238f46..fac2c80ddbab 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -390,7 +390,8 @@ static inline bool vmx_pt_mode_is_host_guest(void)
 
 static inline bool vmx_pebs_supported(void)
 {
-	return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept;
+	return boot_cpu_has(X86_FEATURE_PEBS) &&
+	       !enable_mediated_pmu && kvm_pmu_cap.pebs_ept;
 }
 
 static inline bool cpu_has_notify_vmexit(void)
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 77012b2eca0e..425e93d4b1c6 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -739,4 +739,9 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_MAX_NR_INTEL_GP_COUNTERS,
 	.MIN_NR_GP_COUNTERS = 1,
+	/*
+	 * Intel mediated vPMU support depends on
+	 * MSR_CORE_PERF_GLOBAL_STATUS_SET which is supported from v4+.
+	 */
+	.MIN_MEDIATED_PMU_VERSION = 4,
 };
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 00ac94535c21..a4b5b6455c7b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7916,7 +7916,8 @@ static __init u64 vmx_get_perf_capabilities(void)
 	if (boot_cpu_has(X86_FEATURE_PDCM))
 		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
 
-	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR)) {
+	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR) &&
+	    !enable_mediated_pmu) {
 		x86_perf_get_lbr(&vmx_lbr_caps);
 
 		/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 72995952978a..1ebe169b88b6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -188,6 +188,14 @@ bool __read_mostly enable_pmu = true;
 EXPORT_SYMBOL_GPL(enable_pmu);
 module_param(enable_pmu, bool, 0444);
 
+/*
+ * Enable/disable mediated passthrough PMU virtualization.
+ * Don't expose it to userspace as a module parameter until
+ * all mediated vPMU code is in place.
+ */
+bool __read_mostly enable_mediated_pmu;
+EXPORT_SYMBOL_GPL(enable_mediated_pmu);
+
 bool __read_mostly eager_page_split = true;
 module_param(eager_page_split, bool, 0644);
 
@@ -6643,9 +6651,28 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			break;
 
 		mutex_lock(&kvm->lock);
-		if (!kvm->created_vcpus) {
-			kvm->arch.enable_pmu = !(cap->args[0] & KVM_PMU_CAP_DISABLE);
-			r = 0;
+		/*
+		 * To keep PMU configuration "simple", setting vPMU support is
+		 * disallowed if vCPUs are created, or if mediated PMU support
+		 * was already enabled for the VM.
+		 */
+		if (!kvm->created_vcpus &&
+		    (!enable_mediated_pmu || !kvm->arch.enable_pmu)) {
+			bool pmu_enable = !(cap->args[0] & KVM_PMU_CAP_DISABLE);
+
+			if (enable_mediated_pmu && pmu_enable) {
+				char *err_msg = "Failed to enable mediated vPMU, " \
+					"please disable system wide perf events or nmi_watchdog " \
+					"(echo 0 > /proc/sys/kernel/nmi_watchdog).\n";
+
+				r = perf_get_mediated_pmu();
+				if (r)
+					kvm_err("%s", err_msg);
+			} else
+				r = 0;
+
+			if (!r)
+				kvm->arch.enable_pmu = pmu_enable;
 		}
 		mutex_unlock(&kvm->lock);
 		break;
@@ -12723,7 +12750,14 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm->arch.default_tsc_khz = max_tsc_khz ? : tsc_khz;
 	kvm->arch.apic_bus_cycle_ns = APIC_BUS_CYCLE_NS_DEFAULT;
 	kvm->arch.guest_can_read_msr_platform_info = true;
-	kvm->arch.enable_pmu = enable_pmu;
+
+	/*
+	 * PMU virtualization is opt-in when mediated PMU support is enabled.
+	 * The KVM_CAP_PMU_CAPABILITY ioctl must be called explicitly to
+	 * enable mediated vPMU. For the legacy perf-based vPMU, behavior is
+	 * unchanged and the KVM_CAP_PMU_CAPABILITY ioctl is optional.
+	 */
+	kvm->arch.enable_pmu = enable_pmu && !enable_mediated_pmu;
 
 #if IS_ENABLED(CONFIG_HYPERV)
 	spin_lock_init(&kvm->arch.hv_root_tdp_lock);
@@ -12876,6 +12910,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
 		mutex_unlock(&kvm->slots_lock);
 	}
+	if (kvm->arch.enable_pmu && enable_mediated_pmu)
+		perf_put_mediated_pmu();
 	kvm_unload_vcpu_mmus(kvm);
 	kvm_x86_call(vm_destroy)(kvm);
 	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 91e50a513100..dbf9973b3d09 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -391,6 +391,7 @@ extern struct kvm_caps kvm_caps;
 extern struct kvm_host_values kvm_host;
 
 extern bool enable_pmu;
+extern bool enable_mediated_pmu;
 
 /*
  * Get a filtered version of KVM's supported XCR0 that strips out dynamic
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Date: Mon, 24 Mar 2025 17:30:55 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-16-mizhang@google.com>
Subject: [PATCH v4 15/38] KVM: x86/pmu: Check PMU cpuid configuration from user space
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", "H. Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Eranian Stephane, Shukla Manali, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

Check user space's PMU CPUID configuration and filter out invalid configurations. Both the legacy perf-based vPMU and the mediated vPMU need the kernel to support the local APIC, otherwise a PMI has no way to be injected into the guest. If the kernel doesn't support the local APIC, reject user space's attempt to enable the PMU via CPUID.

The PMU version configured by user space must be no larger than the maximum PMU version KVM supports for the mediated vPMU, otherwise the guest could manipulate unsupported or disallowed PMU MSRs, which is dangerous and harmful. If the PMU version is larger than 1 but smaller than 5, CPUID.0AH:ECX must be 0 as well, as required by the SDM.
Suggested-by: Zide Chen
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/cpuid.c | 15 +++++++++++++++
 arch/x86/kvm/pmu.c   |  7 +++++--
 arch/x86/kvm/pmu.h   |  1 +
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 8eb3a88707f2..f849ced9deba 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -179,6 +179,21 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
 		return -EINVAL;
 	}
 
+	best = kvm_find_cpuid_entry(vcpu, 0xa);
+	if (vcpu->kvm->arch.enable_pmu && best) {
+		union cpuid10_eax eax;
+
+		eax.full = best->eax;
+		if (enable_mediated_pmu &&
+		    eax.split.version_id > kvm_pmu_cap.version)
+			return -EINVAL;
+		if (eax.split.version_id > 0 && !vcpu_pmu_can_enable(vcpu))
+			return -EINVAL;
+		if (eax.split.version_id > 1 && eax.split.version_id < 5 &&
+		    best->ecx != 0)
+			return -EINVAL;
+	}
+
 	/*
 	 * Exposing dynamic xfeatures to the guest requires additional
 	 * enabling in the FPU, e.g. to expand the guest XSAVE state size.
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 4f455afe4009..92c742ead663 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -743,6 +743,10 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 	kvm_pmu_call(reset)(vcpu);
 }
 
+inline bool vcpu_pmu_can_enable(struct kvm_vcpu *vcpu)
+{
+	return vcpu->kvm->arch.enable_pmu && lapic_in_kernel(vcpu);
+}
 
 /*
  * Refresh the PMU configuration for the vCPU, e.g. if userspace changes CPUID
@@ -775,8 +779,7 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->pebs_data_cfg_rsvd = ~0ull;
 	bitmap_zero(pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
 
-	if (!vcpu->kvm->arch.enable_pmu ||
-	    (!lapic_in_kernel(vcpu) && enable_mediated_pmu))
+	if (!vcpu_pmu_can_enable(vcpu))
 		return;
 
 	kvm_pmu_call(refresh)(vcpu);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index dd45a0c6be74..e1d0096f249b 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -284,6 +284,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel);
+bool vcpu_pmu_can_enable(struct kvm_vcpu *vcpu);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);
 
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Date: Mon, 24 Mar 2025 17:30:56 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-17-mizhang@google.com>
Subject: [PATCH v4 16/38] KVM: x86: Rename vmx_vmentry/vmexit_ctrl() helpers
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", "H. Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Eranian Stephane, Shukla Manali, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

Rename the two helpers vmx_vmentry/vmexit_ctrl() to vmx_get_initial_vmentry/vmexit_ctrl() to reflect what they actually return. No functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/vmx/vmx.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a4b5b6455c7b..acd3582874b9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4424,7 +4424,7 @@ static u32 vmx_pin_based_exec_ctrl(struct vcpu_vmx *vmx)
 	return pin_based_exec_ctrl;
 }
 
-static u32 vmx_vmentry_ctrl(void)
+static u32 vmx_get_initial_vmentry_ctrl(void)
 {
 	u32 vmentry_ctrl = vmcs_config.vmentry_ctrl;
 
@@ -4441,7 +4441,7 @@ static u32 vmx_vmentry_ctrl(void)
 	return vmentry_ctrl;
 }
 
-static u32 vmx_vmexit_ctrl(void)
+static u32 vmx_get_initial_vmexit_ctrl(void)
 {
 	u32 vmexit_ctrl = vmcs_config.vmexit_ctrl;
 
@@ -4806,10 +4806,10 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 	if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
 		vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat);
 
-	vm_exit_controls_set(vmx, vmx_vmexit_ctrl());
+	vm_exit_controls_set(vmx, vmx_get_initial_vmexit_ctrl());
 
 	/* 22.2.1, 20.8.1 */
-	vm_entry_controls_set(vmx, vmx_vmentry_ctrl());
+	vm_entry_controls_set(vmx, vmx_get_initial_vmentry_ctrl());
 
 	vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
 	vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Date: Mon, 24 Mar 2025 17:30:57 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-18-mizhang@google.com>
Subject: [PATCH v4 17/38] KVM: x86/pmu: Add perf_capabilities field in struct kvm_host_values{}
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", "H. Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Eranian Stephane, Shukla Manali, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

Add a perf_capabilities field to the kvm_host_values{} structure to cache the host's perf capabilities. KVM needs to know whether the host supports certain PMU capabilities in order to decide whether to pass through or intercept certain PMU MSRs and instructions such as rdpmc. For example, if the host supports PERF_METRICS but the guest is configured not to, the rdpmc instruction needs to be intercepted.
Co-developed-by: Mingwei Zhang
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/vmx.c | 8 ++------
 arch/x86/kvm/x86.c     | 3 +++
 arch/x86/kvm/x86.h     | 1 +
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index acd3582874b9..ca1c53f855e0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7908,14 +7908,10 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 static __init u64 vmx_get_perf_capabilities(void)
 {
 	u64 perf_cap = PMU_CAP_FW_WRITES;
-	u64 host_perf_cap = 0;
 
 	if (!enable_pmu)
 		return 0;
 
-	if (boot_cpu_has(X86_FEATURE_PDCM))
-		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
-
 	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR) &&
 	    !enable_mediated_pmu) {
 		x86_perf_get_lbr(&vmx_lbr_caps);
@@ -7928,11 +7924,11 @@ static __init u64 vmx_get_perf_capabilities(void)
 		if (!vmx_lbr_caps.has_callstack)
 			memset(&vmx_lbr_caps, 0, sizeof(vmx_lbr_caps));
 		else if (vmx_lbr_caps.nr)
-			perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
+			perf_cap |= kvm_host.perf_capabilities & PMU_CAP_LBR_FMT;
 	}
 
 	if (vmx_pebs_supported()) {
-		perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK;
+		perf_cap |= kvm_host.perf_capabilities & PERF_CAP_PEBS_MASK;
 
 		/*
 		 * Disallow adaptive PEBS as it is functionally broken, can be
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1ebe169b88b6..578e5f110b6c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9786,6 +9786,9 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, kvm_host.arch_capabilities);
 
+	if (boot_cpu_has(X86_FEATURE_PDCM))
+		rdmsrl(MSR_IA32_PERF_CAPABILITIES, kvm_host.perf_capabilities);
+
 	r = ops->hardware_setup();
 	if (r != 0)
 		goto out_mmu_exit;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index dbf9973b3d09..b1df4ad2341b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -46,6 +46,7 @@ struct kvm_host_values {
 	u64 xcr0;
 	u64 xss;
 	u64 arch_capabilities;
+	u64 perf_capabilities;
 };
 
 void kvm_spurious_fault(void);
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:58 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-19-mizhang@google.com>
Subject: [PATCH v4 18/38] KVM: x86/pmu: Move PMU_CAP_{FW_WRITES,LBR_FMT} into msr-index.h header
From: Mingwei Zhang

From: Dapeng Mi

Move PMU_CAP_{FW_WRITES,LBR_FMT} into msr-index.h and rename them with the
PERF_CAP prefix to stay consistent with the other perf capability macros.

No functional change intended.
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/include/asm/msr-index.h | 15 +++++++++------
 arch/x86/kvm/vmx/capabilities.h  |  3 ---
 arch/x86/kvm/vmx/pmu_intel.c     |  4 ++--
 arch/x86/kvm/vmx/vmx.c           | 12 ++++++------
 4 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 72765b2fe0d8..ca70846ffd55 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -305,12 +305,15 @@
 #define PERF_CAP_PT_IDX			16
 
 #define MSR_PEBS_LD_LAT_THRESHOLD	0x000003f6
-#define PERF_CAP_PEBS_TRAP		BIT_ULL(6)
-#define PERF_CAP_ARCH_REG		BIT_ULL(7)
-#define PERF_CAP_PEBS_FORMAT		0xf00
-#define PERF_CAP_PEBS_BASELINE		BIT_ULL(14)
-#define PERF_CAP_PEBS_MASK	(PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \
-				 PERF_CAP_PEBS_FORMAT | PERF_CAP_PEBS_BASELINE)
+
+#define PERF_CAP_LBR_FMT		0x3f
+#define PERF_CAP_PEBS_TRAP		BIT_ULL(6)
+#define PERF_CAP_ARCH_REG		BIT_ULL(7)
+#define PERF_CAP_PEBS_FORMAT		0xf00
+#define PERF_CAP_FW_WRITES		BIT_ULL(13)
+#define PERF_CAP_PEBS_BASELINE		BIT_ULL(14)
+#define PERF_CAP_PEBS_MASK	(PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \
+				 PERF_CAP_PEBS_FORMAT | PERF_CAP_PEBS_BASELINE)
 
 #define MSR_IA32_RTIT_CTL		0x00000570
 #define RTIT_CTL_TRACEEN		BIT(0)
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index fac2c80ddbab..013536fde10b 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -21,9 +21,6 @@ extern int __read_mostly pt_mode;
 #define PT_MODE_SYSTEM		0
 #define PT_MODE_HOST_GUEST	1
 
-#define PMU_CAP_FW_WRITES	(1ULL << 13)
-#define PMU_CAP_LBR_FMT		0x3f
-
 struct nested_vmx_msrs {
 	/*
 	 * We only store the "true" versions of the VMX capability MSRs. We
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 425e93d4b1c6..fc017e9a6a0c 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -118,7 +118,7 @@ static inline u64 vcpu_get_perf_capabilities(struct kvm_vcpu *vcpu)
 
 static inline bool fw_writes_is_enabled(struct kvm_vcpu *vcpu)
 {
-	return (vcpu_get_perf_capabilities(vcpu) & PMU_CAP_FW_WRITES) != 0;
+	return (vcpu_get_perf_capabilities(vcpu) & PERF_CAP_FW_WRITES) != 0;
 }
 
 static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
@@ -543,7 +543,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 
 	perf_capabilities = vcpu_get_perf_capabilities(vcpu);
 	if (cpuid_model_is_consistent(vcpu) &&
-	    (perf_capabilities & PMU_CAP_LBR_FMT))
+	    (perf_capabilities & PERF_CAP_LBR_FMT))
 		memcpy(&lbr_desc->records, &vmx_lbr_caps, sizeof(vmx_lbr_caps));
 	else
 		lbr_desc->records.nr = 0;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ca1c53f855e0..9c4b3c2b1d65 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2188,7 +2188,7 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
 	    (host_initiated || guest_cpu_cap_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT)))
 		debugctl |= DEBUGCTLMSR_BUS_LOCK_DETECT;
 
-	if ((kvm_caps.supported_perf_cap & PMU_CAP_LBR_FMT) &&
+	if ((kvm_caps.supported_perf_cap & PERF_CAP_LBR_FMT) &&
 	    (host_initiated || intel_pmu_lbr_is_enabled(vcpu)))
 		debugctl |= DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
 
@@ -2464,9 +2464,9 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		vmx->pt_desc.guest.addr_a[index / 2] = data;
 		break;
 	case MSR_IA32_PERF_CAPABILITIES:
-		if (data & PMU_CAP_LBR_FMT) {
-			if ((data & PMU_CAP_LBR_FMT) !=
-			    (kvm_caps.supported_perf_cap & PMU_CAP_LBR_FMT))
+		if (data & PERF_CAP_LBR_FMT) {
+			if ((data & PERF_CAP_LBR_FMT) !=
+			    (kvm_caps.supported_perf_cap & PERF_CAP_LBR_FMT))
 				return 1;
 			if (!cpuid_model_is_consistent(vcpu))
 				return 1;
@@ -7907,7 +7907,7 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 static __init u64 vmx_get_perf_capabilities(void)
 {
-	u64 perf_cap = PMU_CAP_FW_WRITES;
+	u64 perf_cap = PERF_CAP_FW_WRITES;
 
 	if (!enable_pmu)
 		return 0;
@@ -7924,7 +7924,7 @@ static __init u64 vmx_get_perf_capabilities(void)
 		if (!vmx_lbr_caps.has_callstack)
 			memset(&vmx_lbr_caps, 0, sizeof(vmx_lbr_caps));
 		else if (vmx_lbr_caps.nr)
-			perf_cap |= kvm_host.perf_capabilities & PMU_CAP_LBR_FMT;
+			perf_cap |= kvm_host.perf_capabilities & PERF_CAP_LBR_FMT;
 	}
 
 	if (vmx_pebs_supported()) {
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:30:59 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-20-mizhang@google.com>
Subject: [PATCH v4 19/38] KVM: VMX: Add macros to wrap around {secondary,tertiary}_exec_controls_changebit()
From: Mingwei Zhang

From: Dapeng Mi

Add macros wrapping the helpers that change VMCS bits, to simplify
clearing and setting VMX exec control bits.

No functional change intended.
Suggested-by: Sean Christopherson
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/vmx/vmx.c | 20 +++++++-------------
 arch/x86/kvm/vmx/vmx.h |  8 ++++++++
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9c4b3c2b1d65..ff66f17d6358 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4471,19 +4471,13 @@ void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 
 	pin_controls_set(vmx, vmx_pin_based_exec_ctrl(vmx));
 
-	if (kvm_vcpu_apicv_active(vcpu)) {
-		secondary_exec_controls_setbit(vmx,
-					       SECONDARY_EXEC_APIC_REGISTER_VIRT |
-					       SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
-		if (enable_ipiv)
-			tertiary_exec_controls_setbit(vmx, TERTIARY_EXEC_IPI_VIRT);
-	} else {
-		secondary_exec_controls_clearbit(vmx,
-						 SECONDARY_EXEC_APIC_REGISTER_VIRT |
-						 SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
-		if (enable_ipiv)
-			tertiary_exec_controls_clearbit(vmx, TERTIARY_EXEC_IPI_VIRT);
-	}
+	secondary_exec_controls_changebit(vmx,
+					  SECONDARY_EXEC_APIC_REGISTER_VIRT |
+					  SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY,
+					  kvm_vcpu_apicv_active(vcpu));
+	if (enable_ipiv)
+		tertiary_exec_controls_changebit(vmx, TERTIARY_EXEC_IPI_VIRT,
+						 kvm_vcpu_apicv_active(vcpu));
 
 	vmx_update_msr_bitmap_x2apic(vcpu);
 }
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 8b111ce1087c..5c505af553c8 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -612,6 +612,14 @@ static __always_inline void lname##_controls_clearbit(struct vcpu_vmx *vmx, u##bits val) \
 {											\
 	BUILD_BUG_ON(!(val & (KVM_REQUIRED_VMX_##uname | KVM_OPTIONAL_VMX_##uname)));	\
 	lname##_controls_set(vmx, lname##_controls_get(vmx) & ~val);			\
+}											\
+static __always_inline void lname##_controls_changebit(struct vcpu_vmx *vmx, u##bits val, \
+						       bool set)			\
+{											\
+	if (set)									\
+		lname##_controls_setbit(vmx, val);					\
+	else										\
+		lname##_controls_clearbit(vmx, val);					\
 }
 BUILD_CONTROLS_SHADOW(vm_entry, VM_ENTRY_CONTROLS, 32)
 BUILD_CONTROLS_SHADOW(vm_exit, VM_EXIT_CONTROLS, 32)
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:31:00 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-21-mizhang@google.com>
Subject: [PATCH v4 20/38] KVM: x86/pmu: Check if mediated vPMU can intercept rdpmc
From: Mingwei Zhang

From: Dapeng Mi

Check whether rdpmc can run unintercepted for the mediated vPMU. Simply
put, if the guest owns all the PMU counters in the mediated vPMU, rdpmc
interception should be disabled to avoid the performance impact; otherwise
rdpmc has to be intercepted so the guest cannot read host counter data via
the rdpmc instruction.
Co-developed-by: Mingwei Zhang Signed-off-by: Mingwei Zhang Co-developed-by: Sandipan Das Signed-off-by: Sandipan Das Signed-off-by: Dapeng Mi --- arch/x86/include/asm/msr-index.h | 1 + arch/x86/kvm/pmu.c | 34 ++++++++++++++++++++++++++++++++ arch/x86/kvm/pmu.h | 19 ++++++++++++++++++ arch/x86/kvm/svm/pmu.c | 14 ++++++++++++- arch/x86/kvm/vmx/pmu_intel.c | 18 ++++++++--------- 5 files changed, 76 insertions(+), 10 deletions(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-in= dex.h index ca70846ffd55..337f4b0a2998 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -312,6 +312,7 @@ #define PERF_CAP_PEBS_FORMAT 0xf00 #define PERF_CAP_FW_WRITES BIT_ULL(13) #define PERF_CAP_PEBS_BASELINE BIT_ULL(14) +#define PERF_CAP_PERF_METRICS BIT_ULL(15) #define PERF_CAP_PEBS_MASK (PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \ PERF_CAP_PEBS_FORMAT | PERF_CAP_PEBS_BASELINE) =20 diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 92c742ead663..6ad71752be4b 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -604,6 +604,40 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx,= u64 *data) return 0; } =20 +inline bool kvm_rdpmc_in_guest(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *pmu =3D vcpu_to_pmu(vcpu); + + if (!kvm_mediated_pmu_enabled(vcpu)) + return false; + + /* + * VMware allows access to these Pseduo-PMCs even when read via RDPMC + * in Ring3 when CR4.PCE=3D0. + */ + if (enable_vmware_backdoor) + return false; + + /* + * FIXME: In theory, perf metrics is always combined with fixed + * counter 3. it's fair enough to compare the guest and host + * fixed counter number and don't need to check perf metrics + * explicitly. However kvm_pmu_cap.num_counters_fixed is limited + * KVM_MAX_NR_FIXED_COUNTERS (3) as fixed counter 3 is not + * supported now. perf metrics is still needed to be checked + * explicitly here. Once fixed counter 3 is supported, the perf + * metrics checking can be removed. 
+ */ + return pmu->nr_arch_gp_counters =3D=3D kvm_pmu_cap.num_counters_gp && + pmu->nr_arch_fixed_counters =3D=3D kvm_pmu_cap.num_counters_fixed = && + vcpu_has_perf_metrics(vcpu) =3D=3D kvm_host_has_perf_metrics() && + pmu->counter_bitmask[KVM_PMC_GP] =3D=3D + (BIT_ULL(kvm_pmu_cap.bit_width_gp) - 1) && + pmu->counter_bitmask[KVM_PMC_FIXED] =3D=3D + (BIT_ULL(kvm_pmu_cap.bit_width_fixed) - 1); +} +EXPORT_SYMBOL_GPL(kvm_rdpmc_in_guest); + void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu) { if (lapic_in_kernel(vcpu)) { diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index e1d0096f249b..509c995b7871 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -271,6 +271,24 @@ static inline bool pmc_is_globally_enabled(struct kvm_= pmc *pmc) return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl); } =20 +static inline u64 vcpu_get_perf_capabilities(struct kvm_vcpu *vcpu) +{ + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_PDCM)) + return 0; + + return vcpu->arch.perf_capabilities; +} + +static inline bool vcpu_has_perf_metrics(struct kvm_vcpu *vcpu) +{ + return !!(vcpu_get_perf_capabilities(vcpu) & PERF_CAP_PERF_METRICS); +} + +static inline bool kvm_host_has_perf_metrics(void) +{ + return !!(kvm_host.perf_capabilities & PERF_CAP_PERF_METRICS); +} + void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu); void kvm_pmu_handle_event(struct kvm_vcpu *vcpu); int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data); @@ -287,6 +305,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 e= ventsel); bool vcpu_pmu_can_enable(struct kvm_vcpu *vcpu); =20 bool is_vmware_backdoor_pmc(u32 pmc_idx); +bool kvm_rdpmc_in_guest(struct kvm_vcpu *vcpu); =20 extern struct kvm_pmu_ops intel_pmu_ops; extern struct kvm_pmu_ops amd_pmu_ops; diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index c8b9fd9b5350..153972e944eb 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -173,7 +173,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struc= t msr_data 
*msr_info) return 1; } =20 -static void amd_pmu_refresh(struct kvm_vcpu *vcpu) +static void __amd_pmu_refresh(struct kvm_vcpu *vcpu) { struct kvm_pmu *pmu =3D vcpu_to_pmu(vcpu); union cpuid_0x80000022_ebx ebx; @@ -212,6 +212,18 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu) bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters); } =20 +static void amd_pmu_refresh(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm =3D to_svm(vcpu); + + __amd_pmu_refresh(vcpu); + + if (kvm_rdpmc_in_guest(vcpu)) + svm_clr_intercept(svm, INTERCEPT_RDPMC); + else + svm_set_intercept(svm, INTERCEPT_RDPMC); +} + static void amd_pmu_init(struct kvm_vcpu *vcpu) { struct kvm_pmu *pmu =3D vcpu_to_pmu(vcpu); diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index fc017e9a6a0c..2a5f79206b02 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -108,14 +108,6 @@ static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct k= vm_vcpu *vcpu, return &counters[array_index_nospec(idx, num_counters)]; } =20 -static inline u64 vcpu_get_perf_capabilities(struct kvm_vcpu *vcpu) -{ - if (!guest_cpu_cap_has(vcpu, X86_FEATURE_PDCM)) - return 0; - - return vcpu->arch.perf_capabilities; -} - static inline bool fw_writes_is_enabled(struct kvm_vcpu *vcpu) { return (vcpu_get_perf_capabilities(vcpu) & PERF_CAP_FW_WRITES) !=3D 0; @@ -456,7 +448,7 @@ static void intel_pmu_enable_fixed_counter_bits(struct = kvm_pmu *pmu, u64 bits) pmu->fixed_ctr_ctrl_rsvd &=3D ~intel_fixed_bits_by_idx(i, bits); } =20 -static void intel_pmu_refresh(struct kvm_vcpu *vcpu) +static void __intel_pmu_refresh(struct kvm_vcpu *vcpu) { struct kvm_pmu *pmu =3D vcpu_to_pmu(vcpu); struct lbr_desc *lbr_desc =3D vcpu_to_lbr_desc(vcpu); @@ -564,6 +556,14 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu) } } =20 +static void intel_pmu_refresh(struct kvm_vcpu *vcpu) +{ + __intel_pmu_refresh(vcpu); + + exec_controls_changebit(to_vmx(vcpu), CPU_BASED_RDPMC_EXITING, + 
!kvm_rdpmc_in_guest(vcpu)); +} + static void intel_pmu_init(struct kvm_vcpu *vcpu) { int i; --=20 2.49.0.395.g12beb8f557-goog From nobody Fri Dec 19 17:37:57 2025 Received: from mail-pl1-f202.google.com (mail-pl1-f202.google.com [209.85.214.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 09E4E268C49 for ; Mon, 24 Mar 2025 17:33:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742837609; cv=none; b=YJhq5vNU+gZIoAs+hsIKkw4f9uQWMSaD/8ROms7g7xVofdvwduDV6N3F1QYvApNgy7/UY1i49YX2fHquT0AsoSGMgueF4WYOx+J52PUsCOddXjTK8XJd71NR9IENCri+7G1k2VnvVI2/4TwLieW0CKo3u0ifQEKorayB40BFPWQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742837609; c=relaxed/simple; bh=eG4BKBSOeyB/+BHXOdngXFZetx+df4JdcLSILPP88y4=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=bXJ5qhcbhrr1KkPBkNGddaFqgYgk1H1JneP9eeoCjYwQj02vBSTeLXNglsTvGwSPXDzgtIn+f/o8DSsW4cL1s+5rGYa9CN53DFzD8lmdpcOooWEbVW7FWVlmrZa146awqXvgLE/qKjJErba9ZygIqq5CeFUmAwpcqGMcvLL8kOk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--mizhang.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=JItFKbQ2; arc=none smtp.client-ip=209.85.214.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--mizhang.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="JItFKbQ2" Received: by mail-pl1-f202.google.com with SMTP id d9443c01a7336-2242ade807fso132558425ad.2 for ; Mon, 24 Mar 2025 
Reply-To: Mingwei Zhang Date: Mon, 24 Mar 2025 17:31:01 +0000 In-Reply-To: <20250324173121.1275209-1-mizhang@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250324173121.1275209-1-mizhang@google.com> X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog Message-ID: <20250324173121.1275209-22-mizhang@google.com> Subject: [PATCH v4 21/38] KVM: x86/pmu/vmx: Save/load guest IA32_PERF_GLOBAL_CTRL with vm_exit/entry_ctrl From: Mingwei Zhang To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Sean Christopherson , Paolo Bonzini Cc: Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Liang@google.com, Kan , "H. Peter Anvin" , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang , Yongwei Ma , Xiong Zhang , Dapeng Mi , Jim Mattson , Sandipan Das , Zide Chen , Eranian Stephane , Das Sandipan , Shukla Manali , Nikunj Dadhania Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Dapeng Mi Intel processors (VMX) provide the capability to save/load the guest IA32_PERF_GLOBAL_CTRL at VM-exit/VM-entry by setting the VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL bit in the VM-exit controls or the VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL bit in the VM-entry controls. The mediated vPMU leverages both capabilities to save/load the guest IA32_PERF_GLOBAL_CTRL automatically at VM-exit/VM-entry. Note that the former was introduced on Sapphire Rapids and later Intel CPUs; if VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL is unavailable, the mediated PMU is disabled. The mediated PMU could still be enabled by falling back to the atomic MSR save/restore lists, but that would add extra overhead on every VM-entry/VM-exit. These VMX capability bits save/restore the PMU global ctrl automatically between the VMCS and the HW MSR.
No synchronization is performed between the HW MSR and pmu->global_ctrl, the KVM-cached value. Therefore, whenever KVM needs to use this variable, it must explicitly read the value from the MSR into pmu->global_ctrl. This is especially important when the guest doesn't own all PMU counters, i.e., when IA32_PERF_GLOBAL_CTRL is intercepted by the mediated PMU. Suggested-by: Sean Christopherson Signed-off-by: Dapeng Mi Co-developed-by: Mingwei Zhang Signed-off-by: Mingwei Zhang --- arch/x86/include/asm/kvm_host.h | 4 ++++ arch/x86/include/asm/vmx.h | 1 + arch/x86/kvm/pmu.c | 30 ++++++++++++++++++++++++- arch/x86/kvm/vmx/capabilities.h | 5 +++++ arch/x86/kvm/vmx/nested.c | 3 ++- arch/x86/kvm/vmx/pmu_intel.c | 39 ++++++++++++++++++++++++++++++++- arch/x86/kvm/vmx/vmx.c | 22 ++++++++++++++++++- arch/x86/kvm/vmx/vmx.h | 3 ++- 8 files changed, 102 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 0b7af5902ff7..4b3bfefc2d05 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -553,6 +553,10 @@ struct kvm_pmu { unsigned available_event_types; u64 fixed_ctr_ctrl; u64 fixed_ctr_ctrl_rsvd; + /* + * kvm_pmu_sync_global_ctrl_from_vmcs() must be called to update + * this SW-maintained global_ctrl for mediated vPMU before accessing it.
+ */ u64 global_ctrl; u64 global_status; u64 counter_bitmask[2]; diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index f7fd4369b821..48e137560f17 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -106,6 +106,7 @@ #define VM_EXIT_CLEAR_BNDCFGS 0x00800000 #define VM_EXIT_PT_CONCEAL_PIP 0x01000000 #define VM_EXIT_CLEAR_IA32_RTIT_CTL 0x02000000 +#define VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL 0x40000000 =20 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR 0x00036dff =20 diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 6ad71752be4b..4e8cefcce7ab 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -646,6 +646,30 @@ void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu) } } =20 +static void kvm_pmu_sync_global_ctrl_from_vmcs(struct kvm_vcpu *vcpu) +{ + struct msr_data msr_info =3D { .index =3D MSR_CORE_PERF_GLOBAL_CTRL }; + + if (!kvm_mediated_pmu_enabled(vcpu)) + return; + + /* Sync pmu->global_ctrl from GUEST_IA32_PERF_GLOBAL_CTRL. */ + kvm_pmu_call(get_msr)(vcpu, &msr_info); +} + +static void kvm_pmu_sync_global_ctrl_to_vmcs(struct kvm_vcpu *vcpu, u64 gl= obal_ctrl) +{ + struct msr_data msr_info =3D { + .index =3D MSR_CORE_PERF_GLOBAL_CTRL, + .data =3D global_ctrl }; + + if (!kvm_mediated_pmu_enabled(vcpu)) + return; + + /* Sync pmu->global_ctrl to GUEST_IA32_PERF_GLOBAL_CTRL. 
*/ + kvm_pmu_call(set_msr)(vcpu, &msr_info); +} + bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr) { switch (msr) { @@ -680,7 +704,6 @@ int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_d= ata *msr_info) msr_info->data =3D pmu->global_status; break; case MSR_AMD64_PERF_CNTR_GLOBAL_CTL: - case MSR_CORE_PERF_GLOBAL_CTRL: msr_info->data =3D pmu->global_ctrl; break; case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR: @@ -731,6 +754,9 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_d= ata *msr_info) diff =3D pmu->global_ctrl ^ data; pmu->global_ctrl =3D data; reprogram_counters(pmu, diff); + + /* Propagate guest global_ctrl to GUEST_IA32_PERF_GLOBAL_CTRL. */ + kvm_pmu_sync_global_ctrl_to_vmcs(vcpu, data); } break; case MSR_CORE_PERF_GLOBAL_OVF_CTRL: @@ -907,6 +933,8 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 e= ventsel) =20 BUILD_BUG_ON(sizeof(pmu->global_ctrl) * BITS_PER_BYTE !=3D X86_PMC_IDX_MA= X); =20 + kvm_pmu_sync_global_ctrl_from_vmcs(vcpu); + if (!kvm_pmu_has_perf_global_ctrl(pmu)) bitmap_copy(bitmap, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX); else if (!bitmap_and(bitmap, pmu->all_valid_pmc_idx, diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilitie= s.h index 013536fde10b..cc63bd4ab87c 100644 --- a/arch/x86/kvm/vmx/capabilities.h +++ b/arch/x86/kvm/vmx/capabilities.h @@ -101,6 +101,11 @@ static inline bool cpu_has_load_perf_global_ctrl(void) return vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL; } =20 +static inline bool cpu_has_save_perf_global_ctrl(void) +{ + return vmcs_config.vmexit_ctrl & VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL; +} + static inline bool cpu_has_vmx_mpx(void) { return vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS; diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 8a7af02d466e..ecf72394684d 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -7004,7 +7004,8 @@ static void nested_vmx_setup_exit_ctls(struct vmcs_co= nfig *vmcs_conf, 
VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR | VM_EXIT_LOAD_IA32_EFER | VM_EXIT_SAVE_IA32_EFER | VM_EXIT_SAVE_VMX_PREEMPTION_TIMER | VM_EXIT_ACK_INTR_ON_EXIT | - VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL; + VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | + VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL; =20 /* We support free control of debug control saving. */ msrs->exit_ctls_low &=3D ~VM_EXIT_SAVE_DEBUG_CONTROLS; diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 2a5f79206b02..04a893e56135 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -294,6 +294,11 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, st= ruct msr_data *msr_info) u32 msr =3D msr_info->index; =20 switch (msr) { + case MSR_CORE_PERF_GLOBAL_CTRL: + if (kvm_mediated_pmu_enabled(vcpu)) + pmu->global_ctrl =3D vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL); + msr_info->data =3D pmu->global_ctrl; + break; case MSR_CORE_PERF_FIXED_CTR_CTRL: msr_info->data =3D pmu->fixed_ctr_ctrl; break; @@ -339,6 +344,11 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, st= ruct msr_data *msr_info) u64 reserved_bits, diff; =20 switch (msr) { + case MSR_CORE_PERF_GLOBAL_CTRL: + if (kvm_mediated_pmu_enabled(vcpu)) + vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL, + pmu->global_ctrl); + break; case MSR_CORE_PERF_FIXED_CTR_CTRL: if (data & pmu->fixed_ctr_ctrl_rsvd) return 1; @@ -558,10 +568,37 @@ static void __intel_pmu_refresh(struct kvm_vcpu *vcpu) =20 static void intel_pmu_refresh(struct kvm_vcpu *vcpu) { + struct kvm_pmu *pmu =3D vcpu_to_pmu(vcpu); + struct vcpu_vmx *vmx =3D to_vmx(vcpu); + bool mediated; + __intel_pmu_refresh(vcpu); =20 - exec_controls_changebit(to_vmx(vcpu), CPU_BASED_RDPMC_EXITING, + exec_controls_changebit(vmx, CPU_BASED_RDPMC_EXITING, !kvm_rdpmc_in_guest(vcpu)); + + mediated =3D kvm_mediated_pmu_enabled(vcpu); + if (cpu_has_load_perf_global_ctrl()) { + vm_entry_controls_changebit(vmx, + VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL, mediated); + /* + * Initialize guest PERF_GLOBAL_CTRL to reset 
value, as the SDM dictates. + * + * Note: GUEST_IA32_PERF_GLOBAL_CTRL must be initialized to + * "BIT_ULL(pmu->nr_arch_gp_counters) - 1" instead of pmu->global_ctrl + * since pmu->global_ctrl is only initialized when guest + * pmu->version > 1. Otherwise, if pmu->version is 1, pmu->global_ctrl + * is 0 and guest counters are never really enabled. + */ + if (mediated) + vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL, + BIT_ULL(pmu->nr_arch_gp_counters) - 1); + } + + if (cpu_has_save_perf_global_ctrl()) + vm_exit_controls_changebit(vmx, + VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | + VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL, mediated); } =20 static void intel_pmu_init(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index ff66f17d6358..38ecf3c116bd 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4390,6 +4390,13 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vm= x) =20 if (cpu_has_load_ia32_efer()) vmcs_write64(HOST_IA32_EFER, kvm_host.efer); + + /* + * Initialize host PERF_GLOBAL_CTRL to 0 to disable all counters + * immediately once VM exits. The mediated vPMU then calls perf_guest_exit()
+ */ + vmcs_write64(HOST_IA32_PERF_GLOBAL_CTRL, 0); } =20 void set_cr4_guest_host_mask(struct vcpu_vmx *vmx) @@ -4457,7 +4464,8 @@ static u32 vmx_get_initial_vmexit_ctrl(void) VM_EXIT_CLEAR_IA32_RTIT_CTL); /* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */ return vmexit_ctrl & - ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER); + ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER | + VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL); } =20 void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu) @@ -7196,6 +7204,9 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *= vmx) struct perf_guest_switch_msr *msrs; struct kvm_pmu *pmu =3D vcpu_to_pmu(&vmx->vcpu); =20 + if (kvm_mediated_pmu_enabled(&vmx->vcpu)) + return; + pmu->host_cross_mapped_mask =3D 0; if (pmu->pebs_enable & pmu->global_ctrl) intel_pmu_cross_mapped_check(pmu); @@ -8451,6 +8462,15 @@ __init int vmx_hardware_setup(void) enable_sgx =3D false; #endif =20 + /* + * All CPUs that support a mediated PMU are expected to support loading + * and saving PERF_GLOBAL_CTRL via dedicated VMCS fields. + */ + if (enable_mediated_pmu && + (WARN_ON_ONCE(!cpu_has_load_perf_global_ctrl() || + !cpu_has_save_perf_global_ctrl()))) + enable_mediated_pmu =3D false; + /* * set_apic_access_page_addr() is used to reload apic access * page upon invalidation. 
No need to do anything if not diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 5c505af553c8..b282165f98a6 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -510,7 +510,8 @@ static inline u8 vmx_get_rvi(void) VM_EXIT_LOAD_IA32_EFER | \ VM_EXIT_CLEAR_BNDCFGS | \ VM_EXIT_PT_CONCEAL_PIP | \ - VM_EXIT_CLEAR_IA32_RTIT_CTL) + VM_EXIT_CLEAR_IA32_RTIT_CTL | \ + VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL) =20 #define KVM_REQUIRED_VMX_PIN_BASED_VM_EXEC_CONTROL \ (PIN_BASED_EXT_INTR_MASK | \ --=20 2.49.0.395.g12beb8f557-goog From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang Date: Mon, 24 Mar 2025 17:31:02 +0000 In-Reply-To: <20250324173121.1275209-1-mizhang@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250324173121.1275209-1-mizhang@google.com> X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog Message-ID: <20250324173121.1275209-23-mizhang@google.com> Subject: [PATCH v4 22/38] KVM: x86/pmu: Optimize intel/amd_pmu_refresh() helpers From: Mingwei Zhang To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Sean Christopherson , Paolo Bonzini Cc: Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Liang@google.com, Kan , "H. Peter Anvin" , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang , Yongwei Ma , Xiong Zhang , Dapeng Mi , Jim Mattson , Sandipan Das , Zide Chen , Eranian Stephane , Das Sandipan , Shukla Manali , Nikunj Dadhania Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Dapeng Mi Currently, pmu->global_ctrl is initialized in the common kvm_pmu_refresh() helper, since both Intel and AMD CPUs set the enable bits for all GP counters in the PERF_GLOBAL_CTRL MSR. But that may not be the best place to initialize pmu->global_ctrl.
Strictly speaking, pmu->global_ctrl is vendor specific, and there is a lot of global_ctrl related processing in the intel/amd_pmu_refresh() helpers, so it is better to handle it in the same place. Thus, move the pmu->global_ctrl initialization into the intel/amd_pmu_refresh() helpers. Besides, intel_pmu_refresh() doesn't handle global_ctrl_rsvd and global_status_rsvd properly; fix that as well. Signed-off-by: Dapeng Mi Signed-off-by: Mingwei Zhang --- arch/x86/kvm/pmu.c | 10 ------- arch/x86/kvm/svm/pmu.c | 14 +++++++-- arch/x86/kvm/vmx/pmu_intel.c | 55 ++++++++++++++++++------------------ 3 files changed, 39 insertions(+), 40 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 4e8cefcce7ab..2ac4c039de8b 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -843,16 +843,6 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu) return; =20 kvm_pmu_call(refresh)(vcpu); - - /* - * At RESET, both Intel and AMD CPUs set all enable bits for general - * purpose counters in IA32_PERF_GLOBAL_CTRL (so that software that - * was written for v1 PMUs don't unknowingly leave GP counters disabled - * in the global controls). Emulate that behavior when refreshing the - * PMU so that userspace doesn't need to manually set PERF_GLOBAL_CTRL.
- */ - if (kvm_pmu_has_perf_global_ctrl(pmu) && pmu->nr_arch_gp_counters) - pmu->global_ctrl =3D GENMASK_ULL(pmu->nr_arch_gp_counters - 1, 0); } =20 void kvm_pmu_init(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index 153972e944eb..eba086ef5eca 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -198,12 +198,20 @@ static void __amd_pmu_refresh(struct kvm_vcpu *vcpu) pmu->nr_arch_gp_counters =3D min_t(unsigned int, pmu->nr_arch_gp_counters, kvm_pmu_cap.num_counters_gp); =20 - if (pmu->version > 1) { - pmu->global_ctrl_rsvd =3D ~((1ull << pmu->nr_arch_gp_counters) - 1); + if (kvm_pmu_cap.version > 1) { + /* + * At RESET, AMD CPUs set all enable bits for general purpose counters in + * IA32_PERF_GLOBAL_CTRL (so that software that was written for v1 PMUs + * don't unknowingly leave GP counters disabled in the global controls). + * Emulate that behavior when refreshing the PMU so that userspace doesn= 't + * need to manually set PERF_GLOBAL_CTRL. 
+ */ + pmu->global_ctrl =3D BIT_ULL(pmu->nr_arch_gp_counters) - 1; + pmu->global_ctrl_rsvd =3D ~pmu->global_ctrl; pmu->global_status_rsvd =3D pmu->global_ctrl_rsvd; } =20 - pmu->counter_bitmask[KVM_PMC_GP] =3D ((u64)1 << 48) - 1; + pmu->counter_bitmask[KVM_PMC_GP] =3D BIT_ULL(48) - 1; pmu->reserved_bits =3D 0xfffffff000280000ull; pmu->raw_event_mask =3D AMD64_RAW_EVENT_MASK; /* not applicable to AMD; but clean them to prevent any fall out */ diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 04a893e56135..c30c6c5e36c8 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -466,7 +466,6 @@ static void __intel_pmu_refresh(struct kvm_vcpu *vcpu) union cpuid10_eax eax; union cpuid10_edx edx; u64 perf_capabilities; - u64 counter_rsvd; =20 memset(&lbr_desc->records, 0, sizeof(lbr_desc->records)); =20 @@ -493,11 +492,10 @@ static void __intel_pmu_refresh(struct kvm_vcpu *vcpu) kvm_pmu_cap.num_counters_gp); eax.split.bit_width =3D min_t(int, eax.split.bit_width, kvm_pmu_cap.bit_width_gp); - pmu->counter_bitmask[KVM_PMC_GP] =3D ((u64)1 << eax.split.bit_width) - 1; + pmu->counter_bitmask[KVM_PMC_GP] =3D BIT_ULL(eax.split.bit_width) - 1; eax.split.mask_length =3D min_t(int, eax.split.mask_length, kvm_pmu_cap.events_mask_len); - pmu->available_event_types =3D ~entry->ebx & - ((1ull << eax.split.mask_length) - 1); + pmu->available_event_types =3D ~entry->ebx & (BIT_ULL(eax.split.mask_leng= th) - 1); =20 if (pmu->version =3D=3D 1) { pmu->nr_arch_fixed_counters =3D 0; @@ -506,29 +504,34 @@ static void __intel_pmu_refresh(struct kvm_vcpu *vcpu) kvm_pmu_cap.num_counters_fixed); edx.split.bit_width_fixed =3D min_t(int, edx.split.bit_width_fixed, kvm_pmu_cap.bit_width_fixed); - pmu->counter_bitmask[KVM_PMC_FIXED] =3D - ((u64)1 << edx.split.bit_width_fixed) - 1; + pmu->counter_bitmask[KVM_PMC_FIXED] =3D BIT_ULL(edx.split.bit_width_fixe= d) - 1; } =20 intel_pmu_enable_fixed_counter_bits(pmu, INTEL_FIXED_0_KERNEL | INTEL_FIXED_0_USER 
| INTEL_FIXED_0_ENABLE_PMI); =20 - counter_rsvd =3D ~(((1ull << pmu->nr_arch_gp_counters) - 1) | - (((1ull << pmu->nr_arch_fixed_counters) - 1) << KVM_FIXED_PMC_BASE_IDX)); - pmu->global_ctrl_rsvd =3D counter_rsvd; + if (kvm_pmu_has_perf_global_ctrl(pmu)) { + /* + * At RESET, Intel CPUs set all enable bits for general purpose counters + * in IA32_PERF_GLOBAL_CTRL. Emulate this behavior. + */ + pmu->global_ctrl =3D BIT_ULL(pmu->nr_arch_gp_counters) - 1; + pmu->global_ctrl_rsvd =3D ~((BIT_ULL(pmu->nr_arch_gp_counters) - 1) | + ((BIT_ULL(pmu->nr_arch_fixed_counters) - 1) << + KVM_FIXED_PMC_BASE_IDX)); =20 - /* - * GLOBAL_STATUS and GLOBAL_OVF_CONTROL (a.k.a. GLOBAL_STATUS_RESET) - * share reserved bit definitions. The kernel just happens to use - * OVF_CTRL for the names. - */ - pmu->global_status_rsvd =3D pmu->global_ctrl_rsvd - & ~(MSR_CORE_PERF_GLOBAL_OVF_CTRL_OVF_BUF | - MSR_CORE_PERF_GLOBAL_OVF_CTRL_COND_CHGD); - if (vmx_pt_mode_is_host_guest()) - pmu->global_status_rsvd &=3D - ~MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_PMI; + /* + * GLOBAL_STATUS and GLOBAL_OVF_CONTROL (a.k.a. GLOBAL_STATUS_RESET) + * share reserved bit definitions. The kernel just happens to use + * OVF_CTRL for the names. 
+ */ + pmu->global_status_rsvd =3D pmu->global_ctrl_rsvd & + ~(MSR_CORE_PERF_GLOBAL_OVF_CTRL_OVF_BUF | + MSR_CORE_PERF_GLOBAL_OVF_CTRL_COND_CHGD); + if (vmx_pt_mode_is_host_guest()) + pmu->global_status_rsvd &=3D ~MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_= PMI; + } =20 entry =3D kvm_find_cpuid_entry_index(vcpu, 7, 0); if (entry && @@ -538,10 +541,9 @@ static void __intel_pmu_refresh(struct kvm_vcpu *vcpu) pmu->raw_event_mask |=3D (HSW_IN_TX|HSW_IN_TX_CHECKPOINTED); } =20 - bitmap_set(pmu->all_valid_pmc_idx, - 0, pmu->nr_arch_gp_counters); - bitmap_set(pmu->all_valid_pmc_idx, - INTEL_PMC_MAX_GENERIC, pmu->nr_arch_fixed_counters); + bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters); + bitmap_set(pmu->all_valid_pmc_idx, INTEL_PMC_MAX_GENERIC, + pmu->nr_arch_fixed_counters); =20 perf_capabilities =3D vcpu_get_perf_capabilities(vcpu); if (cpuid_model_is_consistent(vcpu) && @@ -555,13 +557,12 @@ static void __intel_pmu_refresh(struct kvm_vcpu *vcpu) =20 if (perf_capabilities & PERF_CAP_PEBS_FORMAT) { if (perf_capabilities & PERF_CAP_PEBS_BASELINE) { - pmu->pebs_enable_rsvd =3D counter_rsvd; + pmu->pebs_enable_rsvd =3D pmu->global_ctrl_rsvd; pmu->reserved_bits &=3D ~ICL_EVENTSEL_ADAPTIVE; pmu->pebs_data_cfg_rsvd =3D ~0xff00000full; intel_pmu_enable_fixed_counter_bits(pmu, ICL_FIXED_0_ADAPTIVE); } else { - pmu->pebs_enable_rsvd =3D - ~((1ull << pmu->nr_arch_gp_counters) - 1); + pmu->pebs_enable_rsvd =3D ~(BIT_ULL(pmu->nr_arch_gp_counters) - 1); } } } --=20 2.49.0.395.g12beb8f557-goog From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang Date: Mon, 24 Mar 2025 17:31:03 +0000 In-Reply-To: <20250324173121.1275209-1-mizhang@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250324173121.1275209-1-mizhang@google.com> X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog Message-ID: <20250324173121.1275209-24-mizhang@google.com> Subject: [PATCH v4 23/38] KVM: x86/pmu: Configure the interception of PMU MSRs From: Mingwei Zhang To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo ,
Namhyung Kim , Sean Christopherson , Paolo Bonzini Cc: Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Liang@google.com, Kan , "H. Peter Anvin" , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang , Yongwei Ma , Xiong Zhang , Dapeng Mi , Jim Mattson , Sandipan Das , Zide Chen , Eranian Stephane , Das Sandipan , Shukla Manali , Nikunj Dadhania Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Dapeng Mi Add a helper, intel_pmu_update_msr_intercepts(), to configure the interception of PMU MSRs. For the mediated vPMU, intercept all guest-owned GP counter EVENTSELx MSRs and the fixed counter FIXED_CTR_CTRL MSR (Intel only). This is because KVM needs to intercept the event configuration and filter out malicious guest events and events that might cause CPU glitches. In addition, pass through all guest-owned perf counter MSRs to reduce the performance impact. Note that PMU MSRs not owned by the guest are always intercepted; accessing them always causes a #GP. As for the globally shared MSRs, pass them through to the guest only if the guest owns all PMU resources. Otherwise, intercept them all to prevent the guest from accessing host-owned counters.
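[Editor's illustration, not part of the patch: the interception policy described above can be modeled as a pure decision function. All struct and function names below are invented for this sketch and do not exist in KVM.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model only; none of these names exist in KVM. */
struct vpmu_model {
	bool mediated;          /* mediated vPMU enabled for this guest */
	bool has_global_ctrl;   /* guest PMU version >= 2 */
	int guest_gp_counters;  /* GP counters exposed to the guest */
	int host_gp_counters;   /* GP counters present in hardware */
};

/*
 * Counter (PERFCTRx) MSRs are passed through only for guest-owned
 * counters under a mediated vPMU; unexposed counters stay intercepted.
 * (EVENTSELx MSRs are always intercepted regardless.)
 */
static bool intercept_gp_counter_msr(const struct vpmu_model *m, int idx)
{
	return !m->mediated || idx >= m->guest_gp_counters;
}

/*
 * Globally shared MSRs (e.g. PERF_GLOBAL_CTRL) are passed through only
 * when the guest owns the whole PMU: mediated mode, a v2+ PMU, and all
 * hardware GP counters exposed to the guest.
 */
static bool intercept_global_msr(const struct vpmu_model *m)
{
	return !(m->mediated && m->has_global_ctrl &&
		 m->guest_gp_counters == m->host_gp_counters);
}
```

With this model, shrinking the guest's counter set flips the global MSRs back to intercepted, matching the "guest owns all PMU resources" condition in the commit message.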
Suggested-by: Sean Christopherson Co-developed-by: Mingwei Zhang Signed-off-by: Mingwei Zhang Co-developed-by: Sandipan Das Signed-off-by: Sandipan Das Signed-off-by: Dapeng Mi --- arch/x86/include/asm/msr-index.h | 1 + arch/x86/kvm/svm/pmu.c | 63 ++++++++++++++++++++++++++++++++ arch/x86/kvm/vmx/pmu_intel.c | 44 ++++++++++++++++++++++ 3 files changed, 108 insertions(+) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-in= dex.h index 337f4b0a2998..a4d8356e9b53 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -719,6 +719,7 @@ #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300 #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301 #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302 +#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET 0xc0000303 =20 /* AMD Last Branch Record MSRs */ #define MSR_AMD64_LBR_SELECT 0xc000010e diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index eba086ef5eca..4fc809c74ba8 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -220,6 +220,67 @@ static void __amd_pmu_refresh(struct kvm_vcpu *vcpu) bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters); } =20 +static void amd_pmu_update_msr_intercepts(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *pmu =3D vcpu_to_pmu(vcpu); + struct vcpu_svm *svm =3D to_svm(vcpu); + int msr_clear =3D !!(kvm_mediated_pmu_enabled(vcpu)); + int i; + + for (i =3D 0; i < min(pmu->nr_arch_gp_counters, AMD64_NUM_COUNTERS); i++)= { + /* + * Legacy counters are always available irrespective of any + * CPUID feature bits and when X86_FEATURE_PERFCTR_CORE is set, + * PERF_LEGACY_CTLx and PERF_LEGACY_CTRx registers are mirrored + * with PERF_CTLx and PERF_CTRx respectively. 
+		 */
+		set_msr_interception(vcpu, svm->msrpm, MSR_K7_EVNTSEL0 + i, 0, 0);
+		set_msr_interception(vcpu, svm->msrpm, MSR_K7_PERFCTR0 + i,
+				     msr_clear, msr_clear);
+	}
+
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		/*
+		 * PERF_CTLx registers require interception in order to clear
+		 * HostOnly bit and set GuestOnly bit. This is to prevent the
+		 * PERF_CTRx registers from counting before VM entry and after
+		 * VM exit.
+		 */
+		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTL + 2 * i, 0, 0);
+		/*
+		 * Pass through counters exposed to the guest and intercept
+		 * counters that are unexposed. Do this explicitly since this
+		 * function may be set multiple times before vcpu runs.
+		 */
+		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTR + 2 * i,
+				     msr_clear, msr_clear);
+	}
+
+	for ( ; i < kvm_pmu_cap.num_counters_gp; i++) {
+		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTL + 2 * i, 0, 0);
+		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTR + 2 * i, 0, 0);
+	}
+
+	/*
+	 * In mediated vPMU, intercept global PMU MSRs when guest PMU only owns
+	 * a subset of counters provided in HW or its version is less than 2.
+	 */
+	if (kvm_mediated_pmu_enabled(vcpu) && kvm_pmu_has_perf_global_ctrl(pmu) &&
+	    pmu->nr_arch_gp_counters == kvm_pmu_cap.num_counters_gp)
+		msr_clear = 1;
+	else
+		msr_clear = 0;
+
+	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_CTL,
+			     msr_clear, msr_clear);
+	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS,
+			     msr_clear, msr_clear);
+	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR,
+			     msr_clear, msr_clear);
+	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET,
+			     msr_clear, msr_clear);
+}
+
 static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -230,6 +291,8 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 		svm_clr_intercept(svm, INTERCEPT_RDPMC);
 	else
 		svm_set_intercept(svm, INTERCEPT_RDPMC);
+
+	amd_pmu_update_msr_intercepts(vcpu);
 }
 
 static void amd_pmu_init(struct kvm_vcpu *vcpu)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c30c6c5e36c8..450f9e5b9e40 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -567,6 +567,48 @@ static void __intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	}
 }
 
+static void intel_pmu_update_msr_intercepts(struct kvm_vcpu *vcpu)
+{
+	bool intercept = !kvm_mediated_pmu_enabled(vcpu);
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	int i;
+
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PERFCTR0 + i,
+					  MSR_TYPE_RW, intercept);
+		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PMC0 + i, MSR_TYPE_RW,
+					  intercept || !fw_writes_is_enabled(vcpu));
+	}
+	for ( ; i < kvm_pmu_cap.num_counters_gp; i++) {
+		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PERFCTR0 + i,
+					  MSR_TYPE_RW, true);
+		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PMC0 + i,
+					  MSR_TYPE_RW, true);
+	}
+
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
+		vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_FIXED_CTR0 + i,
+					  MSR_TYPE_RW,
intercept);
+	for ( ; i < kvm_pmu_cap.num_counters_fixed; i++)
+		vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_FIXED_CTR0 + i,
+					  MSR_TYPE_RW, true);
+
+	if (kvm_mediated_pmu_enabled(vcpu) && kvm_pmu_has_perf_global_ctrl(pmu) &&
+	    vcpu_has_perf_metrics(vcpu) == kvm_host_has_perf_metrics() &&
+	    pmu->nr_arch_gp_counters == kvm_pmu_cap.num_counters_gp &&
+	    pmu->nr_arch_fixed_counters == kvm_pmu_cap.num_counters_fixed)
+		intercept = false;
+	else
+		intercept = true;
+
+	vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_GLOBAL_STATUS,
+				  MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
+				  MSR_TYPE_RW, intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
+				  MSR_TYPE_RW, intercept);
+}
+
 static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -578,6 +620,8 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	exec_controls_changebit(vmx, CPU_BASED_RDPMC_EXITING,
 				!kvm_rdpmc_in_guest(vcpu));
 
+	intel_pmu_update_msr_intercepts(vcpu);
+
 	mediated = kvm_mediated_pmu_enabled(vcpu);
 	if (cpu_has_load_perf_global_ctrl()) {
 		vm_entry_controls_changebit(vmx,
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:31:04 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-25-mizhang@google.com>
Subject: [PATCH v4 24/38] KVM: x86/pmu: Exclude PMU MSRs in vmx_get_passthrough_msr_slot()
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Kan Liang, "H.
Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Reject interception of PMU MSRs explicitly in vmx_get_passthrough_msr_slot(), since interception of PMU MSRs is handled specially in intel_passthrough_pmu_msrs().

Signed-off-by: Mingwei Zhang
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/vmx.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 38ecf3c116bd..7bb16bed08da 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -165,7 +165,7 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
 
 /*
  * List of MSRs that can be directly passed to the guest.
- * In addition to these x2apic, PT and LBR MSRs are handled specially.
+ * In addition to these x2apic, PMU, PT and LBR MSRs are handled specially.
  */
 static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
 	MSR_IA32_SPEC_CTRL,
@@ -691,6 +691,16 @@ static int vmx_get_passthrough_msr_slot(u32 msr)
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
+	case MSR_IA32_PMC0 ...
+	     MSR_IA32_PMC0 + KVM_MAX_NR_GP_COUNTERS - 1:
+	case MSR_IA32_PERFCTR0 ...
+	     MSR_IA32_PERFCTR0 + KVM_MAX_NR_GP_COUNTERS - 1:
+	case MSR_CORE_PERF_FIXED_CTR0 ...
+	     MSR_CORE_PERF_FIXED_CTR0 + KVM_MAX_NR_FIXED_COUNTERS - 1:
+	case MSR_CORE_PERF_GLOBAL_STATUS:
+	case MSR_CORE_PERF_GLOBAL_CTRL:
+	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+		/* PMU MSRs.
These are handled in intel_passthrough_pmu_msrs() */
 		return -ENOENT;
 	}
 
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:31:05 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-26-mizhang@google.com>
Subject: [PATCH v4 25/38] KVM: x86/pmu: Add AMD PMU registers to direct access list
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Kan Liang, "H. Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Sandipan Das

Add all PMU-related MSRs (including the legacy K7 MSRs) to the list of possible direct-access MSRs. Most of them will not be intercepted when using the passthrough PMU.
Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/svm.c | 24 ++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.h |  2 +-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a713c803a3a3..bff351992468 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -143,6 +143,30 @@ static const struct svm_direct_access_msrs {
 	{ .index = X2APIC_MSR(APIC_TMICT), .always = false },
 	{ .index = X2APIC_MSR(APIC_TMCCT), .always = false },
 	{ .index = X2APIC_MSR(APIC_TDCR), .always = false },
+	{ .index = MSR_K7_EVNTSEL0, .always = false },
+	{ .index = MSR_K7_PERFCTR0, .always = false },
+	{ .index = MSR_K7_EVNTSEL1, .always = false },
+	{ .index = MSR_K7_PERFCTR1, .always = false },
+	{ .index = MSR_K7_EVNTSEL2, .always = false },
+	{ .index = MSR_K7_PERFCTR2, .always = false },
+	{ .index = MSR_K7_EVNTSEL3, .always = false },
+	{ .index = MSR_K7_PERFCTR3, .always = false },
+	{ .index = MSR_F15H_PERF_CTL0, .always = false },
+	{ .index = MSR_F15H_PERF_CTR0, .always = false },
+	{ .index = MSR_F15H_PERF_CTL1, .always = false },
+	{ .index = MSR_F15H_PERF_CTR1, .always = false },
+	{ .index = MSR_F15H_PERF_CTL2, .always = false },
+	{ .index = MSR_F15H_PERF_CTR2, .always = false },
+	{ .index = MSR_F15H_PERF_CTL3, .always = false },
+	{ .index = MSR_F15H_PERF_CTR3, .always = false },
+	{ .index = MSR_F15H_PERF_CTL4, .always = false },
+	{ .index = MSR_F15H_PERF_CTR4, .always = false },
+	{ .index = MSR_F15H_PERF_CTL5, .always = false },
+	{ .index = MSR_F15H_PERF_CTR5, .always = false },
+	{ .index = MSR_AMD64_PERF_CNTR_GLOBAL_CTL, .always = false },
+	{ .index = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, .always = false },
+	{ .index = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, .always = false },
+	{ .index = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET, .always = false },
 	{ .index =
MSR_INVALID, .always = false },
 };
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 9d7cdb8fbf87..ae71bf5f12d0 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -44,7 +44,7 @@ static inline struct page *__sme_pa_to_page(unsigned long pa)
 #define IOPM_SIZE PAGE_SIZE * 3
 #define MSRPM_SIZE PAGE_SIZE * 2
 
-#define MAX_DIRECT_ACCESS_MSRS 48
+#define MAX_DIRECT_ACCESS_MSRS 72
 #define MSRPM_OFFSETS 32
 extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
 extern bool npt_enabled;
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:31:06 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-27-mizhang@google.com>
Subject: [PATCH v4 26/38] KVM: x86/pmu: Introduce eventsel_hw to prepare for pmu event filtering
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Kan Liang, "H. Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Introduce eventsel_hw and fixed_ctr_ctrl_hw to store the actual HW value written to the PMU event selector MSRs. The mediated PMU checks events before allowing their values to be written to the PMU MSRs. However, to match the HW behavior, when a PMU event check fails, KVM should still allow the guest to read the value back.
This essentially requires an extra variable to separate the guest-requested value from the actual PMU MSR value. Note this only applies to event selectors.

Signed-off-by: Mingwei Zhang
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/pmu.c              | 7 +++++--
 arch/x86/kvm/svm/pmu.c          | 1 +
 arch/x86/kvm/vmx/pmu_intel.c    | 2 ++
 4 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4b3bfefc2d05..7ee74bbbb0aa 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -524,6 +524,7 @@ struct kvm_pmc {
 	 */
 	u64 emulated_counter;
 	u64 eventsel;
+	u64 eventsel_hw;
 	struct perf_event *perf_event;
 	struct kvm_vcpu *vcpu;
 	/*
@@ -552,6 +553,7 @@ struct kvm_pmu {
 	unsigned nr_arch_fixed_counters;
 	unsigned available_event_types;
 	u64 fixed_ctr_ctrl;
+	u64 fixed_ctr_ctrl_hw;
 	u64 fixed_ctr_ctrl_rsvd;
 	/*
 	 * kvm_pmu_sync_global_ctrl_from_vmcs() must be called to update

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 2ac4c039de8b..63143eeb5c44 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -794,11 +794,14 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 		pmc->counter = 0;
 		pmc->emulated_counter = 0;
 
-		if (pmc_is_gp(pmc))
+		if (pmc_is_gp(pmc)) {
 			pmc->eventsel = 0;
+			pmc->eventsel_hw = 0;
+		}
 	}
 
-	pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;
+	pmu->fixed_ctr_ctrl = pmu->fixed_ctr_ctrl_hw = 0;
+	pmu->global_ctrl = pmu->global_status = 0;
 
 	kvm_pmu_call(reset)(vcpu);
 }

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 4fc809c74ba8..9feaca739b96 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -165,6 +165,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	data &= ~pmu->reserved_bits;
 	if (data != pmc->eventsel) {
 		pmc->eventsel = data;
+		pmc->eventsel_hw = data;
 		kvm_pmu_request_counter_reprogram(pmc);
 	}
 	return 0;

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 450f9e5b9e40..796b7bc4affe 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -41,6 +41,7 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 	int i;
 
 	pmu->fixed_ctr_ctrl = data;
+	pmu->fixed_ctr_ctrl_hw = data;
 	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
 		u8 new_ctrl = fixed_ctrl_field(data, i);
 		u8 old_ctrl = fixed_ctrl_field(old_fixed_ctr_ctrl, i);
@@ -403,6 +404,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 	if (data != pmc->eventsel) {
 		pmc->eventsel = data;
+		pmc->eventsel_hw = data;
 		kvm_pmu_request_counter_reprogram(pmc);
 	}
 	break;
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:31:07 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-28-mizhang@google.com>
Subject: [PATCH v4 27/38] KVM: x86/pmu: Handle PMU MSRs interception and event filtering
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Kan Liang, "H.
Peter Anvin", linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang, Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian, Manali Shukla, Nikunj Dadhania
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

The mediated vPMU needs to intercept the EVENTSELx and FIXED_CNTR_CTRL MSRs to filter out malicious guest perf events. Either writing these MSRs or updating the event filters eventually calls reprogram_counter(), so check there whether the guest event should be filtered out. If so, clear the corresponding EVENTSELx MSR or FIXED_CNTR_CTRL field to ensure the guest event is never actually enabled at VM entry.

Besides, the mediated vPMU intercepts the MSRs of counters not owned by the guest, and for those it simply needs to read/write pmc->counter.

Suggested-by: Sean Christopherson
Signed-off-by: Dapeng Mi
Co-developed-by: Mingwei Zhang
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/pmu.c | 27 +++++++++++++++++++++++++++
 arch/x86/kvm/pmu.h |  3 +++
 2 files changed, 30 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 63143eeb5c44..e9100dc49fdc 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -305,6 +305,11 @@ static void pmc_update_sample_period(struct kvm_pmc *pmc)
 
 void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
 {
+	if (kvm_mediated_pmu_enabled(pmc->vcpu)) {
+		pmc->counter = val & pmc_bitmask(pmc);
+		return;
+	}
+
 	/*
 	 * Drop any unconsumed accumulated counts, the WRMSR is a write, not a
 	 * read-modify-write.
Adjust the counter value so that its value is @@ -455,6 +460,28 @@ static int reprogram_counter(struct kvm_pmc *pmc) bool emulate_overflow; u8 fixed_ctr_ctrl; =20 + if (kvm_mediated_pmu_enabled(pmu_to_vcpu(pmu))) { + bool allowed =3D check_pmu_event_filter(pmc); + + if (pmc_is_gp(pmc)) { + if (allowed) + pmc->eventsel_hw |=3D pmc->eventsel & + ARCH_PERFMON_EVENTSEL_ENABLE; + else + pmc->eventsel_hw &=3D ~ARCH_PERFMON_EVENTSEL_ENABLE; + } else { + int idx =3D pmc->idx - KVM_FIXED_PMC_BASE_IDX; + + if (allowed) + pmu->fixed_ctr_ctrl_hw =3D pmu->fixed_ctr_ctrl; + else + pmu->fixed_ctr_ctrl_hw &=3D + ~intel_fixed_bits_by_idx(idx, 0xf); + } + + return 0; + } + emulate_overflow =3D pmc_pause_counter(pmc); =20 if (!pmc_event_is_allowed(pmc)) diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 509c995b7871..6289f523d893 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -113,6 +113,9 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc) { u64 counter, enabled, running; =20 + if (kvm_mediated_pmu_enabled(pmc->vcpu)) + return pmc->counter & pmc_bitmask(pmc); + counter =3D pmc->counter + pmc->emulated_counter; =20 if (pmc->perf_event && !pmc->is_paused) --=20 2.49.0.395.g12beb8f557-goog From nobody Fri Dec 19 17:37:57 2025 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D68FE26A0AF for ; Mon, 24 Mar 2025 17:33:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742837619; cv=none; b=RIxy8B1DWb1XU2UonnZG4ISdbVuuXPI4fr5k9SUgb2RUGZwWYuFnq8So+0Lk5O0TnJZDvUHE5IYfsmA3IAgo21cfhPxhLSr+e9XejCCanO7TPUGX0758mvbS/owCFeIgev8OvE9A1cabPSHuRdcJXXAXQlwblAhNZLFQEiFHFIs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742837619; 
Date: Mon, 24 Mar 2025 17:31:08 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-29-mizhang@google.com>
Subject: [PATCH v4 28/38] KVM: x86/pmu/svm: Set GuestOnly bit and clear HostOnly bit when guest writes to event selectors
From: Mingwei Zhang

From: Sandipan Das

On AMD platforms, there is no way to restore PerfCntrGlobalCtl at
VM-entry or clear it at VM-exit. Since the register state is restored
before entering and saved after exiting guest context, the counters can
keep ticking, and even overflow, leading to chaos while still in host
context.

To avoid this, intercept the event selectors, which the mediated PMU
already does. In addition, always set the GuestOnly bit and clear the
HostOnly bit in the PMU selectors on AMD. Doing so allows the counters
to run only in guest context, even if their enable bits are still set
after VM-exit and before the host/guest PMU context switch.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/pmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 9feaca739b96..1a7e3a897fdf 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -165,7 +165,8 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		data &= ~pmu->reserved_bits;
 		if (data != pmc->eventsel) {
 			pmc->eventsel = data;
-			pmc->eventsel_hw = data;
+			pmc->eventsel_hw = (data & ~AMD64_EVENTSEL_HOSTONLY) |
+					   AMD64_EVENTSEL_GUESTONLY;
 			kvm_pmu_request_counter_reprogram(pmc);
 		}
 		return 0;
-- 
2.49.0.395.g12beb8f557-goog
Date: Mon, 24 Mar 2025 17:31:09 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
References: <20250324173121.1275209-1-mizhang@google.com>
Message-ID: <20250324173121.1275209-30-mizhang@google.com>
Subject: [PATCH v4 29/38] KVM: x86/pmu: Switch host/guest PMU context at vm-exit/vm-entry
From: Mingwei Zhang

From: Dapeng Mi

Switch the host/guest PMU context at VM-exit/VM-entry for the mediated
vPMU: kvm_pmu_put_guest_context() is called to save the guest PMU
context and load the host PMU context at VM-exit, and
kvm_pmu_load_guest_context() is called to save the host PMU context and
load the guest PMU context at VM-entry.

A pair of pmu_ops callbacks, put_guest_context() and
load_guest_context(), is added to save/restore the vendor-specific PMU
MSRs.

Co-developed-by: Mingwei Zhang
Signed-off-by: Mingwei Zhang
Co-developed-by: Sandipan Das
Signed-off-by: Sandipan Das
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  2 +
 arch/x86/include/asm/kvm_host.h        |  4 ++
 arch/x86/include/asm/msr-index.h       |  1 +
 arch/x86/kvm/pmu.c                     | 96 ++++++++++++++++++++++++++
 arch/x86/kvm/pmu.h                     | 11 +++
 arch/x86/kvm/svm/pmu.c                 | 54 +++++++++++++++
 arch/x86/kvm/vmx/pmu_intel.c           | 59 ++++++++++++++++
 arch/x86/kvm/x86.c                     |  4 ++
 8 files changed, 231 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index 9159bf1a4730..35f27366c277 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -22,6 +22,8 @@ KVM_X86_PMU_OP(init)
 KVM_X86_PMU_OP_OPTIONAL(reset)
 KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
 KVM_X86_PMU_OP_OPTIONAL(cleanup)
+KVM_X86_PMU_OP(put_guest_context)
+KVM_X86_PMU_OP(load_guest_context)
 
 #undef KVM_X86_PMU_OP
 #undef KVM_X86_PMU_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7ee74bbbb0aa..4117a382739a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -568,6 +568,10 @@ struct kvm_pmu {
 	u64 raw_event_mask;
 	struct kvm_pmc gp_counters[KVM_MAX_NR_GP_COUNTERS];
 	struct kvm_pmc fixed_counters[KVM_MAX_NR_FIXED_COUNTERS];
+	u32 gp_eventsel_base;
+	u32 gp_counter_base;
+	u32 fixed_base;
+	u32 cntr_shift;
 
 	/*
 	 * Overlay the bitmap with a 64-bit atomic so that all bits can be
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index a4d8356e9b53..df33a4f026a1 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -1153,6 +1153,7 @@
 #define MSR_CORE_PERF_GLOBAL_STATUS	0x0000038e
 #define MSR_CORE_PERF_GLOBAL_CTRL	0x0000038f
 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL	0x00000390
+#define MSR_CORE_PERF_GLOBAL_STATUS_SET	0x00000391
 
 #define MSR_PERF_METRICS		0x00000329
 
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e9100dc49fdc..68f203454bbc 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -1127,3 +1127,99 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	kfree(filter);
 	return r;
 }
+
+void kvm_pmu_put_guest_pmcs(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	u32 eventsel_msr;
+	u32 counter_msr;
+	u32 i;
+
+	/*
+	 * Clear the hardware selector MSRs and counters to avoid information
+	 * leakage, and to keep these guest GP counters from being accidentally
+	 * enabled while in host context when the host enables the global ctrl.
+	 */
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		pmc = &pmu->gp_counters[i];
+		eventsel_msr = pmc_msr_addr(pmu, pmu->gp_eventsel_base, i);
+		counter_msr = pmc_msr_addr(pmu, pmu->gp_counter_base, i);
+
+		rdpmcl(i, pmc->counter);
+		rdmsrl(eventsel_msr, pmc->eventsel_hw);
+		if (pmc->counter)
+			wrmsrl(counter_msr, 0);
+		if (pmc->eventsel_hw)
+			wrmsrl(eventsel_msr, 0);
+	}
+
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
+		pmc = &pmu->fixed_counters[i];
+		counter_msr = pmc_msr_addr(pmu, pmu->fixed_base, i);
+
+		rdpmcl(INTEL_PMC_FIXED_RDPMC_BASE | i, pmc->counter);
+		if (pmc->counter)
+			wrmsrl(counter_msr, 0);
+	}
+}
+EXPORT_SYMBOL_GPL(kvm_pmu_put_guest_pmcs);
+
+void kvm_pmu_load_guest_pmcs(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	u32 eventsel_msr;
+	u32 counter_msr;
+	u32 i;
+
+	/*
+	 * No need to zero out unexposed GP/fixed counters/selectors since
+	 * RDPMC is intercepted in this case, and MSR accesses to these
+	 * counters and selectors cause a #GP in the guest.
+	 */
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		pmc = &pmu->gp_counters[i];
+		eventsel_msr = pmc_msr_addr(pmu, pmu->gp_eventsel_base, i);
+		counter_msr = pmc_msr_addr(pmu, pmu->gp_counter_base, i);
+
+		wrmsrl(counter_msr, pmc->counter);
+		wrmsrl(eventsel_msr, pmc->eventsel_hw);
+	}
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
+		pmc = &pmu->fixed_counters[i];
+		counter_msr = pmc_msr_addr(pmu, pmu->fixed_base, i);
+
+		wrmsrl(counter_msr, pmc->counter);
+	}
+}
+EXPORT_SYMBOL_GPL(kvm_pmu_load_guest_pmcs);
+
+void kvm_pmu_put_guest_context(struct kvm_vcpu *vcpu)
+{
+	if (!kvm_mediated_pmu_enabled(vcpu))
+		return;
+
+	lockdep_assert_irqs_disabled();
+
+	kvm_pmu_call(put_guest_context)(vcpu);
+
+	perf_guest_exit();
+}
+
+void kvm_pmu_load_guest_context(struct kvm_vcpu *vcpu)
+{
+	u32 guest_lvtpc;
+
+	if (!kvm_mediated_pmu_enabled(vcpu))
+		return;
+
+	lockdep_assert_irqs_disabled();
+
+	guest_lvtpc = APIC_DM_FIXED | KVM_GUEST_PMI_VECTOR |
+		      (kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVTPC) & APIC_LVT_MASKED);
+	perf_guest_enter(guest_lvtpc);
+
+	kvm_pmu_call(load_guest_context)(vcpu);
+}
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 6289f523d893..d5da3a9a3bd5 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -41,6 +41,8 @@ struct kvm_pmu_ops {
 	void (*reset)(struct kvm_vcpu *vcpu);
 	void (*deliver_pmi)(struct kvm_vcpu *vcpu);
 	void (*cleanup)(struct kvm_vcpu *vcpu);
+	void (*put_guest_context)(struct kvm_vcpu *vcpu);
+	void (*load_guest_context)(struct kvm_vcpu *vcpu);
 
 	const u64 EVENTSEL_EVENT;
 	const int MAX_NR_GP_COUNTERS;
@@ -292,6 +294,11 @@ static inline bool kvm_host_has_perf_metrics(void)
 	return !!(kvm_host.perf_capabilities & PERF_CAP_PERF_METRICS);
 }
 
+static inline u32 pmc_msr_addr(struct kvm_pmu *pmu, u32 base, int idx)
+{
+	return base + idx * pmu->cntr_shift;
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
@@ -306,6 +313,10 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel);
 bool vcpu_pmu_can_enable(struct kvm_vcpu *vcpu);
+void kvm_pmu_put_guest_pmcs(struct kvm_vcpu *vcpu);
+void kvm_pmu_load_guest_pmcs(struct kvm_vcpu *vcpu);
+void kvm_pmu_put_guest_context(struct kvm_vcpu *vcpu);
+void kvm_pmu_load_guest_context(struct kvm_vcpu *vcpu);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);
 bool kvm_rdpmc_in_guest(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 1a7e3a897fdf..7e0d84d50b74 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -175,6 +175,22 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 1;
 }
 
+static inline void amd_update_msr_base(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	if (kvm_pmu_has_perf_global_ctrl(pmu) ||
+	    guest_cpu_cap_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
+		pmu->gp_eventsel_base = MSR_F15H_PERF_CTL0;
+		pmu->gp_counter_base = MSR_F15H_PERF_CTR0;
+		pmu->cntr_shift = 2;
+	} else {
+		pmu->gp_eventsel_base = MSR_K7_EVNTSEL0;
+		pmu->gp_counter_base = MSR_K7_PERFCTR0;
+		pmu->cntr_shift = 1;
+	}
+}
+
 static void __amd_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -220,6 +236,8 @@ static void __amd_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
 	pmu->nr_arch_fixed_counters = 0;
 	bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
+
+	amd_update_msr_base(vcpu);
 }
 
 static void amd_pmu_update_msr_intercepts(struct kvm_vcpu *vcpu)
@@ -312,6 +330,40 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	}
 }
 
+static void amd_put_guest_context(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	rdmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, pmu->global_ctrl);
+	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
+	rdmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, pmu->global_status);
+
+	/* Clear global status bits if non-zero */
+	if (pmu->global_status)
+		wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, pmu->global_status);
+
+	kvm_pmu_put_guest_pmcs(vcpu);
+}
+
+static void amd_load_guest_context(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	u64 global_status;
+
+	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
+
+	kvm_pmu_load_guest_pmcs(vcpu);
+
+	rdmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, global_status);
+	/* Clear host global_status MSR if non-zero. */
+	if (global_status)
+		wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, global_status);
+
+	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET, pmu->global_status);
+	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, pmu->global_ctrl);
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
@@ -321,6 +373,8 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.set_msr = amd_pmu_set_msr,
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
+	.put_guest_context = amd_put_guest_context,
+	.load_guest_context = amd_load_guest_context,
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_MAX_NR_AMD_GP_COUNTERS,
 	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 796b7bc4affe..ed17ab198dfb 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -460,6 +460,17 @@ static void intel_pmu_enable_fixed_counter_bits(struct kvm_pmu *pmu, u64 bits)
 		pmu->fixed_ctr_ctrl_rsvd &= ~intel_fixed_bits_by_idx(i, bits);
 }
 
+static inline void intel_update_msr_base(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	pmu->gp_eventsel_base = MSR_P6_EVNTSEL0;
+	pmu->gp_counter_base = fw_writes_is_enabled(vcpu) ?
+			       MSR_IA32_PMC0 : MSR_IA32_PERFCTR0;
+	pmu->fixed_base = MSR_CORE_PERF_FIXED_CTR0;
+	pmu->cntr_shift = 1;
+}
+
 static void __intel_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -567,6 +578,8 @@ static void __intel_pmu_refresh(struct kvm_vcpu *vcpu)
 			pmu->pebs_enable_rsvd = ~(BIT_ULL(pmu->nr_arch_gp_counters) - 1);
 		}
 	}
+
+	intel_update_msr_base(vcpu);
 }
 
 static void intel_pmu_update_msr_intercepts(struct kvm_vcpu *vcpu)
@@ -809,6 +822,50 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 	}
 }
 
+static void intel_put_guest_context(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	/* Global ctrl register is already saved at VM-exit. */
+	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, pmu->global_status);
+
+	/* Clear hardware MSR_CORE_PERF_GLOBAL_STATUS, if non-zero. */
+	if (pmu->global_status)
+		wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, pmu->global_status);
+
+	rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl_hw);
+
+	/*
+	 * Clear the hardware FIXED_CTR_CTRL MSR to avoid information leakage,
+	 * and to keep these guest fixed counters from being accidentally
+	 * enabled while in host context when the host enables the global ctrl.
+	 */
+	if (pmu->fixed_ctr_ctrl_hw)
+		wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
+
+	kvm_pmu_put_guest_pmcs(vcpu);
+}
+
+static void intel_load_guest_context(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	u64 global_status, toggle;
+
+	/* Clear host global_ctrl MSR if non-zero. */
+	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, global_status);
+	toggle = pmu->global_status ^ global_status;
+	if (global_status & toggle)
+		wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, global_status & toggle);
+	if (pmu->global_status & toggle)
+		wrmsrl(MSR_CORE_PERF_GLOBAL_STATUS_SET, pmu->global_status & toggle);
+
+	wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl_hw);
+
+	kvm_pmu_load_guest_pmcs(vcpu);
+}
+
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
@@ -820,6 +877,8 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.reset = intel_pmu_reset,
 	.deliver_pmi = intel_pmu_deliver_pmi,
 	.cleanup = intel_pmu_cleanup,
+	.put_guest_context = intel_put_guest_context,
+	.load_guest_context = intel_load_guest_context,
 	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_MAX_NR_INTEL_GP_COUNTERS,
 	.MIN_NR_GP_COUNTERS = 1,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 578e5f110b6c..d35afa8d9cbb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10998,6 +10998,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		set_debugreg(0, 7);
 	}
 
+	kvm_pmu_load_guest_context(vcpu);
+
 	guest_timing_enter_irqoff();
 
 	for (;;) {
@@ -11027,6 +11029,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		++vcpu->stat.exits;
 	}
 
+	kvm_pmu_put_guest_context(vcpu);
+
 	/*
 	 * Do this here before restoring debug registers on the host.  And
 	 * since we do this before handling the vmexit, a DR access vmexit
-- 
2.49.0.395.g12beb8f557-goog
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1742837621; x=1743442421; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=wEVPDgzOlF/JMd1kCpp/+G+/XVxiSfG9ZpDB6GPsH7w=; b=pWdIytrKGuvU8gmfl8qv2Z4TTK75FrMMYRq77MToIiQBgzT2zexuQw6PhOzh30BCF/ JnqsK43be8E60ZEJDmWUTvCqaftdnkba2PF028Uzet8s96bdmRa2uYgCfdyzYuUn3P61 hda2hTpvpmft1KcD+JsQu/fQ5flNIk1N/GMnWoBMzsgC0YwgryJCrbszxRDqvshiEvjF E4wnXtMCApK/L9L1Qy1UmWay6gf56wfPfQbPtsjNREWkuAz+rUj6nuEuxWLWQk3r8IJu 6AHSZZURF3MzAd9+Ip/sZ9cFMMPAYWJx/6su1byaDG21s2UDHxu+Tef59hX8Ak22byCM dUQg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1742837621; x=1743442421; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=wEVPDgzOlF/JMd1kCpp/+G+/XVxiSfG9ZpDB6GPsH7w=; b=T5g5Kiuk6GJ6WdbDUqsTSPq0ebfK4kNoQgOnigQGvtRpxC2NakdK/tLL8WlUsuImPa f2hVchv0CuIO3BWWeOO7h40Nm4hUpDzj0aGsgHQFdVyN330IYEtx13zObWt1mrPhjO0e dFslXD1LB5rIjU2Dq9T8QyCTCxAgCbB2bvr/eo525CtmWz1Ey4rk5LZwssQ7in0s/JAa 7YYWaGQHuhAiteGS85MbEHk+GwNS8rUSBA6u+xkmgKrlFqFcu/9npjEPbdp/Rd4Tu86H csEBmv98OBp/f7qKPtC2SEYlacUWOdEPZ3xHPl5tFYLFMc/WRvloNPP+Db/BRgglYpc9 VUUw== X-Forwarded-Encrypted: i=1; AJvYcCVYnFH34Uc2c8cXzwN0u00Yd4STT18ZgtqCZ48qTbu2habcGcW6p7CxxutTruyuflD9+YuJp6lldC3SDmw=@vger.kernel.org X-Gm-Message-State: AOJu0YzJqjyHZmeTD7p8MVkuiUXvWZ98fwwux4+/lbyRIFGUHiWEX2Ru S3S75orybJJZScwfzHBBUp4UENook2L3h70X0dqdLAEgmUMTbzFiDmJohOx3+qs4suINK7yT9JD wGyV46A== X-Google-Smtp-Source: AGHT+IGI6p2Pj6Fn8aHHQEXMJiBqaTwbmqGsVWU5LIiIRgo44ugxKecCqHQMA+tuCDEL1Vi+10l8dSdDlnrF X-Received: from plblc15.prod.google.com ([2002:a17:902:fa8f:b0:223:67ac:e082]) (user=mizhang job=prod-delivery.src-stubby-dispatcher) by 2002:a17:903:320b:b0:224:10a2:cae7 with SMTP id d9443c01a7336-22780e42056mr246605315ad.40.1742837620663; Mon, 24 Mar 2025 10:33:40 
-0700 (PDT) Reply-To: Mingwei Zhang Date: Mon, 24 Mar 2025 17:31:10 +0000 In-Reply-To: <20250324173121.1275209-1-mizhang@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250324173121.1275209-1-mizhang@google.com> X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog Message-ID: <20250324173121.1275209-31-mizhang@google.com> Subject: [PATCH v4 30/38] KVM: x86/pmu: Handle emulated instruction for mediated vPMU From: Mingwei Zhang To: Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , Sean Christopherson , Paolo Bonzini Cc: Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Liang@google.com, Kan , "H. Peter Anvin" , linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, Mingwei Zhang , Yongwei Ma , Xiong Zhang , Dapeng Mi , Jim Mattson , Sandipan Das , Zide Chen , Eranian Stephane , Das Sandipan , Shukla Manali , Nikunj Dadhania Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Dapeng Mi Mediated vPMU needs to accumulate the emulated instructions into counter and load the counter into HW at vm-entry. Moreover, if the accumulation leads to counter overflow, KVM needs to update GLOBAL_STATUS and inject PMI into guest as well. 
Suggested-by: Sean Christopherson Signed-off-by: Dapeng Mi Signed-off-by: Mingwei Zhang --- arch/x86/kvm/pmu.c | 44 ++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 42 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 68f203454bbc..f71009ec92cf 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -911,10 +911,50 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu) kvm_pmu_reset(vcpu); } =20 +static bool pmc_pmi_enabled(struct kvm_pmc *pmc) +{ + struct kvm_pmu *pmu =3D pmc_to_pmu(pmc); + u8 fixed_ctr_ctrl; + bool pmi_enabled; + + if (pmc_is_gp(pmc)) { + pmi_enabled =3D pmc->eventsel & ARCH_PERFMON_EVENTSEL_INT; + } else { + fixed_ctr_ctrl =3D fixed_ctrl_field(pmu->fixed_ctr_ctrl, + pmc->idx - KVM_FIXED_PMC_BASE_IDX); + pmi_enabled =3D fixed_ctr_ctrl & INTEL_FIXED_0_ENABLE_PMI; + } + + return pmi_enabled; +} + static void kvm_pmu_incr_counter(struct kvm_pmc *pmc) { - pmc->emulated_counter++; - kvm_pmu_request_counter_reprogram(pmc); + struct kvm_vcpu *vcpu =3D pmc->vcpu; + + /* + * For perf-based PMUs, accumulate software-emulated events separately + * from pmc->counter, as pmc->counter is offset by the count of the + * associated perf event. Request reprogramming, which will consult + * both emulated and hardware-generated events to detect overflow. + */ + if (!kvm_mediated_pmu_enabled(vcpu)) { + pmc->emulated_counter++; + kvm_pmu_request_counter_reprogram(pmc); + return; + } + + /* + * For mediated PMUs, pmc->counter is updated when the vCPU's PMU is + * put, and will be loaded into hardware when the PMU is loaded. Simply + * increment the counter and signal overflow if it wraps to zero. 
+	 */
+	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
+	if (!pmc->counter) {
+		pmc_to_pmu(pmc)->global_status |= BIT_ULL(pmc->idx);
+		if (pmc_pmi_enabled(pmc))
+			kvm_make_request(KVM_REQ_PMI, vcpu);
+	}
 }
 
 static inline bool cpl_is_matched(struct kvm_pmc *pmc)
-- 
2.49.0.395.g12beb8f557-goog
From nobody Fri Dec 19 17:37:57 2025
Date: Mon, 24 Mar 2025 17:31:11 +0000
Message-ID: <20250324173121.1275209-32-mizhang@google.com>
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Subject: [PATCH v4 31/38] KVM: nVMX: Add macros to simplify nested MSR
 interception setting
From: Mingwei Zhang

From: Dapeng Mi

Add nested_vmx_merge_msr_bitmaps_xxx() macros to simplify setting up
nested MSR interception. No functional change intended.
Suggested-by: Sean Christopherson
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/vmx/nested.c | 35 +++++++++++++++++++----------------
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ecf72394684d..cf557acf91f8 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -613,6 +613,19 @@ static inline void nested_vmx_set_intercept_for_msr(struct vcpu_vmx *vmx,
 					 msr_bitmap_l0, msr);
 }
 
+#define nested_vmx_merge_msr_bitmaps(msr, type) \
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, \
+					 msr_bitmap_l0, msr, type)
+
+#define nested_vmx_merge_msr_bitmaps_read(msr) \
+	nested_vmx_merge_msr_bitmaps(msr, MSR_TYPE_R)
+
+#define nested_vmx_merge_msr_bitmaps_write(msr) \
+	nested_vmx_merge_msr_bitmaps(msr, MSR_TYPE_W)
+
+#define nested_vmx_merge_msr_bitmaps_rw(msr) \
+	nested_vmx_merge_msr_bitmaps(msr, MSR_TYPE_RW)
+
 /*
  * Merge L0's and L1's MSR bitmap, return false to indicate that
  * we do not use the hardware.
@@ -696,23 +709,13 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	 * other runtime changes to vmcs01's bitmap, e.g. dynamic pass-through.
 	 */
 #ifdef CONFIG_X86_64
-	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
-					 MSR_FS_BASE, MSR_TYPE_RW);
-
-	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
-					 MSR_GS_BASE, MSR_TYPE_RW);
-
-	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
-					 MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+	nested_vmx_merge_msr_bitmaps_rw(MSR_FS_BASE);
+	nested_vmx_merge_msr_bitmaps_rw(MSR_GS_BASE);
+	nested_vmx_merge_msr_bitmaps_rw(MSR_KERNEL_GS_BASE);
 #endif
-	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
-					 MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);
-
-	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
-					 MSR_IA32_PRED_CMD, MSR_TYPE_W);
-
-	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
-					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
+	nested_vmx_merge_msr_bitmaps_rw(MSR_IA32_SPEC_CTRL);
+	nested_vmx_merge_msr_bitmaps_write(MSR_IA32_PRED_CMD);
+	nested_vmx_merge_msr_bitmaps_write(MSR_IA32_FLUSH_CMD);
 
 	kvm_vcpu_unmap(vcpu, &map);
 
-- 
2.49.0.395.g12beb8f557-goog
From nobody Fri Dec 19 17:37:57 2025
Date: Mon, 24 Mar 2025 17:31:12 +0000
Message-ID: <20250324173121.1275209-33-mizhang@google.com>
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Subject: [PATCH v4 32/38] KVM: nVMX: Add nested virtualization support for
 mediated PMU
From: Mingwei Zhang

Add nested virtualization support for the mediated PMU by combining the
MSR interception bitmaps of vmcs01 and vmcs12.

One might argue that nested virtualization already works for the mediated
PMU even without this patch: L1 sees PerfMon v2 and, if it is Linux, will
fall back to the legacy vPMU implementation. However, no assumption about
L1 is safe; L1 may not even be Linux. If both L0 and L1 pass through the
PMU MSRs, the correct behavior is to let L2's MSR accesses reach the
hardware MSRs directly, since both L0 and L1 pass the accesses through.

In the current implementation, without any nested handling, KVM always
sets the MSR interception bits in vmcs02, so L0 emulates all MSR reads
and writes for L2. This causes errors, because the mediated vPMU
implements set_msr() and get_msr() only for counter accesses from the
VMM side.

Fix the issue by setting up the correct MSR interception for the PMU MSRs.

Signed-off-by: Mingwei Zhang
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/nested.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index cf557acf91f8..dbec40cb55bc 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -626,6 +626,36 @@ static inline void nested_vmx_set_intercept_for_msr(struct vcpu_vmx *vmx,
 #define nested_vmx_merge_msr_bitmaps_rw(msr) \
 	nested_vmx_merge_msr_bitmaps(msr, MSR_TYPE_RW)
 
+/*
+ * Disable interception of PMU MSRs for the nested VM if both L0 and L1
+ * use the mediated vPMU.
+ */
+static void nested_vmx_merge_pmu_msr_bitmaps(struct kvm_vcpu *vcpu,
+					     unsigned long *msr_bitmap_l1,
+					     unsigned long *msr_bitmap_l0)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	int i;
+
+	if (!kvm_mediated_pmu_enabled(vcpu))
+		return;
+
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		nested_vmx_merge_msr_bitmaps_rw(MSR_ARCH_PERFMON_EVENTSEL0 + i);
+		nested_vmx_merge_msr_bitmaps_rw(MSR_IA32_PERFCTR0 + i);
+		nested_vmx_merge_msr_bitmaps_rw(MSR_IA32_PMC0 + i);
+	}
+
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
+		nested_vmx_merge_msr_bitmaps_rw(MSR_CORE_PERF_FIXED_CTR0 + i);
+
+	nested_vmx_merge_msr_bitmaps_rw(MSR_CORE_PERF_FIXED_CTR_CTRL);
+	nested_vmx_merge_msr_bitmaps_rw(MSR_CORE_PERF_GLOBAL_CTRL);
+	nested_vmx_merge_msr_bitmaps_read(MSR_CORE_PERF_GLOBAL_STATUS);
+	nested_vmx_merge_msr_bitmaps_write(MSR_CORE_PERF_GLOBAL_OVF_CTRL);
+}
+
 /*
  * Merge L0's and L1's MSR bitmap, return false to indicate that
  * we do not use the hardware.
@@ -717,6 +747,8 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_merge_msr_bitmaps_write(MSR_IA32_PRED_CMD);
 	nested_vmx_merge_msr_bitmaps_write(MSR_IA32_FLUSH_CMD);
 
+	nested_vmx_merge_pmu_msr_bitmaps(vcpu, msr_bitmap_l1, msr_bitmap_l0);
+
 	kvm_vcpu_unmap(vcpu, &map);
 
 	vmx->nested.force_msr_bitmap_recalc = false;
-- 
2.49.0.395.g12beb8f557-goog
From nobody Fri Dec 19 17:37:57 2025
Date: Mon, 24 Mar 2025 17:31:13 +0000
Message-ID: <20250324173121.1275209-34-mizhang@google.com>
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Subject: [PATCH v4 33/38] perf/x86/intel: Support PERF_PMU_CAP_MEDIATED_VPMU
From: Mingwei Zhang

From: Kan Liang

Apply PERF_PMU_CAP_MEDIATED_VPMU to the Intel core PMU. The capability
only indicates that the perf side of the core PMU is ready to support
the mediated vPMU. Beyond this capability, the hypervisor still needs to
check the PMU version and other capabilities to decide whether to enable
the mediated vPMU.
Signed-off-by: Kan Liang
Signed-off-by: Mingwei Zhang
---
 arch/x86/events/intel/core.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index e86333eee266..ab74fdfa6a66 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4943,6 +4943,8 @@ static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
 	else
 		pmu->intel_ctrl &= ~(1ULL << GLOBAL_CTRL_EN_PERF_METRICS);
 
+	pmu->pmu.capabilities |= PERF_PMU_CAP_MEDIATED_VPMU;
+
 	intel_pmu_check_event_constraints(pmu->event_constraints,
 					  pmu->cntr_mask64,
 					  pmu->fixed_cntr_mask64,
@@ -6535,6 +6537,9 @@ __init int intel_pmu_init(void)
 		pr_cont(" AnyThread deprecated, ");
 	}
 
+	/* The perf side of core PMU is ready to support the mediated vPMU. */
+	x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_MEDIATED_VPMU;
+
 	/*
 	 * Install the hw-cache-events table:
 	 */
-- 
2.49.0.395.g12beb8f557-goog
From nobody Fri Dec 19 17:37:57 2025
Date: Mon, 24 Mar 2025 17:31:14 +0000
Message-ID: <20250324173121.1275209-35-mizhang@google.com>
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
Subject: [PATCH v4 34/38] perf/x86/amd: Support PERF_PMU_CAP_MEDIATED_VPMU
 for AMD host
From: Mingwei Zhang

From: Sandipan Das

Apply the PERF_PMU_CAP_MEDIATED_VPMU flag to version 2 and later
implementations of the core PMU. Aside from having Global Control and
Status registers, virtualizing the PMU with the passthrough model
requires an interface to set or clear the overflow bits in the Global
Status MSRs while saving or restoring the PMU context of a vCPU.
PerfMonV2-capable hardware has additional MSRs for this purpose, namely
PerfCntrGlobalStatusSet and PerfCntrGlobalStatusClr, making it suitable
for use with the mediated vPMU.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/events/amd/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 30d6ceb4c8ad..a8b537dd2ddb 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -1433,6 +1433,8 @@ static int __init amd_core_pmu_init(void)
 
 	amd_pmu_global_cntr_mask = x86_pmu.cntr_mask64;
 
+	x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_MEDIATED_VPMU;
+
 	/* Update PMC handling functions */
 	x86_pmu.enable_all = amd_pmu_v2_enable_all;
 	x86_pmu.disable_all = amd_pmu_v2_disable_all;
-- 
2.49.0.395.g12beb8f557-goog
smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742837630; cv=none; b=SDP0EurJT+ubXm7OC57FFYOek9LtGjQuaejdMgptsV5Q5+FVxMcHcDbSzrlLoNKCU9hPd3ZyWiWmgdjIwtqoYqR76NOpVvcSGsXWMSYmzby5i5F0QwN9IQzibiE4yR7/ChMi4gLJFLnh92qd+BxR7XV0XCFtuaqiYjA+AAQkpMc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742837630; c=relaxed/simple; bh=zOnGJniEN3WXnc3VeKLo8SjLrISpK18TNlsPROxYixw=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=A1iWj6Irphy3WqnDKQ+HPFMU9urmAgTgWjBiKEMjZL8AtcUUlShLZGki04JBTyBIC6wwnWMV2CXw4HMowAH2qDYtcvymHBoHg60KudqVZMUABPgtN33LQnVfyp7Fr4jfC/EkZi+gMhtAY73grFvsliqlFuWyc+lIUBYGiGaPaiE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--mizhang.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=2Z+tr33A; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--mizhang.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="2Z+tr33A" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-2241e7e3addso66344235ad.1 for ; Mon, 24 Mar 2025 10:33:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1742837629; x=1743442429; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=+5INsqPwwbOfqQ7eFWXO61jTPMwL6br0niH/ojJ3Zxg=; b=2Z+tr33An67SC8ci0lyW5v2i47HRpk9AD90kJV5uWkd9NZ++1Cw3QXrHiENgKc1Zd3 1MFtikti0MqVQfiN5o9LK7gf8SSrgjkXyu2Q4gA46IriC7OHyaXX+vDWMe8ieP1Wl3OO 
4do0/yFHX9CGSlTgmeowLs5nt5fyu31cN9Ds4XbGHwNjtRtNuj2a8V2m+xwsh3i8ipNI 2Ke3ZTw9zDdJUSxmBWP7zfmBpZ/aiIY/vGSct3O9TBL+k8K+hzPou/J2X4C61Bt96HcF JEaStGf5iTx8SJkF2aiPlIvYldJf63aIx+s6JM7p9eA3Rc/pI1Ph6gfmufICFgcCOneg OmXw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1742837629; x=1743442429; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=+5INsqPwwbOfqQ7eFWXO61jTPMwL6br0niH/ojJ3Zxg=; b=PvoeuH2B/UDtiu7T0dX23Qms4C5bgCaLxyklIj2UuS72wKPu7IlS5E+WvmlgOVPZ0y Aggu2M//3hVQ8D4S5JRaopJwtV3Hx3GL/zYIU05Ogyq/HRnDFPMD8JopdI2P8c4SZeQh kWO2oayDnFrxUvas97OI1JBml0oUhvNMNpTs+wkOldUVvD9r+Bx2sE1CcFRPjEP9seoA cVgM3AhRTycIq4SFQndKqBh4s4DdsNbq6cFRjezJf/eJoOFXTPw22bdtqG618z2423zF zBiGyocc7YxI/xDprlzss4kQsemdu95g+q5I9Wwk0w9+OHhn+1TrddW4dJRkIw6yjqce J5cg== X-Forwarded-Encrypted: i=1; AJvYcCXUxQS75Oaz2vC/le+7lTEBUXUeRRCaW3ZDLmIW+b6yFT5kE50KrbLPg0Mub1now+Yb6pHmRBrdb88/d4Y=@vger.kernel.org X-Gm-Message-State: AOJu0YzCLSp77a4TGBYUl50mQpiKPkunKi9d93RSmWiqEr8TmOjhmYoP OGKfcmZHARj3VG8Ea5z3BVEs3m+pEO6YrmBKfOSMyR7i59WG7rkzWl8s4NlqhEJnGUziNHoXTW7 XFM5Sag== X-Google-Smtp-Source: AGHT+IHU2CykiJSiQvxXiEw9anmIf5P5FGVKtn7cQ47VHoNwLlgG0OkW/k4wbNcHBidn2YUs/QsPYppALFJG X-Received: from plgu5.prod.google.com ([2002:a17:902:e805:b0:223:58e2:570d]) (user=mizhang job=prod-delivery.src-stubby-dispatcher) by 2002:a17:902:d488:b0:224:26fd:82e5 with SMTP id d9443c01a7336-22780e29ebfmr212668755ad.48.1742837628794; Mon, 24 Mar 2025 10:33:48 -0700 (PDT) Reply-To: Mingwei Zhang Date: Mon, 24 Mar 2025 17:31:15 +0000 In-Reply-To: <20250324173121.1275209-1-mizhang@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250324173121.1275209-1-mizhang@google.com> X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog Message-ID: <20250324173121.1275209-36-mizhang@google.com> Subject: [PATCH v4 
 35/38] KVM: x86/pmu: Expose enable_mediated_pmu parameter to user space
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
	Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	Kan Liang, "H. Peter Anvin", linux-perf-users@vger.kernel.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang,
	Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian,
	Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

Expose the enable_mediated_pmu module parameter to user space, so that
users can enable or disable the mediated vPMU on demand.

Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/svm.c | 2 ++
 arch/x86/kvm/vmx/vmx.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index bff351992468..a7ccac624dd3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -265,6 +265,8 @@ module_param(intercept_smi, bool, 0444);
 bool vnmi = true;
 module_param(vnmi, bool, 0444);
 
+module_param(enable_mediated_pmu, bool, 0444);
+
 static bool svm_gp_erratum_intercept = true;
 
 static u8 rsm_ins_bytes[] = "\x0f\xaa";
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7bb16bed08da..af9e7b917335 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -147,6 +147,8 @@ module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
 extern bool __read_mostly allow_smaller_maxphyaddr;
 module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
 
+module_param(enable_mediated_pmu, bool, 0444);
+
 #define KVM_VM_CR0_ALWAYS_OFF	(X86_CR0_NW | X86_CR0_CD)
 #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE
 #define KVM_VM_CR0_ALWAYS_ON	\
-- 
2.49.0.395.g12beb8f557-goog

From nobody
Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:31:16 +0000
In-Reply-To:
 <20250324173121.1275209-1-mizhang@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20250324173121.1275209-1-mizhang@google.com>
X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog
Message-ID: <20250324173121.1275209-37-mizhang@google.com>
Subject: [PATCH v4 36/38] KVM: selftests: Add mediated vPMU support for pmu tests
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
	Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	Kan Liang, "H. Peter Anvin", linux-perf-users@vger.kernel.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang,
	Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian,
	Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

The mediated vPMU must be enabled via the KVM_CAP_PMU_CAPABILITY ioctl.
Add a helper, vm_create_with_one_vcpu_with_pmu(), that creates a VM with
the PMU capability enabled, and use it in place of the
vm_create_with_one_vcpu() helper in the pmu tests.
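[Editor's note: a minimal, self-contained sketch of the capability gate the new helper performs, assuming KVM_PMU_CAP_DISABLE is bit 0 as defined in <linux/kvm.h>; `cap` stands in for the result of kvm_check_cap(KVM_CAP_PMU_CAPABILITY) and pmu_cap_usable() is a hypothetical name, not part of the patch.]

```c
#include <assert.h>

/* Assumed value of the capability bit from <linux/kvm.h>. */
#define KVM_PMU_CAP_DISABLE (1 << 0)

/*
 * Mirrors the check at the top of vm_create_with_one_vcpu_with_pmu():
 * the helper returns NULL (no VM) unless KVM_CAP_PMU_CAPABILITY reports
 * KVM_PMU_CAP_DISABLE, i.e. unless the PMU capability can be toggled
 * per-VM at all.
 */
static int pmu_cap_usable(int cap)
{
	return (cap & KVM_PMU_CAP_DISABLE) != 0;
}
```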
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 .../testing/selftests/kvm/include/kvm_util.h  |  3 +++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 23 +++++++++++++++++++
 .../selftests/kvm/x86/pmu_counters_test.c     |  4 +++-
 .../selftests/kvm/x86/pmu_event_filter_test.c |  8 ++++---
 4 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 4c4e5a847f67..a73b0b98be5e 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -961,6 +961,9 @@ static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape
 	return __vm_create_shape_with_one_vcpu(shape, vcpu, 0, guest_code);
 }
 
+struct kvm_vm *vm_create_with_one_vcpu_with_pmu(struct kvm_vcpu **vcpu,
+						void *guest_code);
+
 struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
 
 void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 33fefeb3ca44..18143ec2e751 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -545,6 +545,29 @@ struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm)
 	return vm_vcpu_recreate(vm, 0);
}
 
+struct kvm_vm *vm_create_with_one_vcpu_with_pmu(struct kvm_vcpu **vcpu,
+						void *guest_code)
+{
+	struct kvm_vm *vm;
+	int r;
+
+	r = kvm_check_cap(KVM_CAP_PMU_CAPABILITY);
+	if (!(r & KVM_PMU_CAP_DISABLE))
+		return NULL;
+
+	vm = vm_create(1);
+
+	/*
+	 * KVM_CAP_PMU_CAPABILITY ioctl must be explicitly called to enable
+	 * mediated vPMU.
+	 */
+	vm_enable_cap(vm, KVM_CAP_PMU_CAPABILITY, !KVM_PMU_CAP_DISABLE);
+
+	*vcpu = vm_vcpu_add(vm, 0, guest_code);
+
+	return vm;
+}
+
 void kvm_pin_this_task_to_pcpu(uint32_t pcpu)
 {
 	cpu_set_t mask;
diff --git a/tools/testing/selftests/kvm/x86/pmu_counters_test.c b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
index 698cb36989db..441c66f314fb 100644
--- a/tools/testing/selftests/kvm/x86/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
@@ -40,7 +40,9 @@ static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
 {
 	struct kvm_vm *vm;
 
-	vm = vm_create_with_one_vcpu(vcpu, guest_code);
+	vm = vm_create_with_one_vcpu_with_pmu(vcpu, guest_code);
+	assert(vm);
+
 	sync_global_to_guest(vm, kvm_pmu_version);
 
 	/*
diff --git a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
index c15513cd74d1..1c7d265a0003 100644
--- a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
@@ -822,8 +822,9 @@ static void test_fixed_counter_bitmap(void)
 	 * fixed performance counters.
 	 */
 	for (idx = 0; idx < nr_fixed_counters; idx++) {
-		vm = vm_create_with_one_vcpu(&vcpu,
-					     intel_run_fixed_counter_guest_code);
+		vm = vm_create_with_one_vcpu_with_pmu(&vcpu,
+						      intel_run_fixed_counter_guest_code);
+		assert(vm);
 		vcpu_args_set(vcpu, 1, idx);
 		__test_fixed_counter_bitmap(vcpu, idx, nr_fixed_counters);
 		kvm_vm_free(vm);
@@ -843,7 +844,8 @@ int main(int argc, char *argv[])
 	TEST_REQUIRE(use_intel_pmu() || use_amd_pmu());
 	guest_code = use_intel_pmu() ? intel_guest_code : amd_guest_code;
 
-	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+	vm = vm_create_with_one_vcpu_with_pmu(&vcpu, guest_code);
+	assert(vm);
 
 	TEST_REQUIRE(sanity_check_pmu(vcpu));
 
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:31:17 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20250324173121.1275209-1-mizhang@google.com>
X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog
Message-ID: <20250324173121.1275209-38-mizhang@google.com>
Subject: [PATCH v4 37/38] KVM: Selftests: Support mediated vPMU for vmx_pmu_caps_test
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
	Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	Kan Liang, "H. Peter Anvin", linux-perf-users@vger.kernel.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang,
	Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian,
	Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

Define a KVM_ONE_VCPU_PMU_TEST_SUITE() macro, which calls
vm_create_with_one_vcpu_with_pmu() to create a VM with the mediated vPMU
enabled, so that vmx_pmu_caps_test also covers mediated vPMU validation.
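[Editor's note: a miniature, self-contained imitation of the suite-macro pattern used here, for readers unfamiliar with the harness. A single macro stamps out a fixture struct plus setup/teardown functions for a suite name via token pasting. All names below (DEFINE_SUITE, vcpu_ready, demo) are illustrative only; the real macros live in kvm_test_harness.h / kselftest_harness.h.]

```c
#include <assert.h>

/*
 * Token-pasting macro that generates, for a given suite name, a fixture
 * type and setup/teardown functions, analogous to how
 * KVM_ONE_VCPU_PMU_TEST_SUITE() expands to FIXTURE(), FIXTURE_SETUP()
 * and FIXTURE_TEARDOWN() definitions.
 */
#define DEFINE_SUITE(name)						\
	struct name##_fixture { int vcpu_ready; };			\
	static void name##_setup(struct name##_fixture *f)		\
	{ f->vcpu_ready = 1; /* stands in for VM + vCPU creation */ }	\
	static void name##_teardown(struct name##_fixture *f)		\
	{ f->vcpu_ready = 0; /* stands in for kvm_vm_free() */ }

/* One invocation defines demo_fixture, demo_setup() and demo_teardown(). */
DEFINE_SUITE(demo)
```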
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 .../selftests/kvm/include/kvm_test_harness.h        | 13 +++++++++++++
 tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c |  2 +-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_test_harness.h b/tools/testing/selftests/kvm/include/kvm_test_harness.h
index 8f7c6858e8e2..4efde79708ce 100644
--- a/tools/testing/selftests/kvm/include/kvm_test_harness.h
+++ b/tools/testing/selftests/kvm/include/kvm_test_harness.h
@@ -23,6 +23,19 @@
 	kvm_vm_free(self->vcpu->vm);					\
 }
 
+#define KVM_ONE_VCPU_PMU_TEST_SUITE(name)				\
+	FIXTURE(name) {							\
+		struct kvm_vcpu *vcpu;					\
+	};								\
+									\
+	FIXTURE_SETUP(name) {						\
+		(void)vm_create_with_one_vcpu_with_pmu(&self->vcpu, NULL); \
+	}								\
+									\
+	FIXTURE_TEARDOWN(name) {					\
+		kvm_vm_free(self->vcpu->vm);				\
+	}
+
 #define KVM_ONE_VCPU_TEST(suite, test, guestcode)			\
 static void __suite##_##test(struct kvm_vcpu *vcpu);			\
 
diff --git a/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c b/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
index a1f5ff45d518..d23610131acb 100644
--- a/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
@@ -73,7 +73,7 @@ static void guest_code(uint64_t current_val)
 	GUEST_DONE();
 }
 
-KVM_ONE_VCPU_TEST_SUITE(vmx_pmu_caps);
+KVM_ONE_VCPU_PMU_TEST_SUITE(vmx_pmu_caps);
 
 /*
  * Verify that guest WRMSRs to PERF_CAPABILITIES #GP regardless of the value
-- 
2.49.0.395.g12beb8f557-goog

From nobody Fri Dec 19 17:37:57 2025
Reply-To: Mingwei Zhang
Date: Mon, 24 Mar 2025 17:31:18 +0000
In-Reply-To: <20250324173121.1275209-1-mizhang@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20250324173121.1275209-1-mizhang@google.com>
X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog
Message-ID: <20250324173121.1275209-39-mizhang@google.com>
Subject: [PATCH v4 38/38] KVM: Selftests: Fix pmu_counters_test error for mediated vPMU
From: Mingwei Zhang
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho
	de Melo, Namhyung Kim, Sean Christopherson, Paolo Bonzini
Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	Kan Liang, "H. Peter Anvin", linux-perf-users@vger.kernel.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Mingwei Zhang, Yongwei Ma, Xiong Zhang,
	Dapeng Mi, Jim Mattson, Sandipan Das, Zide Chen, Stephane Eranian,
	Manali Shukla, Nikunj Dadhania
Content-Type: text/plain; charset="utf-8"

From: Dapeng Mi

Since commit f8905c638eb7 ("KVM: x86/pmu: Check PMU cpuid configuration
from user space"), KVM returns an error for the mediated vPMU when the
PMU version configured by user space is larger than the maximum PMU
version KVM supports, or when the fixed counter bitmap is configured
incorrectly. These stricter checks make pmu_counters_test fail, so limit
pmu_counters_test to the PMU versions KVM supports when the mediated
vPMU is enabled, and only validate a fixed counter bitmap of 0 when the
PMU version is less than 5.
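[Editor's note: an illustration only, not part of the patch. The version-selection change in test_intel_counters() reduces to the following, where `mediated` stands for kvm_is_mediated_pmu_enabled() and max_test_pmu_version() is a hypothetical helper name.]

```c
#include <assert.h>

/*
 * With the mediated vPMU, cap testing at the KVM-reported PMU version,
 * since KVM now rejects anything larger; otherwise keep the old behavior
 * of testing up to at least PMU v5.
 */
static unsigned char max_test_pmu_version(unsigned char kvm_pmu_version,
					  int mediated)
{
	if (mediated)
		return kvm_pmu_version;
	return kvm_pmu_version > 5 ? kvm_pmu_version : 5;
}
```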
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 .../selftests/kvm/include/x86/processor.h |  8 ++++++++
 .../selftests/kvm/x86/pmu_counters_test.c | 20 ++++++++++++++++---
 2 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index d60da8966772..7db34f48427a 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1311,6 +1311,14 @@ static inline bool kvm_is_pmu_enabled(void)
 	return get_kvm_param_bool("enable_pmu");
 }
 
+static inline bool kvm_is_mediated_pmu_enabled(void)
+{
+	if (host_cpu_is_intel)
+		return get_kvm_intel_param_bool("enable_mediated_pmu");
+	else
+		return get_kvm_amd_param_bool("enable_mediated_pmu");
+}
+
 static inline bool kvm_is_forced_emulation_enabled(void)
 {
 	return !!get_kvm_param_integer("force_emulation_prefix");
diff --git a/tools/testing/selftests/kvm/x86/pmu_counters_test.c b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
index 441c66f314fb..4745f82ce860 100644
--- a/tools/testing/selftests/kvm/x86/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
@@ -564,8 +564,14 @@ static void test_intel_counters(void)
 	 * Test up to PMU v5, which is the current maximum version defined by
 	 * Intel, i.e. is the last version that is guaranteed to be backwards
 	 * compatible with KVM's existing behavior.
+	 *
+	 * For the mediated vPMU, however, limit max_pmu_version to KVM's
+	 * maximum supported PMU version, since KVM rejects larger PMU
+	 * versions to keep the guest from directly manipulating unsupported
+	 * or disallowed PMU MSRs.
 	 */
-	uint8_t max_pmu_version = max_t(typeof(pmu_version), pmu_version, 5);
+	uint8_t max_pmu_version = kvm_is_mediated_pmu_enabled() ?
+		pmu_version : max_t(typeof(pmu_version), pmu_version, 5);
 
 	/*
 	 * Detect the existence of events that aren't supported by selftests.
@@ -622,8 +628,16 @@ static void test_intel_counters(void)
 			pr_info("Testing fixed counters, PMU version %u, perf_caps = %lx\n",
 				v, perf_caps[i]);
 			for (j = 0; j <= nr_fixed_counters; j++) {
-				for (k = 0; k <= (BIT(nr_fixed_counters) - 1); k++)
-					test_fixed_counters(v, perf_caps[i], j, k);
+				/*
+				 * PMU versions below 5 don't support the fixed
+				 * counter bitmap, so only test a bitmap of 0.
+				 */
+				if (v < 5) {
+					test_fixed_counters(v, perf_caps[i], j, 0);
+				} else {
+					for (k = 0; k <= (BIT(nr_fixed_counters) - 1); k++)
+						test_fixed_counters(v, perf_caps[i], j, k);
+				}
 			}
 		}
 	}
-- 
2.49.0.395.g12beb8f557-goog