[PATCH v6 04/44] perf: Add APIs to create/release mediated guest vPMUs

Sean Christopherson posted 44 patches 1 week, 3 days ago
From: Kan Liang <kan.liang@linux.intel.com>

Currently, exposing PMU capabilities to a KVM guest is done by emulating
guest PMCs via host perf events, i.e. by having KVM be "just" another user
of perf.  As a result, the guest and host are effectively competing for
resources, and emulating guest accesses to vPMU resources requires
expensive actions (expensive relative to the native instruction).  The
overhead and resource competition results in degraded guest performance
and ultimately very poor vPMU accuracy.

To address the issues with the perf-emulated vPMU, introduce a "mediated
vPMU", where the data plane (PMCs and enable/disable knobs) is exposed
directly to the guest, but the control plane (event selectors and access
to fixed counters) is managed by KVM (via MSR interceptions).  To allow
host perf usage of the PMU to (partially) co-exist with KVM/guest usage
of the PMU, KVM and perf will coordinate a world switch between host
perf context and guest vPMU context near VM-Enter/VM-Exit.

Add two exported APIs, perf_{create,release}_mediated_pmu(), to allow KVM
to create and release a mediated PMU instance (per VM).  Because host perf
context will be deactivated while the guest is running, mediated PMU usage
will be mutually exclusive with perf analysis of the guest, i.e. perf
events that do NOT exclude the guest will not behave as expected.

To avoid silent failure of !exclude_guest perf events, disallow creating a
mediated PMU if there are active !exclude_guest events, and on the perf
side, disallow creating new !exclude_guest perf events while there is at
least one active mediated PMU.
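The mutual exclusion described above can be modeled in userspace. The sketch below mirrors the accounting logic of this patch with C11 atomics and a pthread mutex standing in for the kernel's mutex; it is illustrative only (the names mirror the kernel code, but the `exclude_guest` bool parameter and the simplified, always-locked create path are my shorthand, not the kernel implementation):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int nr_include_guest_events;
static atomic_int nr_mediated_pmu_vms;
static pthread_mutex_t perf_mediated_pmu_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Creating a !exclude_guest event fails while any mediated PMU exists. */
static int mediated_pmu_account_event(bool exclude_guest)
{
	int ret = 0;

	if (exclude_guest)
		return 0;

	pthread_mutex_lock(&perf_mediated_pmu_mutex);
	if (atomic_load(&nr_mediated_pmu_vms))
		ret = -EOPNOTSUPP;
	else
		atomic_fetch_add(&nr_include_guest_events, 1);
	pthread_mutex_unlock(&perf_mediated_pmu_mutex);
	return ret;
}

/* Creating a mediated PMU fails while any !exclude_guest event exists. */
static int perf_create_mediated_pmu(void)
{
	int ret = 0;

	pthread_mutex_lock(&perf_mediated_pmu_mutex);
	if (atomic_load(&nr_include_guest_events))
		ret = -EBUSY;
	else
		atomic_fetch_add(&nr_mediated_pmu_vms, 1);
	pthread_mutex_unlock(&perf_mediated_pmu_mutex);
	return ret;
}

static void perf_release_mediated_pmu(void)
{
	atomic_fetch_sub(&nr_mediated_pmu_vms, 1);
}
```

The mutex serializes the two create paths so the cross-checks of the opposite counter cannot race; once either count is elevated, the other side's creation attempt fails deterministically.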

Exempt PMU resources that do not support mediated PMU usage, i.e. that are
outside the scope/view of KVM's vPMU and will not be swapped out while the
guest is running.

Guard mediated PMU with a new kconfig to help readers identify code paths
that are unique to mediated PMU support, and to allow for adding arch-
specific hooks without stubs.  KVM x86 is expected to be the only KVM
architecture to support a mediated PMU in the near future (e.g. arm64 is
trending toward a partitioned PMU implementation), and KVM x86 will select
PERF_GUEST_MEDIATED_PMU unconditionally, i.e. won't need stubs.

Immediately select PERF_GUEST_MEDIATED_PMU when KVM x86 is enabled so that
all paths are compile tested.  Full KVM support is on its way...

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
[sean: add kconfig and WARNing, rewrite changelog, swizzle patch ordering]
Tested-by: Xudong Hao <xudong.hao@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/Kconfig       |  1 +
 include/linux/perf_event.h |  6 +++
 init/Kconfig               |  4 ++
 kernel/events/core.c       | 82 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 93 insertions(+)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 278f08194ec8..d916bd766c94 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -37,6 +37,7 @@ config KVM_X86
 	select SCHED_INFO
 	select PERF_EVENTS
 	select GUEST_PERF_EVENTS
+	select PERF_GUEST_MEDIATED_PMU
 	select HAVE_KVM_MSI
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select HAVE_KVM_NO_POLL
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fd1d91017b99..94f679634ef6 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -305,6 +305,7 @@ struct perf_event_pmu_context;
 #define PERF_PMU_CAP_EXTENDED_HW_TYPE	0x0100
 #define PERF_PMU_CAP_AUX_PAUSE		0x0200
 #define PERF_PMU_CAP_AUX_PREFER_LARGE	0x0400
+#define PERF_PMU_CAP_MEDIATED_VPMU	0x0800
 
 /**
  * pmu::scope
@@ -1914,6 +1915,11 @@ extern int perf_event_account_interrupt(struct perf_event *event);
 extern int perf_event_period(struct perf_event *event, u64 value);
 extern u64 perf_event_pause(struct perf_event *event, bool reset);
 
+#ifdef CONFIG_PERF_GUEST_MEDIATED_PMU
+int perf_create_mediated_pmu(void);
+void perf_release_mediated_pmu(void);
+#endif
+
 #else /* !CONFIG_PERF_EVENTS: */
 
 static inline void *
diff --git a/init/Kconfig b/init/Kconfig
index cab3ad28ca49..45b9ac626829 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -2010,6 +2010,10 @@ config GUEST_PERF_EVENTS
 	bool
 	depends on HAVE_PERF_EVENTS
 
+config PERF_GUEST_MEDIATED_PMU
+	bool
+	depends on GUEST_PERF_EVENTS
+
 config PERF_USE_VMALLOC
 	bool
 	help
diff --git a/kernel/events/core.c b/kernel/events/core.c
index e34112df8b31..cfeea7d330f9 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5657,6 +5657,8 @@ static void __free_event(struct perf_event *event)
 	call_rcu(&event->rcu_head, free_event_rcu);
 }
 
+static void mediated_pmu_unaccount_event(struct perf_event *event);
+
 DEFINE_FREE(__free_event, struct perf_event *, if (_T) __free_event(_T))
 
 /* vs perf_event_alloc() success */
@@ -5666,6 +5668,7 @@ static void _free_event(struct perf_event *event)
 	irq_work_sync(&event->pending_disable_irq);
 
 	unaccount_event(event);
+	mediated_pmu_unaccount_event(event);
 
 	if (event->rb) {
 		/*
@@ -6188,6 +6191,81 @@ u64 perf_event_pause(struct perf_event *event, bool reset)
 }
 EXPORT_SYMBOL_GPL(perf_event_pause);
 
+#ifdef CONFIG_PERF_GUEST_MEDIATED_PMU
+static atomic_t nr_include_guest_events __read_mostly;
+
+static atomic_t nr_mediated_pmu_vms __read_mostly;
+static DEFINE_MUTEX(perf_mediated_pmu_mutex);
+
+/* !exclude_guest event of PMU with PERF_PMU_CAP_MEDIATED_VPMU */
+static inline bool is_include_guest_event(struct perf_event *event)
+{
+	if ((event->pmu->capabilities & PERF_PMU_CAP_MEDIATED_VPMU) &&
+	    !event->attr.exclude_guest)
+		return true;
+
+	return false;
+}
+
+static int mediated_pmu_account_event(struct perf_event *event)
+{
+	if (!is_include_guest_event(event))
+		return 0;
+
+	guard(mutex)(&perf_mediated_pmu_mutex);
+
+	if (atomic_read(&nr_mediated_pmu_vms))
+		return -EOPNOTSUPP;
+
+	atomic_inc(&nr_include_guest_events);
+	return 0;
+}
+
+static void mediated_pmu_unaccount_event(struct perf_event *event)
+{
+	if (!is_include_guest_event(event))
+		return;
+
+	atomic_dec(&nr_include_guest_events);
+}
+
+/*
+ * Currently invoked at VM creation to
+ * - Check whether there are existing !exclude_guest events of PMU with
+ *   PERF_PMU_CAP_MEDIATED_VPMU
+ * - Set nr_mediated_pmu_vms to prevent !exclude_guest event creation on
+ *   PMUs with PERF_PMU_CAP_MEDIATED_VPMU
+ *
+ * This has no impact on PMUs without PERF_PMU_CAP_MEDIATED_VPMU; perf
+ * still owns all of their resources.
+ */
+int perf_create_mediated_pmu(void)
+{
+	guard(mutex)(&perf_mediated_pmu_mutex);
+	if (atomic_inc_not_zero(&nr_mediated_pmu_vms))
+		return 0;
+
+	if (atomic_read(&nr_include_guest_events))
+		return -EBUSY;
+
+	atomic_inc(&nr_mediated_pmu_vms);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(perf_create_mediated_pmu);
+
+void perf_release_mediated_pmu(void)
+{
+	if (WARN_ON_ONCE(!atomic_read(&nr_mediated_pmu_vms)))
+		return;
+
+	atomic_dec(&nr_mediated_pmu_vms);
+}
+EXPORT_SYMBOL_GPL(perf_release_mediated_pmu);
+#else
+static int mediated_pmu_account_event(struct perf_event *event) { return 0; }
+static void mediated_pmu_unaccount_event(struct perf_event *event) {}
+#endif
+
 /*
  * Holding the top-level event's child_mutex means that any
  * descendant process that has inherited this event will block
@@ -13078,6 +13156,10 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 	if (err)
 		return ERR_PTR(err);
 
+	err = mediated_pmu_account_event(event);
+	if (err)
+		return ERR_PTR(err);
+
 	/* symmetric to unaccount_event() in _free_event() */
 	account_event(event);
 
-- 
2.52.0.223.gf5cc29aaa4-goog
Re: [PATCH v6 04/44] perf: Add APIs to create/release mediated guest vPMUs
Posted by Peter Zijlstra 1 week ago
On Fri, Dec 05, 2025 at 04:16:40PM -0800, Sean Christopherson wrote:

> +static atomic_t nr_include_guest_events __read_mostly;
> +
> +static atomic_t nr_mediated_pmu_vms __read_mostly;
> +static DEFINE_MUTEX(perf_mediated_pmu_mutex);

> +static int mediated_pmu_account_event(struct perf_event *event)
> +{
> +	if (!is_include_guest_event(event))
> +		return 0;
> +
> +	guard(mutex)(&perf_mediated_pmu_mutex);
> +
> +	if (atomic_read(&nr_mediated_pmu_vms))
> +		return -EOPNOTSUPP;
> +
> +	atomic_inc(&nr_include_guest_events);
> +	return 0;
> +}
> +
> +static void mediated_pmu_unaccount_event(struct perf_event *event)
> +{
> +	if (!is_include_guest_event(event))
> +		return;
> +
> +	atomic_dec(&nr_include_guest_events);
> +}

> +int perf_create_mediated_pmu(void)
> +{
> +	guard(mutex)(&perf_mediated_pmu_mutex);
> +	if (atomic_inc_not_zero(&nr_mediated_pmu_vms))
> +		return 0;
> +
> +	if (atomic_read(&nr_include_guest_events))
> +		return -EBUSY;
> +
> +	atomic_inc(&nr_mediated_pmu_vms);
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(perf_create_mediated_pmu);
> +
> +void perf_release_mediated_pmu(void)
> +{
> +	if (WARN_ON_ONCE(!atomic_read(&nr_mediated_pmu_vms)))
> +		return;
> +
> +	atomic_dec(&nr_mediated_pmu_vms);
> +}
> +EXPORT_SYMBOL_GPL(perf_release_mediated_pmu);

These two things are supposed to be symmetric, but are implemented
differently; what gives?

That is, should not both have the general shape:

	if (atomic_inc_not_zero(&A))
		return 0;

	guard(mutex)(&lock);

	if (atomic_read(&B))
		return -EBUSY;

	atomic_inc(&A);
	return 0;

Similarly, I would imagine both release variants to have the underflow
warn on like:

	if (WARN_ON_ONCE(!atomic_read(&A)))
		return;

	atomic_dec(&A);

Hmm?

Also, EXPORT_SYMBOL_FOR_KVM() ?

I can make these edits when applying, if/when we get to applying. Let me
continue reading.
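Peter's suggested shape is the classic elevated-refcount pattern: if the count is already nonzero, a previous acquirer has already validated the invariant, so a lockless increment suffices; only the 0 -> 1 transition needs the mutex to check the opposing counter. A userspace sketch of that pattern (illustrative; `A`, `B`, `acquire`, and `release` are the placeholder names from the review above, and the CAS loop stands in for the kernel's atomic_inc_not_zero()):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int A;		/* e.g. nr_mediated_pmu_vms */
static atomic_int B;		/* e.g. nr_include_guest_events */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Userspace equivalent of the kernel's atomic_inc_not_zero(). */
static bool inc_not_zero(atomic_int *v)
{
	int old = atomic_load(v);

	while (old) {
		if (atomic_compare_exchange_weak(v, &old, old + 1))
			return true;
	}
	return false;
}

static int acquire(void)
{
	int ret = 0;

	/* Fast path: N -> N+1 (N > 0) skips the mutex entirely. */
	if (inc_not_zero(&A))
		return 0;

	/* Slow path: only the 0 -> 1 transition checks the invariant. */
	pthread_mutex_lock(&lock);
	if (atomic_load(&B))
		ret = -EBUSY;
	else
		atomic_fetch_add(&A, 1);
	pthread_mutex_unlock(&lock);
	return ret;
}

static void release(void)
{
	/* Underflow here would indicate an unbalanced acquire/release. */
	assert(atomic_load(&A) > 0);
	atomic_fetch_sub(&A, 1);
}
```

Note the slow path must re-check under the mutex rather than trust the failed fast path: the CAS loop only proves the count was zero at some instant, and the mutex is what makes the check of B and the 0 -> 1 increment atomic with respect to concurrent creators of B.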
Re: [PATCH v6 04/44] perf: Add APIs to create/release mediated guest vPMUs
Posted by Sean Christopherson 1 week ago
On Mon, Dec 08, 2025, Peter Zijlstra wrote:
> On Fri, Dec 05, 2025 at 04:16:40PM -0800, Sean Christopherson wrote:
> 
> > +static atomic_t nr_include_guest_events __read_mostly;
> > +
> > +static atomic_t nr_mediated_pmu_vms __read_mostly;
> > +static DEFINE_MUTEX(perf_mediated_pmu_mutex);
> 
> > +static int mediated_pmu_account_event(struct perf_event *event)
> > +{
> > +	if (!is_include_guest_event(event))
> > +		return 0;
> > +
> > +	guard(mutex)(&perf_mediated_pmu_mutex);
> > +
> > +	if (atomic_read(&nr_mediated_pmu_vms))
> > +		return -EOPNOTSUPP;
> > +
> > +	atomic_inc(&nr_include_guest_events);
> > +	return 0;
> > +}
> > +
> > +static void mediated_pmu_unaccount_event(struct perf_event *event)
> > +{
> > +	if (!is_include_guest_event(event))
> > +		return;
> > +
> > +	atomic_dec(&nr_include_guest_events);
> > +}
> 
> > +int perf_create_mediated_pmu(void)
> > +{
> > +	guard(mutex)(&perf_mediated_pmu_mutex);
> > +	if (atomic_inc_not_zero(&nr_mediated_pmu_vms))
> > +		return 0;
> > +
> > +	if (atomic_read(&nr_include_guest_events))
> > +		return -EBUSY;
> > +
> > +	atomic_inc(&nr_mediated_pmu_vms);
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(perf_create_mediated_pmu);
> > +
> > +void perf_release_mediated_pmu(void)
> > +{
> > +	if (WARN_ON_ONCE(!atomic_read(&nr_mediated_pmu_vms)))
> > +		return;
> > +
> > +	atomic_dec(&nr_mediated_pmu_vms);
> > +}
> > +EXPORT_SYMBOL_GPL(perf_release_mediated_pmu);
> 
> These two things are supposed to be symmetric, but are implemented
> differently; what gives?
> 
> That is, should not both have the general shape:
> 
> 	if (atomic_inc_not_zero(&A))
> 		return 0;
> 
> 	guard(mutex)(&lock);
> 
> 	if (atomic_read(&B))
> 		return -EBUSY;
> 
> 	atomic_inc(&A);
> 	return 0;
> 
> Similarly, I would imagine both release variants to have the underflow
> warn on like:
> 
> 	if (WARN_ON_ONCE(!atomic_read(&A)))
> 		return;
> 
> 	atomic_dec(&A);
> 
> Hmm?

IIUC, you're suggesting something like this?  If so, that makes perfect sense to me.

diff --git a/kernel/events/core.c b/kernel/events/core.c
index c6368c64b866..fa2e7b722283 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6356,7 +6356,8 @@ static int mediated_pmu_account_event(struct perf_event *event)
 
 static void mediated_pmu_unaccount_event(struct perf_event *event)
 {
-       if (!is_include_guest_event(event))
+       if (!is_include_guest_event(event) ||
+           WARN_ON_ONCE(!atomic_read(&nr_include_guest_events)))
                return;
 
        atomic_dec(&nr_include_guest_events);

> Also, EXPORT_SYMBOL_FOR_KVM() ?

Ya, for sure.  I posted this against a branch without EXPORT_SYMBOL_FOR_KVM(),
because there are also hard dependencies on the for-6.19 KVM pull requests, and
I didn't want to wait to post until 6.19-rc1 because of the impending winter
break.  Though I also simply forgot about these exports :-(

These could also use EXPORT_SYMBOL_FOR_KVM():

  EXPORT_SYMBOL_FOR_MODULES(perf_load_guest_lvtpc, "kvm");
  EXPORT_SYMBOL_FOR_MODULES(perf_put_guest_lvtpc, "kvm");


> I can make these edits when applying, if/when we get to applying. Let me
> continue reading.
>