Date: Mon, 9 Oct 2023 23:08:47 +0000
In-Reply-To: <20231009230858.3444834-1-rananta@google.com>
References: <20231009230858.3444834-1-rananta@google.com>
Message-ID: <20231009230858.3444834-2-rananta@google.com>
Subject: [PATCH v7 01/12] KVM: arm64: PMU: Introduce helpers to set the guest's PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
    Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe, Colton Lewis,
    Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

Introduce new helper functions to set the guest's PMU
(kvm->arch.arm_pmu) either to a default probed instance or to a
caller-requested one, and use them when the guest's PMU needs to be set.
These helpers will make it easier for the following patches to modify
the relevant code. No functional change intended.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Eric Auger
---
 arch/arm64/kvm/pmu-emul.c | 50 +++++++++++++++++++++++++++------------
 1 file changed, 35 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 3afb281ed8d2..eb5dcb12dafe 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -874,6 +874,36 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	lockdep_assert_held(&kvm->arch.config_lock);
+
+	kvm->arch.arm_pmu = arm_pmu;
+}
+
+/**
+ * kvm_arm_set_default_pmu - No PMU set, get the default one.
+ * @kvm: The kvm pointer
+ *
+ * The observant among you will notice that the supported_cpus
+ * mask does not get updated for the default PMU even though it
+ * is quite possible the selected instance supports only a
+ * subset of cores in the system. This is intentional, and
+ * upholds the preexisting behavior on heterogeneous systems
+ * where vCPUs can be scheduled on any core but the guest
+ * counters could stop working.
+ */
+static int kvm_arm_set_default_pmu(struct kvm *kvm)
+{
+	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
+
+	if (!arm_pmu)
+		return -ENODEV;
+
+	kvm_arm_set_pmu(kvm, arm_pmu);
+	return 0;
+}
+
 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -893,7 +923,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 		break;
 	}
 
-	kvm->arch.arm_pmu = arm_pmu;
+	kvm_arm_set_pmu(kvm, arm_pmu);
 	cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
 	ret = 0;
 	break;
@@ -917,20 +947,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return -EBUSY;
 
 	if (!kvm->arch.arm_pmu) {
-		/*
-		 * No PMU set, get the default one.
-		 *
-		 * The observant among you will notice that the supported_cpus
-		 * mask does not get updated for the default PMU even though it
-		 * is quite possible the selected instance supports only a
-		 * subset of cores in the system. This is intentional, and
-		 * upholds the preexisting behavior on heterogeneous systems
-		 * where vCPUs can be scheduled on any core but the guest
-		 * counters could stop working.
-		 */
-		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
-		if (!kvm->arch.arm_pmu)
-			return -ENODEV;
+		int ret = kvm_arm_set_default_pmu(kvm);
+
+		if (ret)
+			return ret;
 	}
 
 	switch (attr->attr) {
-- 
2.42.0.609.gbb76f46606-goog