From nobody Fri Dec 19 20:37:56 2025
Date: Fri, 20 Oct 2023 21:40:41 +0000
In-Reply-To: <20231020214053.2144305-1-rananta@google.com>
References: <20231020214053.2144305-1-rananta@google.com>
X-Mailer: git-send-email 2.42.0.655.g421f12c284-goog
Message-ID: <20231020214053.2144305-2-rananta@google.com>
Subject: [PATCH v8 01/13] KVM: arm64: PMU: Introduce helpers to set the guest's PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe, Colton Lewis,
	Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, Eric Auger

From: Reiji Watanabe

Introduce new helper functions to set the guest's PMU (kvm->arch.arm_pmu)
either to a default probed instance or to a caller-requested one, and use
them when the guest's PMU needs to be set. These helpers will make it
easier for the following patches to modify the relevant code.

No functional change intended.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Eric Auger
Reviewed-by: Sebastian Ott
---
 arch/arm64/kvm/pmu-emul.c | 50 +++++++++++++++++++++++++++------------
 1 file changed, 35 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 3afb281ed8d2c..eb5dcb12dafe9 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -874,6 +874,36 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	lockdep_assert_held(&kvm->arch.config_lock);
+
+	kvm->arch.arm_pmu = arm_pmu;
+}
+
+/**
+ * kvm_arm_set_default_pmu - No PMU set, get the default one.
+ * @kvm: The kvm pointer
+ *
+ * The observant among you will notice that the supported_cpus
+ * mask does not get updated for the default PMU even though it
+ * is quite possible the selected instance supports only a
+ * subset of cores in the system. This is intentional, and
+ * upholds the preexisting behavior on heterogeneous systems
+ * where vCPUs can be scheduled on any core but the guest
+ * counters could stop working.
+ */
+static int kvm_arm_set_default_pmu(struct kvm *kvm)
+{
+	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
+
+	if (!arm_pmu)
+		return -ENODEV;
+
+	kvm_arm_set_pmu(kvm, arm_pmu);
+	return 0;
+}
+
 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -893,7 +923,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 				break;
 			}
 
-			kvm->arch.arm_pmu = arm_pmu;
+			kvm_arm_set_pmu(kvm, arm_pmu);
 			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
 			ret = 0;
 			break;
@@ -917,20 +947,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return -EBUSY;
 
 	if (!kvm->arch.arm_pmu) {
-		/*
-		 * No PMU set, get the default one.
-		 *
-		 * The observant among you will notice that the supported_cpus
-		 * mask does not get updated for the default PMU even though it
-		 * is quite possible the selected instance supports only a
-		 * subset of cores in the system. This is intentional, and
-		 * upholds the preexisting behavior on heterogeneous systems
-		 * where vCPUs can be scheduled on any core but the guest
-		 * counters could stop working.
-		 */
-		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
-		if (!kvm->arch.arm_pmu)
-			return -ENODEV;
+		int ret = kvm_arm_set_default_pmu(kvm);
+
+		if (ret)
+			return ret;
 	}
 
 	switch (attr->attr) {
-- 
2.42.0.655.g421f12c284-goog
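
For readers following along outside the KVM/arm64 tree, the shape of the
refactoring above can be illustrated with a small standalone program: one
central setter, plus a "default" variant that probes for an instance and
reports -ENODEV when none is found. The types, names, and the probe function
below are hypothetical stand-ins, not the real kvm/arm_pmu definitions.

/*
 * Minimal sketch only: stand-in types, not the kernel's kvm/arm_pmu
 * structures.
 */
#include <errno.h>
#include <stdio.h>

struct arm_pmu_stub { const char *name; };
struct kvm_stub { struct arm_pmu_stub *arm_pmu; };

/* Stand-in for kvm_pmu_probe_armpmu(); may return NULL when no PMU is found. */
static struct arm_pmu_stub *probe_default_pmu(void)
{
	static struct arm_pmu_stub pmu = { .name = "default-pmu" };
	return &pmu;
}

/* Single place where the guest's PMU pointer is written. */
static void set_pmu(struct kvm_stub *kvm, struct arm_pmu_stub *pmu)
{
	kvm->arm_pmu = pmu;
}

/* Fall back to a probed default; report -ENODEV when probing fails. */
static int set_default_pmu(struct kvm_stub *kvm)
{
	struct arm_pmu_stub *pmu = probe_default_pmu();

	if (!pmu)
		return -ENODEV;

	set_pmu(kvm, pmu);
	return 0;
}

int main(void)
{
	struct kvm_stub kvm = { .arm_pmu = NULL };

	/*
	 * Mirrors the call site in kvm_arm_pmu_v3_set_attr(): only fall back
	 * to a default when no PMU has been chosen yet.
	 */
	if (!kvm.arm_pmu && set_default_pmu(&kvm))
		return 1;

	printf("PMU set to %s\n", kvm.arm_pmu->name);
	return 0;
}

Centralizing the assignment behind one helper also gives the later patches in
this series a single place to hang invariants, such as the lockdep assertion
on kvm->arch.config_lock added above.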