From: Colton Lewis <coltonlewis@google.com>
Date: Fri, 20 Jun 2025 22:13:01 +0000
Message-ID: <20250620221326.1261128-2-coltonlewis@google.com>
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Subject: [PATCH v2 01/23] arm64: cpufeature: Add cpucap for HPMN0
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

Add a capability for FEAT_HPMN0, which indicates whether MDCR_EL2.HPMN
may be set to 0, reserving no event counters for the guest.

This required changing HPMN0 to an UnsignedEnum in tools/sysreg,
because otherwise not all of the appropriate macros are generated for
adding it to the arm64_features capability table.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kernel/cpufeature.c | 8 ++++++++
 arch/arm64/tools/cpucaps       | 1 +
 arch/arm64/tools/sysreg        | 6 +++---
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b34044e20128..278294fdc97d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -548,6 +548,7 @@ static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_HPMN0_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_DoubleLock_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_PMSVer_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_CTX_CMPs_SHIFT, 4, 0),
@@ -2896,6 +2897,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, FGT2)
 	},
+	{
+		.desc = "Allow MDCR_EL2.HPMN = 0",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_HPMN0,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64DFR0_EL1, HPMN0, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 10effd4cff6b..5b196ba21629 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -39,6 +39,7 @@ HAS_GIC_CPUIF_SYSREGS
 HAS_GIC_PRIO_MASKING
 HAS_GIC_PRIO_RELAXED_SYNC
 HAS_HCR_NV1
+HAS_HPMN0
 HAS_HCX
 HAS_LDAPR
 HAS_LPA2
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 8a8cf6874298..d29742481754 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1531,9 +1531,9 @@ EndEnum
 EndSysreg
 
 Sysreg	ID_AA64DFR0_EL1	3	0	0	5	0
-Enum	63:60	HPMN0
-	0b0000	UNPREDICTABLE
-	0b0001	DEF
+UnsignedEnum	63:60	HPMN0
+	0b0000	NI
+	0b0001	IMP
 EndEnum
 UnsignedEnum	59:56	ExtTrcBuff
 	0b0000	NI
-- 
2.50.0.714.g196bf9f422-goog
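
[A usage sketch, not part of the patch; the helper name below is
hypothetical. Once established, the new capability is tested like any
other cpucap:

	/* Sketch only: true when FEAT_HPMN0 permits MDCR_EL2.HPMN == 0. */
	static bool mdcr_hpmn_zero_allowed(void)
	{
		return cpus_have_final_cap(ARM64_HAS_HPMN0);
	}
]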

From: Colton Lewis <coltonlewis@google.com>
Date: Fri, 20 Jun 2025 22:13:02 +0000
Message-ID: <20250620221326.1261128-3-coltonlewis@google.com>
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Subject: [PATCH v2 02/23] arm64: Generate sign macro for sysreg Enums
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

There's no reason an Enum shouldn't be equivalent to an UnsignedEnum
and explicitly specify that it is unsigned. Generating the sign macro
for plain Enums as well avoids the annoyance encountered with HPMN0 in
the previous patch.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/tools/gen-sysreg.awk | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/tools/gen-sysreg.awk b/arch/arm64/tools/gen-sysreg.awk
index f2a1732cb1f6..fa21a632d9b7 100755
--- a/arch/arm64/tools/gen-sysreg.awk
+++ b/arch/arm64/tools/gen-sysreg.awk
@@ -308,6 +308,7 @@ $1 == "Enum" && (block_current() == "Sysreg" || block_current() == "SysregFields
 	parse_bitdef(reg, field, $2)
 
 	define_field(reg, field, msb, lsb)
+	define_field_sign(reg, field, "false")
 
 	next
 }
-- 
2.50.0.714.g196bf9f422-goog
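
[A sketch of the output this change adds to the generated header,
assuming define_field_sign() emits the same _SIGNED constant it already
emits for UnsignedEnum fields:

	/* Now also generated for a plain Enum field such as HPMN0: */
	#define ID_AA64DFR0_EL1_HPMN0_SIGNED	false
]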

From: Colton Lewis <coltonlewis@google.com>
Date: Fri, 20 Jun 2025 22:13:03 +0000
Message-ID: <20250620221326.1261128-4-coltonlewis@google.com>
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Subject: [PATCH v2 03/23] arm64: cpufeature: Add cpucap for PMICNTR
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

Add a cpucap for FEAT_PMUv3_PMICNTR, which means there is a dedicated
instruction counter in addition to the cycle counter.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kernel/cpufeature.c | 7 +++++++
 arch/arm64/tools/cpucaps       | 1 +
 2 files changed, 8 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 278294fdc97d..85dea9714928 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2904,6 +2904,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64DFR0_EL1, HPMN0, IMP)
 	},
+	{
+		.desc = "PMU Dedicated Instruction Counter",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_PMICNTR,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64DFR1_EL1, PMICNTR, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 5b196ba21629..6dd72fcdd612 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -47,6 +47,7 @@ HAS_LSE_ATOMICS
 HAS_MOPS
 HAS_NESTED_VIRT
 HAS_PAN
+HAS_PMICNTR
 HAS_PMUV3
 HAS_S1PIE
 HAS_S1POE
-- 
2.50.0.714.g196bf9f422-goog
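
[A usage sketch with a hypothetical caller, not part of the patch; code
that wants the dedicated instruction counter can gate on the new
capability:

	/* Sketch only: FEAT_PMUv3_PMICNTR implies a fixed instruction
	 * counter exists alongside the cycle counter.
	 */
	if (cpus_have_final_cap(ARM64_HAS_PMICNTR))
		use_fixed_instruction_counter();	/* hypothetical helper */
]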

From: Colton Lewis <coltonlewis@google.com>
Date: Fri, 20 Jun 2025 22:13:04 +0000
Message-ID: <20250620221326.1261128-5-coltonlewis@google.com>
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Subject: [PATCH v2 04/23] arm64: Define PMI{CNTR,FILTR}_EL0 as undef_access
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

KVM isn't fully prepared to support these registers yet, even though
the host PMUv3 driver does, so define them as undef_access for now.
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/sys_regs.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 76c2f0da821f..99fdbe174202 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3092,6 +3092,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_SVCR), undef_access, reset_val, SVCR, 0, .visibility = sme_visibility },
 	{ SYS_DESC(SYS_FPMR), undef_access, reset_val, FPMR, 0, .visibility = fp8_visibility },
 
+	{ SYS_DESC(SYS_PMICNTR_EL0), undef_access },
+	{ SYS_DESC(SYS_PMICFILTR_EL0), undef_access },
+
 	{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
 	  .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
 	{ PMU_SYS_REG(PMCNTENSET_EL0),
-- 
2.50.0.714.g196bf9f422-goog
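
[The effect, sketched from the guest's point of view; illustrative
only:

	/* A guest access to either register now traps to KVM, which
	 * injects an Undefined Instruction exception:
	 */
	u64 val = read_sysreg_s(SYS_PMICNTR_EL0);	/* -> UNDEF */
]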

From: Colton Lewis <coltonlewis@google.com>
Date: Fri, 20 Jun 2025 22:13:05 +0000
Message-ID: <20250620221326.1261128-6-coltonlewis@google.com>
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Subject: [PATCH v2 05/23] KVM: arm64: Cleanup PMU includes
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

From: Marc Zyngier

asm/kvm_host.h includes asm/arm_pmu.h, which includes perf/arm_pmuv3.h,
which includes asm/arm_pmuv3.h, which includes asm/kvm_host.h again.

This circular dependency causes compilation problems when trying to use
anything defined in one of these headers from any of the others.

Reorganize these tangled headers. In particular:

* Move the declarations defining the interface between KVM and the PMU
  to their own header, asm/kvm_pmu.h, which can be used without the
  problem described above.

* Delete kvm/arm_pmu.h. These functions are mostly internal to KVM and
  should go in asm/kvm_host.h.
Signed-off-by: Marc Zyngier
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/arm_pmuv3.h      |  2 +-
 arch/arm64/include/asm/kvm_host.h       | 15 +--------------
 arch/arm64/include/asm/kvm_pmu.h        |  9 +++++++++
 arch/arm64/kvm/debug.c                  |  1 +
 arch/arm64/kvm/hyp/include/hyp/switch.h |  1 +
 arch/arm64/kvm/pmu-emul.c               |  4 +++-
 arch/arm64/kvm/pmu.c                    |  2 ++
 arch/arm64/kvm/sys_regs.c               |  1 +
 include/linux/perf/arm_pmu.h            | 14 ++++++++------
 virt/kvm/kvm_main.c                     |  1 +
 10 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 8a777dec8d88..32c003a7b810 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -6,7 +6,7 @@
 #ifndef __ASM_PMUV3_H
 #define __ASM_PMUV3_H
 
-#include
+#include
 
 #include
 #include
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 27ed26bd4381..2df76689381a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1487,25 +1488,11 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
-static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
-{
-	return (!has_vhe() && attr->exclude_host);
-}
-
 #ifdef CONFIG_KVM
-void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
-void kvm_clr_pmu_events(u64 clr);
-bool kvm_set_pmuserenr(u64 val);
 void kvm_enable_trbe(void);
 void kvm_disable_trbe(void);
 void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest);
 #else
-static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
-static inline void kvm_clr_pmu_events(u64 clr) {}
-static inline bool kvm_set_pmuserenr(u64 val)
-{
-	return false;
-}
 static inline void kvm_enable_trbe(void) {}
 static inline void kvm_disable_trbe(void) {}
 static inline void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest) {}
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index baf028d19dfc..a44f712668b5 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -14,6 +14,11 @@
 
 #define KVM_ARMV8_PMU_MAX_COUNTERS	32
 
+#define kvm_pmu_counter_deferred(attr)			\
+	({						\
+		!has_vhe() && (attr)->exclude_host;	\
+	})
+
 #if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM)
 struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
@@ -68,9 +73,13 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
 
 struct kvm_pmu_events *kvm_get_pmu_events(void);
+void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
+void kvm_clr_pmu_events(u64 clr);
+bool kvm_set_pmuserenr(u64 val);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
+void kvm_host_pmu_init(struct arm_pmu *pmu);
 
 #define kvm_vcpu_has_pmu(vcpu)					\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 1a7dab333f55..a554c3e368dc 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -9,6 +9,7 @@
 
 #include
 #include
+#include
 
 #include
 #include
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 7599844908c0..825b81749972 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index dcdd80ffd49d..b9882085394e 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -8,8 +8,8 @@
 #include
 #include
 #include
-#include
 #include
+#include
 #include
 #include
 #include
@@ -24,6 +24,8 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
 
+#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
+
 bool kvm_supports_guest_pmuv3(void)
 {
 	guard(mutex)(&arm_pmus_lock);
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 6b48a3d16d0d..8bfc6b0a85f6 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -8,6 +8,8 @@
 #include
 #include
 
+#include
+
 static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
 /*
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 99fdbe174202..eaff6d63ef77 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 6dc5e0cd76ca..1de206b09616 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -13,6 +13,9 @@
 #include
 #include
 #include
+#ifdef CONFIG_ARM64
+#include
+#endif
 
 #ifdef CONFIG_ARM_PMU
 
@@ -25,6 +28,11 @@
 #else
 #define ARMPMU_MAX_HWEVENTS		33
 #endif
+
+#ifdef CONFIG_ARM
+#define kvm_host_pmu_init(_x)	{ (void)_x; }
+#endif
+
 /*
  * ARM PMU hw_event flags
  */
@@ -170,12 +178,6 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn);
 static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
 #endif
 
-#ifdef CONFIG_KVM
-void kvm_host_pmu_init(struct arm_pmu *pmu);
-#else
-#define kvm_host_pmu_init(x)	do { } while(0)
-#endif
-
 bool arm_pmu_irq_is_nmi(void);
 
 /* Internal functions only for core arm_pmu code */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e2f6344256ce..25259fcf3115 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -48,6 +48,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
-- 
2.50.0.714.g196bf9f422-goog
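
[To make the split concrete, a sketch of the semantics of the
kvm_pmu_counter_deferred() macro now exported from asm/kvm_pmu.h,
assuming a perf_event_attr in hand; usage example only:

	struct perf_event_attr attr = { .exclude_host = 1 };

	/* Counting must be deferred to world switch only on non-VHE
	 * systems where the event excludes the host.
	 */
	bool deferred = kvm_pmu_counter_deferred(&attr);
]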

From: Colton Lewis <coltonlewis@google.com>
Date: Fri, 20 Jun 2025 22:13:06 +0000
Message-ID: <20250620221326.1261128-7-coltonlewis@google.com>
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Subject: [PATCH v2 06/23] KVM: arm64: Reorganize PMU functions
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis

A lot of functions in pmu-emul.c aren't specific to the emulated PMU
implementation. Move them to the more appropriate pmu.c file, where
shared PMU functions should live.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/kvm_pmu.h |   3 +
 arch/arm64/kvm/pmu-emul.c        | 674 +------------------------------
 arch/arm64/kvm/pmu.c             | 673 ++++++++++++++++++++++++++++++
 3 files changed, 677 insertions(+), 673 deletions(-)
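
[A usage sketch of two declarations this move makes visible through
asm/kvm_pmu.h; illustrative only, mirroring the arithmetic of
kvm_pmu_accessible_counter_mask() in the diff below:

	/* Counters a nested guest may touch: everything implemented
	 * minus the range MDCR_EL2.HPMN reserves for EL2 (sketch).
	 */
	u64 guest_mask = kvm_pmu_implemented_counter_mask(vcpu) &
			 ~kvm_pmu_hyp_counter_mask(vcpu);
]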
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index a44f712668b5..c55dbac28c90 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -50,13 +50,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu);
+u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu);
 u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu);
+u32 kvm_pmu_event_mask(struct kvm *kvm);
 u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1);
 void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu);
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
 void kvm_pmu_update_run(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index b9882085394e..a6452d10fc1e 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -17,21 +17,10 @@
 
 #define PERF_ATTR_CFG1_COUNTER_64BIT	BIT(0)
 
-static LIST_HEAD(arm_pmus);
-static DEFINE_MUTEX(arm_pmus_lock);
-
 static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
 
-#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
-
-bool kvm_supports_guest_pmuv3(void)
-{
-	guard(mutex)(&arm_pmus_lock);
-	return !list_empty(&arm_pmus);
-}
-
 static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc)
 {
 	return container_of(pmc, struct kvm_vcpu, arch.pmu.pmc[pmc->idx]);
@@ -42,46 +31,6 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 __kvm_pmu_event_mask(unsigned int pmuver)
-{
-	switch (pmuver) {
-	case ID_AA64DFR0_EL1_PMUVer_IMP:
-		return GENMASK(9, 0);
-	case ID_AA64DFR0_EL1_PMUVer_V3P1:
-	case ID_AA64DFR0_EL1_PMUVer_V3P4:
-	case ID_AA64DFR0_EL1_PMUVer_V3P5:
-	case ID_AA64DFR0_EL1_PMUVer_V3P7:
-		return GENMASK(15, 0);
-	default:		/* Shouldn't be here, just for sanity */
-		WARN_ONCE(1, "Unknown PMU version %d\n", pmuver);
-		return 0;
-	}
-}
-
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
-{
-	u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
-	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
-
-	return __kvm_pmu_event_mask(pmuver);
-}
-
-u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
-{
-	u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
-		   kvm_pmu_event_mask(kvm);
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
-		mask |= ARMV8_PMU_INCLUDE_EL2;
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
-		mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
-			ARMV8_PMU_EXCLUDE_NS_EL1 |
-			ARMV8_PMU_EXCLUDE_EL3;
-
-	return mask;
-}
-
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -274,59 +223,6 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 	irq_work_sync(&vcpu->arch.pmu.overflow_work);
 }
 
-static u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu)
-{
-	unsigned int hpmn, n;
-
-	if (!vcpu_has_nv(vcpu))
-		return 0;
-
-	hpmn = SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
-	n = vcpu->kvm->arch.nr_pmu_counters;
-
-	/*
-	 * Programming HPMN to a value greater than PMCR_EL0.N is
-	 * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an
-	 * UNKNOWN number of counters (in our case, zero) are reserved for EL2.
-	 */
-	if (hpmn >= n)
-		return 0;
-
-	/*
-	 * Programming HPMN=0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't
-	 * implemented. Since KVM's ability to emulate HPMN=0 does not directly
-	 * depend on hardware (all PMU registers are trapped), make the
-	 * implementation choice that all counters are included in the second
-	 * range reserved for EL2/EL3.
-	 */
-	return GENMASK(n - 1, hpmn);
-}
-
-bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
-{
-	return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
-}
-
-u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
-
-	if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
-		return mask;
-
-	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
-}
-
-u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
-
-	if (val == 0)
-		return BIT(ARMV8_PMU_CYCLE_IDX);
-	else
-		return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
-}
-
 static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc)
 {
 	if (!pmc->perf_event) {
@@ -372,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 
@@ -395,24 +291,6 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 	return reg;
 }
 
-static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
-{
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	bool overflow;
-
-	overflow = kvm_pmu_overflow_status(vcpu);
-	if (pmu->irq_level == overflow)
-		return;
-
-	pmu->irq_level = overflow;
-
-	if (likely(irqchip_in_kernel(vcpu->kvm))) {
-		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
-					      pmu->irq_num, overflow, pmu);
-		WARN_ON(ret);
-	}
-}
-
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -438,43 +316,6 @@ void kvm_pmu_update_run(struct kvm_vcpu *vcpu)
 	regs->device_irq_level |= KVM_ARM_DEV_PMU;
 }
 
-/**
- * kvm_pmu_flush_hwstate - flush pmu state to cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the host, and inject
- * an interrupt if that was the case.
- */
-void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/**
- * kvm_pmu_sync_hwstate - sync pmu state from cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the guest, and
- * inject an interrupt if that was the case.
- */
-void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/*
- * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
- * to the event.
- * This is why we need a callback to do it once outside of the NMI context.
- */
-static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
-{
-	struct kvm_vcpu *vcpu;
-
-	vcpu = container_of(work, struct kvm_vcpu, arch.pmu.overflow_work);
-	kvm_vcpu_kick(vcpu);
-}
-
 /*
  * Perform an increment on any of the counters described in @mask,
  * generating the overflow if required, and propagate it as a chained
@@ -786,132 +627,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_create_perf_event(pmc);
 }
 
-void kvm_host_pmu_init(struct arm_pmu *pmu)
-{
-	struct arm_pmu_entry *entry;
-
-	/*
-	 * Check the sanitised PMU version for the system, as KVM does not
-	 * support implementations where PMUv3 exists on a subset of CPUs.
-	 */
-	if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit()))
-		return;
-
-	guard(mutex)(&arm_pmus_lock);
-
-	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
-	if (!entry)
-		return;
-
-	entry->arm_pmu = pmu;
-	list_add_tail(&entry->entry, &arm_pmus);
-}
-
-static struct arm_pmu *kvm_pmu_probe_armpmu(void)
-{
-	struct arm_pmu_entry *entry;
-	struct arm_pmu *pmu;
-	int cpu;
-
-	guard(mutex)(&arm_pmus_lock);
-
-	/*
-	 * It is safe to use a stale cpu to iterate the list of PMUs so long as
-	 * the same value is used for the entirety of the loop. Given this, and
-	 * the fact that no percpu data is used for the lookup there is no need
-	 * to disable preemption.
-	 *
-	 * It is still necessary to get a valid cpu, though, to probe for the
-	 * default PMU instance as userspace is not required to specify a PMU
-	 * type. In order to uphold the preexisting behavior KVM selects the
-	 * PMU instance for the core during vcpu init. A dependent use
-	 * case would be a user with disdain of all things big.LITTLE that
-	 * affines the VMM to a particular cluster of cores.
-	 *
-	 * In any case, userspace should just do the sane thing and use the UAPI
-	 * to select a PMU type directly. But, be wary of the baggage being
-	 * carried here.
-	 */
-	cpu = raw_smp_processor_id();
-	list_for_each_entry(entry, &arm_pmus, entry) {
-		pmu = entry->arm_pmu;
-
-		if (cpumask_test_cpu(cpu, &pmu->supported_cpus))
-			return pmu;
-	}
-
-	return NULL;
-}
-
-static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1)
-{
-	u32 hi[2], lo[2];
-
-	bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
-	bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
-
-	return ((u64)hi[pmceid1] << 32) | lo[pmceid1];
-}
-
-static u64 compute_pmceid0(struct arm_pmu *pmu)
-{
-	u64 val = __compute_pmceid(pmu, 0);
-
-	/* always support SW_INCR */
-	val |= BIT(ARMV8_PMUV3_PERFCTR_SW_INCR);
-	/* always support CHAIN */
-	val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
-	return val;
-}
-
-static u64 compute_pmceid1(struct arm_pmu *pmu)
-{
-	u64 val = __compute_pmceid(pmu, 1);
-
-	/*
-	 * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled
-	 * as RAZ
-	 */
-	val &= ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) |
-		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) |
-		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32));
-	return val;
-}
-
-u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
-{
-	struct arm_pmu *cpu_pmu = vcpu->kvm->arch.arm_pmu;
-	unsigned long *bmap = vcpu->kvm->arch.pmu_filter;
-	u64 val, mask = 0;
-	int base, i, nr_events;
-
-	if (!pmceid1) {
-		val = compute_pmceid0(cpu_pmu);
-		base = 0;
-	} else {
-		val = compute_pmceid1(cpu_pmu);
-		base = 32;
-	}
-
-	if (!bmap)
-		return val;
-
-	nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
-
-	for (i = 0; i < 32; i += 8) {
-		u64 byte;
-
-		byte = bitmap_get_value8(bmap, base + i);
-		mask |= byte << i;
-		if (nr_events >= (0x4000 + base + 32)) {
-			byte = bitmap_get_value8(bmap, 0x4000 + base + i);
-			mask |= byte << (32 + i);
-		}
-	}
-
-	return val & mask;
-}
-
 void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
 {
 	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
@@ -923,393 +638,6 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
 	kvm_pmu_reprogram_counter_mask(vcpu, mask);
 }
 
-int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
-{
-	if (!vcpu->arch.pmu.created)
-		return -EINVAL;
-
-	/*
-	 * A valid interrupt configuration for the PMU is either to have a
-	 * properly configured interrupt number and using an in-kernel
-	 * irqchip, or to not have an in-kernel GIC and not set an IRQ.
-	 */
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int irq = vcpu->arch.pmu.irq_num;
-		/*
-		 * If we are using an in-kernel vgic, at this point we know
-		 * the vgic will be initialized, so we can check the PMU irq
-		 * number against the dimensions of the vgic and make sure
-		 * it's valid.
-		 */
-		if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq))
-			return -EINVAL;
-	} else if (kvm_arm_pmu_irq_initialized(vcpu)) {
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
-{
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int ret;
-
-		/*
-		 * If using the PMU with an in-kernel virtual GIC
-		 * implementation, we require the GIC to be already
-		 * initialized when initializing the PMU.
-		 */
-		if (!vgic_initialized(vcpu->kvm))
-			return -ENODEV;
-
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			return -ENXIO;
-
-		ret = kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num,
-					 &vcpu->arch.pmu);
-		if (ret)
-			return ret;
-	}
-
-	init_irq_work(&vcpu->arch.pmu.overflow_work,
-		      kvm_pmu_perf_overflow_notify_vcpu);
-
-	vcpu->arch.pmu.created = true;
-	return 0;
-}
-
-/*
- * For one VM the interrupt type must be same for each vcpu.
- * As a PPI, the interrupt number is the same for all vcpus,
- * while as an SPI it must be a separate number per vcpu.
- */
-static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
-{
-	unsigned long i;
-	struct kvm_vcpu *vcpu;
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			continue;
-
-		if (irq_is_ppi(irq)) {
-			if (vcpu->arch.pmu.irq_num != irq)
-				return false;
-		} else {
-			if (vcpu->arch.pmu.irq_num == irq)
-				return false;
-		}
-	}
-
-	return true;
-}
-
-/**
- * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters.
- * @kvm: The kvm pointer
- */
-u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
-{
-	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
-
-	/*
-	 * PMUv3 requires that all event counters are capable of counting any
-	 * event, though the same may not be true of non-PMUv3 hardware.
-	 */
-	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
-		return 1;
-
-	/*
-	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
-	 * Ignore those and return only the general-purpose counters.
-	 */
-	return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
-}
-
-static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
-{
-	kvm->arch.nr_pmu_counters = nr;
-
-	/* Reset MDCR_EL2.HPMN behind the vcpus' back... */
-	if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) {
-		struct kvm_vcpu *vcpu;
-		unsigned long i;
-
-		kvm_for_each_vcpu(i, vcpu, kvm) {
-			u64 val = __vcpu_sys_reg(vcpu, MDCR_EL2);
-			val &= ~MDCR_EL2_HPMN;
-			val |= FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters);
-			__vcpu_assign_sys_reg(vcpu, MDCR_EL2, val);
-		}
-	}
-}
-
-static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
-{
-	lockdep_assert_held(&kvm->arch.config_lock);
-
-	kvm->arch.arm_pmu = arm_pmu;
-	kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm));
-}
-
-/**
- * kvm_arm_set_default_pmu - No PMU set, get the default one.
- * @kvm: The kvm pointer
- *
- * The observant among you will notice that the supported_cpus
- * mask does not get updated for the default PMU even though it
- * is quite possible the selected instance supports only a
- * subset of cores in the system. This is intentional, and
- * upholds the preexisting behavior on heterogeneous systems
- * where vCPUs can be scheduled on any core but the guest
- * counters could stop working.
- */
-int kvm_arm_set_default_pmu(struct kvm *kvm)
-{
-	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
-
-	if (!arm_pmu)
-		return -ENODEV;
-
-	kvm_arm_set_pmu(kvm, arm_pmu);
-	return 0;
-}
-
-static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
-{
-	struct kvm *kvm = vcpu->kvm;
-	struct arm_pmu_entry *entry;
-	struct arm_pmu *arm_pmu;
-	int ret = -ENXIO;
-
-	lockdep_assert_held(&kvm->arch.config_lock);
-	mutex_lock(&arm_pmus_lock);
-
-	list_for_each_entry(entry, &arm_pmus, entry) {
-		arm_pmu = entry->arm_pmu;
-		if (arm_pmu->pmu.type == pmu_id) {
-			if (kvm_vm_has_ran_once(kvm) ||
-			    (kvm->arch.pmu_filter && kvm->arch.arm_pmu != arm_pmu)) {
-				ret = -EBUSY;
-				break;
-			}
-
-			kvm_arm_set_pmu(kvm, arm_pmu);
-			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
-			ret = 0;
-			break;
-		}
-	}
-
-	mutex_unlock(&arm_pmus_lock);
-	return ret;
-}
-
-static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned int n)
-{
-	struct kvm *kvm = vcpu->kvm;
-
-	if (!kvm->arch.arm_pmu)
-		return -EINVAL;
-
-	if (n > kvm_arm_pmu_get_max_counters(kvm))
-		return -EINVAL;
-
-	kvm_arm_set_nr_counters(kvm, n);
-	return 0;
-}
-
-int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	struct kvm *kvm = vcpu->kvm;
-
-	lockdep_assert_held(&kvm->arch.config_lock);
-
-	if (!kvm_vcpu_has_pmu(vcpu))
-		return -ENODEV;
-
-	if (vcpu->arch.pmu.created)
-		return -EBUSY;
-
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int irq;
-
-		if (!irqchip_in_kernel(kvm))
-			return -EINVAL;
-
-		if (get_user(irq, uaddr))
-			return -EFAULT;
-
-		/* The PMU overflow interrupt can be a PPI or a valid SPI. */
-		if (!(irq_is_ppi(irq) || irq_is_spi(irq)))
-			return -EINVAL;
-
-		if (!pmu_irq_is_valid(kvm, irq))
-			return -EINVAL;
-
-		if (kvm_arm_pmu_irq_initialized(vcpu))
-			return -EBUSY;
-
-		kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
-		vcpu->arch.pmu.irq_num = irq;
-		return 0;
-	}
-	case KVM_ARM_VCPU_PMU_V3_FILTER: {
-		u8 pmuver = kvm_arm_pmu_get_pmuver_limit();
-		struct kvm_pmu_event_filter __user *uaddr;
-		struct kvm_pmu_event_filter filter;
-		int nr_events;
-
-		/*
-		 * Allow userspace to specify an event filter for the entire
-		 * event range supported by PMUVer of the hardware, rather
-		 * than the guest's PMUVer for KVM backward compatibility.
-		 */
-		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
-
-		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
-
-		if (copy_from_user(&filter, uaddr, sizeof(filter)))
-			return -EFAULT;
-
-		if (((u32)filter.base_event + filter.nevents) > nr_events ||
-		    (filter.action != KVM_PMU_EVENT_ALLOW &&
-		     filter.action != KVM_PMU_EVENT_DENY))
-			return -EINVAL;
-
-		if (kvm_vm_has_ran_once(kvm))
-			return -EBUSY;
-
-		if (!kvm->arch.pmu_filter) {
-			kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
-			if (!kvm->arch.pmu_filter)
-				return -ENOMEM;
-
-			/*
-			 * The default depends on the first applied filter.
-			 * If it allows events, the default is to deny.
-			 * Conversely, if the first filter denies a set of
-			 * events, the default is to allow.
-			 */
-			if (filter.action == KVM_PMU_EVENT_ALLOW)
-				bitmap_zero(kvm->arch.pmu_filter, nr_events);
-			else
-				bitmap_fill(kvm->arch.pmu_filter, nr_events);
-		}
-
-		if (filter.action == KVM_PMU_EVENT_ALLOW)
-			bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
-		else
-			bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
-
-		return 0;
-	}
-	case KVM_ARM_VCPU_PMU_V3_SET_PMU: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int pmu_id;
-
-		if (get_user(pmu_id, uaddr))
-			return -EFAULT;
-
-		return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id);
-	}
-	case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: {
-		unsigned int __user *uaddr = (unsigned int __user *)(long)attr->addr;
-		unsigned int n;
-
-		if (get_user(n, uaddr))
-			return -EFAULT;
-
-		return kvm_arm_pmu_v3_set_nr_counters(vcpu, n);
-	}
-	case KVM_ARM_VCPU_PMU_V3_INIT:
-		return kvm_arm_pmu_v3_init(vcpu);
-	}
-
-	return -ENXIO;
-}
-
-int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int irq;
-
-		if (!irqchip_in_kernel(vcpu->kvm))
-			return -EINVAL;
-
-		if (!kvm_vcpu_has_pmu(vcpu))
-			return -ENODEV;
-
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			return -ENXIO;
-
-		irq = vcpu->arch.pmu.irq_num;
-		return put_user(irq, uaddr);
-	}
-	}
-
-	return -ENXIO;
-}
-
-int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ:
-	case KVM_ARM_VCPU_PMU_V3_INIT:
-	case KVM_ARM_VCPU_PMU_V3_FILTER:
-	case KVM_ARM_VCPU_PMU_V3_SET_PMU:
-	case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS:
-		if (kvm_vcpu_has_pmu(vcpu))
-			return 0;
-	}
-
-	return -ENXIO;
-}
-
-u8 kvm_arm_pmu_get_pmuver_limit(void)
-{
-	unsigned int pmuver;
-
-	pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer,
-			       read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1));
-
-	/*
-	 * Spoof a barebones PMUv3 implementation if the system supports IMPDEF
-	 * traps of the PMUv3 sysregs
-	 */
-	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
-		return ID_AA64DFR0_EL1_PMUVer_IMP;
-
-	/*
-	 * Otherwise, treat IMPLEMENTATION DEFINED functionality as
-	 * unimplemented
-	 */
-	if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
-		return 0;
-
-	return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5);
-}
-
-/**
- * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
- * @vcpu: The vcpu pointer
- */
-u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
-{
-	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	u64 n = vcpu->kvm->arch.nr_pmu_counters;
-
-	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
-		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
-
-	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
-}
-
 void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu)
 {
 	bool reprogrammed = false;
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 8bfc6b0a85f6..79b7ea037153 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -8,10 +8,21 @@
 #include
 #include
 
+#include
 #include
 
+static LIST_HEAD(arm_pmus);
+static DEFINE_MUTEX(arm_pmus_lock);
 static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
+#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
+
+bool kvm_supports_guest_pmuv3(void)
+{
+	guard(mutex)(&arm_pmus_lock);
+	return !list_empty(&arm_pmus);
+}
+
 /*
  * Given the perf event attributes and system type, determine
  * if we are going to need to switch counters at guest entry/exit.
@@ -211,3 +222,665 @@ void kvm_vcpu_pmu_resync_el0(void)
 
 	kvm_make_request(KVM_REQ_RESYNC_PMU_EL0, vcpu);
 }
+
+void kvm_host_pmu_init(struct arm_pmu *pmu)
+{
+	struct arm_pmu_entry *entry;
+
+	/*
+	 * Check the sanitised PMU version for the system, as KVM does not
+	 * support implementations where PMUv3 exists on a subset of CPUs.
+	 */
+	if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit()))
+		return;
+
+	guard(mutex)(&arm_pmus_lock);
+
+	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		return;
+
+	entry->arm_pmu = pmu;
+	list_add_tail(&entry->entry, &arm_pmus);
+}
+
+static struct arm_pmu *kvm_pmu_probe_armpmu(void)
+{
+	struct arm_pmu_entry *entry;
+	struct arm_pmu *pmu;
+	int cpu;
+
+	guard(mutex)(&arm_pmus_lock);
+
+	/*
+	 * It is safe to use a stale cpu to iterate the list of PMUs so long as
+	 * the same value is used for the entirety of the loop. Given this, and
+	 * the fact that no percpu data is used for the lookup there is no need
+	 * to disable preemption.
+	 *
+	 * It is still necessary to get a valid cpu, though, to probe for the
+	 * default PMU instance as userspace is not required to specify a PMU
+	 * type. In order to uphold the preexisting behavior KVM selects the
+	 * PMU instance for the core during vcpu init. A dependent use
+	 * case would be a user with disdain of all things big.LITTLE that
+	 * affines the VMM to a particular cluster of cores.
+	 *
+	 * In any case, userspace should just do the sane thing and use the UAPI
+	 * to select a PMU type directly. But, be wary of the baggage being
+	 * carried here.
+	 */
+	cpu = raw_smp_processor_id();
+	list_for_each_entry(entry, &arm_pmus, entry) {
+		pmu = entry->arm_pmu;
+
+		if (cpumask_test_cpu(cpu, &pmu->supported_cpus))
+			return pmu;
+	}
+
+	return NULL;
+}
+
+static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1)
+{
+	u32 hi[2], lo[2];
+
+	bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
+	bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
+
+	return ((u64)hi[pmceid1] << 32) | lo[pmceid1];
+}
+
+static u64 compute_pmceid0(struct arm_pmu *pmu)
+{
+	u64 val = __compute_pmceid(pmu, 0);
+
+	/* always support SW_INCR */
+	val |= BIT(ARMV8_PMUV3_PERFCTR_SW_INCR);
+	/* always support CHAIN */
+	val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
+	return val;
+}
+
+static u64 compute_pmceid1(struct arm_pmu *pmu)
+{
+	u64 val = __compute_pmceid(pmu, 1);
+
+	/*
+	 * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled
+	 * as RAZ
+	 */
+	val &= ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) |
+		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) |
+		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32));
+	return val;
+}
+
+u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
+{
+	struct arm_pmu *cpu_pmu = vcpu->kvm->arch.arm_pmu;
+	unsigned long *bmap = vcpu->kvm->arch.pmu_filter;
+	u64 val, mask = 0;
+	int base, i, nr_events;
+
+	if (!pmceid1) {
+		val = compute_pmceid0(cpu_pmu);
+		base = 0;
+	} else {
+		val = compute_pmceid1(cpu_pmu);
+		base = 32;
+	}
+
+	if (!bmap)
+		return val;
+
+	nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
+
+	for (i = 0; i < 32; i += 8) {
+		u64 byte;
+
+		byte = bitmap_get_value8(bmap, base + i);
+		mask |= byte << i;
+		if (nr_events >= (0x4000 + base + 32)) {
+			byte = bitmap_get_value8(bmap, 0x4000 + base + i);
+			mask |= byte << (32 + i);
+		}
+	}
+
+	return val & mask;
+}
+
+/*
+ * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
+ * to the event.
+ * This is why we need a callback to do it once outside of the NMI context. + */ +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work) +{ + struct kvm_vcpu *vcpu; + + vcpu =3D container_of(work, struct kvm_vcpu, arch.pmu.overflow_work); + kvm_vcpu_kick(vcpu); +} + +static u32 __kvm_pmu_event_mask(unsigned int pmuver) +{ + switch (pmuver) { + case ID_AA64DFR0_EL1_PMUVer_IMP: + return GENMASK(9, 0); + case ID_AA64DFR0_EL1_PMUVer_V3P1: + case ID_AA64DFR0_EL1_PMUVer_V3P4: + case ID_AA64DFR0_EL1_PMUVer_V3P5: + case ID_AA64DFR0_EL1_PMUVer_V3P7: + return GENMASK(15, 0); + default: /* Shouldn't be here, just for sanity */ + WARN_ONCE(1, "Unknown PMU version %d\n", pmuver); + return 0; + } +} + +u32 kvm_pmu_event_mask(struct kvm *kvm) +{ + u64 dfr0 =3D kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1); + u8 pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0); + + return __kvm_pmu_event_mask(pmuver); +} + +u64 kvm_pmu_evtyper_mask(struct kvm *kvm) +{ + u64 mask =3D ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | + kvm_pmu_event_mask(kvm); + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP)) + mask |=3D ARMV8_PMU_INCLUDE_EL2; + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP)) + mask |=3D ARMV8_PMU_EXCLUDE_NS_EL0 | + ARMV8_PMU_EXCLUDE_NS_EL1 | + ARMV8_PMU_EXCLUDE_EL3; + + return mask; +} + +static void kvm_pmu_update_state(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *pmu =3D &vcpu->arch.pmu; + bool overflow; + + overflow =3D kvm_pmu_overflow_status(vcpu); + if (pmu->irq_level =3D=3D overflow) + return; + + pmu->irq_level =3D overflow; + + if (likely(irqchip_in_kernel(vcpu->kvm))) { + int ret =3D kvm_vgic_inject_irq(vcpu->kvm, vcpu, + pmu->irq_num, overflow, pmu); + WARN_ON(ret); + } +} + +/** + * kvm_pmu_flush_hwstate - flush pmu state to cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the host, and = inject + * an interrupt if that was the case. + */ +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +/** + * kvm_pmu_sync_hwstate - sync pmu state from cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the guest, and + * inject an interrupt if that was the case. + */ +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) +{ + if (!vcpu->arch.pmu.created) + return -EINVAL; + + /* + * A valid interrupt configuration for the PMU is either to have a + * properly configured interrupt number and using an in-kernel + * irqchip, or to not have an in-kernel GIC and not set an IRQ. + */ + if (irqchip_in_kernel(vcpu->kvm)) { + int irq =3D vcpu->arch.pmu.irq_num; + /* + * If we are using an in-kernel vgic, at this point we know + * the vgic will be initialized, so we can check the PMU irq + * number against the dimensions of the vgic and make sure + * it's valid. + */ + if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq)) + return -EINVAL; + } else if (kvm_arm_pmu_irq_initialized(vcpu)) { + return -EINVAL; + } + + return 0; +} + +static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) +{ + if (irqchip_in_kernel(vcpu->kvm)) { + int ret; + + /* + * If using the PMU with an in-kernel virtual GIC + * implementation, we require the GIC to be already + * initialized when initializing the PMU. 
+ */ + if (!vgic_initialized(vcpu->kvm)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + ret =3D kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num, + &vcpu->arch.pmu); + if (ret) + return ret; + } + + init_irq_work(&vcpu->arch.pmu.overflow_work, + kvm_pmu_perf_overflow_notify_vcpu); + + vcpu->arch.pmu.created =3D true; + return 0; +} + +/* + * For one VM the interrupt type must be same for each vcpu. + * As a PPI, the interrupt number is the same for all vcpus, + * while as an SPI it must be a separate number per vcpu. + */ +static bool pmu_irq_is_valid(struct kvm *kvm, int irq) +{ + unsigned long i; + struct kvm_vcpu *vcpu; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (!kvm_arm_pmu_irq_initialized(vcpu)) + continue; + + if (irq_is_ppi(irq)) { + if (vcpu->arch.pmu.irq_num !=3D irq) + return false; + } else { + if (vcpu->arch.pmu.irq_num =3D=3D irq) + return false; + } + } + + return true; +} + +/** + * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. + * @kvm: The kvm pointer + */ +u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm->arch.arm_pmu; + + /* + * PMUv3 requires that all event counters are capable of counting any + * event, though the same may not be true of non-PMUv3 hardware. + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return 1; + + /* + * The arm_pmu->cntr_mask considers the fixed counter(s) as well. + * Ignore those and return only the general-purpose counters. + */ + return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS); +} + +static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr) +{ + kvm->arch.nr_pmu_counters =3D nr; + + /* Reset MDCR_EL2.HPMN behind the vcpus' back... */ + if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) { + struct kvm_vcpu *vcpu; + unsigned long i; + + kvm_for_each_vcpu(i, vcpu, kvm) { + u64 val =3D __vcpu_sys_reg(vcpu, MDCR_EL2); + + val &=3D ~MDCR_EL2_HPMN; + val |=3D FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); + __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); + } + } +} + +static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) +{ + lockdep_assert_held(&kvm->arch.config_lock); + + kvm->arch.arm_pmu =3D arm_pmu; + kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm)); +} + +/** + * kvm_arm_set_default_pmu - No PMU set, get the default one. + * @kvm: The kvm pointer + * + * The observant among you will notice that the supported_cpus + * mask does not get updated for the default PMU even though it + * is quite possible the selected instance supports only a + * subset of cores in the system. This is intentional, and + * upholds the preexisting behavior on heterogeneous systems + * where vCPUs can be scheduled on any core but the guest + * counters could stop working. 
+ */ +int kvm_arm_set_default_pmu(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm_pmu_probe_armpmu(); + + if (!arm_pmu) + return -ENODEV; + + kvm_arm_set_pmu(kvm, arm_pmu); + return 0; +} + +static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) +{ + struct kvm *kvm =3D vcpu->kvm; + struct arm_pmu_entry *entry; + struct arm_pmu *arm_pmu; + int ret =3D -ENXIO; + + lockdep_assert_held(&kvm->arch.config_lock); + mutex_lock(&arm_pmus_lock); + + list_for_each_entry(entry, &arm_pmus, entry) { + arm_pmu =3D entry->arm_pmu; + if (arm_pmu->pmu.type =3D=3D pmu_id) { + if (kvm_vm_has_ran_once(kvm) || + (kvm->arch.pmu_filter && kvm->arch.arm_pmu !=3D arm_pmu)) { + ret =3D -EBUSY; + break; + } + + kvm_arm_set_pmu(kvm, arm_pmu); + cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); + ret =3D 0; + break; + } + } + + mutex_unlock(&arm_pmus_lock); + return ret; +} + +static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned = int n) +{ + struct kvm *kvm =3D vcpu->kvm; + + if (!kvm->arch.arm_pmu) + return -EINVAL; + + if (n > kvm_arm_pmu_get_max_counters(kvm)) + return -EINVAL; + + kvm_arm_set_nr_counters(kvm, n); + return 0; +} + +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + struct kvm *kvm =3D vcpu->kvm; + + lockdep_assert_held(&kvm->arch.config_lock); + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (vcpu->arch.pmu.created) + return -EBUSY; + + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(kvm)) + return -EINVAL; + + if (get_user(irq, uaddr)) + return -EFAULT; + + /* The PMU overflow interrupt can be a PPI or a valid SPI. */ + if (!(irq_is_ppi(irq) || irq_is_spi(irq))) + return -EINVAL; + + if (!pmu_irq_is_valid(kvm, irq)) + return -EINVAL; + + if (kvm_arm_pmu_irq_initialized(vcpu)) + return -EBUSY; + + kvm_debug("Set kvm ARM PMU irq: %d\n", irq); + vcpu->arch.pmu.irq_num =3D irq; + return 0; + } + case KVM_ARM_VCPU_PMU_V3_FILTER: { + u8 pmuver =3D kvm_arm_pmu_get_pmuver_limit(); + struct kvm_pmu_event_filter __user *uaddr; + struct kvm_pmu_event_filter filter; + int nr_events; + + /* + * Allow userspace to specify an event filter for the entire + * event range supported by PMUVer of the hardware, rather + * than the guest's PMUVer for KVM backward compatibility. + */ + nr_events =3D __kvm_pmu_event_mask(pmuver) + 1; + + uaddr =3D (struct kvm_pmu_event_filter __user *)(long)attr->addr; + + if (copy_from_user(&filter, uaddr, sizeof(filter))) + return -EFAULT; + + if (((u32)filter.base_event + filter.nevents) > nr_events || + (filter.action !=3D KVM_PMU_EVENT_ALLOW && + filter.action !=3D KVM_PMU_EVENT_DENY)) + return -EINVAL; + + if (kvm_vm_has_ran_once(kvm)) + return -EBUSY; + + if (!kvm->arch.pmu_filter) { + kvm->arch.pmu_filter =3D bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT); + if (!kvm->arch.pmu_filter) + return -ENOMEM; + + /* + * The default depends on the first applied filter. + * If it allows events, the default is to deny. + * Conversely, if the first filter denies a set of + * events, the default is to allow. 
+ */ + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_zero(kvm->arch.pmu_filter, nr_events); + else + bitmap_fill(kvm->arch.pmu_filter, nr_events); + } + + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + else + bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + + return 0; + } + case KVM_ARM_VCPU_PMU_V3_SET_PMU: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int pmu_id; + + if (get_user(pmu_id, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); + } + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { + unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; + unsigned int n; + + if (get_user(n, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); + } + case KVM_ARM_VCPU_PMU_V3_INIT: + return kvm_arm_pmu_v3_init(vcpu); + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(vcpu->kvm)) + return -EINVAL; + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + irq =3D vcpu->arch.pmu.irq_num; + return put_user(irq, uaddr); + } + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: + case KVM_ARM_VCPU_PMU_V3_INIT: + case KVM_ARM_VCPU_PMU_V3_FILTER: + case KVM_ARM_VCPU_PMU_V3_SET_PMU: + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: + if (kvm_vcpu_has_pmu(vcpu)) + return 0; + } + + return -ENXIO; +} + +u8 kvm_arm_pmu_get_pmuver_limit(void) +{ + unsigned int pmuver; + + pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, + read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); + + /* + * Spoof a barebones PMUv3 implementation if the system supports IMPDEF + * traps of the PMUv3 sysregs + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return ID_AA64DFR0_EL1_PMUVer_IMP; + + /* + * Otherwise, treat IMPLEMENTATION DEFINED functionality as + * unimplemented + */ + if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) + return 0; + + return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); +} + +u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu) +{ + u64 val =3D FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu)); + + if (val =3D=3D 0) + return BIT(ARMV8_PMU_CYCLE_IDX); + else + return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); +} + +u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu) +{ + unsigned int hpmn, n; + + if (!vcpu_has_nv(vcpu)) + return 0; + + hpmn =3D SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2)); + n =3D vcpu->kvm->arch.nr_pmu_counters; + + /* + * Programming HPMN to a value greater than PMCR_EL0.N is + * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an + * UNKNOWN number of counters (in our case, zero) are reserved for EL2. + */ + if (hpmn >=3D n) + return 0; + + /* + * Programming HPMN=3D0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't + * implemented. Since KVM's ability to emulate HPMN=3D0 does not directly + * depend on hardware (all PMU registers are trapped), make the + * implementation choice that all counters are included in the second + * range reserved for EL2/EL3. 
+ */
+	return GENMASK(n - 1, hpmn);
+}
+
+bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
+{
+	return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
+}
+
+u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
+{
+	u64 mask =3D kvm_pmu_implemented_counter_mask(vcpu);
+
+	if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
+		return mask;
+
+	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
+}
+
+/**
+ * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
+ * @vcpu: The vcpu pointer
+ */
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	u64 pmcr =3D __vcpu_sys_reg(vcpu, PMCR_EL0);
+	u64 n =3D vcpu->kvm->arch.nr_pmu_counters;
+
+	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
+		n =3D FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
+
+	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
+}
--=20
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:44 2025
Date: Fri, 20 Jun 2025 22:13:07 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-8-coltonlewis@google.com>
Subject: [PATCH v2 07/23] perf: arm_pmuv3: Introduce method to partition the PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

For PMUv3, the register field MDCR_EL2.HPMN partitions the PMU
counters into two ranges: counters 0..HPMN-1 are accessible by EL1
and, if allowed, EL0, while counters HPMN..N-1 are only accessible by
EL2.

Create module parameters partition_pmu and reserved_guest_counters to
reserve a number of counters for the guest. These numbers are set at
boot because the perf subsystem assumes the number of counters will
not change after the PMU is probed.

Introduce the function armv8pmu_partition() to modify the PMU driver's
cntr_mask of available counters to exclude the counters being reserved
for the guest and to record reserved_guest_counters as the maximum
allowable value for HPMN.

Due to the difficulty this feature would create for the driver running
at EL1 on the host, partitioning is only allowed in VHE mode. Working
in nVHE mode would require a hypercall for every counter access in the
driver because the counters reserved for the host by HPMN are only
accessible to EL2.
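
For illustration only (an editor's sketch, not part of the posted
patch): assuming the driver object keeps its usual KBUILD_MODNAME of
"arm_pmuv3" when built in, a partition reserving four counters for
guests could be requested on the kernel command line as:

	arm_pmuv3.partition_pmu=3Dy arm_pmuv3.reserved_guest_counters=3D4

On a system with PMCR_EL0.N =3D 8, this would leave counters 4..7 for
the host driver. The parameter prefix is an assumption here; the exact
spelling depends on how the object file is named at build time.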
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h   | 10 ++++
 arch/arm64/include/asm/arm_pmuv3.h |  5 ++
 drivers/perf/arm_pmuv3.c           | 95 +++++++++++++++++++++++++++++-
 include/linux/perf/arm_pmu.h       |  1 +
 4 files changed, 109 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 2ec0e5e83fc9..9dc43242538c 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -228,6 +228,11 @@ static inline bool kvm_set_pmuserenr(u64 val)
=20
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
=20
+static inline bool has_vhe(void)
+{
+	return false;
+}
+
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI	0
 #define ARMV8_PMU_DFR_VER_V3P1	0x4
@@ -242,6 +247,11 @@ static inline bool pmuv3_implemented(int pmuver)
 		pmuver =3D=3D ARMV8_PMU_DFR_VER_NI);
 }
=20
+static inline bool is_pmuv3p1(int pmuver)
+{
+	return pmuver >=3D ARMV8_PMU_DFR_VER_V3P1;
+}
+
 static inline bool is_pmuv3p4(int pmuver)
 {
 	return pmuver >=3D ARMV8_PMU_DFR_VER_V3P4;
diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 32c003a7b810..e2057365ba73 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -173,6 +173,11 @@ static inline bool pmuv3_implemented(int pmuver)
 		pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_NI);
 }
=20
+static inline bool is_pmuv3p1(int pmuver)
+{
+	return pmuver >=3D ID_AA64DFR0_EL1_PMUVer_V3P1;
+}
+
 static inline bool is_pmuv3p4(int pmuver)
 {
 	return pmuver >=3D ID_AA64DFR0_EL1_PMUVer_V3P4;
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 3db9f4ed17e8..26230cd4175c 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -35,6 +35,17 @@
 #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS	0xEC
 #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS	0xED
=20
+static bool partition_pmu __read_mostly;
+static u8 reserved_guest_counters __read_mostly;
+
+module_param(partition_pmu, bool, 0);
+MODULE_PARM_DESC(partition_pmu,
+		 "Partition the PMU into host and guest VM counters [y/n]");
+
+module_param(reserved_guest_counters, byte, 0);
+MODULE_PARM_DESC(reserved_guest_counters,
+		 "How many counters to reserve for guest VMs [0-$NR_COUNTERS]");
+
 /*
  * ARMv8 Architectural defined events, not all of these may
  * be supported on any given implementation. Unsupported events will
@@ -500,6 +511,11 @@ static void armv8pmu_pmcr_write(u64 val)
 	write_pmcr(val);
 }
=20
+static u64 armv8pmu_pmcr_n_read(void)
+{
+	return FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
+}
+
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
 	return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
@@ -1195,6 +1211,74 @@ struct armv8pmu_probe_info {
 	bool present;
 };
=20
+/**
+ * armv8pmu_reservation_is_valid() - Determine if reservation is allowed
+ * @guest_counters: Number of counters to reserve for guest VMs
+ *
+ * Determine if the requested number of guest counters is
+ * allowed. It is allowed if it will produce a valid value for
+ * register field MDCR_EL2.HPMN.
+ *
+ * Return: True if reservation allowed, false otherwise
+ */
+static bool armv8pmu_reservation_is_valid(u8 guest_counters)
+{
+	return guest_counters <=3D armv8pmu_pmcr_n_read();
+}
+
+/**
+ * armv8pmu_partition_supported() - Determine if partitioning is possible
+ *
+ * Partitioning is only supported in VHE mode (PMUv3 support is
+ * assumed, since this is the PMUv3 driver).
+ *
+ * Return: True if partitioning is possible, false otherwise
+ */
+static bool armv8pmu_partition_supported(void)
+{
+	return has_vhe();
+}
+
+/**
+ * armv8pmu_partition() - Partition the PMU
+ * @pmu: Pointer to pmu being partitioned
+ * @guest_counters: Number of counters to reserve for guest VMs
+ *
+ * Partition the given PMU by taking a number of counters to reserve
+ * for the guest and, if it is a valid reservation, recording the
+ * corresponding HPMN value in the hpmn field of the PMU and clearing
+ * the guest-reserved counters from the counter mask.
+ *
+ * Passing 0 for @guest_counters has the effect of disabling partitioning.
+ *
+ * Return: 0 on success, -ERROR otherwise
+ */
+static int armv8pmu_partition(struct arm_pmu *pmu, u8 guest_counters)
+{
+	u8 nr_counters;
+	u8 hpmn;
+
+	if (!armv8pmu_reservation_is_valid(guest_counters))
+		return -EINVAL;
+
+	nr_counters =3D armv8pmu_pmcr_n_read();
+	hpmn =3D guest_counters;
+
+	pmu->hpmn_max =3D hpmn;
+
+	/* Inform host driver of available counters */
+	bitmap_clear(pmu->cntr_mask, 0, hpmn);
+	bitmap_set(pmu->cntr_mask, hpmn, nr_counters - hpmn);
+	clear_bit(ARMV8_PMU_CYCLE_IDX, pmu->cntr_mask);
+
+	if (pmuv3_has_icntr())
+		clear_bit(ARMV8_PMU_INSTR_IDX, pmu->cntr_mask);
+
+	pr_info("Partitioned PMU with HPMN %u\n", hpmn);
+
+	return 0;
+}
+
 static void __armv8pmu_probe_pmu(void *info)
 {
 	struct armv8pmu_probe_info *probe =3D info;
@@ -1209,10 +1293,10 @@ static void __armv8pmu_probe_pmu(void *info)
=20
 	cpu_pmu->pmuver =3D pmuver;
 	probe->present =3D true;
+	cpu_pmu->hpmn_max =3D -1;
=20
 	/* Read the nb of CNTx counters supported from PMNC */
-	bitmap_set(cpu_pmu->cntr_mask,
-		   0, FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()));
+	bitmap_set(cpu_pmu->cntr_mask, 0, armv8pmu_pmcr_n_read());
=20
 	/* Add the CPU cycles counter */
 	set_bit(ARMV8_PMU_CYCLE_IDX, cpu_pmu->cntr_mask);
@@ -1221,6 +1305,13 @@ static void __armv8pmu_probe_pmu(void *info)
 	if (pmuv3_has_icntr())
 		set_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask);
=20
+	if (partition_pmu) {
+		if (armv8pmu_partition_supported())
+			WARN_ON(armv8pmu_partition(cpu_pmu, reserved_guest_counters));
+		else
+			pr_err("PMU partition is not supported\n");
+	}
+
 	pmceid[0] =3D pmceid_raw[0] =3D read_pmceid0();
 	pmceid[1] =3D pmceid_raw[1] =3D read_pmceid1();
=20
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 1de206b09616..95f2b800e63d 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -130,6 +130,7 @@ struct arm_pmu {
=20
 	/* Only to be used by ACPI probing code */
 	unsigned long acpi_cpuid;
+	u8 hpmn_max; /* MDCR_EL2.HPMN: counter partition pivot */
 };
=20
 #define to_arm_pmu(p)	(container_of(p, struct arm_pmu, pmu))
--=20
2.50.0.714.g196bf9f422-goog
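
(For illustration only; an editor's sketch, not part of the posted
patch. Tracing armv8pmu_partition() on a hypothetical system with
PMCR_EL0.N =3D 8, reserved_guest_counters =3D 4 and FEAT_PMUv3_ICNTR
present:

	armv8pmu_partition(pmu, 4)
		hpmn =3D 4, pmu->hpmn_max =3D 4
		bitmap_clear(pmu->cntr_mask, 0, 4)	/* drop counters 0..3 */
		bitmap_set(pmu->cntr_mask, 4, 4)	/* keep counters 4..7 */
		clear_bit(31), clear_bit(32)		/* cede cycle and instruction
						 * counters to the guest */

After the call, the host driver schedules events only on general
counters 4..7.)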
From nobody Thu Oct 9 02:17:44 2025
Date: Fri, 20 Jun 2025 22:13:09 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-10-coltonlewis@google.com>
Subject: [PATCH v2 08/23] perf: arm_pmuv3: Generalize counter bitmasks
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

The OVSR bitmasks are valid for enable and interrupt registers as well
as overflow registers. Generalize the names.

Signed-off-by: Colton Lewis
---
 drivers/perf/arm_pmuv3.c       |  4 ++--
 include/linux/perf/arm_pmuv3.h | 14 +++++++-------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 26230cd4175c..e47f5953928a 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -518,7 +518,7 @@ static u64 armv8pmu_pmcr_n_read(void)
=20
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
-	return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
+	return !!(pmovsr & ARMV8_PMU_CNT_MASK_ALL);
 }
=20
 static int armv8pmu_counter_has_overflowed(u64 pmnc, int idx)
@@ -754,7 +754,7 @@ static u64 armv8pmu_getreset_flags(void)
 	value =3D read_pmovsclr();
=20
 	/* Write to clear flags */
-	value &=3D ARMV8_PMU_OVERFLOWED_MASK;
+	value &=3D ARMV8_PMU_CNT_MASK_ALL;
 	write_pmovsclr(value);
=20
 	return value;
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index d698efba28a2..fd2a34b4a64d 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -224,14 +224,14 @@
 					 ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP)
=20
 /*
- * PMOVSR: counters overflow flag status reg
+ * Counter bitmask layouts for overflow, enable, and interrupts
 */
-#define ARMV8_PMU_OVSR_P		GENMASK(30, 0)
-#define ARMV8_PMU_OVSR_C		BIT(31)
-#define ARMV8_PMU_OVSR_F		BIT_ULL(32) /* arm64 only */
-/* Mask for writable bits is both P and C fields */
-#define ARMV8_PMU_OVERFLOWED_MASK	(ARMV8_PMU_OVSR_P | ARMV8_PMU_OVSR_C | \
-					 ARMV8_PMU_OVSR_F)
+#define ARMV8_PMU_CNT_MASK_P		GENMASK(30, 0)
+#define ARMV8_PMU_CNT_MASK_C		BIT(31)
+#define ARMV8_PMU_CNT_MASK_F		BIT_ULL(32) /* arm64 only */
+#define ARMV8_PMU_CNT_MASK_ALL		(ARMV8_PMU_CNT_MASK_P | \
+					 ARMV8_PMU_CNT_MASK_C | \
+					 ARMV8_PMU_CNT_MASK_F)
=20
 /*
  * PMXEVTYPER: Event selection reg
--=20
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:44 2025
Date: Fri, 20 Jun 2025 22:13:10 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-11-coltonlewis@google.com>
Subject: [PATCH v2 09/23] perf: arm_pmuv3: Keep out of guest counter partition
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

If the PMU is partitioned, keep the driver out of the guest counter
partition and only use the host counter partition. Partitioning is
defined by the MDCR_EL2.HPMN register field, and the maximum value KVM
can use is saved in cpu_pmu->hpmn_max. The range 0..HPMN-1 is
accessible by EL1 and EL0 while HPMN..N-1 is reserved for EL2.

Define functions that, given the HPMN pivot recorded in the arm_pmu,
construct mutually exclusive bitmaps for testing which partition a
particular counter is in. Note that despite their different position
in the bitmap, the cycle and instruction counters are always in the
guest partition.
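
An editor's hedged illustration, not part of the posted patch: the
following standalone userspace program mirrors the mask arithmetic that
kvm_pmu_host_counter_mask() and kvm_pmu_guest_counter_mask() implement
below, for an assumed PMCR_EL0.N =3D 8 and HPMN =3D 4. GENMASK64(),
CYCLE_IDX and INSTR_IDX are local stand-ins for the kernel's GENMASK(),
ARMV8_PMU_CYCLE_IDX and ARMV8_PMU_INSTR_IDX.

	#include <stdint.h>
	#include <stdio.h>

	/* Inclusive bitmask from bit l to bit h, like the kernel's GENMASK() */
	#define GENMASK64(h, l) ((~0ULL >> (63 - (h))) & (~0ULL << (l)))
	#define CYCLE_IDX 31
	#define INSTR_IDX 32

	int main(void)
	{
		unsigned int n =3D 8, hpmn =3D 4;
		/* All counter bits: general 0..30, cycle (31), instruction (32) */
		uint64_t all =3D GENMASK64(30, 0) | (1ULL << CYCLE_IDX) |
			       (1ULL << INSTR_IDX);
		/* Host partition: general counters HPMN..N-1 */
		uint64_t host =3D GENMASK64(n - 1, hpmn);
		/* Guest partition: everything that is not host-reserved */
		uint64_t guest =3D all & ~host;

		printf("host mask  =3D %#llx\n", (unsigned long long)host);  /* 0xf0 */
		printf("guest mask =3D %#llx\n", (unsigned long long)guest); /* 0x1ffffff0f */
		return 0;
	}

Counters 0..3, plus the cycle and instruction counters, land in the
guest partition; only counters 4..7 remain for the host.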
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h | 18 +++++++
 arch/arm64/include/asm/kvm_pmu.h | 26 ++++++++++
 arch/arm64/kvm/Makefile          |  2 +-
 arch/arm64/kvm/pmu-part.c        | 88 ++++++++++++++++++++++++++++++++
 drivers/perf/arm_pmuv3.c         | 36 +++++++++++--
 5 files changed, 165 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/kvm/pmu-part.c

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 9dc43242538c..59c471c33c77 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -227,6 +227,24 @@ static inline bool kvm_set_pmuserenr(u64 val)
 }
 
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+    return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+    return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+    return ~0;
+}
+
 
 static inline bool has_vhe(void)
 {
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index c55dbac28c90..151e5b6793f2 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -87,6 +87,14 @@ void kvm_host_pmu_init(struct arm_pmu *pmu);
 #define kvm_vcpu_has_pmu(vcpu) \
     (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
 
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu);
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
+void kvm_pmu_host_counters_enable(void);
+void kvm_pmu_host_counters_disable(void);
+
+#else
+
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
  * Must be called before every vcpu run after disabling interrupts, to ensure
@@ -208,6 +216,24 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int id
 
 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
 
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+    return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+    return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+    return ~0;
+}
+
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
 #endif
 
 #endif
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 86035b311269..3edbaa57bbf2 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -23,7 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
     vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
     vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o
 
-kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
+kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu-part.o pmu.o
 kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o
 kvm-$(CONFIG_PTDUMP_STAGE2_DEBUGFS) += ptdump.o
 
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
new file mode 100644
index 000000000000..340f8d334efd
--- /dev/null
+++ b/arch/arm64/kvm/pmu-part.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Google LLC
+ * Author: Colton Lewis
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+
+/**
+ * kvm_pmu_is_partitioned() - Determine if given PMU is partitioned
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Determine if given PMU is partitioned by looking at the hpmn
+ * field. The PMU is partitioned if this field is less than the
+ * number of counters in the system.
+ *
+ * Return: True if the PMU is partitioned, false otherwise
+ */
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+    return pmu->hpmn_max < *host_data_ptr(nr_event_counters);
+}
+
+/**
+ * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the host-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in HPMN..N-1.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+    u8 nr_counters = *host_data_ptr(nr_event_counters);
+
+    return GENMASK(nr_counters - 1, pmu->hpmn_max);
+}
+
+/**
+ * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the guest-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in 0..HPMN-1 and the cycle and instruction counters.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+    return ARMV8_PMU_CNT_MASK_ALL & ~kvm_pmu_host_counter_mask(pmu);
+}
+
+/**
+ * kvm_pmu_host_counters_enable() - Enable host-reserved counters
+ *
+ * When partitioned the enable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Enable that bit.
+ */
+void kvm_pmu_host_counters_enable(void)
+{
+    u64 mdcr = read_sysreg(mdcr_el2);
+
+    mdcr |= MDCR_EL2_HPME;
+    write_sysreg(mdcr, mdcr_el2);
+}
+
+/**
+ * kvm_pmu_host_counters_disable() - Disable host-reserved counters
+ *
+ * When partitioned the disable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Disable that bit.
+ */
+void kvm_pmu_host_counters_disable(void)
+{
+    u64 mdcr = read_sysreg(mdcr_el2);
+
+    mdcr &= ~MDCR_EL2_HPME;
+    write_sysreg(mdcr, mdcr_el2);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index e47f5953928a..48ff8c65de68 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -839,12 +839,18 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
     kvm_vcpu_pmu_resync_el0();
 
     /* Enable all counters */
+    if (kvm_pmu_is_partitioned(cpu_pmu))
+        kvm_pmu_host_counters_enable();
+
     armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
     /* Disable all counters */
+    if (kvm_pmu_is_partitioned(cpu_pmu))
+        kvm_pmu_host_counters_disable();
+
     armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
 
@@ -954,6 +960,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 
     /* Always prefer to place a cycle counter into the cycle counter. */
     if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
+        !kvm_pmu_is_partitioned(cpu_pmu) &&
         !armv8pmu_event_get_threshold(&event->attr)) {
         if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask))
             return ARMV8_PMU_CYCLE_IDX;
@@ -969,6 +976,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
      * may not know how to handle it.
      */
     if ((evtype == ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+        !kvm_pmu_is_partitioned(cpu_pmu) &&
         !armv8pmu_event_get_threshold(&event->attr) &&
         test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
         !armv8pmu_event_want_user_access(event)) {
@@ -980,7 +988,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
      * Otherwise use events counters
      */
     if (armv8pmu_event_is_chained(event))
-        return  armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+        return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
     else
         return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1072,6 +1080,14 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
     return 0;
 }
 
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+    int idx;
+
+    for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS)
+        armv8pmu_write_evcntr(idx, 0);
+}
+
 static void armv8pmu_reset(void *info)
 {
     struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1079,6 +1095,9 @@ static void armv8pmu_reset(void *info)
 
     bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
+    if (kvm_pmu_is_partitioned(cpu_pmu))
+        mask &= kvm_pmu_host_counter_mask(cpu_pmu);
+
     /* The counter and interrupt enable registers are unknown at reset. */
     armv8pmu_disable_counter(mask);
     armv8pmu_disable_intens(mask);
@@ -1086,11 +1105,20 @@ static void armv8pmu_reset(void *info)
     /* Clear the counters we flip at guest entry/exit */
     kvm_clr_pmu_events(mask);
 
+    pmcr = ARMV8_PMU_PMCR_LC;
+
     /*
-     * Initialize & Reset PMNC. Request overflow interrupt for
-     * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+     * Initialize & Reset PMNC. Request overflow interrupt for 64
+     * bit cycle counter but cheat in armv8pmu_write_counter().
+     *
+     * When partitioned, there is no single bit to reset only the
+     * host counters, so reset them individually.
      */
-    pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+    if (kvm_pmu_is_partitioned(cpu_pmu))
+        armv8pmu_reset_host_counters(cpu_pmu);
+    else
+        pmcr |= ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
 
     /* Enable long event counter support where available */
     if (armv8pmu_has_long_event(cpu_pmu))
--
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:44 2025
Date: Fri, 20 Jun 2025 22:13:11 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-12-coltonlewis@google.com>
Subject: [PATCH v2 10/23] KVM: arm64: Correct kvm_arm_pmu_get_max_counters()
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

Since cntr_mask is modified to remove some bits when the PMU is
partitioned, make sure the missing counters are added back to get the
right total.
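For illustration (a worked example, using the hypothetical numbers
from earlier in the series): on a host with 10 general-purpose
counters and HPMN = 6, the partitioned host driver's cntr_mask retains
only the 4 host-reserved counters, so bitmap_weight() alone would
report 4; adding back arm_pmu->hpmn_max (6) restores the correct total
of 10 that the calculation needs.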
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 79b7ea037153..67216451b8ce 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -533,6 +533,8 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
 {
     struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
+    u8 counters;
+
 
     /*
      * PMUv3 requires that all event counters are capable of counting any
@@ -545,7 +547,12 @@ u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
      * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
      * Ignore those and return only the general-purpose counters.
      */
-    return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+    counters = bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+    if (kvm_pmu_is_partitioned(arm_pmu))
+        counters += arm_pmu->hpmn_max;
+
+    return counters;
 }
 
 static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
--
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:12 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-13-coltonlewis@google.com>
Subject: [PATCH v2 11/23] KVM: arm64: Set up FGT for Partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

To gain the most performance benefit from partitioning the PMU, use
fine grain traps (FEAT_FGT and FEAT_FGT2) so that common PMU register
accesses by the guest are no longer trapped, removing that overhead.
There should be no information leaks between guests, as all these
registers are context swapped by a later patch in this series.

Untrapped:
* PMCR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCCNTR_EL0
* PMINTEN_EL1
* PMEVCNTRn_EL0

Trapped:
* PMOVS_EL0
* PMEVTYPERn_EL0
* PMCCFILTR_EL0
* PMICNTR_EL0
* PMICFILTR_EL0

PMOVS remains trapped so KVM can track overflow IRQs that will need
to be injected into the guest. PMICNTR remains trapped because KVM is
not handling that counter yet. PMEVTYPERn remains trapped so KVM can
limit which events guests can count, such as disallowing counting at
EL2. PMCCFILTR and PMICFILTR are trapped for the same reason.
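As a toy model of the set/clr scheme (this is not the kernel's actual
FGT plumbing, which also folds in the guest's own FGT state and the
negative-polarity "n"-prefixed FGT2 bits; it only illustrates how a
forced-set/forced-clear pair composes):

    #include <stdio.h>

    int main(void)
    {
        /* Toy encoding: bit 0 = PMOVS, bit 1 = PMCCNTR, bit 2 = PMSELR */
        unsigned int base = 0x7; /* start with everything trapped */
        unsigned int set  = 0x1; /* force PMOVS to stay trapped    */
        unsigned int clr  = 0x6; /* untrap PMCCNTR and PMSELR      */
        unsigned int fgt  = (base | set) & ~clr;

        printf("trapped after FGT: %#x\n", fgt); /* 0x1: only PMOVS */
        return 0;
    }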
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h        | 13 ++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 58 +++++++++++++++++++++++++
 arch/arm64/kvm/pmu-part.c               | 32 ++++++++++++++
 3 files changed, 103 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 151e5b6793f2..02984cfeb446 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -93,7 +93,20 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+#if !defined(__KVM_NVHE_HYPERVISOR__)
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
 #else
+static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+    return false;
+}
+
+static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+    return false;
+}
+#endif
 
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 825b81749972..47d2db8446df 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -191,6 +191,61 @@ static inline bool cpu_has_amu(void)
         ID_AA64PFR0_EL1_AMU_SHIFT);
 }
 
+/**
+ * __activate_pmu_fgt() - Activate fine grain traps for partitioned PMU
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Clear the traps on the most commonly accessed registers for a
+ * partitioned PMU. Trap the rest.
+ */
+static inline void __activate_pmu_fgt(struct kvm_vcpu *vcpu)
+{
+    struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
+    struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+    u64 set;
+    u64 clr;
+
+    set = HDFGRTR_EL2_PMOVS
+        | HDFGRTR_EL2_PMCCFILTR_EL0
+        | HDFGRTR_EL2_PMEVTYPERn_EL0;
+    clr = HDFGRTR_EL2_PMUSERENR_EL0
+        | HDFGRTR_EL2_PMSELR_EL0
+        | HDFGRTR_EL2_PMINTEN
+        | HDFGRTR_EL2_PMCNTEN
+        | HDFGRTR_EL2_PMCCNTR_EL0
+        | HDFGRTR_EL2_PMEVCNTRn_EL0;
+
+    update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGRTR_EL2, clr, set);
+
+    set = HDFGWTR_EL2_PMOVS
+        | HDFGWTR_EL2_PMCCFILTR_EL0
+        | HDFGWTR_EL2_PMEVTYPERn_EL0;
+    clr = HDFGWTR_EL2_PMUSERENR_EL0
+        | HDFGWTR_EL2_PMCR_EL0
+        | HDFGWTR_EL2_PMSELR_EL0
+        | HDFGWTR_EL2_PMINTEN
+        | HDFGWTR_EL2_PMCNTEN
+        | HDFGWTR_EL2_PMCCNTR_EL0
+        | HDFGWTR_EL2_PMEVCNTRn_EL0;
+
+    update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGWTR_EL2, clr, set);
+
+    if (!cpus_have_final_cap(ARM64_HAS_FGT2))
+        return;
+
+    set = HDFGRTR2_EL2_nPMICFILTR_EL0
+        | HDFGRTR2_EL2_nPMICNTR_EL0;
+    clr = 0;
+
+    update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGRTR2_EL2, clr, set);
+
+    set = HDFGWTR2_EL2_nPMICFILTR_EL0
+        | HDFGWTR2_EL2_nPMICNTR_EL0;
+    clr = 0;
+
+    update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGWTR2_EL2, clr, set);
+}
+
 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
     struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
@@ -210,6 +265,9 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
     if (cpu_has_amu())
         update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
 
+    if (kvm_vcpu_pmu_use_fgt(vcpu))
+        __activate_pmu_fgt(vcpu);
+
     if (!cpus_have_final_cap(ARM64_HAS_FGT2))
         return;
 
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 340f8d334efd..269397a1fcbc 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -26,6 +26,38 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
     return pmu->hpmn_max < *host_data_ptr(nr_event_counters);
 }
 
+/**
+ * kvm_vcpu_pmu_is_partitioned() - Determine if given VCPU has a partitioned PMU
+ * @vcpu: Pointer to kvm_vcpu struct
+ *
+ * Determine if given VCPU has a partitioned PMU by looking up the
+ * VCPU's arm_pmu and passing it to kvm_pmu_is_partitioned().
+ *
+ * Return: True if the VCPU PMU is partitioned, false otherwise
+ */
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+    return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu);
+}
+
+/**
+ * kvm_vcpu_pmu_use_fgt() - Determine if we can use FGT
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if we can use FGT for direct access to registers. We can
+ * if capabilities permit the number of guest counters requested.
+ *
+ * Return: True if we can use FGT, false otherwise
+ */
+bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+    u8 hpmn = vcpu->kvm->arch.nr_pmu_counters;
+
+    return kvm_vcpu_pmu_is_partitioned(vcpu) &&
+        cpus_have_final_cap(ARM64_HAS_FGT) &&
+        (hpmn != 0 || cpus_have_final_cap(ARM64_HAS_HPMN0));
+}
+
 /**
  * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
  * @pmu: Pointer to arm_pmu struct
--
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:13 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-14-coltonlewis@google.com>
Subject: [PATCH v2 12/23] KVM: arm64: Writethrough trapped PMEVTYPER register
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do this would delay when guest writes take effect.
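The pattern, sketched with the helpers this series uses (this is a
condensation, not the full handler, which also applies the event mask
and the pmu_filter check):

    /* Sketch: trapped guest write of PMEVTYPER<idx>_EL0 under partitioning */
    __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx) = eventsel; /* virtual state  */
    write_pmevtypern(idx, eventsel);                       /* physical state */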
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/sys_regs.c | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index eaff6d63ef77..3733e3ce8f39 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include
@@ -943,6 +944,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
     u64 pmcr, val;
 
     pmcr = kvm_vcpu_read_pmcr(vcpu);
+
     val = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
     if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
         kvm_inject_undefined(vcpu);
@@ -1037,6 +1039,30 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
     return true;
 }
 
+static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+                                   u64 reg, u64 idx)
+{
+    u64 eventsel;
+
+    if (idx == ARMV8_PMU_CYCLE_IDX)
+        eventsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
+    else
+        eventsel = p->regval & kvm_pmu_evtyper_mask(vcpu->kvm);
+
+    if (vcpu->kvm->arch.pmu_filter &&
+        !test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
+        return false;
+
+    __vcpu_sys_reg(vcpu, reg) = eventsel;
+
+    if (idx == ARMV8_PMU_CYCLE_IDX)
+        write_pmccfiltr(eventsel);
+    else
+        write_pmevtypern(idx, eventsel);
+
+    return true;
+}
+
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                                const struct sys_reg_desc *r)
 {
@@ -1063,7 +1089,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
     if (!pmu_counter_idx_valid(vcpu, idx))
         return false;
 
-    if (p->is_write) {
+    if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+        writethrough_pmevtyper(vcpu, p, reg, idx);
+    } else if (p->is_write) {
         kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
         kvm_vcpu_pmu_restore_guest(vcpu);
     } else {
--
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:14 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-15-coltonlewis@google.com>
Subject: [PATCH v2 13/23] KVM: arm64: Use physical PMSELR for PMXEVTYPER if partitioned
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

Because PMXEVTYPER is trapped and PMSELR is not, it is not appropriate
to use the virtual PMSELR register when it could be outdated and lead
to an invalid write. Use the physical register.
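For illustration of the failure mode (the SEL values here are
hypothetical): PMSELR_EL0 is untrapped under partitioning, so the
guest can change PMSELR_EL0.SEL from 2 to 5 without KVM ever seeing
it. Decoding a subsequent trapped PMXEVTYPER_EL0 access from the stale
virtual copy would then program event counter 2 instead of 5. Reading
the physical register, idx = SYS_FIELD_GET(PMSELR_EL0, SEL,
read_pmselr()), always reflects the guest's latest selection.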
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h | 7 ++++++-
 arch/arm64/kvm/sys_regs.c          | 9 +++++++--
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index e2057365ba73..1880e426a559 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -72,11 +72,16 @@ static inline u64 read_pmcr(void)
     return read_sysreg(pmcr_el0);
 }
 
-static inline void write_pmselr(u32 val)
+static inline void write_pmselr(u64 val)
 {
     write_sysreg(val, pmselr_el0);
 }
 
+static inline u64 read_pmselr(void)
+{
+    return read_sysreg(pmselr_el0);
+}
+
 static inline void write_pmccntr(u64 val)
 {
     write_sysreg(val, pmccntr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3733e3ce8f39..3140d90849c1 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1066,14 +1066,19 @@ static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                                const struct sys_reg_desc *r)
 {
-    u64 idx, reg;
+    u64 idx, reg, pmselr;
 
     if (pmu_access_el0_disabled(vcpu))
         return false;
 
     if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
         /* PMXEVTYPER_EL0 */
-        idx = SYS_FIELD_GET(PMSELR_EL0, SEL, __vcpu_sys_reg(vcpu, PMSELR_EL0));
+        if (kvm_vcpu_pmu_is_partitioned(vcpu))
+            pmselr = read_pmselr();
+        else
+            pmselr = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+
+        idx = SYS_FIELD_GET(PMSELR_EL0, SEL, pmselr);
         reg = PMEVTYPER0_EL0 + idx;
     } else if (r->CRn == 14 && (r->CRm & 12) == 12) {
         idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
--
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:15 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-16-coltonlewis@google.com>
Subject: [PATCH v2 14/23] KVM: arm64: Writethrough trapped PMOVS register
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do this would delay when guest writes take effect.
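For illustration, the SET/CLR pair has write-one semantics on both
sides, so the writethrough mirrors each direction (a condensed sketch
using this patch's helpers; val stands for the guest's written value
already masked to the accessible counters):

    /* PMOVSSET_EL0: 1 bits become pending, virtually and physically */
    __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, val);
    write_pmovsset(val);

    /* PMOVSCLR_EL0: 1 bits are cleared, virtually and physically */
    __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, ~val);
    write_pmovsclr(val);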
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h | 10 ++++++++++
 arch/arm64/kvm/sys_regs.c          | 17 ++++++++++++++++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 1880e426a559..3bddde5f4ebb 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -142,6 +142,16 @@ static inline u64 read_pmicfiltr(void)
     return read_sysreg_s(SYS_PMICFILTR_EL0);
 }
 
+static inline void write_pmovsset(u64 val)
+{
+    write_sysreg(val, pmovsset_el0);
+}
+
+static inline u64 read_pmovsset(void)
+{
+    return read_sysreg(pmovsset_el0);
+}
+
 static inline void write_pmovsclr(u64 val)
 {
     write_sysreg(val, pmovsclr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3140d90849c1..627c31db84d2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1174,6 +1174,19 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
     return true;
 }
 
+static void writethrough_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p, bool set)
+{
+    u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+    if (set) {
+        __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask));
+        write_pmovsset(p->regval & mask);
+    } else {
+        __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, ~(p->regval & mask));
+        write_pmovsclr(p->regval & mask);
+    }
+}
+
 static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                          const struct sys_reg_desc *r)
 {
@@ -1182,7 +1195,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
     if (pmu_access_el0_disabled(vcpu))
         return false;
 
-    if (p->is_write) {
+    if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+        writethrough_pmovs(vcpu, p, r->CRm & 0x2);
+    } else if (p->is_write) {
         if (r->CRm & 0x2)
             /* accessing PMOVSSET_EL0 */
             __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask));
--
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:16 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-17-coltonlewis@google.com>
Subject: [PATCH v2 15/23] KVM: arm64: Write fast path PMU register handlers
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

We may want a partitioned PMU but not have FEAT_FGT to untrap the
specific registers that would normally be untrapped. Add a handler for
those registers in the fast path so we can still get a performance
boost from partitioning.
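Every case in the new handler follows the same contract, condensed
here as a sketch (the names are from this patch; the full switch is in
the diff below): mask the value to what the guest may access, mirror
the access to both the virtual and physical register, and skip the
trapped instruction once handled:

    case SYS_PMSELR_EL0:
        val &= PMSELR_EL0_SEL_MASK;
        ret = handle_pmu_reg(vcpu, &p, PMSELR_EL0, rt, val,
                             &read_pmselr, &write_pmselr);
        break;
    /* ... remaining cases elided; then, once handled: */
    if (ret)
        __kvm_skip_instr(vcpu);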
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h      |  37 ++++-
 arch/arm64/include/asm/kvm_pmu.h        |   1 +
 arch/arm64/kvm/hyp/include/hyp/switch.h | 174 ++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c               |   4 +-
 4 files changed, 213 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 3bddde5f4ebb..12004fd04018 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -41,6 +41,16 @@ static inline unsigned long read_pmevtypern(int n)
     return 0;
 }
 
+static inline void write_pmxevcntr(u64 val)
+{
+    write_sysreg(val, pmxevcntr_el0);
+}
+
+static inline u64 read_pmxevcntr(void)
+{
+    return read_sysreg(pmxevcntr_el0);
+}
+
 static inline unsigned long read_pmmir(void)
 {
     return read_cpuid(PMMIR_EL1);
@@ -107,21 +117,41 @@ static inline void write_pmcntenset(u64 val)
     write_sysreg(val, pmcntenset_el0);
 }
 
+static inline u64 read_pmcntenset(void)
+{
+    return read_sysreg(pmcntenset_el0);
+}
+
 static inline void write_pmcntenclr(u64 val)
 {
     write_sysreg(val, pmcntenclr_el0);
 }
 
+static inline u64 read_pmcntenclr(void)
+{
+    return read_sysreg(pmcntenclr_el0);
+}
+
 static inline void write_pmintenset(u64 val)
 {
     write_sysreg(val, pmintenset_el1);
 }
 
+static inline u64 read_pmintenset(void)
+{
+    return read_sysreg(pmintenset_el1);
+}
+
 static inline void write_pmintenclr(u64 val)
 {
     write_sysreg(val, pmintenclr_el1);
 }
 
+static inline u64 read_pmintenclr(void)
+{
+    return read_sysreg(pmintenclr_el1);
+}
+
 static inline void write_pmccfiltr(u64 val)
 {
     write_sysreg(val, pmccfiltr_el0);
@@ -162,11 +192,16 @@ static inline u64 read_pmovsclr(void)
     return read_sysreg(pmovsclr_el0);
 }
 
-static inline void write_pmuserenr(u32 val)
+static inline void write_pmuserenr(u64 val)
 {
     write_sysreg(val, pmuserenr_el0);
 }
 
+static inline u64 read_pmuserenr(void)
+{
+    return read_sysreg(pmuserenr_el0);
+}
+
 static inline void write_pmuacr(u64 val)
 {
     write_sysreg_s(val, SYS_PMUACR_EL1);
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 02984cfeb446..4e205327b94e 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -79,6 +79,7 @@ struct kvm_pmu_events *kvm_get_pmu_events(void);
 void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
 void kvm_clr_pmu_events(u64 clr);
 bool kvm_set_pmuserenr(u64 val);
+bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 47d2db8446df..4920b7da9ce8 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -25,12 +25,14 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
 
+#include <../../sys_regs.h>
 #include "arm_psci.h"
 
 struct kvm_exception_table_entry {
@@ -782,6 +784,175 @@ static bool handle_ampere1_tcr(struct kvm_vcpu *vcpu)
     return true;
 }
 
+/**
+ * handle_pmu_reg() - Handle fast access to most PMU regs
+ * @vcpu: Pointer to kvm_vcpu struct
+ * @p: System register parameters (read/write, Op0, Op1, CRm, CRn, Op2)
+ * @reg: VCPU register identifier
+ * @rt: Target general register
+ * @val: Value to write
+ * @readfn: Sysreg read function
+ * @writefn: Sysreg write function
+ *
+ * Handle fast access to most PMU regs. Writethrough to the physical
+ * register. This function is a wrapper for the simplest case, but
+ * sadly there aren't many of those.
+ *
+ * Always return true. The boolean makes usage more consistent with
+ * similar functions.
+ *
+ * Return: True
+ */
+static bool handle_pmu_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+                           enum vcpu_sysreg reg, u8 rt, u64 val,
+                           u64 (*readfn)(void), void (*writefn)(u64))
+{
+    if (p->is_write) {
+        __vcpu_assign_sys_reg(vcpu, reg, val);
+        writefn(val);
+    } else {
+        vcpu_set_reg(vcpu, rt, readfn());
+    }
+
+    return true;
+}
+
+/**
+ * kvm_hyp_handle_pmu_regs() - Fast handler for PMU registers
+ * @vcpu: Pointer to vcpu struct
+ *
+ * This handler immediately writes through certain PMU registers when
+ * we have a partitioned PMU (that is, MDCR_EL2.HPMN is set to reserve
+ * a range of counters for the guest) but the machine does not have
+ * FEAT_FGT to selectively untrap the registers we want.
+ *
+ * Return: True if the exception was successfully handled, false otherwise
+ */
+static bool kvm_hyp_handle_pmu_regs(struct kvm_vcpu *vcpu)
+{
+    struct sys_reg_params p;
+    u64 esr;
+    u32 sysreg;
+    u8 rt;
+    u64 val;
+    u8 idx;
+    bool ret;
+
+    if (!kvm_vcpu_pmu_is_partitioned(vcpu) ||
+        pmu_access_el0_disabled(vcpu))
+        return false;
+
+    esr = kvm_vcpu_get_esr(vcpu);
+    p = esr_sys64_to_params(esr);
+    sysreg = esr_sys64_to_sysreg(esr);
+    rt = kvm_vcpu_sys_get_rt(vcpu);
+    val = vcpu_get_reg(vcpu, rt);
+
+    switch (sysreg) {
+    case SYS_PMCR_EL0:
+        val &= ARMV8_PMU_PMCR_MASK;
+
+        if (p.is_write) {
+            write_pmcr(val);
+            __vcpu_assign_sys_reg(vcpu, PMCR_EL0, read_pmcr());
+        } else {
+            val = u64_replace_bits(
+                read_pmcr(),
+                vcpu->kvm->arch.nr_pmu_counters,
+                ARMV8_PMU_PMCR_N);
+            vcpu_set_reg(vcpu, rt, val);
+        }
+
+        ret = true;
+        break;
+    case SYS_PMUSERENR_EL0:
+        val &= ARMV8_PMU_USERENR_MASK;
+        ret = handle_pmu_reg(vcpu, &p, PMUSERENR_EL0, rt, val,
+                             &read_pmuserenr, &write_pmuserenr);
+        break;
+    case SYS_PMSELR_EL0:
+        val &= PMSELR_EL0_SEL_MASK;
+        ret = handle_pmu_reg(vcpu, &p, PMSELR_EL0, rt, val,
+                             &read_pmselr, &write_pmselr);
+        break;
+    case SYS_PMINTENCLR_EL1:
+        val &= kvm_pmu_accessible_counter_mask(vcpu);
+        if (p.is_write) {
+            __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, ~val);
+            write_pmintenclr(val);
+        } else {
+            vcpu_set_reg(vcpu, rt, read_pmintenclr());
+        }
+        ret = true;
+        break;
+    case SYS_PMINTENSET_EL1:
+        val &= kvm_pmu_accessible_counter_mask(vcpu);
+        if (p.is_write) {
+            __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=, val);
+            write_pmintenset(val);
+        } else {
+            vcpu_set_reg(vcpu, rt, read_pmintenset());
+        }
+        ret = true;
+        break;
+    case SYS_PMCNTENCLR_EL0:
+        val &= kvm_pmu_accessible_counter_mask(vcpu);
+        if (p.is_write) {
+            __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, ~val);
+            write_pmcntenclr(val);
+        } else {
+            vcpu_set_reg(vcpu, rt, read_pmcntenclr());
+        }
+        ret = true;
+        break;
+    case SYS_PMCNTENSET_EL0:
+        val &= kvm_pmu_accessible_counter_mask(vcpu);
+        if (p.is_write) {
+            __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=, val);
+            write_pmcntenset(val);
+        } else {
+            vcpu_set_reg(vcpu, rt, read_pmcntenset());
+        }
+        ret = true;
+        break;
+    case SYS_PMCCNTR_EL0:
+        ret = handle_pmu_reg(vcpu, &p, PMCCNTR_EL0, rt, val,
+                             &read_pmccntr, &write_pmccntr);
+        break;
+    case SYS_PMXEVCNTR_EL0:
+        idx = FIELD_GET(PMSELR_EL0_SEL, read_pmselr());
+
+        if (idx >= vcpu->kvm->arch.nr_pmu_counters)
+            return false;
+
+        ret = handle_pmu_reg(vcpu, &p, PMEVCNTR0_EL0 + idx, rt, val,
+                             &read_pmxevcntr, &write_pmxevcntr);
+        break;
+    case SYS_PMEVCNTRn_EL0(0) ... SYS_PMEVCNTRn_EL0(30):
+        idx = ((p.CRm & 3) << 3) | (p.Op2 & 7);
+
+        if (idx >= vcpu->kvm->arch.nr_pmu_counters)
+            return false;
+
+        if (p.is_write) {
+            write_pmevcntrn(idx, val);
+            __vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + idx, val);
+        } else {
+            vcpu_set_reg(vcpu, rt, read_pmevcntrn(idx));
+        }
+
+        ret = true;
+        break;
+    default:
+        ret = false;
+    }
+
+    if (ret)
+        __kvm_skip_instr(vcpu);
+
+    return ret;
+}
+
 static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
     if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
@@ -799,6 +970,9 @@ static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
     if (kvm_handle_cntxct(vcpu))
         return true;
 
+    if (kvm_hyp_handle_pmu_regs(vcpu))
+        return true;
+
     return false;
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 627c31db84d2..1ea7d092ec59 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -853,7 +853,7 @@ static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
     return !enabled;
 }
 
-static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
 {
     return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN);
 }
@@ -1053,7 +1053,7 @@ static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params
         !test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
         return false;
 
-    __vcpu_sys_reg(vcpu, reg) = eventsel;
+    __vcpu_assign_sys_reg(vcpu, reg, eventsel);
 
     if (idx == ARMV8_PMU_CYCLE_IDX)
         write_pmccfiltr(eventsel);
--
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
ca18e2360f4ac-86f4e2434b6so186987339f.2 for ; Fri, 20 Jun 2025 15:18:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1750457934; x=1751062734; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=cZyFqXYBMWnJqAFxJ0Fisis6DfGC/d8Ag5of3Smukjk=; b=WzNUkUeqARsMtJDYI8pU53xrnKrtAmWGGes7HeuZALE1YQZqlvuxX3qbAhG8y0bRIO qS2K41eYvKN0h/Y2mfoS0j9YTudGZe8xDnWf6nz7BzhCDi+ntPkV1CPyVSwu/35IFV0t co7wBA5AlY1SWM5HrSv+5C1Uel64Mjg63Qykr18rrcNVbwIQo6q9pV9qCupg7f1jWHrv qDm1BoaYUdkRA+xuehQt3DZ554GHkZmmurvq75kDUxv8Xqsdg0JfUOOhv8tXNaCV7jB4 QD2NrayP4z617GKLQzpSjlfY8fVAuRoIebfbArZfPkp/DZuVLA1Y91VcSA+0zgyzUSlz RcvQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1750457934; x=1751062734; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=cZyFqXYBMWnJqAFxJ0Fisis6DfGC/d8Ag5of3Smukjk=; b=Wt9B3ZYkDvXOYgrlbLRfJviNv3QXVjIiyT8iT/JaoQ4TUSs2TcqoX29zFecy1tEVHp UHyDAQi7vmAq3vjnYCTub5WtAwojpXbxpLedyryDInyOm+DTxmF3tscFCZs+lTdhRQAU DsbDJAODCJqAE6Umh+QLupqBb9l2VoiSAqpltejVV3KMgdpw8Ckr6GbgXBKyPJeR3IEd sYcg1qMiVyMW0D+QKkm0ckKqiyZjdrr8ZCR6MXEHH/6TCHn/msoa8Nfc9OZ88E2Jq8uR PZPKq88S+tDBx+KLeYVX1kXq1dD5DXqcZ3cRZIlCTmTLkqTnLjS2RYZmYnEVgacoES77 WMGA== X-Forwarded-Encrypted: i=1; AJvYcCWp23ttfuLDkJxinXXY/heJzAI2204xv9v/xWCzNpi5Fdewf7ssmd5zZBEVEAf1j9rPvm/yRuMv88ZNjbU=@vger.kernel.org X-Gm-Message-State: AOJu0YzwElRY1m6CyY1t9lCVeoB/a5zDNdMG6hTbGvziawbMOAbxTyJd 9gtJd7hRYjE3jFe6TIqsOb2Z+7DobDohGH5XC69jJ5GmpYCc4dtBMOwdUYoB6lq20MvpHgzF2a+ gXldyFz12/y3KGCouV18Jg4QcuA== X-Google-Smtp-Source: AGHT+IHH9vAF0PeeIga9bJPbVv5gkEIZ1wU4Ll5A3y00fis/xcMt2D7wL+QOVfQw74a3+dOyehd4c5Bz0GZvUuruOg== X-Received: from ilbdd7.prod.google.com ([2002:a05:6e02:3d87:b0:3dc:a0cf:cd86]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6e02:178e:b0:3dd:b4f4:2bcc with SMTP id e9e14a558f8ab-3de38c986b2mr44226395ab.13.1750457934106; Fri, 20 Jun 2025 15:18:54 -0700 (PDT) Date: Fri, 20 Jun 2025 22:13:17 +0000 In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250620221326.1261128-1-coltonlewis@google.com> X-Mailer: git-send-email 2.50.0.714.g196bf9f422-goog Message-ID: <20250620221326.1261128-18-coltonlewis@google.com> Subject: [PATCH v2 16/23] KVM: arm64: Setup MDCR_EL2 to handle a partitioned PMU From: Colton Lewis To: kvm@vger.kernel.org Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Setup MDCR_EL2 to handle a partitioned PMU. That means calculate an appropriate value for HPMN instead of the maximum setting the host allows (which implies no partition) so hardware enforces that a guest will only see the counters in the guest partition. With HPMN set, we can now leave the TPM and TPMCR bits unset unless FGT is not available, in which case we need to fall back to that. 
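To make the partition boundary concrete before the diff: here is a
tiny illustrative helper (not part of this patch; the predicate and
its name are hypothetical) showing which side of the MDCR_EL2.HPMN
split a given counter index falls on.

	/*
	 * Hypothetical sketch, not from this series: with
	 * MDCR_EL2.HPMN == hpmn, event counters [0, hpmn) form the
	 * guest partition and [hpmn, PMCR_EL0.N) stay reserved for
	 * the host at EL2.
	 */
	static inline bool counter_is_guest_reserved(u8 idx, u8 hpmn)
	{
		return idx < hpmn;
	}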
Also, if available, set the filtering bits HPMD and HCCD to be extra
sure nothing counts at EL2.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h |  3 ++
 arch/arm64/kvm/debug.c           | 23 ++++++++++---
 arch/arm64/kvm/pmu-part.c        | 57 ++++++++++++++++++++++++++++++++
 arch/arm64/kvm/pmu.c             |  2 +-
 4 files changed, 79 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 4e205327b94e..1b68f1a706d1 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -94,6 +94,9 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index a554c3e368dc..b420fec3c754 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -37,15 +37,28 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
 	 * to disable guest access to the profiling and trace buffers
 	 */
-	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN,
-					 *host_data_ptr(nr_event_counters));
-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
-				MDCR_EL2_TPMS |
-				MDCR_EL2_TTRF |
+	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, kvm_pmu_hpmn(vcpu));
+	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TTRF |
 				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
 				MDCR_EL2_TDOSA);
 
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)
+	    && is_pmuv3p1(read_pmuver())) {
+		/*
+		 * Filtering these should be redundant because we trap
+		 * all the TYPER and FILTR registers anyway and ensure
+		 * they filter EL2, but set the bits if they are here.
+		 */
+		vcpu->arch.mdcr_el2 |= MDCR_EL2_HPMD;
+
+		if (is_pmuv3p5(read_pmuver()))
+			vcpu->arch.mdcr_el2 |= MDCR_EL2_HCCD;
+	}
+
+	if (!kvm_vcpu_pmu_use_fgt(vcpu))
+		vcpu->arch.mdcr_el2 |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+
 	/* Is the VM being debugged by userspace? */
 	if (vcpu->guest_debug)
 		/* Route all software debug exceptions to EL2 */
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 269397a1fcbc..289f396bd887 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -118,3 +118,60 @@ void kvm_pmu_host_counters_disable(void)
 	mdcr &= ~MDCR_EL2_HPME;
 	write_sysreg(mdcr, mdcr_el2);
 }
+
+/**
+ * kvm_pmu_guest_num_counters() - Number of counters to show to guest
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the number of counters to show to the guest via
+ * PMCR_EL0.N, making sure to respect the maximum the host allows,
+ * which is hpmn_max if partitioned and host_max otherwise.
+ *
+ * Return: Valid value for PMCR_EL0.N
+ */
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
+{
+	u8 hpmn_max = vcpu->kvm->arch.arm_pmu->hpmn_max;
+	u8 nr_cnt = vcpu->kvm->arch.nr_pmu_counters;
+	u8 host_max = *host_data_ptr(nr_event_counters);
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		if (nr_cnt <= hpmn_max && nr_cnt <= host_max)
+			return nr_cnt;
+		if (hpmn_max <= host_max)
+			return hpmn_max;
+	}
+
+	if (nr_cnt <= host_max)
+		return nr_cnt;
+
+	return host_max;
+}
+
+/**
+ * kvm_pmu_hpmn() - Calculate HPMN field value
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the appropriate value to set for MDCR_EL2.HPMN, ensuring
+ * it always stays below the number of counters on the current CPU and
+ * above 0 unless the CPU has FEAT_HPMN0.
+ *
+ * This function works whether or not the PMU is partitioned.
+ *
+ * Return: A valid HPMN value
+ */
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	u8 hpmn = kvm_pmu_guest_num_counters(vcpu);
+	u8 hpmn_max = vcpu->kvm->arch.arm_pmu->hpmn_max;
+	u8 host_max = *host_data_ptr(nr_event_counters);
+
+	if (hpmn == 0 && !cpus_have_final_cap(ARM64_HAS_HPMN0)) {
+		if (kvm_vcpu_pmu_is_partitioned(vcpu))
+			return hpmn_max;
+		else
+			return host_max;
+	}
+
+	return hpmn;
+}
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 67216451b8ce..90fc088ce3d3 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -884,7 +884,7 @@ u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 {
 	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	u64 n = vcpu->kvm->arch.nr_pmu_counters;
+	u64 n = kvm_pmu_hpmn(vcpu);
 
 	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
 		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
-- 
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:18 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-19-coltonlewis@google.com>
Subject: [PATCH v2 17/23] KVM: arm64: Account for partitioning in PMCR_EL0 access
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

For some reason unknown to me, KVM allows writes to PMCR_EL0.N even
though the architecture specifies that field as RO. Make sure these
accesses conform to additional constraints imposed when the PMU is
partitioned.
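Condensing the check this patch adds to set_pmcr() (an illustrative
fragment using names from the diff below, not a complete function): a
userspace write of PMCR_EL0.N is only accepted while the VM has not
yet run, and, when partitioned, only if it fits under hpmn_max.

	/*
	 * Sketch of the acceptance condition. For example, with
	 * hpmn_max == 6 on a partitioned PMU, new_n == 8 is silently
	 * ignored while new_n <= 6 updates kvm->arch.nr_pmu_counters.
	 */
	bool accept = new_n <= kvm_arm_pmu_get_max_counters(kvm) &&
		      (!kvm_vcpu_pmu_is_partitioned(vcpu) ||
		       new_n <= kvm->arch.arm_pmu->hpmn_max);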
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu.c      | 2 +-
 arch/arm64/kvm/sys_regs.c | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 90fc088ce3d3..5f0847dc7d53 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -884,7 +884,7 @@ u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 {
 	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	u64 n = kvm_pmu_hpmn(vcpu);
+	u64 n = kvm_pmu_guest_num_counters(vcpu);
 
 	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
 		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1ea7d092ec59..b64b60e297bd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1266,7 +1266,9 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
 	 */
 	if (!kvm_vm_has_ran_once(kvm) &&
 	    !vcpu_has_nv(vcpu) &&
-	    new_n <= kvm_arm_pmu_get_max_counters(kvm))
+	    new_n <= kvm_arm_pmu_get_max_counters(kvm) &&
+	    (!kvm_vcpu_pmu_is_partitioned(vcpu) ||
+	     new_n <= kvm->arch.arm_pmu->hpmn_max))
 		kvm->arch.nr_pmu_counters = new_n;
 
 	mutex_unlock(&kvm->arch.config_lock);
-- 
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:20 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-20-coltonlewis@google.com>
Subject: [PATCH v2 18/23] KVM: arm64: Context swap Partitioned PMU guest registers
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

Save and restore newly untrapped registers that will be directly
accessed by the guest when the PMU is partitioned.

* PMEVCNTRn_EL0
* PMCCNTR_EL0
* PMICNTR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1

If the PMU is not partitioned or MDCR_EL2.TPM is set, all PMU
registers are trapped so return immediately.
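One detail worth calling out before the diff: the counter-enable and
interrupt-enable state live behind separate set/clear register pairs,
so restoring the guest's bits without disturbing the host's takes two
writes. An illustrative fragment (following the shape of
kvm_pmu_load() below; val is the saved guest value and mask covers
only guest-reserved counters):

	/*
	 * Writing 0 to a bit of a set/clear register has no effect,
	 * so host-owned bits outside 'mask' are left untouched.
	 */
	write_pmcntenset(val & mask);	/* re-enable what the guest enabled */
	write_pmcntenclr(~val & mask);	/* disable what the guest disabled */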
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_host.h |   2 +
 arch/arm64/include/asm/kvm_pmu.h  |   2 +
 arch/arm64/kvm/arm.c              |   2 +
 arch/arm64/kvm/pmu-part.c         | 101 ++++++++++++++++++++++++++++++
 4 files changed, 107 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2df76689381a..374771557d2c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -453,9 +453,11 @@ enum vcpu_sysreg {
 	PMEVCNTR0_EL0,	/* Event Counter Register (0-30) */
 	PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
 	PMCCNTR_EL0,	/* Cycle Counter Register */
+	PMICNTR_EL0,	/* Instruction Counter Register */
 	PMEVTYPER0_EL0,	/* Event Type Register (0-30) */
 	PMEVTYPER30_EL0 = PMEVTYPER0_EL0 + 30,
 	PMCCFILTR_EL0,	/* Cycle Count Filter Register */
+	PMICFILTR_EL0,	/* Instruction Count Filter Register */
 	PMCNTENSET_EL0,	/* Count Enable Set Register */
 	PMINTENSET_EL1,	/* Interrupt Enable Set Register */
 	PMOVSSET_EL0,	/* Overflow Flag Status Set Register */
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 1b68f1a706d1..208893485027 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -96,6 +96,8 @@ void kvm_pmu_host_counters_disable(void);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+void kvm_pmu_load(struct kvm_vcpu *vcpu);
+void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e452aba1a3b2..7c007ee44ecb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -616,6 +616,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
 	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
 
@@ -658,6 +659,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_put(vcpu);
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
+	kvm_pmu_put(vcpu);
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_put_hw_mmu(vcpu);
 	kvm_arm_vmid_clear_active();
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 289f396bd887..19bd6e0da222 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -8,6 +8,7 @@
 #include
 #include
 
+#include
 #include
 #include
 
@@ -175,3 +176,103 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 
 	return hpmn;
 }
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If the PMU is not partitioned or we have MDCR_EL2_TPM,
+	 * every PMU access is trapped so don't bother with the swap.
+	 */
+	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+		write_pmevcntrn(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCNTR_EL0);
+	write_pmccntr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	write_pmuserenr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+	write_pmselr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	write_pmcr(val);
+
+	/*
+	 * Loading these registers is tricky because of
+	 * 1. Applying only the bits for guest counters (indicated by mask)
+	 * 2. Setting and clearing are different registers
+	 */
+	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	write_pmcntenset(val & mask);
+	write_pmcntenclr(~val & mask);
+
+	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	write_pmintenset(val & mask);
+	write_pmintenclr(~val & mask);
+}
+
+/**
+ * kvm_pmu_put() - Put untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Save all untrapped PMU registers from the PCPU back into the
+ * VCPU. Mask to only bits belonging to guest-reserved counters and
+ * leave host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_put(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If the PMU is not partitioned or we have MDCR_EL2_TPM,
+	 * every PMU access is trapped so don't bother with the swap.
+	 */
+	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = read_pmevcntrn(i);
+		__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + i, val);
+	}
+
+	val = read_pmccntr();
+	__vcpu_assign_sys_reg(vcpu, PMCCNTR_EL0, val);
+
+	val = read_pmuserenr();
+	__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, val);
+
+	val = read_pmselr();
+	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
+
+	val = read_pmcr();
+	__vcpu_assign_sys_reg(vcpu, PMCR_EL0, val);
+
+	/* Mask these to only save the guest relevant bits. */
+	val = read_pmcntenset();
+	__vcpu_assign_sys_reg(vcpu, PMCNTENSET_EL0, val & mask);
+
+	val = read_pmintenset();
+	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
+}
-- 
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:21 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-21-coltonlewis@google.com>
Subject: [PATCH v2 19/23] KVM: arm64: Enforce PMU event filter at vcpu_load()
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

The KVM API for event filtering says that counters do not count when
blocked by the event filter. To enforce that, the event filter must
be rechecked on every load. If the event is filtered, exclude
counting at all exception levels before writing the hardware.

Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu-part.c | 43 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 19bd6e0da222..fd19a1dd7901 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -177,6 +177,47 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 	return hpmn;
 }
 
+/**
+ * kvm_pmu_apply_event_filter() - Apply the PMU event filter
+ * @vcpu: Pointer to vcpu struct
+ *
+ * To uphold the guarantee of the KVM PMU event filter, we must ensure
+ * no counter counts if the event is filtered. Accomplish this by
+ * filtering all exception levels if the event is filtered.
+ */
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 evtyper_set = kvm_pmu_evtyper_mask(vcpu->kvm)
+		& ~kvm_pmu_event_mask(vcpu->kvm)
+		& ~ARMV8_PMU_INCLUDE_EL2;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	u8 i;
+	u64 val;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(val, vcpu->kvm->arch.pmu_filter)) {
+			val |= evtyper_set;
+			val &= ~evtyper_clr;
+		}
+
+		write_pmevtypern(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCFILTR_EL0);
+
+	if (vcpu->kvm->arch.pmu_filter &&
+	    !test_bit(ARMV8_PMUV3_PERFCTR_CPU_CYCLES, vcpu->kvm->arch.pmu_filter)) {
+		val |= evtyper_set;
+		val &= ~evtyper_clr;
+	}
+
+	write_pmccfiltr(val);
+}
+
 /**
  * kvm_pmu_load() - Load untrapped PMU registers
  * @vcpu: Pointer to struct kvm_vcpu
@@ -199,6 +240,8 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
 		return;
 
+	kvm_pmu_apply_event_filter(vcpu);
+
 	for (i = 0; i < pmu->hpmn_max; i++) {
 		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
 		write_pmevcntrn(i, val);
-- 
2.50.0.714.g196bf9f422-goog

From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:22 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-22-coltonlewis@google.com>
Subject: [PATCH v2 20/23] perf: arm_pmuv3: Handle IRQs for Partitioned PMU guest counters
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis

Guest counters will still trigger interrupts that need to be handled
by the host PMU interrupt handler. Clear the overflow flags in
hardware to handle the interrupt as normal, but record which guest
overflow flags were set in the virtual overflow register for later
injecting the interrupt into the guest.
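In outline (an illustrative fragment of the handler change in the
diff below, not new logic): at the end of the host overflow handler,
the just-cleared overflow bits are masked down to the guest partition
and stashed for later injection.

	/* pmovsr holds the overflow flags cleared in hardware above. */
	u64 govf = pmovsr & kvm_pmu_guest_counter_mask(cpu_pmu);

	if (kvm_pmu_is_partitioned(cpu_pmu) && govf)
		kvm_pmu_handle_guest_irq(govf);	/* records bits in vcpu PMOVSSET_EL0 */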
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h |  6 ++++++
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-part.c        | 17 +++++++++++++++++
 drivers/perf/arm_pmuv3.c         | 15 +++++++++++----
 4 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 59c471c33c77..b5caedaef594 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -180,6 +180,11 @@ static inline void write_pmintenset(u32 val)
 	write_sysreg(val, PMINTENSET);
 }
 
+static inline u32 read_pmintenset(void)
+{
+	return read_sysreg(PMINTENSET);
+}
+
 static inline void write_pmintenclr(u32 val)
 {
 	write_sysreg(val, PMINTENCLR);
@@ -245,6 +250,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 	return ~0;
 }
 
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
 static inline bool has_vhe(void)
 {
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 208893485027..e1c8d8fc27bd 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -93,6 +93,7 @@ u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
 u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
+void kvm_pmu_handle_guest_irq(u64 govf);
 
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
@@ -252,6 +253,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 
 static inline void kvm_pmu_host_counters_enable(void) {}
 static inline void kvm_pmu_host_counters_disable(void) {}
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
 #endif
 
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index fd19a1dd7901..8c35447ef103 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -319,3 +319,20 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
 	val = read_pmintenset();
 	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
 }
+
+/**
+ * kvm_pmu_handle_guest_irq() - Record IRQs in guest counters
+ * @govf: Bitmask of guest overflowed counters
+ *
+ * Record IRQs from overflows in guest-reserved counters in the VCPU
+ * register for the guest to clear later.
+ */
+void kvm_pmu_handle_guest_irq(u64 govf)
+{
+	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+	if (!vcpu)
+		return;
+
+	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= govf;
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 48ff8c65de68..52c9a79bea74 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -755,6 +755,8 @@ static u64 armv8pmu_getreset_flags(void)
 
 	/* Write to clear flags */
 	value &= ARMV8_PMU_CNT_MASK_ALL;
+	/* Only reset interrupt enabled counters. */
+	value &= read_pmintenset();
 	write_pmovsclr(value);
 
 	return value;
@@ -857,6 +859,7 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 {
 	u64 pmovsr;
+	u64 govf;
 	struct perf_sample_data data;
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
@@ -883,19 +886,17 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	 * to prevent skews in group events.
 	 */
 	armv8pmu_stop(cpu_pmu);
+
 	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS) {
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
 
-		/* Ignore if we don't have an event. */
-		if (!event)
-			continue;
-
 		/*
 		 * We have a single interrupt for all counters. Check that
 		 * each counter has overflowed before we process it.
 		 */
-		if (!armv8pmu_counter_has_overflowed(pmovsr, idx))
+		if (!event || !armv8pmu_counter_has_overflowed(pmovsr, idx))
 			continue;
 
 		hwc = &event->hw;
@@ -911,6 +912,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		 */
 		perf_event_overflow(event, &data, regs);
 	}
+
+	govf = pmovsr & kvm_pmu_guest_counter_mask(cpu_pmu);
+
+	if (kvm_pmu_is_partitioned(cpu_pmu) && govf)
+		kvm_pmu_handle_guest_irq(govf);
+
 	armv8pmu_start(cpu_pmu);
 
 	return IRQ_HANDLED;
-- 
2.50.0.714.g196bf9f422-goog
From nobody Thu Oct 9 02:17:45 2025
b=kTP05NyGkcdA8AcbXZsVaC2YDXlYgL5dRDs5xaXeKRH7JvHCUvl+7ErR1JEbAzg+FA VnUZEwHLlozW9DhZu1hcPLrCf3GtQQme0pJlKfEC0x9crQukt4slapBamn2tdtSN3cU3 ptUdshGi++TCBA+GWT2da8GqKjQZ4UGTyVlfn7O+pw+JxfMje2xoKfHpLoZHOwSf4/0F Imi0+t5tus/z+KNnTHb/xREAPrmcXfVlq0scYPFuWqV48lear3KsI9t4Nt9GICgbpbOM q9YUMVuI2BGCg1A8fD0eyicgi6YFR8gdr/aY/0yf64WiaxU9JDEgYzQOa+sgyGg49wq8 wvSw== X-Forwarded-Encrypted: i=1; AJvYcCW0C6jb3S8tXExHOGwz6/+NoWtC2G8tHeNkYczeyhDGaaOcNlleQowSNCgvyI2fZwzKHOsYnz0cLp2hL+4=@vger.kernel.org X-Gm-Message-State: AOJu0YzTdiix2MQ5RlPT4tZ+A8RbGJzvTSuatgXuHvDACxyzJiu4Kd/N doBDNEQ0Lijvgfror/0krrbCByZnYxl+XoQFCgImL35fGsNnExG9rsHlCDvEx/SpR5FN2h8EoBB OHFgcjdi6B5mx+vFgPTFLP3TzMw== X-Google-Smtp-Source: AGHT+IGsN72kNwJ2dwKK78XXy9iE4CcdXHD3dDBBOtntuEfratpO+Bq/lh5Sf0arHIXBqE+w6Fo0vZ83wCMILU28YQ== X-Received: from ilbbz6.prod.google.com ([2002:a05:6e02:2686:b0:3d6:d162:a9b0]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6e02:214d:b0:3dd:d746:25eb with SMTP id e9e14a558f8ab-3de38cc8bb0mr44894155ab.16.1750457940797; Fri, 20 Jun 2025 15:19:00 -0700 (PDT) Date: Fri, 20 Jun 2025 22:13:23 +0000 In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250620221326.1261128-1-coltonlewis@google.com> X-Mailer: git-send-email 2.50.0.714.g196bf9f422-goog Message-ID: <20250620221326.1261128-24-coltonlewis@google.com> Subject: [PATCH v2 21/23] KVM: arm64: Inject recorded guest interrupts From: Colton Lewis To: kvm@vger.kernel.org Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" When we re-enter the VM after handling a PMU interrupt, calculate whether it was any of the guest counters that overflowed and inject an interrupt into the guest if so. Signed-off-by: Colton Lewis --- arch/arm64/include/asm/kvm_pmu.h | 2 ++ arch/arm64/kvm/pmu-emul.c | 4 ++-- arch/arm64/kvm/pmu-part.c | 24 ++++++++++++++++++++++-- arch/arm64/kvm/pmu.c | 7 ++++++- 4 files changed, 32 insertions(+), 5 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_= pmu.h index e1c8d8fc27bd..1e632373ba38 100644 --- a/arch/arm64/include/asm/kvm_pmu.h +++ b/arch/arm64/include/asm/kvm_pmu.h @@ -84,6 +84,8 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu); void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu); void kvm_vcpu_pmu_resync_el0(void); void kvm_host_pmu_init(struct arm_pmu *pmu); +bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu); +bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu); =20 #define kvm_vcpu_has_pmu(vcpu) \ (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3)) diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index a6452d10fc1e..926aeda51b9e 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -268,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vc= pu, u64 val) * counter where the values of the global enable control, PMOVSSET_EL0[n],= and * PMINTENSET_EL1[n] are all 1. 

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-emul.c        |  4 ++--
 arch/arm64/kvm/pmu-part.c        | 24 ++++++++++++++++++++++--
 arch/arm64/kvm/pmu.c             |  7 ++++++-
 4 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index e1c8d8fc27bd..1e632373ba38 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -84,6 +84,8 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
 void kvm_host_pmu_init(struct arm_pmu *pmu);
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu);
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu);

 #define kvm_vcpu_has_pmu(vcpu)	\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index a6452d10fc1e..926aeda51b9e 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -268,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);

@@ -405,7 +405,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 	kvm_pmu_counter_increment(vcpu, BIT(idx + 1),
 				  ARMV8_PMUV3_PERFCTR_CHAIN);

-	if (kvm_pmu_overflow_status(vcpu)) {
+	if (kvm_pmu_emul_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);

 		if (!in_nmi())
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 8c35447ef103..2c347e7a26d8 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -260,7 +260,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 	write_pmcr(val);

 	/*
-	 * Loading these registers is tricky because of
+	 * Loading these registers is more intricate because of
 	 * 1. Applying only the bits for guest counters (indicated by mask)
 	 * 2. Setting and clearing are different registers
 	 */
@@ -334,5 +334,25 @@ void kvm_pmu_handle_guest_irq(u64 govf)
 	if (!vcpu)
 		return;

-	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= govf;
+	__vcpu_assign_sys_reg(vcpu, PMOVSSET_EL0, govf);
+}
+
+/**
+ * kvm_pmu_part_overflow_status() - Determine if any guest counters have overflowed
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if any guest counters have overflowed and therefore an
+ * IRQ needs to be injected into the guest.
+ *
+ * Return: True if there was an overflow, false otherwise
+ */
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u64 pmovs = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+	u64 pmint = read_pmintenset();
+	u64 pmcr = read_pmcr();
+
+	return (pmcr & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint);
+}
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 5f0847dc7d53..65b380debc33 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -407,7 +407,11 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	bool overflow;

-	overflow = kvm_pmu_overflow_status(vcpu);
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		overflow = kvm_pmu_part_overflow_status(vcpu);
+	else
+		overflow = kvm_pmu_emul_overflow_status(vcpu);
+
 	if (pmu->irq_level == overflow)
 		return;

@@ -683,6 +687,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 			return -EBUSY;

 		kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
+		vcpu->arch.pmu.irq_num = irq;
 		return 0;
 	}
-- 
2.50.0.714.g196bf9f422-goog
From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:24 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-25-coltonlewis@google.com>
Subject: [PATCH v2 22/23] KVM: arm64: Add ioctl to partition the PMU when supported
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

Add KVM_ARM_PARTITION_PMU to enable the partitioned PMU for a given
vCPU, and a corresponding KVM_CAP_ARM_PARTITION_PMU to check for this
ability.

The capability is advertised when PMUv3 and VHE are supported, and the
ioctl may only be issued on an initialized vCPU. However, because
partitioning relies on command line arguments that configure the
hardware partition in the driver at boot, enabling the partitioned PMU
is refused without that driver configuration even though the
capability is advertised.
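
A minimal sketch of how a VMM might drive the new interface from
userspace, assuming uapi headers from this series and a host booted
with the module parameters described in the documentation hunk below;
enable_partitioned_pmu() and its fd arguments are illustrative, not
part of this series:

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int enable_partitioned_pmu(int kvm_fd, int vcpu_fd)
{
	bool enable = true;

	/* Advertised whenever PMUv3 and VHE are supported... */
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PARTITION_PMU) <= 0)
		return -1;

	/*
	 * ...but the ioctl itself fails with EPERM unless the host PMUv3
	 * driver actually reserved a guest partition at boot, and with
	 * ENOEXEC if the vCPU has not been initialized yet.
	 */
	return ioctl(vcpu_fd, KVM_ARM_PARTITION_PMU, &enable);
}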

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 Documentation/virt/kvm/api.rst    | 21 +++++++++++++++++++++
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/arm.c              | 20 ++++++++++++++++++++
 arch/arm64/kvm/pmu-part.c         |  3 ++-
 include/uapi/linux/kvm.h          |  4 ++++
 5 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 4ef3d8482000..7e76f7c87598 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6478,6 +6478,27 @@ the capability to be present.

 `flags` must currently be zero.

+4.144 KVM_ARM_PARTITION_PMU
+---------------------------
+
+:Capability: KVM_CAP_ARM_PARTITION_PMU
+:Architectures: arm64
+:Type: vcpu ioctl
+:Parameters: arg[0] is a boolean to enable the partitioned PMU
+
+This API controls the PMU implementation used for VMs. The capability
+is only available if the host PMUv3 driver was configured for
+partitioning via the module parameters `arm-pmuv3.partition_pmu=y` and
+`arm-pmuv3.reserved_guest_counters=[0-$NR_COUNTERS]`. When enabled,
+VMs are configured to have direct hardware access to the most
+frequently used registers for the counters configured by the
+aforementioned module parameters, bypassing the KVM traps in the
+standard emulated PMU implementation and reducing the overhead of any
+guest software that uses PMU capabilities such as `perf`.
+
+If the host driver was configured for partitioning but the partitioned
+PMU is disabled through this interface, the VM will use the legacy PMU
+that shares the host partition.

 .. _kvm_run:

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 374771557d2c..0ef7ebb68d17 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -369,6 +369,9 @@ struct kvm_arch {
 	/* Maximum number of counters for the guest */
 	u8 nr_pmu_counters;

+	/* Whether this guest uses the partitioned PMU */
+	bool partitioned_pmu_enable;
+
 	/* Iterator for idreg debugfs */
 	u8 idreg_debugfs_iter;

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7c007ee44ecb..97c320ed07c1 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -21,6 +21,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>

 #define CREATE_TRACE_POINTS
@@ -38,6 +39,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 #include <...>
 #include <...>
@@ -383,6 +385,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_PMU_V3:
 		r = kvm_supports_guest_pmuv3();
 		break;
+	case KVM_CAP_ARM_PARTITION_PMU:
+		r = kvm_supports_guest_pmuv3() && has_vhe();
+		break;
 	case KVM_CAP_ARM_INJECT_SERROR_ESR:
 		r = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
 		break;
@@ -1810,6 +1815,21 @@ long kvm_arch_vcpu_ioctl(struct file *filp,

 		return kvm_arm_vcpu_finalize(vcpu, what);
 	}
+	case KVM_ARM_PARTITION_PMU: {
+		bool enable;
+
+		if (unlikely(!kvm_vcpu_initialized(vcpu)))
+			return -ENOEXEC;
+
+		if (!kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu))
+			return -EPERM;
+
+		if (copy_from_user(&enable, argp, sizeof(enable)))
+			return -EFAULT;
+
+		vcpu->kvm->arch.partitioned_pmu_enable = enable;
+		return 0;
+	}
 	default:
 		r = -EINVAL;
 	}
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 2c347e7a26d8..2388590f4843 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -38,7 +38,8 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
  */
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
 {
-	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu);
+	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu)
+		&& vcpu->kvm->arch.partitioned_pmu_enable;
 }

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c74cf8f73337..2f8a8d4cfe3c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -935,6 +935,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_ARM_EL2_E2H0 241
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
 #define KVM_CAP_GMEM_SHARED_MEM 243
+#define KVM_CAP_ARM_PARTITION_PMU 244

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1413,6 +1414,9 @@ struct kvm_enc_region {
 #define KVM_GET_SREGS2 _IOR(KVMIO, 0xcc, struct kvm_sregs2)
 #define KVM_SET_SREGS2 _IOW(KVMIO, 0xcd, struct kvm_sregs2)

+/* Available with KVM_CAP_ARM_PARTITION_PMU */
+#define KVM_ARM_PARTITION_PMU _IOWR(KVMIO, 0xce, bool)
+
 #define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0)
 #define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1)
-- 
2.50.0.714.g196bf9f422-goog
From nobody Thu Oct 9 02:17:45 2025
Date: Fri, 20 Jun 2025 22:13:25 +0000
In-Reply-To: <20250620221326.1261128-1-coltonlewis@google.com>
References: <20250620221326.1261128-1-coltonlewis@google.com>
Message-ID: <20250620221326.1261128-26-coltonlewis@google.com>
Subject: [PATCH v2 23/23] KVM: arm64: selftests: Add test case for partitioned PMU
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

Run a separate test case for the partitioned PMU in
vpmu_counter_access. Create an enum specifying whether we are testing
the emulated or the partitioned PMU, and modify all the test functions
to take the implementation as an argument and adjust their setup
accordingly.

Because the test should still succeed on a machine where the
capability exists but the ioctl fails because the driver was never
configured properly, use __vcpu_ioctl to avoid asserting on the return
code.
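
As a reminder of the selftest helper convention relied on here,
vcpu_ioctl() asserts that the ioctl succeeds while __vcpu_ioctl()
returns the raw result. A sketch only; try_enable_partition() is
illustrative and not code from the diff:

static void try_enable_partition(struct kvm_vcpu *vcpu)
{
	bool partition = true;

	/*
	 * vcpu_ioctl() would TEST_ASSERT success and abort the run on a
	 * host whose driver was never configured for partitioning, so use
	 * the bare variant and tolerate failure: the VM then simply keeps
	 * the emulated PMU.
	 */
	if (__vcpu_ioctl(vcpu, KVM_ARM_PARTITION_PMU, &partition))
		pr_debug("Partitioned PMU unavailable\n");
}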

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 tools/include/uapi/linux/kvm.h                |  2 +
 .../selftests/kvm/arm64/vpmu_counter_access.c | 63 +++++++++++++------
 2 files changed, 47 insertions(+), 18 deletions(-)

diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index b6ae8ad8934b..cb72b57b9b6c 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -930,6 +930,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_ARM_PARTITION_PMU 244

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1356,6 +1357,7 @@ struct kvm_vfio_spapr_tce {
 #define KVM_S390_SET_CMMA_BITS _IOW(KVMIO, 0xb9, struct kvm_s390_cmma_log)
 /* Memory Encryption Commands */
 #define KVM_MEMORY_ENCRYPT_OP _IOWR(KVMIO, 0xba, unsigned long)
+#define KVM_ARM_PARTITION_PMU _IOWR(KVMIO, 0xce, u8)

 struct kvm_enc_region {
 	__u64 addr;
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index f16b3b27e32e..93259b73de7c 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -25,6 +25,16 @@
 /* The cycle counter bit position that's common among the PMU registers */
 #define ARMV8_PMU_CYCLE_IDX 31

+enum pmu_impl {
+	EMULATED,
+	PARTITIONED
+};
+
+const char *pmu_impl_str[] = {
+	"Emulated",
+	"Partitioned"
+};
+
 struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -405,7 +415,7 @@ static void guest_code(uint64_t expected_pmcr_n)
 }

 /* Create a VM that has one vCPU with PMUv3 configured. */
-static void create_vpmu_vm(void *guest_code)
+static void create_vpmu_vm(void *guest_code, enum pmu_impl impl)
 {
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
@@ -419,6 +429,7 @@ static void create_vpmu_vm(void *guest_code)
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
 		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
 	};
+	bool partition = impl;

 	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
 	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
@@ -449,6 +460,9 @@ static void create_vpmu_vm(void *guest_code)
 	/* Initialize vPMU */
 	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		__vcpu_ioctl(vpmu_vm.vcpu, KVM_ARM_PARTITION_PMU, &partition);
 }

 static void destroy_vpmu_vm(void)
@@ -475,12 +489,12 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	}
 }

-static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
+static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, enum pmu_impl impl, bool expect_fail)
 {
 	struct kvm_vcpu *vcpu;
 	uint64_t pmcr, pmcr_orig;

-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	vcpu = vpmu_vm.vcpu;

 	pmcr_orig = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
@@ -508,7 +522,7 @@ static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_access_test(uint64_t pmcr_n)
+static void run_access_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	uint64_t sp;
 	struct kvm_vcpu *vcpu;

 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);

-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;

 	/* Save the initial sp to restore them later to run the guest again */
@@ -529,6 +543,7 @@ static void run_access_test(uint64_t pmcr_n)
 	 * check if PMCR_EL0.N is preserved.
 	 */
 	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
 	vcpu_init_descriptor_tables(vcpu);
@@ -550,14 +565,14 @@ static struct pmreg_sets validity_check_reg_sets[] = {
 * Create a VM, and check if KVM handles the userspace accesses of
 * the PMU register sets in @validity_check_reg_sets[] correctly.
 */
-static void run_pmregs_validity_test(uint64_t pmcr_n)
+static void run_pmregs_validity_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	int i;
 	struct kvm_vcpu *vcpu;
 	uint64_t set_reg_id, clr_reg_id, reg_val;
 	uint64_t valid_counters_mask, max_counters_mask;

-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;

 	valid_counters_mask = get_counters_mask(pmcr_n);
@@ -607,11 +622,11 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
 * the vCPU to @pmcr_n, which is larger than the host value.
 * The attempt should fail as @pmcr_n is too big to set for the vCPU.
 */
-static void run_error_test(uint64_t pmcr_n)
+static void run_error_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
-	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pmcr_n);

-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, true);
 	destroy_vpmu_vm();
 }

@@ -619,30 +634,42 @@
 /*
  * Return the default number of implemented PMU event counters excluding
  * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
  */
-static uint64_t get_pmcr_n_limit(void)
+static uint64_t get_pmcr_n_limit(enum pmu_impl impl)
 {
 	uint64_t pmcr;

-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	pmcr = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
 	destroy_vpmu_vm();
 	return get_pmcr_n(pmcr);
 }

-int main(void)
+void test_pmu(enum pmu_impl impl)
 {
 	uint64_t i, pmcr_n;

-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+	pr_info("Testing PMU: Implementation = %s\n", pmu_impl_str[impl]);
+
+	pmcr_n = get_pmcr_n_limit(impl);
+	pr_debug("PMCR_EL0.N: Limit = %lu\n", pmcr_n);

-	pmcr_n = get_pmcr_n_limit();
 	for (i = 0; i <= pmcr_n; i++) {
-		run_access_test(i);
-		run_pmregs_validity_test(i);
+		run_access_test(i, impl);
+		run_pmregs_validity_test(i, impl);
 	}

 	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+		run_error_test(i, impl);
+}
+
+int main(void)
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	test_pmu(EMULATED);
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		test_pmu(PARTITIONED);

 	return 0;
 }
-- 
2.50.0.714.g196bf9f422-goog