From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:37 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.50.0.727.gbf7dc18ff4-goog
Message-ID: <20250626200459.1153955-2-coltonlewis@google.com>
Subject: [PATCH v3 01/22] arm64: cpufeature: Add cpucap for HPMN0
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>
Content-Type: text/plain; charset="utf-8"

Add a capability for FEAT_HPMN0, which indicates whether MDCR_EL2.HPMN
can specify 0 counters reserved for the guest.

This required changing HPMN0 to an UnsignedEnum in tools/sysreg
because otherwise not all the appropriate macros are generated to add
it to arm64_cpu_capabilities_arm64_features.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
Acked-by: Mark Rutland
---
 arch/arm64/kernel/cpufeature.c | 8 ++++++++
 arch/arm64/tools/cpucaps       | 1 +
 arch/arm64/tools/sysreg        | 6 +++---
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b34044e20128..73a7dac4b6f6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -548,6 +548,7 @@ static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_HPMN0_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_DoubleLock_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_PMSVer_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_CTX_CMPs_SHIFT, 4, 0),
@@ -2896,6 +2897,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, FGT2)
 	},
+	{
+		.desc = "FEAT_HPMN0",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_HPMN0,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64DFR0_EL1, HPMN0, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 10effd4cff6b..5b196ba21629 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -39,6 +39,7 @@ HAS_GIC_CPUIF_SYSREGS
 HAS_GIC_PRIO_MASKING
 HAS_GIC_PRIO_RELAXED_SYNC
 HAS_HCR_NV1
+HAS_HPMN0
 HAS_HCX
 HAS_LDAPR
 HAS_LPA2
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 8a8cf6874298..d29742481754 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1531,9 +1531,9 @@ EndEnum
 EndSysreg
 
 Sysreg	ID_AA64DFR0_EL1	3	0	0	5	0
-Enum	63:60	HPMN0
-	0b0000	UNPREDICTABLE
-	0b0001	DEF
+UnsignedEnum	63:60	HPMN0
+	0b0000	NI
+	0b0001	IMP
 EndEnum
 UnsignedEnum	59:56	ExtTrcBuff
 	0b0000	NI
-- 
2.50.0.727.gbf7dc18ff4-goog
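The new cpucap behaves like any other arm64 system-wide capability once
cpufeature detection has run. A minimal sketch of a consumer, assuming
only the ARM64_HAS_HPMN0 cap added above and the existing
cpus_have_final_cap() helper (the wrapper name system_supports_hpmn0 is
hypothetical, not part of this series):

	#include <asm/cpufeature.h>

	/*
	 * Sketch: returns true when every CPU implements FEAT_HPMN0,
	 * i.e. when programming MDCR_EL2.HPMN = 0 is architecturally
	 * well-defined rather than CONSTRAINED UNPREDICTABLE.
	 */
	static inline bool system_supports_hpmn0(void)
	{
		return cpus_have_final_cap(ARM64_HAS_HPMN0);
	}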
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:38 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.50.0.727.gbf7dc18ff4-goog
Message-ID: <20250626200459.1153955-3-coltonlewis@google.com>
Subject: [PATCH v3 02/22] arm64: Generate sign macro for sysreg Enums
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>
Content-Type: text/plain; charset="utf-8"

There's no reason Enums shouldn't be equivalent to UnsignedEnums and
explicitly specify that they are unsigned. This will avoid the
annoyance I had with HPMN0.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/tools/gen-sysreg.awk | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/tools/gen-sysreg.awk b/arch/arm64/tools/gen-sysreg.awk
index f2a1732cb1f6..fa21a632d9b7 100755
--- a/arch/arm64/tools/gen-sysreg.awk
+++ b/arch/arm64/tools/gen-sysreg.awk
@@ -308,6 +308,7 @@ $1 == "Enum" && (block_current() == "Sysreg" || block_current() == "SysregFields
 	parse_bitdef(reg, field, $2)
 
 	define_field(reg, field, msb, lsb)
+	define_field_sign(reg, field, "false")
 
 	next
 }
-- 
2.50.0.727.gbf7dc18ff4-goog
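For context, define_field_sign() is the gen-sysreg.awk helper that emits
the <REG>_<FIELD>_SIGNED macro the cpufeature code keys off; with this
change a plain Enum emits it too, hard-wired to "false". A sketch of
what the generated output for the HPMN0 field from patch 01 would then
contain (the exact layout of the generated sysreg header may differ):

	/* emitted by gen-sysreg.awk into the generated sysreg header (sketch) */
	#define ID_AA64DFR0_EL1_HPMN0_SIGNED	false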
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:39 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.50.0.727.gbf7dc18ff4-goog
Message-ID: <20250626200459.1153955-4-coltonlewis@google.com>
Subject: [PATCH v3 03/22] KVM: arm64: Define PMI{CNTR,CFILTR}_EL0 as undef_access
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>
Content-Type: text/plain; charset="utf-8"

Because KVM isn't fully prepared to support these yet even though the
host PMUv3 driver does, define them as undef_access for now.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/sys_regs.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 76c2f0da821f..99fdbe174202 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3092,6 +3092,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_SVCR), undef_access, reset_val, SVCR, 0, .visibility = sme_visibility },
 	{ SYS_DESC(SYS_FPMR), undef_access, reset_val, FPMR, 0, .visibility = fp8_visibility },
 
+	{ SYS_DESC(SYS_PMICNTR_EL0), undef_access },
+	{ SYS_DESC(SYS_PMICFILTR_EL0), undef_access },
+
 	{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
 	  .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
 	{ PMU_SYS_REG(PMCNTENSET_EL0),
-- 
2.50.0.727.gbf7dc18ff4-goog
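undef_access makes any guest access to the listed encodings UNDEF
rather than emulating them. A self-contained sketch of the trap-table
pattern the two entries above plug into (simplified types and an
invented inject_undef() helper; not KVM's actual definitions or
signatures):

	struct vcpu;	/* stand-in for struct kvm_vcpu */

	static void inject_undef(struct vcpu *vcpu) { (void)vcpu; }

	/*
	 * A table entry pairs a system-register encoding with a handler.
	 * An undef_access-style handler injects an UNDEF exception into
	 * the guest and reports that the access was not emulated.
	 */
	struct sreg_entry {
		unsigned int encoding;
		int (*access)(struct vcpu *vcpu);
	};

	static int undef_access_sketch(struct vcpu *vcpu)
	{
		inject_undef(vcpu);
		return 0;	/* nothing emulated */
	}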
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:40 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.50.0.727.gbf7dc18ff4-goog
Message-ID: <20250626200459.1153955-5-coltonlewis@google.com>
Subject: [PATCH v3 04/22] KVM: arm64: Cleanup PMU includes
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>
Content-Type: text/plain; charset="utf-8"

From: Marc Zyngier

Reorganize these tangled headers.
* Respect the move defining the interface between KVM and PMU in its
  own header asm/kvm_pmu.h

* Define an empty struct arm_pmu so it is defined for those interface
  functions when compiling with CONFIG_KVM but not CONFIG_ARM_PMU

Signed-off-by: Marc Zyngier
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/arm_pmuv3.h      |  2 +-
 arch/arm64/include/asm/kvm_host.h       | 15 +--------------
 arch/arm64/include/asm/kvm_pmu.h        | 15 +++++++++++++++
 arch/arm64/kvm/debug.c                  |  1 +
 arch/arm64/kvm/hyp/include/hyp/switch.h |  1 +
 arch/arm64/kvm/pmu.c                    |  2 ++
 arch/arm64/kvm/sys_regs.c               |  1 +
 include/linux/perf/arm_pmu.h            |  5 +++++
 virt/kvm/kvm_main.c                     |  1 +
 9 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 8a777dec8d88..32c003a7b810 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -6,7 +6,7 @@
 #ifndef __ASM_PMUV3_H
 #define __ASM_PMUV3_H
 
-#include <asm/kvm_host.h>
+#include <asm/kvm_pmu.h>
 
 #include
 #include
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 27ed26bd4381..2df76689381a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1487,25 +1488,11 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
-static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
-{
-	return (!has_vhe() && attr->exclude_host);
-}
-
 #ifdef CONFIG_KVM
-void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
-void kvm_clr_pmu_events(u64 clr);
-bool kvm_set_pmuserenr(u64 val);
 void kvm_enable_trbe(void);
 void kvm_disable_trbe(void);
 void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest);
 #else
-static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
-static inline void kvm_clr_pmu_events(u64 clr) {}
-static inline bool kvm_set_pmuserenr(u64 val)
-{
-	return false;
-}
 static inline void kvm_enable_trbe(void) {}
 static inline void kvm_disable_trbe(void) {}
 static inline void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest) {}
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index baf028d19dfc..ad3247b46838 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -11,9 +11,15 @@
 #include
 #include
 #include
+#include
 
 #define KVM_ARMV8_PMU_MAX_COUNTERS 32
 
+#define kvm_pmu_counter_deferred(attr)			\
+	({						\
+		!has_vhe() && (attr)->exclude_host;	\
+	})
+
 #if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM)
 struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
@@ -68,6 +74,9 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
 
 struct kvm_pmu_events *kvm_get_pmu_events(void);
+void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
+void kvm_clr_pmu_events(u64 clr);
+bool kvm_set_pmuserenr(u64 val);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
@@ -161,6 +170,12 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 
 #define kvm_vcpu_has_pmu(vcpu)	({ false; })
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
+static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
+static inline void kvm_clr_pmu_events(u64 clr) {}
+static inline bool kvm_set_pmuserenr(u64 val)
+{
+	return false;
+}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 1a7dab333f55..a554c3e368dc 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -9,6 +9,7 @@
 
 #include
 #include
+#include <asm/kvm_pmu.h>
 
 #include
 #include
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 7599844908c0..825b81749972 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include <asm/kvm_pmu.h>
 #include
 
 #include
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 6b48a3d16d0d..8bfc6b0a85f6 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -8,6 +8,8 @@
 #include
 #include
 
+#include <asm/kvm_pmu.h>
+
 static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
 /*
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 99fdbe174202..eaff6d63ef77 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <asm/kvm_pmu.h>
 
 #include
 #include
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 6dc5e0cd76ca..fb382bcd4f4b 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -187,6 +187,11 @@ void armpmu_free_irq(int irq, int cpu);
 
 #define ARMV8_PMU_PDEV_NAME "armv8-pmu"
 
+#else
+
+struct arm_pmu {
+};
+
 #endif /* CONFIG_ARM_PMU */
 
 #define ARMV8_SPE_PDEV_NAME "arm,spe-v1"
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e2f6344256ce..25259fcf3115 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -48,6 +48,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
-- 
2.50.0.727.gbf7dc18ff4-goog
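The second bullet is the subtle part of this cleanup: with CONFIG_KVM=y
and CONFIG_ARM_PMU=n, translation units still see interface
declarations that mention struct arm_pmu, so the type has to exist. A
standalone sketch of the idiom, with illustrative names rather than the
kernel's (note that empty structs are a GNU C extension the kernel
relies on):

	/* Sketch of the empty-struct fallback behind a config switch. */
	#ifdef CONFIG_HAS_REAL_PMU
	struct pmu_dev {
		int irq;
		/* ... real fields ... */
	};
	#else
	struct pmu_dev {
	};	/* empty but complete: pointers can be declared and passed,
		 * and code needing a complete type still compiles */
	#endif

	/* Interface functions taking the type compile either way. */
	static inline void pmu_iface_init(struct pmu_dev *dev) { (void)dev; }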
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:41 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.50.0.727.gbf7dc18ff4-goog
Message-ID: <20250626200459.1153955-6-coltonlewis@google.com>
Subject: [PATCH v3 05/22] KVM: arm64: Reorganize PMU functions
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>
Content-Type: text/plain; charset="utf-8"

A lot of functions in pmu-emul.c aren't specific to the emulated PMU
implementation.
Move them to the more appropriate pmu.c file where shared PMU
functions should live.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/kvm_pmu.h |   3 +
 arch/arm64/kvm/pmu-emul.c        | 672 +-----------------------------
 arch/arm64/kvm/pmu.c             | 673 +++++++++++++++++++++++++++++++
 3 files changed, 677 insertions(+), 671 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index ad3247b46838..6c961e877804 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -51,13 +51,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu);
+u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu);
 u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu);
+u32 kvm_pmu_event_mask(struct kvm *kvm);
 u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1);
 void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu);
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
 void kvm_pmu_update_run(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index dcdd80ffd49d..bcaa9f7a8ca2 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -17,19 +17,10 @@
 
 #define PERF_ATTR_CFG1_COUNTER_64BIT	BIT(0)
 
-static LIST_HEAD(arm_pmus);
-static DEFINE_MUTEX(arm_pmus_lock);
-
 static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
 
-bool kvm_supports_guest_pmuv3(void)
-{
-	guard(mutex)(&arm_pmus_lock);
-	return !list_empty(&arm_pmus);
-}
-
 static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc)
 {
 	return container_of(pmc, struct kvm_vcpu, arch.pmu.pmc[pmc->idx]);
@@ -40,46 +31,6 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 __kvm_pmu_event_mask(unsigned int pmuver)
-{
-	switch (pmuver) {
-	case ID_AA64DFR0_EL1_PMUVer_IMP:
-		return GENMASK(9, 0);
-	case ID_AA64DFR0_EL1_PMUVer_V3P1:
-	case ID_AA64DFR0_EL1_PMUVer_V3P4:
-	case ID_AA64DFR0_EL1_PMUVer_V3P5:
-	case ID_AA64DFR0_EL1_PMUVer_V3P7:
-		return GENMASK(15, 0);
-	default:		/* Shouldn't be here, just for sanity */
-		WARN_ONCE(1, "Unknown PMU version %d\n", pmuver);
-		return 0;
-	}
-}
-
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
-{
-	u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
-	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
-
-	return __kvm_pmu_event_mask(pmuver);
-}
-
-u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
-{
-	u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
-		   kvm_pmu_event_mask(kvm);
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
-		mask |= ARMV8_PMU_INCLUDE_EL2;
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
-		mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
-			ARMV8_PMU_EXCLUDE_NS_EL1 |
-			ARMV8_PMU_EXCLUDE_EL3;
-
-	return mask;
-}
-
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -272,59 +223,6 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 	irq_work_sync(&vcpu->arch.pmu.overflow_work);
 }
 
-static u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu)
-{
-	unsigned int hpmn, n;
-
-	if (!vcpu_has_nv(vcpu))
-		return 0;
-
-	hpmn = SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
-	n = vcpu->kvm->arch.nr_pmu_counters;
-
-	/*
-	 * Programming HPMN to a value greater than PMCR_EL0.N is
-	 * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an
-	 * UNKNOWN number of counters (in our case, zero) are reserved for EL2.
-	 */
-	if (hpmn >= n)
-		return 0;
-
-	/*
-	 * Programming HPMN=0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't
-	 * implemented. Since KVM's ability to emulate HPMN=0 does not directly
-	 * depend on hardware (all PMU registers are trapped), make the
-	 * implementation choice that all counters are included in the second
-	 * range reserved for EL2/EL3.
-	 */
-	return GENMASK(n - 1, hpmn);
-}
-
-bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
-{
-	return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
-}
-
-u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
-
-	if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
-		return mask;
-
-	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
-}
-
-u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
-
-	if (val == 0)
-		return BIT(ARMV8_PMU_CYCLE_IDX);
-	else
-		return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
-}
-
 static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc)
 {
 	if (!pmc->perf_event) {
@@ -370,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 
@@ -393,24 +291,6 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 	return reg;
 }
 
-static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
-{
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	bool overflow;
-
-	overflow = kvm_pmu_overflow_status(vcpu);
-	if (pmu->irq_level == overflow)
-		return;
-
-	pmu->irq_level = overflow;
-
-	if (likely(irqchip_in_kernel(vcpu->kvm))) {
-		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
-					      pmu->irq_num, overflow, pmu);
-		WARN_ON(ret);
-	}
-}
-
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -436,43 +316,6 @@ void kvm_pmu_update_run(struct kvm_vcpu *vcpu)
 	regs->device_irq_level |= KVM_ARM_DEV_PMU;
 }
 
-/**
- * kvm_pmu_flush_hwstate - flush pmu state to cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the host, and inject
- * an interrupt if that was the case.
- */
-void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/**
- * kvm_pmu_sync_hwstate - sync pmu state from cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the guest, and
- * inject an interrupt if that was the case.
- */
-void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/*
- * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
- * to the event.
- * This is why we need a callback to do it once outside of the NMI context.
- */
-static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
-{
-	struct kvm_vcpu *vcpu;
-
-	vcpu = container_of(work, struct kvm_vcpu, arch.pmu.overflow_work);
-	kvm_vcpu_kick(vcpu);
-}
-
 /*
  * Perform an increment on any of the counters described in @mask,
  * generating the overflow if required, and propagate it as a chained
@@ -784,132 +627,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_create_perf_event(pmc);
 }
 
-void kvm_host_pmu_init(struct arm_pmu *pmu)
-{
-	struct arm_pmu_entry *entry;
-
-	/*
-	 * Check the sanitised PMU version for the system, as KVM does not
-	 * support implementations where PMUv3 exists on a subset of CPUs.
-	 */
-	if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit()))
-		return;
-
-	guard(mutex)(&arm_pmus_lock);
-
-	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
-	if (!entry)
-		return;
-
-	entry->arm_pmu = pmu;
-	list_add_tail(&entry->entry, &arm_pmus);
-}
-
-static struct arm_pmu *kvm_pmu_probe_armpmu(void)
-{
-	struct arm_pmu_entry *entry;
-	struct arm_pmu *pmu;
-	int cpu;
-
-	guard(mutex)(&arm_pmus_lock);
-
-	/*
-	 * It is safe to use a stale cpu to iterate the list of PMUs so long as
-	 * the same value is used for the entirety of the loop. Given this, and
-	 * the fact that no percpu data is used for the lookup there is no need
-	 * to disable preemption.
-	 *
-	 * It is still necessary to get a valid cpu, though, to probe for the
-	 * default PMU instance as userspace is not required to specify a PMU
-	 * type. In order to uphold the preexisting behavior KVM selects the
-	 * PMU instance for the core during vcpu init. A dependent use
-	 * case would be a user with disdain of all things big.LITTLE that
-	 * affines the VMM to a particular cluster of cores.
-	 *
-	 * In any case, userspace should just do the sane thing and use the UAPI
-	 * to select a PMU type directly. But, be wary of the baggage being
-	 * carried here.
-	 */
-	cpu = raw_smp_processor_id();
-	list_for_each_entry(entry, &arm_pmus, entry) {
-		pmu = entry->arm_pmu;
-
-		if (cpumask_test_cpu(cpu, &pmu->supported_cpus))
-			return pmu;
-	}
-
-	return NULL;
-}
-
-static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1)
-{
-	u32 hi[2], lo[2];
-
-	bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
-	bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
-
-	return ((u64)hi[pmceid1] << 32) | lo[pmceid1];
-}
-
-static u64 compute_pmceid0(struct arm_pmu *pmu)
-{
-	u64 val = __compute_pmceid(pmu, 0);
-
-	/* always support SW_INCR */
-	val |= BIT(ARMV8_PMUV3_PERFCTR_SW_INCR);
-	/* always support CHAIN */
-	val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
-	return val;
-}
-
-static u64 compute_pmceid1(struct arm_pmu *pmu)
-{
-	u64 val = __compute_pmceid(pmu, 1);
-
-	/*
-	 * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled
-	 * as RAZ
-	 */
-	val &= ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) |
-		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) |
-		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32));
-	return val;
-}
-
-u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
-{
-	struct arm_pmu *cpu_pmu = vcpu->kvm->arch.arm_pmu;
-	unsigned long *bmap = vcpu->kvm->arch.pmu_filter;
-	u64 val, mask = 0;
-	int base, i, nr_events;
-
-	if (!pmceid1) {
-		val = compute_pmceid0(cpu_pmu);
-		base = 0;
-	} else {
-		val = compute_pmceid1(cpu_pmu);
-		base = 32;
-	}
-
-	if (!bmap)
-		return val;
-
-	nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
-
-	for (i = 0; i < 32; i += 8) {
-		u64 byte;
-
-		byte = bitmap_get_value8(bmap, base + i);
-		mask |= byte << i;
-		if (nr_events >= (0x4000 + base + 32)) {
-			byte = bitmap_get_value8(bmap, 0x4000 + base + i);
-			mask |= byte << (32 + i);
-		}
-	}
-
-	return val & mask;
-}
-
 void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
 {
 	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
@@ -921,393 +638,6 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
 	kvm_pmu_reprogram_counter_mask(vcpu, mask);
 }
 
-int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
-{
-	if (!vcpu->arch.pmu.created)
-		return -EINVAL;
-
-	/*
-	 * A valid interrupt configuration for the PMU is either to have a
-	 * properly configured interrupt number and using an in-kernel
-	 * irqchip, or to not have an in-kernel GIC and not set an IRQ.
-	 */
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int irq = vcpu->arch.pmu.irq_num;
-		/*
-		 * If we are using an in-kernel vgic, at this point we know
-		 * the vgic will be initialized, so we can check the PMU irq
-		 * number against the dimensions of the vgic and make sure
-		 * it's valid.
-		 */
-		if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq))
-			return -EINVAL;
-	} else if (kvm_arm_pmu_irq_initialized(vcpu)) {
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
-{
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int ret;
-
-		/*
-		 * If using the PMU with an in-kernel virtual GIC
-		 * implementation, we require the GIC to be already
-		 * initialized when initializing the PMU.
-		 */
-		if (!vgic_initialized(vcpu->kvm))
-			return -ENODEV;
-
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			return -ENXIO;
-
-		ret = kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num,
-					 &vcpu->arch.pmu);
-		if (ret)
-			return ret;
-	}
-
-	init_irq_work(&vcpu->arch.pmu.overflow_work,
-		      kvm_pmu_perf_overflow_notify_vcpu);
-
-	vcpu->arch.pmu.created = true;
-	return 0;
-}
-
-/*
- * For one VM the interrupt type must be same for each vcpu.
- * As a PPI, the interrupt number is the same for all vcpus,
- * while as an SPI it must be a separate number per vcpu.
- */
-static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
-{
-	unsigned long i;
-	struct kvm_vcpu *vcpu;
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			continue;
-
-		if (irq_is_ppi(irq)) {
-			if (vcpu->arch.pmu.irq_num != irq)
-				return false;
-		} else {
-			if (vcpu->arch.pmu.irq_num == irq)
-				return false;
-		}
-	}
-
-	return true;
-}
-
-/**
- * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters.
- * @kvm: The kvm pointer
- */
-u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
-{
-	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
-
-	/*
-	 * PMUv3 requires that all event counters are capable of counting any
-	 * event, though the same may not be true of non-PMUv3 hardware.
-	 */
-	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
-		return 1;
-
-	/*
-	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
-	 * Ignore those and return only the general-purpose counters.
-	 */
-	return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
-}
-
-static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
-{
-	kvm->arch.nr_pmu_counters = nr;
-
-	/* Reset MDCR_EL2.HPMN behind the vcpus' back... */
-	if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) {
-		struct kvm_vcpu *vcpu;
-		unsigned long i;
-
-		kvm_for_each_vcpu(i, vcpu, kvm) {
-			u64 val = __vcpu_sys_reg(vcpu, MDCR_EL2);
-			val &= ~MDCR_EL2_HPMN;
-			val |= FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters);
-			__vcpu_assign_sys_reg(vcpu, MDCR_EL2, val);
-		}
-	}
-}
-
-static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
-{
-	lockdep_assert_held(&kvm->arch.config_lock);
-
-	kvm->arch.arm_pmu = arm_pmu;
-	kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm));
-}
-
-/**
- * kvm_arm_set_default_pmu - No PMU set, get the default one.
- * @kvm: The kvm pointer
- *
- * The observant among you will notice that the supported_cpus
- * mask does not get updated for the default PMU even though it
- * is quite possible the selected instance supports only a
- * subset of cores in the system. This is intentional, and
- * upholds the preexisting behavior on heterogeneous systems
- * where vCPUs can be scheduled on any core but the guest
- * counters could stop working.
- */
-int kvm_arm_set_default_pmu(struct kvm *kvm)
-{
-	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
-
-	if (!arm_pmu)
-		return -ENODEV;
-
-	kvm_arm_set_pmu(kvm, arm_pmu);
-	return 0;
-}
-
-static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
-{
-	struct kvm *kvm = vcpu->kvm;
-	struct arm_pmu_entry *entry;
-	struct arm_pmu *arm_pmu;
-	int ret = -ENXIO;
-
-	lockdep_assert_held(&kvm->arch.config_lock);
-	mutex_lock(&arm_pmus_lock);
-
-	list_for_each_entry(entry, &arm_pmus, entry) {
-		arm_pmu = entry->arm_pmu;
-		if (arm_pmu->pmu.type == pmu_id) {
-			if (kvm_vm_has_ran_once(kvm) ||
-			    (kvm->arch.pmu_filter && kvm->arch.arm_pmu != arm_pmu)) {
-				ret = -EBUSY;
-				break;
-			}
-
-			kvm_arm_set_pmu(kvm, arm_pmu);
-			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
-			ret = 0;
-			break;
-		}
-	}
-
-	mutex_unlock(&arm_pmus_lock);
-	return ret;
-}
-
-static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned int n)
-{
-	struct kvm *kvm = vcpu->kvm;
-
-	if (!kvm->arch.arm_pmu)
-		return -EINVAL;
-
-	if (n > kvm_arm_pmu_get_max_counters(kvm))
-		return -EINVAL;
-
-	kvm_arm_set_nr_counters(kvm, n);
-	return 0;
-}
-
-int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	struct kvm *kvm = vcpu->kvm;
-
-	lockdep_assert_held(&kvm->arch.config_lock);
-
-	if (!kvm_vcpu_has_pmu(vcpu))
-		return -ENODEV;
-
-	if (vcpu->arch.pmu.created)
-		return -EBUSY;
-
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int irq;
-
-		if (!irqchip_in_kernel(kvm))
-			return -EINVAL;
-
-		if (get_user(irq, uaddr))
-			return -EFAULT;
-
-		/* The PMU overflow interrupt can be a PPI or a valid SPI. */
-		if (!(irq_is_ppi(irq) || irq_is_spi(irq)))
-			return -EINVAL;
-
-		if (!pmu_irq_is_valid(kvm, irq))
-			return -EINVAL;
-
-		if (kvm_arm_pmu_irq_initialized(vcpu))
-			return -EBUSY;
-
-		kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
-		vcpu->arch.pmu.irq_num = irq;
-		return 0;
-	}
-	case KVM_ARM_VCPU_PMU_V3_FILTER: {
-		u8 pmuver = kvm_arm_pmu_get_pmuver_limit();
-		struct kvm_pmu_event_filter __user *uaddr;
-		struct kvm_pmu_event_filter filter;
-		int nr_events;
-
-		/*
-		 * Allow userspace to specify an event filter for the entire
-		 * event range supported by PMUVer of the hardware, rather
-		 * than the guest's PMUVer for KVM backward compatibility.
-		 */
-		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
-
-		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
-
-		if (copy_from_user(&filter, uaddr, sizeof(filter)))
-			return -EFAULT;
-
-		if (((u32)filter.base_event + filter.nevents) > nr_events ||
-		    (filter.action != KVM_PMU_EVENT_ALLOW &&
-		     filter.action != KVM_PMU_EVENT_DENY))
-			return -EINVAL;
-
-		if (kvm_vm_has_ran_once(kvm))
-			return -EBUSY;
-
-		if (!kvm->arch.pmu_filter) {
-			kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
-			if (!kvm->arch.pmu_filter)
-				return -ENOMEM;
-
-			/*
-			 * The default depends on the first applied filter.
-			 * If it allows events, the default is to deny.
-			 * Conversely, if the first filter denies a set of
-			 * events, the default is to allow.
-			 */
-			if (filter.action == KVM_PMU_EVENT_ALLOW)
-				bitmap_zero(kvm->arch.pmu_filter, nr_events);
-			else
-				bitmap_fill(kvm->arch.pmu_filter, nr_events);
-		}
-
-		if (filter.action == KVM_PMU_EVENT_ALLOW)
-			bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
-		else
-			bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
-
-		return 0;
-	}
-	case KVM_ARM_VCPU_PMU_V3_SET_PMU: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int pmu_id;
-
-		if (get_user(pmu_id, uaddr))
-			return -EFAULT;
-
-		return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id);
-	}
-	case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: {
-		unsigned int __user *uaddr = (unsigned int __user *)(long)attr->addr;
-		unsigned int n;
-
-		if (get_user(n, uaddr))
-			return -EFAULT;
-
-		return kvm_arm_pmu_v3_set_nr_counters(vcpu, n);
-	}
-	case KVM_ARM_VCPU_PMU_V3_INIT:
-		return kvm_arm_pmu_v3_init(vcpu);
-	}
-
-	return -ENXIO;
-}
-
-int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int irq;
-
-		if (!irqchip_in_kernel(vcpu->kvm))
-			return -EINVAL;
-
-		if (!kvm_vcpu_has_pmu(vcpu))
-			return -ENODEV;
-
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			return -ENXIO;
-
-		irq = vcpu->arch.pmu.irq_num;
-		return put_user(irq, uaddr);
-	}
-	}
-
-	return -ENXIO;
-}
-
-int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ:
-	case KVM_ARM_VCPU_PMU_V3_INIT:
-	case KVM_ARM_VCPU_PMU_V3_FILTER:
-	case KVM_ARM_VCPU_PMU_V3_SET_PMU:
-	case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS:
-		if (kvm_vcpu_has_pmu(vcpu))
-			return 0;
-	}
-
-	return -ENXIO;
-}
-
-u8 kvm_arm_pmu_get_pmuver_limit(void)
-{
-	unsigned int pmuver;
-
-	pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer,
-			       read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1));
-
-	/*
-	 * Spoof a barebones PMUv3 implementation if the system supports IMPDEF
-	 * traps of the PMUv3 sysregs
-	 */
-	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
-		return ID_AA64DFR0_EL1_PMUVer_IMP;
-
-	/*
-	 * Otherwise, treat IMPLEMENTATION DEFINED functionality as
-	 * unimplemented
-	 */
-	if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
-		return 0;
-
-	return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5);
-}
-
-/**
- * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
- * @vcpu: The vcpu pointer
- */
-u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
-{
-	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	u64 n = vcpu->kvm->arch.nr_pmu_counters;
-
-	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
-		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
-
-	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
-}
-
 void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu)
 {
 	bool reprogrammed = false;
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 8bfc6b0a85f6..79b7ea037153 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -8,10 +8,21 @@
 #include
 #include
 
+#include
 #include <asm/kvm_pmu.h>
 
+static LIST_HEAD(arm_pmus);
+static DEFINE_MUTEX(arm_pmus_lock);
 static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
+#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
+
+bool kvm_supports_guest_pmuv3(void)
+{
+	guard(mutex)(&arm_pmus_lock);
+	return !list_empty(&arm_pmus);
+}
+
 /*
  * Given the perf event attributes and system type, determine
  * if we are going to need to switch counters at guest entry/exit.
@@ -211,3 +222,665 @@ void kvm_vcpu_pmu_resync_el0(void)
 
 	kvm_make_request(KVM_REQ_RESYNC_PMU_EL0, vcpu);
 }
+
+void kvm_host_pmu_init(struct arm_pmu *pmu)
+{
+	struct arm_pmu_entry *entry;
+
+	/*
+	 * Check the sanitised PMU version for the system, as KVM does not
+	 * support implementations where PMUv3 exists on a subset of CPUs.
+	 */
+	if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit()))
+		return;
+
+	guard(mutex)(&arm_pmus_lock);
+
+	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		return;
+
+	entry->arm_pmu = pmu;
+	list_add_tail(&entry->entry, &arm_pmus);
+}
+
+static struct arm_pmu *kvm_pmu_probe_armpmu(void)
+{
+	struct arm_pmu_entry *entry;
+	struct arm_pmu *pmu;
+	int cpu;
+
+	guard(mutex)(&arm_pmus_lock);
+
+	/*
+	 * It is safe to use a stale cpu to iterate the list of PMUs so long as
+	 * the same value is used for the entirety of the loop. Given this, and
+	 * the fact that no percpu data is used for the lookup there is no need
+	 * to disable preemption.
+	 *
+	 * It is still necessary to get a valid cpu, though, to probe for the
+	 * default PMU instance as userspace is not required to specify a PMU
+	 * type. In order to uphold the preexisting behavior KVM selects the
+	 * PMU instance for the core during vcpu init. A dependent use
+	 * case would be a user with disdain of all things big.LITTLE that
+	 * affines the VMM to a particular cluster of cores.
+	 *
+	 * In any case, userspace should just do the sane thing and use the UAPI
+	 * to select a PMU type directly. But, be wary of the baggage being
+	 * carried here.
+	 */
+	cpu = raw_smp_processor_id();
+	list_for_each_entry(entry, &arm_pmus, entry) {
+		pmu = entry->arm_pmu;
+
+		if (cpumask_test_cpu(cpu, &pmu->supported_cpus))
+			return pmu;
+	}
+
+	return NULL;
+}
+
+static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1)
+{
+	u32 hi[2], lo[2];
+
+	bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
+	bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
+
+	return ((u64)hi[pmceid1] << 32) | lo[pmceid1];
+}
+
+static u64 compute_pmceid0(struct arm_pmu *pmu)
+{
+	u64 val = __compute_pmceid(pmu, 0);
+
+	/* always support SW_INCR */
+	val |= BIT(ARMV8_PMUV3_PERFCTR_SW_INCR);
+	/* always support CHAIN */
+	val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
+	return val;
+}
+
+static u64 compute_pmceid1(struct arm_pmu *pmu)
+{
+	u64 val = __compute_pmceid(pmu, 1);
+
+	/*
+	 * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled
+	 * as RAZ
+	 */
+	val &= ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) |
+		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) |
+		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32));
+	return val;
+}
+
+u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
+{
+	struct arm_pmu *cpu_pmu = vcpu->kvm->arch.arm_pmu;
+	unsigned long *bmap = vcpu->kvm->arch.pmu_filter;
+	u64 val, mask = 0;
+	int base, i, nr_events;
+
+	if (!pmceid1) {
+		val = compute_pmceid0(cpu_pmu);
+		base = 0;
+	} else {
+		val = compute_pmceid1(cpu_pmu);
+		base = 32;
+	}
+
+	if (!bmap)
+		return val;
+
+	nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
+
+	for (i = 0; i < 32; i += 8) {
+		u64 byte;
+
+		byte = bitmap_get_value8(bmap, base + i);
+		mask |= byte << i;
+		if (nr_events >= (0x4000 + base + 32)) {
+			byte = bitmap_get_value8(bmap, 0x4000 + base + i);
+			mask |= byte << (32 + i);
+		}
+	}
+
+	return val & mask;
+}
+
+/*
+ * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
+ * to the event.
+ * This is why we need a callback to do it once outside of the NMI context.
+ */
+static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
+{
+	struct kvm_vcpu *vcpu;
+
+	vcpu = container_of(work, struct kvm_vcpu, arch.pmu.overflow_work);
+	kvm_vcpu_kick(vcpu);
+}
+
+static u32 __kvm_pmu_event_mask(unsigned int pmuver)
+{
+	switch (pmuver) {
+	case ID_AA64DFR0_EL1_PMUVer_IMP:
+		return GENMASK(9, 0);
+	case ID_AA64DFR0_EL1_PMUVer_V3P1:
+	case ID_AA64DFR0_EL1_PMUVer_V3P4:
+	case ID_AA64DFR0_EL1_PMUVer_V3P5:
+	case ID_AA64DFR0_EL1_PMUVer_V3P7:
+		return GENMASK(15, 0);
+	default:		/* Shouldn't be here, just for sanity */
+		WARN_ONCE(1, "Unknown PMU version %d\n", pmuver);
+		return 0;
+	}
+}
+
+u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
+	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
+
+	return __kvm_pmu_event_mask(pmuver);
+}
+
+u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
+{
+	u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
+		   kvm_pmu_event_mask(kvm);
+
+	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
+		mask |= ARMV8_PMU_INCLUDE_EL2;
+
+	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
+		mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
+			ARMV8_PMU_EXCLUDE_NS_EL1 |
+			ARMV8_PMU_EXCLUDE_EL3;
+
+	return mask;
+}
+
+static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	bool overflow;
+
+	overflow = kvm_pmu_overflow_status(vcpu);
+	if (pmu->irq_level == overflow)
+		return;
+
+	pmu->irq_level = overflow;
+
+	if (likely(irqchip_in_kernel(vcpu->kvm))) {
+		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
+					      pmu->irq_num, overflow, pmu);
+		WARN_ON(ret);
+	}
+}
+
+/**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Check if the PMU has overflowed while we were running in the host, and inject
+ * an interrupt if that was the case.
+ */
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+	kvm_pmu_update_state(vcpu);
+}
+
+/**
+ * kvm_pmu_sync_hwstate - sync pmu state from cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Check if the PMU has overflowed while we were running in the guest, and
+ * inject an interrupt if that was the case.
+ */
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
+{
+	kvm_pmu_update_state(vcpu);
+}
+
+int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
+{
+	if (!vcpu->arch.pmu.created)
+		return -EINVAL;
+
+	/*
+	 * A valid interrupt configuration for the PMU is either to have a
+	 * properly configured interrupt number and using an in-kernel
+	 * irqchip, or to not have an in-kernel GIC and not set an IRQ.
+	 */
+	if (irqchip_in_kernel(vcpu->kvm)) {
+		int irq = vcpu->arch.pmu.irq_num;
+		/*
+		 * If we are using an in-kernel vgic, at this point we know
+		 * the vgic will be initialized, so we can check the PMU irq
+		 * number against the dimensions of the vgic and make sure
+		 * it's valid.
+		 */
+		if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq))
+			return -EINVAL;
+	} else if (kvm_arm_pmu_irq_initialized(vcpu)) {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
+{
+	if (irqchip_in_kernel(vcpu->kvm)) {
+		int ret;
+
+		/*
+		 * If using the PMU with an in-kernel virtual GIC
+		 * implementation, we require the GIC to be already
+		 * initialized when initializing the PMU.
+		 */
+		if (!vgic_initialized(vcpu->kvm))
+			return -ENODEV;
+
+		if (!kvm_arm_pmu_irq_initialized(vcpu))
+			return -ENXIO;
+
+		ret = kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num,
+					 &vcpu->arch.pmu);
+		if (ret)
+			return ret;
+	}
+
+	init_irq_work(&vcpu->arch.pmu.overflow_work,
+		      kvm_pmu_perf_overflow_notify_vcpu);
+
+	vcpu->arch.pmu.created = true;
+	return 0;
+}
+
+/*
+ * For one VM the interrupt type must be same for each vcpu.
+ * As a PPI, the interrupt number is the same for all vcpus,
+ * while as an SPI it must be a separate number per vcpu.
+ */
+static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
+{
+	unsigned long i;
+	struct kvm_vcpu *vcpu;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!kvm_arm_pmu_irq_initialized(vcpu))
+			continue;
+
+		if (irq_is_ppi(irq)) {
+			if (vcpu->arch.pmu.irq_num != irq)
+				return false;
+		} else {
+			if (vcpu->arch.pmu.irq_num == irq)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+/**
+ * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters.
+ * @kvm: The kvm pointer
+ */
+u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
+{
+	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
+
+	/*
+	 * PMUv3 requires that all event counters are capable of counting any
+	 * event, though the same may not be true of non-PMUv3 hardware.
+	 */
+	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
+		return 1;
+
+	/*
+	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
+	 * Ignore those and return only the general-purpose counters.
+	 */
+	return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+}
+
+static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
+{
+	kvm->arch.nr_pmu_counters = nr;
+
+	/* Reset MDCR_EL2.HPMN behind the vcpus' back... */
+	if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) {
+		struct kvm_vcpu *vcpu;
+		unsigned long i;
+
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			u64 val = __vcpu_sys_reg(vcpu, MDCR_EL2);
+
+			val &= ~MDCR_EL2_HPMN;
+			val |= FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters);
+			__vcpu_assign_sys_reg(vcpu, MDCR_EL2, val);
+		}
+	}
+}
+
+static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	lockdep_assert_held(&kvm->arch.config_lock);
+
+	kvm->arch.arm_pmu = arm_pmu;
+	kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm));
+}
+
+/**
+ * kvm_arm_set_default_pmu - No PMU set, get the default one.
+ * @kvm: The kvm pointer
+ *
+ * The observant among you will notice that the supported_cpus
+ * mask does not get updated for the default PMU even though it
+ * is quite possible the selected instance supports only a
+ * subset of cores in the system. This is intentional, and
+ * upholds the preexisting behavior on heterogeneous systems
+ * where vCPUs can be scheduled on any core but the guest
+ * counters could stop working.
+ */ +int kvm_arm_set_default_pmu(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm_pmu_probe_armpmu(); + + if (!arm_pmu) + return -ENODEV; + + kvm_arm_set_pmu(kvm, arm_pmu); + return 0; +} + +static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) +{ + struct kvm *kvm =3D vcpu->kvm; + struct arm_pmu_entry *entry; + struct arm_pmu *arm_pmu; + int ret =3D -ENXIO; + + lockdep_assert_held(&kvm->arch.config_lock); + mutex_lock(&arm_pmus_lock); + + list_for_each_entry(entry, &arm_pmus, entry) { + arm_pmu =3D entry->arm_pmu; + if (arm_pmu->pmu.type =3D=3D pmu_id) { + if (kvm_vm_has_ran_once(kvm) || + (kvm->arch.pmu_filter && kvm->arch.arm_pmu !=3D arm_pmu)) { + ret =3D -EBUSY; + break; + } + + kvm_arm_set_pmu(kvm, arm_pmu); + cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); + ret =3D 0; + break; + } + } + + mutex_unlock(&arm_pmus_lock); + return ret; +} + +static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned = int n) +{ + struct kvm *kvm =3D vcpu->kvm; + + if (!kvm->arch.arm_pmu) + return -EINVAL; + + if (n > kvm_arm_pmu_get_max_counters(kvm)) + return -EINVAL; + + kvm_arm_set_nr_counters(kvm, n); + return 0; +} + +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + struct kvm *kvm =3D vcpu->kvm; + + lockdep_assert_held(&kvm->arch.config_lock); + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (vcpu->arch.pmu.created) + return -EBUSY; + + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(kvm)) + return -EINVAL; + + if (get_user(irq, uaddr)) + return -EFAULT; + + /* The PMU overflow interrupt can be a PPI or a valid SPI. */ + if (!(irq_is_ppi(irq) || irq_is_spi(irq))) + return -EINVAL; + + if (!pmu_irq_is_valid(kvm, irq)) + return -EINVAL; + + if (kvm_arm_pmu_irq_initialized(vcpu)) + return -EBUSY; + + kvm_debug("Set kvm ARM PMU irq: %d\n", irq); + vcpu->arch.pmu.irq_num =3D irq; + return 0; + } + case KVM_ARM_VCPU_PMU_V3_FILTER: { + u8 pmuver =3D kvm_arm_pmu_get_pmuver_limit(); + struct kvm_pmu_event_filter __user *uaddr; + struct kvm_pmu_event_filter filter; + int nr_events; + + /* + * Allow userspace to specify an event filter for the entire + * event range supported by PMUVer of the hardware, rather + * than the guest's PMUVer for KVM backward compatibility. + */ + nr_events =3D __kvm_pmu_event_mask(pmuver) + 1; + + uaddr =3D (struct kvm_pmu_event_filter __user *)(long)attr->addr; + + if (copy_from_user(&filter, uaddr, sizeof(filter))) + return -EFAULT; + + if (((u32)filter.base_event + filter.nevents) > nr_events || + (filter.action !=3D KVM_PMU_EVENT_ALLOW && + filter.action !=3D KVM_PMU_EVENT_DENY)) + return -EINVAL; + + if (kvm_vm_has_ran_once(kvm)) + return -EBUSY; + + if (!kvm->arch.pmu_filter) { + kvm->arch.pmu_filter =3D bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT); + if (!kvm->arch.pmu_filter) + return -ENOMEM; + + /* + * The default depends on the first applied filter. + * If it allows events, the default is to deny. + * Conversely, if the first filter denies a set of + * events, the default is to allow. 
+ */ + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_zero(kvm->arch.pmu_filter, nr_events); + else + bitmap_fill(kvm->arch.pmu_filter, nr_events); + } + + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + else + bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + + return 0; + } + case KVM_ARM_VCPU_PMU_V3_SET_PMU: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int pmu_id; + + if (get_user(pmu_id, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); + } + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { + unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; + unsigned int n; + + if (get_user(n, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); + } + case KVM_ARM_VCPU_PMU_V3_INIT: + return kvm_arm_pmu_v3_init(vcpu); + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(vcpu->kvm)) + return -EINVAL; + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + irq =3D vcpu->arch.pmu.irq_num; + return put_user(irq, uaddr); + } + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: + case KVM_ARM_VCPU_PMU_V3_INIT: + case KVM_ARM_VCPU_PMU_V3_FILTER: + case KVM_ARM_VCPU_PMU_V3_SET_PMU: + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: + if (kvm_vcpu_has_pmu(vcpu)) + return 0; + } + + return -ENXIO; +} + +u8 kvm_arm_pmu_get_pmuver_limit(void) +{ + unsigned int pmuver; + + pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, + read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); + + /* + * Spoof a barebones PMUv3 implementation if the system supports IMPDEF + * traps of the PMUv3 sysregs + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return ID_AA64DFR0_EL1_PMUVer_IMP; + + /* + * Otherwise, treat IMPLEMENTATION DEFINED functionality as + * unimplemented + */ + if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) + return 0; + + return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); +} + +u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu) +{ + u64 val =3D FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu)); + + if (val =3D=3D 0) + return BIT(ARMV8_PMU_CYCLE_IDX); + else + return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); +} + +u64 kvm_pmu_hyp_counter_mask(struct kvm_vcpu *vcpu) +{ + unsigned int hpmn, n; + + if (!vcpu_has_nv(vcpu)) + return 0; + + hpmn =3D SYS_FIELD_GET(MDCR_EL2, HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2)); + n =3D vcpu->kvm->arch.nr_pmu_counters; + + /* + * Programming HPMN to a value greater than PMCR_EL0.N is + * CONSTRAINED UNPREDICTABLE. Make the implementation choice that an + * UNKNOWN number of counters (in our case, zero) are reserved for EL2. + */ + if (hpmn >=3D n) + return 0; + + /* + * Programming HPMN=3D0 is CONSTRAINED UNPREDICTABLE if FEAT_HPMN0 isn't + * implemented. Since KVM's ability to emulate HPMN=3D0 does not directly + * depend on hardware (all PMU registers are trapped), make the + * implementation choice that all counters are included in the second + * range reserved for EL2/EL3. 
+ */
+ return GENMASK(n - 1, hpmn);
+}
+
+bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
+{
+ return kvm_pmu_hyp_counter_mask(vcpu) & BIT(idx);
+}
+
+u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
+{
+ u64 mask =3D kvm_pmu_implemented_counter_mask(vcpu);
+
+ if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
+ return mask;
+
+ return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
+}
+
+/**
+ * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
+ * @vcpu: The vcpu pointer
+ */
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+ u64 pmcr =3D __vcpu_sys_reg(vcpu, PMCR_EL0);
+ u64 n =3D vcpu->kvm->arch.nr_pmu_counters;
+
+ if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
+ n =3D FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
+
+ return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
+}
--=20
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:42 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-7-coltonlewis@google.com>
Subject: [PATCH v3 06/22] perf: arm_pmuv3: Introduce method to partition the PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

For PMUv3, the register field MDCR_EL2.HPMN partitions the PMU counters
into two ranges: counters 0..HPMN-1 are accessible by EL1 and, if
allowed, EL0, while counters HPMN..N-1 are only accessible by EL2.

Create the module parameter reserved_host_counters to reserve a number
of counters for the host. This number must be set at boot because the
perf subsystem assumes the number of counters will not change after the
PMU is probed.

Introduce the function armv8pmu_partition() to modify the PMU driver's
cntr_mask of available counters, excluding the counters being reserved
for the guest, and to record the resulting partition pivot in hpmn_max,
the maximum allowable value for HPMN.

Due to the difficulty this feature would create for the driver running
at EL1 on the host, partitioning is only allowed in VHE mode. Supporting
nVHE mode would require a hypercall for every counter access in the
driver because the counters reserved for the host by HPMN are only
accessible to EL2.
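For illustration, a minimal sketch of the partition arithmetic performed
by armv8pmu_partition() below, assuming a hypothetical PMU with
PMCR_EL0.N = 8 counters and reserved_host_counters = 2:

	int host_counters = 2;                  /* module parameter */
	u8 nr_counters = 8;                     /* PMCR_EL0.N */
	u8 hpmn = nr_counters - host_counters;  /* partition pivot: 6 */

	/* Guest keeps counters 0..5; the driver retains only 6..7. */
	bitmap_clear(pmu->cntr_mask, 0, hpmn);
	bitmap_set(pmu->cntr_mask, hpmn, host_counters);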
Signed-off-by: Colton Lewis --- arch/arm/include/asm/arm_pmuv3.h | 14 ++++++ arch/arm64/include/asm/arm_pmuv3.h | 5 ++ arch/arm64/include/asm/kvm_pmu.h | 6 +++ arch/arm64/kvm/Makefile | 2 +- arch/arm64/kvm/pmu-part.c | 23 ++++++++++ drivers/perf/arm_pmuv3.c | 74 +++++++++++++++++++++++++++++- include/linux/perf/arm_pmu.h | 1 + 7 files changed, 122 insertions(+), 3 deletions(-) create mode 100644 arch/arm64/kvm/pmu-part.c diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pm= uv3.h index 2ec0e5e83fc9..49b1f2d7842d 100644 --- a/arch/arm/include/asm/arm_pmuv3.h +++ b/arch/arm/include/asm/arm_pmuv3.h @@ -221,6 +221,10 @@ static inline bool kvm_pmu_counter_deferred(struct per= f_event_attr *attr) return false; } =20 +static inline bool kvm_pmu_partition_supported(void) +{ + return false; +} static inline bool kvm_set_pmuserenr(u64 val) { return false; @@ -228,6 +232,11 @@ static inline bool kvm_set_pmuserenr(u64 val) =20 static inline void kvm_vcpu_pmu_resync_el0(void) {} =20 +static inline bool has_vhe(void) +{ + return false; +} + /* PMU Version in DFR Register */ #define ARMV8_PMU_DFR_VER_NI 0 #define ARMV8_PMU_DFR_VER_V3P1 0x4 @@ -242,6 +251,11 @@ static inline bool pmuv3_implemented(int pmuver) pmuver =3D=3D ARMV8_PMU_DFR_VER_NI); } =20 +static inline bool is_pmuv3p1(int pmuver) +{ + return pmuver >=3D ARMV8_PMU_DFR_VER_V3P1; +} + static inline bool is_pmuv3p4(int pmuver) { return pmuver >=3D ARMV8_PMU_DFR_VER_V3P4; diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/ar= m_pmuv3.h index 32c003a7b810..e2057365ba73 100644 --- a/arch/arm64/include/asm/arm_pmuv3.h +++ b/arch/arm64/include/asm/arm_pmuv3.h @@ -173,6 +173,11 @@ static inline bool pmuv3_implemented(int pmuver) pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_NI); } =20 +static inline bool is_pmuv3p1(int pmuver) +{ + return pmuver >=3D ID_AA64DFR0_EL1_PMUVer_V3P1; +} + static inline bool is_pmuv3p4(int pmuver) { return pmuver >=3D ID_AA64DFR0_EL1_PMUVer_V3P4; diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_= pmu.h index 6c961e877804..8a2ed02e157d 100644 --- a/arch/arm64/include/asm/kvm_pmu.h +++ b/arch/arm64/include/asm/kvm_pmu.h @@ -46,6 +46,7 @@ struct arm_pmu_entry { }; =20 bool kvm_supports_guest_pmuv3(void); +bool kvm_pmu_partition_supported(void); #define kvm_arm_pmu_irq_initialized(v) ((v)->arch.pmu.irq_num >=3D VGIC_NR= _SGIS) u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx); void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 = val); @@ -115,6 +116,11 @@ static inline bool kvm_supports_guest_pmuv3(void) return false; } =20 +static inline bool kvm_pmu_partition_supported(void) +{ + return false; +} + #define kvm_arm_pmu_irq_initialized(v) (false) static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx) diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 86035b311269..3edbaa57bbf2 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -23,7 +23,7 @@ kvm-y +=3D arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.= o \ vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \ vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o =20 -kvm-$(CONFIG_HW_PERF_EVENTS) +=3D pmu-emul.o pmu.o +kvm-$(CONFIG_HW_PERF_EVENTS) +=3D pmu-emul.o pmu-part.o pmu.o kvm-$(CONFIG_ARM64_PTR_AUTH) +=3D pauth.o kvm-$(CONFIG_PTDUMP_STAGE2_DEBUGFS) +=3D ptdump.o =20 diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c new file mode 100644 index 000000000000..f04c580c19c0 --- /dev/null +++ 
b/arch/arm64/kvm/pmu-part.c @@ -0,0 +1,23 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2025 Google LLC + * Author: Colton Lewis + */ + +#include + +#include + +/** + * kvm_pmu_partition_supported() - Determine if partitioning is possible + * + * Partitioning is only supported in VHE mode (with PMUv3, assumed + * since we are in the PMUv3 driver) + * + * Return: True if partitioning is possible, false otherwise + */ +bool kvm_pmu_partition_supported(void) +{ + return kvm_supports_guest_pmuv3() && + has_vhe(); +} diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 3db9f4ed17e8..6358de6c9fab 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -35,6 +35,12 @@ #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS 0xEC #define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS 0xED =20 +static int reserved_host_counters __read_mostly =3D -1; + +module_param(reserved_host_counters, int, 0); +MODULE_PARM_DESC(reserved_host_counters, + "PMU Partition: -1 =3D No partition; +N =3D Reserve N counters for the = host"); + /* * ARMv8 Architectural defined events, not all of these may * be supported on any given implementation. Unsupported events will @@ -500,6 +506,11 @@ static void armv8pmu_pmcr_write(u64 val) write_pmcr(val); } =20 +static u64 armv8pmu_pmcr_n_read(void) +{ + return FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()); +} + static int armv8pmu_has_overflowed(u64 pmovsr) { return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK); @@ -1195,6 +1206,58 @@ struct armv8pmu_probe_info { bool present; }; =20 +/** + * armv8pmu_reservation_is_valid() - Determine if reservation is allowed + * @host_counters: Number of host counters to reserve + * + * Determine if the number of host counters in the argument is an + * allowed reservation, 0 to NR_COUNTERS inclusive. + * + * Return: True if reservation allowed, false otherwise + */ +static bool armv8pmu_reservation_is_valid(int host_counters) +{ + return host_counters >=3D 0 && + host_counters <=3D armv8pmu_pmcr_n_read(); +} + +/** + * armv8pmu_partition() - Partition the PMU + * @pmu: Pointer to pmu being partitioned + * @host_counters: Number of host counters to reserve + * + * Partition the given PMU by taking a number of host counters to + * reserve and, if it is a valid reservation, recording the + * corresponding HPMN value in the hpmn_max field of the PMU and + * clearing the guest-reserved counters from the counter mask. 
+ *
+ * Return: 0 on success, -ERROR otherwise
+ */
+static int armv8pmu_partition(struct arm_pmu *pmu, int host_counters)
+{
+ u8 nr_counters;
+ u8 hpmn;
+
+ if (!armv8pmu_reservation_is_valid(host_counters))
+ return -EINVAL;
+
+ nr_counters =3D armv8pmu_pmcr_n_read();
+ hpmn =3D nr_counters - host_counters;
+
+ pmu->hpmn_max =3D hpmn;
+
+ bitmap_clear(pmu->cntr_mask, 0, hpmn);
+ bitmap_set(pmu->cntr_mask, hpmn, host_counters);
+ clear_bit(ARMV8_PMU_CYCLE_IDX, pmu->cntr_mask);
+
+ if (pmuv3_has_icntr())
+ clear_bit(ARMV8_PMU_INSTR_IDX, pmu->cntr_mask);
+
+ pr_info("Partitioned PMU with HPMN %u", hpmn);
+
+ return 0;
+}
+
 static void __armv8pmu_probe_pmu(void *info)
 {
 struct armv8pmu_probe_info *probe =3D info;
@@ -1209,10 +1272,10 @@ static void __armv8pmu_probe_pmu(void *info)
=20
 cpu_pmu->pmuver =3D pmuver;
 probe->present =3D true;
+ cpu_pmu->hpmn_max =3D -1;
=20
 /* Read the nb of CNTx counters supported from PMNC */
- bitmap_set(cpu_pmu->cntr_mask,
- 0, FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()));
+ bitmap_set(cpu_pmu->cntr_mask, 0, armv8pmu_pmcr_n_read());
=20
 /* Add the CPU cycles counter */
 set_bit(ARMV8_PMU_CYCLE_IDX, cpu_pmu->cntr_mask);
@@ -1221,6 +1284,13 @@ static void __armv8pmu_probe_pmu(void *info)
 if (pmuv3_has_icntr())
 set_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask);
=20
+ if (reserved_host_counters >=3D 0) {
+ if (kvm_pmu_partition_supported())
+ WARN_ON(armv8pmu_partition(cpu_pmu, reserved_host_counters));
+ else
+ pr_err("PMU partition is not supported");
+ }
+
 pmceid[0] =3D pmceid_raw[0] =3D read_pmceid0();
 pmceid[1] =3D pmceid_raw[1] =3D read_pmceid1();
=20
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index fb382bcd4f4b..071cd2d1681c 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -122,6 +122,7 @@ struct arm_pmu {
=20
 /* Only to be used by ACPI probing code */
 unsigned long acpi_cpuid;
+ int hpmn_max; /* MDCR_EL2.HPMN: counter partition pivot */
 };
=20
 #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))
--=20
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:43 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-8-coltonlewis@google.com>
Subject: [PATCH v3 07/22] perf: arm_pmuv3: Generalize counter bitmasks
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

The OVSR bitmasks are valid for the enable and interrupt registers as
well as the overflow registers. Generalize the names.
Signed-off-by: Colton Lewis
Acked-by: Mark Rutland
---
 drivers/perf/arm_pmuv3.c | 4 ++--
 include/linux/perf/arm_pmuv3.h | 14 +++++++-------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 6358de6c9fab..3bc016afea34 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -513,7 +513,7 @@ static u64 armv8pmu_pmcr_n_read(void)
=20
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
- return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
+ return !!(pmovsr & ARMV8_PMU_CNT_MASK_ALL);
 }
=20
 static int armv8pmu_counter_has_overflowed(u64 pmnc, int idx)
@@ -749,7 +749,7 @@ static u64 armv8pmu_getreset_flags(void)
 value =3D read_pmovsclr();
=20
 /* Write to clear flags */
- value &=3D ARMV8_PMU_OVERFLOWED_MASK;
+ value &=3D ARMV8_PMU_CNT_MASK_ALL;
 write_pmovsclr(value);
=20
 return value;
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index d698efba28a2..fd2a34b4a64d 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -224,14 +224,14 @@
 ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP)
=20
 /*
- * PMOVSR: counters overflow flag status reg
+ * Counter bitmask layouts for overflow, enable, and interrupts
 */
-#define ARMV8_PMU_OVSR_P GENMASK(30, 0)
-#define ARMV8_PMU_OVSR_C BIT(31)
-#define ARMV8_PMU_OVSR_F BIT_ULL(32) /* arm64 only */
-/* Mask for writable bits is both P and C fields */
-#define ARMV8_PMU_OVERFLOWED_MASK (ARMV8_PMU_OVSR_P | ARMV8_PMU_OVSR_C | \
- ARMV8_PMU_OVSR_F)
+#define ARMV8_PMU_CNT_MASK_P GENMASK(30, 0)
+#define ARMV8_PMU_CNT_MASK_C BIT(31)
+#define ARMV8_PMU_CNT_MASK_F BIT_ULL(32) /* arm64 only */
+#define ARMV8_PMU_CNT_MASK_ALL (ARMV8_PMU_CNT_MASK_P | \
+ ARMV8_PMU_CNT_MASK_C | \
+ ARMV8_PMU_CNT_MASK_F)
=20
 /*
 * PMXEVTYPER: Event selection reg
--=20
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:44 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-9-coltonlewis@google.com>
Subject: [PATCH v3 08/22] perf: arm_pmuv3: Keep out of guest counter partition
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

If the PMU is partitioned, keep the driver out of the guest counter
partition and only use the host counter partition. Partitioning is
defined by the MDCR_EL2.HPMN register field, and the maximum value KVM
can use is saved in cpu_pmu->hpmn_max.
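As a hedged sketch of the resulting split, formalized by the mask
helpers added below (here hpmn and nr_counters stand in for
MDCR_EL2.HPMN and PMCR_EL0.N):

	/* Host-reserved range: counters HPMN..N-1. */
	u64 host_mask = GENMASK(nr_counters - 1, hpmn);

	/* Guest range: 0..HPMN-1 plus the cycle/instruction counters. */
	u64 guest_mask = ARMV8_PMU_CNT_MASK_ALL & ~host_mask;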
The range 0..HPMN-1 is accessible by EL1 and EL0 while HPMN..PMCR.N is reserved for EL2. Define some functions that take HPMN as an argument and construct mutually exclusive bitmaps for testing which partition a particular counter is in. Note that despite their different position in the bitmap, the cycle and instruction counters are always in the guest partition. Signed-off-by: Colton Lewis --- arch/arm/include/asm/arm_pmuv3.h | 18 +++++++ arch/arm64/include/asm/kvm_pmu.h | 24 +++++++++ arch/arm64/kvm/pmu-part.c | 84 ++++++++++++++++++++++++++++++++ drivers/perf/arm_pmuv3.c | 36 ++++++++++++-- 4 files changed, 158 insertions(+), 4 deletions(-) diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pm= uv3.h index 49b1f2d7842d..5f6269039f44 100644 --- a/arch/arm/include/asm/arm_pmuv3.h +++ b/arch/arm/include/asm/arm_pmuv3.h @@ -231,6 +231,24 @@ static inline bool kvm_set_pmuserenr(u64 val) } =20 static inline void kvm_vcpu_pmu_resync_el0(void) {} +static inline void kvm_pmu_host_counters_enable(void) {} +static inline void kvm_pmu_host_counters_disable(void) {} + +static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) +{ + return false; +} + +static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu) +{ + return ~0; +} + +static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu) +{ + return ~0; +} + =20 static inline bool has_vhe(void) { diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_= pmu.h index 8a2ed02e157d..6328e90952ba 100644 --- a/arch/arm64/include/asm/kvm_pmu.h +++ b/arch/arm64/include/asm/kvm_pmu.h @@ -88,6 +88,12 @@ void kvm_vcpu_pmu_resync_el0(void); #define kvm_vcpu_has_pmu(vcpu) \ (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3)) =20 +bool kvm_pmu_is_partitioned(struct arm_pmu *pmu); +u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu); +u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu); +void kvm_pmu_host_counters_enable(void); +void kvm_pmu_host_counters_disable(void); + /* * Updates the vcpu's view of the pmu events for this cpu. * Must be called before every vcpu run after disabling interrupts, to ens= ure @@ -220,6 +226,24 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_v= cpu *vcpu, unsigned int id =20 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {} =20 +static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) +{ + return false; +} + +static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu) +{ + return ~0; +} + +static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu) +{ + return ~0; +} + +static inline void kvm_pmu_host_counters_enable(void) {} +static inline void kvm_pmu_host_counters_disable(void) {} + #endif =20 #endif diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c index f04c580c19c0..4f06a48175e2 100644 --- a/arch/arm64/kvm/pmu-part.c +++ b/arch/arm64/kvm/pmu-part.c @@ -5,7 +5,10 @@ */ =20 #include +#include +#include =20 +#include #include =20 /** @@ -21,3 +24,84 @@ bool kvm_pmu_partition_supported(void) return kvm_supports_guest_pmuv3() && has_vhe(); } + +/** + * kvm_pmu_is_partitioned() - Determine if given PMU is partitioned + * @pmu: Pointer to arm_pmu struct + * + * Determine if given PMU is partitioned by looking at hpmn field. The + * PMU is partitioned if this field is less than the number of + * counters in the system. 
+ * + * Return: True if the PMU is partitioned, false otherwise + */ +bool kvm_pmu_is_partitioned(struct arm_pmu *pmu) +{ + return pmu->hpmn_max >=3D 0 && + pmu->hpmn_max <=3D *host_data_ptr(nr_event_counters); +} + +/** + * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters + * @pmu: Pointer to arm_pmu struct + * + * Compute the bitmask that selects the host-reserved counters in the + * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters + * in HPMN..N + * + * Assumes pmu is partitioned and hpmn_max is a valid value. + * + * Return: Bitmask + */ +u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu) +{ + u8 nr_counters =3D *host_data_ptr(nr_event_counters); + + return GENMASK(nr_counters - 1, pmu->hpmn_max); +} + +/** + * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counte= rs + * + * Compute the bitmask that selects the guest-reserved counters in the + * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters + * in 0..HPMN and the cycle and instruction counters. + * + * Assumes pmu is partitioned and hpmn_max is a valid value. + * + * Return: Bitmask + */ +u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu) +{ + return ARMV8_PMU_CNT_MASK_ALL & ~kvm_pmu_host_counter_mask(pmu); +} + +/** + * kvm_pmu_host_counters_enable() - Enable host-reserved counters + * + * When partitioned the enable bit for host-reserved counters is + * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now + * exclusively controls the guest-reserved counters. Enable that bit. + */ +void kvm_pmu_host_counters_enable(void) +{ + u64 mdcr =3D read_sysreg(mdcr_el2); + + mdcr |=3D MDCR_EL2_HPME; + write_sysreg(mdcr, mdcr_el2); +} + +/** + * kvm_pmu_host_counters_disable() - Disable host-reserved counters + * + * When partitioned the disable bit for host-reserved counters is + * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now + * exclusively controls the guest-reserved counters. Disable that bit. + */ +void kvm_pmu_host_counters_disable(void) +{ + u64 mdcr =3D read_sysreg(mdcr_el2); + + mdcr &=3D ~MDCR_EL2_HPME; + write_sysreg(mdcr, mdcr_el2); +} diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 3bc016afea34..5836672a3dd9 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -834,12 +834,18 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu) kvm_vcpu_pmu_resync_el0(); =20 /* Enable all counters */ + if (kvm_pmu_is_partitioned(cpu_pmu)) + kvm_pmu_host_counters_enable(); + armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E); } =20 static void armv8pmu_stop(struct arm_pmu *cpu_pmu) { /* Disable all counters */ + if (kvm_pmu_is_partitioned(cpu_pmu)) + kvm_pmu_host_counters_disable(); + armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E); } =20 @@ -949,6 +955,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events = *cpuc, =20 /* Always prefer to place a cycle counter into the cycle counter. */ if ((evtype =3D=3D ARMV8_PMUV3_PERFCTR_CPU_CYCLES) && + !kvm_pmu_is_partitioned(cpu_pmu) && !armv8pmu_event_get_threshold(&event->attr)) { if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask)) return ARMV8_PMU_CYCLE_IDX; @@ -964,6 +971,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events = *cpuc, * may not know how to handle it. 
*/ if ((evtype =3D=3D ARMV8_PMUV3_PERFCTR_INST_RETIRED) && + !kvm_pmu_is_partitioned(cpu_pmu) && !armv8pmu_event_get_threshold(&event->attr) && test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) && !armv8pmu_event_want_user_access(event)) { @@ -975,7 +983,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events = *cpuc, * Otherwise use events counters */ if (armv8pmu_event_is_chained(event)) - return armv8pmu_get_chain_idx(cpuc, cpu_pmu); + return armv8pmu_get_chain_idx(cpuc, cpu_pmu); else return armv8pmu_get_single_idx(cpuc, cpu_pmu); } @@ -1067,6 +1075,14 @@ static int armv8pmu_set_event_filter(struct hw_perf_= event *event, return 0; } =20 +static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu) +{ + int idx; + + for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS) + armv8pmu_write_evcntr(idx, 0); +} + static void armv8pmu_reset(void *info) { struct arm_pmu *cpu_pmu =3D (struct arm_pmu *)info; @@ -1074,6 +1090,9 @@ static void armv8pmu_reset(void *info) =20 bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS); =20 + if (kvm_pmu_is_partitioned(cpu_pmu)) + mask &=3D kvm_pmu_host_counter_mask(cpu_pmu); + /* The counter and interrupt enable registers are unknown at reset. */ armv8pmu_disable_counter(mask); armv8pmu_disable_intens(mask); @@ -1081,11 +1100,20 @@ static void armv8pmu_reset(void *info) /* Clear the counters we flip at guest entry/exit */ kvm_clr_pmu_events(mask); =20 + + pmcr =3D ARMV8_PMU_PMCR_LC; + /* - * Initialize & Reset PMNC. Request overflow interrupt for - * 64 bit cycle counter but cheat in armv8pmu_write_counter(). + * Initialize & Reset PMNC. Request overflow interrupt for 64 + * bit cycle counter but cheat in armv8pmu_write_counter(). + * + * When partitioned, there is no single bit to reset only the + * host counters. so reset them individually. 
 */
- pmcr =3D ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+ if (kvm_pmu_is_partitioned(cpu_pmu))
+ armv8pmu_reset_host_counters(cpu_pmu);
+ else
+ pmcr =3D ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
=20
 /* Enable long event counter support where available */
 if (armv8pmu_has_long_event(cpu_pmu))
--=20
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:45 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Mime-Version: 1.0
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-10-coltonlewis@google.com>
Subject: [PATCH v3 09/22] KVM: arm64: Correct kvm_arm_pmu_get_max_counters()
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Since cntr_mask is modified when the PMU is partitioned (the
guest-reserved bits are removed), add those counters back so the
function returns the right total.

Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 79b7ea037153..67216451b8ce 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -533,6 +533,8 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
 {
 struct arm_pmu *arm_pmu =3D kvm->arch.arm_pmu;
+ u8 counters;
+
=20
 /*
 * PMUv3 requires that all event counters are capable of counting any
@@ -545,7 +547,12 @@ u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
 * Ignore those and return only the general-purpose counters.
 */
- return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+ counters =3D bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+ if (kvm_pmu_is_partitioned(arm_pmu))
+ counters +=3D arm_pmu->hpmn_max;
+
+ return counters;
 }
=20
 static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
--=20
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
In order to gain the best performance benefit from partitioning the PMU,
utilize fine grain traps (FEAT_FGT and FEAT_FGT2) so that common PMU
register accesses by the guest are no longer trapped, removing that
overhead. There should be no information leaks between guests as all
these registers are context swapped by a later patch in this series.

Untrapped:
* PMCR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCCNTR_EL0
* PMINTEN_EL1
* PMEVCNTRn_EL0

Trapped:
* PMOVS_EL0
* PMEVTYPERn_EL0
* PMCCFILTR_EL0
* PMICNTR_EL0
* PMICFILTR_EL0

PMOVS remains trapped so KVM can track overflow IRQs that will need to
be injected into the guest. PMICNTR remains trapped because KVM is not
handling that yet. PMEVTYPERn remains trapped so KVM can limit which
events guests can count, such as disallowing counting at EL2. PMCCFILTR
and PMICFILTR remain trapped for the same reason.
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/kvm_pmu.h        | 23 ++++++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 58 +++++++++++++++++++++++++
 arch/arm64/kvm/pmu-part.c               | 32 ++++++++++++++
 3 files changed, 113 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 6328e90952ba..73b7161e3f4e 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -94,6 +94,21 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+#if !defined(__KVM_NVHE_HYPERVISOR__)
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
+#else
+static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
+static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+#endif
+
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
  * Must be called before every vcpu run after disabling interrupts, to ensure
@@ -133,6 +148,14 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 {
 	return 0;
 }
+static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 825b81749972..47d2db8446df 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -191,6 +191,61 @@ static inline bool cpu_has_amu(void)
 		ID_AA64PFR0_EL1_AMU_SHIFT);
 }
 
+/**
+ * __activate_pmu_fgt() - Activate fine grain traps for partitioned PMU
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Clear the trap bits for the most commonly accessed registers for a
+ * partitioned PMU. Trap the rest.
+ */
+static inline void __activate_pmu_fgt(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+	u64 set;
+	u64 clr;
+
+	set = HDFGRTR_EL2_PMOVS
+		| HDFGRTR_EL2_PMCCFILTR_EL0
+		| HDFGRTR_EL2_PMEVTYPERn_EL0;
+	clr = HDFGRTR_EL2_PMUSERENR_EL0
+		| HDFGRTR_EL2_PMSELR_EL0
+		| HDFGRTR_EL2_PMINTEN
+		| HDFGRTR_EL2_PMCNTEN
+		| HDFGRTR_EL2_PMCCNTR_EL0
+		| HDFGRTR_EL2_PMEVCNTRn_EL0;
+
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGRTR_EL2, clr, set);
+
+	set = HDFGWTR_EL2_PMOVS
+		| HDFGWTR_EL2_PMCCFILTR_EL0
+		| HDFGWTR_EL2_PMEVTYPERn_EL0;
+	clr = HDFGWTR_EL2_PMUSERENR_EL0
+		| HDFGWTR_EL2_PMCR_EL0
+		| HDFGWTR_EL2_PMSELR_EL0
+		| HDFGWTR_EL2_PMINTEN
+		| HDFGWTR_EL2_PMCNTEN
+		| HDFGWTR_EL2_PMCCNTR_EL0
+		| HDFGWTR_EL2_PMEVCNTRn_EL0;
+
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGWTR_EL2, clr, set);
+
+	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
+		return;
+
+	set = HDFGRTR2_EL2_nPMICFILTR_EL0
+		| HDFGRTR2_EL2_nPMICNTR_EL0;
+	clr = 0;
+
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGRTR2_EL2, clr, set);
+
+	set = HDFGWTR2_EL2_nPMICFILTR_EL0
+		| HDFGWTR2_EL2_nPMICNTR_EL0;
+	clr = 0;
+
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGWTR2_EL2, clr, set);
+}
+
 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
@@ -210,6 +265,9 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	if (cpu_has_amu())
 		update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
 
+	if (kvm_vcpu_pmu_use_fgt(vcpu))
+		__activate_pmu_fgt(vcpu);
+
 	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
 		return;
 
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 4f06a48175e2..92775e19cbf6 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -41,6 +41,38 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
 		pmu->hpmn_max <= *host_data_ptr(nr_event_counters);
 }
 
+/**
+ * kvm_vcpu_pmu_is_partitioned() - Determine if given VCPU has a partitioned PMU
+ * @vcpu: Pointer to kvm_vcpu struct
+ *
+ * Determine if given VCPU has a partitioned PMU by extracting that
+ * field and passing it to :c:func:`kvm_pmu_is_partitioned`
+ *
+ * Return: True if the VCPU PMU is partitioned, false otherwise
+ */
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu);
+}
+
+/**
+ * kvm_vcpu_pmu_use_fgt() - Determine if we can use FGT
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if we can use FGT for direct access to registers. We can
+ * if capabilities permit the number of guest counters requested.
+ *
+ * Return: True if we can use FGT, false otherwise
+ */
+bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
+{
+	u8 hpmn = vcpu->kvm->arch.nr_pmu_counters;
+
+	return kvm_vcpu_pmu_is_partitioned(vcpu) &&
+		cpus_have_final_cap(ARM64_HAS_FGT) &&
+		(hpmn != 0 || cpus_have_final_cap(ARM64_HAS_HPMN0));
+}
+
 /**
  * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
  * @pmu: Pointer to arm_pmu struct
-- 
2.50.0.727.gbf7dc18ff4-goog
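As a rough sketch of the intended net effect on HDFGRTR_EL2 (positive trap
polarity: bit set means trap), ignoring the per-VM trap state the real
update_fgt_traps_cs() helper also merges in:

	/* Sketch only, not the kernel helper */
	u64 hdfgrtr = read_sysreg_s(SYS_HDFGRTR_EL2);

	hdfgrtr |= set;		/* keep trapping PMOVS, PMCCFILTR, PMEVTYPERn */
	hdfgrtr &= ~clr;	/* stop trapping counter/select/enable accesses */
	write_sysreg_s(hdfgrtr, SYS_HDFGRTR_EL2);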
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:47 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-12-coltonlewis@google.com>
Subject: [PATCH v3 11/22] KVM: arm64: Writethrough trapped PMEVTYPER register
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>

With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do this means delaying when guest writes take effect.
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/sys_regs.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index eaff6d63ef77..49e8e3dcd306 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -18,6 +18,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 
 #include <...>
@@ -1037,6 +1038,30 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+				   u64 reg, u64 idx)
+{
+	u64 eventsel;
+
+	if (idx == ARMV8_PMU_CYCLE_IDX)
+		eventsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
+	else
+		eventsel = p->regval & kvm_pmu_evtyper_mask(vcpu->kvm);
+
+	if (vcpu->kvm->arch.pmu_filter &&
+	    !test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
+		return false;
+
+	__vcpu_assign_sys_reg(vcpu, reg, eventsel);
+
+	if (idx == ARMV8_PMU_CYCLE_IDX)
+		write_pmccfiltr(eventsel);
+	else
+		write_pmevtypern(idx, eventsel);
+
+	return true;
+}
+
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			       const struct sys_reg_desc *r)
 {
@@ -1063,7 +1088,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!pmu_counter_idx_valid(vcpu, idx))
 		return false;
 
-	if (p->is_write) {
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+		writethrough_pmevtyper(vcpu, p, reg, idx);
+	} else if (p->is_write) {
 		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
-- 
2.50.0.727.gbf7dc18ff4-goog

(The include targets at the top of the sys_regs.c hunk were lost in this
copy of the archive and are left as <...> rather than guessed.)
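To make the ordering concrete, a sketch of the flow for a trapped guest
write with the new writethrough (counter index 3 is hypothetical):

	/*
	 * guest:  msr pmevtyper3_el0, x0          -> traps to KVM
	 * KVM:    access_pmu_evtyper()
	 *           -> writethrough_pmevtyper(vcpu, p, PMEVTYPER0_EL0 + 3, 3)
	 *                __vcpu_assign_sys_reg(vcpu, PMEVTYPER0_EL0 + 3, eventsel);
	 *                write_pmevtypern(3, eventsel);
	 *                        ^ hardware is updated immediately, not at
	 *                          the next time KVM reloads guest PMU state
	 */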
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:48 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-13-coltonlewis@google.com>
Subject: [PATCH v3 12/22] KVM: arm64: Use physical PMSELR for PMXEVTYPER if partitioned
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>

Because PMXEVTYPER is trapped and PMSELR is not, it is not appropriate
to use the virtual PMSELR register when it could be outdated and lead
to an invalid write. Use the physical register.
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/arm_pmuv3.h | 7 ++++++-
 arch/arm64/kvm/sys_regs.c          | 9 +++++++--
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index e2057365ba73..1880e426a559 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -72,11 +72,16 @@ static inline u64 read_pmcr(void)
 	return read_sysreg(pmcr_el0);
 }
 
-static inline void write_pmselr(u32 val)
+static inline void write_pmselr(u64 val)
 {
 	write_sysreg(val, pmselr_el0);
 }
 
+static inline u64 read_pmselr(void)
+{
+	return read_sysreg(pmselr_el0);
+}
+
 static inline void write_pmccntr(u64 val)
 {
 	write_sysreg(val, pmccntr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 49e8e3dcd306..771d73451b9a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1065,14 +1065,19 @@ static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			       const struct sys_reg_desc *r)
 {
-	u64 idx, reg;
+	u64 idx, reg, pmselr;
 
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
 	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
 		/* PMXEVTYPER_EL0 */
-		idx = SYS_FIELD_GET(PMSELR_EL0, SEL, __vcpu_sys_reg(vcpu, PMSELR_EL0));
+		if (kvm_vcpu_pmu_is_partitioned(vcpu))
+			pmselr = read_pmselr();
+		else
+			pmselr = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+
+		idx = SYS_FIELD_GET(PMSELR_EL0, SEL, pmselr);
 		reg = PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
-- 
2.50.0.727.gbf7dc18ff4-goog
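A sketch of the stale-copy hazard this avoids, with a hypothetical select
value of 5:

	/*
	 * guest: msr pmselr_el0, #5       -> untrapped, reaches hardware only
	 * guest: msr pmxevtyper_el0, x0   -> trapped into KVM
	 *
	 * read_pmselr() returns SEL == 5; the virtual copy in
	 * __vcpu_sys_reg(vcpu, PMSELR_EL0) may still hold an older value,
	 * so indexing PMEVTYPER0_EL0 + idx with it could target the wrong
	 * counter's event type register.
	 */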
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:49 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-14-coltonlewis@google.com>
Subject: [PATCH v3 13/22] KVM: arm64: Writethrough trapped PMOVS register
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>

With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do this means delaying when guest writes take effect.
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/arm_pmuv3.h | 10 ++++++++++
 arch/arm64/kvm/sys_regs.c          | 17 ++++++++++++++++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 1880e426a559..3bddde5f4ebb 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -142,6 +142,16 @@ static inline u64 read_pmicfiltr(void)
 	return read_sysreg_s(SYS_PMICFILTR_EL0);
 }
 
+static inline void write_pmovsset(u64 val)
+{
+	write_sysreg(val, pmovsset_el0);
+}
+
+static inline u64 read_pmovsset(void)
+{
+	return read_sysreg(pmovsset_el0);
+}
+
 static inline void write_pmovsclr(u64 val)
 {
 	write_sysreg(val, pmovsclr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 771d73451b9a..cfbce4537b4c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1173,6 +1173,19 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static void writethrough_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p, bool set)
+{
+	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+	if (set) {
+		__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask));
+		write_pmovsset(p->regval & mask);
+	} else {
+		__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, ~(p->regval & mask));
+		write_pmovsclr(p->regval & mask);
+	}
+}
+
 static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			 const struct sys_reg_desc *r)
 {
@@ -1181,7 +1194,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
-	if (p->is_write) {
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+		writethrough_pmovs(vcpu, p, r->CRm & 0x2);
+	} else if (p->is_write) {
 		if (r->CRm & 0x2)
 			/* accessing PMOVSSET_EL0 */
 			__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask));
-- 
2.50.0.727.gbf7dc18ff4-goog
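A worked example of the set/clear split, assuming four accessible counters
(mask == 0xf):

	/*
	 * guest writes 0x5 to PMOVSSET_EL0:
	 *     virtual PMOVSSET_EL0 |= 0x5;  write_pmovsset(0x5);
	 * guest then writes 0x1 to PMOVSCLR_EL0:
	 *     virtual PMOVSSET_EL0 &= ~0x1; write_pmovsclr(0x1);
	 *
	 * Both the virtual and the physical overflow status now read 0x4,
	 * so KVM's view of pending overflow IRQs stays in sync with hardware.
	 */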
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:50 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-15-coltonlewis@google.com>
Subject: [PATCH v3 14/22] KVM: arm64: Write fast path PMU register handlers
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>

We may want a partitioned PMU but not have FEAT_FGT to untrap the
specific registers that would normally be untrapped. Add a handler for
those registers in the fast path so we can still get a performance
boost from partitioning.
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/arm_pmuv3.h      |  37 ++++-
 arch/arm64/include/asm/kvm_pmu.h        |  10 ++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 174 ++++++++++++++++++++++++
 arch/arm64/kvm/pmu.c                    |  16 +++
 arch/arm64/kvm/sys_regs.c               |  16 ---
 5 files changed, 236 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 3bddde5f4ebb..12004fd04018 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -41,6 +41,16 @@ static inline unsigned long read_pmevtypern(int n)
 	return 0;
 }
 
+static inline void write_pmxevcntr(u64 val)
+{
+	write_sysreg(val, pmxevcntr_el0);
+}
+
+static inline u64 read_pmxevcntr(void)
+{
+	return read_sysreg(pmxevcntr_el0);
+}
+
 static inline unsigned long read_pmmir(void)
 {
 	return read_cpuid(PMMIR_EL1);
@@ -107,21 +117,41 @@ static inline void write_pmcntenset(u64 val)
 	write_sysreg(val, pmcntenset_el0);
 }
 
+static inline u64 read_pmcntenset(void)
+{
+	return read_sysreg(pmcntenset_el0);
+}
+
 static inline void write_pmcntenclr(u64 val)
 {
 	write_sysreg(val, pmcntenclr_el0);
 }
 
+static inline u64 read_pmcntenclr(void)
+{
+	return read_sysreg(pmcntenclr_el0);
+}
+
 static inline void write_pmintenset(u64 val)
 {
 	write_sysreg(val, pmintenset_el1);
}
 
+static inline u64 read_pmintenset(void)
+{
+	return read_sysreg(pmintenset_el1);
+}
+
 static inline void write_pmintenclr(u64 val)
 {
 	write_sysreg(val, pmintenclr_el1);
 }
 
+static inline u64 read_pmintenclr(void)
+{
+	return read_sysreg(pmintenclr_el1);
+}
+
 static inline void write_pmccfiltr(u64 val)
 {
 	write_sysreg(val, pmccfiltr_el0);
@@ -162,11 +192,16 @@ static inline u64 read_pmovsclr(void)
 	return read_sysreg(pmovsclr_el0);
 }
 
-static inline void write_pmuserenr(u32 val)
+static inline void write_pmuserenr(u64 val)
 {
 	write_sysreg(val, pmuserenr_el0);
 }
 
+static inline u64 read_pmuserenr(void)
+{
+	return read_sysreg(pmuserenr_el0);
+}
+
 static inline void write_pmuacr(u64 val)
 {
 	write_sysreg_s(val, SYS_PMUACR_EL1);
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 73b7161e3f4e..62c8032a548f 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -81,6 +81,8 @@ struct kvm_pmu_events *kvm_get_pmu_events(void);
 void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
 void kvm_clr_pmu_events(u64 clr);
 bool kvm_set_pmuserenr(u64 val);
+bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags);
+bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
@@ -214,6 +216,14 @@ static inline bool kvm_set_pmuserenr(u64 val)
 {
 	return false;
 }
+static inline bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
+{
+	return false;
+}
+static inline bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 47d2db8446df..4920b7da9ce8 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -25,12 +25,14 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 #include <...>
 #include <...>
 #include <...>
 #include <...>
 
+#include <../../sys_regs.h>
 #include "arm_psci.h"
 
 struct kvm_exception_table_entry {
@@ -782,6 +784,175 @@ static bool handle_ampere1_tcr(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+/**
+ * handle_pmu_reg() - Handle fast access to most PMU regs
+ * @vcpu: Pointer to kvm_vcpu struct
+ * @p: System register parameters (read/write, Op0, Op1, CRm, CRn, Op2)
+ * @reg: VCPU register identifier
+ * @rt: Target general register
+ * @val: Value to write
+ * @readfn: Sysreg read function
+ * @writefn: Sysreg write function
+ *
+ * Handle fast access to most PMU regs. Writethrough to the physical
+ * register. This function is a wrapper for the simplest case, but
+ * sadly there aren't many of those.
+ *
+ * Always return true. The boolean makes usage more consistent with
+ * similar functions.
+ *
+ * Return: True
+ */
+static bool handle_pmu_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			   enum vcpu_sysreg reg, u8 rt, u64 val,
+			   u64 (*readfn)(void), void (*writefn)(u64))
+{
+	if (p->is_write) {
+		__vcpu_assign_sys_reg(vcpu, reg, val);
+		writefn(val);
+	} else {
+		vcpu_set_reg(vcpu, rt, readfn());
+	}
+
+	return true;
+}
+
+/**
+ * kvm_hyp_handle_pmu_regs() - Fast handler for PMU registers
+ * @vcpu: Pointer to vcpu struct
+ *
+ * This handler immediately writes through certain PMU registers when
+ * we have a partitioned PMU (that is, MDCR_EL2.HPMN is set to reserve
+ * a range of counters for the guest) but the machine does not have
+ * FEAT_FGT to selectively untrap the registers we want.
+ *
+ * Return: True if the exception was successfully handled, false otherwise
+ */
+static bool kvm_hyp_handle_pmu_regs(struct kvm_vcpu *vcpu)
+{
+	struct sys_reg_params p;
+	u64 esr;
+	u32 sysreg;
+	u8 rt;
+	u64 val;
+	u8 idx;
+	bool ret;
+
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu)
+	    || pmu_access_el0_disabled(vcpu))
+		return false;
+
+	esr = kvm_vcpu_get_esr(vcpu);
+	p = esr_sys64_to_params(esr);
+	sysreg = esr_sys64_to_sysreg(esr);
+	rt = kvm_vcpu_sys_get_rt(vcpu);
+	val = vcpu_get_reg(vcpu, rt);
+
+	switch (sysreg) {
+	case SYS_PMCR_EL0:
+		val &= ARMV8_PMU_PMCR_MASK;
+
+		if (p.is_write) {
+			write_pmcr(val);
+			__vcpu_assign_sys_reg(vcpu, PMCR_EL0, read_pmcr());
+		} else {
+			val = u64_replace_bits(
+				read_pmcr(),
+				vcpu->kvm->arch.nr_pmu_counters,
+				ARMV8_PMU_PMCR_N);
+			vcpu_set_reg(vcpu, rt, val);
+		}
+
+		ret = true;
+		break;
+	case SYS_PMUSERENR_EL0:
+		val &= ARMV8_PMU_USERENR_MASK;
+		ret = handle_pmu_reg(vcpu, &p, PMUSERENR_EL0, rt, val,
+				     &read_pmuserenr, &write_pmuserenr);
+		break;
+	case SYS_PMSELR_EL0:
+		val &= PMSELR_EL0_SEL_MASK;
+		ret = handle_pmu_reg(vcpu, &p, PMSELR_EL0, rt, val,
+				     &read_pmselr, &write_pmselr);
+		break;
+	case SYS_PMINTENCLR_EL1:
+		val &= kvm_pmu_accessible_counter_mask(vcpu);
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, ~val);
+			write_pmintenclr(val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmintenclr());
+		}
+		ret = true;
+		break;
+	case SYS_PMINTENSET_EL1:
+		val &= kvm_pmu_accessible_counter_mask(vcpu);
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=, val);
+			write_pmintenset(val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmintenset());
+		}
+		ret = true;
+		break;
+	case SYS_PMCNTENCLR_EL0:
+		val &= kvm_pmu_accessible_counter_mask(vcpu);
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, ~val);
+			write_pmcntenclr(val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmcntenclr());
+		}
+		ret = true;
+		break;
+	case SYS_PMCNTENSET_EL0:
+		val &= kvm_pmu_accessible_counter_mask(vcpu);
+		if (p.is_write) {
+			__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=, val);
+			write_pmcntenset(val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmcntenset());
+		}
+		ret = true;
+		break;
+	case SYS_PMCCNTR_EL0:
+		ret = handle_pmu_reg(vcpu, &p, PMCCNTR_EL0, rt, val,
+				     &read_pmccntr, &write_pmccntr);
+		break;
+	case SYS_PMXEVCNTR_EL0:
+		idx = FIELD_GET(PMSELR_EL0_SEL, read_pmselr());
+
+		if (idx >= vcpu->kvm->arch.nr_pmu_counters)
+			return false;
+
+		ret = handle_pmu_reg(vcpu, &p, PMEVCNTR0_EL0 + idx, rt, val,
+				     &read_pmxevcntr, &write_pmxevcntr);
+		break;
+	case SYS_PMEVCNTRn_EL0(0) ... SYS_PMEVCNTRn_EL0(30):
+		idx = ((p.CRm & 3) << 3) | (p.Op2 & 7);
+
+		if (idx >= vcpu->kvm->arch.nr_pmu_counters)
+			return false;
+
+		if (p.is_write) {
+			write_pmevcntrn(idx, val);
+			__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + idx, val);
+		} else {
+			vcpu_set_reg(vcpu, rt, read_pmevcntrn(idx));
+		}
+
+		ret = true;
+		break;
+	default:
+		ret = false;
+	}
+
+	if (ret)
+		__kvm_skip_instr(vcpu);
+
+	return ret;
+}
+
 static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
@@ -799,6 +970,9 @@ static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (kvm_handle_cntxct(vcpu))
 		return true;
 
+	if (kvm_hyp_handle_pmu_regs(vcpu))
+		return true;
+
 	return false;
 }
 
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 67216451b8ce..338e7eebf0d1 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -891,3 +891,19 @@ u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 
 	return u64_replace_bits(pmcr, n, ARMV8_PMU_PMCR_N);
 }
+
+bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
+{
+	u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
+
+	if (!enabled)
+		kvm_inject_undefined(vcpu);
+
+	return !enabled;
+}
+
+bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index cfbce4537b4c..b80cf6194fa3 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -842,22 +842,6 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
-static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
-{
-	u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
-	bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
-
-	if (!enabled)
-		kvm_inject_undefined(vcpu);
-
-	return !enabled;
-}
-
-static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
-{
-	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN);
-}
-
 static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
 {
 	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN);
-- 
2.50.0.727.gbf7dc18ff4-goog
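One detail worth spelling out is the PMEVCNTRn index decode in the handler
above, which follows the architected encoding (CRn=14, CRm=0b10xx, with the
low index bits in Op2). For example, PMEVCNTR10_EL0 has CRm = 0b1001 and
Op2 = 0b010:

	idx = ((p.CRm & 3) << 3) | (p.Op2 & 7);	/* (1 << 3) | 2 = 10 */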
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:51 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-16-coltonlewis@google.com>
Subject: [PATCH v3 15/22] KVM: arm64: Setup MDCR_EL2 to handle a partitioned PMU
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>
Set up MDCR_EL2 to handle a partitioned PMU. That means calculating an
appropriate value for HPMN instead of the maximum setting the host
allows (which implies no partition) so hardware enforces that a guest
will only see the counters in the guest partition.

With HPMN set, we can now leave the TPM and TPMCR bits unset unless FGT
is not available, in which case we need to fall back to them. Also, if
available, set the filtering bits HPMD and HCCD to be extra sure
nothing counts at EL2.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/kvm_pmu.h | 11 ++++++
 arch/arm64/kvm/debug.c           | 23 ++++++++----
 arch/arm64/kvm/pmu-part.c        | 57 ++++++++++++++++++++++++++++
 3 files changed, 86 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 62c8032a548f..35674879aae0 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -96,6 +96,9 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+
 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu);
@@ -158,6 +161,14 @@ static inline bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
 {
 	return false;
 }
+static inline u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index a554c3e368dc..b420fec3c754 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -37,15 +37,28 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
 	 * to disable guest access to the profiling and trace buffers
 	 */
-	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN,
-					 *host_data_ptr(nr_event_counters));
-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
-				MDCR_EL2_TPMS |
-				MDCR_EL2_TTRF |
+	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, kvm_pmu_hpmn(vcpu));
+	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TTRF |
 				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
 				MDCR_EL2_TDOSA);
 
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)
+	    && is_pmuv3p1(read_pmuver())) {
+		/*
+		 * Filtering these should be redundant because we trap
+		 * all the TYPER and FILTR registers anyway and ensure
+		 * they filter EL2, but set the bits if they are here.
+		 */
+		vcpu->arch.mdcr_el2 |= MDCR_EL2_HPMD;
+
+		if (is_pmuv3p5(read_pmuver()))
+			vcpu->arch.mdcr_el2 |= MDCR_EL2_HCCD;
+	}
+
+	if (!kvm_vcpu_pmu_use_fgt(vcpu))
+		vcpu->arch.mdcr_el2 |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+
 	/* Is the VM being debugged by userspace? */
 	if (vcpu->guest_debug)
 		/* Route all software debug exceptions to EL2 */
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 92775e19cbf6..f954d2d29314 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -137,3 +137,60 @@ void kvm_pmu_host_counters_disable(void)
 	mdcr &= ~MDCR_EL2_HPME;
 	write_sysreg(mdcr, mdcr_el2);
 }
+
+/**
+ * kvm_pmu_guest_num_counters() - Number of counters to show to guest
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the number of counters to show to the guest via
+ * PMCR_EL0.N, making sure to respect the maximum the host allows,
+ * which is hpmn_max if partitioned and host_max otherwise.
+ *
+ * Return: Valid value for PMCR_EL0.N
+ */
+u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
+{
+	u8 nr_cnt = vcpu->kvm->arch.nr_pmu_counters;
+	int hpmn_max = vcpu->kvm->arch.arm_pmu->hpmn_max;
+	u8 host_max = *host_data_ptr(nr_event_counters);
+
+	if (kvm_vcpu_pmu_is_partitioned(vcpu)) {
+		if (nr_cnt <= hpmn_max && nr_cnt <= host_max)
+			return nr_cnt;
+		if (hpmn_max <= host_max)
+			return hpmn_max;
+	}
+
+	if (nr_cnt <= host_max)
+		return nr_cnt;
+
+	return host_max;
+}
+
+/**
+ * kvm_pmu_hpmn() - Calculate HPMN field value
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Calculate the appropriate value to set for MDCR_EL2.HPMN, ensuring
+ * it always stays below the number of counters on the current CPU and
+ * above 0 unless the CPU has FEAT_HPMN0.
+ *
+ * This function works whether or not the PMU is partitioned.
+ *
+ * Return: A valid HPMN value
+ */
+u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
+{
+	u8 hpmn = kvm_pmu_guest_num_counters(vcpu);
+	int hpmn_max = vcpu->kvm->arch.arm_pmu->hpmn_max;
+	u8 host_max = *host_data_ptr(nr_event_counters);
+
+	if (hpmn == 0 && !cpus_have_final_cap(ARM64_HAS_HPMN0)) {
+		if (kvm_vcpu_pmu_is_partitioned(vcpu))
+			return hpmn_max;
+		else
+			return host_max;
+	}
+
+	return hpmn;
+}
-- 
2.50.0.727.gbf7dc18ff4-goog
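To make the clamping concrete with hypothetical values, take host_max = 10
and hpmn_max = 6 on a partitioned vCPU: a VM configured for 4 counters gets
HPMN = 4; one configured for 8 is clamped to HPMN = 6; and one configured
for 0 on hardware without FEAT_HPMN0 falls back to HPMN = 6, since HPMN = 0
is only architecturally valid when FEAT_HPMN0 is implemented.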
From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:52 +0000
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
References: <20250626200459.1153955-1-coltonlewis@google.com>
Message-ID: <20250626200459.1153955-17-coltonlewis@google.com>
Subject: [PATCH v3 16/22] KVM: arm64: Account for partitioning in PMCR_EL0 access
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Mingwei Zhang, Joey Gouly,
 Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis <coltonlewis@google.com>

For some reason unknown to me, KVM allows writes to PMCR_EL0.N even
though the architecture specifies that field as RO. Make sure these
accesses conform to the additional constraints imposed when the PMU is
partitioned.
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu.c      | 2 +-
 arch/arm64/kvm/sys_regs.c | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 338e7eebf0d1..9469f1e0a0b6 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -884,7 +884,7 @@ u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 {
 	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-	u64 n = vcpu->kvm->arch.nr_pmu_counters;
+	u64 n = kvm_pmu_guest_num_counters(vcpu);

 	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu))
 		n = FIELD_GET(MDCR_EL2_HPMN, __vcpu_sys_reg(vcpu, MDCR_EL2));
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b80cf6194fa3..e3d53f2da60b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1249,7 +1249,9 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
 	 */
 	if (!kvm_vm_has_ran_once(kvm) &&
 	    !vcpu_has_nv(vcpu) &&
-	    new_n <= kvm_arm_pmu_get_max_counters(kvm))
+	    new_n <= kvm_arm_pmu_get_max_counters(kvm) &&
+	    (!kvm_vcpu_pmu_is_partitioned(vcpu) ||
+	     new_n <= kvm_pmu_hpmn(vcpu)))
 		kvm->arch.nr_pmu_counters = new_n;

 	mutex_unlock(&kvm->arch.config_lock);
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:53 +0000
Message-ID: <20250626200459.1153955-18-coltonlewis@google.com>
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Subject: [PATCH v3 17/22] KVM: arm64: Context swap Partitioned PMU guest registers
From: Colton Lewis
To: kvm@vger.kernel.org

Save and restore the newly untrapped registers that can be directly
accessed by the guest when the PMU is partitioned:

* PMEVCNTRn_EL0
* PMCCNTR_EL0
* PMICNTR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1

If the PMU is not partitioned or MDCR_EL2.TPM is set, all PMU
registers are trapped, so return immediately.
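The bitmask registers need care because enabling and disabling go through
separate SET and CLR registers and only guest-owned bits may be touched. A
minimal userspace model of that pattern (names are illustrative, not the
kernel accessors):

#include <stdint.h>
#include <stdio.h>

static uint64_t hw_enabled;	/* stands in for PMCNTENSET/CLR state */

static void write_set(uint64_t bits) { hw_enabled |= bits; }
static void write_clr(uint64_t bits) { hw_enabled &= ~bits; }

static void load_enable_bits(uint64_t vcpu_val, uint64_t guest_mask)
{
	write_set(vcpu_val & guest_mask);	/* enable guest bits that are set */
	write_clr(~vcpu_val & guest_mask);	/* disable guest bits that are clear */
}

int main(void)
{
	hw_enabled = 0xf0;		/* host-owned counters 4-7 enabled */
	load_enable_bits(0x5, 0x0f);	/* guest owns counters 0-3 */
	printf("%#llx\n", (unsigned long long)hw_enabled); /* 0xf5: host bits untouched */
	return 0;
}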
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h |   4 ++
 arch/arm64/kvm/arm.c             |   2 +
 arch/arm64/kvm/pmu-part.c        | 101 +++++++++++++++++++++++++++++++
 3 files changed, 107 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 35674879aae0..4f0741bf6779 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -98,6 +98,8 @@ void kvm_pmu_host_counters_disable(void);

 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
+void kvm_pmu_load(struct kvm_vcpu *vcpu);
+void kvm_pmu_put(struct kvm_vcpu *vcpu);

 #if !defined(__KVM_NVHE_HYPERVISOR__)
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
@@ -169,6 +171,8 @@ static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline void kvm_pmu_load(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_put(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
 					     u64 select_idx, u64 val) {}
 static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e452aba1a3b2..7c007ee44ecb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -616,6 +616,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
 	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);

@@ -658,6 +659,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_put(vcpu);
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
+	kvm_pmu_put(vcpu);
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_put_hw_mmu(vcpu);
 	kvm_arm_vmid_clear_active();
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index f954d2d29314..5eb53c6409e7 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -9,6 +9,7 @@
 #include

 #include
+#include
 #include

 /**
@@ -194,3 +195,103 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)

 	return hpmn;
 }
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If the PMU is not partitioned or we have MDCR_EL2_TPM,
+	 * every PMU access is trapped so don't bother with the swap.
+	 */
+	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+		write_pmevcntrn(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCNTR_EL0);
+	write_pmccntr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	write_pmuserenr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+	write_pmselr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	write_pmcr(val);
+
+	/*
+	 * Loading these registers is tricky because of
+	 * 1. Applying only the bits for guest counters (indicated by mask)
+	 * 2. Setting and clearing are different registers
+	 */
+	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	write_pmcntenset(val & mask);
+	write_pmcntenclr(~val & mask);
+
+	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	write_pmintenset(val & mask);
+	write_pmintenclr(~val & mask);
+}
+
+/**
+ * kvm_pmu_put() - Put untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Put all untrapped PMU registers from the PCPU into the VCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_put(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If the PMU is not partitioned or we have MDCR_EL2_TPM,
+	 * every PMU access is trapped so don't bother with the swap.
+	 */
+	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = read_pmevcntrn(i);
+		__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + i, val);
+	}
+
+	val = read_pmccntr();
+	__vcpu_assign_sys_reg(vcpu, PMCCNTR_EL0, val);
+
+	val = read_pmuserenr();
+	__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, val);
+
+	val = read_pmselr();
+	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
+
+	val = read_pmcr();
+	__vcpu_assign_sys_reg(vcpu, PMCR_EL0, val);
+
+	/* Mask these to only save the guest relevant bits. */
+	val = read_pmcntenset();
+	__vcpu_assign_sys_reg(vcpu, PMCNTENSET_EL0, val & mask);
+
+	val = read_pmintenset();
+	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
+}
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:54 +0000
Message-ID: <20250626200459.1153955-19-coltonlewis@google.com>
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Subject: [PATCH v3 18/22] KVM: arm64: Enforce PMU event filter at vcpu_load()
From: Colton Lewis
To: kvm@vger.kernel.org

The KVM API for event filtering says that counters do not count when
blocked by the event filter. To enforce that, the event filter must be
rechecked on every load. If the event is filtered, exclude counting at
all exception levels before writing the hardware.
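For illustration, a standalone model of the evtyper rewrite: if an event
is not in the filter bitmap, set every exclude bit so the counter cannot
count anywhere. The bit positions here are illustrative stand-ins, not
the architected constants:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EXCLUDE_EL1	(1u << 31)	/* illustrative positions */
#define EXCLUDE_EL0	(1u << 30)
#define INCLUDE_EL2	(1u << 27)

static uint32_t apply_filter(uint32_t evtyper, bool event_allowed)
{
	if (!event_allowed) {
		evtyper |= EXCLUDE_EL1 | EXCLUDE_EL0;	/* evtyper_set */
		evtyper &= ~INCLUDE_EL2;		/* evtyper_clr */
	}
	return evtyper;
}

int main(void)
{
	printf("%#x\n", apply_filter(0x11, true));	/* unchanged: 0x11 */
	printf("%#x\n", apply_filter(0x11, false));	/* all ELs excluded */
	return 0;
}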
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu-part.c | 43 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 5eb53c6409e7..1451870757e1 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -196,6 +196,47 @@ u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 	return hpmn;
 }

+/**
+ * kvm_pmu_apply_event_filter() - Apply the PMU event filter
+ * @vcpu: Pointer to vcpu struct
+ *
+ * To uphold the guarantee of the KVM PMU event filter, we must ensure
+ * no counter counts if the event is filtered. Accomplish this by
+ * filtering all exception levels if the event is filtered.
+ */
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 evtyper_set = kvm_pmu_evtyper_mask(vcpu->kvm)
+			  & ~kvm_pmu_event_mask(vcpu->kvm)
+			  & ~ARMV8_PMU_INCLUDE_EL2;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	u8 i;
+	u64 val;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(val, vcpu->kvm->arch.pmu_filter)) {
+			val |= evtyper_set;
+			val &= ~evtyper_clr;
+		}
+
+		write_pmevtypern(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCFILTR_EL0);
+
+	if (vcpu->kvm->arch.pmu_filter &&
+	    !test_bit(ARMV8_PMUV3_PERFCTR_CPU_CYCLES, vcpu->kvm->arch.pmu_filter)) {
+		val |= evtyper_set;
+		val &= ~evtyper_clr;
+	}
+
+	write_pmccfiltr(val);
+}
+
 /**
  * kvm_pmu_load() - Load untrapped PMU registers
  * @vcpu: Pointer to struct kvm_vcpu
@@ -218,6 +259,8 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
 		return;

+	kvm_pmu_apply_event_filter(vcpu);
+
 	for (i = 0; i < pmu->hpmn_max; i++) {
 		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
 		write_pmevcntrn(i, val);
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:55 +0000
Message-ID: <20250626200459.1153955-20-coltonlewis@google.com>
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Subject: [PATCH v3 19/22] perf: arm_pmuv3: Handle IRQs for Partitioned PMU guest counters
From: Colton Lewis
To: kvm@vger.kernel.org

Guest counters will still trigger interrupts that need to be handled
by the host PMU interrupt handler.

Clear the overflow flags in hardware to handle the interrupt as
normal, but record which guest overflow flags were set in the virtual
overflow register so the interrupt can later be injected into the
guest.
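For illustration, a standalone model of the split the handler performs:
overflow bits from guest-owned counters are routed to the vCPU's virtual
PMOVSSET, while the rest go to host perf as usual (values illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t pmovsr     = 0xc5;	/* overflowed counters */
	uint64_t guest_mask = 0x0f;	/* counters 0-3 are guest-owned */

	uint64_t govf = pmovsr & guest_mask;	/* -> virtual PMOVSSET_EL0 */
	uint64_t host = pmovsr & ~guest_mask;	/* -> host perf events */

	printf("guest %#llx host %#llx\n",
	       (unsigned long long)govf, (unsigned long long)host);
	return 0;
}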
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h |  6 ++++++
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-part.c        | 17 +++++++++++++++++
 drivers/perf/arm_pmuv3.c         |  9 +++++++++
 4 files changed, 34 insertions(+)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 5f6269039f44..36638efe4258 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -180,6 +180,11 @@ static inline void write_pmintenset(u32 val)
 	write_sysreg(val, PMINTENSET);
 }

+static inline u32 read_pmintenset(void)
+{
+	return read_sysreg(PMINTENSET);
+}
+
 static inline void write_pmintenclr(u32 val)
 {
 	write_sysreg(val, PMINTENCLR);
@@ -249,6 +254,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 	return ~0;
 }

+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}

 static inline bool has_vhe(void)
 {
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 4f0741bf6779..03e3bd318e4b 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -95,6 +95,7 @@ u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
 u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
+void kvm_pmu_handle_guest_irq(u64 govf);

 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
@@ -291,6 +292,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)

 static inline void kvm_pmu_host_counters_enable(void) {}
 static inline void kvm_pmu_host_counters_disable(void) {}
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}

 #endif

diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 1451870757e1..bd04c37b19b9 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -338,3 +338,20 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
 	val = read_pmintenset();
 	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);
 }
+
+/**
+ * kvm_pmu_handle_guest_irq() - Record IRQs in guest counters
+ * @govf: Bitmask of guest overflowed counters
+ *
+ * Record IRQs from overflows in guest-reserved counters in the VCPU
+ * register for the guest to clear later.
+ */
+void kvm_pmu_handle_guest_irq(u64 govf)
+{
+	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+	if (!vcpu)
+		return;
+
+	__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, govf);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 5836672a3dd9..dba7dcee8258 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -750,6 +750,8 @@ static u64 armv8pmu_getreset_flags(void)

 	/* Write to clear flags */
 	value &= ARMV8_PMU_CNT_MASK_ALL;
+	/* Only reset interrupt-enabled counters. */
+	value &= read_pmintenset();
 	write_pmovsclr(value);

 	return value;
@@ -852,6 +854,7 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 {
 	u64 pmovsr;
+	u64 govf;
 	struct perf_sample_data data;
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
@@ -906,6 +909,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		 */
 		perf_event_overflow(event, &data, regs);
 	}
+
+	govf = pmovsr & kvm_pmu_guest_counter_mask(cpu_pmu);
+
+	if (kvm_pmu_is_partitioned(cpu_pmu) && govf)
+		kvm_pmu_handle_guest_irq(govf);
+
 	armv8pmu_start(cpu_pmu);

 	return IRQ_HANDLED;
-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:56 +0000
Message-ID: <20250626200459.1153955-21-coltonlewis@google.com>
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Subject: [PATCH v3 20/22] KVM: arm64: Inject recorded guest interrupts
From: Colton Lewis
To: kvm@vger.kernel.org

When re-entering the VM after handling a PMU interrupt, calculate
whether any of the guest counters overflowed and, if so, inject an
interrupt into the guest.
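For illustration, a standalone model of the check: a guest interrupt is
pending only if the PMU is globally enabled (PMCR_EL0.E) and some
guest-owned counter has both overflowed and its interrupt enabled (the
bit values below are illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PMCR_E (1u << 0)	/* global enable, as in PMCR_EL0.E */

static bool part_overflow_status(uint64_t pmcr, uint64_t mask,
				 uint64_t pmovs, uint64_t pminten)
{
	return (pmcr & PMCR_E) && (mask & pmovs & pminten);
}

int main(void)
{
	/* counter 2 is guest-owned, overflowed, and IRQ-enabled */
	printf("%d\n", part_overflow_status(PMCR_E, 0x0f, 0x04, 0x04)); /* 1 */
	/* overflow present but its interrupt is disabled */
	printf("%d\n", part_overflow_status(PMCR_E, 0x0f, 0x04, 0x00)); /* 0 */
	return 0;
}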
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-emul.c        |  4 ++--
 arch/arm64/kvm/pmu-part.c        | 22 +++++++++++++++++++++-
 arch/arm64/kvm/pmu.c             |  6 +++++-
 4 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 03e3bd318e4b..d047def897bc 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -86,6 +86,8 @@ bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_resync_el0(void);
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu);
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu);

 #define kvm_vcpu_has_pmu(vcpu)	\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index bcaa9f7a8ca2..6f41fc3e3f74 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -268,7 +268,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_emul_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);

@@ -405,7 +405,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 		kvm_pmu_counter_increment(vcpu, BIT(idx + 1),
 					  ARMV8_PMUV3_PERFCTR_CHAIN);

-	if (kvm_pmu_overflow_status(vcpu)) {
+	if (kvm_pmu_emul_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);

 		if (!in_nmi())
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index bd04c37b19b9..165d1eae2634 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -279,7 +279,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
 	write_pmcr(val);

 	/*
-	 * Loading these registers is tricky because of
+	 * Loading these registers is more intricate because of
 	 * 1. Applying only the bits for guest counters (indicated by mask)
 	 * 2. Setting and clearing are different registers
 	 */
@@ -355,3 +355,23 @@ void kvm_pmu_handle_guest_irq(u64 govf)

 	__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, govf);
 }
+
+/**
+ * kvm_pmu_part_overflow_status() - Determine if any guest counters have overflowed
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if any guest counters have overflowed and therefore an
+ * IRQ needs to be injected into the guest.
+ *
+ * Return: True if there was an overflow, false otherwise
+ */
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u64 pmovs = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+	u64 pmint = read_pmintenset();
+	u64 pmcr = read_pmcr();
+
+	return (pmcr & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint);
+}
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 9469f1e0a0b6..6ab0d23f9251 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -407,7 +407,11 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	bool overflow;

-	overflow = kvm_pmu_overflow_status(vcpu);
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		overflow = kvm_pmu_part_overflow_status(vcpu);
+	else
+		overflow = kvm_pmu_emul_overflow_status(vcpu);
+
 	if (pmu->irq_level == overflow)
 		return;

-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:57 +0000
Message-ID: <20250626200459.1153955-22-coltonlewis@google.com>
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Subject: [PATCH v3 21/22] KVM: arm64: Add ioctl to partition the PMU when supported
From: Colton Lewis
To: kvm@vger.kernel.org
Add KVM_ARM_PARTITION_PMU to enable the partitioned PMU for a given
vCPU. Add a corresponding KVM_CAP_ARM_PARTITION_PMU to check for this
ability. This capability is allowed on an initialized vCPU where PMUv3
and VHE are supported.

However, because the underlying ability relies on the driver being
passed command line arguments that configure the hardware partition at
boot, enabling the partitioned PMU will not be allowed without that
driver configuration even though the capability exists.
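For completeness, a sketch of how a VMM might use the new interface,
assuming uapi headers from a kernel with this series applied and eliding
most error handling; this is illustrative, not a reference VMM:

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/ioctl.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
	struct kvm_vcpu_init init;
	bool enable = true;

	/* the vCPU must be initialized (with PMUv3) before the new ioctl */
	ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
	init.features[0] |= 1u << KVM_ARM_VCPU_PMU_V3;
	ioctl(vcpu, KVM_ARM_VCPU_INIT, &init);

	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PARTITION_PMU) <= 0) {
		fprintf(stderr, "partitioned PMU not supported\n");
		return 1;
	}

	/* fails with EPERM unless arm-pmuv3 was booted with partition_pmu=y */
	if (ioctl(vcpu, KVM_ARM_PARTITION_PMU, &enable))
		perror("KVM_ARM_PARTITION_PMU");

	return 0;
}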
Signed-off-by: Colton Lewis
---
 Documentation/virt/kvm/api.rst    | 21 +++++++++++++++++++++
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/arm.c              | 20 ++++++++++++++++++++
 arch/arm64/kvm/pmu-part.c         |  3 ++-
 include/uapi/linux/kvm.h          |  4 ++++
 5 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 4ef3d8482000..7e76f7c87598 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6478,6 +6478,27 @@ the capability to be present.

 `flags` must currently be zero.

+4.144 KVM_ARM_PARTITION_PMU
+---------------------------
+
+:Capability: KVM_CAP_ARM_PARTITION_PMU
+:Architectures: arm64
+:Type: vcpu ioctl
+:Parameters: arg[0] is a boolean to enable the partitioned PMU
+
+This API controls the PMU implementation used for VMs. The capability
+is only available if the host PMUv3 driver was configured for
+partitioning via the module parameters `arm-pmuv3.partition_pmu=y` and
+`arm-pmuv3.reserved_guest_counters=[0-$NR_COUNTERS]`. When enabled,
+VMs are configured to have direct hardware access to the most
+frequently used registers for the counters configured by the
+aforementioned module parameters, bypassing the KVM traps in the
+standard emulated PMU implementation and reducing the overhead of any
+guest software that uses PMU capabilities such as `perf`.
+
+If the host driver was configured for partitioning but the partitioned
+PMU is disabled through this interface, the VM will use the legacy PMU
+that shares the host partition.

 .. _kvm_run:

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2df76689381a..3f835b28c855 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -369,6 +369,9 @@ struct kvm_arch {
 	/* Maximum number of counters for the guest */
 	u8 nr_pmu_counters;

+	/* Whether this guest uses the partitioned PMU */
+	bool partitioned_pmu_enable;
+
 	/* Iterator for idreg debugfs */
 	u8 idreg_debugfs_iter;

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7c007ee44ecb..cb4380f39d98 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include

 #define CREATE_TRACE_POINTS
@@ -38,6 +39,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -383,6 +385,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_PMU_V3:
 		r = kvm_supports_guest_pmuv3();
 		break;
+	case KVM_CAP_ARM_PARTITION_PMU:
+		r = kvm_pmu_partition_supported();
+		break;
 	case KVM_CAP_ARM_INJECT_SERROR_ESR:
 		r = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
 		break;
@@ -1810,6 +1815,21 @@ long kvm_arch_vcpu_ioctl(struct file *filp,

 		return kvm_arm_vcpu_finalize(vcpu, what);
 	}
+	case KVM_ARM_PARTITION_PMU: {
+		bool enable;
+
+		if (unlikely(!kvm_vcpu_initialized(vcpu)))
+			return -ENOEXEC;
+
+		if (!kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu))
+			return -EPERM;
+
+		if (copy_from_user(&enable, argp, sizeof(enable)))
+			return -EFAULT;
+
+		vcpu->kvm->arch.partitioned_pmu_enable = enable;
+		return 0;
+	}
 	default:
 		r = -EINVAL;
 	}
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 165d1eae2634..7b5c5fcc6623 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -53,7 +53,8 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
  */
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
 {
-	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu);
+	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu)
+	       && vcpu->kvm->arch.partitioned_pmu_enable;
 }

 /**
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c74cf8f73337..2f8a8d4cfe3c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -935,6 +935,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_ARM_EL2_E2H0 241
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
 #define KVM_CAP_GMEM_SHARED_MEM 243
+#define KVM_CAP_ARM_PARTITION_PMU 244

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1413,6 +1414,9 @@ struct kvm_enc_region {
 #define KVM_GET_SREGS2 _IOR(KVMIO, 0xcc, struct kvm_sregs2)
 #define KVM_SET_SREGS2 _IOW(KVMIO, 0xcd, struct kvm_sregs2)

+/* Available with KVM_CAP_ARM_PARTITION_PMU */
+#define KVM_ARM_PARTITION_PMU _IOWR(KVMIO, 0xce, bool)
+
 #define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0)
 #define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1)

-- 
2.50.0.727.gbf7dc18ff4-goog

From nobody Wed Oct 8 14:20:55 2025
Date: Thu, 26 Jun 2025 20:04:58 +0000
Message-ID: <20250626200459.1153955-23-coltonlewis@google.com>
In-Reply-To: <20250626200459.1153955-1-coltonlewis@google.com>
Subject: [PATCH v3 22/22] KVM: arm64: selftests: Add test case for partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org

Run a separate test case for a partitioned PMU in
vpmu_counter_access. An enum is created specifying whether we are
testing the emulated or partitioned PMU, and all the test functions
are modified to take the implementation as an argument and adjust
their setup appropriately.

Because the test should still succeed even on a machine where the
capability exists but the ioctl fails because the driver was never
configured properly, use __vcpu_ioctl to avoid checking the return
code.
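For illustration, a standalone model of that tolerant-enable flow; the
harness helpers are stubbed here and the constants mirror the uapi values
above, so treat this as a sketch rather than the selftest itself:

#include <stdbool.h>
#include <stdio.h>

#define KVM_CAP_ARM_PARTITION_PMU 244
#define KVM_ARM_PARTITION_PMU 0xce

static bool kvm_has_cap(int cap) { return true; }	/* stub: cap exists */
static int __vcpu_ioctl(int vcpu, unsigned long req, void *arg)
{
	return -1;	/* stub: driver booted unconfigured, ioctl fails */
}

int main(void)
{
	bool partition = true;

	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
		__vcpu_ioctl(0, KVM_ARM_PARTITION_PMU, &partition);
		/* result deliberately ignored, as in the diff below */

	puts("test continues with whichever PMU is in effect");
	return 0;
}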
Signed-off-by: Colton Lewis
---
 tools/include/uapi/linux/kvm.h                  |  2 +
 .../selftests/kvm/arm64/vpmu_counter_access.c   | 62 +++++++++++++------
 2 files changed, 46 insertions(+), 18 deletions(-)

diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index b6ae8ad8934b..21a2c37528c8 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -930,6 +930,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_ARM_PARTITION_PMU 244

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1356,6 +1357,7 @@ struct kvm_vfio_spapr_tce {
 #define KVM_S390_SET_CMMA_BITS _IOW(KVMIO, 0xb9, struct kvm_s390_cmma_log)
 /* Memory Encryption Commands */
 #define KVM_MEMORY_ENCRYPT_OP _IOWR(KVMIO, 0xba, unsigned long)
+#define KVM_ARM_PARTITION_PMU _IOWR(KVMIO, 0xce, bool)

 struct kvm_enc_region {
 	__u64 addr;
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index f16b3b27e32e..92e665516bc8 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -25,6 +25,16 @@
 /* The cycle counter bit position that's common among the PMU registers */
 #define ARMV8_PMU_CYCLE_IDX 31

+enum pmu_impl {
+	EMULATED,
+	PARTITIONED
+};
+
+const char *pmu_impl_str[] = {
+	"Emulated",
+	"Partitioned"
+};
+
 struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -405,7 +415,7 @@ static void guest_code(uint64_t expected_pmcr_n)
 }

 /* Create a VM that has one vCPU with PMUv3 configured. */
-static void create_vpmu_vm(void *guest_code)
+static void create_vpmu_vm(void *guest_code, enum pmu_impl impl)
 {
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
@@ -419,6 +429,7 @@ static void create_vpmu_vm(void *guest_code)
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
 		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
 	};
+	bool partition = (impl == PARTITIONED);

 	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
 	memset(&vpmu_vm, 0, sizeof(vpmu_vm));

@@ -449,6 +460,9 @@ static void create_vpmu_vm(void *guest_code)
 	/* Initialize vPMU */
 	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		__vcpu_ioctl(vpmu_vm.vcpu, KVM_ARM_PARTITION_PMU, &partition);
 }

 static void destroy_vpmu_vm(void)
@@ -475,12 +489,12 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	}
 }

-static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
+static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, enum pmu_impl impl, bool expect_fail)
 {
 	struct kvm_vcpu *vcpu;
 	uint64_t pmcr, pmcr_orig;

-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	vcpu = vpmu_vm.vcpu;

 	pmcr_orig = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
@@ -508,7 +522,7 @@ static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_access_test(uint64_t pmcr_n)
+static void run_access_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	uint64_t sp;
 	struct kvm_vcpu *vcpu;

 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);

-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;

 	/* Save the initial sp to restore them later to run the guest again */
@@ -550,14 +564,14 @@ static struct pmreg_sets validity_check_reg_sets[] = {
  * Create a VM, and check if KVM handles the userspace accesses of
  * the PMU register sets in @validity_check_reg_sets[] correctly.
  */
-static void run_pmregs_validity_test(uint64_t pmcr_n)
+static void run_pmregs_validity_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
 	int i;
 	struct kvm_vcpu *vcpu;
 	uint64_t set_reg_id, clr_reg_id, reg_val;
 	uint64_t valid_counters_mask, max_counters_mask;

-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, false);
 	vcpu = vpmu_vm.vcpu;

 	valid_counters_mask = get_counters_mask(pmcr_n);
@@ -607,11 +621,11 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
 * the vCPU to @pmcr_n, which is larger than the host value.
 * The attempt should fail as @pmcr_n is too big to set for the vCPU.
 */
-static void run_error_test(uint64_t pmcr_n)
+static void run_error_test(uint64_t pmcr_n, enum pmu_impl impl)
 {
-	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pmcr_n);

-	test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
+	test_create_vpmu_vm_with_pmcr_n(pmcr_n, impl, true);
 	destroy_vpmu_vm();
 }

@@ -619,30 +633,42 @@ static void run_error_test(uint64_t pmcr_n)
 * Return the default number of implemented PMU event counters excluding
 * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
 */
-static uint64_t get_pmcr_n_limit(void)
+static uint64_t get_pmcr_n_limit(enum pmu_impl impl)
 {
 	uint64_t pmcr;

-	create_vpmu_vm(guest_code);
+	create_vpmu_vm(guest_code, impl);
 	pmcr = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
 	destroy_vpmu_vm();
 	return get_pmcr_n(pmcr);
 }

-int main(void)
+void test_pmu(enum pmu_impl impl)
 {
 	uint64_t i, pmcr_n;

-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+	pr_info("Testing PMU: Implementation = %s\n", pmu_impl_str[impl]);
+
+	pmcr_n = get_pmcr_n_limit(impl);
+	pr_debug("PMCR_EL0.N: Limit = %lu\n", pmcr_n);

-	pmcr_n = get_pmcr_n_limit();
 	for (i = 0; i <= pmcr_n; i++) {
-		run_access_test(i);
-		run_pmregs_validity_test(i);
+		run_access_test(i, impl);
+		run_pmregs_validity_test(i, impl);
 	}

 	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+		run_error_test(i, impl);
+}
+
+int main(void)
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	test_pmu(EMULATED);
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		test_pmu(PARTITIONED);

 	return 0;
 }
-- 
2.50.0.727.gbf7dc18ff4-goog