From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:46 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Message-ID: <20250602192702.2125115-2-coltonlewis@google.com>
Subject: [PATCH 01/17] arm64: cpufeature: Add cpucap for HPMN0
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
	Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis

Add a capability for FEAT_HPMN0, which indicates whether MDCR_EL2.HPMN
can specify 0 counters reserved for the guest.

This required changing HPMN0 to an UnsignedEnum in tools/sysreg,
because otherwise not all of the macros needed to add it to the
arm64_cpu_capabilities array arm64_features are generated.

Signed-off-by: Colton Lewis
---
 arch/arm64/kernel/cpufeature.c | 8 ++++++++
 arch/arm64/tools/cpucaps       | 1 +
 arch/arm64/tools/sysreg        | 6 +++---
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a3da020f1d1c..578eea321a60 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -541,6 +541,7 @@ static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_HPMN0_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_DoubleLock_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_PMSVer_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_CTX_CMPs_SHIFT, 4, 0),
@@ -2884,6 +2885,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, FGT2)
 	},
+	{
+		.desc = "Hypervisor PMU Partitioning 0 Guest Counters",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_HPMN0,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64DFR0_EL1, HPMN0, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 10effd4cff6b..5b196ba21629 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -39,6 +39,7 @@ HAS_GIC_CPUIF_SYSREGS
 HAS_GIC_PRIO_MASKING
 HAS_GIC_PRIO_RELAXED_SYNC
 HAS_HCR_NV1
+HAS_HPMN0
 HAS_HCX
 HAS_LDAPR
 HAS_LPA2
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 8a8cf6874298..d29742481754 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1531,9 +1531,9 @@ EndEnum
 EndSysreg
 
 Sysreg	ID_AA64DFR0_EL1	3	0	0	5	0
-Enum	63:60	HPMN0
-	0b0000	UNPREDICTABLE
-	0b0001	DEF
+UnsignedEnum	63:60	HPMN0
+	0b0000	NI
+	0b0001	IMP
 EndEnum
 UnsignedEnum	59:56	ExtTrcBuff
 	0b0000	NI
-- 
2.49.0.1204.g71687c7c1d-goog
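
For context: once this cpucap is established, later arm64/KVM code can
gate HPMN0-dependent behavior on it. A minimal sketch, assuming a
hypothetical call site (not part of this patch); only the
cpus_have_final_cap() test is existing kernel API:

	/* FEAT_HPMN0: MDCR_EL2.HPMN may legally be programmed to 0,
	 * reserving no counters for the guest. */
	if (cpus_have_final_cap(ARM64_HAS_HPMN0))
		allow_hpmn_zero();	/* hypothetical helper */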
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:47 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Message-ID: <20250602192702.2125115-3-coltonlewis@google.com>
Subject: [PATCH 02/17] arm64: Generate sign macro for sysreg Enums
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
	Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis

There's no reason plain Enums shouldn't be equivalent to UnsignedEnums
and explicitly specify that they are unsigned. This will avoid the
annoyance I had with HPMN0 in the previous patch.

Signed-off-by: Colton Lewis
---
 arch/arm64/tools/gen-sysreg.awk | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/tools/gen-sysreg.awk b/arch/arm64/tools/gen-sysreg.awk
index f2a1732cb1f6..fa21a632d9b7 100755
--- a/arch/arm64/tools/gen-sysreg.awk
+++ b/arch/arm64/tools/gen-sysreg.awk
@@ -308,6 +308,7 @@ $1 == "Enum" && (block_current() == "Sysreg" || block_current() == "SysregFields
 	parse_bitdef(reg, field, $2)
 
 	define_field(reg, field, msb, lsb)
+	define_field_sign(reg, field, "false")
 
 	next
 }
-- 
2.49.0.1204.g71687c7c1d-goog
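
With this change a plain Enum also emits a sign macro into the
generated header. A sketch of the expected shape, reusing the HPMN0
field from the previous patch purely for illustration:

	/* Roughly what define_field_sign(reg, field, "false") produces: */
	#define ID_AA64DFR0_EL1_HPMN0_SIGNED	false

which is the macro the cpufeature code consults to pick an unsigned
comparison for the field.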
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:48 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Message-ID: <20250602192702.2125115-4-coltonlewis@google.com>
Subject: [PATCH 03/17] arm64: cpufeature: Add cpucap for PMICNTR
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
	Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis

Add a cpucap for FEAT_PMUv3_PMICNTR, which indicates there is a
dedicated instruction counter in addition to the cycle counter.

Signed-off-by: Colton Lewis
---
 arch/arm64/kernel/cpufeature.c | 7 +++++++
 arch/arm64/tools/cpucaps       | 1 +
 2 files changed, 8 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 578eea321a60..e798a706d8fb 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2892,6 +2892,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64DFR0_EL1, HPMN0, IMP)
 	},
+	{
+		.desc = "PMU Dedicated Instruction Counter",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_PMICNTR,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64DFR1_EL1, PMICNTR, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 5b196ba21629..6dd72fcdd612 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -47,6 +47,7 @@ HAS_LSE_ATOMICS
 HAS_MOPS
 HAS_NESTED_VIRT
 HAS_PAN
+HAS_PMICNTR
 HAS_PMUV3
 HAS_S1PIE
 HAS_S1POE
-- 
2.49.0.1204.g71687c7c1d-goog
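
As with HPMN0, a hedged sketch of how later code might use the new
capability; the register name comes from the architecture, while the
call site and helper are illustrative only:

	/* FEAT_PMUv3_PMICNTR: a fixed instruction counter (PMICNTR_EL0)
	 * exists alongside the fixed cycle counter. */
	if (cpus_have_final_cap(ARM64_HAS_PMICNTR))
		use_fixed_instruction_counter();	/* hypothetical */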
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:49 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Message-ID: <20250602192702.2125115-5-coltonlewis@google.com>
Subject: [PATCH 04/17] KVM: arm64: Cleanup PMU includes
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
	Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis

From: Marc Zyngier

asm/kvm_host.h includes asm/arm_pmu.h, which includes perf/arm_pmuv3.h,
which includes asm/arm_pmuv3.h, which includes asm/kvm_host.h.

This circular chain causes compilation problems when trying to use
anything defined in any of these headers from any of the others.

Reorganize these tangled headers. In particular:

* Move the declarations defining the interface between KVM and the PMU
  driver to their own header, asm/kvm_pmu.h, which can be used without
  the problem described above.

* Delete kvm/arm_pmu.h. Its remaining functions are mostly internal to
  KVM and should go in asm/kvm_host.h.
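
The practical effect is that perf-side code only needs the small
interface header. A minimal sketch of a consumer under that assumption;
kvm_host_pmu_init() is the real interface declared in the new header,
while the surrounding registration function is illustrative:

	#include <asm/kvm_pmu.h>

	static void example_pmu_register(struct arm_pmu *pmu)
	{
		/* Let KVM know about this PMU instance. */
		kvm_host_pmu_init(pmu);
	}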
Signed-off-by: Marc Zyngier
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h      |   2 +-
 arch/arm64/include/asm/kvm_host.h       | 190 ++++++++++++++++++++--
 arch/arm64/include/asm/kvm_pmu.h        |  38 +++++
 arch/arm64/kvm/arm.c                    |   1 -
 arch/arm64/kvm/debug.c                  |   1 +
 arch/arm64/kvm/hyp/include/hyp/switch.h |   1 +
 arch/arm64/kvm/pmu-emul.c               |  30 ++--
 arch/arm64/kvm/pmu.c                    |   2 +
 arch/arm64/kvm/sys_regs.c               |   1 +
 include/kvm/arm_pmu.h                   | 199 ------------------------
 include/linux/perf/arm_pmu.h            |  14 +-
 virt/kvm/kvm_main.c                     |   1 +
 12 files changed, 246 insertions(+), 234 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_pmu.h
 delete mode 100644 include/kvm/arm_pmu.h

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 8a777dec8d88..32c003a7b810 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -6,7 +6,7 @@
 #ifndef __ASM_PMUV3_H
 #define __ASM_PMUV3_H
 
-#include <asm/kvm_host.h>
+#include <asm/kvm_pmu.h>
 
 #include
 #include
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d941abc6b5ee..f5d97cd8e177 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -35,7 +36,6 @@
 
 #include
 #include
-#include
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
@@ -782,6 +782,33 @@ struct vcpu_reset_state {
 
 struct vncr_tlb;
 
+#if IS_ENABLED(CONFIG_HW_PERF_EVENTS)
+
+#define KVM_ARMV8_PMU_MAX_COUNTERS 32
+
+struct kvm_pmc {
+	u8 idx;	/* index into the pmu->pmc array */
+	struct perf_event *perf_event;
+};
+
+struct kvm_pmu_events {
+	u64 events_host;
+	u64 events_guest;
+};
+
+struct kvm_pmu {
+	struct irq_work overflow_work;
+	struct kvm_pmu_events events;
+	struct kvm_pmc pmc[KVM_ARMV8_PMU_MAX_COUNTERS];
+	int irq_num;
+	bool created;
+	bool irq_level;
+};
+#else
+struct kvm_pmu {
+};
+#endif
+
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
@@ -1469,25 +1496,11 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
-static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
-{
-	return (!has_vhe() && attr->exclude_host);
-}
-
 #ifdef CONFIG_KVM
-void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
-void kvm_clr_pmu_events(u64 clr);
-bool kvm_set_pmuserenr(u64 val);
 void kvm_enable_trbe(void);
 void kvm_disable_trbe(void);
 void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest);
 #else
-static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
-static inline void kvm_clr_pmu_events(u64 clr) {}
-static inline bool kvm_set_pmuserenr(u64 val)
-{
-	return false;
-}
 static inline void kvm_enable_trbe(void) {}
 static inline void kvm_disable_trbe(void) {}
 static inline void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest) {}
@@ -1658,5 +1671,152 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt);
 void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1);
 void check_feature_map(void);
 
+#define kvm_vcpu_has_pmu(vcpu)					\
+	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
+
+#if IS_ENABLED(CONFIG_HW_PERF_EVENTS)
+
+bool kvm_supports_guest_pmuv3(void);
+u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
+void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
+void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
+u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu);
+u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1);
+void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
+void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
+void kvm_pmu_update_run(struct kvm_vcpu *vcpu);
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+				    u64 select_idx);
+void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu);
+int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
+int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
+int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
+int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
+
+struct kvm_pmu_events *kvm_get_pmu_events(void);
+void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
+void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
+
+/*
+ * Updates the vcpu's view of the pmu events for this cpu.
+ * Must be called before every vcpu run after disabling interrupts, to ensure
+ * that an interrupt cannot fire and update the structure.
+ */
+#define kvm_pmu_update_vcpu_events(vcpu)				\
+	do {								\
+		if (!has_vhe() && system_supports_pmuv3())		\
+			vcpu->arch.pmu.events = *kvm_get_pmu_events();	\
+	} while (0)
+
+u8 kvm_arm_pmu_get_pmuver_limit(void);
+u64 kvm_pmu_evtyper_mask(struct kvm *kvm);
+int kvm_arm_set_default_pmu(struct kvm *kvm);
+u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm);
+
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
+bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx);
+void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu);
+#else
+static inline bool kvm_arm_support_pmu_v3(void)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
+					    u64 select_idx)
+{
+	return 0;
+}
+static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
+					     u64 select_idx, u64 val) {}
+static inline u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+static inline void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline void kvm_pmu_update_run(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+						  u64 data, u64 select_idx) {}
+static inline int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
+{
+	return 0;
+}
+
+static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
+static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
+static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
+static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
+static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
+{
+	return 0;
+}
+static inline u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
+{
+	return 0;
+}
+
+static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
+{
+	return -ENODEV;
+}
+
+static inline u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
+{
+	return 0;
+}
+
+static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
+static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
+{
+	return false;
+}
+
+static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
+
+#endif
 
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
new file mode 100644
index 000000000000..613cddbdbdd8
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __KVM_PMU_H
+#define __KVM_PMU_H
+
+/*
+ * Define the interface between the PMUv3 driver and KVM.
+ */
+struct perf_event_attr;
+struct arm_pmu;
+
+#define kvm_pmu_counter_deferred(attr)			\
+	({						\
+		!has_vhe() && (attr)->exclude_host;	\
+	})
+
+#ifdef CONFIG_KVM
+
+void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
+void kvm_clr_pmu_events(u64 clr);
+bool kvm_set_pmuserenr(u64 val);
+void kvm_vcpu_pmu_resync_el0(void);
+void kvm_host_pmu_init(struct arm_pmu *pmu);
+
+#else
+
+static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
+static inline void kvm_clr_pmu_events(u64 clr) {}
+static inline bool kvm_set_pmuserenr(u64 val)
+{
+	return false;
+}
+static inline void kvm_vcpu_pmu_resync_el0(void) {}
+static inline void kvm_host_pmu_init(struct arm_pmu *pmu) {}
+
+#endif
+
+#endif
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 36cfcffb40d8..3b9c003f2ea6 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -43,7 +43,6 @@
 #include
 
 #include
-#include
 #include
 
 #include "sys_regs.h"
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 0e4c805e7e89..7fb1d9e7180f 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -9,6 +9,7 @@
 
 #include
 #include
+#include
 
 #include
 #include
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index eef310cdbdbd..d407e716df1b 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 25c29107f13f..472a2ab6938f 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -8,11 +8,10 @@
 #include
 #include
 #include
-#include
 #include
+#include
 #include
 #include
-#include
 #include
 
 #define PERF_ATTR_CFG1_COUNTER_64BIT	BIT(0)
@@ -24,6 +23,8 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
 
+#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
+
 bool kvm_supports_guest_pmuv3(void)
 {
 	guard(mutex)(&arm_pmus_lock);
@@ -258,6 +259,16 @@ void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu)
 		pmu->pmc[i].idx = i;
 }
 
+static u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu)
+{
+	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
+
+	if (val == 0)
+		return BIT(ARMV8_PMU_CYCLE_IDX);
+	else
+		return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
+}
+
 /**
  * kvm_pmu_vcpu_destroy - free perf event of PMU for cpu
  * @vcpu: The vcpu pointer
@@ -315,16 +326,6 @@ u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
 	return mask & ~kvm_pmu_hyp_counter_mask(vcpu);
 }
 
-u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu)
-{
-	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
-
-	if (val == 0)
-		return BIT(ARMV8_PMU_CYCLE_IDX);
-	else
-		return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
-}
-
 static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc)
 {
 	if (!pmc->perf_event) {
@@ -784,6 +785,11 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_create_perf_event(pmc);
 }
 
+struct arm_pmu_entry {
+	struct list_head entry;
+	struct arm_pmu *arm_pmu;
+};
+
 void kvm_host_pmu_init(struct arm_pmu *pmu)
 {
 	struct arm_pmu_entry *entry;
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 6b48a3d16d0d..8bfc6b0a85f6 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -8,6 +8,8 @@
 #include
 #include
 
+#include
+
 static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
 /*
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 707c651aff03..d368eeb4f88e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
deleted file mode 100644
index 96754b51b411..000000000000
--- a/include/kvm/arm_pmu.h
+++ /dev/null
@@ -1,199 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2015 Linaro Ltd.
- * Author: Shannon Zhao
- */
-
-#ifndef __ASM_ARM_KVM_PMU_H
-#define __ASM_ARM_KVM_PMU_H
-
-#include
-#include
-
-#define KVM_ARMV8_PMU_MAX_COUNTERS	32
-
-#if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM)
-struct kvm_pmc {
-	u8 idx;	/* index into the pmu->pmc array */
-	struct perf_event *perf_event;
-};
-
-struct kvm_pmu_events {
-	u64 events_host;
-	u64 events_guest;
-};
-
-struct kvm_pmu {
-	struct irq_work overflow_work;
-	struct kvm_pmu_events events;
-	struct kvm_pmc pmc[KVM_ARMV8_PMU_MAX_COUNTERS];
-	int irq_num;
-	bool created;
-	bool irq_level;
-};
-
-struct arm_pmu_entry {
-	struct list_head entry;
-	struct arm_pmu *arm_pmu;
-};
-
-bool kvm_supports_guest_pmuv3(void);
-#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
-u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
-void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
-void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
-u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu);
-u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu);
-u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1);
-void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu);
-void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
-void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val);
-void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
-void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
-bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
-void kvm_pmu_update_run(struct kvm_vcpu *vcpu);
-void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
-void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
-void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
-				    u64 select_idx);
-void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu);
-int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
-			    struct kvm_device_attr *attr);
-int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
-			    struct kvm_device_attr *attr);
-int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
-			    struct kvm_device_attr *attr);
-int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
-
-struct kvm_pmu_events *kvm_get_pmu_events(void);
-void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
-void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
-void kvm_vcpu_pmu_resync_el0(void);
-
-#define kvm_vcpu_has_pmu(vcpu)					\
-	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
-
-/*
- * Updates the vcpu's view of the pmu events for this cpu.
- * Must be called before every vcpu run after disabling interrupts, to ensure
- * that an interrupt cannot fire and update the structure.
- */
-#define kvm_pmu_update_vcpu_events(vcpu)				\
-	do {								\
-		if (!has_vhe() && system_supports_pmuv3())		\
-			vcpu->arch.pmu.events = *kvm_get_pmu_events();	\
-	} while (0)
-
-u8 kvm_arm_pmu_get_pmuver_limit(void);
-u64 kvm_pmu_evtyper_mask(struct kvm *kvm);
-int kvm_arm_set_default_pmu(struct kvm *kvm);
-u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm);
-
-u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
-bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx);
-void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu);
-#else
-struct kvm_pmu {
-};
-
-static inline bool kvm_supports_guest_pmuv3(void)
-{
-	return false;
-}
-
-#define kvm_arm_pmu_irq_initialized(v)	(false)
-static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
-					    u64 select_idx)
-{
-	return 0;
-}
-static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu,
-					     u64 select_idx, u64 val) {}
-static inline void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu,
-						  u64 select_idx, u64 val) {}
-static inline u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu)
-{
-	return 0;
-}
-static inline u64 kvm_pmu_accessible_counter_mask(struct kvm_vcpu *vcpu)
-{
-	return 0;
-}
-static inline void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu) {}
-static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
-static inline void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
-static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
-static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
-static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
-{
-	return false;
-}
-static inline void kvm_pmu_update_run(struct kvm_vcpu *vcpu) {}
-static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
-static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
-static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
-						  u64 data, u64 select_idx) {}
-static inline int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
-					  struct kvm_device_attr *attr)
-{
-	return -ENXIO;
-}
-static inline int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
-					  struct kvm_device_attr *attr)
-{
-	return -ENXIO;
-}
-static inline int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
-					  struct kvm_device_attr *attr)
-{
-	return -ENXIO;
-}
-static inline int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
-{
-	return 0;
-}
-static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
-{
-	return 0;
-}
-
-#define kvm_vcpu_has_pmu(vcpu)	({ false; })
-static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
-static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
-static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
-static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
-static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
-{
-	return 0;
-}
-static inline u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
-{
-	return 0;
-}
-static inline void kvm_vcpu_pmu_resync_el0(void) {}
-
-static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
-{
-	return -ENODEV;
-}
-
-static inline u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
-{
-	return 0;
-}
-
-static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
-{
-	return 0;
-}
-
-static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int idx)
-{
-	return false;
-}
-
-static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
-
-#endif
-
-#endif
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 6dc5e0cd76ca..1de206b09616 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -13,6 +13,9 @@
 #include
 #include
 #include
+#ifdef CONFIG_ARM64
+#include
+#endif
 
 #ifdef CONFIG_ARM_PMU
 
@@ -25,6 +28,11 @@
 #else
 #define ARMPMU_MAX_HWEVENTS		33
 #endif
+
+#ifdef CONFIG_ARM
+#define kvm_host_pmu_init(_x)	{ (void)_x; }
+#endif
+
 /*
  * ARM PMU hw_event flags
  */
@@ -170,12 +178,6 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn);
 static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
 #endif
 
-#ifdef CONFIG_KVM
-void kvm_host_pmu_init(struct arm_pmu *pmu);
-#else
-#define kvm_host_pmu_init(x)	do { } while(0)
-#endif
-
 bool arm_pmu_irq_is_nmi(void);
 
 /* Internal functions only for core arm_pmu code */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e85b33a92624..d2263b5a0789 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -49,6 +49,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
-- 
2.49.0.1204.g71687c7c1d-goog
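
One computation above is worth a worked example, since this series
keeps moving it around: kvm_pmu_implemented_counter_mask() derives the
mask from PMCR_EL0.N plus the fixed cycle counter. A sketch with an
assumed N of 4 (the value is illustrative, not from the patch):

	u64 n = 4;	/* assumed PMCR_EL0.N */
	u64 mask = GENMASK(n - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
	/* ARMV8_PMU_CYCLE_IDX is 31, so mask == 0x8000000f:
	 * general-purpose counters 0-3 plus the cycle counter. */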
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:50 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Message-ID: <20250602192702.2125115-6-coltonlewis@google.com>
Subject: [PATCH 05/17] KVM: arm64: Reorganize PMU functions
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
	Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Mark Rutland, Shuah Khan,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Colton Lewis

A lot of functions in pmu-emul.c aren't specific to the emulated PMU
implementation. Move them to the more appropriate pmu.c file, where
shared PMU functions should live.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_host.h |   2 +
 arch/arm64/kvm/pmu-emul.c         | 611 +-----------------------------
 arch/arm64/kvm/pmu.c              | 610 +++++++++++++++++++++++++++++
 3 files changed, 613 insertions(+), 610 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f5d97cd8e177..3482d7602a5b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1706,6 +1706,7 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
 struct kvm_pmu_events *kvm_get_pmu_events(void);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu);
 
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
@@ -1719,6 +1720,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 	} while (0)
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
+u32 kvm_pmu_event_mask(struct kvm *kvm);
 u64 kvm_pmu_evtyper_mask(struct kvm *kvm);
 int kvm_arm_set_default_pmu(struct kvm *kvm);
 u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 472a2ab6938f..ff86c66e1b48 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -16,21 +16,10 @@
 
 #define PERF_ATTR_CFG1_COUNTER_64BIT	BIT(0)
 
-static LIST_HEAD(arm_pmus);
-static DEFINE_MUTEX(arm_pmus_lock);
-
 static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
 
-#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
-
-bool kvm_supports_guest_pmuv3(void)
-{
-	guard(mutex)(&arm_pmus_lock);
-	return !list_empty(&arm_pmus);
-}
-
 static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc)
 {
 	return container_of(pmc, struct kvm_vcpu, arch.pmu.pmc[pmc->idx]);
@@ -41,46 +30,6 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 __kvm_pmu_event_mask(unsigned int pmuver)
-{
-	switch (pmuver) {
-	case ID_AA64DFR0_EL1_PMUVer_IMP:
-		return GENMASK(9, 0);
-	case ID_AA64DFR0_EL1_PMUVer_V3P1:
-	case ID_AA64DFR0_EL1_PMUVer_V3P4:
-	case ID_AA64DFR0_EL1_PMUVer_V3P5:
-	case ID_AA64DFR0_EL1_PMUVer_V3P7:
-		return GENMASK(15, 0);
-	default:		/* Shouldn't be here, just for sanity */
-		WARN_ONCE(1, "Unknown PMU version %d\n", pmuver);
-		return 0;
-	}
-}
-
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
-{
-	u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
-	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
-
-	return __kvm_pmu_event_mask(pmuver);
-}
-
-u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
-{
-	u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
-		   kvm_pmu_event_mask(kvm);
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
-		mask |= ARMV8_PMU_INCLUDE_EL2;
-
-	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
-		mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
-			ARMV8_PMU_EXCLUDE_NS_EL1 |
-			ARMV8_PMU_EXCLUDE_EL3;
-
-	return mask;
-}
-
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -371,7 +320,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  * counter where the values of the global enable control, PMOVSSET_EL0[n], and
  * PMINTENSET_EL1[n] are all 1.
  */
-static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 
@@ -394,24 +343,6 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 	return reg;
 }
 
-static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
-{
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	bool overflow;
-
-	overflow = kvm_pmu_overflow_status(vcpu);
-	if (pmu->irq_level == overflow)
-		return;
-
-	pmu->irq_level = overflow;
-
-	if (likely(irqchip_in_kernel(vcpu->kvm))) {
-		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
-					      pmu->irq_num, overflow, pmu);
-		WARN_ON(ret);
-	}
-}
-
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -437,43 +368,6 @@ void kvm_pmu_update_run(struct kvm_vcpu *vcpu)
 		regs->device_irq_level |= KVM_ARM_DEV_PMU;
 }
 
-/**
- * kvm_pmu_flush_hwstate - flush pmu state to cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the host, and inject
- * an interrupt if that was the case.
- */
-void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/**
- * kvm_pmu_sync_hwstate - sync pmu state from cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the PMU has overflowed while we were running in the guest, and
- * inject an interrupt if that was the case.
- */
-void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
-{
-	kvm_pmu_update_state(vcpu);
-}
-
-/*
- * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
- * to the event.
- * This is why we need a callback to do it once outside of the NMI context.
- */
-static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
-{
-	struct kvm_vcpu *vcpu;
-
-	vcpu = container_of(work, struct kvm_vcpu, arch.pmu.overflow_work);
-	kvm_vcpu_kick(vcpu);
-}
-
 /*
  * Perform an increment on any of the counters described in @mask,
  * generating the overflow if required, and propagate it as a chained
@@ -785,137 +679,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_create_perf_event(pmc);
 }
 
-struct arm_pmu_entry {
-	struct list_head entry;
-	struct arm_pmu *arm_pmu;
-};
-
-void kvm_host_pmu_init(struct arm_pmu *pmu)
-{
-	struct arm_pmu_entry *entry;
-
-	/*
-	 * Check the sanitised PMU version for the system, as KVM does not
-	 * support implementations where PMUv3 exists on a subset of CPUs.
-	 */
-	if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit()))
-		return;
-
-	guard(mutex)(&arm_pmus_lock);
-
-	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
-	if (!entry)
-		return;
-
-	entry->arm_pmu = pmu;
-	list_add_tail(&entry->entry, &arm_pmus);
-}
-
-static struct arm_pmu *kvm_pmu_probe_armpmu(void)
-{
-	struct arm_pmu_entry *entry;
-	struct arm_pmu *pmu;
-	int cpu;
-
-	guard(mutex)(&arm_pmus_lock);
-
-	/*
-	 * It is safe to use a stale cpu to iterate the list of PMUs so long as
-	 * the same value is used for the entirety of the loop. Given this, and
-	 * the fact that no percpu data is used for the lookup there is no need
-	 * to disable preemption.
-	 *
-	 * It is still necessary to get a valid cpu, though, to probe for the
-	 * default PMU instance as userspace is not required to specify a PMU
-	 * type. In order to uphold the preexisting behavior KVM selects the
-	 * PMU instance for the core during vcpu init. A dependent use
-	 * case would be a user with disdain of all things big.LITTLE that
-	 * affines the VMM to a particular cluster of cores.
-	 *
-	 * In any case, userspace should just do the sane thing and use the UAPI
-	 * to select a PMU type directly. But, be wary of the baggage being
-	 * carried here.
-	 */
-	cpu = raw_smp_processor_id();
-	list_for_each_entry(entry, &arm_pmus, entry) {
-		pmu = entry->arm_pmu;
-
-		if (cpumask_test_cpu(cpu, &pmu->supported_cpus))
-			return pmu;
-	}
-
-	return NULL;
-}
-
-static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1)
-{
-	u32 hi[2], lo[2];
-
-	bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
-	bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
-
-	return ((u64)hi[pmceid1] << 32) | lo[pmceid1];
-}
-
-static u64 compute_pmceid0(struct arm_pmu *pmu)
-{
-	u64 val = __compute_pmceid(pmu, 0);
-
-	/* always support SW_INCR */
-	val |= BIT(ARMV8_PMUV3_PERFCTR_SW_INCR);
-	/* always support CHAIN */
-	val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
-	return val;
-}
-
-static u64 compute_pmceid1(struct arm_pmu *pmu)
-{
-	u64 val = __compute_pmceid(pmu, 1);
-
-	/*
-	 * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled
-	 * as RAZ
-	 */
-	val &= ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) |
-		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) |
-		 BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32));
-	return val;
-}
-
-u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
-{
-	struct arm_pmu *cpu_pmu = vcpu->kvm->arch.arm_pmu;
-	unsigned long *bmap = vcpu->kvm->arch.pmu_filter;
-	u64 val, mask = 0;
-	int base, i, nr_events;
-
-	if (!pmceid1) {
-		val = compute_pmceid0(cpu_pmu);
-		base = 0;
-	} else {
-		val = compute_pmceid1(cpu_pmu);
-		base = 32;
-	}
-
-	if (!bmap)
-		return val;
-
-	nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
-
-	for (i = 0; i < 32; i += 8) {
-		u64 byte;
-
-		byte = bitmap_get_value8(bmap, base + i);
-		mask |= byte << i;
-		if (nr_events >= (0x4000 + base + 32)) {
-			byte = bitmap_get_value8(bmap, 0x4000 + base + i);
-			mask |= byte << (32 + i);
-		}
-	}
-
-	return val & mask;
-}
-
 void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
 {
 	u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
@@ -927,378 +690,6 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
 	kvm_pmu_reprogram_counter_mask(vcpu, mask);
 }
 
-int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
-{
-	if (!vcpu->arch.pmu.created)
-		return -EINVAL;
-
-	/*
-	 * A valid interrupt configuration for the PMU is either to have a
-	 * properly configured interrupt number and using an in-kernel
-	 * irqchip, or to not have an in-kernel GIC and not set an IRQ.
-	 */
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int irq = vcpu->arch.pmu.irq_num;
-		/*
-		 * If we are using an in-kernel vgic, at this point we know
-		 * the vgic will be initialized, so we can check the PMU irq
-		 * number against the dimensions of the vgic and make sure
-		 * it's valid.
-		 */
-		if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq))
-			return -EINVAL;
-	} else if (kvm_arm_pmu_irq_initialized(vcpu)) {
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
-{
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int ret;
-
-		/*
-		 * If using the PMU with an in-kernel virtual GIC
-		 * implementation, we require the GIC to be already
-		 * initialized when initializing the PMU.
-		 */
-		if (!vgic_initialized(vcpu->kvm))
-			return -ENODEV;
-
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			return -ENXIO;
-
-		ret = kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num,
-					 &vcpu->arch.pmu);
-		if (ret)
-			return ret;
-	}
-
-	init_irq_work(&vcpu->arch.pmu.overflow_work,
-		      kvm_pmu_perf_overflow_notify_vcpu);
-
-	vcpu->arch.pmu.created = true;
-	return 0;
-}
-
-/*
- * For one VM the interrupt type must be same for each vcpu.
- * As a PPI, the interrupt number is the same for all vcpus,
- * while as an SPI it must be a separate number per vcpu.
- */
-static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
-{
-	unsigned long i;
-	struct kvm_vcpu *vcpu;
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (!kvm_arm_pmu_irq_initialized(vcpu))
-			continue;
-
-		if (irq_is_ppi(irq)) {
-			if (vcpu->arch.pmu.irq_num != irq)
-				return false;
-		} else {
-			if (vcpu->arch.pmu.irq_num == irq)
-				return false;
-		}
-	}
-
-	return true;
-}
-
-/**
- * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters.
- * @kvm: The kvm pointer
- */
-u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
-{
-	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
-
-	/*
-	 * PMUv3 requires that all event counters are capable of counting any
-	 * event, though the same may not be true of non-PMUv3 hardware.
-	 */
-	if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
-		return 1;
-
-	/*
-	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
-	 * Ignore those and return only the general-purpose counters.
-	 */
-	return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
-}
-
-static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
-{
-	kvm->arch.nr_pmu_counters = nr;
-
-	/* Reset MDCR_EL2.HPMN behind the vcpus' back... */
-	if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) {
-		struct kvm_vcpu *vcpu;
-		unsigned long i;
-
-		kvm_for_each_vcpu(i, vcpu, kvm) {
-			u64 val = __vcpu_sys_reg(vcpu, MDCR_EL2);
-			val &= ~MDCR_EL2_HPMN;
-			val |= FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters);
-			__vcpu_sys_reg(vcpu, MDCR_EL2) = val;
-		}
-	}
-}
-
-static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
-{
-	lockdep_assert_held(&kvm->arch.config_lock);
-
-	kvm->arch.arm_pmu = arm_pmu;
-	kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm));
-}
-
-/**
- * kvm_arm_set_default_pmu - No PMU set, get the default one.
- * @kvm: The kvm pointer
- *
- * The observant among you will notice that the supported_cpus
- * mask does not get updated for the default PMU even though it
- * is quite possible the selected instance supports only a
- * subset of cores in the system. This is intentional, and
- * upholds the preexisting behavior on heterogeneous systems
- * where vCPUs can be scheduled on any core but the guest
- * counters could stop working.
- */
-int kvm_arm_set_default_pmu(struct kvm *kvm)
-{
-	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
-
-	if (!arm_pmu)
-		return -ENODEV;
-
-	kvm_arm_set_pmu(kvm, arm_pmu);
-	return 0;
-}
-
-static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
-{
-	struct kvm *kvm = vcpu->kvm;
-	struct arm_pmu_entry *entry;
-	struct arm_pmu *arm_pmu;
-	int ret = -ENXIO;
-
-	lockdep_assert_held(&kvm->arch.config_lock);
-	mutex_lock(&arm_pmus_lock);
-
-	list_for_each_entry(entry, &arm_pmus, entry) {
-		arm_pmu = entry->arm_pmu;
-		if (arm_pmu->pmu.type == pmu_id) {
-			if (kvm_vm_has_ran_once(kvm) ||
-			    (kvm->arch.pmu_filter && kvm->arch.arm_pmu != arm_pmu)) {
-				ret = -EBUSY;
-				break;
-			}
-
-			kvm_arm_set_pmu(kvm, arm_pmu);
-			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
-			ret = 0;
-			break;
-		}
-	}
-
-	mutex_unlock(&arm_pmus_lock);
-	return ret;
-}
-
-static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned int n)
-{
-	struct kvm *kvm = vcpu->kvm;
-
-	if (!kvm->arch.arm_pmu)
-		return -EINVAL;
-
-	if (n > kvm_arm_pmu_get_max_counters(kvm))
-		return -EINVAL;
-
-	kvm_arm_set_nr_counters(kvm, n);
-	return 0;
-}
-
-int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
-{
-	struct kvm *kvm = vcpu->kvm;
-
-	lockdep_assert_held(&kvm->arch.config_lock);
-
-	if (!kvm_vcpu_has_pmu(vcpu))
-		return -ENODEV;
-
-	if (vcpu->arch.pmu.created)
-		return -EBUSY;
-
-	switch (attr->attr) {
-	case KVM_ARM_VCPU_PMU_V3_IRQ: {
-		int __user *uaddr = (int __user *)(long)attr->addr;
-		int irq;
-
-		if (!irqchip_in_kernel(kvm))
-			return -EINVAL;
-
-		if (get_user(irq, uaddr))
-			return -EFAULT;
-
-		/* The PMU overflow interrupt can be a PPI or a valid SPI. */
-		if (!(irq_is_ppi(irq) || irq_is_spi(irq)))
-			return -EINVAL;
-
-		if (!pmu_irq_is_valid(kvm, irq))
-			return -EINVAL;
-
-		if (kvm_arm_pmu_irq_initialized(vcpu))
-			return -EBUSY;
-
-		kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
-		vcpu->arch.pmu.irq_num = irq;
-		return 0;
-	}
-	case KVM_ARM_VCPU_PMU_V3_FILTER: {
-		u8 pmuver = kvm_arm_pmu_get_pmuver_limit();
-		struct kvm_pmu_event_filter __user *uaddr;
-		struct kvm_pmu_event_filter filter;
-		int nr_events;
-
-		/*
-		 * Allow userspace to specify an event filter for the entire
-		 * event range supported by PMUVer of the hardware, rather
-		 * than the guest's PMUVer for KVM backward compatibility.
-		 */
-		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
-
-		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
-
-		if (copy_from_user(&filter, uaddr, sizeof(filter)))
-			return -EFAULT;
-
-		if (((u32)filter.base_event + filter.nevents) > nr_events ||
-		    (filter.action != KVM_PMU_EVENT_ALLOW &&
-		     filter.action != KVM_PMU_EVENT_DENY))
-			return -EINVAL;
-
-		if (kvm_vm_has_ran_once(kvm))
-			return -EBUSY;
-
-		if (!kvm->arch.pmu_filter) {
-			kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
-			if (!kvm->arch.pmu_filter)
-				return -ENOMEM;
-
-			/*
-			 * The default depends on the first applied filter.
-			 * If it allows events, the default is to deny.
-			 * Conversely, if the first filter denies a set of
-			 * events, the default is to allow.
- */ - if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) - bitmap_zero(kvm->arch.pmu_filter, nr_events); - else - bitmap_fill(kvm->arch.pmu_filter, nr_events); - } - - if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) - bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); - else - bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); - - return 0; - } - case KVM_ARM_VCPU_PMU_V3_SET_PMU: { - int __user *uaddr =3D (int __user *)(long)attr->addr; - int pmu_id; - - if (get_user(pmu_id, uaddr)) - return -EFAULT; - - return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); - } - case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { - unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; - unsigned int n; - - if (get_user(n, uaddr)) - return -EFAULT; - - return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); - } - case KVM_ARM_VCPU_PMU_V3_INIT: - return kvm_arm_pmu_v3_init(vcpu); - } - - return -ENXIO; -} - -int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) -{ - switch (attr->attr) { - case KVM_ARM_VCPU_PMU_V3_IRQ: { - int __user *uaddr =3D (int __user *)(long)attr->addr; - int irq; - - if (!irqchip_in_kernel(vcpu->kvm)) - return -EINVAL; - - if (!kvm_vcpu_has_pmu(vcpu)) - return -ENODEV; - - if (!kvm_arm_pmu_irq_initialized(vcpu)) - return -ENXIO; - - irq =3D vcpu->arch.pmu.irq_num; - return put_user(irq, uaddr); - } - } - - return -ENXIO; -} - -int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) -{ - switch (attr->attr) { - case KVM_ARM_VCPU_PMU_V3_IRQ: - case KVM_ARM_VCPU_PMU_V3_INIT: - case KVM_ARM_VCPU_PMU_V3_FILTER: - case KVM_ARM_VCPU_PMU_V3_SET_PMU: - case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: - if (kvm_vcpu_has_pmu(vcpu)) - return 0; - } - - return -ENXIO; -} - -u8 kvm_arm_pmu_get_pmuver_limit(void) -{ - unsigned int pmuver; - - pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, - read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); - - /* - * Spoof a barebones PMUv3 implementation if the system supports IMPDEF - * traps of the PMUv3 sysregs - */ - if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) - return ID_AA64DFR0_EL1_PMUVer_IMP; - - /* - * Otherwise, treat IMPLEMENTATION DEFINED functionality as - * unimplemented - */ - if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) - return 0; - - return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); -} - /** * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU * @vcpu: The vcpu pointer diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index 8bfc6b0a85f6..4f0152e67ff3 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -8,10 +8,21 @@ #include #include =20 +#include #include =20 +static LIST_HEAD(arm_pmus); +static DEFINE_MUTEX(arm_pmus_lock); static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events); =20 +#define kvm_arm_pmu_irq_initialized(v) ((v)->arch.pmu.irq_num >=3D VGIC_NR= _SGIS) + +bool kvm_supports_guest_pmuv3(void) +{ + guard(mutex)(&arm_pmus_lock); + return !list_empty(&arm_pmus); +} + /* * Given the perf event attributes and system type, determine * if we are going to need to switch counters at guest entry/exit. @@ -211,3 +222,602 @@ void kvm_vcpu_pmu_resync_el0(void) =20 kvm_make_request(KVM_REQ_RESYNC_PMU_EL0, vcpu); } + +struct arm_pmu_entry { + struct list_head entry; + struct arm_pmu *arm_pmu; +}; + +void kvm_host_pmu_init(struct arm_pmu *pmu) +{ + struct arm_pmu_entry *entry; + + /* + * Check the sanitised PMU version for the system, as KVM does not + * support implementations where PMUv3 exists on a subset of CPUs. 
+ */ + if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit())) + return; + + guard(mutex)(&arm_pmus_lock); + + entry =3D kmalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) + return; + + entry->arm_pmu =3D pmu; + list_add_tail(&entry->entry, &arm_pmus); +} + +static struct arm_pmu *kvm_pmu_probe_armpmu(void) +{ + struct arm_pmu_entry *entry; + struct arm_pmu *pmu; + int cpu; + + guard(mutex)(&arm_pmus_lock); + + /* + * It is safe to use a stale cpu to iterate the list of PMUs so long as + * the same value is used for the entirety of the loop. Given this, and + * the fact that no percpu data is used for the lookup there is no need + * to disable preemption. + * + * It is still necessary to get a valid cpu, though, to probe for the + * default PMU instance as userspace is not required to specify a PMU + * type. In order to uphold the preexisting behavior KVM selects the + * PMU instance for the core during vcpu init. A dependent use + * case would be a user with disdain of all things big.LITTLE that + * affines the VMM to a particular cluster of cores. + * + * In any case, userspace should just do the sane thing and use the UAPI + * to select a PMU type directly. But, be wary of the baggage being + * carried here. + */ + cpu =3D raw_smp_processor_id(); + list_for_each_entry(entry, &arm_pmus, entry) { + pmu =3D entry->arm_pmu; + + if (cpumask_test_cpu(cpu, &pmu->supported_cpus)) + return pmu; + } + + return NULL; +} + +static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1) +{ + u32 hi[2], lo[2]; + + bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS); + bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS= ); + + return ((u64)hi[pmceid1] << 32) | lo[pmceid1]; +} + +static u64 compute_pmceid0(struct arm_pmu *pmu) +{ + u64 val =3D __compute_pmceid(pmu, 0); + + /* always support SW_INCR */ + val |=3D BIT(ARMV8_PMUV3_PERFCTR_SW_INCR); + /* always support CHAIN */ + val |=3D BIT(ARMV8_PMUV3_PERFCTR_CHAIN); + return val; +} + +static u64 compute_pmceid1(struct arm_pmu *pmu) +{ + u64 val =3D __compute_pmceid(pmu, 1); + + /* + * Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled + * as RAZ + */ + val &=3D ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) | + BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) | + BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32)); + return val; +} + +u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1) +{ + struct arm_pmu *cpu_pmu =3D vcpu->kvm->arch.arm_pmu; + unsigned long *bmap =3D vcpu->kvm->arch.pmu_filter; + u64 val, mask =3D 0; + int base, i, nr_events; + + if (!pmceid1) { + val =3D compute_pmceid0(cpu_pmu); + base =3D 0; + } else { + val =3D compute_pmceid1(cpu_pmu); + base =3D 32; + } + + if (!bmap) + return val; + + nr_events =3D kvm_pmu_event_mask(vcpu->kvm) + 1; + + for (i =3D 0; i < 32; i +=3D 8) { + u64 byte; + + byte =3D bitmap_get_value8(bmap, base + i); + mask |=3D byte << i; + if (nr_events >=3D (0x4000 + base + 32)) { + byte =3D bitmap_get_value8(bmap, 0x4000 + base + i); + mask |=3D byte << (32 + i); + } + } + + return val & mask; +} + +/* + * When perf interrupt is an NMI, we cannot safely notify the vcpu corresp= onding + * to the event. + * This is why we need a callback to do it once outside of the NMI context. 
+ */ +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work) +{ + struct kvm_vcpu *vcpu; + + vcpu =3D container_of(work, struct kvm_vcpu, arch.pmu.overflow_work); + kvm_vcpu_kick(vcpu); +} + +static u32 __kvm_pmu_event_mask(unsigned int pmuver) +{ + switch (pmuver) { + case ID_AA64DFR0_EL1_PMUVer_IMP: + return GENMASK(9, 0); + case ID_AA64DFR0_EL1_PMUVer_V3P1: + case ID_AA64DFR0_EL1_PMUVer_V3P4: + case ID_AA64DFR0_EL1_PMUVer_V3P5: + case ID_AA64DFR0_EL1_PMUVer_V3P7: + return GENMASK(15, 0); + default: /* Shouldn't be here, just for sanity */ + WARN_ONCE(1, "Unknown PMU version %d\n", pmuver); + return 0; + } +} + +u32 kvm_pmu_event_mask(struct kvm *kvm) +{ + u64 dfr0 =3D kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1); + u8 pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0); + + return __kvm_pmu_event_mask(pmuver); +} + +u64 kvm_pmu_evtyper_mask(struct kvm *kvm) +{ + u64 mask =3D ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | + kvm_pmu_event_mask(kvm); + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP)) + mask |=3D ARMV8_PMU_INCLUDE_EL2; + + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP)) + mask |=3D ARMV8_PMU_EXCLUDE_NS_EL0 | + ARMV8_PMU_EXCLUDE_NS_EL1 | + ARMV8_PMU_EXCLUDE_EL3; + + return mask; +} + +static void kvm_pmu_update_state(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *pmu =3D &vcpu->arch.pmu; + bool overflow; + + overflow =3D kvm_pmu_overflow_status(vcpu); + if (pmu->irq_level =3D=3D overflow) + return; + + pmu->irq_level =3D overflow; + + if (likely(irqchip_in_kernel(vcpu->kvm))) { + int ret =3D kvm_vgic_inject_irq(vcpu->kvm, vcpu, + pmu->irq_num, overflow, pmu); + WARN_ON(ret); + } +} + +/** + * kvm_pmu_flush_hwstate - flush pmu state to cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the host, and = inject + * an interrupt if that was the case. + */ +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +/** + * kvm_pmu_sync_hwstate - sync pmu state from cpu + * @vcpu: The vcpu pointer + * + * Check if the PMU has overflowed while we were running in the guest, and + * inject an interrupt if that was the case. + */ +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) +{ + kvm_pmu_update_state(vcpu); +} + +int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) +{ + if (!vcpu->arch.pmu.created) + return -EINVAL; + + /* + * A valid interrupt configuration for the PMU is either to have a + * properly configured interrupt number and using an in-kernel + * irqchip, or to not have an in-kernel GIC and not set an IRQ. + */ + if (irqchip_in_kernel(vcpu->kvm)) { + int irq =3D vcpu->arch.pmu.irq_num; + /* + * If we are using an in-kernel vgic, at this point we know + * the vgic will be initialized, so we can check the PMU irq + * number against the dimensions of the vgic and make sure + * it's valid. + */ + if (!irq_is_ppi(irq) && !vgic_valid_spi(vcpu->kvm, irq)) + return -EINVAL; + } else if (kvm_arm_pmu_irq_initialized(vcpu)) { + return -EINVAL; + } + + return 0; +} + +static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) +{ + if (irqchip_in_kernel(vcpu->kvm)) { + int ret; + + /* + * If using the PMU with an in-kernel virtual GIC + * implementation, we require the GIC to be already + * initialized when initializing the PMU. 
+ */ + if (!vgic_initialized(vcpu->kvm)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + ret =3D kvm_vgic_set_owner(vcpu, vcpu->arch.pmu.irq_num, + &vcpu->arch.pmu); + if (ret) + return ret; + } + + init_irq_work(&vcpu->arch.pmu.overflow_work, + kvm_pmu_perf_overflow_notify_vcpu); + + vcpu->arch.pmu.created =3D true; + return 0; +} + +/* + * For one VM the interrupt type must be same for each vcpu. + * As a PPI, the interrupt number is the same for all vcpus, + * while as an SPI it must be a separate number per vcpu. + */ +static bool pmu_irq_is_valid(struct kvm *kvm, int irq) +{ + unsigned long i; + struct kvm_vcpu *vcpu; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (!kvm_arm_pmu_irq_initialized(vcpu)) + continue; + + if (irq_is_ppi(irq)) { + if (vcpu->arch.pmu.irq_num !=3D irq) + return false; + } else { + if (vcpu->arch.pmu.irq_num =3D=3D irq) + return false; + } + } + + return true; +} + +/** + * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. + * @kvm: The kvm pointer + */ +u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm->arch.arm_pmu; + + /* + * PMUv3 requires that all event counters are capable of counting any + * event, though the same may not be true of non-PMUv3 hardware. + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return 1; + + /* + * The arm_pmu->cntr_mask considers the fixed counter(s) as well. + * Ignore those and return only the general-purpose counters. + */ + return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS); +} + +static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr) +{ + kvm->arch.nr_pmu_counters =3D nr; + + /* Reset MDCR_EL2.HPMN behind the vcpus' back... */ + if (test_bit(KVM_ARM_VCPU_HAS_EL2, kvm->arch.vcpu_features)) { + struct kvm_vcpu *vcpu; + unsigned long i; + + kvm_for_each_vcpu(i, vcpu, kvm) { + u64 val =3D __vcpu_sys_reg(vcpu, MDCR_EL2); + + val &=3D ~MDCR_EL2_HPMN; + val |=3D FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); + __vcpu_sys_reg(vcpu, MDCR_EL2) =3D val; + } + } +} + +static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) +{ + lockdep_assert_held(&kvm->arch.config_lock); + + kvm->arch.arm_pmu =3D arm_pmu; + kvm_arm_set_nr_counters(kvm, kvm_arm_pmu_get_max_counters(kvm)); +} + +/** + * kvm_arm_set_default_pmu - No PMU set, get the default one. + * @kvm: The kvm pointer + * + * The observant among you will notice that the supported_cpus + * mask does not get updated for the default PMU even though it + * is quite possible the selected instance supports only a + * subset of cores in the system. This is intentional, and + * upholds the preexisting behavior on heterogeneous systems + * where vCPUs can be scheduled on any core but the guest + * counters could stop working. 
+ */ +int kvm_arm_set_default_pmu(struct kvm *kvm) +{ + struct arm_pmu *arm_pmu =3D kvm_pmu_probe_armpmu(); + + if (!arm_pmu) + return -ENODEV; + + kvm_arm_set_pmu(kvm, arm_pmu); + return 0; +} + +static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) +{ + struct kvm *kvm =3D vcpu->kvm; + struct arm_pmu_entry *entry; + struct arm_pmu *arm_pmu; + int ret =3D -ENXIO; + + lockdep_assert_held(&kvm->arch.config_lock); + mutex_lock(&arm_pmus_lock); + + list_for_each_entry(entry, &arm_pmus, entry) { + arm_pmu =3D entry->arm_pmu; + if (arm_pmu->pmu.type =3D=3D pmu_id) { + if (kvm_vm_has_ran_once(kvm) || + (kvm->arch.pmu_filter && kvm->arch.arm_pmu !=3D arm_pmu)) { + ret =3D -EBUSY; + break; + } + + kvm_arm_set_pmu(kvm, arm_pmu); + cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); + ret =3D 0; + break; + } + } + + mutex_unlock(&arm_pmus_lock); + return ret; +} + +static int kvm_arm_pmu_v3_set_nr_counters(struct kvm_vcpu *vcpu, unsigned = int n) +{ + struct kvm *kvm =3D vcpu->kvm; + + if (!kvm->arch.arm_pmu) + return -EINVAL; + + if (n > kvm_arm_pmu_get_max_counters(kvm)) + return -EINVAL; + + kvm_arm_set_nr_counters(kvm, n); + return 0; +} + +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + struct kvm *kvm =3D vcpu->kvm; + + lockdep_assert_held(&kvm->arch.config_lock); + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (vcpu->arch.pmu.created) + return -EBUSY; + + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(kvm)) + return -EINVAL; + + if (get_user(irq, uaddr)) + return -EFAULT; + + /* The PMU overflow interrupt can be a PPI or a valid SPI. */ + if (!(irq_is_ppi(irq) || irq_is_spi(irq))) + return -EINVAL; + + if (!pmu_irq_is_valid(kvm, irq)) + return -EINVAL; + + if (kvm_arm_pmu_irq_initialized(vcpu)) + return -EBUSY; + + kvm_debug("Set kvm ARM PMU irq: %d\n", irq); + vcpu->arch.pmu.irq_num =3D irq; + return 0; + } + case KVM_ARM_VCPU_PMU_V3_FILTER: { + u8 pmuver =3D kvm_arm_pmu_get_pmuver_limit(); + struct kvm_pmu_event_filter __user *uaddr; + struct kvm_pmu_event_filter filter; + int nr_events; + + /* + * Allow userspace to specify an event filter for the entire + * event range supported by PMUVer of the hardware, rather + * than the guest's PMUVer for KVM backward compatibility. + */ + nr_events =3D __kvm_pmu_event_mask(pmuver) + 1; + + uaddr =3D (struct kvm_pmu_event_filter __user *)(long)attr->addr; + + if (copy_from_user(&filter, uaddr, sizeof(filter))) + return -EFAULT; + + if (((u32)filter.base_event + filter.nevents) > nr_events || + (filter.action !=3D KVM_PMU_EVENT_ALLOW && + filter.action !=3D KVM_PMU_EVENT_DENY)) + return -EINVAL; + + if (kvm_vm_has_ran_once(kvm)) + return -EBUSY; + + if (!kvm->arch.pmu_filter) { + kvm->arch.pmu_filter =3D bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT); + if (!kvm->arch.pmu_filter) + return -ENOMEM; + + /* + * The default depends on the first applied filter. + * If it allows events, the default is to deny. + * Conversely, if the first filter denies a set of + * events, the default is to allow. 
+ */ + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_zero(kvm->arch.pmu_filter, nr_events); + else + bitmap_fill(kvm->arch.pmu_filter, nr_events); + } + + if (filter.action =3D=3D KVM_PMU_EVENT_ALLOW) + bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + else + bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents); + + return 0; + } + case KVM_ARM_VCPU_PMU_V3_SET_PMU: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int pmu_id; + + if (get_user(pmu_id, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id); + } + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: { + unsigned int __user *uaddr =3D (unsigned int __user *)(long)attr->addr; + unsigned int n; + + if (get_user(n, uaddr)) + return -EFAULT; + + return kvm_arm_pmu_v3_set_nr_counters(vcpu, n); + } + case KVM_ARM_VCPU_PMU_V3_INIT: + return kvm_arm_pmu_v3_init(vcpu); + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: { + int __user *uaddr =3D (int __user *)(long)attr->addr; + int irq; + + if (!irqchip_in_kernel(vcpu->kvm)) + return -EINVAL; + + if (!kvm_vcpu_has_pmu(vcpu)) + return -ENODEV; + + if (!kvm_arm_pmu_irq_initialized(vcpu)) + return -ENXIO; + + irq =3D vcpu->arch.pmu.irq_num; + return put_user(irq, uaddr); + } + } + + return -ENXIO; +} + +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr = *attr) +{ + switch (attr->attr) { + case KVM_ARM_VCPU_PMU_V3_IRQ: + case KVM_ARM_VCPU_PMU_V3_INIT: + case KVM_ARM_VCPU_PMU_V3_FILTER: + case KVM_ARM_VCPU_PMU_V3_SET_PMU: + case KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS: + if (kvm_vcpu_has_pmu(vcpu)) + return 0; + } + + return -ENXIO; +} + +u8 kvm_arm_pmu_get_pmuver_limit(void) +{ + unsigned int pmuver; + + pmuver =3D SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, + read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1)); + + /* + * Spoof a barebones PMUv3 implementation if the system supports IMPDEF + * traps of the PMUv3 sysregs + */ + if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS)) + return ID_AA64DFR0_EL1_PMUVer_IMP; + + /* + * Otherwise, treat IMPLEMENTATION DEFINED functionality as + * unimplemented + */ + if (pmuver =3D=3D ID_AA64DFR0_EL1_PMUVer_IMP_DEF) + return 0; + + return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5); +} --=20 2.49.0.1204.g71687c7c1d-goog From nobody Thu Dec 18 14:11:45 2025 Received: from mail-il1-f202.google.com (mail-il1-f202.google.com [209.85.166.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E88FE22D4C8 for ; Mon, 2 Jun 2025 19:29:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.166.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748892549; cv=none; b=RMI1SG8MtQc+s8qR3hjXtY9AmOrlwA6BH2Tv+N3GrNB6U7vT5tt2Z9AgnBriXsQOeN7+wS5i5aavK+kwlwRIqYUsW3Pj6toGWfxkDbzbN62AX17LEe4EDcNdu3AiJm08VgyYhvG26odaAGiJIyKXaEQbwebe4wj6EVHM5QHEsG0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748892549; c=relaxed/simple; bh=Vcxej3OdIqv2ylRzWrFBEgNJ2+T9OXbIi8zsufdKlu0=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Y2R7VdugMKm6XiGBaorUKjD9iAjM5hdfa6sQ3jxAskG2mzklW0eq9tiC6+DVsrAX9Z5z6RrV04WM8JvGe7sYjXUIeiG+h4BOjF9ItHLuWxE5/9ftsFQYVUUpNyGOjduYe5bkTM5qElzMCsYtItBw5OT/a6NoyRz0SUe16XpeqBo= ARC-Authentication-Results: i=1; 
smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=jlDD/E+s; arc=none smtp.client-ip=209.85.166.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="jlDD/E+s" Received: by mail-il1-f202.google.com with SMTP id e9e14a558f8ab-3dd81f9ce43so52008545ab.0 for ; Mon, 02 Jun 2025 12:29:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1748892543; x=1749497343; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=UfOzyGcF07X0D5/1q7yTcK3OXDHh41vI4neiIUdQIqk=; b=jlDD/E+s8I9AYFs9B4N9f7EvovOuQwsTA7O2xgekQTMNXjRSXsHQSzR6r5tN2xLmJL a6iHA9P4Ap9cN69W7GrCuJQ7AbfKQ+cHot9WuR2EfGW3FLbumn2UJO/AbgwW3X9vnMZL lWe4Uh7EAVLUZqfyogIEg0LtvqH/c0Ii+wrO4WseZtz4hu/IUXfeb5iG4xMfYpWBATep zwgF0QbBayRYX5Yvp9tWzJxwGDBREfYre49fVhBQdAqUXsbqBa1ySciBTgiUC+Zj54UD j+puA7jPJROWTmZGkUvjPSuzQda7lsoQeSqjlGO9Fx+/g/rGeH6Save1dGbsEZqSklJz oRwA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1748892543; x=1749497343; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=UfOzyGcF07X0D5/1q7yTcK3OXDHh41vI4neiIUdQIqk=; b=uGhcDitklMi3GBxyLN38Oy0N4BqRiY+L4maFy3VxQwic2MXx6/Gys0HPAcJtLKh5fT 6+qSPxiuhUo6mL5sBEBp/QbkiFZ7jZRJhXPhkgwYU5qGGYLNluGjQgvfd0eia/AsDPch P5XDPEJ/9fA9LviZ/F1oA04TAJqz1aN5SvU+sk4d68QHtKy4wa1FK7l2+6j5D6XjTQjn Nttj2mxDwTA4U28TeTV8PLaOBiZrotzRLL9AU4twskfwAr2R5jGMs+6jkDmW2mhIDedN sMYu+Fs6YtAQcXnT/vdSb2ph67KX0Gn2l6NtEXrb/6BD1HEYkSMoX2sVg3QJAHUBzrc8 BnUA== X-Forwarded-Encrypted: i=1; AJvYcCV79O01RM/NyNsooHSqXp+kBfqO4Dr2pTqkvqWd1hXpavPkUU2rNn46Tf63LWErazj/JuWc6lusRDjFmUY=@vger.kernel.org X-Gm-Message-State: AOJu0YyE/d6FkVWvYE0XBOL2fifD/n+g3M6mZnjAozEHrO/RsmdMZ8J5 TpJLAleUra1AOwZBR5eRZeiICfoa5bcuuQ+UytG0k8SjGdRUYDfu8qkL/KikhoA+D7fZoQ3td60 GLGRDL0G85RWS9oR73jIvKDGNZw== X-Google-Smtp-Source: AGHT+IGL1Kilk+67ELLUvG6KC6loDyrfoWQqkbb+XKZg4pQuuZkVEStcuaIGbxa/s2Ca1gvii7S5wn+pWq8+i5pjCQ== X-Received: from ilbbq5.prod.google.com ([2002:a05:6e02:2385:b0:3dd:7629:ec3a]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6e02:1fcd:b0:3dc:8423:5440 with SMTP id e9e14a558f8ab-3dd99b1383fmr148898625ab.0.1748892543626; Mon, 02 Jun 2025 12:29:03 -0700 (PDT) Date: Mon, 2 Jun 2025 19:26:51 +0000 In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250602192702.2125115-1-coltonlewis@google.com> X-Mailer: git-send-email 2.49.0.1204.g71687c7c1d-goog Message-ID: <20250602192702.2125115-7-coltonlewis@google.com> Subject: [PATCH 06/17] KVM: arm64: Introduce method to partition the PMU From: Colton Lewis To: kvm@vger.kernel.org Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, 
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

For PMUv3, the register field MDCR_EL2.HPMN partitions the PMU
counters into two ranges: counters 0..HPMN-1 are accessible by EL1
and, if allowed, EL0, while counters HPMN..N are only accessible by
EL2.

Track HPMN in a variable in struct arm_pmu because both KVM and the
PMUv3 driver will need to know it to handle guests correctly.

Introduce the function kvm_pmu_partition() to set this variable and
modify the PMU driver's cntr_mask of available counters to exclude
the counters being reserved for the guest (the pivot arithmetic is
sketched after the diffstat below). Finally, make sure HPMN is set
with this value when setting up the MDCR_EL2 register.

Create a module parameter reserved_host_counters to set a default
value. A more flexible uAPI will be added in a later commit.

Due to the difficulty this feature would create for the driver
running at EL1 on the host, partitioning is only allowed in VHE
mode. Making this work in nVHE mode would require a hypercall for
every counter access in the driver because the counters reserved for
the host by HPMN are only accessible to EL2.

Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_pmu.h |  19 +++++
 arch/arm64/kvm/Makefile          |   2 +-
 arch/arm64/kvm/debug.c           |   9 ++-
 arch/arm64/kvm/pmu-part.c        | 117 +++++++++++++++++++++++++++++++
 arch/arm64/kvm/pmu.c             |  13 ++++
 include/linux/perf/arm_pmu.h     |   1 +
 6 files changed, 157 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/kvm/pmu-part.c
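To make the pivot arithmetic concrete before the diff, here is a
minimal standalone sketch of what kvm_pmu_hpmn() computes. This is
plain C outside the kernel: nr_counters and has_hpmn0 are illustrative
stand-ins for *host_data_ptr(nr_event_counters) and the
ARM64_HAS_HPMN0 capability, not kernel API.

	#include <stdbool.h>
	#include <stdio.h>

	/* Assumed system configuration, for illustration only */
	static unsigned int nr_counters = 8;	/* PMCR_EL0.N */
	static bool has_hpmn0 = true;		/* FEAT_HPMN0 present */

	static bool reservation_is_valid(unsigned int host_counters)
	{
		/* HPMN == 0 (every counter to the guest) needs FEAT_HPMN0 */
		return host_counters < nr_counters ||
		       (host_counters == nr_counters && has_hpmn0);
	}

	static unsigned int compute_hpmn(unsigned int host_counters)
	{
		/*
		 * Counters 0..HPMN-1 go to the guest, HPMN..N-1 to the
		 * host. An invalid request falls back to HPMN == N,
		 * i.e. no partition.
		 */
		if (reservation_is_valid(host_counters))
			return nr_counters - host_counters;
		return nr_counters;
	}

	int main(void)
	{
		printf("%u\n", compute_hpmn(2)); /* 6: guest gets 0..5 */
		printf("%u\n", compute_hpmn(8)); /* 0: needs FEAT_HPMN0 */
		printf("%u\n", compute_hpmn(9)); /* 8: invalid, no partition */
		return 0;
	}

Note the asymmetry: the reservation is expressed as a number of host
counters, while HPMN itself counts from the guest side.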
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 613cddbdbdd8..83b81e7829bf 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -22,6 +22,10 @@ bool kvm_set_pmuserenr(u64 val);
 void kvm_vcpu_pmu_resync_el0(void);
 void kvm_host_pmu_init(struct arm_pmu *pmu);
 
+bool kvm_pmu_partition_supported(void);
+u8 kvm_pmu_hpmn(u8 host_counters);
+int kvm_pmu_partition(struct arm_pmu *pmu, u8 host_counters);
+
 #else
 
 static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
@@ -33,6 +37,21 @@ static inline bool kvm_set_pmuserenr(u64 val)
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
 static inline void kvm_host_pmu_init(struct arm_pmu *pmu) {}
 
+static inline bool kvm_pmu_partition_supported(void)
+{
+	return false;
+}
+
+static inline u8 kvm_pmu_hpmn(u8 host_counters)
+{
+	return -1;
+}
+
+static inline int kvm_pmu_partition(struct arm_pmu *pmu, u8 host_counters)
+{
+	return -EPERM;
+}
+
 #endif
 
 #endif
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 7c329e01c557..8161dfb123d7 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -25,7 +25,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
	 vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
	 vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o
 
-kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
+kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu-part.o pmu.o
 kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o
 kvm-$(CONFIG_PTDUMP_STAGE2_DEBUGFS) += ptdump.o
 
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 7fb1d9e7180f..41746a498a45 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -9,6 +9,7 @@
 
 #include
 #include
+#include
 #include
 
 #include
@@ -31,15 +32,17 @@
  */
 static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 {
+	u8 hpmn = vcpu->kvm->arch.arm_pmu->hpmn;
+
 	preempt_disable();
 
 	/*
	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
	 * to disable guest access to the profiling and trace buffers
	 */
-	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN,
-					 *host_data_ptr(nr_event_counters));
-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, hpmn);
+	vcpu->arch.mdcr_el2 |= (MDCR_EL2_HPMD |
+				MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
 				MDCR_EL2_TTRF |
 				MDCR_EL2_TPMCR |
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
new file mode 100644
index 000000000000..7252a58f085c
--- /dev/null
+++ b/arch/arm64/kvm/pmu-part.c
@@ -0,0 +1,117 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Google LLC
+ * Author: Colton Lewis
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+
+/**
+ * kvm_pmu_reservation_is_valid() - Determine if reservation is allowed
+ * @host_counters: Number of host counters to reserve
+ *
+ * Determine if the number of host counters in the argument is
+ * allowed. It is allowed if it will produce a valid value for
+ * register field MDCR_EL2.HPMN.
+ *
+ * Return: True if reservation allowed, false otherwise
+ */
+static bool kvm_pmu_reservation_is_valid(u8 host_counters)
+{
+	u8 nr_counters = *host_data_ptr(nr_event_counters);
+
+	return host_counters < nr_counters ||
+	       (host_counters == nr_counters
+		&& cpus_have_final_cap(ARM64_HAS_HPMN0));
+}
+
+/**
+ * kvm_pmu_hpmn() - Compute HPMN value
+ * @host_counters: Number of host counters to reserve
+ *
+ * This function computes the value of HPMN, the partition pivot
+ * value, such that counters 0..HPMN-1 are reserved for the guest and
+ * counters HPMN..N are reserved for the host.
+ *
+ * If the requested @host_counters would create an invalid partition,
+ * return the value of HPMN that creates no partition.
+ *
+ * Return: Value of HPMN
+ */
+u8 kvm_pmu_hpmn(u8 host_counters)
+{
+	u8 nr_counters = *host_data_ptr(nr_event_counters);
+
+	if (likely(kvm_pmu_reservation_is_valid(host_counters)))
+		return nr_counters - host_counters;
+	else
+		return nr_counters;
+}
+
+/**
+ * kvm_pmu_partition_supported() - Determine if partitioning is possible
+ *
+ * Partitioning is only supported in VHE mode where we have PMUv3 and
+ * Fine Grain Traps (FGT).
+ *
+ * Return: True if partitioning is possible, false otherwise
+ */
+bool kvm_pmu_partition_supported(void)
+{
+	return has_vhe()
+	       && pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit())
+	       && cpus_have_final_cap(ARM64_HAS_FGT);
+}
+
+/**
+ * kvm_pmu_partition() - Partition the PMU
+ * @pmu: Pointer to pmu being partitioned
+ * @host_counters: Number of host counters to reserve
+ *
+ * Partition the given PMU by taking a number of host counters to
+ * reserve and, if it is a valid reservation, recording the
+ * corresponding HPMN value in the hpmn field of the PMU and clearing
+ * the guest-reserved counters from the counter mask.
+ *
+ * Passing 0 for @host_counters has the effect of disabling partitioning.
+ *
+ * Return: 0 on success, -ERROR otherwise
+ */
+int kvm_pmu_partition(struct arm_pmu *pmu, u8 host_counters)
+{
+	u8 nr_counters;
+	u8 hpmn;
+
+	if (!kvm_pmu_reservation_is_valid(host_counters))
+		return -EINVAL;
+
+	nr_counters = *host_data_ptr(nr_event_counters);
+	hpmn = kvm_pmu_hpmn(host_counters);
+
+	if (hpmn < nr_counters) {
+		pmu->hpmn = hpmn;
+		/* Inform host driver of available counters */
+		bitmap_clear(pmu->cntr_mask, 0, hpmn);
+		bitmap_set(pmu->cntr_mask, hpmn, nr_counters - hpmn);
+		clear_bit(ARMV8_PMU_CYCLE_IDX, pmu->cntr_mask);
+		if (pmuv3_has_icntr())
+			clear_bit(ARMV8_PMU_INSTR_IDX, pmu->cntr_mask);
+
+		kvm_debug("Partitioned PMU with HPMN %u", hpmn);
+	} else {
+		pmu->hpmn = nr_counters;
+		bitmap_set(pmu->cntr_mask, 0, nr_counters);
+		set_bit(ARMV8_PMU_CYCLE_IDX, pmu->cntr_mask);
+		if (pmuv3_has_icntr())
+			set_bit(ARMV8_PMU_INSTR_IDX, pmu->cntr_mask);
+
+		kvm_debug("Unpartitioned PMU");
+	}
+
+	return 0;
+}
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 4f0152e67ff3..2dcfac3ea9c6 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -15,6 +15,12 @@ static LIST_HEAD(arm_pmus);
 static DEFINE_MUTEX(arm_pmus_lock);
 static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
+static u8 reserved_host_counters __read_mostly;
+
+module_param(reserved_host_counters, byte, 0);
+MODULE_PARM_DESC(reserved_host_counters,
+		 "Partition the PMU into host and guest counters");
+
 #define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
 
 bool kvm_supports_guest_pmuv3(void)
@@ -239,6 +245,13 @@ void kvm_host_pmu_init(struct arm_pmu *pmu)
 	if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit()))
 		return;
 
+	if (reserved_host_counters) {
+		if (kvm_pmu_partition_supported())
+			WARN_ON(kvm_pmu_partition(pmu, reserved_host_counters));
+		else
+			kvm_err("PMU Partition is not supported");
+	}
+
 	guard(mutex)(&arm_pmus_lock);
 
 	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 1de206b09616..3843d66b7328 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -130,6 +130,7 @@ struct arm_pmu {
 
 	/* Only to be used by ACPI probing code */
 	unsigned long	acpi_cpuid;
+	u8		hpmn; /* MDCR_EL2.HPMN: counter partition pivot */
 };
 
 #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))
-- 
2.49.0.1204.g71687c7c1d-goog

From nobody Thu Dec 18 14:11:45 2025
Received: from mail-il1-f201.google.com (mail-il1-f201.google.com [209.85.166.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id BD86122D793
	for ; Mon,  2 Jun 2025 19:29:05 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.166.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1748892548; cv=none;
	b=ou5LGSR/GNNbUTHaALy11yr9Fry42fverXJSulymuZgrVwh86ufYM/dHL2LeS9aQdYPj7oOykoWOJqBUfE6GCwcBeIMwz/HTF7BWOMNGcSMXGPTLpskE1yNxJ0fGserxamBm5uy5xxOl0wkeqPUks23iRM7InykeuBaeILshE30=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1748892548; c=relaxed/simple;
	bh=zCxK3dISxEfOML4T7vSp5Om1/JdipoyUx0Yzj/yzWcY=;
	h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From:
	 To:Cc:Content-Type;
	b=kzWrVMSbr/03bybZBlpwYEJNAYHwJC+MoRISduHSvgSGROzgPXTcOZ/M5h2KqHnmWo2Ta8DXeDZwHZHhRbP/s75cAjUcm5TKY6YC+5pFqC6BBSL08g5bldeTMPKVEIZvnsz7MUhNqs2F4yZiCggb5CXp1nG/qGc9w9tU1l+3xns=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=qhrHTX6Y; arc=none smtp.client-ip=209.85.166.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="qhrHTX6Y" Received: by mail-il1-f201.google.com with SMTP id e9e14a558f8ab-3dc8ab29f1eso58846525ab.2 for ; Mon, 02 Jun 2025 12:29:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1748892544; x=1749497344; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=JYJABjVP2iDArJLMl5gj3ep5scI8fjcxHEMsai9rZ0Q=; b=qhrHTX6Ye+QOZb7bt52FhvR56+uKSbjeH+QJzZvUdehfqcBVH2m2lxBYDrrUnQ2iPZ 8qNVA7cTEjoWmqpPBhl4s2V1vYqBsgJvbxADHxtUEM/AE9V+sXmSeoaxtEBf87Kw7Re5 //0IXX4Jj9V3B1BxE+Q4eJ11htc2YGo1z5t64ZmjiaBlFpHcrCfdLR5FRUkP29zcpYLE SaGmy6etLpowiCcIxs/IhGrkdxyP/pqZCF+rSN8zo+YSMWWcVqqft13S02X81F4yB77+ j1cO1pG6sRSqk0UGKEsnlDM6X0WkZL+N/Tgr74kZZzvvi1LAEPPwsrxgUTekM+zMFKDN +cXw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1748892544; x=1749497344; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=JYJABjVP2iDArJLMl5gj3ep5scI8fjcxHEMsai9rZ0Q=; b=rsWZiIYhQMhUgGj7bxm04C0J1gv6UxQryVJrH5tCNviMv+vWGXfS+lUpauVQuQzjv1 EblQpUI52fVGeXQKhzG1LkGKLXiI5IZtGN58zvDkYS44fUdTRHItwQM4kEmvTQDHSxsZ 083h8OT5bj7bPnoxOgieqCbPNWk2gz3Boj3agfsiN/O1Yf+vf0GH0yliCBGyrRmzBeHX tJlac2YNMjsqsMRkr1R/DcPG3wY1Y3fIvmqMftFAmH1hjGwblbaJwBtK/bkms7JDUHfY /JAyLVIsFc0URSQr3c250HAHQHH2Q6AXiTfDvEjU9Uuh5+eWJeDgQIj/M6++UZqaXWi9 0sgg== X-Forwarded-Encrypted: i=1; AJvYcCXiQdsxgyR3jWUM3n6FF6Wcu2aLXcxi9UJuLjwqZHNjFflwVe2Ef4y85dx6C74OOyyeebeohv/IJNz0FP8=@vger.kernel.org X-Gm-Message-State: AOJu0YwTm7rnE0sXFI1/Xvdl6qX20tHfcRy9ytCpmdzXQ15AeWpKdELt Q2PvfAS8LyY6a83ui9bFZHiNqvsl/+M1T+c9mLwVrU+72DdXvrxb5OQRTyZ5kuTQaFJHZf/dLi9 qmzCOhe9oU1G9QR06ZcIlw5YiTA== X-Google-Smtp-Source: AGHT+IGg2rWIw4S1SBhgIdtyOl3kiwsQOgXvBToZBQsbPlELmXsPKjt1+hk3s/Rllj0rwiuIByf4uJq1ToD1VufkVQ== X-Received: from ilbbf17.prod.google.com ([2002:a05:6e02:3091:b0:3dc:a282:283e]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6e02:b46:b0:3dd:89c4:bc66 with SMTP id e9e14a558f8ab-3dda3342b61mr86473295ab.9.1748892544754; Mon, 02 Jun 2025 12:29:04 -0700 (PDT) Date: Mon, 2 Jun 2025 19:26:52 +0000 In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250602192702.2125115-1-coltonlewis@google.com> X-Mailer: git-send-email 2.49.0.1204.g71687c7c1d-goog Message-ID: <20250602192702.2125115-8-coltonlewis@google.com> Subject: [PATCH 07/17] perf: arm_pmuv3: Generalize counter bitmasks From: Colton Lewis To: kvm@vger.kernel.org Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , linux-doc@vger.kernel.org, 
linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The OVSR bitmasks are valid for enable and interrupt registers as well as overflow registers. Generalize the names. Signed-off-by: Colton Lewis --- drivers/perf/arm_pmuv3.c | 4 ++-- include/linux/perf/arm_pmuv3.h | 14 +++++++------- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index e506d59654e7..bbcbc8e0c62a 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -502,7 +502,7 @@ static void armv8pmu_pmcr_write(u64 val) =20 static int armv8pmu_has_overflowed(u64 pmovsr) { - return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK); + return !!(pmovsr & ARMV8_PMU_CNT_MASK_ALL); } =20 static int armv8pmu_counter_has_overflowed(u64 pmnc, int idx) @@ -738,7 +738,7 @@ static u64 armv8pmu_getreset_flags(void) value =3D read_pmovsclr(); =20 /* Write to clear flags */ - value &=3D ARMV8_PMU_OVERFLOWED_MASK; + value &=3D ARMV8_PMU_CNT_MASK_ALL; write_pmovsclr(value); =20 return value; diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h index d698efba28a2..fd2a34b4a64d 100644 --- a/include/linux/perf/arm_pmuv3.h +++ b/include/linux/perf/arm_pmuv3.h @@ -224,14 +224,14 @@ ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP) =20 /* - * PMOVSR: counters overflow flag status reg + * Counter bitmask layouts for overflow, enable, and interrupts */ -#define ARMV8_PMU_OVSR_P GENMASK(30, 0) -#define ARMV8_PMU_OVSR_C BIT(31) -#define ARMV8_PMU_OVSR_F BIT_ULL(32) /* arm64 only */ -/* Mask for writable bits is both P and C fields */ -#define ARMV8_PMU_OVERFLOWED_MASK (ARMV8_PMU_OVSR_P | ARMV8_PMU_OVSR_C | \ - ARMV8_PMU_OVSR_F) +#define ARMV8_PMU_CNT_MASK_P GENMASK(30, 0) +#define ARMV8_PMU_CNT_MASK_C BIT(31) +#define ARMV8_PMU_CNT_MASK_F BIT_ULL(32) /* arm64 only */ +#define ARMV8_PMU_CNT_MASK_ALL (ARMV8_PMU_CNT_MASK_P | \ + ARMV8_PMU_CNT_MASK_C | \ + ARMV8_PMU_CNT_MASK_F) =20 /* * PMXEVTYPER: Event selection reg --=20 2.49.0.1204.g71687c7c1d-goog From nobody Thu Dec 18 14:11:45 2025 Received: from mail-il1-f201.google.com (mail-il1-f201.google.com [209.85.166.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9C149225A3E for ; Mon, 2 Jun 2025 19:29:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.166.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748892551; cv=none; b=u8la4DZmK857FdMvdSU+aIzb9e0RRmtRAUMmJvZeFk16jANXlb5Mx4dsNPHH9NFRlE3WpQ+FLzENZhx1Y901Bbo1rFn9qPN0o5P9j23xFdT71wJxpqNW/NLr5V2r2VCEXPdNS33pmFmBDq0YMuactB5pppknBg1psiJWUBTlyiw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1748892551; c=relaxed/simple; bh=Kvx3uAqjJggLi1DATHITYErcbyIy/uA3bAn8jDjgGIY=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=OMtwfs7Vgg7Q3RQS+uijC5+2EiUD9XxdUej5k5uQR2XHctBRbFHFOzAD64WkqFtqzSdg+71u3Heb0tImV3vvrGmCgKV396V6fyxpyP7RzT/+jPIIv0gkqN6Hg1V0GLJQA34i3eYsPpD8cGeLnP/9pMA5EVywVVBMxhH0uCJA91o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com 
header.b=K/tv9cTJ; arc=none smtp.client-ip=209.85.166.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--coltonlewis.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="K/tv9cTJ" Received: by mail-il1-f201.google.com with SMTP id e9e14a558f8ab-3dc9c1970daso45211175ab.1 for ; Mon, 02 Jun 2025 12:29:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1748892545; x=1749497345; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=GXuWSM2GCZ9T+i75KiasrPAv9lcjtTJZ6XXd+zYRNlg=; b=K/tv9cTJ6LEH+2NRbzc2tz4fYx9ECT0CRFez1+Fe4dWowB+PKAW2JKpb46whl2icGB /NT113kY7jjpkOlu3W7MmijtukTR/X4Llb7ifToQmZ12GeH9fqiK06EghmXKfIe0YoiI NIZpMo4MeWwcsvaSBcalQI3LxFLzSb3n/C+A4Ay75xrhomsQF32GXB3Ga7Y9KTPP6omm 0kY3pFBJ+8w9u+jiqEvxlCiKQuYyfILuMibd/zYvbUbsIy3qgCnjH67m24BljADPsRqQ EpROFFC/06prLiH7GYA/HumifoQLtRrYUeZiD9C/M0sGj+POPpfcZxdgjdF10OECyc23 jmDw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1748892545; x=1749497345; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=GXuWSM2GCZ9T+i75KiasrPAv9lcjtTJZ6XXd+zYRNlg=; b=B4V08MUQNruIAhQZUpTU3PrfUbOKcTqnNkjlEiUU3zf6DbjvT0znb+EHpgPHP6fBAb lmfLI+1Z/ZIY+P85cOWAUKTyljp27W7EFc3ZtZ+r1yK6zc4BFmv7BPB6HxjzK0FVC0Fm UewROKsRPSGF+I30ecTBuNlT1nIMWL2hOKblD5IP2DAO/VcSz58ASL+QDqLmGDFKNjt0 65RUjPJvu4vsfuwI9+6wuRe78BJN1rqwAPa+cl/jwqiuQiM1hTykY9PBrSq+n3TmfFii /0jt3+Em6hll6jz6G9th3swhBp76qkoxqpWHh/UIQ8tWt3NqfZ/P8IzSYocT6hmVUTaI TEHg== X-Forwarded-Encrypted: i=1; AJvYcCVlwciiW1O0WqVbn+xRrLKGmKIoLSNdTRFpQzSfupwP0LcAVmY2L5+jB5Y00nb0TscFk7rzWop7uhlgr3M=@vger.kernel.org X-Gm-Message-State: AOJu0YzcS1SHS0kN8ieQ5e3ExBbhxZ8kwEtFOf4eeevHDKYzkk1Bxqv0 5ZFgJCT6VaXHBZHBF/NdqSH5DvBJuVeepXun/F4W2oHezxuIcR32rg93vg7t8zqvCbXty+QruwZ KLgZrETfrsqfYGNT9mTOf0mt/2g== X-Google-Smtp-Source: AGHT+IHFmiRfGhHEuAQ8BSv3MIMbdR2lvVdsHAXbv8PPs+QRUvPwpjEs+PRrAmKCsaAFW76c2vibn0+6tinsiTYuug== X-Received: from ilbbl7.prod.google.com ([2002:a05:6e02:32c7:b0:3dc:7303:c8cf]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6e02:16cf:b0:3dc:9b89:6a3b with SMTP id e9e14a558f8ab-3ddb6854e4fmr9879575ab.8.1748892545634; Mon, 02 Jun 2025 12:29:05 -0700 (PDT) Date: Mon, 2 Jun 2025 19:26:53 +0000 In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250602192702.2125115-1-coltonlewis@google.com> X-Mailer: git-send-email 2.49.0.1204.g71687c7c1d-goog Message-ID: <20250602192702.2125115-9-coltonlewis@google.com> Subject: [PATCH 08/17] perf: arm_pmuv3: Keep out of guest counter partition From: Colton Lewis To: kvm@vger.kernel.org Cc: Paolo Bonzini , Jonathan Corbet , Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" 
If the PMU is partitioned, keep the driver out of the guest counter
partition and only use the host counter partition. Partitioning is
defined by the MDCR_EL2.HPMN register field and saved in
cpu_pmu->hpmn. The range 0..HPMN-1 is accessible by EL1 and EL0 while
HPMN..PMCR.N is reserved for EL2.

Define some functions that take HPMN as an argument and construct
mutually exclusive bitmaps for testing which partition a particular
counter is in, as sketched below. Note that despite their different
position in the bitmap, the cycle and instruction counters are always
in the guest partition.

Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h | 18 ++++++
 arch/arm64/include/asm/kvm_pmu.h | 23 ++++++++++
 arch/arm64/kvm/pmu-part.c        | 73 ++++++++++++++++++++++++++++
 drivers/perf/arm_pmuv3.c         | 36 ++++++++++++----
 4 files changed, 146 insertions(+), 4 deletions(-)
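Before the diff, a small standalone sketch of the mask construction.
SK_GENMASK()/SK_BIT() are local stand-ins for the kernel's GENMASK()
and BIT() macros, and the CNT_MASK_* values mirror the bit layout
generalized in the previous patch (P = bits 30:0, C = bit 31, F = bit
32); only the arithmetic is meant to be authoritative here.

	#include <stdint.h>
	#include <stdio.h>

	#define SK_GENMASK(h, l)	((~0ULL >> (63 - (h))) & (~0ULL << (l)))
	#define SK_BIT(n)		(1ULL << (n))

	#define CNT_MASK_P	SK_GENMASK(30, 0)	/* event counters */
	#define CNT_MASK_C	SK_BIT(31)		/* cycle counter */
	#define CNT_MASK_F	SK_BIT(32)		/* instruction counter */
	#define CNT_MASK_ALL	(CNT_MASK_P | CNT_MASK_C | CNT_MASK_F)

	/* Host partition: general-purpose counters HPMN..N-1 */
	static uint64_t host_counter_mask(unsigned int nr_counters,
					  unsigned int hpmn)
	{
		return SK_GENMASK(nr_counters - 1, hpmn);
	}

	/* Guest partition: everything else, including cycle and instruction */
	static uint64_t guest_counter_mask(unsigned int nr_counters,
					   unsigned int hpmn)
	{
		return CNT_MASK_ALL & ~host_counter_mask(nr_counters, hpmn);
	}

	int main(void)
	{
		/* 8 counters, HPMN = 6: host owns bits 6-7, guest the rest */
		printf("host  = %#llx\n",
		       (unsigned long long)host_counter_mask(8, 6));  /* 0xc0 */
		printf("guest = %#llx\n",
		       (unsigned long long)guest_counter_mask(8, 6)); /* 0x1ffffff3f */
		return 0;
	}

The two masks are complementary within CNT_MASK_ALL by construction,
which is what makes them safe to apply to the shared
{PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers.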
diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 2ec0e5e83fc9..1687b4031ec2 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -227,6 +227,24 @@ static inline bool kvm_set_pmuserenr(u64 val)
 }
 
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
 
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI 0
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 83b81e7829bf..4098d4ad03d9 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -25,6 +25,11 @@ void kvm_host_pmu_init(struct arm_pmu *pmu);
 bool kvm_pmu_partition_supported(void);
 u8 kvm_pmu_hpmn(u8 host_counters);
 int kvm_pmu_partition(struct arm_pmu *pmu, u8 host_counters);
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu);
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
+void kvm_pmu_host_counters_enable(void);
+void kvm_pmu_host_counters_disable(void);
 
 #else
 
@@ -52,6 +57,24 @@ static inline int kvm_pmu_partition(struct arm_pmu *pmu, u8 host_counters)
 	return -EPERM;
 }
 
+static inline bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return false;
+}
+
+static inline u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ~0;
+}
+
+static inline void kvm_pmu_host_counters_enable(void) {}
+static inline void kvm_pmu_host_counters_disable(void) {}
+
 #endif
 
 #endif
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 7252a58f085c..33eeaa8faf7f 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -115,3 +115,80 @@ int kvm_pmu_partition(struct arm_pmu *pmu, u8 host_counters)
 
 	return 0;
 }
+
+/**
+ * kvm_pmu_is_partitioned() - Determine if given PMU is partitioned
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Determine if given PMU is partitioned by looking at hpmn field. The
+ * PMU is partitioned if this field is less than the number of
+ * counters in the system.
+ *
+ * Return: True if the PMU is partitioned, false otherwise
+ */
+bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
+{
+	return pmu->hpmn < *host_data_ptr(nr_event_counters);
+}
+
+/**
+ * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the host-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in HPMN..N.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu)
+{
+	u8 nr_counters = *host_data_ptr(nr_event_counters);
+
+	return GENMASK(nr_counters - 1, pmu->hpmn);
+}
+
+/**
+ * kvm_pmu_guest_counter_mask() - Compute bitmask of guest-reserved counters
+ * @pmu: Pointer to arm_pmu struct
+ *
+ * Compute the bitmask that selects the guest-reserved counters in the
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers. These are the counters
+ * in 0..HPMN-1 and the cycle and instruction counters.
+ *
+ * Return: Bitmask
+ */
+u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
+{
+	return ARMV8_PMU_CNT_MASK_ALL & ~kvm_pmu_host_counter_mask(pmu);
+}
+
+/**
+ * kvm_pmu_host_counters_enable() - Enable host-reserved counters
+ *
+ * When partitioned the enable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Enable that bit.
+ */
+void kvm_pmu_host_counters_enable(void)
+{
+	u64 mdcr = read_sysreg(mdcr_el2);
+
+	mdcr |= MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
+
+/**
+ * kvm_pmu_host_counters_disable() - Disable host-reserved counters
+ *
+ * When partitioned the disable bit for host-reserved counters is
+ * MDCR_EL2.HPME instead of the typical PMCR_EL0.E, which now
+ * exclusively controls the guest-reserved counters. Disable that bit.
+ */
+void kvm_pmu_host_counters_disable(void)
+{
+	u64 mdcr = read_sysreg(mdcr_el2);
+
+	mdcr &= ~MDCR_EL2_HPME;
+	write_sysreg(mdcr, mdcr_el2);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index bbcbc8e0c62a..f447a0f10e2b 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -823,12 +823,18 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 	kvm_vcpu_pmu_resync_el0();
 
 	/* Enable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_enable();
+
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
 	/* Disable all counters */
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		kvm_pmu_host_counters_disable();
+
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
 
@@ -939,6 +945,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 
 	/* Always prefer to place a cycle counter into the cycle counter. */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
 	    !armv8pmu_event_get_threshold(&event->attr)) {
 		if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask))
 			return ARMV8_PMU_CYCLE_IDX;
@@ -954,6 +961,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
	 * may not know how to handle it.
	 */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+	    !kvm_pmu_is_partitioned(cpu_pmu) &&
 	    !armv8pmu_event_get_threshold(&event->attr) &&
 	    test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
 	    !armv8pmu_event_want_user_access(event)) {
@@ -965,7 +973,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * Otherwise use events counters
 	 */
 	if (armv8pmu_event_is_chained(event))
-		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1057,6 +1065,14 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	return 0;
 }
 
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+	int idx;
+
+	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS)
+		armv8pmu_write_evcntr(idx, 0);
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1064,6 +1080,9 @@ static void armv8pmu_reset(void *info)
 
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
+	if (kvm_pmu_is_partitioned(cpu_pmu))
+		mask &= kvm_pmu_host_counter_mask(cpu_pmu);
+
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
 	armv8pmu_disable_intens(mask);
@@ -1071,11 +1090,19 @@ static void armv8pmu_reset(void *info)
 	/* Clear the counters we flip at guest entry/exit */
 	kvm_clr_pmu_events(mask);
 
+	pmcr = ARMV8_PMU_PMCR_LC;
+
 	/*
-	 * Initialize & Reset PMNC. Request overflow interrupt for
-	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+	 * Initialize & Reset PMNC. Request overflow interrupt for 64
+	 * bit cycle counter but cheat in armv8pmu_write_counter().
+	 *
+	 * When partitioned, there is no single bit to reset only the
+	 * host counters, so reset them individually.
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:54 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Mime-Version: 1.0
Message-ID: <20250602192702.2125115-10-coltonlewis@google.com>
Subject: [PATCH 09/17] KVM: arm64: Set up FGT for Partitioned PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"
In order to gain a real performance benefit from partitioning the
PMU, utilize fine grain traps (FEAT_FGT and FEAT_FGT2) so that common
PMU register accesses by the guest are no longer trapped, removing
that overhead.

There should be no information leaks between guests as all these
registers are context switched by a later patch in this series.

Untrapped:
* PMCR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCCNTR_EL0
* PMINTEN_EL0
* PMEVCNTRn_EL0
* PMICNTR_EL0

Trapped:
* PMOVS_EL0
* PMEVTYPERn_EL0
* PMICFILTR_EL0
* PMCCFILTR_EL0

PMOVS remains trapped so KVM can track overflow IRQs that will need
to be injected into the guest.

PMEVTYPERn remains trapped so KVM can limit which events guests can
count, such as disallowing counting at EL2. PMCCFILTR and PMICFILTR
remain trapped for the same reason.
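The guest-visible effect, as a rough sketch (illustrative guest-side
code, not part of the patch; evt stands for an arbitrary event value):

	/* Untrapped by FGT: no VM exit, straight to hardware */
	u64 cycles = read_sysreg(pmccntr_el0);
	write_sysreg(0, pmselr_el0);

	/* Trapped: exits to EL2 so KVM can filter the event, then
	 * (in a later patch) write it through to hardware */
	write_sysreg(evt, pmevtyper0_el0);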
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/kvm_host.h       | 11 +++++
 arch/arm64/kvm/debug.c                  |  5 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h | 64 +++++++++++++++++++++++--
 arch/arm64/kvm/pmu-part.c               | 14 ++++++
 4 files changed, 88 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3482d7602a5b..4ea045098bfa 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1703,6 +1703,12 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
 			    struct kvm_device_attr *attr);
 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
 
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu);
+
+#if defined(__KVM_NVHE_HYPERVISOR__)
+#define kvm_vcpu_pmu_is_partitioned(_) false
+#endif
+
 struct kvm_pmu_events *kvm_get_pmu_events(void);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
@@ -1819,6 +1825,11 @@ static inline bool kvm_pmu_counter_is_hyp(struct kvm_vcpu *vcpu, unsigned int id
 
 static inline void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu) {}
 
+static inline bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
 #endif
 
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 41746a498a45..cbe36825e41f 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -42,13 +42,14 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 */
 	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, hpmn);
 	vcpu->arch.mdcr_el2 |= (MDCR_EL2_HPMD |
-				MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
 				MDCR_EL2_TTRF |
-				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
 				MDCR_EL2_TDOSA);
 
+	if (!kvm_vcpu_pmu_is_partitioned(vcpu))
+		vcpu->arch.mdcr_el2 |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+
 	/* Is the VM being debugged by userspace? */
 	if (vcpu->guest_debug)
 		/* Route all software debug exceptions to EL2 */
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index d407e716df1b..c3c34a471ace 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -133,6 +133,10 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 		case HDFGWTR_EL2:			\
 			id = HDFGRTR_GROUP;		\
 			break;				\
+		case HDFGRTR2_EL2:			\
+		case HDFGWTR2_EL2:			\
+			id = HDFGRTR2_GROUP;		\
+			break;				\
 		case HAFGRTR_EL2:			\
 			id = HAFGRTR_GROUP;		\
 			break;				\
@@ -143,10 +147,6 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 		case HFGITR2_EL2:			\
 			id = HFGITR2_GROUP;		\
 			break;				\
-		case HDFGRTR2_EL2:			\
-		case HDFGWTR2_EL2:			\
-			id = HDFGRTR2_GROUP;		\
-			break;				\
 		default:				\
 			BUILD_BUG_ON(1);		\
 		}					\
@@ -191,6 +191,59 @@ static inline bool cpu_has_amu(void)
 		ID_AA64PFR0_EL1_AMU_SHIFT);
 }
 
+/**
+ * __activate_pmu_fgt() - Activate fine grain traps for partitioned PMU
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Clear the traps on the most commonly accessed registers of a
+ * partitioned PMU. Trap the rest.
+ */
+static inline void __activate_pmu_fgt(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+	u64 set;
+	u64 clr;
+
+	set = HDFGRTR_EL2_PMOVS
+	    | HDFGRTR_EL2_PMCCFILTR_EL0
+	    | HDFGRTR_EL2_PMEVTYPERn_EL0;
+	clr = HDFGRTR_EL2_PMUSERENR_EL0
+	    | HDFGRTR_EL2_PMSELR_EL0
+	    | HDFGRTR_EL2_PMINTEN
+	    | HDFGRTR_EL2_PMCNTEN
+	    | HDFGRTR_EL2_PMCCNTR_EL0
+	    | HDFGRTR_EL2_PMEVCNTRn_EL0;
+
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGRTR_EL2, clr, set);
+
+	set = HDFGWTR_EL2_PMOVS
+	    | HDFGWTR_EL2_PMCCFILTR_EL0
+	    | HDFGWTR_EL2_PMEVTYPERn_EL0;
+	clr = HDFGWTR_EL2_PMUSERENR_EL0
+	    | HDFGWTR_EL2_PMCR_EL0
+	    | HDFGWTR_EL2_PMSELR_EL0
+	    | HDFGWTR_EL2_PMINTEN
+	    | HDFGWTR_EL2_PMCNTEN
+	    | HDFGWTR_EL2_PMCCNTR_EL0
+	    | HDFGWTR_EL2_PMEVCNTRn_EL0;
+
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGWTR_EL2, clr, set);
+
+	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
+		return;
+
+	set = HDFGRTR2_EL2_nPMICFILTR_EL0;
+	clr = HDFGRTR2_EL2_nPMICNTR_EL0;
+
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGRTR2_EL2, clr, set);
+
+	set = HDFGWTR2_EL2_nPMICFILTR_EL0;
+	clr = HDFGWTR2_EL2_nPMICNTR_EL0;
+
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HDFGWTR2_EL2, clr, set);
+}
+
 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
@@ -210,6 +263,9 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	if (cpu_has_amu())
 		update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
 
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		__activate_pmu_fgt(vcpu);
+
 	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
 		return;
 
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 33eeaa8faf7f..179a4144cfd0 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -131,6 +131,20 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
 	return pmu->hpmn < *host_data_ptr(nr_event_counters);
 }
 
+/**
+ * kvm_vcpu_pmu_is_partitioned() - Determine if given VCPU has a partitioned PMU
+ * @vcpu: Pointer to kvm_vcpu struct
+ *
+ * Determine if given VCPU has a partitioned PMU by extracting that
+ * field and passing it to kvm_pmu_is_partitioned()
+ *
+ * Return: True if the VCPU PMU is partitioned, false otherwise
+ */
+bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
+{
+	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu);
+}
+
 /**
  * kvm_pmu_host_counter_mask() - Compute bitmask of host-reserved counters
  * @pmu: Pointer to arm_pmu struct
-- 
2.49.0.1204.g71687c7c1d-goog
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:55 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Mime-Version: 1.0
Message-ID: <20250602192702.2125115-11-coltonlewis@google.com>
Subject: [PATCH 10/17] KVM: arm64: Writethrough trapped PMEVTYPER register
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do this means delaying when guest writes take
effect.
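The shape of such a handler, roughly (a sketch only; the real helper,
writethrough_pmevtyper(), is in the diff below):

	val = p->regval & kvm_pmu_evtyper_mask(vcpu->kvm); /* event filter */
	__vcpu_sys_reg(vcpu, reg) = val;  /* update the virtual register */
	write_pmevtypern(idx, val);       /* and the physical one, now */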
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/sys_regs.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d368eeb4f88e..afd06400429a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include
@@ -942,7 +943,11 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
 	u64 pmcr, val;
 
-	pmcr = kvm_vcpu_read_pmcr(vcpu);
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		pmcr = read_pmcr();
+	else
+		pmcr = kvm_vcpu_read_pmcr(vcpu);
+
 	val = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
@@ -1037,6 +1042,22 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static void writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+				   u64 reg, u64 idx)
+{
+	u64 evmask = kvm_pmu_evtyper_mask(vcpu->kvm);
+	u64 val = p->regval & evmask;
+
+	__vcpu_sys_reg(vcpu, reg) = val;
+
+	if (idx == ARMV8_PMU_CYCLE_IDX)
+		write_pmccfiltr(val);
+	else if (idx == ARMV8_PMU_INSTR_IDX)
+		write_pmicfiltr(val);
+	else
+		write_pmevtypern(idx, val);
+}
+
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			       const struct sys_reg_desc *r)
 {
@@ -1063,7 +1084,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!pmu_counter_idx_valid(vcpu, idx))
 		return false;
 
-	if (p->is_write) {
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+		writethrough_pmevtyper(vcpu, p, reg, idx);
+	} else if (p->is_write) {
 		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
-- 
2.49.0.1204.g71687c7c1d-goog
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:56 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Mime-Version: 1.0
Message-ID: <20250602192702.2125115-12-coltonlewis@google.com>
Subject: [PATCH 11/17] KVM: arm64: Use physical PMSELR for PMXEVTYPER if partitioned
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"
Because PMXEVTYPER is trapped and PMSELR is not, the virtual PMSELR
register can be outdated by the time a PMXEVTYPER access traps, so
using it could lead to an invalid write. Use the physical register.
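A sketch of the hazard this avoids (guest-side, illustrative only):

	write_sysreg(5, pmselr_el0);       /* untrapped: hardware
	                                    * PMSELR.SEL = 5, the virtual
	                                    * copy is now stale */
	write_sysreg(evt, pmxevtyper_el0); /* trapped: KVM must decode SEL
	                                    * from the physical PMSELR, not
	                                    * the stale virtual one */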
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h | 7 ++++++-
 arch/arm64/kvm/sys_regs.c          | 9 +++++++--
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 32c003a7b810..8eee8cb218ea 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -72,11 +72,16 @@ static inline u64 read_pmcr(void)
 	return read_sysreg(pmcr_el0);
 }
 
-static inline void write_pmselr(u32 val)
+static inline void write_pmselr(u64 val)
 {
 	write_sysreg(val, pmselr_el0);
 }
 
+static inline u64 read_pmselr(void)
+{
+	return read_sysreg(pmselr_el0);
+}
+
 static inline void write_pmccntr(u64 val)
 {
 	write_sysreg(val, pmccntr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index afd06400429a..377fa7867152 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1061,14 +1061,19 @@ static void writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params
 static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			       const struct sys_reg_desc *r)
 {
-	u64 idx, reg;
+	u64 idx, reg, pmselr;
 
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
 	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
 		/* PMXEVTYPER_EL0 */
-		idx = SYS_FIELD_GET(PMSELR_EL0, SEL, __vcpu_sys_reg(vcpu, PMSELR_EL0));
+		if (kvm_vcpu_pmu_is_partitioned(vcpu))
+			pmselr = read_pmselr();
+		else
+			pmselr = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+
+		idx = SYS_FIELD_GET(PMSELR_EL0, SEL, pmselr);
 		reg = PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
-- 
2.49.0.1204.g71687c7c1d-goog
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:57 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Mime-Version: 1.0
Message-ID: <20250602192702.2125115-13-coltonlewis@google.com>
Subject: [PATCH 12/17] KVM: arm64: Writethrough trapped PMOVS register
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do this means delaying when guest writes take
effect.
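PMOVSSET_EL0 and PMOVSCLR_EL0 are write-1-to-set and write-1-to-clear
views of the same overflow flags, so the write-through only ever needs
to write the guest's masked bits to the matching physical register (a
sketch of the intent; writethrough_pmovs() in the diff below is the
real code):

	if (set)
		write_pmovsset(p->regval & mask); /* 1 bits set flags */
	else
		write_pmovsclr(p->regval & mask); /* 1 bits clear flags */

Bits outside the guest's counter mask are never written, which leaves
the host-reserved overflow flags untouched.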
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h | 10 ++++++++++
 arch/arm64/kvm/sys_regs.c          | 17 ++++++++++++++++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 8eee8cb218ea..5d01ed25c4ef 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -142,6 +142,16 @@ static inline u64 read_pmicfiltr(void)
 	return read_sysreg_s(SYS_PMICFILTR_EL0);
 }
 
+static inline void write_pmovsset(u64 val)
+{
+	write_sysreg(val, pmovsset_el0);
+}
+
+static inline u64 read_pmovsset(void)
+{
+	return read_sysreg(pmovsset_el0);
+}
+
 static inline void write_pmovsclr(u64 val)
 {
 	write_sysreg(val, pmovsclr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 377fa7867152..81a4ba7e6038 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1169,6 +1169,19 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static void writethrough_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p, bool set)
+{
+	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+
+	if (set) {
+		__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
+		write_pmovsset(p->regval & mask);
+	} else {
+		__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
+		write_pmovsclr(p->regval & mask);
+	}
+}
+
 static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			 const struct sys_reg_desc *r)
 {
@@ -1177,7 +1190,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
-	if (p->is_write) {
+	if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+		writethrough_pmovs(vcpu, p, r->CRm & 0x2);
+	} else if (p->is_write) {
 		if (r->CRm & 0x2)
 			/* accessing PMOVSSET_EL0 */
 			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
-- 
2.49.0.1204.g71687c7c1d-goog
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:58 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Mime-Version: 1.0
Message-ID: <20250602192702.2125115-14-coltonlewis@google.com>
Subject: [PATCH 13/17] KVM: arm64: Context switch Partitioned PMU guest registers
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"
Save and restore newly untrapped registers that will be directly
accessed by the guest when the PMU is partitioned:

* PMEVCNTRn_EL0
* PMCCNTR_EL0
* PMICNTR_EL0
* PMUSERENR_EL0
* PMSELR_EL0
* PMCR_EL0
* PMCNTEN_EL0
* PMINTEN_EL1

If the PMU is not partitioned or MDCR_EL2.TPM is set, all PMU
registers are trapped, so return immediately.
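The set/clear pairs are the only subtle part: PMCNTEN and PMINTEN can
only be written through their {SET,CLR} views, and only guest-reserved
bits may be touched. A sketch of the restore step (mirroring
kvm_pmu_load() in the diff below, with an illustrative mask of 0x0f
for HPMN = 4):

	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0); /* e.g. 0x05 */
	write_pmcntenset(val & mask);  /* enable guest counters 0 and 2 */
	write_pmcntenclr(~val & mask); /* disable guest counters 1 and 3 */
	/* host bits lie outside the mask and are never written */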
Signed-off-by: Colton Lewis
---
 arch/arm64/include/asm/arm_pmuv3.h |  17 ++++-
 arch/arm64/include/asm/kvm_host.h  |   4 +
 arch/arm64/kvm/arm.c               |   2 +
 arch/arm64/kvm/pmu-part.c          | 117 +++++++++++++++++++++++++++++
 4 files changed, 139 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 5d01ed25c4ef..a00845cffb3f 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -107,6 +107,11 @@ static inline void write_pmcntenset(u64 val)
 	write_sysreg(val, pmcntenset_el0);
 }
 
+static inline u64 read_pmcntenset(void)
+{
+	return read_sysreg(pmcntenset_el0);
+}
+
 static inline void write_pmcntenclr(u64 val)
 {
 	write_sysreg(val, pmcntenclr_el0);
@@ -117,6 +122,11 @@ static inline void write_pmintenset(u64 val)
 	write_sysreg(val, pmintenset_el1);
 }
 
+static inline u64 read_pmintenset(void)
+{
+	return read_sysreg(pmintenset_el1);
+}
+
 static inline void write_pmintenclr(u64 val)
 {
 	write_sysreg(val, pmintenclr_el1);
@@ -162,11 +172,16 @@ static inline u64 read_pmovsclr(void)
 	return read_sysreg(pmovsclr_el0);
 }
 
-static inline void write_pmuserenr(u32 val)
+static inline void write_pmuserenr(u64 val)
 {
 	write_sysreg(val, pmuserenr_el0);
 }
 
+static inline u64 read_pmuserenr(void)
+{
+	return read_sysreg(pmuserenr_el0);
+}
+
 static inline void write_pmuacr(u64 val)
 {
 	write_sysreg_s(val, SYS_PMUACR_EL1);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4ea045098bfa..955359f20161 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -453,9 +453,11 @@ enum vcpu_sysreg {
 	PMEVCNTR0_EL0,	/* Event Counter Register (0-30) */
 	PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
 	PMCCNTR_EL0,	/* Cycle Counter Register */
+	PMICNTR_EL0,	/* Instruction Counter Register */
 	PMEVTYPER0_EL0,	/* Event Type Register (0-30) */
 	PMEVTYPER30_EL0 = PMEVTYPER0_EL0 + 30,
 	PMCCFILTR_EL0,	/* Cycle Count Filter Register */
+	PMICFILTR_EL0,	/* Instruction Count Filter Register */
 	PMCNTENSET_EL0,	/* Count Enable Set Register */
 	PMINTENSET_EL1,	/* Interrupt Enable Set Register */
 	PMOVSSET_EL0,	/* Overflow Flag Status Set Register */
@@ -1713,6 +1715,8 @@ struct kvm_pmu_events *kvm_get_pmu_events(void);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu);
+void kvm_pmu_load(struct kvm_vcpu *vcpu);
+void kvm_pmu_put(struct kvm_vcpu *vcpu);
 
 /*
  * Updates the vcpu's view of the pmu events for this cpu.
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3b9c003f2ea6..4a1cc7b72295 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -615,6 +615,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
 	kvm_vcpu_pmu_restore_guest(vcpu);
+	kvm_pmu_load(vcpu);
 	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
 		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
 
@@ -657,6 +658,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_timer_vcpu_put(vcpu);
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
+	kvm_pmu_put(vcpu);
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_put_hw_mmu(vcpu);
 	kvm_arm_vmid_clear_active();
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 179a4144cfd0..40c72caef34e 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -8,6 +8,7 @@
 #include
 #include
 
+#include
 #include
 #include
 
@@ -202,3 +203,119 @@ void kvm_pmu_host_counters_disable(void)
 	mdcr &= ~MDCR_EL2_HPME;
 	write_sysreg(mdcr, mdcr_el2);
 }
+
+/**
+ * kvm_pmu_load() - Load untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_load(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If the PMU is not partitioned, don't bother.
+	 *
+	 * If we have MDCR_EL2_TPM, every PMU access is trapped which
+	 * implies we are using the emulated PMU instead of direct
+	 * access.
+	 */
+	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
+		return;
+
+	for (i = 0; i < pmu->hpmn; i++) {
+		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
+		write_pmevcntrn(i, val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMCCNTR_EL0);
+	write_pmccntr(val);
+
+	if (cpus_have_final_cap(ARM64_HAS_PMICNTR)) {
+		val = __vcpu_sys_reg(vcpu, PMICNTR_EL0);
+		write_pmicntr(val);
+	}
+
+	val = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	write_pmuserenr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
+	write_pmselr(val);
+
+	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	write_pmcr(val);
+
+	/*
+	 * Loading these registers is tricky because of
+	 * 1. Applying only the bits for guest counters (indicated by mask)
+	 * 2. Setting and clearing are different registers
+	 */
+	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	write_pmcntenset(val & mask);
+	write_pmcntenclr(~val & mask);
+
+	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	write_pmintenset(val & mask);
+	write_pmintenclr(~val & mask);
+}
+
+/**
+ * kvm_pmu_put() - Put untrapped PMU registers
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Save all untrapped PMU registers from the PCPU into the VCPU. Mask
+ * to only bits belonging to guest-reserved counters and leave
+ * host-reserved counters alone in bitmask registers.
+ */
+void kvm_pmu_put(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u8 i;
+	u64 val;
+
+	/*
+	 * If the PMU is not partitioned, don't bother.
+	 *
+	 * If we have MDCR_EL2_TPM, every PMU access is trapped which
+	 * implies we are using the emulated PMU instead of direct
+	 * access.
+	 */
+	if (!kvm_pmu_is_partitioned(pmu) || (vcpu->arch.mdcr_el2 & MDCR_EL2_TPM))
+		return;
+
+	for (i = 0; i < pmu->hpmn; i++) {
+		val = read_pmevcntrn(i);
+		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = val;
+	}
+
+	val = read_pmccntr();
+	__vcpu_sys_reg(vcpu, PMCCNTR_EL0) = val;
+
+	if (this_cpu_has_cap(ARM64_HAS_PMICNTR)) {
+		val = read_pmicntr();
+		__vcpu_sys_reg(vcpu, PMICNTR_EL0) = val;
+	}
+
+	val = read_pmuserenr();
+	__vcpu_sys_reg(vcpu, PMUSERENR_EL0) = val;
+
+	val = read_pmselr();
+	__vcpu_sys_reg(vcpu, PMSELR_EL0) = val;
+
+	val = read_pmcr();
+	__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+
+	/* Mask these to only save the guest-relevant bits. */
+	val = read_pmcntenset();
+	__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) = val & mask;
+
+	val = read_pmintenset();
+	__vcpu_sys_reg(vcpu, PMINTENSET_EL1) = val & mask;
+}
-- 
2.49.0.1204.g71687c7c1d-goog
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:26:59 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Mime-Version: 1.0
Message-ID: <20250602192702.2125115-15-coltonlewis@google.com>
Subject: [PATCH 14/17] perf: pmuv3: Handle IRQs for Partitioned PMU guest counters
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

Guest counters will still trigger interrupts that need to be handled
by the host PMU interrupt handler. Clear the overflow flags in
hardware to handle the interrupt as normal, but record which guest
overflow flags were set in the virtual overflow register for later
injecting the interrupt into the guest.
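The resulting flow in the host interrupt handler, roughly (a sketch;
govf and the helpers appear in the diff below):

	pmovsr = armv8pmu_getreset_flags(); /* read + clear hardware PMOVS */
	/* ... service host perf events as before ... */
	govf = pmovsr & kvm_pmu_guest_counter_mask(cpu_pmu);
	if (kvm_pmu_is_partitioned(cpu_pmu) && govf)
		kvm_pmu_handle_guest_irq(govf); /* record in vCPU PMOVSSET */

The recorded bits are what a later patch uses to decide whether to
inject a PMU interrupt on the next guest entry.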
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h |  6 ++++++
 arch/arm64/include/asm/kvm_pmu.h |  2 ++
 arch/arm64/kvm/pmu-part.c        | 17 +++++++++++++++++
 drivers/perf/arm_pmuv3.c         | 15 +++++++++++----
 4 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 1687b4031ec2..26e149bdc8b0 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -180,6 +180,11 @@ static inline void write_pmintenset(u32 val)
 	write_sysreg(val, PMINTENSET);
 }
 
+static inline u32 read_pmintenset(void)
+{
+	return read_sysreg(PMINTENSET);
+}
+
 static inline void write_pmintenclr(u32 val)
 {
 	write_sysreg(val, PMINTENCLR);
@@ -245,6 +250,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 	return ~0;
 }
 
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI 0
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 4098d4ad03d9..4cefd9fcf52b 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -30,6 +30,7 @@ u64 kvm_pmu_host_counter_mask(struct arm_pmu *pmu);
 u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
+void kvm_pmu_handle_guest_irq(u64 govf);
 
 #else
 
@@ -74,6 +75,7 @@ static inline u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu)
 
 static inline void kvm_pmu_host_counters_enable(void) {}
 static inline void kvm_pmu_host_counters_disable(void) {}
+static inline void kvm_pmu_handle_guest_irq(u64 govf) {}
 
 #endif
 
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 40c72caef34e..0e1a2235e992 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -319,3 +319,20 @@ void kvm_pmu_put(struct kvm_vcpu *vcpu)
 	val = read_pmintenset();
 	__vcpu_sys_reg(vcpu, PMINTENSET_EL1) = val & mask;
 }
+
+/**
+ * kvm_pmu_handle_guest_irq() - Record IRQs in guest counters
+ * @govf: Bitmask of guest overflowed counters
+ *
+ * Record IRQs from overflows in guest-reserved counters in the VCPU
+ * register for the guest to clear later.
+ */
+void kvm_pmu_handle_guest_irq(u64 govf)
+{
+	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+	if (!vcpu)
+		return;
+
+	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= govf;
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index f447a0f10e2b..20d9b35260d9 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -739,6 +739,8 @@ static u64 armv8pmu_getreset_flags(void)
 
 	/* Write to clear flags */
 	value &= ARMV8_PMU_CNT_MASK_ALL;
+	/* Only reset interrupt-enabled counters. */
+	value &= read_pmintenset();
 	write_pmovsclr(value);
 
 	return value;
@@ -841,6 +843,7 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 {
 	u64 pmovsr;
+	u64 govf;
 	struct perf_sample_data data;
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
@@ -867,19 +870,17 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	 * to prevent skews in group events.
 	 */
 	armv8pmu_stop(cpu_pmu);
+
 	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS) {
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
 
 		/* Ignore if we don't have an event. */
-		if (!event)
-			continue;
-
 		/*
 		 * We have a single interrupt for all counters. Check that
		 * each counter has overflowed before we process it.
		 */
-		if (!armv8pmu_counter_has_overflowed(pmovsr, idx))
+		if (!event || !armv8pmu_counter_has_overflowed(pmovsr, idx))
			continue;
 
		hwc = &event->hw;
@@ -896,6 +897,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
		if (perf_event_overflow(event, &data, regs))
			cpu_pmu->disable(event);
	}
+
+	govf = pmovsr & kvm_pmu_guest_counter_mask(cpu_pmu);
+
+	if (kvm_pmu_is_partitioned(cpu_pmu) && govf)
+		kvm_pmu_handle_guest_irq(govf);
+
	armv8pmu_start(cpu_pmu);
 
	return IRQ_HANDLED;
-- 
2.49.0.1204.g71687c7c1d-goog
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:27:00 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Mime-Version: 1.0
Message-ID: <20250602192702.2125115-16-coltonlewis@google.com>
Subject: [PATCH 15/17] KVM: arm64: Inject recorded guest interrupts
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis
Content-Type: text/plain; charset="utf-8"

When we re-enter the VM after handling a PMU interrupt, calculate
whether any of the guest counters overflowed and, if so, inject an
interrupt into the guest.
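The check follows the architectural overflow condition, restricted to
guest-owned counters (a sketch of kvm_pmu_part_overflow_status() from
the diff below; guest_mask stands for kvm_pmu_guest_counter_mask()):

	return (read_pmcr() & ARMV8_PMU_PMCR_E) &&
	       (guest_mask & __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &
		read_pmintenset());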
diff --git a/arch/arm64/kvm/pmu-part.c b/arch/arm64/kvm/pmu-part.c
index 0e1a2235e992..1d85e7ce76c8 100644
--- a/arch/arm64/kvm/pmu-part.c
+++ b/arch/arm64/kvm/pmu-part.c
@@ -252,7 +252,7 @@ void kvm_pmu_load(struct kvm_vcpu *vcpu)
	write_pmcr(val);
 
	/*
-	 * Loading these registers is tricky because of
+	 * Loading these registers is more intricate because of
	 * 1. Applying only the bits for guest counters (indicated by mask)
	 * 2. Setting and clearing are different registers
	 */
@@ -336,3 +336,23 @@ void kvm_pmu_handle_guest_irq(u64 govf)
 
	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= govf;
 }
+
+/**
+ * kvm_pmu_part_overflow_status() - Determine if any guest counters have overflowed
+ * @vcpu: Pointer to struct kvm_vcpu
+ *
+ * Determine if any guest counters have overflowed and therefore an
+ * IRQ needs to be injected into the guest.
+ *
+ * Return: True if there was an overflow, false otherwise
+ */
+bool kvm_pmu_part_overflow_status(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 mask = kvm_pmu_guest_counter_mask(pmu);
+	u64 pmovs = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+	u64 pmint = read_pmintenset();
+	u64 pmcr = read_pmcr();
+
+	return (pmcr & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint);
+}
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 2dcfac3ea9c6..6c3151dec25a 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -425,7 +425,11 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
	struct kvm_pmu *pmu = &vcpu->arch.pmu;
	bool overflow;
 
-	overflow = kvm_pmu_overflow_status(vcpu);
+	if (kvm_vcpu_pmu_is_partitioned(vcpu))
+		overflow = kvm_pmu_part_overflow_status(vcpu);
+	else
+		overflow = kvm_pmu_emul_overflow_status(vcpu);
+
	if (pmu->irq_level == overflow)
		return;
 
@@ -694,6 +698,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
			return -EBUSY;
 
		kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
+		vcpu->arch.pmu.irq_num = irq;
		return 0;
	}
 
-- 
2.49.0.1204.g71687c7c1d-goog
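To make the predicate in kvm_pmu_part_overflow_status() concrete,
consider hypothetical values for an 8-counter PMU where the guest
owns counters 0-5:

	u64 mask  = 0x3f;		/* guest owns counters 0-5 */
	u64 pmovs = 0x41;		/* counters 0 and 6 overflowed */
	u64 pmint = 0x3f;		/* guest overflow IRQs all enabled */
	u64 pmcr  = ARMV8_PMU_PMCR_E;	/* counters globally enabled */

	/* Counter 6 is host-owned, so only bit 0 survives the mask:
	 * 0x3f & 0x41 & 0x3f == 0x1, and PMCR_EL0.E is set, so an
	 * interrupt is injected into the guest. */
	bool inject = (pmcr & ARMV8_PMU_PMCR_E) && (mask & pmovs & pmint);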
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:27:01 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Message-ID: <20250602192702.2125115-17-coltonlewis@google.com>
Subject: [PATCH 16/17] KVM: arm64: Add ioctl to partition the PMU when supported
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis <coltonlewis@google.com>

Add a KVM_ARM_PARTITION_PMU vcpu ioctl to partition the PMU for a
given vCPU with a specified number of counters reserved for the
host. Add a corresponding KVM_CAP_ARM_PARTITION_PMU capability to
check for this ability. The ioctl is allowed on an initialized vCPU
where PMUv3, VHE, and FGT are supported. If it is never called,
partitioning falls back on the kernel command line parameter
kvm.reserved_host_counters as before.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 Documentation/virt/kvm/api.rst | 16 ++++++++++++++++
 arch/arm64/kvm/arm.c           | 21 +++++++++++++++++++++
 include/uapi/linux/kvm.h       |  4 ++++
 3 files changed, 41 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index fe3d6b5d2acc..88b851cb6f66 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6464,6 +6464,22 @@ the capability to be present.
 
 `flags` must currently be zero.
 
+4.144 KVM_ARM_PARTITION_PMU
+---------------------------
+
+:Capability: KVM_CAP_ARM_PARTITION_PMU
+:Architectures: arm64
+:Type: vcpu ioctl
+:Parameters: arg[0] is the number of counters to reserve for the host
+
+This API controls the ability to partition the PMU counters into two
+sets, one reserved for the host and one reserved for the guest. When
+partitioned, KVM allows the guest direct hardware access to the most
+commonly used PMU capabilities for its counters, bypassing the KVM
+traps in the standard emulated PMU implementation and reducing the
+overhead of any guest software that uses PMU capabilities such as
+`perf`. The host PMU driver will not access any of the counters or
+bits reserved for the guest.
 
 .. _kvm_run:
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4a1cc7b72295..1c44160d3b2d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #define CREATE_TRACE_POINTS
@@ -38,6 +39,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -382,6 +384,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
	case KVM_CAP_ARM_PMU_V3:
		r = kvm_supports_guest_pmuv3();
		break;
+	case KVM_CAP_ARM_PARTITION_PMU:
+		r = kvm_pmu_partition_supported();
+		break;
	case KVM_CAP_ARM_INJECT_SERROR_ESR:
		r = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
		break;
@@ -1809,6 +1814,22 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 
		return kvm_arm_vcpu_finalize(vcpu, what);
	}
+	case KVM_ARM_PARTITION_PMU: {
+		struct arm_pmu *pmu;
+		u8 host_counters;
+
+		if (unlikely(!kvm_vcpu_initialized(vcpu)))
+			return -ENOEXEC;
+
+		if (!kvm_pmu_partition_supported())
+			return -EPERM;
+
+		if (copy_from_user(&host_counters, argp, sizeof(host_counters)))
+			return -EFAULT;
+
+		pmu = vcpu->kvm->arch.arm_pmu;
+		return kvm_pmu_partition(pmu, host_counters);
+	}
	default:
		r = -EINVAL;
	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c9d4a908976e..f7387c0696d5 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -932,6 +932,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
 #define KVM_CAP_ARM_EL2 240
 #define KVM_CAP_ARM_EL2_E2H0 241
+#define KVM_CAP_ARM_PARTITION_PMU 242
 
 struct kvm_irq_routing_irqchip {
	__u32 irqchip;
@@ -1410,6 +1411,9 @@ struct kvm_enc_region {
 #define KVM_GET_SREGS2 _IOR(KVMIO, 0xcc, struct kvm_sregs2)
 #define KVM_SET_SREGS2 _IOW(KVMIO, 0xcd, struct kvm_sregs2)
 
+/* Available with KVM_CAP_ARM_PARTITION_PMU */
+#define KVM_ARM_PARTITION_PMU _IOWR(KVMIO, 0xce, u8)
+
 #define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0)
 #define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1)
 
-- 
2.49.0.1204.g71687c7c1d-goog
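For illustration, a userspace caller of the new interface might look
like the sketch below. This is a hypothetical helper, not part of
the patch: error handling is abbreviated, vcpu_fd is assumed to be
an already-initialized vCPU file descriptor, and the error cases
mirror the returns in the kvm_arch_vcpu_ioctl() handler above.

	#include <errno.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Reserve one counter for the host; the guest gets the rest. */
	static int partition_pmu(int kvm_fd, int vcpu_fd)
	{
		uint8_t host_counters = 1;

		if (!ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PARTITION_PMU))
			return -1; /* partitioning not supported here */

		if (ioctl(vcpu_fd, KVM_ARM_PARTITION_PMU, &host_counters) < 0) {
			/* ENOEXEC: vCPU not initialized; EPERM: PMUv3/VHE/FGT
			 * prerequisite missing; EFAULT: bad pointer. */
			fprintf(stderr, "KVM_ARM_PARTITION_PMU: %d\n", errno);
			return -1;
		}

		return 0;
	}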
From nobody Thu Dec 18 14:11:45 2025
Date: Mon, 2 Jun 2025 19:27:02 +0000
In-Reply-To: <20250602192702.2125115-1-coltonlewis@google.com>
References: <20250602192702.2125115-1-coltonlewis@google.com>
Message-ID: <20250602192702.2125115-18-coltonlewis@google.com>
Subject: [PATCH 17/17] KVM: arm64: selftests: Add test case for partitioned PMU
From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Russell King, Catalin Marinas,
 Will Deacon, Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Mark Rutland, Shuah Khan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Colton Lewis <coltonlewis@google.com>

Run separate test cases for a partitioned PMU in
vpmu_counter_access. Notably, partitioning the PMU untraps
PMCR_EL0.N, so that field is no longer settable by KVM.
Add a boolean argument to run_access_test() that, when true,
partitions the PMU by reserving one host counter, then run the test
with the only PMCR_EL0.N value that partitioning implies: one less
than the number of counters on the host system.
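Concretely, the relationship the test relies on (a sketch using the
test's own helpers, assuming that reserving host_counters counters
leaves the remainder to the guest):

	/* The single PMCR_EL0.N value the partitioned case accepts. */
	uint64_t host_n = get_pmcr_n_limit();       /* counters in hardware */
	uint8_t host_counters = 1;                  /* reserved for the host */
	uint64_t guest_n = host_n - host_counters;  /* == pmcr_n - 1 below */

	run_access_test(guest_n, true);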
Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 tools/include/uapi/linux/kvm.h                |  2 +
 .../selftests/kvm/arm64/vpmu_counter_access.c | 40 ++++++++++++++++---
 2 files changed, 37 insertions(+), 5 deletions(-)

diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index b6ae8ad8934b..cb72b57b9b6c 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -930,6 +930,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_ARM_PARTITION_PMU 242
 
 struct kvm_irq_routing_irqchip {
	__u32 irqchip;
@@ -1356,6 +1357,7 @@ struct kvm_vfio_spapr_tce {
 #define KVM_S390_SET_CMMA_BITS _IOW(KVMIO, 0xb9, struct kvm_s390_cmma_log)
 /* Memory Encryption Commands */
 #define KVM_MEMORY_ENCRYPT_OP _IOWR(KVMIO, 0xba, unsigned long)
+#define KVM_ARM_PARTITION_PMU _IOWR(KVMIO, 0xce, u8)
 
 struct kvm_enc_region {
	__u64 addr;
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index f16b3b27e32e..e06448c1fbb5 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -369,6 +369,7 @@ static void guest_code(uint64_t expected_pmcr_n)
	pmcr = read_sysreg(pmcr_el0);
	pmcr_n = get_pmcr_n(pmcr);
 
+	/* __GUEST_ASSERT(0, "Expect PMCR: %lx", pmcr); */
	/* Make sure that PMCR_EL0.N indicates the value userspace set */
	__GUEST_ASSERT(pmcr_n == expected_pmcr_n,
		       "Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
@@ -508,16 +509,18 @@ static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_access_test(uint64_t pmcr_n)
+static void run_access_test(uint64_t pmcr_n, bool partition)
 {
	uint64_t sp;
	struct kvm_vcpu *vcpu;
	struct kvm_vcpu_init init;
+	uint8_t host_counters = (uint8_t)partition;
 
	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
 
	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
	vcpu = vpmu_vm.vcpu;
+	vcpu_ioctl(vcpu, KVM_ARM_PARTITION_PMU, &host_counters);
 
	/* Save the initial sp to restore them later to run the guest again */
	sp = vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1));
@@ -529,6 +532,8 @@ static void run_access_test(uint64_t pmcr_n)
	 * check if PMCR_EL0.N is preserved.
	 */
	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
+	vcpu_ioctl(vcpu, KVM_ARM_PARTITION_PMU, &host_counters);
+
	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
	aarch64_vcpu_setup(vcpu, &init);
	vcpu_init_descriptor_tables(vcpu);
@@ -609,7 +614,7 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
  */
 static void run_error_test(uint64_t pmcr_n)
 {
-	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pmcr_n);
 
	test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
	destroy_vpmu_vm();
@@ -629,20 +634,45 @@ static uint64_t get_pmcr_n_limit(void)
	return get_pmcr_n(pmcr);
 }
 
-int main(void)
+void test_emulated_pmu(void)
 {
	uint64_t i, pmcr_n;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+	pr_info("Testing Emulated PMU\n");
 
	pmcr_n = get_pmcr_n_limit();
	for (i = 0; i <= pmcr_n; i++) {
-		run_access_test(i);
+		run_access_test(i, false);
		run_pmregs_validity_test(i);
	}
 
	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
		run_error_test(i);
+}
+
+void test_partitioned_pmu(void)
+{
+	uint64_t i, pmcr_n;
+
+	pr_info("Testing Partitioned PMU\n");
+
+	pmcr_n = get_pmcr_n_limit();
+	run_access_test(pmcr_n - 1, true);
+
+	/* Partitioning implies only one PMCR.N allowed */
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		if (i != pmcr_n)
+			run_error_test(i);
+}
+
+int main(void)
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	test_emulated_pmu();
+
+	if (kvm_has_cap(KVM_CAP_ARM_PARTITION_PMU))
+		test_partitioned_pmu();
 
	return 0;
 }
-- 
2.49.0.1204.g71687c7c1d-goog