From: Leo Yan
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Oliver Upton, Catalin Marinas, Will Deacon, Arnaldo Carvalho de Melo,
    John Garry, James Clark, Mike Leach, Peter Zijlstra, Ingo Molnar,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    linux-perf-users@vger.kernel.org
Cc: Leo Yan
Subject: [PATCH v1 1/3] KVM: arm64: Dynamically register callback for tracepoints
Date: Sat, 5 Nov 2022 07:23:09 +0000
Message-Id: <20221105072311.8214-2-leo.yan@linaro.org>
In-Reply-To: <20221105072311.8214-1-leo.yan@linaro.org>
References: <20221105072311.8214-1-leo.yan@linaro.org>

This commit doesn't change any functionality; it is purely a refactoring.
It registers callbacks for tracepoints dynamically; in this way, the
existing trace events (in this case kvm_entry and kvm_exit) are kept.
This is a preparation for adding new trace events in a later patch.

Signed-off-by: Leo Yan
---
 arch/arm64/kvm/Makefile    |  2 +-
 arch/arm64/kvm/arm.c       |  4 ++--
 arch/arm64/kvm/trace.c     | 29 +++++++++++++++++++++++++++++
 arch/arm64/kvm/trace_arm.h |  8 ++++++++
 4 files changed, 40 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/kvm/trace.c

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 5e33c2d4645a..4e641d2df7ad 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -14,7 +14,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
	 inject_fault.o va_layout.o handle_exit.o \
	 guest.o debug.o reset.o sys_regs.o stacktrace.o \
	 vgic-sys-reg-v3.o fpsimd.o pkvm.o \
-	 arch_timer.o trng.o vmid.o \
+	 arch_timer.o trng.o vmid.o trace.o \
	 vgic/vgic.o vgic/vgic-init.o \
	 vgic/vgic-irqfd.o vgic/vgic-v2.o \
	 vgic/vgic-v3.o vgic/vgic-v4.o \

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 94d33e296e10..03ed5f6c92bc 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -917,7 +917,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
	/**************************************************************
	 * Enter the guest
	 */
-	trace_kvm_entry(*vcpu_pc(vcpu));
+	trace_kvm_entry_tp(vcpu);
	guest_timing_enter_irqoff();

	ret = kvm_arm_vcpu_enter_exit(vcpu);
@@ -974,7 +974,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)

	local_irq_enable();

-	trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
+	trace_kvm_exit_tp(ret, vcpu);

	/* Exit types that need handling before we can be preempted */
	handle_exit_early(vcpu, ret);

diff --git a/arch/arm64/kvm/trace.c b/arch/arm64/kvm/trace.c
new file mode 100644
index 000000000000..d25a3db994e2
--- /dev/null
+++ b/arch/arm64/kvm/trace.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+
+#include
+
+#include "trace_arm.h"
+
+static void kvm_entry_tp(void *data, struct kvm_vcpu *vcpu)
+{
+	if (trace_kvm_entry_enabled())
+		trace_kvm_entry(*vcpu_pc(vcpu));
+}
+
+static void kvm_exit_tp(void *data, int ret, struct kvm_vcpu *vcpu)
+{
+	if (trace_kvm_exit_enabled())
+		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu),
+			       *vcpu_pc(vcpu));
+}
+
+static int __init kvm_tp_init(void)
+{
+	register_trace_kvm_entry_tp(kvm_entry_tp, NULL);
+	register_trace_kvm_exit_tp(kvm_exit_tp, NULL);
+	return 0;
+}
+
+core_initcall(kvm_tp_init)

diff --git a/arch/arm64/kvm/trace_arm.h b/arch/arm64/kvm/trace_arm.h
index 33e4e7dd2719..ef02ae93b28b 100644
--- a/arch/arm64/kvm/trace_arm.h
+++ b/arch/arm64/kvm/trace_arm.h
@@ -11,6 +11,10 @@
 /*
  * Tracepoints for entry/exit to guest
  */
+DECLARE_TRACE(kvm_entry_tp,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu));
+
 TRACE_EVENT(kvm_entry,
	TP_PROTO(unsigned long vcpu_pc),
	TP_ARGS(vcpu_pc),
@@ -26,6 +30,10 @@ TRACE_EVENT(kvm_entry,
	TP_printk("PC: 0x%016lx", __entry->vcpu_pc)
 );

+DECLARE_TRACE(kvm_exit_tp,
+	TP_PROTO(int ret, struct kvm_vcpu *vcpu),
+	TP_ARGS(ret, vcpu));
+
 TRACE_EVENT(kvm_exit,
	TP_PROTO(int ret, unsigned int esr_ec, unsigned long vcpu_pc),
	TP_ARGS(ret, esr_ec, vcpu_pc),
-- 
2.34.1
From: Leo Yan
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Oliver Upton, Catalin Marinas, Will Deacon, Arnaldo Carvalho de Melo,
    John Garry, James Clark, Mike Leach, Peter Zijlstra, Ingo Molnar,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    linux-perf-users@vger.kernel.org
Cc: Leo Yan
Subject: [PATCH v1 2/3] KVM: arm64: Add trace events with field 'vcpu_id'
Date: Sat, 5 Nov 2022 07:23:10 +0000
Message-Id: <20221105072311.8214-3-leo.yan@linaro.org>
In-Reply-To: <20221105072311.8214-1-leo.yan@linaro.org>
References: <20221105072311.8214-1-leo.yan@linaro.org>

The existing trace events kvm_entry and kvm_exit don't carry the
virtual CPU id, so the perf tool has no way to produce per-vCPU
statistics; and because trace events are ABI, we cannot change them
without breaking that ABI.

For these reasons, this patch adds two trace events, kvm_entry_v2 and
kvm_exit_v2, with a new field 'vcpu_id'. To support both the old and
new events, the tracepoint callback checks whether each event is
enabled and, if so, invokes the corresponding trace event.
Signed-off-by: Leo Yan
---
 arch/arm64/kvm/trace.c     |  6 +++++
 arch/arm64/kvm/trace_arm.h | 45 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/arch/arm64/kvm/trace.c b/arch/arm64/kvm/trace.c
index d25a3db994e2..d9b2587c77c3 100644
--- a/arch/arm64/kvm/trace.c
+++ b/arch/arm64/kvm/trace.c
@@ -10,6 +10,9 @@ static void kvm_entry_tp(void *data, struct kvm_vcpu *vcpu)
 {
	if (trace_kvm_entry_enabled())
		trace_kvm_entry(*vcpu_pc(vcpu));
+
+	if (trace_kvm_entry_v2_enabled())
+		trace_kvm_entry_v2(vcpu);
 }

 static void kvm_exit_tp(void *data, int ret, struct kvm_vcpu *vcpu)
@@ -17,6 +20,9 @@ static void kvm_exit_tp(void *data, int ret, struct kvm_vcpu *vcpu)
	if (trace_kvm_exit_enabled())
		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu),
			       *vcpu_pc(vcpu));
+
+	if (trace_kvm_exit_v2_enabled())
+		trace_kvm_exit_v2(ret, vcpu);
 }

 static int __init kvm_tp_init(void)

diff --git a/arch/arm64/kvm/trace_arm.h b/arch/arm64/kvm/trace_arm.h
index ef02ae93b28b..932c9d0c36f3 100644
--- a/arch/arm64/kvm/trace_arm.h
+++ b/arch/arm64/kvm/trace_arm.h
@@ -4,6 +4,7 @@

 #include
 #include
+#include

 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM kvm
@@ -30,6 +31,23 @@ TRACE_EVENT(kvm_entry,
	TP_printk("PC: 0x%016lx", __entry->vcpu_pc)
 );

+TRACE_EVENT(kvm_entry_v2,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, vcpu_id)
+		__field(unsigned long, vcpu_pc)
+	),
+
+	TP_fast_assign(
+		__entry->vcpu_id = vcpu->vcpu_id;
+		__entry->vcpu_pc = *vcpu_pc(vcpu);
+	),
+
+	TP_printk("vcpu: %u PC: 0x%016lx", __entry->vcpu_id, __entry->vcpu_pc)
+);
+
 DECLARE_TRACE(kvm_exit_tp,
	TP_PROTO(int ret, struct kvm_vcpu *vcpu),
	TP_ARGS(ret, vcpu));
@@ -57,6 +75,33 @@ TRACE_EVENT(kvm_exit,
		  __entry->vcpu_pc)
 );

+TRACE_EVENT(kvm_exit_v2,
+	TP_PROTO(int ret, struct kvm_vcpu *vcpu),
+	TP_ARGS(ret, vcpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, vcpu_id)
+		__field(int, ret)
+		__field(unsigned int, esr_ec)
+		__field(unsigned long, vcpu_pc)
+	),
+
+	TP_fast_assign(
+		__entry->vcpu_id = vcpu->vcpu_id;
+		__entry->ret = ARM_EXCEPTION_CODE(ret);
+		__entry->esr_ec = ARM_EXCEPTION_IS_TRAP(ret) ?
+				  kvm_vcpu_trap_get_class(vcpu) : 0;
+		__entry->vcpu_pc = *vcpu_pc(vcpu);
+	),
+
+	TP_printk("%s: vcpu: %u HSR_EC: 0x%04x (%s), PC: 0x%016lx",
+		  __print_symbolic(__entry->ret, kvm_arm_exception_type),
+		  __entry->vcpu_id,
+		  __entry->esr_ec,
+		  __print_symbolic(__entry->esr_ec, kvm_arm_exception_class),
+		  __entry->vcpu_pc)
+);
+
 TRACE_EVENT(kvm_guest_fault,
	TP_PROTO(unsigned long vcpu_pc, unsigned long hsr,
		 unsigned long hxfar,
-- 
2.34.1
From: Leo Yan
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Oliver Upton, Catalin Marinas, Will Deacon, Arnaldo Carvalho de Melo,
    John Garry, James Clark, Mike Leach, Peter Zijlstra, Ingo Molnar,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    linux-perf-users@vger.kernel.org
Cc: Leo Yan
Subject: [PATCH v1 3/3] perf arm64: Support virtual CPU ID for kvm-stat
Date: Sat, 5 Nov 2022 07:23:11 +0000
Message-Id: <20221105072311.8214-4-leo.yan@linaro.org>
In-Reply-To: <20221105072311.8214-1-leo.yan@linaro.org>
References: <20221105072311.8214-1-leo.yan@linaro.org>

Since the two trace events kvm_entry_v2/kvm_exit_v2 have been added, we
can use their field "vcpu_id" to learn the virtual CPU ID. To keep
backward compatibility, we still need to rely on the trace events
kvm_entry/kvm_exit on old kernels.

This patch adds the Arm64 functions setup_kvm_events_tp() and
arm64__setup_kvm_tp(). By probing the nodes under the sysfs tracing
folder, it dynamically selects the trace events
kvm_entry_v2/kvm_exit_v2 when the kernel provides them; otherwise, it
falls back to the events kvm_entry/kvm_exit for backward compatibility.

cpu_isa_init() now also invokes arm64__setup_kvm_tp(), which lets the
command "perf kvm stat report" set up the trace events dynamically as
well.
Before:

  # perf kvm stat report --vcpu 27

  Analyze events for all VMs, VCPU 27:

               VM-EXIT    Samples  Samples%     Time%    Min Time    Max Time         Avg time

  Total Samples:0, Total events handled time:0.00us.

After:

  # perf kvm stat report --vcpu 27

  Analyze events for all VMs, VCPU 27:

               VM-EXIT    Samples  Samples%     Time%    Min Time    Max Time         Avg time

                 SYS64        808    98.54%    91.24%      0.00us    303.76us      3.46us ( +-  13.54% )
                   WFx         10     1.22%     7.79%      0.00us     69.48us     23.91us ( +-  25.91% )
                   IRQ          2     0.24%     0.97%      0.00us     22.64us     14.82us ( +-  52.77% )

  Total Samples:820, Total events handled time:3068.28us.

Signed-off-by: Leo Yan
---
 tools/perf/arch/arm64/util/kvm-stat.c | 54 ++++++++++++++++++++++++---
 1 file changed, 49 insertions(+), 5 deletions(-)

diff --git a/tools/perf/arch/arm64/util/kvm-stat.c b/tools/perf/arch/arm64/util/kvm-stat.c
index 73d18e0ed6f6..1ba54ce3d7d8 100644
--- a/tools/perf/arch/arm64/util/kvm-stat.c
+++ b/tools/perf/arch/arm64/util/kvm-stat.c
@@ -3,6 +3,7 @@
 #include
 #include "../../../util/evsel.h"
 #include "../../../util/kvm-stat.h"
+#include "../../../util/tracepoint.h"
 #include "arm64_exception_types.h"
 #include "debug.h"

@@ -10,18 +11,28 @@ define_exit_reasons_table(arm64_exit_reasons, kvm_arm_exception_type);
 define_exit_reasons_table(arm64_trap_exit_reasons, kvm_arm_exception_class);

 const char *kvm_trap_exit_reason = "esr_ec";
-const char *vcpu_id_str = "id";
+const char *vcpu_id_str = "vcpu_id";
 const int decode_str_len = 20;
 const char *kvm_exit_reason = "ret";
-const char *kvm_entry_trace = "kvm:kvm_entry";
-const char *kvm_exit_trace = "kvm:kvm_exit";
+const char *kvm_entry_trace;
+const char *kvm_exit_trace;

-const char *kvm_events_tp[] = {
+#define NR_TPS 2
+
+static const char *kvm_events_tp_v1[NR_TPS + 1] = {
	"kvm:kvm_entry",
	"kvm:kvm_exit",
	NULL,
 };

+static const char *kvm_events_tp_v2[NR_TPS + 1] = {
+	"kvm:kvm_entry_v2",
+	"kvm:kvm_exit_v2",
+	NULL,
+};
+
+const char *kvm_events_tp[NR_TPS + 1];
+
 static void event_get_key(struct evsel *evsel,
			  struct perf_sample *sample,
			  struct event_key *key)
@@ -78,8 +89,41 @@ const char * const kvm_skip_events[] = {
	NULL,
 };

-int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid __maybe_unused)
+static int arm64__setup_kvm_tp(struct perf_kvm_stat *kvm)
 {
+	const char **kvm_events, **events_ptr;
+	int i, nr_tp = 0;
+
+	if (is_valid_tracepoint("kvm:kvm_entry_v2")) {
+		kvm_events = kvm_events_tp_v2;
+		kvm_entry_trace = "kvm:kvm_entry_v2";
+		kvm_exit_trace = "kvm:kvm_exit_v2";
+	} else {
+		kvm_events = kvm_events_tp_v1;
+		kvm_entry_trace = "kvm:kvm_entry";
+		kvm_exit_trace = "kvm:kvm_exit";
+	}
+
+	for (events_ptr = kvm_events; *events_ptr; events_ptr++) {
+		if (!is_valid_tracepoint(*events_ptr))
+			return -1;
+		nr_tp++;
+	}
+
+	for (i = 0; i < nr_tp; i++)
+		kvm_events_tp[i] = kvm_events[i];
+	kvm_events_tp[i] = NULL;
+
	kvm->exit_reasons_isa = "arm64";
	return 0;
 }
+
+int setup_kvm_events_tp(struct perf_kvm_stat *kvm)
+{
+	return arm64__setup_kvm_tp(kvm);
+}
+
+int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid __maybe_unused)
+{
+	return arm64__setup_kvm_tp(kvm);
+}
-- 
2.34.1