Date: Mon, 23 Oct 2023 16:40:00 -0700
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, Roman Kagan, Jim Mattson, Dapeng Mi, Like Xu
Reply-To: Sean Christopherson
Subject: [PATCH 6/6] KVM: x86/pmu: Track emulated counter events instead of previous counter
Message-ID: <20231023234000.2499267-7-seanjc@google.com>
In-Reply-To: <20231023234000.2499267-1-seanjc@google.com>
References: <20231023234000.2499267-1-seanjc@google.com>
X-Mailer: git-send-email 2.42.0.758.gaed0368e0e-goog
Explicitly track emulated counter events instead of using the common
counter value that's shared with the hardware counter owned by perf.
Bumping the common counter requires snapshotting the pre-increment value
in order to detect overflow from emulation, and the snapshot approach is
inherently flawed.

Snapshotting the previous counter at every increment assumes that there
is at most one emulated counter event per emulated instruction (or
rather, between checks for KVM_REQ_PMU).  That mostly holds true today
because KVM only emulates (branch) instructions retired, but the
approach will fall apart if KVM ever supports event types that don't
have a 1:1 relationship with instructions.

And KVM already has a relevant bug, as handle_invalid_guest_state()
emulates multiple instructions without checking KVM_REQ_PMU, i.e. could
miss an overflow event due to clobbering pmc->prev_counter.  Not
checking KVM_REQ_PMU is problematic in both cases, but at least with the
emulated counter approach, the resulting behavior is delayed overflow
detection, as opposed to completely lost detection.

Cc: Mingwei Zhang
Cc: Roman Kagan
Cc: Jim Mattson
Cc: Dapeng Mi
Cc: Like Xu
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 17 +++++++++++++++-
 arch/x86/kvm/pmu.c              | 36 +++++++++++++++++++++++----------
 arch/x86/kvm/pmu.h              |  3 ++-
 3 files changed, 43 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d7036982332e..d8bc9ba88cfc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -500,8 +500,23 @@ struct kvm_pmc {
 	u8 idx;
 	bool is_paused;
 	bool intr;
+	/*
+	 * Base value of the PMC counter, relative to the *consumed* count in
+	 * the associated perf_event.  This value includes counter updates from
+	 * the perf_event and emulated_counter since the last time the counter
+	 * was reprogrammed, but it is *not* the current value as seen by the
+	 * guest or userspace.
+	 *
+	 * The count is relative to the associated perf_event so that KVM
+	 * doesn't need to reprogram the perf_event every time the guest writes
+	 * to the counter.
+	 */
 	u64 counter;
-	u64 prev_counter;
+	/*
+	 * PMC events triggered by KVM emulation that haven't been fully
+	 * processed, i.e. haven't undergone overflow detection.
+	 */
+	u64 emulated_counter;
 	u64 eventsel;
 	struct perf_event *perf_event;
 	struct kvm_vcpu *vcpu;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3725d001239d..f02cee222e9a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -127,9 +127,9 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
 
 	/*
-	 * Ignore overflow events for counters that are scheduled to be
-	 * reprogrammed, e.g. if a PMI for the previous event races with KVM's
-	 * handling of a related guest WRMSR.
+	 * Ignore asynchronous overflow events for counters that are scheduled
+	 * to be reprogrammed, e.g. if a PMI for the previous event races with
+	 * KVM's handling of a related guest WRMSR.
 	 */
 	if (test_and_set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi))
 		return;
@@ -226,13 +226,19 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 
 static void pmc_pause_counter(struct kvm_pmc *pmc)
 {
-	u64 counter = pmc->counter;
+	/*
+	 * Accumulate emulated events, even if the PMC was already paused, e.g.
+	 * if KVM emulated an event after a WRMSR, but before reprogramming, or
+	 * if KVM couldn't create a perf event.
+	 */
+	u64 counter = pmc->counter + pmc->emulated_counter;
 
-	if (!pmc->perf_event || pmc->is_paused)
-		return;
+	pmc->emulated_counter = 0;
 
 	/* update counter, reset event value to avoid redundant accumulation */
-	counter += perf_event_pause(pmc->perf_event, true);
+	if (pmc->perf_event && !pmc->is_paused)
+		counter += perf_event_pause(pmc->perf_event, true);
+
 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
 }
@@ -289,6 +295,14 @@ static void pmc_update_sample_period(struct kvm_pmc *pmc)
 
 void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
 {
+	/*
+	 * Drop any unconsumed accumulated counts, the WRMSR is a write, not a
+	 * read-modify-write.  Adjust the counter value so that its value is
+	 * relative to the current perf_event (if there is one), as reading the
+	 * current count is faster than pausing and reprogramming the event in
+	 * order to reset it to '0'.
+	 */
+	pmc->emulated_counter = 0;
 	pmc->counter += val - pmc_read_counter(pmc);
 	pmc->counter &= pmc_bitmask(pmc);
 	pmc_update_sample_period(pmc);
@@ -426,6 +440,7 @@ static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
 static void reprogram_counter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+	u64 prev_counter = pmc->counter;
 	u64 eventsel = pmc->eventsel;
 	u64 new_config = eventsel;
 	u8 fixed_ctr_ctrl;
@@ -435,7 +450,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 	if (!pmc_event_is_allowed(pmc))
 		goto reprogram_complete;
 
-	if (pmc->counter < pmc->prev_counter)
+	if (pmc->counter < prev_counter)
 		__kvm_perf_overflow(pmc, false);
 
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
@@ -475,7 +490,6 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 
 reprogram_complete:
 	clear_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->reprogram_pmi);
-	pmc->prev_counter = 0;
 }
 
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
@@ -701,6 +715,7 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 
 		pmc_stop_counter(pmc);
 		pmc->counter = 0;
+		pmc->emulated_counter = 0;
 
 		if (pmc_is_gp(pmc))
 			pmc->eventsel = 0;
@@ -772,8 +787,7 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
 
 static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 {
-	pmc->prev_counter = pmc->counter;
-	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
+	pmc->emulated_counter++;
 	kvm_pmu_request_counter_reprogram(pmc);
 }
 
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index cae85e550f60..7caeb3d8d4fd 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -66,7 +66,8 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 {
 	u64 counter, enabled, running;
 
-	counter = pmc->counter;
+	counter = pmc->counter + pmc->emulated_counter;
+
 	if (pmc->perf_event && !pmc->is_paused)
 		counter += perf_event_read_value(pmc->perf_event, &enabled,
 						 &running);
-- 
2.42.0.758.gaed0368e0e-goog
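
As a rough, self-contained illustration of the accounting model the
changelog describes (guest-visible count = base counter + pending
emulated events + live perf delta, with overflow detected when the
folded value wraps), here is a minimal sketch in plain C.  It is not
part of the patch: struct pmc, the perf_delta field, and the helper
names below are simplified stand-ins invented for this example, not
KVM or perf APIs.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for struct kvm_pmc; only the fields relevant here. */
struct pmc {
	uint64_t counter;           /* base count, relative to the consumed perf count */
	uint64_t emulated_counter;  /* emulated events not yet folded in */
	uint64_t bitmask;           /* counter width, e.g. 48 bits */
	uint64_t perf_delta;        /* stands in for the live perf_event delta */
};

/* What the guest would observe on RDPMC: base + emulated + live perf delta. */
static uint64_t pmc_read(const struct pmc *pmc)
{
	return (pmc->counter + pmc->emulated_counter + pmc->perf_delta) & pmc->bitmask;
}

/* Emulated event: only bump the pending count; folding happens later. */
static void pmc_incr_emulated(struct pmc *pmc)
{
	pmc->emulated_counter++;
}

/*
 * Pause/fold step, mirroring the idea in pmc_pause_counter(): accumulate
 * the emulated events and the perf delta into the base counter.  Overflow
 * is detected by comparing the folded value against the pre-fold base,
 * instead of relying on a prev_counter snapshot taken at every increment.
 */
static bool pmc_fold_and_check_overflow(struct pmc *pmc)
{
	uint64_t prev = pmc->counter;
	uint64_t counter = pmc->counter + pmc->emulated_counter + pmc->perf_delta;

	pmc->emulated_counter = 0;
	pmc->perf_delta = 0;
	pmc->counter = counter & pmc->bitmask;

	return pmc->counter < prev;	/* wrapped => overflow */
}

int main(void)
{
	struct pmc pmc = { .counter = (1ULL << 48) - 2, .bitmask = (1ULL << 48) - 1 };

	pmc_incr_emulated(&pmc);	/* e.g. one emulated "instruction retired" */
	pmc_incr_emulated(&pmc);	/* more events before any KVM_REQ_PMU check */
	pmc_incr_emulated(&pmc);

	printf("guest-visible count: %llu\n", (unsigned long long)pmc_read(&pmc));
	printf("overflow detected:   %d\n", pmc_fold_and_check_overflow(&pmc));
	return 0;
}

Running the sketch prints a guest-visible count of 1 and reports the
overflow at fold time, i.e. the "delayed but not lost" detection the
changelog refers to when multiple emulated events land between
KVM_REQ_PMU checks.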