From: "Vineeth Pillai (Google)"
To: Ben Segall, Borislav Petkov, Daniel Bristot de Oliveira, Dave Hansen,
    Dietmar Eggemann, H.
    Peter Anvin, Ingo Molnar, Juri Lelli, Mel Gorman, Paolo Bonzini,
    Andy Lutomirski, Peter Zijlstra, Sean Christopherson, Steven Rostedt,
    Thomas Gleixner, Valentin Schneider, Vincent Guittot, Vitaly Kuznetsov,
    Wanpeng Li
Cc: "Vineeth Pillai (Google)", Suleiman Souhlal, Masami Hiramatsu,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org,
    Joel Fernandes
Subject: [RFC PATCH 4/8] kvm: x86: boost vcpu threads on latency sensitive paths
Date: Wed, 13 Dec 2023 21:47:21 -0500
Message-ID: <20231214024727.3503870-5-vineeth@bitbyteword.org>
In-Reply-To: <20231214024727.3503870-1-vineeth@bitbyteword.org>
References: <20231214024727.3503870-1-vineeth@bitbyteword.org>

Proactively boost the vcpu thread when delivering an interrupt so that
the guest vcpu gets to run with minimal latency and service the
interrupt. The host knows that the guest vcpu is going to service an
irq/nmi as soon as it is delivered, and boosting the vcpu thread's
priority helps the guest avoid that latency. The timer interrupt is one
common scenario that benefits from this.

When a vcpu resumes from halt, it is because of an event such as a
timer expiry or an irq/nmi, all of which are latency sensitive. So it
also makes sense to boost the priority of the vcpu thread as it goes
idle: the wakeup will be for a latency-sensitive event, and the boost
does not hurt the host because the thread is scheduled out.

Co-developed-by: Joel Fernandes (Google)
Signed-off-by: Joel Fernandes (Google)
Signed-off-by: Vineeth Pillai (Google)
---
 arch/x86/kvm/i8259.c     | 2 +-
 arch/x86/kvm/lapic.c     | 8 ++++----
 arch/x86/kvm/svm/svm.c   | 2 +-
 arch/x86/kvm/vmx/vmx.c   | 2 +-
 include/linux/kvm_host.h | 8 ++++++++
 virt/kvm/kvm_main.c      | 8 ++++++++
 6 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index 8dec646e764b..6841ed802f00 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -62,7 +62,7 @@ static void pic_unlock(struct kvm_pic *s)
 		kvm_for_each_vcpu(i, vcpu, s->kvm) {
 			if (kvm_apic_accept_pic_intr(vcpu)) {
 				kvm_make_request(KVM_REQ_EVENT, vcpu);
-				kvm_vcpu_kick(vcpu);
+				kvm_vcpu_kick_boost(vcpu);
 				return;
 			}
 		}
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index e74e223f46aa..ae25176fddc8 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1309,12 +1309,12 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 		result = 1;
 		vcpu->arch.pv.pv_unhalted = 1;
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		kvm_vcpu_kick(vcpu);
+		kvm_vcpu_kick_boost(vcpu);
 		break;
 
 	case APIC_DM_SMI:
 		if (!kvm_inject_smi(vcpu)) {
-			kvm_vcpu_kick(vcpu);
+			kvm_vcpu_kick_boost(vcpu);
 			result = 1;
 		}
 		break;
@@ -1322,7 +1322,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 	case APIC_DM_NMI:
 		result = 1;
 		kvm_inject_nmi(vcpu);
-		kvm_vcpu_kick(vcpu);
+		kvm_vcpu_kick_boost(vcpu);
 		break;
 
 	case APIC_DM_INIT:
@@ -1901,7 +1901,7 @@ static void apic_timer_expired(struct kvm_lapic *apic, bool from_timer_fn)
 	atomic_inc(&apic->lapic_timer.pending);
 	kvm_make_request(KVM_REQ_UNBLOCK, vcpu);
 	if (from_timer_fn)
-		kvm_vcpu_kick(vcpu);
+		kvm_vcpu_kick_boost(vcpu);
 }
 
 static void start_sw_tscdeadline(struct kvm_lapic *apic)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c8466bc64b87..578c19aeef73 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3566,7 +3566,7 @@ void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
 	if (!READ_ONCE(vcpu->arch.apic->apicv_active)) {
 		/* Process the interrupt via kvm_check_and_inject_events(). */
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		kvm_vcpu_kick(vcpu);
+		kvm_vcpu_kick_boost(vcpu);
 		return;
 	}
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bc6f0fea48b4..b786cb2eb185 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4266,7 +4266,7 @@ static void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 	if (vmx_deliver_posted_interrupt(vcpu, vector)) {
 		kvm_lapic_set_irr(vector, apic);
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		kvm_vcpu_kick(vcpu);
+		kvm_vcpu_kick_boost(vcpu);
 	} else {
 		trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
 					   trig_mode, vector);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c6647f6312c9..f76680fbc60d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2296,11 +2296,19 @@ static inline bool kvm_vcpu_sched_enabled(struct kvm_vcpu *vcpu)
 {
 	return kvm_arch_vcpu_pv_sched_enabled(&vcpu->arch);
 }
+
+static inline void kvm_vcpu_kick_boost(struct kvm_vcpu *vcpu)
+{
+	kvm_vcpu_set_sched(vcpu, true);
+	kvm_vcpu_kick(vcpu);
+}
 #else
 static inline int kvm_vcpu_set_sched(struct kvm_vcpu *vcpu, bool boost)
 {
 	return 0;
 }
+
+#define kvm_vcpu_kick_boost kvm_vcpu_kick
 #endif
 
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 37748e2512e1..0dd8b84ed073 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3460,6 +3460,14 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		if (kvm_vcpu_check_block(vcpu) < 0)
 			break;
 
+		/*
+		 * Boost before scheduling out. The wakeup happens only on
+		 * an event or a signal, so it is beneficial for the vcpu
+		 * to be scheduled back in ASAP. The guest eventually
+		 * reaches its idle loop and will then request a deboost.
+		 */
+		kvm_vcpu_set_sched(vcpu, true);
+
 		waited = true;
 		schedule();
 	}
-- 
2.43.0
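
P.S. for reviewers reading this patch on its own: kvm_vcpu_set_sched()
is introduced earlier in this series, and kvm_vcpu_kick_boost() above
only pairs it with the existing kick. As a rough illustration of what
the boost could amount to on the host side, here is a minimal sketch,
assuming boosting maps to switching the task backing the vcpu to an RT
scheduling policy. It is not the implementation from this series; the
function name example_vcpu_boost and the priority value 8 are
hypothetical.

#include <linux/kvm_host.h>
#include <linux/sched.h>
#include <uapi/linux/sched/types.h>

/*
 * Hypothetical sketch, not this series' implementation: boost/deboost
 * the task backing a vcpu by moving it between an RT policy and the
 * fair class. The RT priority of 8 is made up for illustration.
 */
static int example_vcpu_boost(struct kvm_vcpu *vcpu, bool boost)
{
	struct sched_param param = {
		.sched_priority = boost ? 8 : 0,
	};
	struct task_struct *task;
	int ret;

	/* vcpu->pid is RCU-protected; take a reference on the task. */
	rcu_read_lock();
	task = get_pid_task(rcu_dereference(vcpu->pid), PIDTYPE_PID);
	rcu_read_unlock();
	if (!task)
		return -ESRCH;

	ret = sched_setscheduler_nocheck(task,
					 boost ? SCHED_RR : SCHED_NORMAL,
					 &param);
	put_task_struct(task);
	return ret;
}

The appeal of this shape is that the boost stays cheap on the delivery
paths: kvm_vcpu_kick_boost() adds a single call before the kick, and
the deboost is requested by the guest from its idle loop, as the
comment added to kvm_vcpu_block() describes.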