Reply-To: Sean Christopherson
Date: Thu, 22 May 2025 18:00:01 -0700
In-Reply-To:
 <20250523010004.3240643-1-seanjc@google.com>
Mime-Version: 1.0
References: <20250523010004.3240643-1-seanjc@google.com>
Message-ID: <20250523010004.3240643-57-seanjc@google.com>
Subject: [PATCH v2 56/59] KVM: SVM: Don't check vCPU's blocking status when toggling AVIC on/off
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Joerg Roedel, David Woodhouse, Lu Baolu
Cc: kvm@vger.kernel.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org, Sairaj Kodilkar, Vasant Hegde, Maxim Levitsky, Joao Martins, Francesco Lavra, David Matlack

Don't query a vCPU's blocking status when toggling AVIC on/off; barring
KVM bugs, the vCPU can't be blocking when refreshing AVIC controls.  And
if there are KVM bugs, ensuring the vCPU and its associated IRTEs are in
the correct state is desirable, i.e. well worth any overhead in a buggy
scenario.

Isolating the "real" load/put flows will allow moving the IOMMU IRTE
(de)activation logic from avic_refresh_apicv_exec_ctrl() to
avic_update_iommu_vcpu_affinity(), i.e. will allow updating the vCPU's
physical ID entry and its IRTEs in a common path, under a single
critical section of ir_list_lock.
Tested-by: Sairaj Kodilkar
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/avic.c | 65 +++++++++++++++++++++++------------------
 1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 9ddec6f3ad41..1e6e5d1f6b4e 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -828,7 +828,7 @@ static void avic_update_iommu_vcpu_affinity(struct kvm_vcpu *vcpu, int cpu)
 		WARN_ON_ONCE(amd_iommu_update_ga(cpu, irqfd->irq_bypass_data));
 }
 
-void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+static void __avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
 	int h_physical_id = kvm_cpu_get_apicid(cpu);
@@ -844,16 +844,6 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (WARN_ON_ONCE(vcpu->vcpu_id * sizeof(entry) >= PAGE_SIZE))
 		return;
 
-	/*
-	 * No need to update anything if the vCPU is blocking, i.e. if the vCPU
-	 * is being scheduled in after being preempted.  The CPU entries in the
-	 * Physical APIC table and IRTE are consumed iff IsRun{ning} is '1'.
-	 * If the vCPU was migrated, its new CPU value will be stuffed when the
-	 * vCPU unblocks.
-	 */
-	if (kvm_vcpu_is_blocking(vcpu))
-		return;
-
 	/*
 	 * Grab the per-vCPU interrupt remapping lock even if the VM doesn't
 	 * _currently_ have assigned devices, as that can change.  Holding
@@ -888,31 +878,33 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	spin_unlock_irqrestore(&svm->ir_list_lock, flags);
 }
 
-void avic_vcpu_put(struct kvm_vcpu *vcpu)
+void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	/*
+	 * No need to update anything if the vCPU is blocking, i.e. if the vCPU
+	 * is being scheduled in after being preempted.  The CPU entries in the
+	 * Physical APIC table and IRTE are consumed iff IsRun{ning} is '1'.
+	 * If the vCPU was migrated, its new CPU value will be stuffed when the
+	 * vCPU unblocks.
+	 */
+	if (kvm_vcpu_is_blocking(vcpu))
+		return;
+
+	__avic_vcpu_load(vcpu, cpu);
+}
+
+static void __avic_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
 	struct vcpu_svm *svm = to_svm(vcpu);
 	unsigned long flags;
-	u64 entry;
+	u64 entry = svm->avic_physical_id_entry;
 
 	lockdep_assert_preemption_disabled();
 
 	if (WARN_ON_ONCE(vcpu->vcpu_id * sizeof(entry) >= PAGE_SIZE))
 		return;
 
-	/*
-	 * Note, reading the Physical ID entry outside of ir_list_lock is safe
-	 * as only the pCPU that has loaded (or is loading) the vCPU is allowed
-	 * to modify the entry, and preemption is disabled.  I.e. the vCPU
-	 * can't be scheduled out and thus avic_vcpu_{put,load}() can't run
-	 * recursively.
-	 */
-	entry = svm->avic_physical_id_entry;
-
-	/* Nothing to do if IsRunning == '0' due to vCPU blocking. */
-	if (!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK))
-		return;
-
 	/*
 	 * Take and hold the per-vCPU interrupt remapping lock while updating
 	 * the Physical ID entry even though the lock doesn't protect against
@@ -932,7 +924,24 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
 	WRITE_ONCE(kvm_svm->avic_physical_id_table[vcpu->vcpu_id], entry);
 
 	spin_unlock_irqrestore(&svm->ir_list_lock, flags);
+}
 
+void avic_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Note, reading the Physical ID entry outside of ir_list_lock is safe
+	 * as only the pCPU that has loaded (or is loading) the vCPU is allowed
+	 * to modify the entry, and preemption is disabled.  I.e. the vCPU
+	 * can't be scheduled out and thus avic_vcpu_{put,load}() can't run
+	 * recursively.
+	 */
+	u64 entry = to_svm(vcpu)->avic_physical_id_entry;
+
+	/* Nothing to do if IsRunning == '0' due to vCPU blocking. */
+	if (!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK))
+		return;
+
+	__avic_vcpu_put(vcpu);
 }
 
 void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu)
@@ -973,9 +982,9 @@ void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 	avic_refresh_virtual_apic_mode(vcpu);
 
 	if (activated)
-		avic_vcpu_load(vcpu, vcpu->cpu);
+		__avic_vcpu_load(vcpu, vcpu->cpu);
 	else
-		avic_vcpu_put(vcpu);
+		__avic_vcpu_put(vcpu);
 
 	/*
 	 * Here, we go through the per-vcpu ir_list to update all existing
-- 
2.49.0.1151.ga128411c76-goog