From: Sean Christopherson
Date: Fri, 4 Apr 2025 12:38:34 -0700
To: Sean Christopherson, Paolo Bonzini, Joerg Roedel, David Woodhouse, Lu Baolu
Cc: kvm@vger.kernel.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
 Maxim Levitsky, Joao Martins, David Matlack
Message-ID: <20250404193923.1413163-20-seanjc@google.com>
In-Reply-To: <20250404193923.1413163-1-seanjc@google.com>
References: <20250404193923.1413163-1-seanjc@google.com>
X-Mailer: git-send-email 2.49.0.504.g3bcea36a83-goog
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: [PATCH 19/67] KVM: SVM: Drop superfluous "cache" of AVIC Physical ID entry pointer

Drop the vCPU's pointer to its AVIC Physical ID entry, and simply index
the table directly.  Caching a pointer address is completely unnecessary
for performance, and while the field technically caches the result of
the pointer calculation, it's all too easy to misinterpret the name and
think that the field somehow caches the _data_ in the table.

No functional change intended.
Suggested-by: Maxim Levitsky
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/avic.c | 27 +++++++++++++++------------
 arch/x86/kvm/svm/svm.h  |  1 -
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index ae6d2c00397f..c4e6c97b736f 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -303,8 +303,6 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
 		       AVIC_PHYSICAL_ID_ENTRY_VALID_MASK;
 	WRITE_ONCE(kvm_svm->avic_physical_id_table[id], new_entry);
 
-	svm->avic_physical_id_cache = &kvm_svm->avic_physical_id_table[id];
-
 	return 0;
 }
 
@@ -779,13 +777,16 @@ static int svm_ir_list_add(struct vcpu_svm *svm,
 			   struct kvm_kernel_irqfd *irqfd,
 			   struct amd_iommu_pi_data *pi)
 {
+	struct kvm_vcpu *vcpu = &svm->vcpu;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_svm *kvm_svm = to_kvm_svm(kvm);
 	unsigned long flags;
 	u64 entry;
 
 	if (WARN_ON_ONCE(!pi->ir_data))
 		return -EINVAL;
 
-	irqfd->irq_bypass_vcpu = &svm->vcpu;
+	irqfd->irq_bypass_vcpu = vcpu;
 	irqfd->irq_bypass_data = pi->ir_data;
 
 	spin_lock_irqsave(&svm->ir_list_lock, flags);
@@ -796,7 +797,7 @@ static int svm_ir_list_add(struct vcpu_svm *svm,
 	 * will update the pCPU info when the vCPU awkened and/or scheduled in.
 	 * See also avic_vcpu_load().
 	 */
-	entry = READ_ONCE(*(svm->avic_physical_id_cache));
+	entry = READ_ONCE(kvm_svm->avic_physical_id_table[vcpu->vcpu_id]);
 	if (entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK)
 		amd_iommu_update_ga(entry & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK,
 				    true, pi->ir_data);
@@ -976,17 +977,18 @@ avic_update_iommu_vcpu_affinity(struct kvm_vcpu *vcpu, int cpu, bool r)
 
 void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	u64 entry;
+	struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
 	int h_physical_id = kvm_cpu_get_apicid(cpu);
 	struct vcpu_svm *svm = to_svm(vcpu);
 	unsigned long flags;
+	u64 entry;
 
 	lockdep_assert_preemption_disabled();
 
 	if (WARN_ON(h_physical_id & ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
 		return;
 
-	if (WARN_ON_ONCE(!svm->avic_physical_id_cache))
+	if (WARN_ON_ONCE(vcpu->vcpu_id * sizeof(entry) >= PAGE_SIZE))
 		return;
 
 	/*
@@ -1008,14 +1010,14 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 */
 	spin_lock_irqsave(&svm->ir_list_lock, flags);
 
-	entry = READ_ONCE(*(svm->avic_physical_id_cache));
+	entry = READ_ONCE(kvm_svm->avic_physical_id_table[vcpu->vcpu_id]);
 	WARN_ON_ONCE(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
 
 	entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
 	entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
 	entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
 
-	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
+	WRITE_ONCE(kvm_svm->avic_physical_id_table[vcpu->vcpu_id], entry);
 	avic_update_iommu_vcpu_affinity(vcpu, h_physical_id, true);
 
 	spin_unlock_irqrestore(&svm->ir_list_lock, flags);
@@ -1023,13 +1025,14 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 void avic_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	u64 entry;
+	struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
 	struct vcpu_svm *svm = to_svm(vcpu);
 	unsigned long flags;
+	u64 entry;
 
 	lockdep_assert_preemption_disabled();
 
-	if (WARN_ON_ONCE(!svm->avic_physical_id_cache))
+	if (WARN_ON_ONCE(vcpu->vcpu_id * sizeof(entry) >= PAGE_SIZE))
 		return;
 
 	/*
@@ -1039,7 +1042,7 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
 	 * can't be scheduled out and thus avic_vcpu_{put,load}() can't run
 	 * recursively.
 	 */
-	entry = READ_ONCE(*(svm->avic_physical_id_cache));
+	entry = READ_ONCE(kvm_svm->avic_physical_id_table[vcpu->vcpu_id]);
 
 	/* Nothing to do if IsRunning == '0' due to vCPU blocking. */
 	if (!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK))
@@ -1058,7 +1061,7 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
 	avic_update_iommu_vcpu_affinity(vcpu, -1, 0);
 
 	entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
-	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
+	WRITE_ONCE(kvm_svm->avic_physical_id_table[vcpu->vcpu_id], entry);
 
 	spin_unlock_irqrestore(&svm->ir_list_lock, flags);
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 4c83b6b73714..e223e57f7def 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -301,7 +301,6 @@ struct vcpu_svm {
 
 	u32 ldr_reg;
 	u32 dfr_reg;
-	u64 *avic_physical_id_cache;
 
 	/*
 	 * Per-vcpu list of struct amd_svm_iommu_ir:
-- 
2.49.0.504.g3bcea36a83-goog