From: Sean Christopherson <seanjc@google.com>
Date: Thu, 10 Oct 2024 11:23:08 -0700
Subject: [PATCH v13 06/85] KVM: x86/mmu: Invert @can_unsync and rename to @synchronizing
Message-ID: <20241010182427.1434605-7-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Alex Bennée, Yan Zhao,
    David Matlack, David Stevens, Andrew Jones

Invert the polarity of "can_unsync" and rename the parameter to
"synchronizing" to allow a future change to set the Accessed bit if KVM
is synchronizing an existing SPTE.  Querying "can_unsync" in that case
is nonsensical, as the fact that KVM can't unsync SPTEs doesn't provide
any justification for setting the Accessed bit.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c          | 12 ++++++------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
 arch/x86/kvm/mmu/spte.c         |  4 ++--
 arch/x86/kvm/mmu/spte.h         |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  4 ++--
 6 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a8c64069aa89..0f21d6f76cab 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2795,7 +2795,7 @@ static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
  * be write-protected.
  */
 int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
-			    gfn_t gfn, bool can_unsync, bool prefetch)
+			    gfn_t gfn, bool synchronizing, bool prefetch)
 {
 	struct kvm_mmu_page *sp;
 	bool locked = false;
@@ -2810,12 +2810,12 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 
 	/*
 	 * The page is not write-tracked, mark existing shadow pages unsync
-	 * unless KVM is synchronizing an unsync SP (can_unsync = false).  In
-	 * that case, KVM must complete emulation of the guest TLB flush before
-	 * allowing shadow pages to become unsync (writable by the guest).
+	 * unless KVM is synchronizing an unsync SP.  In that case, KVM must
+	 * complete emulation of the guest TLB flush before allowing shadow
+	 * pages to become unsync (writable by the guest).
 	 */
 	for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn) {
-		if (!can_unsync)
+		if (synchronizing)
 			return -EPERM;
 
 		if (sp->unsync)
@@ -2941,7 +2941,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
-			   true, host_writable, &spte);
+			   false, host_writable, &spte);
 
 	if (*sptep == spte) {
 		ret = RET_PF_SPURIOUS;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index c98827840e07..4da83544c4e1 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -164,7 +164,7 @@ static inline gfn_t gfn_round_for_level(gfn_t gfn, int level)
 }
 
 int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
-			    gfn_t gfn, bool can_unsync, bool prefetch);
+			    gfn_t gfn, bool synchronizing, bool prefetch);
 
 void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index ae7d39ff2d07..6e7bd8921c6f 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -963,7 +963,7 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 	host_writable = spte & shadow_host_writable_mask;
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	make_spte(vcpu, sp, slot, pte_access, gfn,
-		  spte_to_pfn(spte), spte, true, false,
+		  spte_to_pfn(spte), spte, true, true,
 		  host_writable, &spte);
 
 	return mmu_spte_update(sptep, spte);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 5521608077ec..0e47fea1a2d9 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -157,7 +157,7 @@ bool spte_has_volatile_bits(u64 spte)
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
-	       u64 old_spte, bool prefetch, bool can_unsync,
+	       u64 old_spte, bool prefetch, bool synchronizing,
 	       bool host_writable, u64 *new_spte)
 {
 	int level = sp->role.level;
@@ -248,7 +248,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	 * e.g. it's write-tracked (upper-level SPs) or has one or more
 	 * shadow pages and unsync'ing pages is not allowed.
 	 */
-	if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) {
+	if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, synchronizing, prefetch)) {
 		wrprot = true;
 		pte_access &= ~ACC_WRITE_MASK;
 		spte &= ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 2cb816ea2430..c81cac9358e0 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -499,7 +499,7 @@ bool spte_has_volatile_bits(u64 spte);
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
-	       u64 old_spte, bool prefetch, bool can_unsync,
+	       u64 old_spte, bool prefetch, bool synchronizing,
 	       bool host_writable, u64 *new_spte);
 u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
 			      union kvm_mmu_page_role role, int index);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3c6583468742..76bca7a726c1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1033,8 +1033,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
 	else
 		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
-				   fault->pfn, iter->old_spte, fault->prefetch, true,
-				   fault->map_writable, &new_spte);
+				   fault->pfn, iter->old_spte, fault->prefetch,
+				   false, fault->map_writable, &new_spte);
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
-- 
2.47.0.rc1.288.g06298d1525-goog
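
[Editor's aside: for readers less familiar with this code, the polarity
flip can be illustrated with a standalone C sketch.  This is not KVM's
code: try_to_unsync_pages() and struct shadow_page are simplified
stand-ins for mmu_try_to_unsync_pages() and struct kvm_mmu_page, with
nearly all parameters dropped.  The point is only the parameter's
meaning: callers now state what they are doing ("synchronizing") rather
than what the callee is permitted to do ("can_unsync").]

/*
 * Standalone sketch, not KVM's actual code; names and signatures are
 * simplified stand-ins for illustration only.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct shadow_page {
	bool unsync;	/* guest-writable, i.e. not write-protected */
};

static int try_to_unsync_pages(struct shadow_page *sp, bool synchronizing)
{
	/*
	 * While synchronizing an unsync SP, KVM must finish emulating the
	 * guest TLB flush before any shadow page may become unsync again,
	 * so the condition now reads directly off the caller's intent;
	 * the old code had to express the same thing as "if (!can_unsync)".
	 */
	if (synchronizing)
		return -EPERM;

	sp->unsync = true;	/* allow the guest to write the page */
	return 0;
}

int main(void)
{
	struct shadow_page sp = { .unsync = false };

	/* Fault paths: not synchronizing, so unsyncing is allowed (0). */
	printf("fault path: %d\n", try_to_unsync_pages(&sp, false));

	/* Sync path: synchronizing, so unsyncing is denied (-EPERM). */
	printf("sync path:  %d\n", try_to_unsync_pages(&sp, true));
	return 0;
}

Mapping the sketch back to the diff: the fault paths that previously
passed can_unsync=true (mmu_set_spte() and
tdp_mmu_map_handle_target_level()) now pass synchronizing=false, while
FNAME(sync_spte), which previously passed can_unsync=false, now passes
synchronizing=true.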