Subject: [PATCH v2 01/16] KVM: Initialize gfn_to_pfn_cache locks in dedicated helper
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:19 +0000
Message-ID: <20221013211234.1318131-2-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

From: Michal Luczaj

Move the gfn_to_pfn_cache lock initialization to another helper and call
the new helper during VM/vCPU creation.  There are race conditions possible
due to kvm_gfn_to_pfn_cache_init()'s ability to re-initialize the cache's
locks.  For example: a race between ioctl(KVM_XEN_HVM_EVTCHN_SEND) and
kvm_gfn_to_pfn_cache_init() leads to a corrupted shinfo gpc lock.

        (thread 1)                       |       (thread 2)
  kvm_xen_set_evtchn_fast                |
    read_lock_irqsave(&gpc->lock, ...)   |
                                         | kvm_gfn_to_pfn_cache_init
                                         |   rwlock_init(&gpc->lock)
    read_unlock_irqrestore(&gpc->lock, ...)

Rename "cache_init" and "cache_destroy" to activate+deactivate to avoid
implying that the cache really is destroyed/freed.

Note, there are more races in the newly named kvm_gpc_activate() that will
be addressed separately.

Fixes: 982ed0de4753 ("KVM: Reinstate gfn_to_pfn_cache with invalidation support")
Cc: stable@vger.kernel.org
Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
[sean: call out that this is a bug fix]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c       | 12 +++++----
 arch/x86/kvm/xen.c       | 57 +++++++++++++++++++++-------------------
 include/linux/kvm_host.h | 24 ++++++++++++-----
 virt/kvm/pfncache.c      | 21 ++++++++-------
 4 files changed, 66 insertions(+), 48 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4bd5f8a751de..943f039564e7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2315,11 +2315,11 @@ static void kvm_write_system_time(struct kvm_vcpu *vcpu, gpa_t system_time,
 
 	/* we verify if the enable bit is set... */
 	if (system_time & 1) {
-		kvm_gfn_to_pfn_cache_init(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
-					  KVM_HOST_USES_PFN, system_time & ~1ULL,
-					  sizeof(struct pvclock_vcpu_time_info));
+		kvm_gpc_activate(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
+				 KVM_HOST_USES_PFN, system_time & ~1ULL,
+				 sizeof(struct pvclock_vcpu_time_info));
 	} else {
-		kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time);
+		kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
 	}
 
 	return;
@@ -3388,7 +3388,7 @@ static int kvm_pv_enable_async_pf_int(struct kvm_vcpu *vcpu, u64 data)
 
 static void kvmclock_reset(struct kvm_vcpu *vcpu)
 {
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
 	vcpu->arch.time = 0;
 }
 
@@ -11757,6 +11757,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs_avail = ~0;
 	vcpu->arch.regs_dirty = ~0;
 
+	kvm_gpc_init(&vcpu->arch.pv_time);
+
 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 	else
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 93c628d3e3a9..b2be60c6efa4 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -42,13 +42,13 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 	int idx = srcu_read_lock(&kvm->srcu);
 
 	if (gfn == GPA_INVALID) {
-		kvm_gfn_to_pfn_cache_destroy(kvm, gpc);
+		kvm_gpc_deactivate(kvm, gpc);
 		goto out;
 	}
 
 	do {
-		ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, KVM_HOST_USES_PFN,
-						gpa, PAGE_SIZE);
+		ret = kvm_gpc_activate(kvm, gpc, NULL, KVM_HOST_USES_PFN, gpa,
+				       PAGE_SIZE);
 		if (ret)
 			goto out;
 
@@ -554,15 +554,15 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 					     offsetof(struct compat_vcpu_info, time));
 
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
+			kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
-					      &vcpu->arch.xen.vcpu_info_cache,
-					      NULL, KVM_HOST_USES_PFN, data->u.gpa,
-					      sizeof(struct vcpu_info));
+		r = kvm_gpc_activate(vcpu->kvm,
+				     &vcpu->arch.xen.vcpu_info_cache, NULL,
+				     KVM_HOST_USES_PFN, data->u.gpa,
+				     sizeof(struct vcpu_info));
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 
@@ -570,16 +570,16 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 
 	case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO:
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-						     &vcpu->arch.xen.vcpu_time_info_cache);
+			kvm_gpc_deactivate(vcpu->kvm,
+					   &vcpu->arch.xen.vcpu_time_info_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
-					      &vcpu->arch.xen.vcpu_time_info_cache,
-					      NULL, KVM_HOST_USES_PFN, data->u.gpa,
-					      sizeof(struct pvclock_vcpu_time_info));
+		r = kvm_gpc_activate(vcpu->kvm,
+				     &vcpu->arch.xen.vcpu_time_info_cache,
+				     NULL, KVM_HOST_USES_PFN, data->u.gpa,
+				     sizeof(struct pvclock_vcpu_time_info));
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 		break;
@@ -590,16 +590,15 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 			break;
 		}
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-						     &vcpu->arch.xen.runstate_cache);
+			kvm_gpc_deactivate(vcpu->kvm,
+					   &vcpu->arch.xen.runstate_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
-					      &vcpu->arch.xen.runstate_cache,
-					      NULL, KVM_HOST_USES_PFN, data->u.gpa,
-					      sizeof(struct vcpu_runstate_info));
+		r = kvm_gpc_activate(vcpu->kvm, &vcpu->arch.xen.runstate_cache,
+				     NULL, KVM_HOST_USES_PFN, data->u.gpa,
+				     sizeof(struct vcpu_runstate_info));
 		break;
 
 	case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT:
@@ -1816,7 +1815,12 @@ void kvm_xen_init_vcpu(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.xen.vcpu_id = vcpu->vcpu_idx;
 	vcpu->arch.xen.poll_evtchn = 0;
+
 	timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0);
+
+	kvm_gpc_init(&vcpu->arch.xen.runstate_cache);
+	kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache);
+	kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache);
 }
 
 void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
@@ -1824,18 +1828,17 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
 	if (kvm_xen_timer_enabled(vcpu))
 		kvm_xen_stop_timer(vcpu);
 
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-				     &vcpu->arch.xen.runstate_cache);
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-				     &vcpu->arch.xen.vcpu_info_cache);
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-				     &vcpu->arch.xen.vcpu_time_info_cache);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.runstate_cache);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_time_info_cache);
+
 	del_timer_sync(&vcpu->arch.xen.poll_timer);
 }
 
 void kvm_xen_init_vm(struct kvm *kvm)
 {
 	idr_init(&kvm->arch.xen.evtchn_ports);
+	kvm_gpc_init(&kvm->arch.xen.shinfo_cache);
 }
 
 void kvm_xen_destroy_vm(struct kvm *kvm)
@@ -1843,7 +1846,7 @@ void kvm_xen_destroy_vm(struct kvm *kvm)
 	struct evtchnfd *evtchnfd;
 	int i;
 
-	kvm_gfn_to_pfn_cache_destroy(kvm, &kvm->arch.xen.shinfo_cache);
+	kvm_gpc_deactivate(kvm, &kvm->arch.xen.shinfo_cache);
 
 	idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) {
 		if (!evtchnfd->deliver.port.port)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 32f259fa5801..694c4cb6caf4 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1240,8 +1240,18 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
 
 /**
- * kvm_gfn_to_pfn_cache_init - prepare a cached kernel mapping and HPA for a
- *                             given guest physical address.
+ * kvm_gpc_init - initialize gfn_to_pfn_cache.
+ *
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ *
+ * This sets up a gfn_to_pfn_cache by initializing locks.  Note, the cache must
+ * be zero-allocated (or zeroed by the caller before init).
+ */
+void kvm_gpc_init(struct gfn_to_pfn_cache *gpc);
+
+/**
+ * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest
+ *                    physical address.
 *
 * @kvm:	   pointer to kvm instance.
 * @gpc:	   struct gfn_to_pfn_cache object.
@@ -1265,9 +1275,9 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
 * kvm_gfn_to_pfn_cache_check() to ensure that the cache is valid before
 * accessing the target page.
 */
-int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-			      struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
-			      gpa_t gpa, unsigned long len);
+int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+		     struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+		     gpa_t gpa, unsigned long len);
 
 /**
 * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache.
@@ -1324,7 +1334,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
 
 /**
- * kvm_gfn_to_pfn_cache_destroy - destroy and unlink a gfn_to_pfn_cache.
+ * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache.
 *
 * @kvm:	   pointer to kvm instance.
 * @gpc:	   struct gfn_to_pfn_cache object.
@@ -1332,7 +1342,7 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
 * This removes a cache from the @kvm's list to be processed on MMU notifier
 * invocation.
 */
-void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
 
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 68ff41d39545..08f97cf97264 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -346,17 +346,20 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 }
 EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);
 
+void kvm_gpc_init(struct gfn_to_pfn_cache *gpc)
+{
+	rwlock_init(&gpc->lock);
+	mutex_init(&gpc->refresh_lock);
+}
+EXPORT_SYMBOL_GPL(kvm_gpc_init);
 
-int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-			      struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
-			      gpa_t gpa, unsigned long len)
+int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+		     struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+		     gpa_t gpa, unsigned long len)
 {
 	WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
 
 	if (!gpc->active) {
-		rwlock_init(&gpc->lock);
-		mutex_init(&gpc->refresh_lock);
-
 		gpc->khva = NULL;
 		gpc->pfn = KVM_PFN_ERR_FAULT;
 		gpc->uhva = KVM_HVA_ERR_BAD;
@@ -371,9 +374,9 @@ int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 	}
 	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len);
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_init);
+EXPORT_SYMBOL_GPL(kvm_gpc_activate);
 
-void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 {
 	if (gpc->active) {
 		spin_lock(&kvm->gpc_lock);
@@ -384,4 +387,4 @@ void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 		gpc->active = false;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_destroy);
+EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);
-- 
2.38.0.413.g74048e4d9e-goog
Subject: [PATCH v2 02/16] KVM: Reject attempts to consume or refresh inactive gfn_to_pfn_cache
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:20 +0000
Message-ID: <20221013211234.1318131-3-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

Reject kvm_gpc_check() and kvm_gpc_refresh() if the cache is inactive.
Not checking the active flag during refresh is particularly egregious, as
KVM can end up with a valid, inactive cache, which can lead to a variety
of use-after-free bugs, e.g. consuming a NULL kernel pointer or missing
an mmu_notifier invalidation due to the cache not being on the list of
gfns to invalidate.
Note, "active" needs to be set if and only if the cache is on the list of
caches, i.e. is reachable via mmu_notifier events.  If a relevant
mmu_notifier event occurs while the cache is "active" but not on the list,
KVM will not acquire the cache's lock and so will not serialize the
mmu_notifier event with active users and/or kvm_gpc_refresh().

A race between KVM_XEN_ATTR_TYPE_SHARED_INFO and KVM_XEN_HVM_EVTCHN_SEND
can be exploited to trigger the bug.

1. Deactivate shinfo cache:

   kvm_xen_hvm_set_attr
   case KVM_XEN_ATTR_TYPE_SHARED_INFO
    kvm_gpc_deactivate
     kvm_gpc_unmap
      gpc->valid = false
      gpc->khva = NULL
     gpc->active = false

   Result: active = false, valid = false

2. Cause cache refresh:

   kvm_arch_vm_ioctl
   case KVM_XEN_HVM_EVTCHN_SEND
    kvm_xen_hvm_evtchn_send
     kvm_xen_set_evtchn
      kvm_xen_set_evtchn_fast
       kvm_gpc_check
       return -EWOULDBLOCK because !gpc->valid
      kvm_xen_set_evtchn_fast
      return -EWOULDBLOCK
     kvm_gpc_refresh
      hva_to_pfn_retry
       gpc->valid = true
       gpc->khva = not NULL

   Result: active = false, valid = true

3. Race ioctl KVM_XEN_HVM_EVTCHN_SEND against ioctl
   KVM_XEN_ATTR_TYPE_SHARED_INFO:

   kvm_arch_vm_ioctl
   case KVM_XEN_HVM_EVTCHN_SEND
    kvm_xen_hvm_evtchn_send
     kvm_xen_set_evtchn
      kvm_xen_set_evtchn_fast
       read_lock gpc->lock
                                      kvm_xen_hvm_set_attr
                                      case KVM_XEN_ATTR_TYPE_SHARED_INFO
                                       mutex_lock kvm->lock
                                       kvm_xen_shared_info_init
                                        kvm_gpc_activate
                                         gpc->khva = NULL
       kvm_gpc_check
       [ Check passes because gpc->valid is
         still true, even though gpc->khva
         is already NULL. ]
       shinfo = gpc->khva
       pending_bits = shinfo->evtchn_pending
       CRASH: test_and_set_bit(..., pending_bits)

Fixes: 982ed0de4753 ("KVM: Reinstate gfn_to_pfn_cache with invalidation support")
Cc: stable@vger.kernel.org
Reported-by: Michal Luczaj
Signed-off-by: Sean Christopherson
---
 virt/kvm/pfncache.c | 41 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 34 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 08f97cf97264..346e47f15572 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -81,6 +81,9 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 
+	if (!gpc->active)
+		return false;
+
 	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
 		return false;
 
@@ -240,10 +243,11 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	unsigned long page_offset = gpa & ~PAGE_MASK;
-	kvm_pfn_t old_pfn, new_pfn;
+	bool unmap_old = false;
 	unsigned long old_uhva;
+	kvm_pfn_t old_pfn;
 	void *old_khva;
-	int ret = 0;
+	int ret;
 
 	/*
 	 * If must fit within a single page.  The 'len' argument is
@@ -261,6 +265,11 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 
 	write_lock_irq(&gpc->lock);
 
+	if (!gpc->active) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
 	old_pfn = gpc->pfn;
 	old_khva = gpc->khva - offset_in_page(gpc->khva);
 	old_uhva = gpc->uhva;
@@ -291,6 +300,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		/* If the HVA→PFN mapping was already valid, don't unmap it. */
 		old_pfn = KVM_PFN_ERR_FAULT;
 		old_khva = NULL;
+		ret = 0;
 	}
 
 out:
@@ -305,14 +315,15 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		gpc->khva = NULL;
 	}
 
-	/* Snapshot the new pfn before dropping the lock! */
-	new_pfn = gpc->pfn;
+	/* Detect a pfn change before dropping the lock! */
+	unmap_old = (old_pfn != gpc->pfn);
 
+out_unlock:
 	write_unlock_irq(&gpc->lock);
 
 	mutex_unlock(&gpc->refresh_lock);
 
-	if (old_pfn != new_pfn)
+	if (unmap_old)
 		gpc_unmap_khva(kvm, old_pfn, old_khva);
 
 	return ret;
@@ -366,11 +377,19 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		gpc->vcpu = vcpu;
 		gpc->usage = usage;
 		gpc->valid = false;
-		gpc->active = true;
 
 		spin_lock(&kvm->gpc_lock);
 		list_add(&gpc->list, &kvm->gpc_list);
 		spin_unlock(&kvm->gpc_lock);
+
+		/*
+		 * Activate the cache after adding it to the list, a concurrent
+		 * refresh must not establish a mapping until the cache is
+		 * reachable by mmu_notifier events.
+		 */
+		write_lock_irq(&gpc->lock);
+		gpc->active = true;
+		write_unlock_irq(&gpc->lock);
 	}
 	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len);
 }
@@ -379,12 +398,20 @@ EXPORT_SYMBOL_GPL(kvm_gpc_activate);
 void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 {
 	if (gpc->active) {
+		/*
+		 * Deactivate the cache before removing it from the list, KVM
+		 * must stall mmu_notifier events until all users go away, i.e.
+		 * until gpc->lock is dropped and refresh is guaranteed to fail.
+		 */
+		write_lock_irq(&gpc->lock);
+		gpc->active = false;
+		write_unlock_irq(&gpc->lock);
+
 		spin_lock(&kvm->gpc_lock);
 		list_del(&gpc->list);
 		spin_unlock(&kvm->gpc_lock);
 
 		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
-		gpc->active = false;
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);
-- 
2.38.0.413.g74048e4d9e-goog
Subject: [PATCH v2 03/16] KVM: x86: Always use non-compat vcpu_runstate_info size for gfn=>pfn cache
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:21 +0000
Message-ID: <20221013211234.1318131-4-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

Always use the size of Xen's non-compat vcpu_runstate_info struct when
checking that the GPA+size doesn't cross a page boundary.
Conceptually, using the current mode is more correct, but KVM isn't
consistent with itself as kvm_xen_vcpu_set_attr() unconditionally uses the
"full" size when activating the cache.  More importantly, prior to the
introduction of the gfn_to_pfn_cache, KVM _always_ used the full size,
i.e. allowing the guest (userspace?) to use a poorly aligned GPA in 32-bit
mode but not 64-bit mode is more of a bug than a feature, and fixing the
bug doesn't break KVM's historical ABI.

Always using the non-compat size will allow for future gfn_to_pfn_cache
cleanups, as this is (was) the only case where KVM uses a different size
at check()+refresh() than at activate().  E.g. the length/size of the
cache can be made immutable and dropped from check()+refresh(), which
yields a cleaner set of APIs and avoids potential bugs that could occur
if check() were invoked with a different size than refresh().

Fixes: a795cd43c5b5 ("KVM: x86/xen: Use gfn_to_pfn_cache for runstate area")
Cc: David Woodhouse
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/xen.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index b2be60c6efa4..9e79ef2cca99 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -212,10 +212,7 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	if (!vx->runstate_cache.active)
 		return;
 
-	if (IS_ENABLED(CONFIG_64BIT) && v->kvm->arch.xen.long_mode)
-		user_len = sizeof(struct vcpu_runstate_info);
-	else
-		user_len = sizeof(struct compat_vcpu_runstate_info);
+	user_len = sizeof(struct vcpu_runstate_info);
 
 	read_lock_irqsave(&gpc->lock, flags);
 	while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa,
-- 
2.38.0.413.g74048e4d9e-goog
Subject: [PATCH v2 04/16] KVM: Shorten gfn_to_pfn_cache function names
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:22 +0000
Message-ID: <20221013211234.1318131-5-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

From: Michal Luczaj

Formalize "gpc" as the acronym and use it in function names.

No functional change intended.
Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c       |  8 ++++----
 arch/x86/kvm/xen.c       | 29 ++++++++++++++---------------
 include/linux/kvm_host.h | 21 ++++++++++-----------
 virt/kvm/pfncache.c      | 20 ++++++++++----------
 4 files changed, 38 insertions(+), 40 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 943f039564e7..fd00e6a33203 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3034,12 +3034,12 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
 	unsigned long flags;
 
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa,
-					   offset + sizeof(*guest_hv_clock))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
+			      offset + sizeof(*guest_hv_clock))) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
-		if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa,
-						 offset + sizeof(*guest_hv_clock)))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
+				    offset + sizeof(*guest_hv_clock)))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 9e79ef2cca99..74d9f4985f93 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -215,15 +215,14 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	user_len = sizeof(struct vcpu_runstate_info);
 
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa,
-					   user_len)) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, user_len)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		/* When invoked from kvm_sched_out() we cannot sleep */
 		if (state == RUNSTATE_runnable)
 			return;
 
-		if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa, user_len))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, user_len))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -349,12 +348,12 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v)
 	 * little more honest about it.
 	 */
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa,
-					   sizeof(struct vcpu_info))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
+			      sizeof(struct vcpu_info))) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
-		if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa,
-						 sizeof(struct vcpu_info)))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
+				    sizeof(struct vcpu_info)))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -414,8 +413,8 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		     sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending));
 
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa,
-					   sizeof(struct vcpu_info))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
+			      sizeof(struct vcpu_info))) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		/*
@@ -429,8 +428,8 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		if (in_atomic() || !task_is_running(current))
 			return 1;
 
-		if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa,
-						 sizeof(struct vcpu_info))) {
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
+				    sizeof(struct vcpu_info))) {
 			/*
 			 * If this failed, userspace has screwed up the
 			 * vcpu_info mapping. No interrupts for you.
@@ -963,7 +962,7 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
 
 	read_lock_irqsave(&gpc->lock, flags);
 	idx = srcu_read_lock(&kvm->srcu);
-	if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
+	if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
 		goto out_rcu;
 
 	ret = false;
@@ -1354,7 +1353,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	idx = srcu_read_lock(&kvm->srcu);
 
 	read_lock_irqsave(&gpc->lock, flags);
-	if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
+	if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
 		goto out_rcu;
 
 	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) {
@@ -1388,7 +1387,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	gpc = &vcpu->arch.xen.vcpu_info_cache;
 
 	read_lock_irqsave(&gpc->lock, flags);
-	if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, sizeof(struct vcpu_info))) {
+	if (!kvm_gpc_check(kvm, gpc, gpc->gpa, sizeof(struct vcpu_info))) {
 		/*
 		 * Could not access the vcpu_info. Set the bit in-kernel
 		 * and prod the vCPU to deliver it for itself.
@@ -1486,7 +1485,7 @@ static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 			break;
 
 		idx = srcu_read_lock(&kvm->srcu);
-		rc = kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE);
+		rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE);
 		srcu_read_unlock(&kvm->srcu, idx);
 	} while(!rc);
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 694c4cb6caf4..bb020ee3b2fe 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1271,16 +1271,15 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc);
  *		   -EFAULT for an untranslatable guest physical address.
  *
  * This primes a gfn_to_pfn_cache and links it into the @kvm's list for
- * invalidations to be processed.  Callers are required to use
- * kvm_gfn_to_pfn_cache_check() to ensure that the cache is valid before
- * accessing the target page.
+ * invalidations to be processed.  Callers are required to use kvm_gpc_check()
+ * to ensure that the cache is valid before accessing the target page.
  */
 int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		     struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
 		     gpa_t gpa, unsigned long len);
 
 /**
- * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache.
+ * kvm_gpc_check - check validity of a gfn_to_pfn_cache.
  *
  * @kvm:	   pointer to kvm instance.
  * @gpc:	   struct gfn_to_pfn_cache object.
@@ -1297,11 +1296,11 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
  * Callers in IN_GUEST_MODE may do so without locking, although they should
  * still hold a read lock on kvm->scru for the memslot checks.
  */
-bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-				gpa_t gpa, unsigned long len);
+bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
+		   unsigned long len);
 
 /**
- * kvm_gfn_to_pfn_cache_refresh - update a previously initialized cache.
+ * kvm_gpc_refresh - update a previously initialized cache.
  *
  * @kvm:	   pointer to kvm instance.
  * @gpc:	   struct gfn_to_pfn_cache object.
@@ -1318,11 +1317,11 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
  * still lock and check the cache status, as this function does not return
  * with the lock still held to permit access.
  */
-int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-				 gpa_t gpa, unsigned long len);
+int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
+		    unsigned long len);
 
 /**
- * kvm_gfn_to_pfn_cache_unmap - temporarily unmap a gfn_to_pfn_cache.
+ * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache.
  *
  * @kvm:	   pointer to kvm instance.
  * @gpc:	   struct gfn_to_pfn_cache object.
@@ -1331,7 +1330,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
  * but at least the mapping from GPA to userspace HVA will remain cached
  * and can be reused on a subsequent refresh.
  */
-void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
 
 /**
  * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache.
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 346e47f15572..23180f1d9c1c 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -76,8 +76,8 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 	}
 }
 
-bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-				gpa_t gpa, unsigned long len)
+bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
+		   unsigned long len)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 
@@ -96,7 +96,7 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 
 	return true;
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_check);
+EXPORT_SYMBOL_GPL(kvm_gpc_check);
 
 static void gpc_unmap_khva(struct kvm *kvm, kvm_pfn_t pfn, void *khva)
 {
@@ -238,8 +238,8 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 	return -EFAULT;
 }
 
-int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-				 gpa_t gpa, unsigned long len)
+int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
+		    unsigned long len)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	unsigned long page_offset = gpa & ~PAGE_MASK;
@@ -328,9 +328,9 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_refresh);
+EXPORT_SYMBOL_GPL(kvm_gpc_refresh);
 
-void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 {
 	void *old_khva;
 	kvm_pfn_t old_pfn;
@@ -355,7 +355,7 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 
 	gpc_unmap_khva(kvm, old_pfn, old_khva);
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);
+EXPORT_SYMBOL_GPL(kvm_gpc_unmap);
 
 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc)
 {
@@ -391,7 +391,7 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		gpc->active = true;
 		write_unlock_irq(&gpc->lock);
 	}
-	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len);
+	return kvm_gpc_refresh(kvm, gpc, gpa, len);
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_activate);
 
@@ -411,7 +411,7 @@ void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 		list_del(&gpc->list);
 		spin_unlock(&kvm->gpc_lock);
 
-		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
+		kvm_gpc_unmap(kvm, gpc);
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);
-- 
2.38.0.413.g74048e4d9e-goog
Reply-To: Sean Christopherson
Date: Thu, 13 Oct 2022 21:12:23 +0000
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>
Message-ID: <20221013211234.1318131-6-seanjc@google.com>
Subject: [PATCH v2 05/16] KVM: x86: Remove unused argument in gpc_unmap_khva()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj,
    David Woodhouse
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Michal Luczaj

Remove the unused @kvm argument from gpc_unmap_khva().

Signed-off-by: Michal Luczaj
Signed-off-by: Sean Christopherson
---
 virt/kvm/pfncache.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 23180f1d9c1c..32ccf168361b 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -98,7 +98,7 @@ bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_check);
 
-static void gpc_unmap_khva(struct kvm *kvm, kvm_pfn_t pfn, void *khva)
+static void gpc_unmap_khva(kvm_pfn_t pfn, void *khva)
 {
 	/* Unmap the old pfn/page if it was mapped before. */
 	if (!is_error_noslot_pfn(pfn) && khva) {
@@ -177,7 +177,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 		 * the existing mapping and didn't create a new one.
 		 */
 		if (new_khva != old_khva)
-			gpc_unmap_khva(kvm, new_pfn, new_khva);
+			gpc_unmap_khva(new_pfn, new_khva);
 
 		kvm_release_pfn_clean(new_pfn);
 
@@ -324,7 +324,7 @@ int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 	mutex_unlock(&gpc->refresh_lock);
 
 	if (unmap_old)
-		gpc_unmap_khva(kvm, old_pfn, old_khva);
+		gpc_unmap_khva(old_pfn, old_khva);
 
 	return ret;
 }
@@ -353,7 +353,7 @@ void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 	write_unlock_irq(&gpc->lock);
 	mutex_unlock(&gpc->refresh_lock);
 
-	gpc_unmap_khva(kvm, old_pfn, old_khva);
+	gpc_unmap_khva(old_pfn, old_khva);
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_unmap);
 
-- 
2.38.0.413.g74048e4d9e-goog
Reply-To: Sean Christopherson
Date: Thu, 13 Oct 2022 21:12:24 +0000
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>
Message-ID: <20221013211234.1318131-7-seanjc@google.com>
Subject: [PATCH v2 06/16] KVM: Store immutable gfn_to_pfn_cache properties
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj,
    David Woodhouse
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Michal Luczaj

Move the assignment of immutable properties @kvm, @vcpu, and @usage to the
initializer, and make _activate() and _deactivate() use the stored values.

Note, @len is also effectively immutable, but less obviously so.  Leave
@len as is for now; it will be addressed in a future patch.

Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
[sean: handle @len in a separate patch]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c        | 14 +++++-------
 arch/x86/kvm/xen.c        | 47 ++++++++++++++++++---------------------
 include/linux/kvm_host.h  | 37 +++++++++++++++---------------
 include/linux/kvm_types.h |  1 +
 virt/kvm/pfncache.c       | 22 +++++++++++-------
 5 files changed, 61 insertions(+), 60 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd00e6a33203..9c68050672de 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2314,13 +2314,11 @@ static void kvm_write_system_time(struct kvm_vcpu *vcpu, gpa_t system_time,
 	kvm_make_request(KVM_REQ_GLOBAL_CLOCK_UPDATE, vcpu);
 
 	/* we verify if the enable bit is set... */
-	if (system_time & 1) {
-		kvm_gpc_activate(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
-				 KVM_HOST_USES_PFN, system_time & ~1ULL,
+	if (system_time & 1)
+		kvm_gpc_activate(&vcpu->arch.pv_time, system_time & ~1ULL,
 				 sizeof(struct pvclock_vcpu_time_info));
-	} else {
-		kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
-	}
+	else
+		kvm_gpc_deactivate(&vcpu->arch.pv_time);
 
 	return;
 }
@@ -3388,7 +3386,7 @@ static int kvm_pv_enable_async_pf_int(struct kvm_vcpu *vcpu, u64 data)
 
 static void kvmclock_reset(struct kvm_vcpu *vcpu)
 {
-	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
+	kvm_gpc_deactivate(&vcpu->arch.pv_time);
 	vcpu->arch.time = 0;
 }
 
@@ -11757,7 +11755,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs_avail = ~0;
 	vcpu->arch.regs_dirty = ~0;
 
-	kvm_gpc_init(&vcpu->arch.pv_time);
+	kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN);
 
 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 74d9f4985f93..55b7195d69d6 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -42,13 +42,12 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 	int idx = srcu_read_lock(&kvm->srcu);
 
 	if (gfn == GPA_INVALID) {
-		kvm_gpc_deactivate(kvm, gpc);
+		kvm_gpc_deactivate(gpc);
 		goto out;
 	}
 
 	do {
-		ret = kvm_gpc_activate(kvm, gpc, NULL, KVM_HOST_USES_PFN, gpa,
-				       PAGE_SIZE);
+		ret = kvm_gpc_activate(gpc, gpa, PAGE_SIZE);
 		if (ret)
 			goto out;
 
@@ -550,15 +549,13 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 			    offsetof(struct compat_vcpu_info, time));
 
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
+			kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_info_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gpc_activate(vcpu->kvm,
-				     &vcpu->arch.xen.vcpu_info_cache, NULL,
-				     KVM_HOST_USES_PFN, data->u.gpa,
-				     sizeof(struct vcpu_info));
+		r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_info_cache,
+				     data->u.gpa, sizeof(struct vcpu_info));
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 
@@ -566,15 +563,13 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 
 	case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO:
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gpc_deactivate(vcpu->kvm,
-					   &vcpu->arch.xen.vcpu_time_info_cache);
+			kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_time_info_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gpc_activate(vcpu->kvm,
-				     &vcpu->arch.xen.vcpu_time_info_cache,
-				     NULL, KVM_HOST_USES_PFN, data->u.gpa,
+		r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_time_info_cache,
+				     data->u.gpa,
 				     sizeof(struct pvclock_vcpu_time_info));
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
@@ -586,14 +581,13 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 			break;
 		}
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gpc_deactivate(vcpu->kvm,
-					   &vcpu->arch.xen.runstate_cache);
+			kvm_gpc_deactivate(&vcpu->arch.xen.runstate_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gpc_activate(vcpu->kvm, &vcpu->arch.xen.runstate_cache,
-				     NULL, KVM_HOST_USES_PFN, data->u.gpa,
+		r = kvm_gpc_activate(&vcpu->arch.xen.runstate_cache,
+				     data->u.gpa,
 				     sizeof(struct vcpu_runstate_info));
 		break;
 
@@ -1814,9 +1808,12 @@ void kvm_xen_init_vcpu(struct kvm_vcpu *vcpu)
 
 	timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0);
 
-	kvm_gpc_init(&vcpu->arch.xen.runstate_cache);
-	kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache);
-	kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache);
+	kvm_gpc_init(&vcpu->arch.xen.runstate_cache, vcpu->kvm, NULL,
+		     KVM_HOST_USES_PFN);
+	kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache, vcpu->kvm, NULL,
+		     KVM_HOST_USES_PFN);
+	kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache, vcpu->kvm, NULL,
+		     KVM_HOST_USES_PFN);
 }
 
 void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
@@ -1824,9 +1821,9 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
 	if (kvm_xen_timer_enabled(vcpu))
 		kvm_xen_stop_timer(vcpu);
 
-	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.runstate_cache);
-	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
-	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_time_info_cache);
+	kvm_gpc_deactivate(&vcpu->arch.xen.runstate_cache);
+	kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_info_cache);
+	kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_time_info_cache);
 
 	del_timer_sync(&vcpu->arch.xen.poll_timer);
 }
@@ -1834,7 +1831,7 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
 void kvm_xen_init_vm(struct kvm *kvm)
 {
 	idr_init(&kvm->arch.xen.evtchn_ports);
-	kvm_gpc_init(&kvm->arch.xen.shinfo_cache);
+	kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL, KVM_HOST_USES_PFN);
 }
 
 void kvm_xen_destroy_vm(struct kvm *kvm)
@@ -1842,7 +1839,7 @@ void kvm_xen_destroy_vm(struct kvm *kvm)
 	struct evtchnfd *evtchnfd;
 	int i;
 
-	kvm_gpc_deactivate(kvm, &kvm->arch.xen.shinfo_cache);
+	kvm_gpc_deactivate(&kvm->arch.xen.shinfo_cache);
 
 	idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) {
 		if (!evtchnfd->deliver.port.port)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bb020ee3b2fe..e5e70607a5ef 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1243,18 +1243,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
  * kvm_gpc_init - initialize gfn_to_pfn_cache.
  *
  * @gpc:	   struct gfn_to_pfn_cache object.
- *
- * This sets up a gfn_to_pfn_cache by initializing locks.  Note, the cache must
- * be zero-allocated (or zeroed by the caller before init).
- */
-void kvm_gpc_init(struct gfn_to_pfn_cache *gpc);
-
-/**
- * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest
- *                    physical address.
- *
  * @kvm:	   pointer to kvm instance.
- * @gpc:	   struct gfn_to_pfn_cache object.
  * @vcpu:	   vCPU to be used for marking pages dirty and to be woken on
  *		   invalidation.
  * @usage:	   indicates if the resulting host physical PFN is used while
@@ -1263,20 +1252,31 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc);
  *		   changes!---will also force @vcpu to exit the guest and
  *		   refresh the cache); and/or if the PFN used directly
  *		   by KVM (and thus needs a kernel virtual mapping).
+ *
+ * This sets up a gfn_to_pfn_cache by initializing locks and assigning the
+ * immutable attributes.  Note, the cache must be zero-allocated (or zeroed by
+ * the caller before init).
+ */
+void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
+		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage);
+
+/**
+ * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest
+ *                    physical address.
+ *
+ * @gpc:	   struct gfn_to_pfn_cache object.
  * @gpa:	   guest physical address to map.
  * @len:	   sanity check; the range being access must fit a single page.
  *
  * @return:	   0 for success.
  *		   -EINVAL for a mapping which would cross a page boundary.
- *                 -EFAULT for an untranslatable guest physical address.
+ *		   -EFAULT for an untranslatable guest physical address.
  *
- * This primes a gfn_to_pfn_cache and links it into the @kvm's list for
+ * This primes a gfn_to_pfn_cache and links it into the @gpc->kvm's list for
  * invalidations to be processed.  Callers are required to use kvm_gpc_check()
  * to ensure that the cache is valid before accessing the target page.
  */
-int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-		     struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
-		     gpa_t gpa, unsigned long len);
+int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len);
 
 /**
  * kvm_gpc_check - check validity of a gfn_to_pfn_cache.
@@ -1335,13 +1335,12 @@ void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
 /**
  * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache.
  *
- * @kvm:	   pointer to kvm instance.
  * @gpc:	   struct gfn_to_pfn_cache object.
  *
- * This removes a cache from the @kvm's list to be processed on MMU notifier
+ * This removes a cache from the VM's list to be processed on MMU notifier
  * invocation.
  */
-void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc);
 
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 3ca3db020e0e..76de36e56cdf 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -67,6 +67,7 @@ struct gfn_to_pfn_cache {
 	gpa_t gpa;
 	unsigned long uhva;
 	struct kvm_memory_slot *memslot;
+	struct kvm *kvm;
 	struct kvm_vcpu *vcpu;
 	struct list_head list;
 	rwlock_t lock;
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 32ccf168361b..6756dfa60d5a 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -357,25 +357,29 @@ void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_unmap);
 
-void kvm_gpc_init(struct gfn_to_pfn_cache *gpc)
+void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
+		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage)
 {
+	WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
+	WARN_ON_ONCE((usage & KVM_GUEST_USES_PFN) && !vcpu);
+
 	rwlock_init(&gpc->lock);
 	mutex_init(&gpc->refresh_lock);
+
+	gpc->kvm = kvm;
+	gpc->vcpu = vcpu;
+	gpc->usage = usage;
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_init);
 
-int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-		     struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
-		     gpa_t gpa, unsigned long len)
+int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
 {
-	WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
+	struct kvm *kvm = gpc->kvm;
 
 	if (!gpc->active) {
 		gpc->khva = NULL;
 		gpc->pfn = KVM_PFN_ERR_FAULT;
 		gpc->uhva = KVM_HVA_ERR_BAD;
-		gpc->vcpu = vcpu;
-		gpc->usage = usage;
 		gpc->valid = false;
 
 		spin_lock(&kvm->gpc_lock);
@@ -395,8 +399,10 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_activate);
 
-void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc)
 {
+	struct kvm *kvm = gpc->kvm;
+
 	if (gpc->active) {
 		/*
 		 * Deactivate the cache before removing it from the list, KVM
-- 
2.38.0.413.g74048e4d9e-goog
xiRkt1KBYlWm20pr87BVvORpQ89cJxKi3CdAEfWNTT3kQO3MrKJmCd7ueUYZM0d5LxDv ncyWZe/mFy5CmBu/xvyBvRZjy4XQfgfo5hj5z//KoeyYBIKhtiBa6rE3VkTuX6paHybY MYFA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=a4H6RRJPk2dGB6P2HabFO7Il/OYENPrSGssdwi/forA=; b=BvCIocFst3ZS+HiVkCVZssxOIzJ4fsbLw+RLbxco6arFY6Dj1ADvkUMby1zckQmARO Xc9ND+yZuDTBz2pp08i+irNTMKPALeRBRb4Lavop5n0SCkliwQMNeP9ed0jwqNXkIsS+ 0txDWbgjaIgTmvif0ZOxjR13BQEkjCtyAFMcIsz8bPkPyurlVGOrz2pj5s3sue7Cqi/N 5HwYM3MGxhAfLGS5X3iGT0n6n+fyUtE4wLW3TgmgQYyz82Kkj5Ihop2+fXhOJqxiRhkA cmMfs9tn0tSW/0W5xWOfXB4rEeBAyGRUp92RKC3T+HjVNgOtRxsLdGJlKh9F6Z5RpE+Y rjwQ== X-Gm-Message-State: ACrzQf3+gdCtS3VdoVjnhz1mb2wDIBsgZ+FBP3aAP0oLL5+tzaP9bLsx tgI1W62ySnQCEJZwP04yrbLL/jPcYDU= X-Google-Smtp-Source: AMsMyM4KejC+Gj7bPF0CxMXT+SZ8aB+MKLj2iqOhrP2GpdUTq8TyDneufYoKDsM04c96P4oCqzsfL064NfM= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:aa7:86d4:0:b0:562:c5e2:a986 with SMTP id h20-20020aa786d4000000b00562c5e2a986mr1615268pfo.61.1665695570785; Thu, 13 Oct 2022 14:12:50 -0700 (PDT) Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:25 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> Mime-Version: 1.0 References: <20221013211234.1318131-1-seanjc@google.com> X-Mailer: git-send-email 2.38.0.413.g74048e4d9e-goog Message-ID: <20221013211234.1318131-8-seanjc@google.com> Subject: [PATCH v2 07/16] KVM: Store gfn_to_pfn_cache length as an immutable property From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Michal Luczaj Make the 
length of a gfn=>pfn cache an immutable property of the cache to clean up
the APIs and avoid potential bugs, e.g. calling check() with a larger size
than refresh() could put KVM into an infinite loop.

All current (and anticipated future) users access the cache with a
predetermined size, which isn't a coincidence as using a dedicated cache
really only makes sense when the access pattern is "fixed".

Add a WARN in kvm_setup_guest_pvclock() to assert that the offset+size
matches the length of the cache, both to make it more obvious that the
length really is immutable in that case, and to detect future bugs.

No functional change intended.

Signed-off-by: Michal Luczaj
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c        | 14 ++++++------
 arch/x86/kvm/xen.c        | 46 ++++++++++++++++-----------------------
 include/linux/kvm_host.h  | 14 +++++-------
 include/linux/kvm_types.h |  1 +
 virt/kvm/pfncache.c       | 18 +++++++--------
 5 files changed, 42 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9c68050672de..0b4fa3455f3a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2315,8 +2315,7 @@ static void kvm_write_system_time(struct kvm_vcpu *vcpu, gpa_t system_time,
 
 	/* we verify if the enable bit is set...
	 */
 	if (system_time & 1)
-		kvm_gpc_activate(&vcpu->arch.pv_time, system_time & ~1ULL,
-				 sizeof(struct pvclock_vcpu_time_info));
+		kvm_gpc_activate(&vcpu->arch.pv_time, system_time & ~1ULL);
 	else
 		kvm_gpc_deactivate(&vcpu->arch.pv_time);
 
@@ -3031,13 +3030,13 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
 	struct pvclock_vcpu_time_info *guest_hv_clock;
 	unsigned long flags;
 
+	WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock));
+
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
-			      offset + sizeof(*guest_hv_clock))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
-				    offset + sizeof(*guest_hv_clock)))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -11755,7 +11754,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs_avail = ~0;
 	vcpu->arch.regs_dirty = ~0;
 
-	kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN);
+	kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN,
+		     sizeof(struct pvclock_vcpu_time_info));
 
 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 55b7195d69d6..6f5a5507392e 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -47,7 +47,7 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 	}
 
 	do {
-		ret = kvm_gpc_activate(gpc, gpa, PAGE_SIZE);
+		ret = kvm_gpc_activate(gpc, gpa);
 		if (ret)
 			goto out;
 
@@ -203,7 +203,6 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	struct gfn_to_pfn_cache *gpc = &vx->runstate_cache;
 	uint64_t *user_times;
 	unsigned long flags;
-	size_t user_len;
 	int *user_state;
 
 	kvm_xen_update_runstate(v, state);
@@ -211,17 +210,15 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	if (!vx->runstate_cache.active)
 		return;
 
-	user_len = sizeof(struct vcpu_runstate_info);
-
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, user_len)) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		/* When invoked from kvm_sched_out() we cannot sleep */
 		if (state == RUNSTATE_runnable)
 			return;
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, user_len))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -347,12 +344,10 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v)
 	 * little more honest about it.
 	 */
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
-			      sizeof(struct vcpu_info))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
-				    sizeof(struct vcpu_info)))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -412,8 +407,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		     sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending));
 
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
-			      sizeof(struct vcpu_info))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		/*
@@ -427,8 +421,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		if (in_atomic() || !task_is_running(current))
 			return 1;
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
-				    sizeof(struct vcpu_info))) {
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) {
 			/*
 			 * If this failed, userspace has screwed up the
 			 * vcpu_info mapping. No interrupts for you.
@@ -555,7 +548,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 		}
 
 		r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_info_cache,
-				     data->u.gpa, sizeof(struct vcpu_info));
+				     data->u.gpa);
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 
@@ -569,8 +562,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 		}
 
 		r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_time_info_cache,
-				     data->u.gpa,
-				     sizeof(struct pvclock_vcpu_time_info));
+				     data->u.gpa);
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 		break;
@@ -587,8 +579,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 		}
 
 		r = kvm_gpc_activate(&vcpu->arch.xen.runstate_cache,
-				     data->u.gpa,
-				     sizeof(struct vcpu_runstate_info));
+				     data->u.gpa);
 		break;
 
 	case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT:
@@ -956,7 +947,7 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
 
 	read_lock_irqsave(&gpc->lock, flags);
 	idx = srcu_read_lock(&kvm->srcu);
-	if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
+	if (!kvm_gpc_check(kvm, gpc, gpc->gpa))
 		goto out_rcu;
 
 	ret = false;
@@ -1347,7 +1338,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	idx = srcu_read_lock(&kvm->srcu);
 
 	read_lock_irqsave(&gpc->lock, flags);
-	if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
+	if (!kvm_gpc_check(kvm, gpc, gpc->gpa))
 		goto out_rcu;
 
 	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) {
@@ -1381,7 +1372,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	gpc = &vcpu->arch.xen.vcpu_info_cache;
 
 	read_lock_irqsave(&gpc->lock, flags);
-	if (!kvm_gpc_check(kvm, gpc, gpc->gpa, sizeof(struct vcpu_info))) {
+	if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) {
 		/*
 		 * Could not access the vcpu_info. Set the bit in-kernel
 		 * and prod the vCPU to deliver it for itself.
@@ -1479,7 +1470,7 @@ static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 			break;
 
 		idx = srcu_read_lock(&kvm->srcu);
-		rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE);
+		rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa);
 		srcu_read_unlock(&kvm->srcu, idx);
 	} while(!rc);
 
@@ -1809,11 +1800,11 @@ void kvm_xen_init_vcpu(struct kvm_vcpu *vcpu)
 	timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0);
 
 	kvm_gpc_init(&vcpu->arch.xen.runstate_cache, vcpu->kvm, NULL,
-		     KVM_HOST_USES_PFN);
+		     KVM_HOST_USES_PFN, sizeof(struct vcpu_runstate_info));
 	kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache, vcpu->kvm, NULL,
-		     KVM_HOST_USES_PFN);
+		     KVM_HOST_USES_PFN, sizeof(struct vcpu_info));
 	kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache, vcpu->kvm, NULL,
-		     KVM_HOST_USES_PFN);
+		     KVM_HOST_USES_PFN, sizeof(struct pvclock_vcpu_time_info));
 }
 
 void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
@@ -1831,7 +1822,8 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
 void kvm_xen_init_vm(struct kvm *kvm)
 {
 	idr_init(&kvm->arch.xen.evtchn_ports);
-	kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL, KVM_HOST_USES_PFN);
+	kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL, KVM_HOST_USES_PFN,
+		     PAGE_SIZE);
 }
 
 void kvm_xen_destroy_vm(struct kvm *kvm)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e5e70607a5ef..c79f2e122ac8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1252,13 +1252,15 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
  *		changes!---will also force @vcpu to exit the guest and
  *		refresh the cache); and/or if the PFN used directly
  *		by KVM (and thus needs a kernel virtual mapping).
+ * @len:	sanity check; the range being access must fit a single page.
  *
  * This sets up a gfn_to_pfn_cache by initializing locks and assigning the
  * immutable attributes.  Note, the cache must be zero-allocated (or zeroed by
  * the caller before init).
  */
 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
-		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage);
+		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+		  unsigned long len);
 
 /**
  * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest
@@ -1266,7 +1268,6 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
  *
  * @gpc:	struct gfn_to_pfn_cache object.
  * @gpa:	guest physical address to map.
- * @len:	sanity check; the range being access must fit a single page.
  *
  * @return:	0 for success.
  *		-EINVAL for a mapping which would cross a page boundary.
@@ -1276,7 +1277,7 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
  * invalidations to be processed.  Callers are required to use kvm_gpc_check()
  * to ensure that the cache is valid before accessing the target page.
  */
-int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len);
+int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
 
 /**
  * kvm_gpc_check - check validity of a gfn_to_pfn_cache.
@@ -1284,7 +1285,6 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
  * @kvm:	pointer to kvm instance.
  * @gpc:	struct gfn_to_pfn_cache object.
  * @gpa:	current guest physical address to map.
- * @len:	sanity check; the range being access must fit a single page.
  *
  * @return:	%true if the cache is still valid and the address matches.
  *		%false if the cache is not valid.
@@ -1296,8 +1296,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
  * Callers in IN_GUEST_MODE may do so without locking, although they should
  * still hold a read lock on kvm->scru for the memslot checks.
  */
-bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
-		   unsigned long len);
+bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa);
 
 /**
  * kvm_gpc_refresh - update a previously initialized cache.
@@ -1317,8 +1316,7 @@ bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
  * still lock and check the cache status, as this function does not return
  * with the lock still held to permit access.
  */
-int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
-		    unsigned long len);
+int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa);
 
 /**
  * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache.
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 76de36e56cdf..d66b276d29e0 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -74,6 +74,7 @@ struct gfn_to_pfn_cache {
 	struct mutex refresh_lock;
 	void *khva;
 	kvm_pfn_t pfn;
+	unsigned long len;
 	enum pfn_cache_usage usage;
 	bool active;
 	bool valid;
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 6756dfa60d5a..34883ad12536 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -76,15 +76,14 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 	}
 }
 
-bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
-		   unsigned long len)
+bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 
 	if (!gpc->active)
 		return false;
 
-	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
+	if ((gpa & ~PAGE_MASK) + gpc->len > PAGE_SIZE)
 		return false;
 
 	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
@@ -238,8 +237,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 	return -EFAULT;
 }
 
-int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
-		    unsigned long len)
+int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	unsigned long page_offset = gpa & ~PAGE_MASK;
@@ -253,7 +251,7 @@ int kvm_gpc_refresh(struct kvm *kvm, struct
gfn_to_pfn_cache *gpc, gpa_t gpa,
 	 * If must fit within a single page. The 'len' argument is
 	 * only to enforce that.
 	 */
-	if (page_offset + len > PAGE_SIZE)
+	if (page_offset + gpc->len > PAGE_SIZE)
 		return -EINVAL;
 
 	/*
@@ -358,7 +356,8 @@ void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 EXPORT_SYMBOL_GPL(kvm_gpc_unmap);
 
 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
-		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage)
+		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+		  unsigned long len)
 {
 	WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
 	WARN_ON_ONCE((usage & KVM_GUEST_USES_PFN) && !vcpu);
@@ -369,10 +368,11 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
 	gpc->kvm = kvm;
 	gpc->vcpu = vcpu;
 	gpc->usage = usage;
+	gpc->len = len;
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_init);
 
-int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
+int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 {
 	struct kvm *kvm = gpc->kvm;
 
@@ -395,7 +395,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 		gpc->active = true;
 		write_unlock_irq(&gpc->lock);
 	}
-	return kvm_gpc_refresh(kvm, gpc, gpa, len);
+	return kvm_gpc_refresh(kvm, gpc, gpa);
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_activate);
 
-- 
2.38.0.413.g74048e4d9e-goog

From nobody Tue Apr 7 07:50:45 2026
Reply-To: Sean Christopherson
Date: Thu, 13 Oct 2022 21:12:26 +0000
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>
Message-ID: <20221013211234.1318131-9-seanjc@google.com>
Subject: [PATCH v2 08/16] KVM: Use gfn_to_pfn_cache's immutable "kvm" in kvm_gpc_check()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse

From: Michal Luczaj

Make kvm_gpc_check() use kvm instance cached in gfn_to_pfn_cache.

Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c       |  2 +-
 arch/x86/kvm/xen.c       | 12 ++++++------
 include/linux/kvm_host.h |  3 +--
 virt/kvm/pfncache.c      |  4 ++--
 4 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0b4fa3455f3a..b357a84f8c49 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3033,7 +3033,7 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
 	WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock));
 
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
+	while (!kvm_gpc_check(gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 6f5a5507392e..c7304f37c438 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -211,7 +211,7 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 		return;
 
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
+	while (!kvm_gpc_check(gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		/* When invoked from kvm_sched_out() we cannot sleep */
@@ -344,7 +344,7 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v)
 	 * little more honest about it.
 	 */
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
+	while (!kvm_gpc_check(gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
@@ -407,7 +407,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		     sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending));
 
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
+	while (!kvm_gpc_check(gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		/*
@@ -947,7 +947,7 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
 
 	read_lock_irqsave(&gpc->lock, flags);
 	idx = srcu_read_lock(&kvm->srcu);
-	if (!kvm_gpc_check(kvm, gpc, gpc->gpa))
+	if (!kvm_gpc_check(gpc, gpc->gpa))
 		goto out_rcu;
 
 	ret = false;
@@ -1338,7 +1338,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	idx = srcu_read_lock(&kvm->srcu);
 
 	read_lock_irqsave(&gpc->lock, flags);
-	if (!kvm_gpc_check(kvm, gpc, gpc->gpa))
+	if (!kvm_gpc_check(gpc, gpc->gpa))
 		goto out_rcu;
 
 	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) {
@@ -1372,7 +1372,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	gpc = &vcpu->arch.xen.vcpu_info_cache;
 
 	read_lock_irqsave(&gpc->lock, flags);
-	if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) {
+	if (!kvm_gpc_check(gpc, gpc->gpa)) {
 		/*
 		 * Could not access the vcpu_info. Set the bit in-kernel
 		 * and prod the vCPU to deliver it for itself.
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c79f2e122ac8..ad8ef7f2d705 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1282,7 +1282,6 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
 /**
  * kvm_gpc_check - check validity of a gfn_to_pfn_cache.
  *
- * @kvm:	pointer to kvm instance.
  * @gpc:	struct gfn_to_pfn_cache object.
  * @gpa:	current guest physical address to map.
  *
@@ -1296,7 +1295,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
  * Callers in IN_GUEST_MODE may do so without locking, although they should
  * still hold a read lock on kvm->scru for the memslot checks.
  */
-bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa);
+bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
 
 /**
  * kvm_gpc_refresh - update a previously initialized cache.
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 34883ad12536..6fe76fb4d228 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -76,9 +76,9 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 	}
 }
 
-bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa)
+bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 {
-	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memslots *slots = kvm_memslots(gpc->kvm);
 
 	if (!gpc->active)
 		return false;
-- 
2.38.0.413.g74048e4d9e-goog

From nobody Tue Apr 7 07:50:45 2026
Reply-To: Sean Christopherson
Date: Thu, 13 Oct 2022 21:12:27 +0000
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>
Message-ID: <20221013211234.1318131-10-seanjc@google.com>
Subject: [PATCH v2 10/16] KVM: Clean up hva_to_pfn_retry()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse

From: Michal Luczaj

Make hva_to_pfn_retry() use kvm instance cached in gfn_to_pfn_cache.

Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
Signed-off-by: Sean Christopherson
---
 virt/kvm/pfncache.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 6fe76fb4d228..ef7ac1666847 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -138,7 +138,7 @@ static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_s
 	return kvm->mmu_invalidate_seq != mmu_seq;
 }
 
-static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 {
 	/* Note, the new page offset may be different than the old!
	 */
 	void *old_khva = gpc->khva - offset_in_page(gpc->khva);
@@ -158,7 +158,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 	gpc->valid = false;
 
 	do {
-		mmu_seq = kvm->mmu_invalidate_seq;
+		mmu_seq = gpc->kvm->mmu_invalidate_seq;
 		smp_rmb();
 
 		write_unlock_irq(&gpc->lock);
@@ -216,7 +216,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 		 * attempting to refresh.
 		 */
 		WARN_ON_ONCE(gpc->valid);
-	} while (mmu_notifier_retry_cache(kvm, mmu_seq));
+	} while (mmu_notifier_retry_cache(gpc->kvm, mmu_seq));
 
 	gpc->valid = true;
 	gpc->pfn = new_pfn;
@@ -293,7 +293,7 @@ int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 	 * drop the lock and do the HVA to PFN lookup again.
 	 */
 	if (!gpc->valid || old_uhva != gpc->uhva) {
-		ret = hva_to_pfn_retry(kvm, gpc);
+		ret = hva_to_pfn_retry(gpc);
 	} else {
 		/* If the HVA→PFN mapping was already valid, don't unmap it. */
 		old_pfn = KVM_PFN_ERR_FAULT;
-- 
2.38.0.413.g74048e4d9e-goog

From nobody Tue Apr 7 07:50:45 2026
Reply-To: Sean Christopherson
Date: Thu, 13 Oct 2022 21:12:28 +0000
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>
Message-ID: <20221013211234.1318131-11-seanjc@google.com>
Subject: [PATCH v2 10/16] KVM: Use gfn_to_pfn_cache's immutable "kvm" in kvm_gpc_refresh()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse

From: Michal Luczaj

Make kvm_gpc_refresh() use kvm instance cached in gfn_to_pfn_cache.

No functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
[sean: leave kvm_gpc_unmap() as-is]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c       |  2 +-
 arch/x86/kvm/xen.c       |  8 ++++----
 include/linux/kvm_host.h |  8 +++-----
 virt/kvm/pfncache.c      |  6 +++---
 4 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b357a84f8c49..d370d06bb07a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3036,7 +3036,7 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
 	while (!kvm_gpc_check(gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
+		if (kvm_gpc_refresh(gpc, gpc->gpa))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index c7304f37c438..920ba5ca3016 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -218,7 +218,7 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 		if (state == RUNSTATE_runnable)
 			return;
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
+		if (kvm_gpc_refresh(gpc, gpc->gpa))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -347,7 +347,7 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v)
 	while (!kvm_gpc_check(gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock,
flags); - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc, gpc->gpa)) return; read_lock_irqsave(&gpc->lock, flags); @@ -421,7 +421,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) if (in_atomic() || !task_is_running(current)) return 1; - if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) { + if (kvm_gpc_refresh(gpc, gpc->gpa)) { /* * If this failed, userspace has screwed up the * vcpu_info mapping. No interrupts for you. @@ -1470,7 +1470,7 @@ static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm) break; idx = srcu_read_lock(&kvm->srcu); - rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa); + rc = kvm_gpc_refresh(gpc, gpc->gpa); srcu_read_unlock(&kvm->srcu, idx); } while(!rc); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index ad8ef7f2d705..b63d2abbef56 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1300,22 +1300,20 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_refresh - update a previously initialized cache. * - * @kvm: pointer to kvm instance. * @gpc: struct gfn_to_pfn_cache object. * @gpa: updated guest physical address to map. - * @len: sanity check; the range being access must fit a single page. * * @return: 0 for success. * -EINVAL for a mapping which would cross a page boundary. - * -EFAULT for an untranslatable guest physical address. + * -EFAULT for an untranslatable guest physical address. * * This will attempt to refresh a gfn_to_pfn_cache. Note that a successful - * returm from this function does not mean the page can be immediately + * return from this function does not mean the page can be immediately * accessed because it may have raced with an invalidation. Callers must * still lock and check the cache status, as this function does not return * with the lock still held to permit access.
*/ -int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa); +int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa); /** * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache. diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index ef7ac1666847..432b150bd9f1 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -237,9 +237,9 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) return -EFAULT; } -int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa) +int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa) { - struct kvm_memslots *slots = kvm_memslots(kvm); + struct kvm_memslots *slots = kvm_memslots(gpc->kvm); unsigned long page_offset = gpa & ~PAGE_MASK; bool unmap_old = false; unsigned long old_uhva; @@ -395,7 +395,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa) gpc->active = true; write_unlock_irq(&gpc->lock); } - return kvm_gpc_refresh(kvm, gpc, gpa); + return kvm_gpc_refresh(gpc, gpa); } EXPORT_SYMBOL_GPL(kvm_gpc_activate); -- 2.38.0.413.g74048e4d9e-goog From nobody Tue Apr 7 07:50:45 2026
Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:29 +0000 In-Reply-To:
<20221013211234.1318131-1-seanjc@google.com> Mime-Version: 1.0 References: <20221013211234.1318131-1-seanjc@google.com> X-Mailer: git-send-email 2.38.0.413.g74048e4d9e-goog Message-ID: <20221013211234.1318131-12-seanjc@google.com> Subject: [PATCH v2 11/16] KVM: Drop KVM's API to allow temporarily unmapping gfn=>pfn cache From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Drop kvm_gpc_unmap() as it has no users and unclear requirements. The API was added as part of the original gfn_to_pfn_cache support, but its sole usage[*] was never merged. Fold the guts of kvm_gpc_unmap() into the deactivate path and drop the API. Omit acquiring refresh_lock, as concurrent calls to kvm_gpc_deactivate() are not allowed (this is not enforced, e.g. via lockdep, due to it being called during vCPU destruction). If/when temporary unmapping makes a comeback, the desirable behavior is likely to restrict temporary unmapping to vCPU-exclusive mappings and require the vcpu->mutex be held to serialize unmap. Use of the refresh_lock to protect unmapping was somewhat speculatively added by commit 93984f19e7bc ("KVM: Fully serialize gfn=>pfn cache refresh via mutex") to guard against concurrent unmaps, but the primary use case of the temporary unmap, nested virtualization[*], doesn't actually need or want concurrent unmaps.
[*] https://lore.kernel.org/all/20211210163625.2886-7-dwmw2@infradead.org Signed-off-by: Sean Christopherson --- include/linux/kvm_host.h | 12 ----------- virt/kvm/pfncache.c | 44 +++++++++++++++------------------- 2 files changed, 16 insertions(+), 40 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index b63d2abbef56..22cf43389954 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1315,18 +1315,6 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa); */ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa); -/** - * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache. - * - * @kvm: pointer to kvm instance. - * @gpc: struct gfn_to_pfn_cache object. - * - * This unmaps the referenced page. The cache is left in the invalid state - * but at least the mapping from GPA to userspace HVA will remain cached - * and can be reused on a subsequent refresh. - */ -void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); - /** * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache. * diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 432b150bd9f1..62b47feed36c 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -328,33 +328,6 @@ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa) } EXPORT_SYMBOL_GPL(kvm_gpc_refresh); -void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) -{ - void *old_khva; - kvm_pfn_t old_pfn; - - mutex_lock(&gpc->refresh_lock); - write_lock_irq(&gpc->lock); - - gpc->valid = false; - - old_khva = gpc->khva - offset_in_page(gpc->khva); - old_pfn = gpc->pfn; - - /* - * We can leave the GPA → uHVA map cache intact but the PFN - * lookup will need to be redone even for the same page.
- gpc->khva = NULL; - gpc->pfn = KVM_PFN_ERR_FAULT; - - write_unlock_irq(&gpc->lock); - mutex_unlock(&gpc->refresh_lock); - - gpc_unmap_khva(old_pfn, old_khva); -} -EXPORT_SYMBOL_GPL(kvm_gpc_unmap); - void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, unsigned long len) @@ -402,6 +375,8 @@ EXPORT_SYMBOL_GPL(kvm_gpc_activate); void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc) { struct kvm *kvm = gpc->kvm; + kvm_pfn_t old_pfn; + void *old_khva; if (gpc->active) { /* @@ -411,13 +386,26 @@ void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc) */ write_lock_irq(&gpc->lock); gpc->active = false; + gpc->valid = false; + + /* + * Leave the GPA => uHVA cache intact, it's protected by the + * memslot generation. The PFN lookup needs to be redone every + * time as mmu_notifier protection is lost when the cache is + * removed from the VM's gpc_list. + */ + old_khva = gpc->khva - offset_in_page(gpc->khva); + gpc->khva = NULL; + + old_pfn = gpc->pfn; + gpc->pfn = KVM_PFN_ERR_FAULT; write_unlock_irq(&gpc->lock); spin_lock(&kvm->gpc_lock); list_del(&gpc->list); spin_unlock(&kvm->gpc_lock); - kvm_gpc_unmap(kvm, gpc); + gpc_unmap_khva(old_pfn, old_khva); } } EXPORT_SYMBOL_GPL(kvm_gpc_deactivate); -- 2.38.0.413.g74048e4d9e-goog From nobody Tue Apr 7 07:50:45 2026
Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:30 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> Mime-Version: 1.0 References: <20221013211234.1318131-1-seanjc@google.com> X-Mailer: git-send-email 2.38.0.413.g74048e4d9e-goog Message-ID: <20221013211234.1318131-13-seanjc@google.com> Subject: [PATCH v2 12/16] KVM: Do not partially reinitialize gfn=>pfn cache during activation From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Don't partially reinitialize a gfn=>pfn cache when activating the cache, and instead assert that the cache is not valid during activation. Bug the VM if the assertion fails, as use-after-free and/or data corruption is all but guaranteed if KVM ends up with a valid-but-inactive cache.
Signed-off-by: Sean Christopherson --- virt/kvm/pfncache.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 62b47feed36c..2d5b417e50ac 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -342,6 +342,9 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, gpc->vcpu = vcpu; gpc->usage = usage; gpc->len = len; + gpc->pfn = KVM_PFN_ERR_FAULT; + gpc->uhva = KVM_HVA_ERR_BAD; + } EXPORT_SYMBOL_GPL(kvm_gpc_init); @@ -350,10 +353,8 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa) struct kvm *kvm = gpc->kvm; if (!gpc->active) { - gpc->khva = NULL; - gpc->pfn = KVM_PFN_ERR_FAULT; - gpc->uhva = KVM_HVA_ERR_BAD; - gpc->valid = false; + if (KVM_BUG_ON(gpc->valid, kvm)) + return -EIO; spin_lock(&kvm->gpc_lock); list_add(&gpc->list, &kvm->gpc_list); -- 2.38.0.413.g74048e4d9e-goog From nobody Tue Apr 7 07:50:45 2026
Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:31 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> Mime-Version: 1.0 References: <20221013211234.1318131-1-seanjc@google.com> X-Mailer: git-send-email 2.38.0.413.g74048e4d9e-goog Message-ID:
<20221013211234.1318131-14-seanjc@google.com> Subject: [PATCH v2 13/16] KVM: Drop @gpa from exported gfn=>pfn cache check() and refresh() helpers From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Drop the @gpa param from the exported check()+refresh() helpers and limit changing the cache's GPA to the activate path. All external users just feed in gpc->gpa, i.e. this is a fancy nop. Allowing users to change the GPA at check()+refresh() is dangerous as those helpers explicitly allow concurrent calls, e.g. KVM could get into a livelock scenario. It's also unclear as to what the expected behavior should be if multiple tasks attempt to refresh with different GPAs. Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 4 ++-- arch/x86/kvm/xen.c | 20 ++++++++++---------- include/linux/kvm_host.h | 6 ++---- virt/kvm/pfncache.c | 16 +++++++++------- 4 files changed, 23 insertions(+), 23 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index d370d06bb07a..2db8515d38dd 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3033,10 +3033,10 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v, WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock)); read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc)) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gpc_refresh(gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc)) return; read_lock_irqsave(&gpc->lock, flags); diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 920ba5ca3016..529d3f4c1b9d 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -211,14 +211,14 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state) return;
read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc)) { read_unlock_irqrestore(&gpc->lock, flags); /* When invoked from kvm_sched_out() we cannot sleep */ if (state == RUNSTATE_runnable) return; - if (kvm_gpc_refresh(gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc)) return; read_lock_irqsave(&gpc->lock, flags); @@ -344,10 +344,10 @@ void kvm_xen_inject_pending_events(struct kvm_vcpu *v) * little more honest about it. */ read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc)) { read_unlock_irqrestore(&gpc->lock, flags); - if (kvm_gpc_refresh(gpc, gpc->gpa)) + if (kvm_gpc_refresh(gpc)) return; read_lock_irqsave(&gpc->lock, flags); @@ -407,7 +407,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending)); read_lock_irqsave(&gpc->lock, flags); - while (!kvm_gpc_check(gpc, gpc->gpa)) { + while (!kvm_gpc_check(gpc)) { read_unlock_irqrestore(&gpc->lock, flags); /* @@ -421,7 +421,7 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v) if (in_atomic() || !task_is_running(current)) return 1; - if (kvm_gpc_refresh(gpc, gpc->gpa)) { + if (kvm_gpc_refresh(gpc)) { /* * If this failed, userspace has screwed up the * vcpu_info mapping. No interrupts for you.
@@ -947,7 +947,7 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports, read_lock_irqsave(&gpc->lock, flags); idx = srcu_read_lock(&kvm->srcu); - if (!kvm_gpc_check(gpc, gpc->gpa)) + if (!kvm_gpc_check(gpc)) goto out_rcu; ret = false; @@ -1338,7 +1338,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) idx = srcu_read_lock(&kvm->srcu); read_lock_irqsave(&gpc->lock, flags); - if (!kvm_gpc_check(gpc, gpc->gpa)) + if (!kvm_gpc_check(gpc)) goto out_rcu; if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) { @@ -1372,7 +1372,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm) gpc = &vcpu->arch.xen.vcpu_info_cache; read_lock_irqsave(&gpc->lock, flags); - if (!kvm_gpc_check(gpc, gpc->gpa)) { + if (!kvm_gpc_check(gpc)) { /* * Could not access the vcpu_info. Set the bit in-kernel * and prod the vCPU to deliver it for itself. @@ -1470,7 +1470,7 @@ static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm) break; idx = srcu_read_lock(&kvm->srcu); - rc = kvm_gpc_refresh(gpc, gpc->gpa); + rc = kvm_gpc_refresh(gpc); srcu_read_unlock(&kvm->srcu, idx); } while(!rc); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 22cf43389954..92cf0be21974 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1283,7 +1283,6 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa); * kvm_gpc_check - check validity of a gfn_to_pfn_cache. * * @gpc: struct gfn_to_pfn_cache object. - * @gpa: current guest physical address to map. * * @return: %true if the cache is still valid and the address matches. * %false if the cache is not valid. @@ -1295,13 +1294,12 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa); * Callers in IN_GUEST_MODE may do so without locking, although they should * still hold a read lock on kvm->scru for the memslot checks.
*/ -bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa); +bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc); /** * kvm_gpc_refresh - update a previously initialized cache. * * @gpc: struct gfn_to_pfn_cache object. - * @gpa: updated guest physical address to map. * * @return: 0 for success. * -EINVAL for a mapping which would cross a page boundary. @@ -1313,7 +1311,7 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa); * still lock and check the cache status, as this function does not return * with the lock still held to permit access. */ -int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa); +int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc); /** * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache. diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 2d5b417e50ac..f211c878788b 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -76,17 +76,14 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start, } } -bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa) +bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc) { struct kvm_memslots *slots = kvm_memslots(gpc->kvm); if (!gpc->active) return false; - if ((gpa & ~PAGE_MASK) + gpc->len > PAGE_SIZE) - return false; - - if (gpc->gpa != gpa || gpc->generation != slots->generation || + if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva)) return false; @@ -237,7 +234,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) return -EFAULT; } -int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa) +static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa) { struct kvm_memslots *slots = kvm_memslots(gpc->kvm); unsigned long page_offset = gpa & ~PAGE_MASK; @@ -326,6 +323,11 @@ int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa) return ret; } + +int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc) +{ + return __kvm_gpc_refresh(gpc, gpc->gpa); +}
EXPORT_SYMBOL_GPL(kvm_gpc_refresh); void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm, @@ -369,7 +371,7 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa) gpc->active = true; write_unlock_irq(&gpc->lock); } - return kvm_gpc_refresh(gpc, gpa); + return __kvm_gpc_refresh(gpc, gpa); } EXPORT_SYMBOL_GPL(kvm_gpc_activate); -- 2.38.0.413.g74048e4d9e-goog From nobody Tue Apr 7 07:50:45 2026
Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:32 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> Mime-Version: 1.0 References: <20221013211234.1318131-1-seanjc@google.com> X-Mailer: git-send-email 2.38.0.413.g74048e4d9e-goog Message-ID: <20221013211234.1318131-15-seanjc@google.com> Subject: [PATCH v2 14/16] KVM: Skip unnecessary "unmap" if gpc is already valid during refresh From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj , David Woodhouse Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" When refreshing a gfn=>pfn cache, skip straight to unlocking
if the cache is already valid instead of stuffing the "old" variables to turn the unmapping outro into a nop. Signed-off-by: Sean Christopherson --- virt/kvm/pfncache.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index f211c878788b..57d47f06637d 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -293,9 +293,8 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa) ret = hva_to_pfn_retry(gpc); } else { /* If the HVA→PFN mapping was already valid, don't unmap it. */ - old_pfn = KVM_PFN_ERR_FAULT; - old_khva = NULL; ret = 0; + goto out_unlock; } out: -- 2.38.0.413.g74048e4d9e-goog From nobody Tue Apr 7 07:50:45 2026
Reply-To: Sean Christopherson Date: Thu, 13 Oct 2022 21:12:33 +0000 In-Reply-To: <20221013211234.1318131-1-seanjc@google.com> Mime-Version: 1.0 References: <20221013211234.1318131-1-seanjc@google.com> X-Mailer: git-send-email 2.38.0.413.g74048e4d9e-goog Message-ID: <20221013211234.1318131-16-seanjc@google.com> Subject: [PATCH v2 15/16] KVM: selftests: Add tests in xen_shinfo_test to detect lock races From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal
Luczaj , David Woodhouse Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Michal Luczaj Tests for races between shinfo_cache (de)activation and hypercall+ioctl() processing. KVM has had bugs where activating the shared info cache multiple times and/or with concurrent users results in lock corruption, NULL pointer dereferences, and other fun. For the timer injection testcase (#22), re-arm the timer until the IRQ is successfully injected. If the timer expires while the shared info is deactivated (invalid), KVM will drop the event. Signed-off-by: Michal Luczaj Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson --- .../selftests/kvm/x86_64/xen_shinfo_test.c | 140 ++++++++++++++++++ 1 file changed, 140 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/t= esting/selftests/kvm/x86_64/xen_shinfo_test.c index 8a5cb800f50e..caa3f5ab9e10 100644 --- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c +++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c @@ -15,9 +15,13 @@ #include #include #include +#include =20 #include =20 +/* Defined in include/linux/kvm_types.h */ +#define GPA_INVALID (~(ulong)0) + #define SHINFO_REGION_GVA 0xc0000000ULL #define SHINFO_REGION_GPA 0xc0000000ULL #define SHINFO_REGION_SLOT 10 @@ -44,6 +48,8 @@ =20 #define MIN_STEAL_TIME 50000 =20 +#define SHINFO_RACE_TIMEOUT 2 /* seconds */ + #define __HYPERVISOR_set_timer_op 15 #define __HYPERVISOR_sched_op 29 #define __HYPERVISOR_event_channel_op 32 @@ -148,6 +154,7 @@ static void guest_wait_for_irq(void) static void guest_code(void) { struct vcpu_runstate_info *rs =3D (void *)RUNSTATE_VADDR; + int i; =20 __asm__ __volatile__( "sti\n" @@ -325,6 +332,49 @@ static void guest_code(void) guest_wait_for_irq(); =20 GUEST_SYNC(21); + /* Racing host ioctls */ + + guest_wait_for_irq(); + + GUEST_SYNC(22); + /* Racing vmcall against host 
ioctl */ + + ports[0] =3D 0; + + p =3D (struct sched_poll) { + .ports =3D ports, + .nr_ports =3D 1, + .timeout =3D 0 + }; + +wait_for_timer: + /* + * Poll for a timer wake event while the worker thread is mucking with + * the shared info. KVM XEN drops timer IRQs if the shared info is + * invalid when the timer expires. Arbitrarily poll 100 times before + * giving up and asking the VMM to re-arm the timer. 100 polls should + * consume enough time to beat on KVM without taking too long if the + * timer IRQ is dropped due to an invalid event channel. + */ + for (i =3D 0; i < 100 && !guest_saw_irq; i++) + asm volatile("vmcall" + : "=3Da" (rax) + : "a" (__HYPERVISOR_sched_op), + "D" (SCHEDOP_poll), + "S" (&p) + : "memory"); + + /* + * Re-send the timer IRQ if it was (likely) dropped due to the timer + * expiring while the event channel was invalid. + */ + if (!guest_saw_irq) { + GUEST_SYNC(23); + goto wait_for_timer; + } + guest_saw_irq =3D false; + + GUEST_SYNC(24); } =20 static int cmp_timespec(struct timespec *a, struct timespec *b) @@ -352,11 +402,36 @@ static void handle_alrm(int sig) TEST_FAIL("IRQ delivery timed out"); } =20 +static void *juggle_shinfo_state(void *arg) +{ + struct kvm_vm *vm =3D (struct kvm_vm *)arg; + + struct kvm_xen_hvm_attr cache_init =3D { + .type =3D KVM_XEN_ATTR_TYPE_SHARED_INFO, + .u.shared_info.gfn =3D SHINFO_REGION_GPA / PAGE_SIZE + }; + + struct kvm_xen_hvm_attr cache_destroy =3D { + .type =3D KVM_XEN_ATTR_TYPE_SHARED_INFO, + .u.shared_info.gfn =3D GPA_INVALID + }; + + for (;;) { + __vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_init); + __vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_destroy); + pthread_testcancel(); + }; + + return NULL; +} + int main(int argc, char *argv[]) { struct timespec min_ts, max_ts, vm_ts; struct kvm_vm *vm; + pthread_t thread; bool verbose; + int ret; =20 verbose =3D argc > 1 && (!strncmp(argv[1], "-v", 3) || !strncmp(argv[1], "--verbose", 10)); @@ -785,6 +860,71 @@ int main(int argc, char *argv[]) case 21: 
TEST_ASSERT(!evtchn_irq_expected, "Expected event channel IRQ but it didn't happen"); + alarm(0); + + if (verbose) + printf("Testing shinfo lock corruption (KVM_XEN_HVM_EVTCHN_SEND)\n"); + + ret =3D pthread_create(&thread, NULL, &juggle_shinfo_state, (void *)vm= ); + TEST_ASSERT(ret =3D=3D 0, "pthread_create() failed: %s", strerror(ret)= ); + + struct kvm_irq_routing_xen_evtchn uxe =3D { + .port =3D 1, + .vcpu =3D vcpu->id, + .priority =3D KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL + }; + + evtchn_irq_expected =3D true; + for (time_t t =3D time(NULL) + SHINFO_RACE_TIMEOUT; time(NULL) < t;) + __vm_ioctl(vm, KVM_XEN_HVM_EVTCHN_SEND, &uxe); + break; + + case 22: + TEST_ASSERT(!evtchn_irq_expected, + "Expected event channel IRQ but it didn't happen"); + + if (verbose) + printf("Testing shinfo lock corruption (SCHEDOP_poll)\n"); + + shinfo->evtchn_pending[0] =3D 1; + + evtchn_irq_expected =3D true; + tmr.u.timer.expires_ns =3D rs->state_entry_time + + SHINFO_RACE_TIMEOUT * 1000000000ULL; + vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &tmr); + break; + + case 23: + /* + * Optional and possibly repeated sync point. + * Injecting the timer IRQ may fail if the + * shinfo is invalid when the timer expires. + * If the timer has expired but the IRQ hasn't + * been delivered, rearm the timer and retry. + */ + vcpu_ioctl(vcpu, KVM_XEN_VCPU_GET_ATTR, &tmr); + + /* Resume the guest if the timer is still pending. */ + if (tmr.u.timer.expires_ns) + break; + + /* All done if the IRQ was delivered. 
*/ + if (!evtchn_irq_expected) + break; + + tmr.u.timer.expires_ns =3D rs->state_entry_time + + SHINFO_RACE_TIMEOUT * 1000000000ULL; + vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &tmr); + break; + case 24: + TEST_ASSERT(!evtchn_irq_expected, + "Expected event channel IRQ but it didn't happen"); + + ret =3D pthread_cancel(thread); + TEST_ASSERT(ret =3D=3D 0, "pthread_cancel() failed: %s", strerror(ret)= ); + + ret =3D pthread_join(thread, 0); + TEST_ASSERT(ret =3D=3D 0, "pthread_join() failed: %s", strerror(ret)); goto done; =20 case 0x20: --=20 2.38.0.413.g74048e4d9e-goog From nobody Tue Apr 7 07:50:45 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0DD51C4332F for ; Thu, 13 Oct 2022 21:15:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230221AbiJMVP1 (ORCPT ); Thu, 13 Oct 2022 17:15:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55380 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230199AbiJMVOq (ORCPT ); Thu, 13 Oct 2022 17:14:46 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C782319375D for ; Thu, 13 Oct 2022 14:13:51 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id s13-20020a170902ea0d00b00183243c7a0fso1995857plg.3 for ; Thu, 13 Oct 2022 14:13:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=Ayk7WF1bhRdGwOMvSXaXxnD19B/O1/9Is88g4dKNZ4g=; b=RGZk+Vdb53lCuSorY1wAskL95mWskPoDQmOT0OPifoQMxC88+I+sUOzBcNwM2ES118 iW/lEHriL4Qg+SyNCkAnOE/F6nkNbWpyMPrn2QueInKh5I9bN/usi/JiSlITCWwhzV2w 
Subject: [PATCH v2 16/16] KVM: selftests: Mark "guest_saw_irq" as volatile in xen_shinfo_test
From: Sean Christopherson
Reply-To: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michal Luczaj, David Woodhouse
Date: Thu, 13 Oct 2022 21:12:34 +0000
Message-ID: <20221013211234.1318131-17-seanjc@google.com>
In-Reply-To: <20221013211234.1318131-1-seanjc@google.com>
References: <20221013211234.1318131-1-seanjc@google.com>

Tag "guest_saw_irq" as "volatile" to ensure that the compiler will never
optimize away lookups.  Relying on the compiler thinking that the flag is
global and thus might change also works, but it's subtle, less robust, and
looks like a bug at first glance, e.g. risks being "fixed" and breaking
the test.

Make the flag "static" as well since convincing the compiler it's global
is no longer necessary.

Alternatively, the flag could be accessed with {READ,WRITE}_ONCE(), but
literally every access would need the wrappers, and eking out performance
isn't exactly top priority for selftests.

Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index caa3f5ab9e10..2a5727188c8d 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -132,7 +132,7 @@ struct {
 	struct kvm_irq_routing_entry entries[2];
 } irq_routes;
 
-bool guest_saw_irq;
+static volatile bool guest_saw_irq;
 
 static void evtchn_handler(struct ex_regs *regs)
 {
-- 
2.38.0.413.g74048e4d9e-goog