From: Roy Hopkins
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev
Cc: Roy Hopkins, Paolo Bonzini, Sean Christopherson, Borislav Petkov, Dave Hansen, Ingo Molnar, Thomas Gleixner, Michael Roth, Ashish Kalra, Joerg Roedel, Tom Lendacky
Subject: [RFC PATCH 1/5] kvm: Move kvm_vcpu fields into common structure
Date: Mon, 16 Sep 2024 19:17:53 +0100
Message-ID:
In-Reply-To:
References:
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

This commit is in preparation for supporting multiple VM privilege levels
(VMPLs) within KVM by creating an instance of struct kvm_vcpu for each
VMPL. Support for multiple VMPLs will be introduced in a later commit.

This commit prepares struct kvm_vcpu by allowing the fields that are common
to all VMPLs to be accessed from the struct kvm_vcpu of any VMPL. To support
common fields across multiple struct kvm_vcpu instances, a sub-structure is
created within struct kvm_vcpu to hold them. The fields that have (for now)
been identified as common across VMPLs for SEV-SNP are moved into this
sub-structure. When a struct kvm_vcpu is initialised, a pointer to the
common structure is stored within it, so every reference to a common field
must now go through that pointer, e.g. vcpu->common->[common_field].

As a result, this patch makes many small changes to many files, which is an
unfortunate side-effect of this implementation.

Signed-off-by: Roy Hopkins
---
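[ Reviewer note, not part of the patch: a minimal sketch of the shape of
  the change described above, assuming a simplified and hypothetical field
  subset and helper. The actual structure introduced by this patch lives in
  include/linux/kvm_host.h and differs in detail. ]

	/*
	 * Illustrative sketch only -- simplified field subset, not the
	 * exact layout added by this patch.
	 */
	struct kvm_vcpu_common {
		struct mutex mutex;		/* was vcpu->mutex */
		struct kvm_run *run;		/* was vcpu->run */
		int mode;			/* was vcpu->mode */
		unsigned int guest_debug;	/* was vcpu->guest_debug */
		int mmio_needed;		/* was vcpu->mmio_needed */
		struct kvm_vcpu_stat stat;	/* was vcpu->stat */
	};

	struct kvm_vcpu {
		struct kvm *kvm;
		/*
		 * Common state sits behind a pointer so that, in a later
		 * patch, one struct kvm_vcpu per VMPL can share the same
		 * underlying data.
		 */
		struct kvm_vcpu_common *common;
		/* ... per-VMPL fields stay directly in struct kvm_vcpu ... */
	};

	/* Call sites then change mechanically, e.g. (hypothetical helper): */
	static inline bool vcpu_is_in_guest_mode(struct kvm_vcpu *vcpu)
	{
		return READ_ONCE(vcpu->common->mode) == IN_GUEST_MODE;
	}

The bulk of the diff below is the mechanical vcpu->field to
vcpu->common->field conversion that this layout implies.
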
 Documentation/virt/kvm/api.rst | 2 +-
 Documentation/virt/kvm/locking.rst | 6 +-
 Documentation/virt/kvm/vcpu-requests.rst | 20 +-
 arch/arm64/kvm/arch_timer.c | 4 +-
 arch/arm64/kvm/arm.c | 34 +-
 arch/arm64/kvm/debug.c | 22 +-
 arch/arm64/kvm/guest.c | 6 +-
 arch/arm64/kvm/handle_exit.c | 12 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
 arch/arm64/kvm/hypercalls.c | 2 +-
 arch/arm64/kvm/mmio.c | 14 +-
 arch/arm64/kvm/pmu-emul.c | 4 +-
 arch/arm64/kvm/psci.c | 14 +-
 arch/arm64/kvm/vgic/vgic.c | 2 +-
 arch/loongarch/kvm/exit.c | 52 +--
 arch/loongarch/kvm/timer.c | 2 +-
 arch/loongarch/kvm/vcpu.c | 22 +-
 arch/mips/kvm/emulate.c | 94 ++---
 arch/mips/kvm/mips.c | 52 +--
 arch/mips/kvm/vz.c | 30 +-
 arch/powerpc/kvm/book3s.c | 4 +-
 arch/powerpc/kvm/book3s_emulate.c | 6 +-
 arch/powerpc/kvm/book3s_hv.c | 38 +-
 arch/powerpc/kvm/book3s_hv_nested.c | 4 +-
 arch/powerpc/kvm/book3s_hv_rm_xics.c | 8 +-
 arch/powerpc/kvm/book3s_pr.c | 38 +-
 arch/powerpc/kvm/book3s_pr_papr.c | 2 +-
 arch/powerpc/kvm/book3s_xics.c | 8 +-
 arch/powerpc/kvm/book3s_xive.c | 12 +-
 arch/powerpc/kvm/book3s_xive_native.c | 8 +-
 arch/powerpc/kvm/booke.c | 38 +-
 arch/powerpc/kvm/booke_emulate.c | 22 +-
 arch/powerpc/kvm/e500_emulate.c | 6 +-
 arch/powerpc/kvm/emulate.c | 6 +-
 arch/powerpc/kvm/emulate_loadstore.c | 2 +-
 arch/powerpc/kvm/powerpc.c | 62 ++--
 arch/powerpc/kvm/timing.h | 28 +-
 arch/powerpc/kvm/trace.h | 2 +-
 arch/powerpc/kvm/trace_hv.h | 2 +-
 arch/riscv/kvm/aia_device.c | 8 +-
 arch/riscv/kvm/aia_imsic.c | 2 +-
 arch/riscv/kvm/vcpu.c | 24 +-
 arch/riscv/kvm/vcpu_insn.c | 14 +-
 arch/riscv/kvm/vcpu_sbi.c | 2 +-
 arch/riscv/kvm/vcpu_sbi_hsm.c | 2 +-
 arch/s390/include/asm/kvm_host.h | 8 +-
 arch/s390/kvm/diag.c | 72 ++--
 arch/s390/kvm/gaccess.c | 4 +-
 arch/s390/kvm/guestdbg.c | 14 +-
 arch/s390/kvm/intercept.c | 30 +-
 arch/s390/kvm/interrupt.c | 82 ++---
 arch/s390/kvm/kvm-s390.c | 160 ++++----
 arch/s390/kvm/kvm-s390.h | 12 +-
 arch/s390/kvm/priv.c | 186 +++++-----
 arch/s390/kvm/pv.c | 2 +-
 arch/s390/kvm/sigp.c | 62 ++--
 arch/s390/kvm/vsie.c | 6 +-
 arch/x86/kvm/debugfs.c | 2 +-
 arch/x86/kvm/hyperv.c | 20 +-
 arch/x86/kvm/kvm_cache_regs.h | 4 +-
 arch/x86/kvm/lapic.c | 4 +-
 arch/x86/kvm/mmu/mmu.c | 28 +-
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 arch/x86/kvm/pmu.c | 2 +-
 arch/x86/kvm/svm/nested.c | 6 +-
 arch/x86/kvm/svm/sev.c | 66 ++--
 arch/x86/kvm/svm/svm.c | 56 +--
 arch/x86/kvm/vmx/nested.c | 18 +-
 arch/x86/kvm/vmx/posted_intr.c | 2 +-
 arch/x86/kvm/vmx/sgx.c | 4 +-
 arch/x86/kvm/vmx/vmx.c | 128 +++---
 arch/x86/kvm/x86.c | 348 +++++++++---------
 arch/x86/kvm/xen.c | 24 +-
 arch/x86/kvm/xen.h | 2 +-
 drivers/s390/crypto/vfio_ap_ops.c | 10 +-
 include/linux/kvm_host.h | 167 +++++----
 .../selftests/kvm/aarch64/debug-exceptions.c | 2 +-
 .../selftests/kvm/aarch64/page_fault_test.c | 2 +-
 .../selftests/kvm/aarch64/smccc_filter.c | 2 +-
 .../selftests/kvm/demand_paging_test.c | 2 +-
 .../selftests/kvm/dirty_log_perf_test.c | 2 +-
 tools/testing/selftests/kvm/dirty_log_test.c | 4 +-
 .../testing/selftests/kvm/guest_print_test.c | 4 +-
 .../selftests/kvm/hardware_disable_test.c | 2 +-
 .../selftests/kvm/kvm_page_table_test.c | 2 +-
 .../testing/selftests/kvm/lib/aarch64/ucall.c | 2 +-
 tools/testing/selftests/kvm/lib/kvm_util.c | 14 +-
 tools/testing/selftests/kvm/lib/riscv/ucall.c | 2 +-
 .../kvm/lib/s390x/diag318_test_handler.c | 2 +-
 .../selftests/kvm/lib/s390x/processor.c | 6 +-
 tools/testing/selftests/kvm/lib/s390x/ucall.c | 2 +-
 .../testing/selftests/kvm/lib/x86_64/ucall.c | 2 +-
 .../kvm/memslot_modification_stress_test.c | 2 +-
 .../testing/selftests/kvm/memslot_perf_test.c | 2 +-
 .../selftests/kvm/pre_fault_memory_test.c | 2 +-
 tools/testing/selftests/kvm/s390x/cmma_test.c | 20 +-
 .../testing/selftests/kvm/s390x/debug_test.c | 12 +-
 tools/testing/selftests/kvm/s390x/memop.c | 2 +-
 tools/testing/selftests/kvm/s390x/resets.c | 14 +-
 .../selftests/kvm/s390x/sync_regs_test.c | 10 +-
 tools/testing/selftests/kvm/s390x/tprot.c | 2 +-
 .../selftests/kvm/set_memory_region_test.c | 4 +-
 tools/testing/selftests/kvm/steal_time.c | 2 +-
 .../testing/selftests/kvm/x86_64/cpuid_test.c | 2 +-
 .../testing/selftests/kvm/x86_64/debug_regs.c | 2 +-
 .../selftests/kvm/x86_64/fix_hypercall_test.c | 2 +-
 .../selftests/kvm/x86_64/flds_emulation.h | 2 +-
 .../kvm/x86_64/hyperv_extended_hypercalls.c | 2 +-
 .../kvm/x86_64/nested_exceptions_test.c | 4 +-
 .../kvm/x86_64/private_mem_conversions_test.c | 4 +-
 .../kvm/x86_64/private_mem_kvm_exits_test.c | 16 +-
 .../selftests/kvm/x86_64/set_boot_cpu_id.c | 2 +-
 .../selftests/kvm/x86_64/sev_smoke_test.c | 22 +-
 .../selftests/kvm/x86_64/sync_regs_test.c | 16 +-
 .../kvm/x86_64/triple_fault_event_test.c | 2 +-
 .../selftests/kvm/x86_64/tsc_msrs_test.c | 2 +-
 .../selftests/kvm/x86_64/userspace_io_test.c | 2 +-
 .../kvm/x86_64/userspace_msr_exit_test.c | 8 +-
 .../kvm/x86_64/vmx_apic_access_test.c | 2 +-
 .../kvm/x86_64/vmx_close_while_nested_test.c | 2 +-
 .../vmx_exception_with_invalid_guest_state.c | 2 +-
 .../x86_64/vmx_invalid_nested_guest_state.c | 2 +-
 .../selftests/kvm/x86_64/xcr0_cpuid_test.c | 2 +-
 .../selftests/kvm/x86_64/xen_vmcall_test.c | 2 +-
 virt/kvm/async_pf.c | 60 +--
 virt/kvm/dirty_ring.c | 6 +-
 virt/kvm/kvm_main.c | 176 ++-----
 127 files changed, 1375 insertions(+), 1366 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 33938468d62d..5f4df9a90907 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7494,7 +7494,7 @@ initial handling in the kernel, KVM exits to user space with
 KVM_EXIT_S390_STSI to allow user space to insert further data.
=20 Before exiting to userspace, kvm handlers should fill in s390_stsi field of -vcpu->run:: +vcpu->common->run:: =20 struct { __u64 addr; diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/lo= cking.rst index 02880d5552d5..a571cfa9a8be 100644 --- a/Documentation/virt/kvm/locking.rst +++ b/Documentation/virt/kvm/locking.rst @@ -11,7 +11,7 @@ The acquisition orders for mutexes are as follows: =20 - cpus_read_lock() is taken outside kvm_lock =20 -- kvm->lock is taken outside vcpu->mutex +- kvm->lock is taken outside vcpu->common->mutex =20 - kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock =20 @@ -27,7 +27,7 @@ The acquisition orders for mutexes are as follows: For SRCU: =20 - ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections - for kvm->lock, vcpu->mutex and kvm->slots_lock. These locks _cannot_ + for kvm->lock, vcpu->common->mutex and kvm->slots_lock. These locks _ca= nnot_ be taken inside a kvm->srcu read-side critical section; that is, the following is broken:: =20 @@ -41,7 +41,7 @@ For SRCU: =20 On x86: =20 -- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock and kvm->arch.xen.= xen_lock +- vcpu->common->mutex is taken outside kvm->arch.hyperv.hv_lock and kvm->a= rch.xen.xen_lock =20 - kvm->arch.mmu_lock is an rwlock; critical sections for kvm->arch.tdp_mmu_pages_lock and kvm->arch.mmu_unsync_pages_lock must diff --git a/Documentation/virt/kvm/vcpu-requests.rst b/Documentation/virt/= kvm/vcpu-requests.rst index 06718b9bc959..2a54cdc908e3 100644 --- a/Documentation/virt/kvm/vcpu-requests.rst +++ b/Documentation/virt/kvm/vcpu-requests.rst @@ -63,9 +63,9 @@ are listed below: VCPU Mode --------- =20 -VCPUs have a mode state, ``vcpu->mode``, that is used to track whether the +VCPUs have a mode state, ``vcpu->common->mode``, that is used to track whe= ther the guest is running in guest mode or not, as well as some specific -outside guest mode states. The architecture may use ``vcpu->mode`` to +outside guest mode states. The architecture may use ``vcpu->common->mode`= ` to ensure VCPU requests are seen by VCPUs (see "Ensuring Requests Are Seen"), as well as to avoid sending unnecessary IPIs (see "IPI Reduction"), and even to ensure IPI acknowledgements are waited upon (see "Waiting for @@ -198,7 +198,7 @@ enter guest mode. This means that an optimized impleme= ntation (see "IPI Reduction") must be certain when it's safe to not send the IPI. One solution, which all architectures except s390 apply, is to: =20 -- set ``vcpu->mode`` to IN_GUEST_MODE between disabling the interrupts and +- set ``vcpu->common->mode`` to IN_GUEST_MODE between disabling the interr= upts and the last kvm_request_pending() check; - enable interrupts atomically when entering the guest. =20 @@ -209,15 +209,15 @@ can exclude the possibility of a VCPU thread observing the next request made of it, even if the request is made immediately after the check. This is done by way of the Dekker memory barrier pattern (scenario 10 of [lwn-mb]_). As the Dekker pattern requires two variables, -this solution pairs ``vcpu->mode`` with ``vcpu->requests``. Substituting +this solution pairs ``vcpu->common->mode`` with ``vcpu->requests``. 
Subst= ituting them into the pattern gives:: =20 CPU1 CPU2 =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D = =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D local_irq_disable(); - WRITE_ONCE(vcpu->mode, IN_GUEST_MODE); kvm_make_request(REQ, vcpu); + WRITE_ONCE(vcpu->common->mode, IN_GUEST_MODE); kvm_make_request(REQ, vc= pu); smp_mb(); smp_mb(); - if (kvm_request_pending(vcpu)) { if (READ_ONCE(vcpu->mode) =3D=3D + if (kvm_request_pending(vcpu)) { if (READ_ONCE(vcpu->common->mode= ) =3D=3D IN_GUEST_MODE) { ...abort guest entry... ...send IPI... } } @@ -225,9 +225,9 @@ them into the pattern gives:: As stated above, the IPI is only useful for VCPU threads in guest mode or that have already disabled interrupts. This is why this specific case of the Dekker pattern has been extended to disable interrupts before setting -``vcpu->mode`` to IN_GUEST_MODE. WRITE_ONCE() and READ_ONCE() are used to +``vcpu->common->mode`` to IN_GUEST_MODE. WRITE_ONCE() and READ_ONCE() are= used to pedantically implement the memory barrier pattern, guaranteeing the -compiler doesn't interfere with ``vcpu->mode``'s carefully planned +compiler doesn't interfere with ``vcpu->common->mode``'s carefully planned accesses. =20 IPI Reduction @@ -269,8 +269,8 @@ even the request-less VCPU kick is coupled with the same local_irq_disable() + smp_mb() pattern described above; the ON bit (Outstanding Notification) in the posted interrupt descriptor takes the role of ``vcpu->requests``. When sending a posted interrupt, PIR.ON is -set before reading ``vcpu->mode``; dually, in the VCPU thread, -vmx_sync_pir_to_irr() reads PIR after setting ``vcpu->mode`` to +set before reading ``vcpu->common->mode``; dually, in the VCPU thread, +vmx_sync_pir_to_irr() reads PIR after setting ``vcpu->common->mode`` to IN_GUEST_MODE. 
=20 Additional Considerations diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c index 879982b1cc73..d488bff63fcc 100644 --- a/arch/arm64/kvm/arch_timer.c +++ b/arch/arm64/kvm/arch_timer.c @@ -431,7 +431,7 @@ void kvm_timer_update_run(struct kvm_vcpu *vcpu) { struct arch_timer_context *vtimer =3D vcpu_vtimer(vcpu); struct arch_timer_context *ptimer =3D vcpu_ptimer(vcpu); - struct kvm_sync_regs *regs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *regs =3D &vcpu->common->run->s.regs; =20 /* Populate the device bitmap with the timer states */ regs->device_irq_level &=3D ~(KVM_ARM_DEV_EL1_VTIMER | @@ -861,7 +861,7 @@ bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu) { struct arch_timer_context *vtimer =3D vcpu_vtimer(vcpu); struct arch_timer_context *ptimer =3D vcpu_ptimer(vcpu); - struct kvm_sync_regs *sregs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *sregs =3D &vcpu->common->run->s.regs; bool vlevel, plevel; =20 if (likely(irqchip_in_kernel(vcpu->kvm))) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index a7ca776b51ec..1a2de4007d08 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -456,11 +456,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) spin_lock_init(&vcpu->arch.mp_state_lock); =20 #ifdef CONFIG_LOCKDEP - /* Inform lockdep that the config_lock is acquired after vcpu->mutex */ - mutex_lock(&vcpu->mutex); + /* Inform lockdep that the config_lock is acquired after vcpu->common->mu= tex */ + mutex_lock(&vcpu->common->mutex); mutex_lock(&vcpu->kvm->arch.config_lock); mutex_unlock(&vcpu->kvm->arch.config_lock); - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); #endif =20 /* Force users to call KVM_ARM_VCPU_INIT */ @@ -975,9 +975,9 @@ static int kvm_vcpu_suspend(struct kvm_vcpu *vcpu) * userspace informing it of the wakeup condition. 
*/ if (kvm_arch_vcpu_runnable(vcpu)) { - memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event)); - vcpu->run->system_event.type =3D KVM_SYSTEM_EVENT_WAKEUP; - vcpu->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; + memset(&vcpu->common->run->system_event, 0, sizeof(vcpu->common->run->sy= stem_event)); + vcpu->common->run->system_event.type =3D KVM_SYSTEM_EVENT_WAKEUP; + vcpu->common->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; return 0; } =20 @@ -1067,7 +1067,7 @@ static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *v= cpu) */ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 /* * If we're using a userspace irqchip, then check if we need @@ -1127,7 +1127,7 @@ static int noinstr kvm_arm_vcpu_enter_exit(struct kvm= _vcpu *vcpu) */ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int ret; =20 if (run->exit_reason =3D=3D KVM_EXIT_MMIO) { @@ -1138,7 +1138,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) =20 vcpu_load(vcpu); =20 - if (!vcpu->wants_to_run) { + if (!vcpu->common->wants_to_run) { ret =3D -EINTR; goto out; } @@ -1192,10 +1192,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) * See the comment in kvm_vcpu_exiting_guest_mode() and * Documentation/virt/kvm/vcpu-requests.rst */ - smp_store_mb(vcpu->mode, IN_GUEST_MODE); + smp_store_mb(vcpu->common->mode, IN_GUEST_MODE); =20 if (ret <=3D 0 || kvm_vcpu_exit_request(vcpu, &ret)) { - vcpu->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; isb(); /* Ensure work in x_flush_hwstate is committed */ kvm_pmu_sync_hwstate(vcpu); if (static_branch_unlikely(&userspace_irqchip_in_use)) @@ -1217,8 +1217,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) =20 ret =3D kvm_arm_vcpu_enter_exit(vcpu); =20 - vcpu->mode =3D OUTSIDE_GUEST_MODE; - vcpu->stat.exits++; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->stat.exits++; /* * Back from guest *************************************************************/ @@ -1920,7 +1920,7 @@ static void unlock_vcpus(struct kvm *kvm, int vcpu_lo= ck_idx) =20 for (; vcpu_lock_idx >=3D 0; vcpu_lock_idx--) { tmp_vcpu =3D kvm_get_vcpu(kvm, vcpu_lock_idx); - mutex_unlock(&tmp_vcpu->mutex); + mutex_unlock(&tmp_vcpu->common->mutex); } } =20 @@ -1941,13 +1941,13 @@ bool lock_all_vcpus(struct kvm *kvm) =20 /* * Any time a vcpu is in an ioctl (including running), the - * core KVM code tries to grab the vcpu->mutex. + * core KVM code tries to grab the vcpu->common->mutex. * - * By grabbing the vcpu->mutex of all VCPUs we ensure that no + * By grabbing the vcpu->common->mutex of all VCPUs we ensure that no * other VCPUs can fiddle with the state while we access it. */ kvm_for_each_vcpu(c, tmp_vcpu, kvm) { - if (!mutex_trylock(&tmp_vcpu->mutex)) { + if (!mutex_trylock(&tmp_vcpu->common->mutex)) { unlock_vcpus(kvm, c - 1); return false; } diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index ce8886122ed3..8a0984778cfa 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -108,7 +108,7 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcp= u) MDCR_EL2_TDOSA); =20 /* Is the VM being debugged by userspace? 
*/ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) /* Route all software debug exceptions to EL2 */ vcpu->arch.mdcr_el2 |=3D MDCR_EL2_TDE; =20 @@ -119,7 +119,7 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcp= u) * - The guest is not using debug (DEBUG_DIRTY clear). * - The guest has enabled the OS Lock (debug exceptions are blocked). */ - if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) || + if ((vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW) || !vcpu_get_flag(vcpu, DEBUG_DIRTY) || kvm_vcpu_os_lock_enabled(vcpu)) vcpu->arch.mdcr_el2 |=3D MDCR_EL2_TDA; @@ -171,12 +171,12 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) { unsigned long mdscr, orig_mdcr_el2 =3D vcpu->arch.mdcr_el2; =20 - trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug); + trace_kvm_arm_setup_debug(vcpu, vcpu->common->guest_debug); =20 kvm_arm_setup_mdcr_el2(vcpu); =20 /* Check if we need to use the debug registers. */ - if (vcpu->guest_debug || kvm_vcpu_os_lock_enabled(vcpu)) { + if (vcpu->common->guest_debug || kvm_vcpu_os_lock_enabled(vcpu)) { /* Save guest debug state */ save_guest_debug_regs(vcpu); =20 @@ -200,7 +200,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) * returns to normal once the host is no longer * debugging the system. */ - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) { /* * If the software step state at the last guest exit * was Active-pending, we don't set DBG_SPSR_SS so @@ -232,7 +232,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) * debug ioctl. The existing DEBUG_DIRTY mechanism ensures * the registers are updated on the world switch. */ - if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW) { /* Enable breakpoints/watchpoints */ mdscr =3D vcpu_read_sys_reg(vcpu, MDSCR_EL1); mdscr |=3D DBG_MDSCR_MDE; @@ -264,7 +264,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) } } =20 - BUG_ON(!vcpu->guest_debug && + BUG_ON(!vcpu->common->guest_debug && vcpu->arch.debug_ptr !=3D &vcpu->arch.vcpu_debug_state); =20 /* If KDE or MDE are set, perform a full save/restore cycle. */ @@ -280,13 +280,13 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) =20 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) { - trace_kvm_arm_clear_debug(vcpu->guest_debug); + trace_kvm_arm_clear_debug(vcpu->common->guest_debug); =20 /* * Restore the guest's debug registers if we were using them. */ - if (vcpu->guest_debug || kvm_vcpu_os_lock_enabled(vcpu)) { - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { + if (vcpu->common->guest_debug || kvm_vcpu_os_lock_enabled(vcpu)) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) { if (!(*vcpu_cpsr(vcpu) & DBG_SPSR_SS)) /* * Mark the vcpu as ACTIVE_PENDING @@ -301,7 +301,7 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) * If we were using HW debug we need to restore the * debug_ptr to the guest debug state. 
*/ - if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW) { kvm_arm_reset_debug_ptr(vcpu); =20 trace_kvm_arm_set_regset("BKPTS", get_num_brps(), diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 11098eb7eb44..891b6cc27c7c 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -927,16 +927,16 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vc= pu *vcpu, } =20 if (dbg->control & KVM_GUESTDBG_ENABLE) { - vcpu->guest_debug =3D dbg->control; + vcpu->common->guest_debug =3D dbg->control; =20 /* Hardware assisted Break and Watch points */ - if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW) { vcpu->arch.external_debug_state =3D dbg->arch; } =20 } else { /* If not enabled clear all flags */ - vcpu->guest_debug =3D 0; + vcpu->common->guest_debug =3D 0; vcpu_clear_flag(vcpu, DBG_SS_ACTIVE_PENDING); } =20 diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index d7c2990e7c9e..de749c8e3009 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -38,7 +38,7 @@ static int handle_hvc(struct kvm_vcpu *vcpu) { trace_kvm_hvc_arm64(*vcpu_pc(vcpu), vcpu_get_reg(vcpu, 0), kvm_vcpu_hvc_get_imm(vcpu)); - vcpu->stat.hvc_exit_stat++; + vcpu->common->stat.hvc_exit_stat++; =20 /* Forward hvc instructions to the virtual EL2 if the guest has EL2. */ if (vcpu_has_nv(vcpu)) { @@ -132,10 +132,10 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu) =20 if (esr & ESR_ELx_WFx_ISS_WFE) { trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true); - vcpu->stat.wfe_exit_stat++; + vcpu->common->stat.wfe_exit_stat++; } else { trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false); - vcpu->stat.wfi_exit_stat++; + vcpu->common->stat.wfi_exit_stat++; } =20 if (esr & ESR_ELx_WFx_ISS_WFxT) { @@ -176,11 +176,11 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu) * guest and host are using the same debug facilities it will be up to * userspace to re-inject the correct exception for guest delivery. * - * @return: 0 (while setting vcpu->run->exit_reason) + * @return: 0 (while setting vcpu->common->run->exit_reason) */ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; u64 esr =3D kvm_vcpu_get_esr(vcpu); =20 run->exit_reason =3D KVM_EXIT_DEBUG; @@ -360,7 +360,7 @@ static int handle_trap_exceptions(struct kvm_vcpu *vcpu) */ int handle_exit(struct kvm_vcpu *vcpu, int exception_index) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 if (ARM_SERROR_PENDING(exception_index)) { /* diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/i= nclude/hyp/switch.h index f59ccfe11ab9..6e846b14542f 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -645,7 +645,7 @@ static inline void synchronize_vcpu_pstate(struct kvm_v= cpu *vcpu, u64 *exit_code * active-not-pending state? 
*/ if (cpus_have_final_cap(ARM64_WORKAROUND_2077057) && - vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP && + vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP && *vcpu_cpsr(vcpu) & DBG_SPSR_SS && ESR_ELx_EC(read_sysreg_el2(SYS_ESR)) =3D=3D ESR_ELx_EC_PAC) write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR); diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c index 5763d979d8ca..1b4e11414125 100644 --- a/arch/arm64/kvm/hypercalls.c +++ b/arch/arm64/kvm/hypercalls.c @@ -244,7 +244,7 @@ static u8 kvm_smccc_get_action(struct kvm_vcpu *vcpu, u= 32 func_id) static void kvm_prepare_hypercall_exit(struct kvm_vcpu *vcpu, u32 func_id) { u8 ec =3D ESR_ELx_EC(kvm_vcpu_get_esr(vcpu)); - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; u64 flags =3D 0; =20 if (ec =3D=3D ESR_ELx_EC_SMC32 || ec =3D=3D ESR_ELx_EC_SMC64) diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c index cd6b7b83e2c3..4524f590e170 100644 --- a/arch/arm64/kvm/mmio.c +++ b/arch/arm64/kvm/mmio.c @@ -85,13 +85,13 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu) int mask; =20 /* Detect an already handled MMIO return */ - if (unlikely(!vcpu->mmio_needed)) + if (unlikely(!vcpu->common->mmio_needed)) return 1; =20 - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; =20 if (!kvm_vcpu_dabt_iswrite(vcpu)) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 len =3D kvm_vcpu_dabt_get_as(vcpu); data =3D kvm_mmio_read_buf(run->mmio.data, len); @@ -122,7 +122,7 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu) =20 int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; unsigned long data; unsigned long rt; int ret; @@ -187,20 +187,20 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t f= ault_ipa) run->mmio.is_write =3D is_write; run->mmio.phys_addr =3D fault_ipa; run->mmio.len =3D len; - vcpu->mmio_needed =3D 1; + vcpu->common->mmio_needed =3D 1; =20 if (!ret) { /* We handled the access successfully in the kernel. 
*/ if (!is_write) memcpy(run->mmio.data, data_buf, len); - vcpu->stat.mmio_exit_kernel++; + vcpu->common->stat.mmio_exit_kernel++; kvm_handle_mmio_return(vcpu); return 1; } =20 if (is_write) memcpy(run->mmio.data, data_buf, len); - vcpu->stat.mmio_exit_user++; + vcpu->common->stat.mmio_exit_user++; run->exit_reason =3D KVM_EXIT_MMIO; return 0; } diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index 82a2a003259c..daa6f4a16612 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -373,7 +373,7 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu) bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu) { struct kvm_pmu *pmu =3D &vcpu->arch.pmu; - struct kvm_sync_regs *sregs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *sregs =3D &vcpu->common->run->s.regs; bool run_level =3D sregs->device_irq_level & KVM_ARM_DEV_PMU; =20 if (likely(irqchip_in_kernel(vcpu->kvm))) @@ -387,7 +387,7 @@ bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu) */ void kvm_pmu_update_run(struct kvm_vcpu *vcpu) { - struct kvm_sync_regs *regs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *regs =3D &vcpu->common->run->s.regs; =20 /* Populate the timer bitmap for user space */ regs->device_irq_level &=3D ~KVM_ARM_DEV_PMU; diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c index 1f69b667332b..5eb741514196 100644 --- a/arch/arm64/kvm/psci.c +++ b/arch/arm64/kvm/psci.c @@ -182,11 +182,11 @@ static void kvm_prepare_system_event(struct kvm_vcpu = *vcpu, u32 type, u64 flags) } kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP); =20 - memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event)); - vcpu->run->system_event.type =3D type; - vcpu->run->system_event.ndata =3D 1; - vcpu->run->system_event.data[0] =3D flags; - vcpu->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; + memset(&vcpu->common->run->system_event, 0, sizeof(vcpu->common->run->sys= tem_event)); + vcpu->common->run->system_event.type =3D type; + vcpu->common->run->system_event.ndata =3D 1; + vcpu->common->run->system_event.data[0] =3D flags; + vcpu->common->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; } =20 static void kvm_psci_system_off(struct kvm_vcpu *vcpu) @@ -207,9 +207,9 @@ static void kvm_psci_system_reset2(struct kvm_vcpu *vcp= u) =20 static void kvm_psci_system_suspend(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 - memset(&run->system_event, 0, sizeof(vcpu->run->system_event)); + memset(&run->system_event, 0, sizeof(vcpu->common->run->system_event)); run->system_event.type =3D KVM_SYSTEM_EVENT_SUSPEND; run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; } diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c index f07b3ddff7d4..4685bc985779 100644 --- a/arch/arm64/kvm/vgic/vgic.c +++ b/arch/arm64/kvm/vgic/vgic.c @@ -24,7 +24,7 @@ struct vgic_global kvm_vgic_global_state __ro_after_init = =3D { /* * Locking order is always: * kvm->lock (mutex) - * vcpu->mutex (mutex) + * vcpu->common->mutex (mutex) * kvm->arch.config_lock (mutex) * its->cmd_lock (mutex) * its->its_lock (mutex) diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c index ea73f9dc2cc6..baf8df7b2dcf 100644 --- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -31,7 +31,7 @@ static int kvm_emu_cpucfg(struct kvm_vcpu *vcpu, larch_in= st inst) =20 rd =3D inst.reg2_format.rd; rj =3D inst.reg2_format.rj; - ++vcpu->stat.cpucfg_exits; + ++vcpu->common->stat.cpucfg_exits; index =3D vcpu->arch.gprs[rj]; =20 /* @@ -241,7 +241,7 @@ int 
kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, stru= ct kvm_run *run) =20 int kvm_emu_idle(struct kvm_vcpu *vcpu) { - ++vcpu->stat.idle_exits; + ++vcpu->common->stat.idle_exits; trace_kvm_exit_idle(vcpu, KVM_TRACE_EXIT_IDLE); =20 if (!kvm_arch_vcpu_runnable(vcpu)) @@ -255,7 +255,7 @@ static int kvm_trap_handle_gspr(struct kvm_vcpu *vcpu) unsigned long curr_pc; larch_inst inst; enum emulation_result er =3D EMULATE_DONE; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 /* Fetch the instruction */ inst.word =3D vcpu->arch.badi; @@ -328,10 +328,10 @@ static int kvm_handle_gspr(struct kvm_vcpu *vcpu) if (er =3D=3D EMULATE_DONE) { ret =3D RESUME_GUEST; } else if (er =3D=3D EMULATE_DO_MMIO) { - vcpu->run->exit_reason =3D KVM_EXIT_MMIO; + vcpu->common->run->exit_reason =3D KVM_EXIT_MMIO; ret =3D RESUME_HOST; } else if (er =3D=3D EMULATE_DO_IOCSR) { - vcpu->run->exit_reason =3D KVM_EXIT_LOONGARCH_IOCSR; + vcpu->common->run->exit_reason =3D KVM_EXIT_LOONGARCH_IOCSR; ret =3D RESUME_HOST; } else { kvm_queue_exception(vcpu, EXCCODE_INE, 0); @@ -345,10 +345,10 @@ int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_in= st inst) { int ret; unsigned int op8, opcode, rd; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 run->mmio.phys_addr =3D vcpu->arch.badv; - vcpu->mmio_needed =3D 2; /* signed */ + vcpu->common->mmio_needed =3D 2; /* signed */ op8 =3D (inst.word >> 24) & 0xff; ret =3D EMULATE_DO_MMIO; =20 @@ -377,21 +377,21 @@ int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_in= st inst) run->mmio.len =3D 1; break; case ldbu_op: - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ run->mmio.len =3D 1; break; case ldh_op: run->mmio.len =3D 2; break; case ldhu_op: - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ run->mmio.len =3D 2; break; case ldw_op: run->mmio.len =3D 4; break; case ldwu_op: - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ run->mmio.len =3D 4; break; case ldd_op: @@ -412,21 +412,21 @@ int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_in= st inst) break; case ldxbu_op: run->mmio.len =3D 1; - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ break; case ldxh_op: run->mmio.len =3D 2; break; case ldxhu_op: run->mmio.len =3D 2; - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ break; case ldxw_op: run->mmio.len =3D 4; break; case ldxwu_op: run->mmio.len =3D 4; - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ break; case ldxd_op: run->mmio.len =3D 8; @@ -444,14 +444,14 @@ int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_in= st inst) /* Set for kvm_complete_mmio_read() use */ vcpu->arch.io_gpr =3D rd; run->mmio.is_write =3D 0; - vcpu->mmio_is_write =3D 0; + vcpu->common->mmio_is_write =3D 0; trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, run->mmio.len, run->mmio.phys_addr, NULL); } else { kvm_err("Read not supported Inst=3D0x%08x @%lx BadVaddr:%#lx\n", inst.word, vcpu->arch.pc, vcpu->arch.badv); kvm_arch_vcpu_dump_regs(vcpu); - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; } =20 return ret; @@ -466,19 +466,19 @@ int kvm_complete_mmio_read(struct kvm_vcpu *vcpu, str= uct kvm_run *run) update_pc(&vcpu->arch); switch (run->mmio.len) { case 1: - if (vcpu->mmio_needed =3D=3D 2) + if (vcpu->common->mmio_needed =3D=3D 2) *gpr =3D *(s8 *)run->mmio.data; else *gpr =3D 
*(u8 *)run->mmio.data; break; case 2: - if (vcpu->mmio_needed =3D=3D 2) + if (vcpu->common->mmio_needed =3D=3D 2) *gpr =3D *(s16 *)run->mmio.data; else *gpr =3D *(u16 *)run->mmio.data; break; case 4: - if (vcpu->mmio_needed =3D=3D 2) + if (vcpu->common->mmio_needed =3D=3D 2) *gpr =3D *(s32 *)run->mmio.data; else *gpr =3D *(u32 *)run->mmio.data; @@ -504,7 +504,7 @@ int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_ins= t inst) int ret; unsigned int rd, op8, opcode; unsigned long curr_pc, rd_val =3D 0; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; void *data =3D run->mmio.data; =20 /* @@ -595,8 +595,8 @@ int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_ins= t inst) =20 if (ret =3D=3D EMULATE_DO_MMIO) { run->mmio.is_write =3D 1; - vcpu->mmio_needed =3D 1; - vcpu->mmio_is_write =3D 1; + vcpu->common->mmio_needed =3D 1; + vcpu->common->mmio_is_write =3D 1; trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, run->mmio.len, run->mmio.phys_addr, data); } else { @@ -615,7 +615,7 @@ static int kvm_handle_rdwr_fault(struct kvm_vcpu *vcpu,= bool write) int ret; larch_inst inst; enum emulation_result er =3D EMULATE_DONE; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; unsigned long badv =3D vcpu->arch.badv; =20 ret =3D kvm_handle_mm_fault(vcpu, badv, write); @@ -667,7 +667,7 @@ static int kvm_handle_write_fault(struct kvm_vcpu *vcpu) */ static int kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 if (!kvm_guest_has_fpu(&vcpu->arch)) { kvm_queue_exception(vcpu, EXCCODE_INE, 0); @@ -812,13 +812,13 @@ static int kvm_handle_hypercall(struct kvm_vcpu *vcpu) =20 switch (code) { case KVM_HCALL_SERVICE: - vcpu->stat.hypercall_exits++; + vcpu->common->stat.hypercall_exits++; kvm_handle_service(vcpu); break; case KVM_HCALL_SWDBG: /* KVM_HCALL_SWDBG only in effective when SW_BP is enabled */ - if (vcpu->guest_debug & KVM_GUESTDBG_SW_BP_MASK) { - vcpu->run->exit_reason =3D KVM_EXIT_DEBUG; + if (vcpu->common->guest_debug & KVM_GUESTDBG_SW_BP_MASK) { + vcpu->common->run->exit_reason =3D KVM_EXIT_DEBUG; ret =3D RESUME_HOST; break; } diff --git a/arch/loongarch/kvm/timer.c b/arch/loongarch/kvm/timer.c index bcc6b6d063d9..195984339769 100644 --- a/arch/loongarch/kvm/timer.c +++ b/arch/loongarch/kvm/timer.c @@ -30,7 +30,7 @@ enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *t= imer) =20 vcpu =3D container_of(timer, struct kvm_vcpu, arch.swtimer); kvm_queue_irq(vcpu, INT_TI); - rcuwait_wake_up(&vcpu->wait); + rcuwait_wake_up(&vcpu->common->wait); =20 return HRTIMER_NORESTART; } diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c index 16756ffb55e8..cf63b36d92e8 100644 --- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -156,7 +156,7 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu) kvm_deliver_intr(vcpu); kvm_deliver_exception(vcpu); /* Make sure the vcpu mode has been written */ - smp_store_mb(vcpu->mode, IN_GUEST_MODE); + smp_store_mb(vcpu->common->mode, IN_GUEST_MODE); kvm_check_vpid(vcpu); =20 /* @@ -171,7 +171,7 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu) =20 if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) { /* make sure the vcpu mode has been written */ - smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE); + smp_store_mb(vcpu->common->mode, OUTSIDE_GUEST_MODE); local_irq_enable(); ret =3D -EAGAIN; } @@ -190,7 +190,7 @@ static int kvm_handle_exit(struct kvm_run *run, struct = kvm_vcpu *vcpu) u32 intr =3D 
estat & 0x1fff; /* Ignore NMI */ u32 ecode =3D (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT; =20 - vcpu->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; =20 /* Set a default exit reason */ run->exit_reason =3D KVM_EXIT_UNKNOWN; @@ -204,7 +204,7 @@ static int kvm_handle_exit(struct kvm_run *run, struct = kvm_vcpu *vcpu) ret =3D kvm_handle_fault(vcpu, ecode); } else { WARN(!intr, "vm exiting with suspicious irq\n"); - ++vcpu->stat.int_exits; + ++vcpu->common->stat.int_exits; } =20 if (ret =3D=3D RESUME_GUEST) @@ -316,9 +316,9 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu= *vcpu, return -EINVAL; =20 if (dbg->control & KVM_GUESTDBG_ENABLE) - vcpu->guest_debug =3D dbg->control; + vcpu->common->guest_debug =3D dbg->control; else - vcpu->guest_debug =3D 0; + vcpu->common->guest_debug =3D 0; =20 return 0; } @@ -1403,12 +1403,12 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) { int r =3D -EINTR; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 - if (vcpu->mmio_needed) { - if (!vcpu->mmio_is_write) + if (vcpu->common->mmio_needed) { + if (!vcpu->common->mmio_is_write) kvm_complete_mmio_read(vcpu, run); - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; } =20 if (run->exit_reason =3D=3D KVM_EXIT_LOONGARCH_IOCSR) { @@ -1416,7 +1416,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) kvm_complete_iocsr_read(vcpu, run); } =20 - if (!vcpu->wants_to_run) + if (!vcpu->common->wants_to_run) return r; =20 /* Clear exit_reason */ diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c index 0feec52222fb..f8612236d553 100644 --- a/arch/mips/kvm/emulate.c +++ b/arch/mips/kvm/emulate.c @@ -947,7 +947,7 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcp= u *vcpu) kvm_debug("[%#lx] !!!WAIT!!! (%#lx)\n", vcpu->arch.pc, vcpu->arch.pending_exceptions); =20 - ++vcpu->stat.wait_exits; + ++vcpu->common->stat.wait_exits; trace_kvm_exit(vcpu, KVM_TRACE_EXIT_WAIT); if (!vcpu->arch.pending_exceptions) { kvm_vz_lose_htimer(vcpu); @@ -959,7 +959,7 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcp= u *vcpu) * check if any I/O interrupts are pending. 
*/ if (kvm_arch_vcpu_runnable(vcpu)) - vcpu->run->exit_reason =3D KVM_EXIT_IRQ_WINDOW_OPEN; + vcpu->common->run->exit_reason =3D KVM_EXIT_IRQ_WINDOW_OPEN; } =20 return EMULATE_DONE; @@ -972,7 +972,7 @@ enum emulation_result kvm_mips_emulate_store(union mips= _instruction inst, int r; enum emulation_result er; u32 rt; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; void *data =3D run->mmio.data; unsigned int imme; unsigned long curr_pc; @@ -1245,15 +1245,15 @@ enum emulation_result kvm_mips_emulate_store(union = mips_instruction inst, goto out_fail; } =20 - vcpu->mmio_needed =3D 1; + vcpu->common->mmio_needed =3D 1; run->mmio.is_write =3D 1; - vcpu->mmio_is_write =3D 1; + vcpu->common->mmio_is_write =3D 1; =20 r =3D kvm_io_bus_write(vcpu, KVM_MMIO_BUS, run->mmio.phys_addr, run->mmio.len, data); =20 if (!r) { - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; return EMULATE_DONE; } =20 @@ -1268,7 +1268,7 @@ enum emulation_result kvm_mips_emulate_store(union mi= ps_instruction inst, enum emulation_result kvm_mips_emulate_load(union mips_instruction inst, u32 cause, struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int r; enum emulation_result er; unsigned long curr_pc; @@ -1297,7 +1297,7 @@ enum emulation_result kvm_mips_emulate_load(union mip= s_instruction inst, if (run->mmio.phys_addr =3D=3D KVM_INVALID_ADDR) return EMULATE_FAIL; =20 - vcpu->mmio_needed =3D 2; /* signed */ + vcpu->common->mmio_needed =3D 2; /* signed */ switch (op) { #if defined(CONFIG_64BIT) case ld_op: @@ -1305,7 +1305,7 @@ enum emulation_result kvm_mips_emulate_load(union mip= s_instruction inst, break; =20 case lwu_op: - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ fallthrough; #endif case lw_op: @@ -1313,14 +1313,14 @@ enum emulation_result kvm_mips_emulate_load(union m= ips_instruction inst, break; =20 case lhu_op: - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ fallthrough; case lh_op: run->mmio.len =3D 2; break; =20 case lbu_op: - vcpu->mmio_needed =3D 1; /* unsigned */ + vcpu->common->mmio_needed =3D 1; /* unsigned */ fallthrough; case lb_op: run->mmio.len =3D 1; @@ -1334,16 +1334,16 @@ enum emulation_result kvm_mips_emulate_load(union m= ips_instruction inst, imme =3D vcpu->arch.host_cp0_badvaddr & 0x3; switch (imme) { case 0: - vcpu->mmio_needed =3D 3; /* 1 byte */ + vcpu->common->mmio_needed =3D 3; /* 1 byte */ break; case 1: - vcpu->mmio_needed =3D 4; /* 2 bytes */ + vcpu->common->mmio_needed =3D 4; /* 2 bytes */ break; case 2: - vcpu->mmio_needed =3D 5; /* 3 bytes */ + vcpu->common->mmio_needed =3D 5; /* 3 bytes */ break; case 3: - vcpu->mmio_needed =3D 6; /* 4 bytes */ + vcpu->common->mmio_needed =3D 6; /* 4 bytes */ break; default: break; @@ -1358,16 +1358,16 @@ enum emulation_result kvm_mips_emulate_load(union m= ips_instruction inst, imme =3D vcpu->arch.host_cp0_badvaddr & 0x3; switch (imme) { case 0: - vcpu->mmio_needed =3D 7; /* 4 bytes */ + vcpu->common->mmio_needed =3D 7; /* 4 bytes */ break; case 1: - vcpu->mmio_needed =3D 8; /* 3 bytes */ + vcpu->common->mmio_needed =3D 8; /* 3 bytes */ break; case 2: - vcpu->mmio_needed =3D 9; /* 2 bytes */ + vcpu->common->mmio_needed =3D 9; /* 2 bytes */ break; case 3: - vcpu->mmio_needed =3D 10; /* 1 byte */ + vcpu->common->mmio_needed =3D 10; /* 1 byte */ break; default: break; @@ -1383,28 +1383,28 @@ enum emulation_result kvm_mips_emulate_load(union m= ips_instruction inst, 
imme =3D vcpu->arch.host_cp0_badvaddr & 0x7; switch (imme) { case 0: - vcpu->mmio_needed =3D 11; /* 1 byte */ + vcpu->common->mmio_needed =3D 11; /* 1 byte */ break; case 1: - vcpu->mmio_needed =3D 12; /* 2 bytes */ + vcpu->common->mmio_needed =3D 12; /* 2 bytes */ break; case 2: - vcpu->mmio_needed =3D 13; /* 3 bytes */ + vcpu->common->mmio_needed =3D 13; /* 3 bytes */ break; case 3: - vcpu->mmio_needed =3D 14; /* 4 bytes */ + vcpu->common->mmio_needed =3D 14; /* 4 bytes */ break; case 4: - vcpu->mmio_needed =3D 15; /* 5 bytes */ + vcpu->common->mmio_needed =3D 15; /* 5 bytes */ break; case 5: - vcpu->mmio_needed =3D 16; /* 6 bytes */ + vcpu->common->mmio_needed =3D 16; /* 6 bytes */ break; case 6: - vcpu->mmio_needed =3D 17; /* 7 bytes */ + vcpu->common->mmio_needed =3D 17; /* 7 bytes */ break; case 7: - vcpu->mmio_needed =3D 18; /* 8 bytes */ + vcpu->common->mmio_needed =3D 18; /* 8 bytes */ break; default: break; @@ -1419,28 +1419,28 @@ enum emulation_result kvm_mips_emulate_load(union m= ips_instruction inst, imme =3D vcpu->arch.host_cp0_badvaddr & 0x7; switch (imme) { case 0: - vcpu->mmio_needed =3D 19; /* 8 bytes */ + vcpu->common->mmio_needed =3D 19; /* 8 bytes */ break; case 1: - vcpu->mmio_needed =3D 20; /* 7 bytes */ + vcpu->common->mmio_needed =3D 20; /* 7 bytes */ break; case 2: - vcpu->mmio_needed =3D 21; /* 6 bytes */ + vcpu->common->mmio_needed =3D 21; /* 6 bytes */ break; case 3: - vcpu->mmio_needed =3D 22; /* 5 bytes */ + vcpu->common->mmio_needed =3D 22; /* 5 bytes */ break; case 4: - vcpu->mmio_needed =3D 23; /* 4 bytes */ + vcpu->common->mmio_needed =3D 23; /* 4 bytes */ break; case 5: - vcpu->mmio_needed =3D 24; /* 3 bytes */ + vcpu->common->mmio_needed =3D 24; /* 3 bytes */ break; case 6: - vcpu->mmio_needed =3D 25; /* 2 bytes */ + vcpu->common->mmio_needed =3D 25; /* 2 bytes */ break; case 7: - vcpu->mmio_needed =3D 26; /* 1 byte */ + vcpu->common->mmio_needed =3D 26; /* 1 byte */ break; default: break; @@ -1462,19 +1462,19 @@ enum emulation_result kvm_mips_emulate_load(union m= ips_instruction inst, */ case 0x0: run->mmio.len =3D 1; - vcpu->mmio_needed =3D 27; /* signed */ + vcpu->common->mmio_needed =3D 27; /* signed */ break; case 0x1: run->mmio.len =3D 2; - vcpu->mmio_needed =3D 28; /* signed */ + vcpu->common->mmio_needed =3D 28; /* signed */ break; case 0x2: run->mmio.len =3D 4; - vcpu->mmio_needed =3D 29; /* signed */ + vcpu->common->mmio_needed =3D 29; /* signed */ break; case 0x3: run->mmio.len =3D 8; - vcpu->mmio_needed =3D 30; /* signed */ + vcpu->common->mmio_needed =3D 30; /* signed */ break; default: kvm_err("Godson Extended GS-Load for float not yet supported (inst=3D0x= %08x)\n", @@ -1487,19 +1487,19 @@ enum emulation_result kvm_mips_emulate_load(union m= ips_instruction inst, default: kvm_err("Load not yet supported (inst=3D0x%08x)\n", inst.word); - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; return EMULATE_FAIL; } =20 run->mmio.is_write =3D 0; - vcpu->mmio_is_write =3D 0; + vcpu->common->mmio_is_write =3D 0; =20 r =3D kvm_io_bus_read(vcpu, KVM_MMIO_BUS, run->mmio.phys_addr, run->mmio.len, run->mmio.data); =20 if (!r) { kvm_mips_complete_mmio_load(vcpu); - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; return EMULATE_DONE; } =20 @@ -1508,7 +1508,7 @@ enum emulation_result kvm_mips_emulate_load(union mip= s_instruction inst, =20 enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; unsigned long *gpr =3D 
&vcpu->arch.gprs[vcpu->arch.io_gpr]; enum emulation_result er =3D EMULATE_DONE; =20 @@ -1523,7 +1523,7 @@ enum emulation_result kvm_mips_complete_mmio_load(str= uct kvm_vcpu *vcpu) =20 switch (run->mmio.len) { case 8: - switch (vcpu->mmio_needed) { + switch (vcpu->common->mmio_needed) { case 11: *gpr =3D (vcpu->arch.gprs[vcpu->arch.io_gpr] & 0xffffffffffffff) | (((*(s64 *)run->mmio.data) & 0xff) << 56); @@ -1590,7 +1590,7 @@ enum emulation_result kvm_mips_complete_mmio_load(str= uct kvm_vcpu *vcpu) break; =20 case 4: - switch (vcpu->mmio_needed) { + switch (vcpu->common->mmio_needed) { case 1: *gpr =3D *(u32 *)run->mmio.data; break; @@ -1631,14 +1631,14 @@ enum emulation_result kvm_mips_complete_mmio_load(s= truct kvm_vcpu *vcpu) break; =20 case 2: - if (vcpu->mmio_needed =3D=3D 1) + if (vcpu->common->mmio_needed =3D=3D 1) *gpr =3D *(u16 *)run->mmio.data; else *gpr =3D *(s16 *)run->mmio.data; =20 break; case 1: - if (vcpu->mmio_needed =3D=3D 1) + if (vcpu->common->mmio_needed =3D=3D 1) *gpr =3D *(u8 *)run->mmio.data; else *gpr =3D *(s8 *)run->mmio.data; diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c index b5de770b092e..1669bf24f40f 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -265,7 +265,7 @@ static enum hrtimer_restart kvm_mips_comparecount_wakeu= p(struct hrtimer *timer) kvm_mips_callbacks->queue_timer_int(vcpu); =20 vcpu->arch.wait =3D 0; - rcuwait_wake_up(&vcpu->wait); + rcuwait_wake_up(&vcpu->common->wait); =20 return kvm_mips_count_timeout(vcpu); } @@ -428,13 +428,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) =20 kvm_sigset_activate(vcpu); =20 - if (vcpu->mmio_needed) { - if (!vcpu->mmio_is_write) + if (vcpu->common->mmio_needed) { + if (!vcpu->common->mmio_is_write) kvm_mips_complete_mmio_load(vcpu); - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; } =20 - if (!vcpu->wants_to_run) + if (!vcpu->common->wants_to_run) goto out; =20 lose_fpu(1); @@ -445,11 +445,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) =20 /* * Make sure the read of VCPU requests in vcpu_run() callback is not - * reordered ahead of the write to vcpu->mode, or we could miss a TLB + * reordered ahead of the write to vcpu->common->mode, or we could miss a= TLB * flush request while the requester sees the VCPU as outside of guest * mode and not needing an IPI. 
*/ - smp_store_mb(vcpu->mode, IN_GUEST_MODE); + smp_store_mb(vcpu->common->mode, IN_GUEST_MODE); =20 r =3D kvm_mips_vcpu_enter_exit(vcpu); =20 @@ -1168,7 +1168,7 @@ static void kvm_mips_set_c0_status(void) */ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; u32 cause =3D vcpu->arch.host_cp0_cause; u32 exccode =3D (cause >> CAUSEB_EXCCODE) & 0x1f; u32 __user *opc =3D (u32 __user *) vcpu->arch.pc; @@ -1177,7 +1177,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vc= pu) u32 inst; int ret =3D RESUME_GUEST; =20 - vcpu->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; =20 /* Set a default exit reason */ run->exit_reason =3D KVM_EXIT_UNKNOWN; @@ -1199,7 +1199,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vc= pu) case EXCCODE_INT: kvm_debug("[%d]EXCCODE_INT @ %p\n", vcpu->vcpu_id, opc); =20 - ++vcpu->stat.int_exits; + ++vcpu->common->stat.int_exits; =20 if (need_resched()) cond_resched(); @@ -1210,7 +1210,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vc= pu) case EXCCODE_CPU: kvm_debug("EXCCODE_CPU: @ PC: %p\n", opc); =20 - ++vcpu->stat.cop_unusable_exits; + ++vcpu->common->stat.cop_unusable_exits; ret =3D kvm_mips_callbacks->handle_cop_unusable(vcpu); /* XXXKYMA: Might need to return to user space */ if (run->exit_reason =3D=3D KVM_EXIT_IRQ_WINDOW_OPEN) @@ -1218,7 +1218,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vc= pu) break; =20 case EXCCODE_MOD: - ++vcpu->stat.tlbmod_exits; + ++vcpu->common->stat.tlbmod_exits; ret =3D kvm_mips_callbacks->handle_tlb_mod(vcpu); break; =20 @@ -1227,7 +1227,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vc= pu) cause, kvm_read_c0_guest_status(&vcpu->arch.cop0), opc, badvaddr); =20 - ++vcpu->stat.tlbmiss_st_exits; + ++vcpu->common->stat.tlbmiss_st_exits; ret =3D kvm_mips_callbacks->handle_tlb_st_miss(vcpu); break; =20 @@ -1235,52 +1235,52 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *= vcpu) kvm_debug("TLB LD fault: cause %#x, PC: %p, BadVaddr: %#lx\n", cause, opc, badvaddr); =20 - ++vcpu->stat.tlbmiss_ld_exits; + ++vcpu->common->stat.tlbmiss_ld_exits; ret =3D kvm_mips_callbacks->handle_tlb_ld_miss(vcpu); break; =20 case EXCCODE_ADES: - ++vcpu->stat.addrerr_st_exits; + ++vcpu->common->stat.addrerr_st_exits; ret =3D kvm_mips_callbacks->handle_addr_err_st(vcpu); break; =20 case EXCCODE_ADEL: - ++vcpu->stat.addrerr_ld_exits; + ++vcpu->common->stat.addrerr_ld_exits; ret =3D kvm_mips_callbacks->handle_addr_err_ld(vcpu); break; =20 case EXCCODE_SYS: - ++vcpu->stat.syscall_exits; + ++vcpu->common->stat.syscall_exits; ret =3D kvm_mips_callbacks->handle_syscall(vcpu); break; =20 case EXCCODE_RI: - ++vcpu->stat.resvd_inst_exits; + ++vcpu->common->stat.resvd_inst_exits; ret =3D kvm_mips_callbacks->handle_res_inst(vcpu); break; =20 case EXCCODE_BP: - ++vcpu->stat.break_inst_exits; + ++vcpu->common->stat.break_inst_exits; ret =3D kvm_mips_callbacks->handle_break(vcpu); break; =20 case EXCCODE_TR: - ++vcpu->stat.trap_inst_exits; + ++vcpu->common->stat.trap_inst_exits; ret =3D kvm_mips_callbacks->handle_trap(vcpu); break; =20 case EXCCODE_MSAFPE: - ++vcpu->stat.msa_fpe_exits; + ++vcpu->common->stat.msa_fpe_exits; ret =3D kvm_mips_callbacks->handle_msa_fpe(vcpu); break; =20 case EXCCODE_FPE: - ++vcpu->stat.fpe_exits; + ++vcpu->common->stat.fpe_exits; ret =3D kvm_mips_callbacks->handle_fpe(vcpu); break; =20 case EXCCODE_MSADIS: - ++vcpu->stat.msa_disabled_exits; + ++vcpu->common->stat.msa_disabled_exits; ret 
=3D kvm_mips_callbacks->handle_msa_disabled(vcpu); break; =20 @@ -1317,7 +1317,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vc= pu) if (signal_pending(current)) { run->exit_reason =3D KVM_EXIT_INTR; ret =3D (-EINTR << 2) | RESUME_HOST; - ++vcpu->stat.signal_exits; + ++vcpu->common->stat.signal_exits; trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL); } } @@ -1327,11 +1327,11 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *= vcpu) =20 /* * Make sure the read of VCPU requests in vcpu_reenter() - * callback is not reordered ahead of the write to vcpu->mode, + * callback is not reordered ahead of the write to vcpu->common->mode, * or we could miss a TLB flush request while the requester sees * the VCPU as outside of guest mode and not needing an IPI. */ - smp_store_mb(vcpu->mode, IN_GUEST_MODE); + smp_store_mb(vcpu->common->mode, IN_GUEST_MODE); =20 kvm_mips_callbacks->vcpu_reenter(vcpu); =20 diff --git a/arch/mips/kvm/vz.c b/arch/mips/kvm/vz.c index 99d5a71e4300..7f720410f2fd 100644 --- a/arch/mips/kvm/vz.c +++ b/arch/mips/kvm/vz.c @@ -833,7 +833,7 @@ static int kvm_trap_vz_no_handler(struct kvm_vcpu *vcpu) exccode, opc, inst, badvaddr, read_gc0_status()); kvm_arch_vcpu_dump_regs(vcpu); - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; return RESUME_HOST; } =20 @@ -1162,7 +1162,7 @@ static enum emulation_result kvm_vz_gpsi_lwc2(union m= ips_instruction inst, rd =3D inst.loongson3_lscsr_format.rd; switch (inst.loongson3_lscsr_format.fr) { case 0x8: /* Read CPUCFG */ - ++vcpu->stat.vz_cpucfg_exits; + ++vcpu->common->stat.vz_cpucfg_exits; hostcfg =3D read_cpucfg(vcpu->arch.gprs[rs]); =20 switch (vcpu->arch.gprs[rs]) { @@ -1491,38 +1491,38 @@ static int kvm_trap_vz_handle_guest_exit(struct kvm= _vcpu *vcpu) trace_kvm_exit(vcpu, KVM_TRACE_EXIT_GEXCCODE_BASE + gexccode); switch (gexccode) { case MIPS_GCTL0_GEXC_GPSI: - ++vcpu->stat.vz_gpsi_exits; + ++vcpu->common->stat.vz_gpsi_exits; er =3D kvm_trap_vz_handle_gpsi(cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GSFC: - ++vcpu->stat.vz_gsfc_exits; + ++vcpu->common->stat.vz_gsfc_exits; er =3D kvm_trap_vz_handle_gsfc(cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_HC: - ++vcpu->stat.vz_hc_exits; + ++vcpu->common->stat.vz_hc_exits; er =3D kvm_trap_vz_handle_hc(cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GRR: - ++vcpu->stat.vz_grr_exits; + ++vcpu->common->stat.vz_grr_exits; er =3D kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GVA: - ++vcpu->stat.vz_gva_exits; + ++vcpu->common->stat.vz_gva_exits; er =3D kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GHFC: - ++vcpu->stat.vz_ghfc_exits; + ++vcpu->common->stat.vz_ghfc_exits; er =3D kvm_trap_vz_handle_ghfc(cause, opc, vcpu); break; case MIPS_GCTL0_GEXC_GPA: - ++vcpu->stat.vz_gpa_exits; + ++vcpu->common->stat.vz_gpa_exits; er =3D kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc, vcpu); break; default: - ++vcpu->stat.vz_resvd_exits; + ++vcpu->common->stat.vz_resvd_exits; er =3D kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc, vcpu); break; @@ -1534,7 +1534,7 @@ static int kvm_trap_vz_handle_guest_exit(struct kvm_v= cpu *vcpu) } else if (er =3D=3D EMULATE_HYPERCALL) { ret =3D kvm_mips_handle_hypcall(vcpu); } else { - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; ret =3D RESUME_HOST; } return ret; @@ -1579,7 +1579,7 @@ static int kvm_trap_vz_handle_cop_unusable(struct 
kvm= _vcpu *vcpu) break; =20 case EMULATE_FAIL: - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; ret =3D RESUME_HOST; break; =20 @@ -1611,7 +1611,7 @@ static int kvm_trap_vz_handle_msa_disabled(struct kvm= _vcpu *vcpu) (read_gc0_status() & (ST0_CU1 | ST0_FR)) =3D=3D ST0_CU1 || !(read_gc0_config5() & MIPS_CONF5_MSAEN) || vcpu->arch.aux_inuse & KVM_MIPS_AUX_MSA) { - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; return RESUME_HOST; } =20 @@ -1622,7 +1622,7 @@ static int kvm_trap_vz_handle_msa_disabled(struct kvm= _vcpu *vcpu) =20 static int kvm_trap_vz_handle_tlb_ld_miss(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; u32 *opc =3D (u32 *) vcpu->arch.pc; u32 cause =3D vcpu->arch.host_cp0_cause; ulong badvaddr =3D vcpu->arch.host_cp0_badvaddr; @@ -1669,7 +1669,7 @@ static int kvm_trap_vz_handle_tlb_ld_miss(struct kvm_= vcpu *vcpu) =20 static int kvm_trap_vz_handle_tlb_st_miss(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; u32 *opc =3D (u32 *) vcpu->arch.pc; u32 cause =3D vcpu->arch.host_cp0_cause; ulong badvaddr =3D vcpu->arch.host_cp0_badvaddr; diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c index ff6c38373957..75665a8ae276 100644 --- a/arch/powerpc/kvm/book3s.c +++ b/arch/powerpc/kvm/book3s.c @@ -178,7 +178,7 @@ void kvmppc_book3s_dequeue_irqprio(struct kvm_vcpu *vcp= u, =20 void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec) { - vcpu->stat.queue_intr++; + vcpu->common->stat.queue_intr++; =20 set_bit(kvmppc_book3s_vec2irqprio(vec), &vcpu->arch.pending_exceptions); @@ -818,7 +818,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu= *vcpu, struct kvm_guest_debug *dbg) { vcpu_load(vcpu); - vcpu->guest_debug =3D dbg->control; + vcpu->common->guest_debug =3D dbg->control; vcpu_put(vcpu); return 0; } diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_em= ulate.c index de126d153328..7078db40fb17 100644 --- a/arch/powerpc/kvm/book3s_emulate.c +++ b/arch/powerpc/kvm/book3s_emulate.c @@ -367,13 +367,13 @@ int kvmppc_core_emulate_op_pr(struct kvm_vcpu *vcpu, if (kvmppc_h_pr(vcpu, cmd) =3D=3D EMULATE_DONE) break; =20 - vcpu->run->papr_hcall.nr =3D cmd; + vcpu->common->run->papr_hcall.nr =3D cmd; for (i =3D 0; i < 9; ++i) { ulong gpr =3D kvmppc_get_gpr(vcpu, 4 + i); - vcpu->run->papr_hcall.args[i] =3D gpr; + vcpu->common->run->papr_hcall.args[i] =3D gpr; } =20 - vcpu->run->exit_reason =3D KVM_EXIT_PAPR_HCALL; + vcpu->common->run->exit_reason =3D KVM_EXIT_PAPR_HCALL; vcpu->arch.hcall_needed =3D 1; emulated =3D EMULATE_EXIT_USER; break; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 8f7d7e37bc8c..b71735d10d85 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -238,7 +238,7 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *v= cpu) =20 waitp =3D kvm_arch_vcpu_get_wait(vcpu); if (rcuwait_wake_up(waitp)) - ++vcpu->stat.generic.halt_wakeup; + ++vcpu->common->stat.generic.halt_wakeup; =20 cpu =3D READ_ONCE(vcpu->arch.thread_cpu); if (cpu >=3D 0 && kvmppc_ipi_thread(cpu)) @@ -1482,8 +1482,8 @@ static int kvmppc_emulate_debug_inst(struct kvm_vcpu = *vcpu) } =20 if (ppc_inst_val(last_inst) =3D=3D KVMPPC_INST_SW_BREAKPOINT) { - vcpu->run->exit_reason =3D KVM_EXIT_DEBUG; - vcpu->run->debug.arch.address =3D kvmppc_get_pc(vcpu); + 
vcpu->common->run->exit_reason =3D KVM_EXIT_DEBUG; + vcpu->common->run->debug.arch.address =3D kvmppc_get_pc(vcpu); return RESUME_HOST; } else { kvmppc_core_queue_program(vcpu, SRR1_PROGILL | @@ -1627,10 +1627,10 @@ static int kvmppc_tm_unavailable(struct kvm_vcpu *v= cpu) static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int r =3D RESUME_HOST; =20 - vcpu->stat.sum_exits++; + vcpu->common->stat.sum_exits++; =20 /* * This can happen if an interrupt occurs in the last stages @@ -1659,13 +1659,13 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *v= cpu, vcpu->arch.trap =3D BOOK3S_INTERRUPT_HV_DECREMENTER; fallthrough; case BOOK3S_INTERRUPT_HV_DECREMENTER: - vcpu->stat.dec_exits++; + vcpu->common->stat.dec_exits++; r =3D RESUME_GUEST; break; case BOOK3S_INTERRUPT_EXTERNAL: case BOOK3S_INTERRUPT_H_DOORBELL: case BOOK3S_INTERRUPT_H_VIRT: - vcpu->stat.ext_intr_exits++; + vcpu->common->stat.ext_intr_exits++; r =3D RESUME_GUEST; break; /* SR/HMI/PMI are HV interrupts that host has handled. Resume guest.*/ @@ -1887,7 +1887,7 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcp= u, vcpu->arch.last_inst =3D kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.emul_inst) : vcpu->arch.emul_inst; - if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_SW_BP) { r =3D kvmppc_emulate_debug_inst(vcpu); } else { kvmppc_core_queue_program(vcpu, SRR1_PROGILL | @@ -1960,7 +1960,7 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu = *vcpu) int r; int srcu_idx; =20 - vcpu->stat.sum_exits++; + vcpu->common->stat.sum_exits++; =20 /* * This can happen if an interrupt occurs in the last stages @@ -1981,22 +1981,22 @@ static int kvmppc_handle_nested_exit(struct kvm_vcp= u *vcpu) switch (vcpu->arch.trap) { /* We're good on these - the host merely wanted to get our attention */ case BOOK3S_INTERRUPT_HV_DECREMENTER: - vcpu->stat.dec_exits++; + vcpu->common->stat.dec_exits++; r =3D RESUME_GUEST; break; case BOOK3S_INTERRUPT_EXTERNAL: - vcpu->stat.ext_intr_exits++; + vcpu->common->stat.ext_intr_exits++; r =3D RESUME_HOST; break; case BOOK3S_INTERRUPT_H_DOORBELL: case BOOK3S_INTERRUPT_H_VIRT: - vcpu->stat.ext_intr_exits++; + vcpu->common->stat.ext_intr_exits++; r =3D RESUME_GUEST; break; /* These need to go to the nested HV */ case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER: vcpu->arch.trap =3D BOOK3S_INTERRUPT_HV_DECREMENTER; - vcpu->stat.dec_exits++; + vcpu->common->stat.dec_exits++; r =3D RESUME_HOST; break; /* SR/HMI/PMI are HV interrupts that host has handled. 
Resume guest.*/ @@ -4679,7 +4679,7 @@ static int kvmhv_setup_mmu(struct kvm_vcpu *vcpu) =20 static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int n_ceded, i, r; struct kvmppc_vcore *vc; struct kvm_vcpu *v; @@ -4790,7 +4790,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) =20 if (vcpu->arch.state =3D=3D KVMPPC_VCPU_RUNNABLE) { kvmppc_remove_runnable(vc, vcpu, mftb()); - vcpu->stat.signal_exits++; + vcpu->common->stat.signal_exits++; run->exit_reason =3D KVM_EXIT_INTR; vcpu->arch.ret =3D -EINTR; } @@ -4811,7 +4811,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 = time_limit, unsigned long lpcr) { struct rcuwait *wait =3D kvm_arch_vcpu_get_wait(vcpu); - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int trap, r, pcpu; int srcu_idx; struct kvmppc_vcore *vc; @@ -4978,7 +4978,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 = time_limit, for (;;) { set_current_state(TASK_INTERRUPTIBLE); if (signal_pending(current)) { - vcpu->stat.signal_exits++; + vcpu->common->stat.signal_exits++; run->exit_reason =3D KVM_EXIT_INTR; vcpu->arch.ret =3D -EINTR; break; @@ -5001,7 +5001,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 = time_limit, return vcpu->arch.ret; =20 sigpend: - vcpu->stat.signal_exits++; + vcpu->common->stat.signal_exits++; run->exit_reason =3D KVM_EXIT_INTR; vcpu->arch.ret =3D -EINTR; out: @@ -5015,7 +5015,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 = time_limit, =20 static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int r; int srcu_idx; struct kvm *kvm; diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_= hv_nested.c index 05f5220960c6..0a8383cca6f7 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -203,7 +203,7 @@ static void kvmhv_nested_mmio_needed(struct kvm_vcpu *v= cpu, u64 regs_ptr) * written there in kvmppc_complete_mmio_load() */ if (((vcpu->arch.io_gpr & KVM_MMIO_REG_EXT_MASK) =3D=3D KVM_MMIO_REG_GPR) - && (vcpu->mmio_is_write =3D=3D 0)) { + && (vcpu->common->mmio_is_write =3D=3D 0)) { vcpu->arch.nested_io_gpr =3D (gpa_t) regs_ptr + offsetof(struct pt_regs, gpr[vcpu->arch.io_gpr]); @@ -420,7 +420,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) if (r =3D=3D -EINTR) return H_INTERRUPT; =20 - if (vcpu->mmio_needed) { + if (vcpu->common->mmio_needed) { kvmhv_nested_mmio_needed(vcpu, regs_ptr); return H_TOO_HARD; } diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s= _hv_rm_xics.c index f2636414d82a..abc5b2b7afc5 100644 --- a/arch/powerpc/kvm/book3s_hv_rm_xics.c +++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c @@ -132,7 +132,7 @@ static void icp_rm_set_vcpu_irq(struct kvm_vcpu *vcpu, int hcore; =20 /* Mark the target VCPU as having an interrupt pending */ - vcpu->stat.queue_intr++; + vcpu->common->stat.queue_intr++; set_bit(BOOK3S_IRQPRIO_EXTERNAL, &vcpu->arch.pending_exceptions); =20 /* Kick self ? 
Just set MER and return */ @@ -713,14 +713,14 @@ static int ics_rm_eoi(struct kvm_vcpu *vcpu, u32 irq) =20 /* Handle passthrough interrupts */ if (state->host_irq) { - ++vcpu->stat.pthru_all; + ++vcpu->common->stat.pthru_all; if (state->intr_cpu !=3D -1) { int pcpu =3D raw_smp_processor_id(); =20 pcpu =3D cpu_first_thread_sibling(pcpu); - ++vcpu->stat.pthru_host; + ++vcpu->common->stat.pthru_host; if (state->intr_cpu !=3D pcpu) { - ++vcpu->stat.pthru_bad_aff; + ++vcpu->common->stat.pthru_bad_aff; xics_opal_set_server(state->host_irq, pcpu); } state->intr_cpu =3D -1; diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c index 7b8ae509328f..a9ba0b6334d0 100644 --- a/arch/powerpc/kvm/book3s_pr.c +++ b/arch/powerpc/kvm/book3s_pr.c @@ -493,7 +493,7 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u6= 4 msr) if (msr & MSR_POW) { if (!vcpu->arch.pending_exceptions) { kvm_vcpu_halt(vcpu); - vcpu->stat.generic.halt_wakeup++; + vcpu->common->stat.generic.halt_wakeup++; =20 /* Unset POW bit after we woke up */ msr &=3D ~MSR_POW; @@ -774,17 +774,17 @@ static int kvmppc_handle_pagefault(struct kvm_vcpu *v= cpu, /* The guest's PTE is not mapped yet. Map on the host */ if (kvmppc_mmu_map_page(vcpu, &pte, iswrite) =3D=3D -EIO) { /* Exit KVM if mapping failed */ - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; return RESUME_HOST; } if (data) - vcpu->stat.sp_storage++; + vcpu->common->stat.sp_storage++; else if (vcpu->arch.mmu.is_dcbz32(vcpu) && (!(vcpu->arch.hflags & BOOK3S_HFLAG_DCBZ32))) kvmppc_patch_dcbz(vcpu, &pte); } else { /* MMIO */ - vcpu->stat.mmio_exits++; + vcpu->common->stat.mmio_exits++; vcpu->arch.paddr_accessed =3D pte.raddr; vcpu->arch.vaddr_accessed =3D pte.eaddr; r =3D kvmppc_emulate_mmio(vcpu); @@ -1056,7 +1056,7 @@ void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr) =20 static void kvmppc_setup_debug(struct kvm_vcpu *vcpu) { - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) { u64 msr =3D kvmppc_get_msr(vcpu); =20 kvmppc_set_msr(vcpu, msr | MSR_SE); @@ -1065,7 +1065,7 @@ static void kvmppc_setup_debug(struct kvm_vcpu *vcpu) =20 static void kvmppc_clear_debug(struct kvm_vcpu *vcpu) { - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) { u64 msr =3D kvmppc_get_msr(vcpu); =20 kvmppc_set_msr(vcpu, msr & ~MSR_SE); @@ -1105,7 +1105,7 @@ static int kvmppc_exit_pr_progint(struct kvm_vcpu *vc= pu, unsigned int exit_nr) } } =20 - vcpu->stat.emulated_inst_exits++; + vcpu->common->stat.emulated_inst_exits++; er =3D kvmppc_emulate_instruction(vcpu); switch (er) { case EMULATE_DONE: @@ -1121,7 +1121,7 @@ static int kvmppc_exit_pr_progint(struct kvm_vcpu *vc= pu, unsigned int exit_nr) r =3D RESUME_GUEST; break; case EMULATE_DO_MMIO: - vcpu->run->exit_reason =3D KVM_EXIT_MMIO; + vcpu->common->run->exit_reason =3D KVM_EXIT_MMIO; r =3D RESUME_HOST_NV; break; case EMULATE_EXIT_USER: @@ -1136,11 +1136,11 @@ static int kvmppc_exit_pr_progint(struct kvm_vcpu *= vcpu, unsigned int exit_nr) =20 int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsigned int exit_nr) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int r =3D RESUME_HOST; int s; =20 - vcpu->stat.sum_exits++; + vcpu->common->stat.sum_exits++; =20 run->exit_reason =3D KVM_EXIT_UNKNOWN; run->ready_for_interrupt_injection =3D 1; @@ -1154,7 +1154,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsi= 
gned int exit_nr) case BOOK3S_INTERRUPT_INST_STORAGE: { ulong shadow_srr1 =3D vcpu->arch.shadow_srr1; - vcpu->stat.pf_instruc++; + vcpu->common->stat.pf_instruc++; =20 if (kvmppc_is_split_real(vcpu)) kvmppc_fixup_split_real(vcpu); @@ -1182,7 +1182,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsi= gned int exit_nr) int idx =3D srcu_read_lock(&vcpu->kvm->srcu); r =3D kvmppc_handle_pagefault(vcpu, kvmppc_get_pc(vcpu), exit_nr); srcu_read_unlock(&vcpu->kvm->srcu, idx); - vcpu->stat.sp_instruc++; + vcpu->common->stat.sp_instruc++; } else if (vcpu->arch.mmu.is_dcbz32(vcpu) && (!(vcpu->arch.hflags & BOOK3S_HFLAG_DCBZ32))) { /* @@ -1203,7 +1203,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsi= gned int exit_nr) { ulong dar =3D kvmppc_get_fault_dar(vcpu); u32 fault_dsisr =3D vcpu->arch.fault_dsisr; - vcpu->stat.pf_storage++; + vcpu->common->stat.pf_storage++; =20 #ifdef CONFIG_PPC_BOOK3S_32 /* We set segments as unused segments when invalidating them. So @@ -1258,13 +1258,13 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, un= signed int exit_nr) case BOOK3S_INTERRUPT_HV_DECREMENTER: case BOOK3S_INTERRUPT_DOORBELL: case BOOK3S_INTERRUPT_H_DOORBELL: - vcpu->stat.dec_exits++; + vcpu->common->stat.dec_exits++; r =3D RESUME_GUEST; break; case BOOK3S_INTERRUPT_EXTERNAL: case BOOK3S_INTERRUPT_EXTERNAL_HV: case BOOK3S_INTERRUPT_H_VIRT: - vcpu->stat.ext_intr_exits++; + vcpu->common->stat.ext_intr_exits++; r =3D RESUME_GUEST; break; case BOOK3S_INTERRUPT_HMI: @@ -1333,7 +1333,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsi= gned int exit_nr) r =3D RESUME_GUEST; } else { /* Guest syscalls */ - vcpu->stat.syscall_exits++; + vcpu->common->stat.syscall_exits++; kvmppc_book3s_queue_irqprio(vcpu, exit_nr); r =3D RESUME_GUEST; } @@ -1407,7 +1407,7 @@ int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsi= gned int exit_nr) r =3D RESUME_GUEST; break; case BOOK3S_INTERRUPT_TRACE: - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) { run->exit_reason =3D KVM_EXIT_DEBUG; r =3D RESUME_HOST; } else { @@ -1813,7 +1813,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_vcpu *vcpu) =20 /* Check if we can run the vcpu at all */ if (!vcpu->arch.sane) { - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; ret =3D -EINVAL; goto out; } @@ -1855,7 +1855,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_vcpu *vcpu) =20 srr_regs_clobbered(); out: - vcpu->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; return ret; } =20 diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr= _papr.c index b2c89e850d7a..bde9c79c339b 100644 --- a/arch/powerpc/kvm/book3s_pr_papr.c +++ b/arch/powerpc/kvm/book3s_pr_papr.c @@ -393,7 +393,7 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cm= d) case H_CEDE: kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE); kvm_vcpu_halt(vcpu); - vcpu->stat.generic.halt_wakeup++; + vcpu->common->stat.generic.halt_wakeup++; return EMULATE_DONE; case H_LOGICAL_CI_LOAD: return kvmppc_h_pr_logical_ci_load(vcpu); diff --git a/arch/powerpc/kvm/book3s_xics.c b/arch/powerpc/kvm/book3s_xics.c index 589a8f257120..94bd793a760d 100644 --- a/arch/powerpc/kvm/book3s_xics.c +++ b/arch/powerpc/kvm/book3s_xics.c @@ -1353,17 +1353,17 @@ static void kvmppc_xics_release(struct kvm_device *= dev) */ kvm_for_each_vcpu(i, vcpu, kvm) { /* - * Take vcpu->mutex to ensure that no one_reg get/set ioctl + * Take vcpu->common->mutex to ensure 
that no one_reg get/set ioctl * (i.e. kvmppc_xics_[gs]et_icp) can be done concurrently. - * Holding the vcpu->mutex also means that execution is + * Holding the vcpu->common->mutex also means that execution is * excluded for the vcpu until the ICP was freed. When the vcpu * can execute again, vcpu->arch.icp and vcpu->arch.irq_type * have been cleared and the vcpu will not be going into the * XICS code anymore. */ - mutex_lock(&vcpu->mutex); + mutex_lock(&vcpu->common->mutex); kvmppc_xics_free_icp(vcpu); - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); } =20 if (kvm) diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c index 1362c672387e..ae7ec36e5fbd 100644 --- a/arch/powerpc/kvm/book3s_xive.c +++ b/arch/powerpc/kvm/book3s_xive.c @@ -1510,7 +1510,7 @@ int kvmppc_xive_set_icp(struct kvm_vcpu *vcpu, u64 ic= pval) =20 /* * We can't update the state of a "pushed" VCPU, but that - * shouldn't happen because the vcpu->mutex makes running a + * shouldn't happen because the vcpu->common->mutex makes running a * vcpu mutually exclusive with doing one_reg get/set on it. */ if (WARN_ON(vcpu->arch.xive_pushed)) @@ -1770,7 +1770,7 @@ void kvmppc_xive_disable_vcpu_interrupts(struct kvm_v= cpu *vcpu) =20 /* * Clear pointers to escalation interrupt ESB. - * This is safe because the vcpu->mutex is held, preventing + * This is safe because the vcpu->common->mutex is held, preventing * any other CPU from concurrently executing a KVM_RUN ioctl. */ vcpu->arch.xive_esc_vaddr =3D 0; @@ -2663,16 +2663,16 @@ static void kvmppc_xive_release(struct kvm_device *= dev) */ kvm_for_each_vcpu(i, vcpu, kvm) { /* - * Take vcpu->mutex to ensure that no one_reg get/set ioctl + * Take vcpu->common->mutex to ensure that no one_reg get/set ioctl * (i.e. kvmppc_xive_[gs]et_icp) can be done concurrently. - * Holding the vcpu->mutex also means that the vcpu cannot + * Holding the vcpu->common->mutex also means that the vcpu cannot * be executing the KVM_RUN ioctl, and therefore it cannot * be executing the XIVE push or pull code or accessing * the XIVE MMIO regions. */ - mutex_lock(&vcpu->mutex); + mutex_lock(&vcpu->common->mutex); kvmppc_xive_cleanup_vcpu(vcpu); - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); } =20 /* diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3= s_xive_native.c index 6e2ebbd8aaac..8cf6489f0d76 100644 --- a/arch/powerpc/kvm/book3s_xive_native.c +++ b/arch/powerpc/kvm/book3s_xive_native.c @@ -1045,16 +1045,16 @@ static void kvmppc_xive_native_release(struct kvm_d= evice *dev) */ kvm_for_each_vcpu(i, vcpu, kvm) { /* - * Take vcpu->mutex to ensure that no one_reg get/set ioctl + * Take vcpu->common->mutex to ensure that no one_reg get/set ioctl * (i.e. kvmppc_xive_native_[gs]et_vp) can be being done. - * Holding the vcpu->mutex also means that the vcpu cannot + * Holding the vcpu->common->mutex also means that the vcpu cannot * be executing the KVM_RUN ioctl, and therefore it cannot * be executing the XIVE push or pull code or accessing * the XIVE MMIO regions. 
*/ - mutex_lock(&vcpu->mutex); + mutex_lock(&vcpu->common->mutex); kvmppc_xive_native_cleanup_vcpu(vcpu); - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); } =20 /* diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c index 6a5be025a8af..10e4fbf8b8cd 100644 --- a/arch/powerpc/kvm/booke.c +++ b/arch/powerpc/kvm/booke.c @@ -234,7 +234,7 @@ static void kvmppc_vcpu_sync_debug(struct kvm_vcpu *vcp= u) #endif =20 /* Force enable debug interrupts when user space wants to debug */ - if (vcpu->guest_debug) { + if (vcpu->common->guest_debug) { #ifdef CONFIG_KVM_BOOKE_HV /* * Since there is no shadow MSR, sync MSR_DE into the guest @@ -743,14 +743,14 @@ int kvmppc_core_check_requests(struct kvm_vcpu *vcpu) #endif =20 if (kvm_check_request(KVM_REQ_WATCHDOG, vcpu)) { - vcpu->run->exit_reason =3D KVM_EXIT_WATCHDOG; + vcpu->common->run->exit_reason =3D KVM_EXIT_WATCHDOG; r =3D 0; } =20 if (kvm_check_request(KVM_REQ_EPR_EXIT, vcpu)) { - vcpu->run->epr.epr =3D 0; + vcpu->common->run->epr.epr =3D 0; vcpu->arch.epr_needed =3D true; - vcpu->run->exit_reason =3D KVM_EXIT_EPR; + vcpu->common->run->exit_reason =3D KVM_EXIT_EPR; r =3D 0; } =20 @@ -763,7 +763,7 @@ int kvmppc_vcpu_run(struct kvm_vcpu *vcpu) struct debug_reg debug; =20 if (!vcpu->arch.sane) { - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; return -EINVAL; } =20 @@ -823,7 +823,7 @@ int kvmppc_vcpu_run(struct kvm_vcpu *vcpu) #endif =20 out: - vcpu->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; return ret; } =20 @@ -848,8 +848,8 @@ static int emulation_exit(struct kvm_vcpu *vcpu) __func__, vcpu->arch.regs.nip, vcpu->arch.last_inst); /* For debugging, encode the failing instruction and * report it to userspace. */ - vcpu->run->hw.hardware_exit_reason =3D ~0ULL << 32; - vcpu->run->hw.hardware_exit_reason |=3D vcpu->arch.last_inst; + vcpu->common->run->hw.hardware_exit_reason =3D ~0ULL << 32; + vcpu->common->run->hw.hardware_exit_reason |=3D vcpu->arch.last_inst; kvmppc_core_queue_program(vcpu, ESR_PIL); return RESUME_HOST; =20 @@ -863,11 +863,11 @@ static int emulation_exit(struct kvm_vcpu *vcpu) =20 static int kvmppc_handle_debug(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct debug_reg *dbg_reg =3D &(vcpu->arch.dbg_reg); u32 dbsr =3D vcpu->arch.dbsr; =20 - if (vcpu->guest_debug =3D=3D 0) { + if (vcpu->common->guest_debug =3D=3D 0) { /* * Debug resources belong to Guest. * Imprecise debug event is not injected @@ -993,8 +993,8 @@ static int kvmppc_resume_inst_load(struct kvm_vcpu *vcp= u, __func__, vcpu->arch.regs.nip); /* For debugging, encode the failing instruction and * report it to userspace. 
*/ - vcpu->run->hw.hardware_exit_reason =3D ~0ULL << 32; - vcpu->run->hw.hardware_exit_reason |=3D last_inst; + vcpu->common->run->hw.hardware_exit_reason =3D ~0ULL << 32; + vcpu->common->run->hw.hardware_exit_reason |=3D last_inst; kvmppc_core_queue_program(vcpu, ESR_PIL); return RESUME_HOST; =20 @@ -1010,7 +1010,7 @@ static int kvmppc_resume_inst_load(struct kvm_vcpu *v= cpu, */ int kvmppc_handle_exit(struct kvm_vcpu *vcpu, unsigned int exit_nr) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int r =3D RESUME_HOST; int s; int idx; @@ -1040,7 +1040,7 @@ int kvmppc_handle_exit(struct kvm_vcpu *vcpu, unsigne= d int exit_nr) break; case BOOKE_INTERRUPT_PROGRAM: /* SW breakpoints arrive as illegal instructions on HV */ - if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_SW_BP) { emulated =3D kvmppc_get_last_inst(vcpu, INST_GENERIC, &pinst); last_inst =3D ppc_inst_val(pinst); } @@ -1136,7 +1136,7 @@ int kvmppc_handle_exit(struct kvm_vcpu *vcpu, unsigne= d int exit_nr) break; =20 case BOOKE_INTERRUPT_PROGRAM: - if ((vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) && + if ((vcpu->common->guest_debug & KVM_GUESTDBG_USE_SW_BP) && (last_inst =3D=3D KVMPPC_INST_SW_BREAKPOINT)) { /* * We are here because of an SW breakpoint instr, @@ -2039,16 +2039,16 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_= vcpu *vcpu, =20 if (!(dbg->control & KVM_GUESTDBG_ENABLE)) { vcpu->arch.dbg_reg.dbcr0 =3D 0; - vcpu->guest_debug =3D 0; + vcpu->common->guest_debug =3D 0; kvm_guest_protect_msr(vcpu, MSR_DE, false); goto out; } =20 kvm_guest_protect_msr(vcpu, MSR_DE, true); - vcpu->guest_debug =3D dbg->control; + vcpu->common->guest_debug =3D dbg->control; vcpu->arch.dbg_reg.dbcr0 =3D 0; =20 - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) vcpu->arch.dbg_reg.dbcr0 |=3D DBCR0_IDM | DBCR0_IC; =20 /* Code below handles only HW breakpoints */ @@ -2072,7 +2072,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vc= pu *vcpu, dbg_reg->dbcr2 =3D DBCR2_DAC1US | DBCR2_DAC2US; #endif =20 - if (!(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP)) + if (!(vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP)) goto out; =20 ret =3D -EINVAL; diff --git a/arch/powerpc/kvm/booke_emulate.c b/arch/powerpc/kvm/booke_emul= ate.c index d8d38aca71bd..77d0f0f40db8 100644 --- a/arch/powerpc/kvm/booke_emulate.c +++ b/arch/powerpc/kvm/booke_emulate.c @@ -146,7 +146,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -157,7 +157,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -169,7 +169,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -180,7 +180,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. 
*/ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -192,7 +192,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -203,7 +203,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -214,7 +214,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -229,7 +229,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -240,7 +240,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 debug_inst =3D true; @@ -251,7 +251,7 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong spr_val) * If userspace is debugging guest then guest * can not access debug registers. */ - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) break; =20 vcpu->arch.dbsr &=3D ~spr_val; @@ -427,7 +427,7 @@ int kvmppc_booke_emulate_mfspr(struct kvm_vcpu *vcpu, i= nt sprn, ulong *spr_val) break; case SPRN_DBCR0: *spr_val =3D vcpu->arch.dbg_reg.dbcr0; - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) *spr_val =3D *spr_val | DBCR0_EDM; break; case SPRN_DBCR1: diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulat= e.c index 051102d50c31..ded7c572157e 100644 --- a/arch/powerpc/kvm/e500_emulate.c +++ b/arch/powerpc/kvm/e500_emulate.c @@ -90,9 +90,9 @@ static int kvmppc_e500_emul_ehpriv(struct kvm_vcpu *vcpu, =20 switch (get_oc(inst)) { case EHPRIV_OC_DEBUG: - vcpu->run->exit_reason =3D KVM_EXIT_DEBUG; - vcpu->run->debug.arch.address =3D vcpu->arch.regs.nip; - vcpu->run->debug.arch.status =3D 0; + vcpu->common->run->exit_reason =3D KVM_EXIT_DEBUG; + vcpu->common->run->debug.arch.address =3D vcpu->arch.regs.nip; + vcpu->common->run->debug.arch.status =3D 0; kvmppc_account_exit(vcpu, DEBUG_EXITS); emulated =3D EMULATE_EXIT_USER; *advance =3D 0; diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c index 355d5206e8aa..e66c5a637a4b 100644 --- a/arch/powerpc/kvm/emulate.c +++ b/arch/powerpc/kvm/emulate.c @@ -272,9 +272,9 @@ int kvmppc_emulate_instruction(struct kvm_vcpu *vcpu) * these are illegal instructions. 
*/ if (inst =3D=3D KVMPPC_INST_SW_BREAKPOINT) { - vcpu->run->exit_reason =3D KVM_EXIT_DEBUG; - vcpu->run->debug.arch.status =3D 0; - vcpu->run->debug.arch.address =3D kvmppc_get_pc(vcpu); + vcpu->common->run->exit_reason =3D KVM_EXIT_DEBUG; + vcpu->common->run->debug.arch.status =3D 0; + vcpu->common->run->debug.arch.address =3D kvmppc_get_pc(vcpu); emulated =3D EMULATE_EXIT_USER; advance =3D 0; } else diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulat= e_loadstore.c index ec60c7979718..9869f2f65cee 100644 --- a/arch/powerpc/kvm/emulate_loadstore.c +++ b/arch/powerpc/kvm/emulate_loadstore.c @@ -97,7 +97,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) int type =3D op.type & INSTR_TYPE_MASK; int size =3D GETSIZE(op.type); =20 - vcpu->mmio_is_write =3D OP_IS_STORE(type); + vcpu->common->mmio_is_write =3D OP_IS_STORE(type); =20 switch (type) { case LOAD: { diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index 5e6c7b527677..b7346994f526 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -93,15 +93,15 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu) =20 if (signal_pending(current)) { kvmppc_account_exit(vcpu, SIGNAL_EXITS); - vcpu->run->exit_reason =3D KVM_EXIT_INTR; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTR; r =3D -EINTR; break; } =20 - vcpu->mode =3D IN_GUEST_MODE; + vcpu->common->mode =3D IN_GUEST_MODE; =20 /* - * Reading vcpu->requests must happen after setting vcpu->mode, + * Reading vcpu->requests must happen after setting vcpu->common->mode, * so we don't miss a request because the requester sees * OUTSIDE_GUEST_MODE and assumes we'll be checking requests * before next entering the guest (and thus doesn't IPI). @@ -295,7 +295,7 @@ int kvmppc_emulate_mmio(struct kvm_vcpu *vcpu) r =3D RESUME_GUEST; break; case EMULATE_DO_MMIO: - vcpu->run->exit_reason =3D KVM_EXIT_MMIO; + vcpu->common->run->exit_reason =3D KVM_EXIT_MMIO; /* We must reload nonvolatiles because "update" load/store * instructions modify register state. 
*/ /* Future optimization: only reload non-volatiles if they were @@ -318,7 +318,7 @@ int kvmppc_emulate_mmio(struct kvm_vcpu *vcpu) if (!IS_ENABLED(CONFIG_BOOKE)) { ulong dsisr =3D DSISR_BADACCESS; =20 - if (vcpu->mmio_is_write) + if (vcpu->common->mmio_is_write) dsisr |=3D DSISR_ISSTORE; =20 kvmppc_core_queue_data_storage(vcpu, @@ -352,7 +352,7 @@ int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int = size, void *ptr, struct kvmppc_pte pte; int r =3D -EINVAL; =20 - vcpu->stat.st++; + vcpu->common->stat.st++; =20 if (vcpu->kvm->arch.kvm_ops && vcpu->kvm->arch.kvm_ops->store_to_eaddr) r =3D vcpu->kvm->arch.kvm_ops->store_to_eaddr(vcpu, eaddr, ptr, @@ -395,7 +395,7 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int = size, void *ptr, struct kvmppc_pte pte; int rc =3D -EINVAL; =20 - vcpu->stat.ld++; + vcpu->common->stat.ld++; =20 if (vcpu->kvm->arch.kvm_ops && vcpu->kvm->arch.kvm_ops->load_from_eaddr) rc =3D vcpu->kvm->arch.kvm_ops->load_from_eaddr(vcpu, eaddr, ptr, @@ -1138,7 +1138,7 @@ static inline u32 dp_to_sp(u64 fprd) =20 static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; u64 gpr; =20 if (run->mmio.len > sizeof(gpr)) @@ -1250,7 +1250,7 @@ static int __kvmppc_handle_load(struct kvm_vcpu *vcpu, unsigned int rt, unsigned int bytes, int is_default_endian, int sign_extend) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int idx, ret; bool host_swabbed; =20 @@ -1270,8 +1270,8 @@ static int __kvmppc_handle_load(struct kvm_vcpu *vcpu, =20 vcpu->arch.io_gpr =3D rt; vcpu->arch.mmio_host_swabbed =3D host_swabbed; - vcpu->mmio_needed =3D 1; - vcpu->mmio_is_write =3D 0; + vcpu->common->mmio_needed =3D 1; + vcpu->common->mmio_is_write =3D 0; vcpu->arch.mmio_sign_extend =3D sign_extend; =20 idx =3D srcu_read_lock(&vcpu->kvm->srcu); @@ -1283,7 +1283,7 @@ static int __kvmppc_handle_load(struct kvm_vcpu *vcpu, =20 if (!ret) { kvmppc_complete_mmio_load(vcpu); - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; return EMULATE_DONE; } =20 @@ -1324,7 +1324,7 @@ int kvmppc_handle_vsx_load(struct kvm_vcpu *vcpu, if (emulated !=3D EMULATE_DONE) break; =20 - vcpu->arch.paddr_accessed +=3D vcpu->run->mmio.len; + vcpu->arch.paddr_accessed +=3D vcpu->common->run->mmio.len; =20 vcpu->arch.mmio_vsx_copy_nums--; vcpu->arch.mmio_vsx_offset++; @@ -1336,7 +1336,7 @@ int kvmppc_handle_vsx_load(struct kvm_vcpu *vcpu, int kvmppc_handle_store(struct kvm_vcpu *vcpu, u64 val, unsigned int bytes, int is_default_endian) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; void *data =3D run->mmio.data; int idx, ret; bool host_swabbed; @@ -1354,8 +1354,8 @@ int kvmppc_handle_store(struct kvm_vcpu *vcpu, run->mmio.phys_addr =3D vcpu->arch.paddr_accessed; run->mmio.len =3D bytes; run->mmio.is_write =3D 1; - vcpu->mmio_needed =3D 1; - vcpu->mmio_is_write =3D 1; + vcpu->common->mmio_needed =3D 1; + vcpu->common->mmio_is_write =3D 1; =20 if ((vcpu->arch.mmio_sp64_extend) && (bytes =3D=3D 4)) val =3D dp_to_sp(val); @@ -1385,7 +1385,7 @@ int kvmppc_handle_store(struct kvm_vcpu *vcpu, srcu_read_unlock(&vcpu->kvm->srcu, idx); =20 if (!ret) { - vcpu->mmio_needed =3D 0; + vcpu->common->mmio_needed =3D 0; return EMULATE_DONE; } =20 @@ -1470,7 +1470,7 @@ int kvmppc_handle_vsx_store(struct kvm_vcpu *vcpu, if (emulated !=3D EMULATE_DONE) break; =20 - vcpu->arch.paddr_accessed +=3D vcpu->run->mmio.len; + vcpu->arch.paddr_accessed +=3D vcpu->common->run->mmio.len; =20 
vcpu->arch.mmio_vsx_copy_nums--; vcpu->arch.mmio_vsx_offset++; @@ -1481,13 +1481,13 @@ int kvmppc_handle_vsx_store(struct kvm_vcpu *vcpu, =20 static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; enum emulation_result emulated =3D EMULATE_FAIL; int r; =20 vcpu->arch.paddr_accessed +=3D run->mmio.len; =20 - if (!vcpu->mmio_is_write) { + if (!vcpu->common->mmio_is_write) { emulated =3D kvmppc_handle_vsx_load(vcpu, vcpu->arch.io_gpr, run->mmio.len, 1, vcpu->arch.mmio_sign_extend); } else { @@ -1530,7 +1530,7 @@ int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu, if (emulated !=3D EMULATE_DONE) break; =20 - vcpu->arch.paddr_accessed +=3D vcpu->run->mmio.len; + vcpu->arch.paddr_accessed +=3D vcpu->common->run->mmio.len; vcpu->arch.mmio_vmx_copy_nums--; vcpu->arch.mmio_vmx_offset++; } @@ -1650,7 +1650,7 @@ int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu, if (emulated !=3D EMULATE_DONE) break; =20 - vcpu->arch.paddr_accessed +=3D vcpu->run->mmio.len; + vcpu->arch.paddr_accessed +=3D vcpu->common->run->mmio.len; vcpu->arch.mmio_vmx_copy_nums--; vcpu->arch.mmio_vmx_offset++; } @@ -1660,13 +1660,13 @@ int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu, =20 static int kvmppc_emulate_mmio_vmx_loadstore(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; enum emulation_result emulated =3D EMULATE_FAIL; int r; =20 vcpu->arch.paddr_accessed +=3D run->mmio.len; =20 - if (!vcpu->mmio_is_write) { + if (!vcpu->common->mmio_is_write) { emulated =3D kvmppc_handle_vmx_load(vcpu, vcpu->arch.io_gpr, run->mmio.len, 1); } else { @@ -1792,14 +1792,14 @@ int kvm_vcpu_ioctl_set_one_reg(struct kvm_vcpu *vcp= u, struct kvm_one_reg *reg) =20 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int r; =20 vcpu_load(vcpu); =20 - if (vcpu->mmio_needed) { - vcpu->mmio_needed =3D 0; - if (!vcpu->mmio_is_write) + if (vcpu->common->mmio_needed) { + vcpu->common->mmio_needed =3D 0; + if (!vcpu->common->mmio_is_write) kvmppc_complete_mmio_load(vcpu); #ifdef CONFIG_VSX if (vcpu->arch.mmio_vsx_copy_nums > 0) { @@ -1810,7 +1810,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) if (vcpu->arch.mmio_vsx_copy_nums > 0) { r =3D kvmppc_emulate_mmio_vsx_loadstore(vcpu); if (r =3D=3D RESUME_HOST) { - vcpu->mmio_needed =3D 1; + vcpu->common->mmio_needed =3D 1; goto out; } } @@ -1824,7 +1824,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) if (vcpu->arch.mmio_vmx_copy_nums > 0) { r =3D kvmppc_emulate_mmio_vmx_loadstore(vcpu); if (r =3D=3D RESUME_HOST) { - vcpu->mmio_needed =3D 1; + vcpu->common->mmio_needed =3D 1; goto out; } } @@ -1852,7 +1852,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) =20 kvm_sigset_activate(vcpu); =20 - if (!vcpu->wants_to_run) + if (!vcpu->common->wants_to_run) r =3D -EINTR; else r =3D kvmppc_vcpu_run(vcpu); diff --git a/arch/powerpc/kvm/timing.h b/arch/powerpc/kvm/timing.h index 45817ab82bb4..78b8872e5995 100644 --- a/arch/powerpc/kvm/timing.h +++ b/arch/powerpc/kvm/timing.h @@ -45,46 +45,46 @@ static inline void kvmppc_account_exit_stat(struct kvm_= vcpu *vcpu, int type) */ switch (type) { case EXT_INTR_EXITS: - vcpu->stat.ext_intr_exits++; + vcpu->common->stat.ext_intr_exits++; break; case DEC_EXITS: - vcpu->stat.dec_exits++; + vcpu->common->stat.dec_exits++; break; case EMULATED_INST_EXITS: - vcpu->stat.emulated_inst_exits++; + 
vcpu->common->stat.emulated_inst_exits++; break; case DSI_EXITS: - vcpu->stat.dsi_exits++; + vcpu->common->stat.dsi_exits++; break; case ISI_EXITS: - vcpu->stat.isi_exits++; + vcpu->common->stat.isi_exits++; break; case SYSCALL_EXITS: - vcpu->stat.syscall_exits++; + vcpu->common->stat.syscall_exits++; break; case DTLB_REAL_MISS_EXITS: - vcpu->stat.dtlb_real_miss_exits++; + vcpu->common->stat.dtlb_real_miss_exits++; break; case DTLB_VIRT_MISS_EXITS: - vcpu->stat.dtlb_virt_miss_exits++; + vcpu->common->stat.dtlb_virt_miss_exits++; break; case MMIO_EXITS: - vcpu->stat.mmio_exits++; + vcpu->common->stat.mmio_exits++; break; case ITLB_REAL_MISS_EXITS: - vcpu->stat.itlb_real_miss_exits++; + vcpu->common->stat.itlb_real_miss_exits++; break; case ITLB_VIRT_MISS_EXITS: - vcpu->stat.itlb_virt_miss_exits++; + vcpu->common->stat.itlb_virt_miss_exits++; break; case SIGNAL_EXITS: - vcpu->stat.signal_exits++; + vcpu->common->stat.signal_exits++; break; case DBELL_EXITS: - vcpu->stat.dbell_exits++; + vcpu->common->stat.dbell_exits++; break; case GDBELL_EXITS: - vcpu->stat.gdbell_exits++; + vcpu->common->stat.gdbell_exits++; break; } } diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h index ea1d7c808319..35c000d918bb 100644 --- a/arch/powerpc/kvm/trace.h +++ b/arch/powerpc/kvm/trace.h @@ -108,7 +108,7 @@ TRACE_EVENT(kvm_check_requests, =20 TP_fast_assign( __entry->cpu_nr =3D vcpu->vcpu_id; - __entry->requests =3D vcpu->requests; + __entry->requests =3D vcpu->common->requests; ), =20 TP_printk("vcpu=3D%x requests=3D%x", diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h index 77ebc724e6cd..6395698cfe05 100644 --- a/arch/powerpc/kvm/trace_hv.h +++ b/arch/powerpc/kvm/trace_hv.h @@ -504,7 +504,7 @@ TRACE_EVENT(kvmppc_run_vcpu_exit, =20 TP_fast_assign( __entry->vcpu_id =3D vcpu->vcpu_id; - __entry->exit =3D vcpu->run->exit_reason; + __entry->exit =3D vcpu->common->run->exit_reason; __entry->ret =3D vcpu->arch.ret; ), =20 diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c index 39cd26af5a69..17cf59ccfae3 100644 --- a/arch/riscv/kvm/aia_device.c +++ b/arch/riscv/kvm/aia_device.c @@ -18,7 +18,7 @@ static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_i= dx) =20 for (; vcpu_lock_idx >=3D 0; vcpu_lock_idx--) { tmp_vcpu =3D kvm_get_vcpu(kvm, vcpu_lock_idx); - mutex_unlock(&tmp_vcpu->mutex); + mutex_unlock(&tmp_vcpu->common->mutex); } } =20 @@ -33,7 +33,7 @@ static bool lock_all_vcpus(struct kvm *kvm) unsigned long c; =20 kvm_for_each_vcpu(c, tmp_vcpu, kvm) { - if (!mutex_trylock(&tmp_vcpu->mutex)) { + if (!mutex_trylock(&tmp_vcpu->common->mutex)) { unlock_vcpus(kvm, c - 1); return false; } @@ -207,12 +207,12 @@ static int aia_imsic_addr(struct kvm *kvm, u64 *addr, return -EINVAL; } =20 - mutex_lock(&vcpu->mutex); + mutex_lock(&vcpu->common->mutex); if (write) vcpu_aia->imsic_addr =3D *addr; else *addr =3D vcpu_aia->imsic_addr; - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); =20 return 0; } diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c index 0a1e859323b4..cceae3dc9102 100644 --- a/arch/riscv/kvm/aia_imsic.c +++ b/arch/riscv/kvm/aia_imsic.c @@ -734,7 +734,7 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vc= pu) struct imsic_mrif tmrif; void __iomem *new_vsfile_va; struct kvm *kvm =3D vcpu->kvm; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_vcpu_aia *vaia =3D &vcpu->arch.aia_context; struct imsic *imsic =3D vaia->imsic_state; int ret =3D 0, new_vsfile_hgei =3D -1, 
old_vsfile_hgei, old_vsfile_cpu; diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 8d7d381737ee..c2c8bc303394 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -517,10 +517,10 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vc= pu *vcpu, struct kvm_guest_debug *dbg) { if (dbg->control & KVM_GUESTDBG_ENABLE) { - vcpu->guest_debug =3D dbg->control; + vcpu->common->guest_debug =3D dbg->control; vcpu->arch.cfg.hedeleg &=3D ~BIT(EXC_BREAKPOINT); } else { - vcpu->guest_debug =3D 0; + vcpu->common->guest_debug =3D 0; vcpu->arch.cfg.hedeleg |=3D BIT(EXC_BREAKPOINT); } =20 @@ -555,7 +555,7 @@ static void kvm_riscv_vcpu_setup_config(struct kvm_vcpu= *vcpu) } =20 cfg->hedeleg =3D KVM_HEDELEG_DEFAULT; - if (vcpu->guest_debug) + if (vcpu->common->guest_debug) cfg->hedeleg &=3D ~BIT(EXC_BREAKPOINT); } =20 @@ -732,7 +732,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) { int ret; struct kvm_cpu_trap trap; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 if (!vcpu->arch.ran_atleast_once) kvm_riscv_vcpu_setup_config(vcpu); @@ -745,15 +745,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) switch (run->exit_reason) { case KVM_EXIT_MMIO: /* Process MMIO value returned from user-space */ - ret =3D kvm_riscv_vcpu_mmio_return(vcpu, vcpu->run); + ret =3D kvm_riscv_vcpu_mmio_return(vcpu, vcpu->common->run); break; case KVM_EXIT_RISCV_SBI: /* Process SBI value returned from user-space */ - ret =3D kvm_riscv_vcpu_sbi_return(vcpu, vcpu->run); + ret =3D kvm_riscv_vcpu_sbi_return(vcpu, vcpu->common->run); break; case KVM_EXIT_RISCV_CSR: /* Process CSR value returned from user-space */ - ret =3D kvm_riscv_vcpu_csr_return(vcpu, vcpu->run); + ret =3D kvm_riscv_vcpu_csr_return(vcpu, vcpu->common->run); break; default: ret =3D 0; @@ -764,7 +764,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) return ret; } =20 - if (!vcpu->wants_to_run) { + if (!vcpu->common->wants_to_run) { kvm_vcpu_srcu_read_unlock(vcpu); return -EINTR; } @@ -803,7 +803,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) * See the comment in kvm_vcpu_exiting_guest_mode() and * Documentation/virt/kvm/vcpu-requests.rst */ - vcpu->mode =3D IN_GUEST_MODE; + vcpu->common->mode =3D IN_GUEST_MODE; =20 kvm_vcpu_srcu_read_unlock(vcpu); smp_mb__after_srcu_read_unlock(); @@ -820,7 +820,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) if (kvm_riscv_gstage_vmid_ver_changed(&vcpu->kvm->arch.vmid) || kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) { - vcpu->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; local_irq_enable(); preempt_enable(); kvm_vcpu_srcu_read_lock(vcpu); @@ -841,8 +841,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) =20 kvm_riscv_vcpu_enter_exit(vcpu); =20 - vcpu->mode =3D OUTSIDE_GUEST_MODE; - vcpu->stat.exits++; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->stat.exits++; =20 /* * Save SCAUSE, STVAL, HTVAL, and HTINST because we might diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 97dec18e6989..c5e4f869cc33 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -201,7 +201,7 @@ void kvm_riscv_vcpu_wfi(struct kvm_vcpu *vcpu) =20 static int wfi_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) { - vcpu->stat.wfi_exit_stat++; + vcpu->common->stat.wfi_exit_stat++; kvm_riscv_vcpu_wfi(vcpu); return KVM_INSN_CONTINUE_NEXT_SEPC; } @@ -335,7 +335,7 @@ static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_r= un *run, ulong insn) if 
(rc > KVM_INSN_EXIT_TO_USER_SPACE) { if (rc =3D=3D KVM_INSN_CONTINUE_NEXT_SEPC) { run->riscv_csr.ret_value =3D val; - vcpu->stat.csr_exit_kernel++; + vcpu->common->stat.csr_exit_kernel++; kvm_riscv_vcpu_csr_return(vcpu, run); rc =3D KVM_INSN_CONTINUE_SAME_SEPC; } @@ -345,7 +345,7 @@ static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_r= un *run, ulong insn) =20 /* Exit to user-space for CSR emulation */ if (rc <=3D KVM_INSN_EXIT_TO_USER_SPACE) { - vcpu->stat.csr_exit_user++; + vcpu->common->stat.csr_exit_user++; run->exit_reason =3D KVM_EXIT_RISCV_CSR; } =20 @@ -576,13 +576,13 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, s= truct kvm_run *run, if (!kvm_io_bus_read(vcpu, KVM_MMIO_BUS, fault_addr, len, data_buf)) { /* Successfully handled MMIO access in the kernel so resume */ memcpy(run->mmio.data, data_buf, len); - vcpu->stat.mmio_exit_kernel++; + vcpu->common->stat.mmio_exit_kernel++; kvm_riscv_vcpu_mmio_return(vcpu, run); return 1; } =20 /* Exit to userspace for MMIO emulation */ - vcpu->stat.mmio_exit_user++; + vcpu->common->stat.mmio_exit_user++; run->exit_reason =3D KVM_EXIT_MMIO; =20 return 0; @@ -709,13 +709,13 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, = struct kvm_run *run, if (!kvm_io_bus_write(vcpu, KVM_MMIO_BUS, fault_addr, len, run->mmio.data)) { /* Successfully handled MMIO access in the kernel so resume */ - vcpu->stat.mmio_exit_kernel++; + vcpu->common->stat.mmio_exit_kernel++; kvm_riscv_vcpu_mmio_return(vcpu, run); return 1; } =20 /* Exit to userspace for MMIO emulation */ - vcpu->stat.mmio_exit_user++; + vcpu->common->stat.mmio_exit_user++; run->exit_reason =3D KVM_EXIT_MMIO; =20 return 0; diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 62f409d4176e..3e450efc8a87 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -117,7 +117,7 @@ void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, = struct kvm_run *run) struct kvm_cpu_context *cp =3D &vcpu->arch.guest_context; =20 vcpu->arch.sbi_context.return_handled =3D 0; - vcpu->stat.ecall_exit_stat++; + vcpu->common->stat.ecall_exit_stat++; run->exit_reason =3D KVM_EXIT_RISCV_SBI; run->riscv_sbi.extension_id =3D cp->a7; run->riscv_sbi.function_id =3D cp->a6; diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c index dce667f4b6ab..a759f1b80b8a 100644 --- a/arch/riscv/kvm/vcpu_sbi_hsm.c +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c @@ -81,7 +81,7 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *v= cpu) return SBI_ERR_INVALID_PARAM; if (!kvm_riscv_vcpu_stopped(target_vcpu)) return SBI_HSM_STATE_STARTED; - else if (vcpu->stat.generic.blocking) + else if (vcpu->common->stat.generic.blocking) return SBI_HSM_STATE_SUSPENDED; else return SBI_HSM_STATE_STOPPED; diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_h= ost.h index 8e77afbed58e..337bb899c7ac 100644 --- a/arch/s390/include/asm/kvm_host.h +++ b/arch/s390/include/asm/kvm_host.h @@ -708,13 +708,13 @@ struct kvm_hw_bp_info_arch { #define KVM_GUESTDBG_EXIT_PENDING 0x10000000 =20 #define guestdbg_enabled(vcpu) \ - (vcpu->guest_debug & KVM_GUESTDBG_ENABLE) + (vcpu->common->guest_debug & KVM_GUESTDBG_ENABLE) #define guestdbg_sstep_enabled(vcpu) \ - (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) + (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) #define guestdbg_hw_bp_enabled(vcpu) \ - (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) + (vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP) #define guestdbg_exit_pending(vcpu) (guestdbg_enabled(vcpu) && \ - (vcpu->guest_debug 
& KVM_GUESTDBG_EXIT_PENDING)) + (vcpu->common->guest_debug & KVM_GUESTDBG_EXIT_PENDING)) =20 #define KVM_GUESTDBG_VALID_MASK \ (KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP |\ diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c index 2a32438e09ce..7535e159961a 100644 --- a/arch/s390/kvm/diag.c +++ b/arch/s390/kvm/diag.c @@ -22,9 +22,9 @@ static int diag_release_pages(struct kvm_vcpu *vcpu) unsigned long start, end; unsigned long prefix =3D kvm_s390_get_prefix(vcpu); =20 - start =3D vcpu->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf0) >> 4]; - end =3D vcpu->run->s.regs.gprs[vcpu->arch.sie_block->ipa & 0xf] + PAGE_SI= ZE; - vcpu->stat.instruction_diagnose_10++; + start =3D vcpu->common->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf= 0) >> 4]; + end =3D vcpu->common->run->s.regs.gprs[vcpu->arch.sie_block->ipa & 0xf] += PAGE_SIZE; + vcpu->common->stat.instruction_diagnose_10++; =20 if (start & ~PAGE_MASK || end & ~PAGE_MASK || start >=3D end || start < 2 * PAGE_SIZE) @@ -73,11 +73,11 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcp= u) u16 ry =3D (vcpu->arch.sie_block->ipa & 0x0f); =20 VCPU_EVENT(vcpu, 3, "diag page reference parameter block at 0x%llx", - vcpu->run->s.regs.gprs[rx]); - vcpu->stat.instruction_diagnose_258++; - if (vcpu->run->s.regs.gprs[rx] & 7) + vcpu->common->run->s.regs.gprs[rx]); + vcpu->common->stat.instruction_diagnose_258++; + if (vcpu->common->run->s.regs.gprs[rx] & 7) return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); - rc =3D read_guest(vcpu, vcpu->run->s.regs.gprs[rx], rx, &parm, sizeof(par= m)); + rc =3D read_guest(vcpu, vcpu->common->run->s.regs.gprs[rx], rx, &parm, si= zeof(parm)); if (rc) return kvm_s390_inject_prog_cond(vcpu, rc); if (parm.parm_version !=3D 2 || parm.parm_len < 5 || parm.code !=3D 0x258) @@ -94,7 +94,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu) * the token must not be changed. We have to return * decimal 8 instead, as mandated in SC24-6084. */ - vcpu->run->s.regs.gprs[ry] =3D 8; + vcpu->common->run->s.regs.gprs[ry] =3D 8; return 0; } =20 @@ -108,7 +108,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcp= u) vcpu->arch.pfault_token =3D parm.token_addr; vcpu->arch.pfault_select =3D parm.select_mask; vcpu->arch.pfault_compare =3D parm.compare_mask; - vcpu->run->s.regs.gprs[ry] =3D 0; + vcpu->common->run->s.regs.gprs[ry] =3D 0; rc =3D 0; break; case 1: /* @@ -122,13 +122,13 @@ static int __diag_page_ref_service(struct kvm_vcpu *v= cpu) parm.compare_mask || parm.zarch) return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); =20 - vcpu->run->s.regs.gprs[ry] =3D 0; + vcpu->common->run->s.regs.gprs[ry] =3D 0; /* * If the pfault handling was not established or is already * canceled SC24-6084 requests to return decimal 4. 
*/ if (vcpu->arch.pfault_token =3D=3D KVM_S390_PFAULT_TOKEN_INVALID) - vcpu->run->s.regs.gprs[ry] =3D 4; + vcpu->common->run->s.regs.gprs[ry] =3D 4; else vcpu->arch.pfault_token =3D KVM_S390_PFAULT_TOKEN_INVALID; =20 @@ -145,7 +145,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcp= u) static int __diag_time_slice_end(struct kvm_vcpu *vcpu) { VCPU_EVENT(vcpu, 5, "%s", "diag time slice end"); - vcpu->stat.instruction_diagnose_44++; + vcpu->common->stat.instruction_diagnose_44++; kvm_vcpu_on_spin(vcpu, true); return 0; } @@ -169,8 +169,8 @@ static int __diag_time_slice_end_directed(struct kvm_vc= pu *vcpu) int tcpu_cpu; int tid; =20 - tid =3D vcpu->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf0) >> 4]; - vcpu->stat.instruction_diagnose_9c++; + tid =3D vcpu->common->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf0)= >> 4]; + vcpu->common->stat.instruction_diagnose_9c++; =20 /* yield to self */ if (tid =3D=3D vcpu->vcpu_id) @@ -194,7 +194,7 @@ static int __diag_time_slice_end_directed(struct kvm_vc= pu *vcpu) VCPU_EVENT(vcpu, 5, "diag time slice end directed to %d: yield forwarded", tid); - vcpu->stat.diag_9c_forward++; + vcpu->common->stat.diag_9c_forward++; return 0; } =20 @@ -205,23 +205,23 @@ static int __diag_time_slice_end_directed(struct kvm_= vcpu *vcpu) return 0; no_yield: VCPU_EVENT(vcpu, 5, "diag time slice end directed to %d: ignored", tid); - vcpu->stat.diag_9c_ignored++; + vcpu->common->stat.diag_9c_ignored++; return 0; } =20 static int __diag_ipl_functions(struct kvm_vcpu *vcpu) { unsigned int reg =3D vcpu->arch.sie_block->ipa & 0xf; - unsigned long subcode =3D vcpu->run->s.regs.gprs[reg] & 0xffff; + unsigned long subcode =3D vcpu->common->run->s.regs.gprs[reg] & 0xffff; =20 VCPU_EVENT(vcpu, 3, "diag ipl functions, subcode %lx", subcode); - vcpu->stat.instruction_diagnose_308++; + vcpu->common->stat.instruction_diagnose_308++; switch (subcode) { case 3: - vcpu->run->s390_reset_flags =3D KVM_S390_RESET_CLEAR; + vcpu->common->run->s390_reset_flags =3D KVM_S390_RESET_CLEAR; break; case 4: - vcpu->run->s390_reset_flags =3D 0; + vcpu->common->run->s390_reset_flags =3D 0; break; default: return -EOPNOTSUPP; @@ -233,13 +233,13 @@ static int __diag_ipl_functions(struct kvm_vcpu *vcpu) */ if (!kvm_s390_user_cpu_state_ctrl(vcpu->kvm)) kvm_s390_vcpu_stop(vcpu); - vcpu->run->s390_reset_flags |=3D KVM_S390_RESET_SUBSYSTEM; - vcpu->run->s390_reset_flags |=3D KVM_S390_RESET_IPL; - vcpu->run->s390_reset_flags |=3D KVM_S390_RESET_CPU_INIT; - vcpu->run->exit_reason =3D KVM_EXIT_S390_RESET; + vcpu->common->run->s390_reset_flags |=3D KVM_S390_RESET_SUBSYSTEM; + vcpu->common->run->s390_reset_flags |=3D KVM_S390_RESET_IPL; + vcpu->common->run->s390_reset_flags |=3D KVM_S390_RESET_CPU_INIT; + vcpu->common->run->exit_reason =3D KVM_EXIT_S390_RESET; VCPU_EVENT(vcpu, 3, "requesting userspace resets %llx", - vcpu->run->s390_reset_flags); - trace_kvm_s390_request_resets(vcpu->run->s390_reset_flags); + vcpu->common->run->s390_reset_flags); + trace_kvm_s390_request_resets(vcpu->common->run->s390_reset_flags); return -EREMOTE; } =20 @@ -247,16 +247,16 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *v= cpu) { int ret; =20 - vcpu->stat.instruction_diagnose_500++; + vcpu->common->stat.instruction_diagnose_500++; /* No virtio-ccw notification? Get out quickly. 
*/ if (!vcpu->kvm->arch.css_support || - (vcpu->run->s.regs.gprs[1] !=3D KVM_S390_VIRTIO_CCW_NOTIFY)) + (vcpu->common->run->s.regs.gprs[1] !=3D KVM_S390_VIRTIO_CCW_NOTIFY)) return -EOPNOTSUPP; =20 VCPU_EVENT(vcpu, 4, "diag 0x500 schid 0x%8.8x queue 0x%x cookie 0x%llx", - (u32) vcpu->run->s.regs.gprs[2], - (u32) vcpu->run->s.regs.gprs[3], - vcpu->run->s.regs.gprs[4]); + (u32) vcpu->common->run->s.regs.gprs[2], + (u32) vcpu->common->run->s.regs.gprs[3], + vcpu->common->run->s.regs.gprs[4]); =20 /* * The layout is as follows: @@ -265,16 +265,16 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *v= cpu) * - gpr 4 contains the index on the bus (optionally) */ ret =3D kvm_io_bus_write_cookie(vcpu, KVM_VIRTIO_CCW_NOTIFY_BUS, - vcpu->run->s.regs.gprs[2] & 0xffffffff, - 8, &vcpu->run->s.regs.gprs[3], - vcpu->run->s.regs.gprs[4]); + vcpu->common->run->s.regs.gprs[2] & 0xffffffff, + 8, &vcpu->common->run->s.regs.gprs[3], + vcpu->common->run->s.regs.gprs[4]); =20 /* * Return cookie in gpr 2, but don't overwrite the register if the * diagnose will be handled by userspace. */ if (ret !=3D -EOPNOTSUPP) - vcpu->run->s.regs.gprs[2] =3D ret; + vcpu->common->run->s.regs.gprs[2] =3D ret; /* kvm_io_bus_write_cookie returns -EOPNOTSUPP if it found no match. */ return ret < 0 ? ret : 0; } @@ -301,7 +301,7 @@ int kvm_s390_handle_diag(struct kvm_vcpu *vcpu) case 0x500: return __diag_virtio_hypercall(vcpu); default: - vcpu->stat.instruction_diagnose_other++; + vcpu->common->stat.instruction_diagnose_other++; return -EOPNOTSUPP; } } diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c index e65f597e3044..3431bc47371c 100644 --- a/arch/s390/kvm/gaccess.c +++ b/arch/s390/kvm/gaccess.c @@ -235,8 +235,8 @@ static int ar_translation(struct kvm_vcpu *vcpu, union = asce *asce, u8 ar, return -EINVAL; =20 if (vcpu->arch.acrs_loaded) - save_access_regs(vcpu->run->s.regs.acrs); - alet.val =3D vcpu->run->s.regs.acrs[ar]; + save_access_regs(vcpu->common->run->s.regs.acrs); + alet.val =3D vcpu->common->run->s.regs.acrs[ar]; =20 if (ar =3D=3D 0 || alet.val =3D=3D 0) { asce->val =3D vcpu->arch.sie_block->gcr[1]; diff --git a/arch/s390/kvm/guestdbg.c b/arch/s390/kvm/guestdbg.c index 80879fc73c90..82c4a2dd066c 100644 --- a/arch/s390/kvm/guestdbg.c +++ b/arch/s390/kvm/guestdbg.c @@ -370,8 +370,8 @@ static struct kvm_hw_wp_info_arch *any_wp_changed(struc= t kvm_vcpu *vcpu) =20 void kvm_s390_prepare_debug_exit(struct kvm_vcpu *vcpu) { - vcpu->run->exit_reason =3D KVM_EXIT_DEBUG; - vcpu->guest_debug &=3D ~KVM_GUESTDBG_EXIT_PENDING; + vcpu->common->run->exit_reason =3D KVM_EXIT_DEBUG; + vcpu->common->guest_debug &=3D ~KVM_GUESTDBG_EXIT_PENDING; } =20 #define PER_CODE_MASK (PER_EVENT_MASK >> 24) @@ -388,7 +388,7 @@ void kvm_s390_prepare_debug_exit(struct kvm_vcpu *vcpu) static int debug_exit_required(struct kvm_vcpu *vcpu, u8 perc, unsigned long peraddr) { - struct kvm_debug_exit_arch *debug_exit =3D &vcpu->run->debug.arch; + struct kvm_debug_exit_arch *debug_exit =3D &vcpu->common->run->debug.arch; struct kvm_hw_wp_info_arch *wp_info =3D NULL; struct kvm_hw_bp_info_arch *bp_info =3D NULL; unsigned long addr =3D vcpu->arch.sie_block->gpsw.addr; @@ -482,8 +482,8 @@ static int per_fetched_addr(struct kvm_vcpu *vcpu, unsi= gned long *addr) u32 disp =3D opcode[1] & 0x0fff; u32 index =3D opcode[0] & 0x000f; =20 - *addr =3D base ? vcpu->run->s.regs.gprs[base] : 0; - *addr +=3D index ? vcpu->run->s.regs.gprs[index] : 0; + *addr =3D base ? vcpu->common->run->s.regs.gprs[base] : 0; + *addr +=3D index ? 
vcpu->common->run->s.regs.gprs[index] : 0; *addr +=3D disp; } *addr =3D kvm_s390_logical_to_effective(vcpu, *addr); @@ -516,7 +516,7 @@ int kvm_s390_handle_per_ifetch_icpt(struct kvm_vcpu *vc= pu) return kvm_s390_inject_prog_irq(vcpu, &pgm_info); =20 if (debug_exit_required(vcpu, pgm_info.per_code, pgm_info.per_address)) - vcpu->guest_debug |=3D KVM_GUESTDBG_EXIT_PENDING; + vcpu->common->guest_debug |=3D KVM_GUESTDBG_EXIT_PENDING; =20 if (!guest_per_enabled(vcpu) || !(vcpu->arch.sie_block->gcr[9] & PER_EVENT_IFETCH)) @@ -589,7 +589,7 @@ int kvm_s390_handle_per_event(struct kvm_vcpu *vcpu) =20 if (debug_exit_required(vcpu, vcpu->arch.sie_block->perc, vcpu->arch.sie_block->peraddr)) - vcpu->guest_debug |=3D KVM_GUESTDBG_EXIT_PENDING; + vcpu->common->guest_debug |=3D KVM_GUESTDBG_EXIT_PENDING; =20 rc =3D filter_guest_per_event(vcpu); if (rc) diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c index b16352083ff9..8c8569b506b6 100644 --- a/arch/s390/kvm/intercept.c +++ b/arch/s390/kvm/intercept.c @@ -56,7 +56,7 @@ static int handle_stop(struct kvm_vcpu *vcpu) int rc =3D 0; uint8_t flags, stop_pending; =20 - vcpu->stat.exit_stop_request++; + vcpu->common->stat.exit_stop_request++; =20 /* delay the stop if any non-stop irq is pending */ if (kvm_s390_vcpu_has_irq(vcpu, 1)) @@ -92,7 +92,7 @@ static int handle_validity(struct kvm_vcpu *vcpu) { int viwhy =3D vcpu->arch.sie_block->ipb >> 16; =20 - vcpu->stat.exit_validity++; + vcpu->common->stat.exit_validity++; trace_kvm_s390_intercept_validity(vcpu, viwhy); KVM_EVENT(3, "validity intercept 0x%x for pid %u (kvm 0x%pK)", viwhy, current->pid, vcpu->kvm); @@ -105,7 +105,7 @@ static int handle_validity(struct kvm_vcpu *vcpu) =20 static int handle_instruction(struct kvm_vcpu *vcpu) { - vcpu->stat.exit_instruction++; + vcpu->common->stat.exit_instruction++; trace_kvm_s390_intercept_instruction(vcpu, vcpu->arch.sie_block->ipa, vcpu->arch.sie_block->ipb); @@ -248,7 +248,7 @@ static int handle_prog(struct kvm_vcpu *vcpu) psw_t psw; int rc; =20 - vcpu->stat.exit_program_interruption++; + vcpu->common->stat.exit_program_interruption++; =20 /* * Intercept 8 indicates a loop of specification exceptions @@ -306,7 +306,7 @@ static int handle_external_interrupt(struct kvm_vcpu *v= cpu) psw_t newpsw; int rc; =20 - vcpu->stat.exit_external_interrupt++; + vcpu->common->stat.exit_external_interrupt++; =20 if (kvm_s390_pv_cpu_is_protected(vcpu)) { newpsw =3D vcpu->arch.sie_block->gpsw; @@ -363,7 +363,7 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu) kvm_s390_get_regs_rre(vcpu, ®1, ®2); =20 /* Ensure that the source is paged-in, no actual access -> no key checkin= g */ - rc =3D guest_translate_address_with_key(vcpu, vcpu->run->s.regs.gprs[reg2= ], + rc =3D guest_translate_address_with_key(vcpu, vcpu->common->run->s.regs.g= prs[reg2], reg2, &srcaddr, GACC_FETCH, 0); if (rc) return kvm_s390_inject_prog_cond(vcpu, rc); @@ -372,7 +372,7 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu) return rc; =20 /* Ensure that the source is paged-in, no actual access -> no key checkin= g */ - rc =3D guest_translate_address_with_key(vcpu, vcpu->run->s.regs.gprs[reg1= ], + rc =3D guest_translate_address_with_key(vcpu, vcpu->common->run->s.regs.g= prs[reg1], reg1, &dstaddr, GACC_STORE, 0); if (rc) return kvm_s390_inject_prog_cond(vcpu, rc); @@ -387,7 +387,7 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu) =20 static int handle_partial_execution(struct kvm_vcpu *vcpu) { - vcpu->stat.exit_pei++; + vcpu->common->stat.exit_pei++; =20 if (vcpu->arch.sie_block->ipa 
=3D=3D 0xb254) /* MVPG */ return handle_mvpg_pei(vcpu); @@ -412,10 +412,10 @@ int handle_sthyi(struct kvm_vcpu *vcpu) return kvm_s390_inject_program_int(vcpu, PGM_OPERATION); =20 kvm_s390_get_regs_rre(vcpu, ®1, ®2); - code =3D vcpu->run->s.regs.gprs[reg1]; - addr =3D vcpu->run->s.regs.gprs[reg2]; + code =3D vcpu->common->run->s.regs.gprs[reg1]; + addr =3D vcpu->common->run->s.regs.gprs[reg2]; =20 - vcpu->stat.instruction_sthyi++; + vcpu->common->stat.instruction_sthyi++; VCPU_EVENT(vcpu, 3, "STHYI: fc: %llu addr: 0x%016llx", code, addr); trace_kvm_s390_handle_sthyi(vcpu, code, addr); =20 @@ -454,7 +454,7 @@ int handle_sthyi(struct kvm_vcpu *vcpu) } =20 free_page((unsigned long)sctns); - vcpu->run->s.regs.gprs[reg2 + 1] =3D rc; + vcpu->common->run->s.regs.gprs[reg2 + 1] =3D rc; kvm_s390_set_psw_cc(vcpu, cc); return r; } @@ -464,7 +464,7 @@ static int handle_operexc(struct kvm_vcpu *vcpu) psw_t oldpsw, newpsw; int rc; =20 - vcpu->stat.exit_operation_exception++; + vcpu->common->stat.exit_operation_exception++; trace_kvm_s390_handle_operexc(vcpu, vcpu->arch.sie_block->ipa, vcpu->arch.sie_block->ipb); =20 @@ -608,10 +608,10 @@ int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu) =20 switch (vcpu->arch.sie_block->icptcode) { case ICPT_EXTREQ: - vcpu->stat.exit_external_request++; + vcpu->common->stat.exit_external_request++; return 0; case ICPT_IOREQ: - vcpu->stat.exit_io_request++; + vcpu->common->stat.exit_io_request++; return 0; case ICPT_INST: rc =3D handle_instruction(vcpu); diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index 4f0e7f61edf7..9b545e4307f9 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -488,7 +488,7 @@ static int __must_check __deliver_cpu_timer(struct kvm_= vcpu *vcpu) struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; int rc =3D 0; =20 - vcpu->stat.deliver_cputm++; + vcpu->common->stat.deliver_cputm++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_CPU_TIMER, 0, 0); if (kvm_s390_pv_cpu_is_protected(vcpu)) { @@ -512,7 +512,7 @@ static int __must_check __deliver_ckc(struct kvm_vcpu *= vcpu) struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; int rc =3D 0; =20 - vcpu->stat.deliver_ckc++; + vcpu->common->stat.deliver_ckc++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_CLOCK_COMP, 0, 0); if (kvm_s390_pv_cpu_is_protected(vcpu)) { @@ -584,8 +584,8 @@ static int __write_machine_check(struct kvm_vcpu *vcpu, =20 mci.val =3D mchk->mcic; /* take care of lazy register loading */ - kvm_s390_fpu_store(vcpu->run); - save_access_regs(vcpu->run->s.regs.acrs); + kvm_s390_fpu_store(vcpu->common->run); + save_access_regs(vcpu->common->run->s.regs.acrs); if (MACHINE_HAS_GS && vcpu->arch.gs_enabled) save_gs_cb(current->thread.gs_cb); =20 @@ -615,7 +615,7 @@ static int __write_machine_check(struct kvm_vcpu *vcpu, } =20 if (!rc && mci.vr && ext_sa_addr && test_kvm_facility(vcpu->kvm, 129)) { - if (write_guest_abs(vcpu, ext_sa_addr, vcpu->run->s.regs.vrs, + if (write_guest_abs(vcpu, ext_sa_addr, vcpu->common->run->s.regs.vrs, 512)) mci.vr =3D 0; } else { @@ -624,7 +624,7 @@ static int __write_machine_check(struct kvm_vcpu *vcpu, if (!rc && mci.gs && ext_sa_addr && test_kvm_facility(vcpu->kvm, 133) && (lc =3D=3D 11 || lc =3D=3D 12)) { if (write_guest_abs(vcpu, ext_sa_addr + 1024, - &vcpu->run->s.regs.gscb, 32)) + &vcpu->common->run->s.regs.gscb, 32)) mci.gs =3D 0; } else { mci.gs =3D 0; @@ -640,15 +640,15 @@ static int __write_machine_check(struct kvm_vcpu *vcp= u, =20 /* Register-save areas */ if 
(cpu_has_vx()) { - convert_vx_to_fp(fprs, (__vector128 *) vcpu->run->s.regs.vrs); + convert_vx_to_fp(fprs, (__vector128 *) vcpu->common->run->s.regs.vrs); rc |=3D write_guest_lc(vcpu, __LC_FPREGS_SAVE_AREA, fprs, 128); } else { rc |=3D write_guest_lc(vcpu, __LC_FPREGS_SAVE_AREA, - vcpu->run->s.regs.fprs, 128); + vcpu->common->run->s.regs.fprs, 128); } rc |=3D write_guest_lc(vcpu, __LC_GPREGS_SAVE_AREA, - vcpu->run->s.regs.gprs, 128); - rc |=3D put_guest_lc(vcpu, vcpu->run->s.regs.fpc, + vcpu->common->run->s.regs.gprs, 128); + rc |=3D put_guest_lc(vcpu, vcpu->common->run->s.regs.fpc, (u32 __user *) __LC_FP_CREG_SAVE_AREA); rc |=3D put_guest_lc(vcpu, vcpu->arch.sie_block->todpr, (u32 __user *) __LC_TOD_PROGREG_SAVE_AREA); @@ -657,7 +657,7 @@ static int __write_machine_check(struct kvm_vcpu *vcpu, rc |=3D put_guest_lc(vcpu, vcpu->arch.sie_block->ckc >> 8, (u64 __user *) __LC_CLOCK_COMP_SAVE_AREA); rc |=3D write_guest_lc(vcpu, __LC_AREGS_SAVE_AREA, - &vcpu->run->s.regs.acrs, 64); + &vcpu->common->run->s.regs.acrs, 64); rc |=3D write_guest_lc(vcpu, __LC_CREGS_SAVE_AREA, &vcpu->arch.sie_block->gcr, 128); =20 @@ -716,7 +716,7 @@ static int __must_check __deliver_machine_check(struct = kvm_vcpu *vcpu) trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_MCHK, mchk.cr14, mchk.mcic); - vcpu->stat.deliver_machine_check++; + vcpu->common->stat.deliver_machine_check++; rc =3D __write_machine_check(vcpu, &mchk); } return rc; @@ -728,7 +728,7 @@ static int __must_check __deliver_restart(struct kvm_vc= pu *vcpu) int rc =3D 0; =20 VCPU_EVENT(vcpu, 3, "%s", "deliver: cpu restart"); - vcpu->stat.deliver_restart_signal++; + vcpu->common->stat.deliver_restart_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_RESTART, 0, 0); =20 if (kvm_s390_pv_cpu_is_protected(vcpu)) { @@ -755,7 +755,7 @@ static int __must_check __deliver_set_prefix(struct kvm= _vcpu *vcpu) clear_bit(IRQ_PEND_SET_PREFIX, &li->pending_irqs); spin_unlock(&li->lock); =20 - vcpu->stat.deliver_prefix_signal++; + vcpu->common->stat.deliver_prefix_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_SIGP_SET_PREFIX, prefix.address, 0); @@ -778,7 +778,7 @@ static int __must_check __deliver_emergency_signal(stru= ct kvm_vcpu *vcpu) spin_unlock(&li->lock); =20 VCPU_EVENT(vcpu, 4, "%s", "deliver: sigp emerg"); - vcpu->stat.deliver_emergency_signal++; + vcpu->common->stat.deliver_emergency_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_EMERGENCY, cpu_addr, 0); if (kvm_s390_pv_cpu_is_protected(vcpu)) { @@ -811,7 +811,7 @@ static int __must_check __deliver_external_call(struct = kvm_vcpu *vcpu) spin_unlock(&li->lock); =20 VCPU_EVENT(vcpu, 4, "%s", "deliver: sigp ext call"); - vcpu->stat.deliver_external_call++; + vcpu->common->stat.deliver_external_call++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_EXTERNAL_CALL, extcall.code, 0); @@ -863,7 +863,7 @@ static int __must_check __deliver_prog(struct kvm_vcpu = *vcpu) ilen =3D pgm_info.flags & KVM_S390_PGM_FLAGS_ILC_MASK; VCPU_EVENT(vcpu, 3, "deliver: program irq code 0x%x, ilen:%d", pgm_info.code, ilen); - vcpu->stat.deliver_program++; + vcpu->common->stat.deliver_program++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_PROGRAM_INT, pgm_info.code, 0); =20 @@ -1013,7 +1013,7 @@ static int __must_check __deliver_service(struct kvm_= vcpu *vcpu) =20 VCPU_EVENT(vcpu, 4, "deliver: sclp parameter 0x%x", ext.ext_params); - vcpu->stat.deliver_service_signal++; + vcpu->common->stat.deliver_service_signal++; 
trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_SERVICE, ext.ext_params, 0); =20 @@ -1037,7 +1037,7 @@ static int __must_check __deliver_service_ev(struct k= vm_vcpu *vcpu) spin_unlock(&fi->lock); =20 VCPU_EVENT(vcpu, 4, "%s", "deliver: sclp parameter event"); - vcpu->stat.deliver_service_signal++; + vcpu->common->stat.deliver_service_signal++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_SERVICE, ext.ext_params, 0); =20 @@ -1100,7 +1100,7 @@ static int __must_check __deliver_virtio(struct kvm_v= cpu *vcpu) VCPU_EVENT(vcpu, 4, "deliver: virtio parm: 0x%x,parm64: 0x%llx", inti->ext.ext_params, inti->ext.ext_params2); - vcpu->stat.deliver_virtio++; + vcpu->common->stat.deliver_virtio++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, inti->type, inti->ext.ext_params, @@ -1186,7 +1186,7 @@ static int __must_check __deliver_io(struct kvm_vcpu = *vcpu, inti->io.subchannel_id >> 1 & 0x3, inti->io.subchannel_nr); =20 - vcpu->stat.deliver_io++; + vcpu->common->stat.deliver_io++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, inti->type, ((__u32)inti->io.subchannel_id << 16) | @@ -1214,7 +1214,7 @@ static int __must_check __deliver_io(struct kvm_vcpu = *vcpu, VCPU_EVENT(vcpu, 4, "%s isc %u", "deliver: I/O (AI/gisa)", isc); memset(&io, 0, sizeof(io)); io.io_int_word =3D isc_to_int_word(isc); - vcpu->stat.deliver_io++; + vcpu->common->stat.deliver_io++; trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id, KVM_S390_INT_IO(1, 0, 0, 0), ((__u32)io.subchannel_id << 16) | @@ -1299,7 +1299,7 @@ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu) struct kvm_s390_gisa_interrupt *gi =3D &vcpu->kvm->arch.gisa_int; u64 sltime; =20 - vcpu->stat.exit_wait_state++; + vcpu->common->stat.exit_wait_state++; =20 /* fast path */ if (kvm_arch_vcpu_runnable(vcpu)) @@ -1332,7 +1332,7 @@ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu) no_timer: kvm_vcpu_srcu_read_unlock(vcpu); kvm_vcpu_halt(vcpu); - vcpu->valid_wakeup =3D false; + vcpu->common->valid_wakeup =3D false; __unset_cpu_idle(vcpu); kvm_vcpu_srcu_read_lock(vcpu); =20 @@ -1342,7 +1342,7 @@ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu) =20 void kvm_s390_vcpu_wakeup(struct kvm_vcpu *vcpu) { - vcpu->valid_wakeup =3D true; + vcpu->common->valid_wakeup =3D true; kvm_vcpu_wake_up(vcpu); =20 /* @@ -1469,11 +1469,11 @@ int __must_check kvm_s390_deliver_pending_interrupt= s(struct kvm_vcpu *vcpu) * singlestep event now. 
*/ if (delivered && guestdbg_sstep_enabled(vcpu)) { - struct kvm_debug_exit_arch *debug_exit =3D &vcpu->run->debug.arch; + struct kvm_debug_exit_arch *debug_exit =3D &vcpu->common->run->debug.arc= h; =20 debug_exit->addr =3D vcpu->arch.sie_block->gpsw.addr; debug_exit->type =3D KVM_SINGLESTEP; - vcpu->guest_debug |=3D KVM_GUESTDBG_EXIT_PENDING; + vcpu->common->guest_debug |=3D KVM_GUESTDBG_EXIT_PENDING; } =20 set_intercept_indicators(vcpu); @@ -1485,7 +1485,7 @@ static int __inject_prog(struct kvm_vcpu *vcpu, struc= t kvm_s390_irq *irq) { struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; =20 - vcpu->stat.inject_program++; + vcpu->common->stat.inject_program++; VCPU_EVENT(vcpu, 3, "inject: program irq code 0x%x", irq->u.pgm.code); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_PROGRAM_INT, irq->u.pgm.code, 0); @@ -1527,7 +1527,7 @@ static int __inject_pfault_init(struct kvm_vcpu *vcpu= , struct kvm_s390_irq *irq) { struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; =20 - vcpu->stat.inject_pfault_init++; + vcpu->common->stat.inject_pfault_init++; VCPU_EVENT(vcpu, 4, "inject: pfault init parameter block at 0x%llx", irq->u.ext.ext_params2); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_PFAULT_INIT, @@ -1546,7 +1546,7 @@ static int __inject_extcall(struct kvm_vcpu *vcpu, st= ruct kvm_s390_irq *irq) struct kvm_s390_extcall_info *extcall =3D &li->irq.extcall; uint16_t src_id =3D irq->u.extcall.code; =20 - vcpu->stat.inject_external_call++; + vcpu->common->stat.inject_external_call++; VCPU_EVENT(vcpu, 4, "inject: external call source-cpu:%u", src_id); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_EXTERNAL_CALL, @@ -1571,7 +1571,7 @@ static int __inject_set_prefix(struct kvm_vcpu *vcpu,= struct kvm_s390_irq *irq) struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; struct kvm_s390_prefix_info *prefix =3D &li->irq.prefix; =20 - vcpu->stat.inject_set_prefix++; + vcpu->common->stat.inject_set_prefix++; VCPU_EVENT(vcpu, 3, "inject: set prefix to %x", irq->u.prefix.address); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_SIGP_SET_PREFIX, @@ -1592,7 +1592,7 @@ static int __inject_sigp_stop(struct kvm_vcpu *vcpu, = struct kvm_s390_irq *irq) struct kvm_s390_stop_info *stop =3D &li->irq.stop; int rc =3D 0; =20 - vcpu->stat.inject_stop_signal++; + vcpu->common->stat.inject_stop_signal++; trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_SIGP_STOP, 0, 0); =20 if (irq->u.stop.flags & ~KVM_S390_STOP_SUPP_FLAGS) @@ -1616,7 +1616,7 @@ static int __inject_sigp_restart(struct kvm_vcpu *vcp= u) { struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; =20 - vcpu->stat.inject_restart++; + vcpu->common->stat.inject_restart++; VCPU_EVENT(vcpu, 3, "%s", "inject: restart int"); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_RESTART, 0, 0); =20 @@ -1629,7 +1629,7 @@ static int __inject_sigp_emergency(struct kvm_vcpu *v= cpu, { struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; =20 - vcpu->stat.inject_emergency_signal++; + vcpu->common->stat.inject_emergency_signal++; VCPU_EVENT(vcpu, 4, "inject: emergency from cpu %u", irq->u.emerg.code); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_EMERGENCY, @@ -1650,7 +1650,7 @@ static int __inject_mchk(struct kvm_vcpu *vcpu, struc= t kvm_s390_irq *irq) struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; struct kvm_s390_mchk_info *mchk =3D &li->irq.mchk; =20 - vcpu->stat.inject_mchk++; + vcpu->common->stat.inject_mchk++; VCPU_EVENT(vcpu, 3, "inject: machine check mcic 0x%llx", 
irq->u.mchk.mcic); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_MCHK, 0, @@ -1681,7 +1681,7 @@ static int __inject_ckc(struct kvm_vcpu *vcpu) { struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; =20 - vcpu->stat.inject_ckc++; + vcpu->common->stat.inject_ckc++; VCPU_EVENT(vcpu, 3, "%s", "inject: clock comparator external"); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_CLOCK_COMP, 0, 0); @@ -1695,7 +1695,7 @@ static int __inject_cpu_timer(struct kvm_vcpu *vcpu) { struct kvm_s390_local_interrupt *li =3D &vcpu->arch.local_int; =20 - vcpu->stat.inject_cputm++; + vcpu->common->stat.inject_cputm++; VCPU_EVENT(vcpu, 3, "%s", "inject: cpu timer external"); trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_CPU_TIMER, 0, 0); @@ -3195,12 +3195,12 @@ void kvm_s390_gisa_enable(struct kvm *kvm) if (!gisa_desc) return; kvm_for_each_vcpu(i, vcpu, kvm) { - mutex_lock(&vcpu->mutex); + mutex_lock(&vcpu->common->mutex); vcpu->arch.sie_block->gd =3D gisa_desc; vcpu->arch.sie_block->eca |=3D ECA_AIV; VCPU_EVENT(vcpu, 3, "AIV gisa format-%u enabled for cpu %03u", vcpu->arch.sie_block->gd & 0x3, vcpu->vcpu_id); - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); } } =20 @@ -3231,10 +3231,10 @@ void kvm_s390_gisa_disable(struct kvm *kvm) if (!gi->origin) return; kvm_for_each_vcpu(i, vcpu, kvm) { - mutex_lock(&vcpu->mutex); + mutex_lock(&vcpu->common->mutex); vcpu->arch.sie_block->eca &=3D ~ECA_AIV; vcpu->arch.sie_block->gd =3D 0U; - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); VCPU_EVENT(vcpu, 3, "AIV disabled for cpu %03u", vcpu->vcpu_id); } kvm_s390_gisa_destroy(kvm); diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 0fd96860fc45..0b8c368af939 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -2450,13 +2450,13 @@ int kvm_s390_cpus_from_pv(struct kvm *kvm, u16 *rc,= u16 *rrc) * We want to return the first failure rc and rrc, though. */ kvm_for_each_vcpu(i, vcpu, kvm) { - mutex_lock(&vcpu->mutex); + mutex_lock(&vcpu->common->mutex); if (kvm_s390_pv_destroy_cpu(vcpu, &_rc, &_rrc) && !ret) { *rc =3D _rc; *rrc =3D _rrc; ret =3D -EIO; } - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); } /* Ensure that we re-enable gisa if the non-PV guest used it but the PV g= uest did not. 
*/ if (use_gisa) @@ -2488,9 +2488,9 @@ static int kvm_s390_cpus_to_pv(struct kvm *kvm, u16 *= rc, u16 *rrc) kvm_s390_gisa_disable(kvm); =20 kvm_for_each_vcpu(i, vcpu, kvm) { - mutex_lock(&vcpu->mutex); + mutex_lock(&vcpu->common->mutex); r =3D kvm_s390_pv_create_cpu(vcpu, rc, rrc); - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); if (r) break; } @@ -3874,8 +3874,8 @@ static int kvm_s390_vcpu_setup(struct kvm_vcpu *vcpu) VCPU_EVENT(vcpu, 3, "AIV gisa format-%u enabled for cpu %03u", vcpu->arch.sie_block->gd & 0x3, vcpu->vcpu_id); } - vcpu->arch.sie_block->sdnxo =3D virt_to_phys(&vcpu->run->s.regs.sdnx) | S= DNXC; - vcpu->arch.sie_block->riccbd =3D virt_to_phys(&vcpu->run->s.regs.riccb); + vcpu->arch.sie_block->sdnxo =3D virt_to_phys(&vcpu->common->run->s.regs.s= dnx) | SDNXC; + vcpu->arch.sie_block->riccbd =3D virt_to_phys(&vcpu->common->run->s.regs.= riccb); =20 if (sclp.has_kss) kvm_s390_set_cpuflags(vcpu, CPUSTAT_KSS); @@ -3938,7 +3938,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) =20 vcpu->arch.pfault_token =3D KVM_S390_PFAULT_TOKEN_INVALID; kvm_clear_async_pf_completion_queue(vcpu); - vcpu->run->kvm_valid_regs =3D KVM_SYNC_PREFIX | + vcpu->common->run->kvm_valid_regs =3D KVM_SYNC_PREFIX | KVM_SYNC_GPRS | KVM_SYNC_ACRS | KVM_SYNC_CRS | @@ -3948,20 +3948,20 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.acrs_loaded =3D false; kvm_s390_set_prefix(vcpu, 0); if (test_kvm_facility(vcpu->kvm, 64)) - vcpu->run->kvm_valid_regs |=3D KVM_SYNC_RICCB; + vcpu->common->run->kvm_valid_regs |=3D KVM_SYNC_RICCB; if (test_kvm_facility(vcpu->kvm, 82)) - vcpu->run->kvm_valid_regs |=3D KVM_SYNC_BPBC; + vcpu->common->run->kvm_valid_regs |=3D KVM_SYNC_BPBC; if (test_kvm_facility(vcpu->kvm, 133)) - vcpu->run->kvm_valid_regs |=3D KVM_SYNC_GSCB; + vcpu->common->run->kvm_valid_regs |=3D KVM_SYNC_GSCB; if (test_kvm_facility(vcpu->kvm, 156)) - vcpu->run->kvm_valid_regs |=3D KVM_SYNC_ETOKEN; + vcpu->common->run->kvm_valid_regs |=3D KVM_SYNC_ETOKEN; /* fprs can be synchronized via vrs, even if the guest has no vx. With * cpu_has_vx(), (load|store)_fpu_regs() will work with vrs format. */ if (cpu_has_vx()) - vcpu->run->kvm_valid_regs |=3D KVM_SYNC_VRS; + vcpu->common->run->kvm_valid_regs |=3D KVM_SYNC_VRS; else - vcpu->run->kvm_valid_regs |=3D KVM_SYNC_FPRS; + vcpu->common->run->kvm_valid_regs |=3D KVM_SYNC_FPRS; =20 if (kvm_is_ucontrol(vcpu->kvm)) { rc =3D __kvm_ucontrol_vcpu_init(vcpu); @@ -4077,7 +4077,7 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu) /* do not poll with more than halt_poll_max_steal percent of steal time */ if (get_lowcore()->avg_steal_timer * 100 / (TICK_USEC << 12) >=3D READ_ONCE(halt_poll_max_steal)) { - vcpu->stat.halt_no_poll_steal++; + vcpu->common->stat.halt_no_poll_steal++; return true; } return false; @@ -4196,7 +4196,7 @@ static void kvm_arch_vcpu_ioctl_normal_reset(struct k= vm_vcpu *vcpu) { vcpu->arch.sie_block->gpsw.mask &=3D ~PSW_MASK_RI; vcpu->arch.pfault_token =3D KVM_S390_PFAULT_TOKEN_INVALID; - memset(vcpu->run->s.regs.riccb, 0, sizeof(vcpu->run->s.regs.riccb)); + memset(vcpu->common->run->s.regs.riccb, 0, sizeof(vcpu->common->run->s.re= gs.riccb)); =20 kvm_clear_async_pf_completion_queue(vcpu); if (!kvm_s390_user_cpu_state_ctrl(vcpu->kvm)) @@ -4223,18 +4223,18 @@ static void kvm_arch_vcpu_ioctl_initial_reset(struc= t kvm_vcpu *vcpu) vcpu->arch.sie_block->gcr[14] =3D CR14_INITIAL_MASK; =20 /* ... 
the data in sync regs */ - memset(vcpu->run->s.regs.crs, 0, sizeof(vcpu->run->s.regs.crs)); - vcpu->run->s.regs.ckc =3D 0; - vcpu->run->s.regs.crs[0] =3D CR0_INITIAL_MASK; - vcpu->run->s.regs.crs[14] =3D CR14_INITIAL_MASK; - vcpu->run->psw_addr =3D 0; - vcpu->run->psw_mask =3D 0; - vcpu->run->s.regs.todpr =3D 0; - vcpu->run->s.regs.cputm =3D 0; - vcpu->run->s.regs.ckc =3D 0; - vcpu->run->s.regs.pp =3D 0; - vcpu->run->s.regs.gbea =3D 1; - vcpu->run->s.regs.fpc =3D 0; + memset(vcpu->common->run->s.regs.crs, 0, sizeof(vcpu->common->run->s.regs= .crs)); + vcpu->common->run->s.regs.ckc =3D 0; + vcpu->common->run->s.regs.crs[0] =3D CR0_INITIAL_MASK; + vcpu->common->run->s.regs.crs[14] =3D CR14_INITIAL_MASK; + vcpu->common->run->psw_addr =3D 0; + vcpu->common->run->psw_mask =3D 0; + vcpu->common->run->s.regs.todpr =3D 0; + vcpu->common->run->s.regs.cputm =3D 0; + vcpu->common->run->s.regs.ckc =3D 0; + vcpu->common->run->s.regs.pp =3D 0; + vcpu->common->run->s.regs.gbea =3D 1; + vcpu->common->run->s.regs.fpc =3D 0; /* * Do not reset these registers in the protected case, as some of * them are overlaid and they are not accessible in this case @@ -4250,7 +4250,7 @@ static void kvm_arch_vcpu_ioctl_initial_reset(struct = kvm_vcpu *vcpu) =20 static void kvm_arch_vcpu_ioctl_clear_reset(struct kvm_vcpu *vcpu) { - struct kvm_sync_regs *regs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *regs =3D &vcpu->common->run->s.regs; =20 /* Clear reset is a superset of the initial reset */ kvm_arch_vcpu_ioctl_initial_reset(vcpu); @@ -4267,7 +4267,7 @@ static void kvm_arch_vcpu_ioctl_clear_reset(struct kv= m_vcpu *vcpu) int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *r= egs) { vcpu_load(vcpu); - memcpy(&vcpu->run->s.regs.gprs, ®s->gprs, sizeof(regs->gprs)); + memcpy(&vcpu->common->run->s.regs.gprs, ®s->gprs, sizeof(regs->gprs)); vcpu_put(vcpu); return 0; } @@ -4275,7 +4275,7 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcp= u, struct kvm_regs *regs) int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *r= egs) { vcpu_load(vcpu); - memcpy(®s->gprs, &vcpu->run->s.regs.gprs, sizeof(regs->gprs)); + memcpy(®s->gprs, &vcpu->common->run->s.regs.gprs, sizeof(regs->gprs)); vcpu_put(vcpu); return 0; } @@ -4285,7 +4285,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vc= pu, { vcpu_load(vcpu); =20 - memcpy(&vcpu->run->s.regs.acrs, &sregs->acrs, sizeof(sregs->acrs)); + memcpy(&vcpu->common->run->s.regs.acrs, &sregs->acrs, sizeof(sregs->acrs)= ); memcpy(&vcpu->arch.sie_block->gcr, &sregs->crs, sizeof(sregs->crs)); =20 vcpu_put(vcpu); @@ -4297,7 +4297,7 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vc= pu, { vcpu_load(vcpu); =20 - memcpy(&sregs->acrs, &vcpu->run->s.regs.acrs, sizeof(sregs->acrs)); + memcpy(&sregs->acrs, &vcpu->common->run->s.regs.acrs, sizeof(sregs->acrs)= ); memcpy(&sregs->crs, &vcpu->arch.sie_block->gcr, sizeof(sregs->crs)); =20 vcpu_put(vcpu); @@ -4310,12 +4310,12 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vc= pu, struct kvm_fpu *fpu) =20 vcpu_load(vcpu); =20 - vcpu->run->s.regs.fpc =3D fpu->fpc; + vcpu->common->run->s.regs.fpc =3D fpu->fpc; if (cpu_has_vx()) - convert_fp_to_vx((__vector128 *) vcpu->run->s.regs.vrs, + convert_fp_to_vx((__vector128 *) vcpu->common->run->s.regs.vrs, (freg_t *) fpu->fprs); else - memcpy(vcpu->run->s.regs.fprs, &fpu->fprs, sizeof(fpu->fprs)); + memcpy(vcpu->common->run->s.regs.fprs, &fpu->fprs, sizeof(fpu->fprs)); =20 vcpu_put(vcpu); return ret; @@ -4327,10 +4327,10 @@ int 
kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vc= pu, struct kvm_fpu *fpu) =20 if (cpu_has_vx()) convert_vx_to_fp((freg_t *) fpu->fprs, - (__vector128 *) vcpu->run->s.regs.vrs); + (__vector128 *) vcpu->common->run->s.regs.vrs); else - memcpy(fpu->fprs, vcpu->run->s.regs.fprs, sizeof(fpu->fprs)); - fpu->fpc =3D vcpu->run->s.regs.fpc; + memcpy(fpu->fprs, vcpu->common->run->s.regs.fprs, sizeof(fpu->fprs)); + fpu->fpc =3D vcpu->common->run->s.regs.fpc; =20 vcpu_put(vcpu); return 0; @@ -4343,8 +4343,8 @@ static int kvm_arch_vcpu_ioctl_set_initial_psw(struct= kvm_vcpu *vcpu, psw_t psw) if (!is_vcpu_stopped(vcpu)) rc =3D -EBUSY; else { - vcpu->run->psw_mask =3D psw.mask; - vcpu->run->psw_addr =3D psw.addr; + vcpu->common->run->psw_mask =3D psw.mask; + vcpu->common->run->psw_addr =3D psw.addr; } return rc; } @@ -4366,7 +4366,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vc= pu *vcpu, =20 vcpu_load(vcpu); =20 - vcpu->guest_debug =3D 0; + vcpu->common->guest_debug =3D 0; kvm_s390_clear_bp_data(vcpu); =20 if (dbg->control & ~VALID_GUESTDBG_FLAGS) { @@ -4379,7 +4379,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vc= pu *vcpu, } =20 if (dbg->control & KVM_GUESTDBG_ENABLE) { - vcpu->guest_debug =3D dbg->control; + vcpu->common->guest_debug =3D dbg->control; /* enforce guest PER */ kvm_s390_set_cpuflags(vcpu, CPUSTAT_P); =20 @@ -4391,7 +4391,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vc= pu *vcpu, } =20 if (rc) { - vcpu->guest_debug =3D 0; + vcpu->common->guest_debug =3D 0; kvm_s390_clear_bp_data(vcpu); kvm_s390_clear_cpuflags(vcpu, CPUSTAT_P); } @@ -4672,8 +4672,8 @@ static int vcpu_pre_run(struct kvm_vcpu *vcpu) */ kvm_check_async_pf_completion(vcpu); =20 - vcpu->arch.sie_block->gg14 =3D vcpu->run->s.regs.gprs[14]; - vcpu->arch.sie_block->gg15 =3D vcpu->run->s.regs.gprs[15]; + vcpu->arch.sie_block->gg14 =3D vcpu->common->run->s.regs.gprs[14]; + vcpu->arch.sie_block->gg15 =3D vcpu->common->run->s.regs.gprs[15]; =20 if (need_resched()) schedule(); @@ -4751,8 +4751,8 @@ static int vcpu_post_run(struct kvm_vcpu *vcpu, int e= xit_reason) if (guestdbg_enabled(vcpu)) kvm_s390_restore_guest_per_regs(vcpu); =20 - vcpu->run->s.regs.gprs[14] =3D vcpu->arch.sie_block->gg14; - vcpu->run->s.regs.gprs[15] =3D vcpu->arch.sie_block->gg15; + vcpu->common->run->s.regs.gprs[14] =3D vcpu->arch.sie_block->gg14; + vcpu->common->run->s.regs.gprs[15] =3D vcpu->arch.sie_block->gg15; =20 if (exit_reason =3D=3D -EINTR) { VCPU_EVENT(vcpu, 3, "%s", "machine check"); @@ -4768,26 +4768,26 @@ static int vcpu_post_run(struct kvm_vcpu *vcpu, int= exit_reason) =20 if (rc !=3D -EOPNOTSUPP) return rc; - vcpu->run->exit_reason =3D KVM_EXIT_S390_SIEIC; - vcpu->run->s390_sieic.icptcode =3D vcpu->arch.sie_block->icptcode; - vcpu->run->s390_sieic.ipa =3D vcpu->arch.sie_block->ipa; - vcpu->run->s390_sieic.ipb =3D vcpu->arch.sie_block->ipb; + vcpu->common->run->exit_reason =3D KVM_EXIT_S390_SIEIC; + vcpu->common->run->s390_sieic.icptcode =3D vcpu->arch.sie_block->icptcod= e; + vcpu->common->run->s390_sieic.ipa =3D vcpu->arch.sie_block->ipa; + vcpu->common->run->s390_sieic.ipb =3D vcpu->arch.sie_block->ipb; return -EREMOTE; } else if (exit_reason !=3D -EFAULT) { - vcpu->stat.exit_null++; + vcpu->common->stat.exit_null++; return 0; } else if (kvm_is_ucontrol(vcpu->kvm)) { - vcpu->run->exit_reason =3D KVM_EXIT_S390_UCONTROL; - vcpu->run->s390_ucontrol.trans_exc_code =3D + vcpu->common->run->exit_reason =3D KVM_EXIT_S390_UCONTROL; + vcpu->common->run->s390_ucontrol.trans_exc_code =3D current->thread.gmap_addr; - 
vcpu->run->s390_ucontrol.pgm_code =3D 0x10; + vcpu->common->run->s390_ucontrol.pgm_code =3D 0x10; return -EREMOTE; } else if (current->thread.gmap_pfault) { trace_kvm_s390_major_guest_pfault(vcpu); current->thread.gmap_pfault =3D 0; if (kvm_arch_setup_async_pf(vcpu)) return 0; - vcpu->stat.pfault_sync++; + vcpu->common->stat.pfault_sync++; return kvm_arch_fault_in_page(vcpu, current->thread.gmap_addr, 1); } return vcpu_post_run_fault_in_sie(vcpu); @@ -4821,14 +4821,14 @@ static int __vcpu_run(struct kvm_vcpu *vcpu) local_irq_enable(); if (kvm_s390_pv_cpu_is_protected(vcpu)) { memcpy(sie_page->pv_grregs, - vcpu->run->s.regs.gprs, + vcpu->common->run->s.regs.gprs, sizeof(sie_page->pv_grregs)); } exit_reason =3D sie64a(vcpu->arch.sie_block, - vcpu->run->s.regs.gprs, + vcpu->common->run->s.regs.gprs, gmap_get_enabled()->asce); if (kvm_s390_pv_cpu_is_protected(vcpu)) { - memcpy(vcpu->run->s.regs.gprs, + memcpy(vcpu->common->run->s.regs.gprs, sie_page->pv_grregs, sizeof(sie_page->pv_grregs)); /* @@ -4857,7 +4857,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu) =20 static void sync_regs_fmt2(struct kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; struct runtime_instr_cb *riccb; struct gs_cb *gscb; =20 @@ -4920,7 +4920,7 @@ static void sync_regs_fmt2(struct kvm_vcpu *vcpu) } if (vcpu->arch.gs_enabled) { current->thread.gs_cb =3D (struct gs_cb *) - &vcpu->run->s.regs.gscb; + &vcpu->common->run->s.regs.gscb; restore_gs_cb(current->thread.gs_cb); } preempt_enable(); @@ -4930,7 +4930,7 @@ static void sync_regs_fmt2(struct kvm_vcpu *vcpu) =20 static void sync_regs(struct kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; =20 if (kvm_run->kvm_dirty_regs & KVM_SYNC_PREFIX) kvm_s390_set_prefix(vcpu, kvm_run->s.regs.prefix); @@ -4944,9 +4944,9 @@ static void sync_regs(struct kvm_vcpu *vcpu) vcpu->arch.sie_block->ckc =3D kvm_run->s.regs.ckc; } save_access_regs(vcpu->arch.host_acrs); - restore_access_regs(vcpu->run->s.regs.acrs); + restore_access_regs(vcpu->common->run->s.regs.acrs); vcpu->arch.acrs_loaded =3D true; - kvm_s390_fpu_load(vcpu->run); + kvm_s390_fpu_load(vcpu->common->run); /* Sync fmt2 only data */ if (likely(!kvm_s390_pv_cpu_is_protected(vcpu))) { sync_regs_fmt2(vcpu); @@ -4970,7 +4970,7 @@ static void sync_regs(struct kvm_vcpu *vcpu) =20 static void store_regs_fmt2(struct kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; =20 kvm_run->s.regs.todpr =3D vcpu->arch.sie_block->todpr; kvm_run->s.regs.pp =3D vcpu->arch.sie_block->pp; @@ -4994,7 +4994,7 @@ static void store_regs_fmt2(struct kvm_vcpu *vcpu) =20 static void store_regs(struct kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; =20 kvm_run->psw_mask =3D vcpu->arch.sie_block->gpsw.mask; kvm_run->psw_addr =3D vcpu->arch.sie_block->gpsw.addr; @@ -5005,17 +5005,17 @@ static void store_regs(struct kvm_vcpu *vcpu) kvm_run->s.regs.pft =3D vcpu->arch.pfault_token; kvm_run->s.regs.pfs =3D vcpu->arch.pfault_select; kvm_run->s.regs.pfc =3D vcpu->arch.pfault_compare; - save_access_regs(vcpu->run->s.regs.acrs); + save_access_regs(vcpu->common->run->s.regs.acrs); restore_access_regs(vcpu->arch.host_acrs); vcpu->arch.acrs_loaded =3D false; - kvm_s390_fpu_store(vcpu->run); + kvm_s390_fpu_store(vcpu->common->run); if (likely(!kvm_s390_pv_cpu_is_protected(vcpu))) store_regs_fmt2(vcpu); } =20 int kvm_arch_vcpu_ioctl_run(struct 
kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; DECLARE_KERNEL_FPU_ONSTACK32(fpu); int rc; =20 @@ -5028,7 +5028,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) if (vcpu->kvm->arch.pv.dumping) return -EINVAL; =20 - if (!vcpu->wants_to_run) + if (!vcpu->common->wants_to_run) return -EINTR; =20 if (kvm_run->kvm_valid_regs & ~KVM_SYNC_S390_VALID_FIELDS || @@ -5086,7 +5086,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) =20 kvm_sigset_deactivate(vcpu); =20 - vcpu->stat.exit_userspace++; + vcpu->common->stat.exit_userspace++; out: vcpu_put(vcpu); return rc; @@ -5120,21 +5120,21 @@ int kvm_s390_store_status_unloaded(struct kvm_vcpu = *vcpu, unsigned long gpa) =20 /* manually convert vector registers if necessary */ if (cpu_has_vx()) { - convert_vx_to_fp(fprs, (__vector128 *) vcpu->run->s.regs.vrs); + convert_vx_to_fp(fprs, (__vector128 *) vcpu->common->run->s.regs.vrs); rc =3D write_guest_abs(vcpu, gpa + __LC_FPREGS_SAVE_AREA, fprs, 128); } else { rc =3D write_guest_abs(vcpu, gpa + __LC_FPREGS_SAVE_AREA, - vcpu->run->s.regs.fprs, 128); + vcpu->common->run->s.regs.fprs, 128); } rc |=3D write_guest_abs(vcpu, gpa + __LC_GPREGS_SAVE_AREA, - vcpu->run->s.regs.gprs, 128); + vcpu->common->run->s.regs.gprs, 128); rc |=3D write_guest_abs(vcpu, gpa + __LC_PSW_SAVE_AREA, &vcpu->arch.sie_block->gpsw, 16); rc |=3D write_guest_abs(vcpu, gpa + __LC_PREFIX_SAVE_AREA, &px, 4); rc |=3D write_guest_abs(vcpu, gpa + __LC_FP_CREG_SAVE_AREA, - &vcpu->run->s.regs.fpc, 4); + &vcpu->common->run->s.regs.fpc, 4); rc |=3D write_guest_abs(vcpu, gpa + __LC_TOD_PROGREG_SAVE_AREA, &vcpu->arch.sie_block->todpr, 4); cputm =3D kvm_s390_get_cpu_timer(vcpu); @@ -5144,7 +5144,7 @@ int kvm_s390_store_status_unloaded(struct kvm_vcpu *v= cpu, unsigned long gpa) rc |=3D write_guest_abs(vcpu, gpa + __LC_CLOCK_COMP_SAVE_AREA, &clkcomp, 8); rc |=3D write_guest_abs(vcpu, gpa + __LC_AREGS_SAVE_AREA, - &vcpu->run->s.regs.acrs, 64); + &vcpu->common->run->s.regs.acrs, 64); rc |=3D write_guest_abs(vcpu, gpa + __LC_CREGS_SAVE_AREA, &vcpu->arch.sie_block->gcr, 128); return rc ? -EFAULT : 0; @@ -5157,8 +5157,8 @@ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu,= unsigned long addr) * switch in the run ioctl. Let's update our copies before we save * it into the save area */ - kvm_s390_fpu_store(vcpu->run); - save_access_regs(vcpu->run->s.regs.acrs); + kvm_s390_fpu_store(vcpu->common->run); + save_access_regs(vcpu->common->run->s.regs.acrs); =20 return kvm_s390_store_status_unloaded(vcpu, addr); } @@ -5422,7 +5422,7 @@ static long kvm_s390_vcpu_memsida_op(struct kvm_vcpu = *vcpu, break; case KVM_S390_MEMOP_SIDA_READ: case KVM_S390_MEMOP_SIDA_WRITE: - /* we are locked against sida going away by the vcpu->mutex */ + /* we are locked against sida going away by the vcpu->common->mutex */ r =3D kvm_s390_vcpu_sida_op(vcpu, mop); break; default: @@ -5473,7 +5473,7 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp, * after (and not before) the interrupt delivery. */ if (!rc) - vcpu->guest_debug &=3D ~KVM_GUESTDBG_EXIT_PENDING; + vcpu->common->guest_debug &=3D ~KVM_GUESTDBG_EXIT_PENDING; =20 return rc; } diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h index bf8534218af3..992d78d7b408 100644 --- a/arch/s390/kvm/kvm-s390.h +++ b/arch/s390/kvm/kvm-s390.h @@ -135,7 +135,7 @@ static inline u64 kvm_s390_get_base_disp_s(struct kvm_v= cpu *vcpu, u8 *ar) if (ar) *ar =3D base2; =20 - return (base2 ? vcpu->run->s.regs.gprs[base2] : 0) + disp2; + return (base2 ? 
vcpu->common->run->s.regs.gprs[base2] : 0) + disp2; } =20 static inline u64 kvm_s390_get_base_disp_siy(struct kvm_vcpu *vcpu, u8 *ar) @@ -150,7 +150,7 @@ static inline u64 kvm_s390_get_base_disp_siy(struct kvm= _vcpu *vcpu, u8 *ar) if (ar) *ar =3D base1; =20 - return (base1 ? vcpu->run->s.regs.gprs[base1] : 0) + disp1; + return (base1 ? vcpu->common->run->s.regs.gprs[base1] : 0) + disp1; } =20 static inline void kvm_s390_get_base_disp_sse(struct kvm_vcpu *vcpu, @@ -162,8 +162,8 @@ static inline void kvm_s390_get_base_disp_sse(struct kv= m_vcpu *vcpu, u32 base2 =3D (vcpu->arch.sie_block->ipb & 0xf000) >> 12; u32 disp2 =3D vcpu->arch.sie_block->ipb & 0x0fff; =20 - *address1 =3D (base1 ? vcpu->run->s.regs.gprs[base1] : 0) + disp1; - *address2 =3D (base2 ? vcpu->run->s.regs.gprs[base2] : 0) + disp2; + *address1 =3D (base1 ? vcpu->common->run->s.regs.gprs[base1] : 0) + disp1; + *address2 =3D (base2 ? vcpu->common->run->s.regs.gprs[base2] : 0) + disp2; =20 if (ar_b1) *ar_b1 =3D base1; @@ -191,7 +191,7 @@ static inline u64 kvm_s390_get_base_disp_rsy(struct kvm= _vcpu *vcpu, u8 *ar) if (ar) *ar =3D base2; =20 - return (base2 ? vcpu->run->s.regs.gprs[base2] : 0) + (long)(int)disp2; + return (base2 ? vcpu->common->run->s.regs.gprs[base2] : 0) + (long)(int)d= isp2; } =20 static inline u64 kvm_s390_get_base_disp_rs(struct kvm_vcpu *vcpu, u8 *ar) @@ -202,7 +202,7 @@ static inline u64 kvm_s390_get_base_disp_rs(struct kvm_= vcpu *vcpu, u8 *ar) if (ar) *ar =3D base2; =20 - return (base2 ? vcpu->run->s.regs.gprs[base2] : 0) + disp2; + return (base2 ? vcpu->common->run->s.regs.gprs[base2] : 0) + disp2; } =20 /* Set the condition code in the guest program status word */ diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c index 1a49b89706f8..414c534b66aa 100644 --- a/arch/s390/kvm/priv.c +++ b/arch/s390/kvm/priv.c @@ -31,7 +31,7 @@ =20 static int handle_ri(struct kvm_vcpu *vcpu) { - vcpu->stat.instruction_ri++; + vcpu->common->stat.instruction_ri++; =20 if (test_kvm_facility(vcpu->kvm, 64)) { VCPU_EVENT(vcpu, 3, "%s", "ENABLE: RI (lazy)"); @@ -52,13 +52,13 @@ int kvm_s390_handle_aa(struct kvm_vcpu *vcpu) =20 static int handle_gs(struct kvm_vcpu *vcpu) { - vcpu->stat.instruction_gs++; + vcpu->common->stat.instruction_gs++; =20 if (test_kvm_facility(vcpu->kvm, 133)) { VCPU_EVENT(vcpu, 3, "%s", "ENABLE: GS (lazy)"); preempt_disable(); local_ctl_set_bit(2, CR2_GUARDED_STORAGE_BIT); - current->thread.gs_cb =3D (struct gs_cb *)&vcpu->run->s.regs.gscb; + current->thread.gs_cb =3D (struct gs_cb *)&vcpu->common->run->s.regs.gsc= b; restore_gs_cb(current->thread.gs_cb); preempt_enable(); vcpu->arch.sie_block->ecb |=3D ECB_GS; @@ -87,7 +87,7 @@ static int handle_set_clock(struct kvm_vcpu *vcpu) u8 ar; u64 op2; =20 - vcpu->stat.instruction_sck++; + vcpu->common->stat.instruction_sck++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -126,7 +126,7 @@ static int handle_set_prefix(struct kvm_vcpu *vcpu) int rc; u8 ar; =20 - vcpu->stat.instruction_spx++; + vcpu->common->stat.instruction_spx++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -164,7 +164,7 @@ static int handle_store_prefix(struct kvm_vcpu *vcpu) int rc; u8 ar; =20 - vcpu->stat.instruction_stpx++; + vcpu->common->stat.instruction_stpx++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -194,7 +194,7 @@ static int 
handle_store_cpu_address(struct kvm_vcpu *vc= pu) int rc; u8 ar; =20 - vcpu->stat.instruction_stap++; + vcpu->common->stat.instruction_stap++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -261,7 +261,7 @@ static int handle_iske(struct kvm_vcpu *vcpu) bool unlocked; int rc; =20 - vcpu->stat.instruction_iske++; + vcpu->common->stat.instruction_iske++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -272,7 +272,7 @@ static int handle_iske(struct kvm_vcpu *vcpu) =20 kvm_s390_get_regs_rre(vcpu, ®1, ®2); =20 - gaddr =3D vcpu->run->s.regs.gprs[reg2] & PAGE_MASK; + gaddr =3D vcpu->common->run->s.regs.gprs[reg2] & PAGE_MASK; gaddr =3D kvm_s390_logical_to_effective(vcpu, gaddr); gaddr =3D kvm_s390_real_to_abs(vcpu, gaddr); vmaddr =3D gfn_to_hva(vcpu->kvm, gpa_to_gfn(gaddr)); @@ -296,8 +296,8 @@ static int handle_iske(struct kvm_vcpu *vcpu) return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING); if (rc < 0) return rc; - vcpu->run->s.regs.gprs[reg1] &=3D ~0xff; - vcpu->run->s.regs.gprs[reg1] |=3D key; + vcpu->common->run->s.regs.gprs[reg1] &=3D ~0xff; + vcpu->common->run->s.regs.gprs[reg1] |=3D key; return 0; } =20 @@ -308,7 +308,7 @@ static int handle_rrbe(struct kvm_vcpu *vcpu) bool unlocked; int rc; =20 - vcpu->stat.instruction_rrbe++; + vcpu->common->stat.instruction_rrbe++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -319,7 +319,7 @@ static int handle_rrbe(struct kvm_vcpu *vcpu) =20 kvm_s390_get_regs_rre(vcpu, ®1, ®2); =20 - gaddr =3D vcpu->run->s.regs.gprs[reg2] & PAGE_MASK; + gaddr =3D vcpu->common->run->s.regs.gprs[reg2] & PAGE_MASK; gaddr =3D kvm_s390_logical_to_effective(vcpu, gaddr); gaddr =3D kvm_s390_real_to_abs(vcpu, gaddr); vmaddr =3D gfn_to_hva(vcpu->kvm, gpa_to_gfn(gaddr)); @@ -359,7 +359,7 @@ static int handle_sske(struct kvm_vcpu *vcpu) bool unlocked; int rc; =20 - vcpu->stat.instruction_sske++; + vcpu->common->stat.instruction_sske++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -377,8 +377,8 @@ static int handle_sske(struct kvm_vcpu *vcpu) =20 kvm_s390_get_regs_rre(vcpu, ®1, ®2); =20 - key =3D vcpu->run->s.regs.gprs[reg1] & 0xfe; - start =3D vcpu->run->s.regs.gprs[reg2] & PAGE_MASK; + key =3D vcpu->common->run->s.regs.gprs[reg1] & 0xfe; + start =3D vcpu->common->run->s.regs.gprs[reg2] & PAGE_MASK; start =3D kvm_s390_logical_to_effective(vcpu, start); if (m3 & SSKE_MB) { /* start already designates an absolute address */ @@ -421,24 +421,24 @@ static int handle_sske(struct kvm_vcpu *vcpu) kvm_s390_set_psw_cc(vcpu, 3); } else { kvm_s390_set_psw_cc(vcpu, rc); - vcpu->run->s.regs.gprs[reg1] &=3D ~0xff00UL; - vcpu->run->s.regs.gprs[reg1] |=3D (u64) oldkey << 8; + vcpu->common->run->s.regs.gprs[reg1] &=3D ~0xff00UL; + vcpu->common->run->s.regs.gprs[reg1] |=3D (u64) oldkey << 8; } } if (m3 & SSKE_MB) { if (psw_bits(vcpu->arch.sie_block->gpsw).eaba =3D=3D PSW_BITS_AMODE_64BI= T) - vcpu->run->s.regs.gprs[reg2] &=3D ~PAGE_MASK; + vcpu->common->run->s.regs.gprs[reg2] &=3D ~PAGE_MASK; else - vcpu->run->s.regs.gprs[reg2] &=3D ~0xfffff000UL; + vcpu->common->run->s.regs.gprs[reg2] &=3D ~0xfffff000UL; end =3D kvm_s390_logical_to_effective(vcpu, end); - vcpu->run->s.regs.gprs[reg2] |=3D end; + vcpu->common->run->s.regs.gprs[reg2] |=3D end; } return 0; } =20 static int 
handle_ipte_interlock(struct kvm_vcpu *vcpu) { - vcpu->stat.instruction_ipte_interlock++; + vcpu->common->stat.instruction_ipte_interlock++; if (psw_bits(vcpu->arch.sie_block->gpsw).pstate) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); wait_event(vcpu->kvm->arch.ipte_wq, !ipte_lock_held(vcpu->kvm)); @@ -452,13 +452,13 @@ static int handle_test_block(struct kvm_vcpu *vcpu) gpa_t addr; int reg2; =20 - vcpu->stat.instruction_tb++; + vcpu->common->stat.instruction_tb++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); =20 kvm_s390_get_regs_rre(vcpu, NULL, ®2); - addr =3D vcpu->run->s.regs.gprs[reg2] & PAGE_MASK; + addr =3D vcpu->common->run->s.regs.gprs[reg2] & PAGE_MASK; addr =3D kvm_s390_logical_to_effective(vcpu, addr); if (kvm_s390_check_low_addr_prot_real(vcpu, addr)) return kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm); @@ -473,7 +473,7 @@ static int handle_test_block(struct kvm_vcpu *vcpu) if (kvm_clear_guest(vcpu->kvm, addr, PAGE_SIZE)) return -EFAULT; kvm_s390_set_psw_cc(vcpu, 0); - vcpu->run->s.regs.gprs[0] =3D 0; + vcpu->common->run->s.regs.gprs[0] =3D 0; return 0; } =20 @@ -486,7 +486,7 @@ static int handle_tpi(struct kvm_vcpu *vcpu) u64 addr; u8 ar; =20 - vcpu->stat.instruction_tpi++; + vcpu->common->stat.instruction_tpi++; =20 addr =3D kvm_s390_get_base_disp_s(vcpu, &ar); if (addr & 3) @@ -548,12 +548,12 @@ static int handle_tsch(struct kvm_vcpu *vcpu) struct kvm_s390_interrupt_info *inti =3D NULL; const u64 isc_mask =3D 0xffUL << 24; /* all iscs set */ =20 - vcpu->stat.instruction_tsch++; + vcpu->common->stat.instruction_tsch++; =20 /* a valid schid has at least one bit set */ - if (vcpu->run->s.regs.gprs[1]) + if (vcpu->common->run->s.regs.gprs[1]) inti =3D kvm_s390_get_io_int(vcpu->kvm, isc_mask, - vcpu->run->s.regs.gprs[1]); + vcpu->common->run->s.regs.gprs[1]); =20 /* * Prepare exit to userspace. @@ -563,15 +563,15 @@ static int handle_tsch(struct kvm_vcpu *vcpu) * interrupts, this is no problem since the priority is kept * intact. */ - vcpu->run->exit_reason =3D KVM_EXIT_S390_TSCH; - vcpu->run->s390_tsch.dequeued =3D !!inti; + vcpu->common->run->exit_reason =3D KVM_EXIT_S390_TSCH; + vcpu->common->run->s390_tsch.dequeued =3D !!inti; if (inti) { - vcpu->run->s390_tsch.subchannel_id =3D inti->io.subchannel_id; - vcpu->run->s390_tsch.subchannel_nr =3D inti->io.subchannel_nr; - vcpu->run->s390_tsch.io_int_parm =3D inti->io.io_int_parm; - vcpu->run->s390_tsch.io_int_word =3D inti->io.io_int_word; + vcpu->common->run->s390_tsch.subchannel_id =3D inti->io.subchannel_id; + vcpu->common->run->s390_tsch.subchannel_nr =3D inti->io.subchannel_nr; + vcpu->common->run->s390_tsch.io_int_parm =3D inti->io.io_int_parm; + vcpu->common->run->s390_tsch.io_int_word =3D inti->io.io_int_word; } - vcpu->run->s390_tsch.ipb =3D vcpu->arch.sie_block->ipb; + vcpu->common->run->s390_tsch.ipb =3D vcpu->arch.sie_block->ipb; kfree(inti); return -EREMOTE; } @@ -593,7 +593,7 @@ static int handle_io_inst(struct kvm_vcpu *vcpu) if (vcpu->arch.sie_block->ipa =3D=3D 0xb235) return handle_tsch(vcpu); /* Handle in userspace. */ - vcpu->stat.instruction_io_other++; + vcpu->common->stat.instruction_io_other++; return -EOPNOTSUPP; } else { /* @@ -642,7 +642,7 @@ static int handle_pqap(struct kvm_vcpu *vcpu) * Note: running nested under z/VM can result in intercepts for other * function codes, e.g. PQAP(QCI). We do not support this and bail out. 
*/ - reg0 =3D vcpu->run->s.regs.gprs[0]; + reg0 =3D vcpu->common->run->s.regs.gprs[0]; fc =3D (reg0 >> 24) & 0xff; if (fc !=3D 0x03) return -EOPNOTSUPP; @@ -677,7 +677,7 @@ static int handle_pqap(struct kvm_vcpu *vcpu) pqap_hook =3D *vcpu->kvm->arch.crypto.pqap_hook; ret =3D pqap_hook(vcpu); if (!ret) { - if (vcpu->run->s.regs.gprs[1] & 0x00ff0000) + if (vcpu->common->run->s.regs.gprs[1] & 0x00ff0000) kvm_s390_set_psw_cc(vcpu, 3); else kvm_s390_set_psw_cc(vcpu, 0); @@ -692,7 +692,7 @@ static int handle_pqap(struct kvm_vcpu *vcpu) * We send this response to the guest. */ status.response_code =3D 0x01; - memcpy(&vcpu->run->s.regs.gprs[1], &status, sizeof(status)); + memcpy(&vcpu->common->run->s.regs.gprs[1], &status, sizeof(status)); kvm_s390_set_psw_cc(vcpu, 3); return 0; } @@ -702,7 +702,7 @@ static int handle_stfl(struct kvm_vcpu *vcpu) int rc; unsigned int fac; =20 - vcpu->stat.instruction_stfl++; + vcpu->common->stat.instruction_stfl++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -751,7 +751,7 @@ int kvm_s390_handle_lpsw(struct kvm_vcpu *vcpu) int rc; u8 ar; =20 - vcpu->stat.instruction_lpsw++; + vcpu->common->stat.instruction_lpsw++; =20 if (gpsw->mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -780,7 +780,7 @@ static int handle_lpswe(struct kvm_vcpu *vcpu) int rc; u8 ar; =20 - vcpu->stat.instruction_lpswe++; + vcpu->common->stat.instruction_lpswe++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -834,7 +834,7 @@ static int handle_stidp(struct kvm_vcpu *vcpu) int rc; u8 ar; =20 - vcpu->stat.instruction_stidp++; + vcpu->common->stat.instruction_stidp++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -882,25 +882,25 @@ static void handle_stsi_3_2_2(struct kvm_vcpu *vcpu, = struct sysinfo_3_2_2 *mem) static void insert_stsi_usr_data(struct kvm_vcpu *vcpu, u64 addr, u8 ar, u8 fc, u8 sel1, u16 sel2) { - vcpu->run->exit_reason =3D KVM_EXIT_S390_STSI; - vcpu->run->s390_stsi.addr =3D addr; - vcpu->run->s390_stsi.ar =3D ar; - vcpu->run->s390_stsi.fc =3D fc; - vcpu->run->s390_stsi.sel1 =3D sel1; - vcpu->run->s390_stsi.sel2 =3D sel2; + vcpu->common->run->exit_reason =3D KVM_EXIT_S390_STSI; + vcpu->common->run->s390_stsi.addr =3D addr; + vcpu->common->run->s390_stsi.ar =3D ar; + vcpu->common->run->s390_stsi.fc =3D fc; + vcpu->common->run->s390_stsi.sel1 =3D sel1; + vcpu->common->run->s390_stsi.sel2 =3D sel2; } =20 static int handle_stsi(struct kvm_vcpu *vcpu) { - int fc =3D (vcpu->run->s.regs.gprs[0] & 0xf0000000) >> 28; - int sel1 =3D vcpu->run->s.regs.gprs[0] & 0xff; - int sel2 =3D vcpu->run->s.regs.gprs[1] & 0xffff; + int fc =3D (vcpu->common->run->s.regs.gprs[0] & 0xf0000000) >> 28; + int sel1 =3D vcpu->common->run->s.regs.gprs[0] & 0xff; + int sel2 =3D vcpu->common->run->s.regs.gprs[1] & 0xffff; unsigned long mem =3D 0; u64 operand2; int rc =3D 0; u8 ar; =20 - vcpu->stat.instruction_stsi++; + vcpu->common->stat.instruction_stsi++; VCPU_EVENT(vcpu, 3, "STSI: fc: %u sel1: %u sel2: %u", fc, sel1, sel2); =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) @@ -919,12 +919,12 @@ static int handle_stsi(struct kvm_vcpu *vcpu) !vcpu->kvm->arch.user_stsi)) goto out_no_data; =20 - if (vcpu->run->s.regs.gprs[0] & 0x0fffff00 - || vcpu->run->s.regs.gprs[1] & 0xffff0000) + if (vcpu->common->run->s.regs.gprs[0] & 0x0fffff00 + || 
vcpu->common->run->s.regs.gprs[1] & 0xffff0000) return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); =20 if (fc =3D=3D 0) { - vcpu->run->s.regs.gprs[0] =3D 3 << 28; + vcpu->common->run->s.regs.gprs[0] =3D 3 << 28; kvm_s390_set_psw_cc(vcpu, 0); return 0; } @@ -973,7 +973,7 @@ static int handle_stsi(struct kvm_vcpu *vcpu) trace_kvm_s390_handle_stsi(vcpu, fc, sel1, sel2, operand2); free_page(mem); kvm_s390_set_psw_cc(vcpu, 0); - vcpu->run->s.regs.gprs[0] =3D 0; + vcpu->common->run->s.regs.gprs[0] =3D 0; return rc; out_no_data: kvm_s390_set_psw_cc(vcpu, 3); @@ -1044,16 +1044,16 @@ static int handle_epsw(struct kvm_vcpu *vcpu) { int reg1, reg2; =20 - vcpu->stat.instruction_epsw++; + vcpu->common->stat.instruction_epsw++; =20 kvm_s390_get_regs_rre(vcpu, ®1, ®2); =20 /* This basically extracts the mask half of the psw. */ - vcpu->run->s.regs.gprs[reg1] &=3D 0xffffffff00000000UL; - vcpu->run->s.regs.gprs[reg1] |=3D vcpu->arch.sie_block->gpsw.mask >> 32; + vcpu->common->run->s.regs.gprs[reg1] &=3D 0xffffffff00000000UL; + vcpu->common->run->s.regs.gprs[reg1] |=3D vcpu->arch.sie_block->gpsw.mask= >> 32; if (reg2) { - vcpu->run->s.regs.gprs[reg2] &=3D 0xffffffff00000000UL; - vcpu->run->s.regs.gprs[reg2] |=3D + vcpu->common->run->s.regs.gprs[reg2] &=3D 0xffffffff00000000UL; + vcpu->common->run->s.regs.gprs[reg2] |=3D vcpu->arch.sie_block->gpsw.mask & 0x00000000ffffffffUL; } return 0; @@ -1076,7 +1076,7 @@ static int handle_pfmf(struct kvm_vcpu *vcpu) unsigned long start, end; unsigned char key; =20 - vcpu->stat.instruction_pfmf++; + vcpu->common->stat.instruction_pfmf++; =20 kvm_s390_get_regs_rre(vcpu, ®1, ®2); =20 @@ -1086,32 +1086,32 @@ static int handle_pfmf(struct kvm_vcpu *vcpu) if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); =20 - if (vcpu->run->s.regs.gprs[reg1] & PFMF_RESERVED) + if (vcpu->common->run->s.regs.gprs[reg1] & PFMF_RESERVED) return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); =20 /* Only provide non-quiescing support if enabled for the guest */ - if (vcpu->run->s.regs.gprs[reg1] & PFMF_NQ && + if (vcpu->common->run->s.regs.gprs[reg1] & PFMF_NQ && !test_kvm_facility(vcpu->kvm, 14)) return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); =20 /* Only provide conditional-SSKE support if enabled for the guest */ - if (vcpu->run->s.regs.gprs[reg1] & PFMF_SK && + if (vcpu->common->run->s.regs.gprs[reg1] & PFMF_SK && test_kvm_facility(vcpu->kvm, 10)) { - mr =3D vcpu->run->s.regs.gprs[reg1] & PFMF_MR; - mc =3D vcpu->run->s.regs.gprs[reg1] & PFMF_MC; + mr =3D vcpu->common->run->s.regs.gprs[reg1] & PFMF_MR; + mc =3D vcpu->common->run->s.regs.gprs[reg1] & PFMF_MC; } =20 - nq =3D vcpu->run->s.regs.gprs[reg1] & PFMF_NQ; - key =3D vcpu->run->s.regs.gprs[reg1] & PFMF_KEY; - start =3D vcpu->run->s.regs.gprs[reg2] & PAGE_MASK; + nq =3D vcpu->common->run->s.regs.gprs[reg1] & PFMF_NQ; + key =3D vcpu->common->run->s.regs.gprs[reg1] & PFMF_KEY; + start =3D vcpu->common->run->s.regs.gprs[reg2] & PAGE_MASK; start =3D kvm_s390_logical_to_effective(vcpu, start); =20 - if (vcpu->run->s.regs.gprs[reg1] & PFMF_CF) { + if (vcpu->common->run->s.regs.gprs[reg1] & PFMF_CF) { if (kvm_s390_check_low_addr_prot_real(vcpu, start)) return kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm); } =20 - switch (vcpu->run->s.regs.gprs[reg1] & PFMF_FSC) { + switch (vcpu->common->run->s.regs.gprs[reg1] & PFMF_FSC) { case 0x00000000: /* only 4k frames specify a real address */ start =3D kvm_s390_real_to_abs(vcpu, start); @@ -1141,12 +1141,12 @@ 
static int handle_pfmf(struct kvm_vcpu *vcpu) if (kvm_is_error_hva(vmaddr)) return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING); =20 - if (vcpu->run->s.regs.gprs[reg1] & PFMF_CF) { + if (vcpu->common->run->s.regs.gprs[reg1] & PFMF_CF) { if (kvm_clear_guest(vcpu->kvm, start, PAGE_SIZE)) return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING); } =20 - if (vcpu->run->s.regs.gprs[reg1] & PFMF_SK) { + if (vcpu->common->run->s.regs.gprs[reg1] & PFMF_SK) { int rc =3D kvm_s390_skey_check_enable(vcpu); =20 if (rc) @@ -1169,13 +1169,13 @@ static int handle_pfmf(struct kvm_vcpu *vcpu) } start +=3D PAGE_SIZE; } - if (vcpu->run->s.regs.gprs[reg1] & PFMF_FSC) { + if (vcpu->common->run->s.regs.gprs[reg1] & PFMF_FSC) { if (psw_bits(vcpu->arch.sie_block->gpsw).eaba =3D=3D PSW_BITS_AMODE_64BI= T) { - vcpu->run->s.regs.gprs[reg2] =3D end; + vcpu->common->run->s.regs.gprs[reg2] =3D end; } else { - vcpu->run->s.regs.gprs[reg2] &=3D ~0xffffffffUL; + vcpu->common->run->s.regs.gprs[reg2] &=3D ~0xffffffffUL; end =3D kvm_s390_logical_to_effective(vcpu, end); - vcpu->run->s.regs.gprs[reg2] |=3D end; + vcpu->common->run->s.regs.gprs[reg2] |=3D end; } } return 0; @@ -1196,7 +1196,7 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, co= nst int orc) */ =20 kvm_s390_get_regs_rre(vcpu, &r1, &r2); - gfn =3D vcpu->run->s.regs.gprs[r2] >> PAGE_SHIFT; + gfn =3D vcpu->common->run->s.regs.gprs[r2] >> PAGE_SHIFT; hva =3D gfn_to_hva(vcpu->kvm, gfn); entries =3D (vcpu->arch.sie_block->cbrlo & ~PAGE_MASK) >> 3; =20 @@ -1206,7 +1206,7 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, co= nst int orc) nappended =3D pgste_perform_essa(vcpu->kvm->mm, hva, orc, &ptev, &pgstev); if (nappended < 0) { res =3D orc ? 0x10 : 0; - vcpu->run->s.regs.gprs[r1] =3D res; /* Exception Indication */ + vcpu->common->run->s.regs.gprs[r1] =3D res; /* Exception Indication */ return 0; } res =3D (pgstev & _PGSTE_GPS_USAGE_MASK) >> 22; @@ -1223,7 +1223,7 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, co= nst int orc) } if (pgstev & _PGSTE_GPS_NODAT) res |=3D 0x20; - vcpu->run->s.regs.gprs[r1] =3D res; + vcpu->common->run->s.regs.gprs[r1] =3D res; /* * It is possible that all the normal 511 slots were full, in which case * we will now write in the 512th slot, which is reserved for host use. 
@@ -1256,7 +1256,7 @@ static int handle_essa(struct kvm_vcpu *vcpu) =20 VCPU_EVENT(vcpu, 4, "ESSA: release %d pages", entries); gmap =3D vcpu->arch.gmap; - vcpu->stat.instruction_essa++; + vcpu->common->stat.instruction_essa++; if (!vcpu->kvm->arch.use_cmma) return kvm_s390_inject_program_int(vcpu, PGM_OPERATION); =20 @@ -1345,7 +1345,7 @@ int kvm_s390_handle_lctl(struct kvm_vcpu *vcpu) u64 ga; u8 ar; =20 - vcpu->stat.instruction_lctl++; + vcpu->common->stat.instruction_lctl++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1384,7 +1384,7 @@ int kvm_s390_handle_stctl(struct kvm_vcpu *vcpu) u64 ga; u8 ar; =20 - vcpu->stat.instruction_stctl++; + vcpu->common->stat.instruction_stctl++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1418,7 +1418,7 @@ static int handle_lctlg(struct kvm_vcpu *vcpu) u64 ga; u8 ar; =20 - vcpu->stat.instruction_lctlg++; + vcpu->common->stat.instruction_lctlg++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1456,7 +1456,7 @@ static int handle_stctg(struct kvm_vcpu *vcpu) u64 ga; u8 ar; =20 - vcpu->stat.instruction_stctg++; + vcpu->common->stat.instruction_stctg++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1508,7 +1508,7 @@ static int handle_tprot(struct kvm_vcpu *vcpu) int ret, cc; u8 ar; =20 - vcpu->stat.instruction_tprot++; + vcpu->common->stat.instruction_tprot++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); @@ -1572,16 +1572,16 @@ static int handle_sckpf(struct kvm_vcpu *vcpu) { u32 value; =20 - vcpu->stat.instruction_sckpf++; + vcpu->common->stat.instruction_sckpf++; =20 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); =20 - if (vcpu->run->s.regs.gprs[0] & 0x00000000ffff0000) + if (vcpu->common->run->s.regs.gprs[0] & 0x00000000ffff0000) return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); =20 - value =3D vcpu->run->s.regs.gprs[0] & 0x000000000000ffff; + value =3D vcpu->common->run->s.regs.gprs[0] & 0x000000000000ffff; vcpu->arch.sie_block->todpr =3D value; =20 return 0; @@ -1589,7 +1589,7 @@ static int handle_sckpf(struct kvm_vcpu *vcpu) =20 static int handle_ptff(struct kvm_vcpu *vcpu) { - vcpu->stat.instruction_ptff++; + vcpu->common->stat.instruction_ptff++; =20 /* we don't emulate any control instructions yet */ kvm_s390_set_psw_cc(vcpu, 3); diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c index 75e81ba26d04..9c36d0c2a8ef 100644 --- a/arch/s390/kvm/pv.c +++ b/arch/s390/kvm/pv.c @@ -27,7 +27,7 @@ EXPORT_SYMBOL_GPL(kvm_s390_pv_is_protected); =20 bool kvm_s390_pv_cpu_is_protected(struct kvm_vcpu *vcpu) { - lockdep_assert_held(&vcpu->mutex); + lockdep_assert_held(&vcpu->common->mutex); return !!kvm_s390_pv_cpu_get_handle(vcpu); } EXPORT_SYMBOL_GPL(kvm_s390_pv_cpu_is_protected); diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c index 55c34cb35428..cfdc920d2b87 100644 --- a/arch/s390/kvm/sigp.c +++ b/arch/s390/kvm/sigp.c @@ -306,61 +306,61 @@ static int handle_sigp_dst(struct kvm_vcpu *vcpu, u8 = order_code, =20 switch (order_code) { case SIGP_SENSE: - vcpu->stat.instruction_sigp_sense++; + vcpu->common->stat.instruction_sigp_sense++; rc =3D __sigp_sense(vcpu, dst_vcpu, status_reg); break; case 
SIGP_EXTERNAL_CALL: - vcpu->stat.instruction_sigp_external_call++; + vcpu->common->stat.instruction_sigp_external_call++; rc =3D __sigp_external_call(vcpu, dst_vcpu, status_reg); break; case SIGP_EMERGENCY_SIGNAL: - vcpu->stat.instruction_sigp_emergency++; + vcpu->common->stat.instruction_sigp_emergency++; rc =3D __sigp_emergency(vcpu, dst_vcpu); break; case SIGP_STOP: - vcpu->stat.instruction_sigp_stop++; + vcpu->common->stat.instruction_sigp_stop++; rc =3D __sigp_stop(vcpu, dst_vcpu); break; case SIGP_STOP_AND_STORE_STATUS: - vcpu->stat.instruction_sigp_stop_store_status++; + vcpu->common->stat.instruction_sigp_stop_store_status++; rc =3D __sigp_stop_and_store_status(vcpu, dst_vcpu, status_reg); break; case SIGP_STORE_STATUS_AT_ADDRESS: - vcpu->stat.instruction_sigp_store_status++; + vcpu->common->stat.instruction_sigp_store_status++; rc =3D __sigp_store_status_at_addr(vcpu, dst_vcpu, parameter, status_reg); break; case SIGP_SET_PREFIX: - vcpu->stat.instruction_sigp_prefix++; + vcpu->common->stat.instruction_sigp_prefix++; rc =3D __sigp_set_prefix(vcpu, dst_vcpu, parameter, status_reg); break; case SIGP_COND_EMERGENCY_SIGNAL: - vcpu->stat.instruction_sigp_cond_emergency++; + vcpu->common->stat.instruction_sigp_cond_emergency++; rc =3D __sigp_conditional_emergency(vcpu, dst_vcpu, parameter, status_reg); break; case SIGP_SENSE_RUNNING: - vcpu->stat.instruction_sigp_sense_running++; + vcpu->common->stat.instruction_sigp_sense_running++; rc =3D __sigp_sense_running(vcpu, dst_vcpu, status_reg); break; case SIGP_START: - vcpu->stat.instruction_sigp_start++; + vcpu->common->stat.instruction_sigp_start++; rc =3D __prepare_sigp_re_start(vcpu, dst_vcpu, order_code); break; case SIGP_RESTART: - vcpu->stat.instruction_sigp_restart++; + vcpu->common->stat.instruction_sigp_restart++; rc =3D __prepare_sigp_re_start(vcpu, dst_vcpu, order_code); break; case SIGP_INITIAL_CPU_RESET: - vcpu->stat.instruction_sigp_init_cpu_reset++; + vcpu->common->stat.instruction_sigp_init_cpu_reset++; rc =3D __prepare_sigp_cpu_reset(vcpu, dst_vcpu, order_code); break; case SIGP_CPU_RESET: - vcpu->stat.instruction_sigp_cpu_reset++; + vcpu->common->stat.instruction_sigp_cpu_reset++; rc =3D __prepare_sigp_cpu_reset(vcpu, dst_vcpu, order_code); break; default: - vcpu->stat.instruction_sigp_unknown++; + vcpu->common->stat.instruction_sigp_unknown++; rc =3D __prepare_sigp_unknown(vcpu, dst_vcpu); } =20 @@ -387,34 +387,34 @@ static int handle_sigp_order_in_user_space(struct kvm= _vcpu *vcpu, u8 order_code, return 0; /* update counters as we're directly dropping to user space */ case SIGP_STOP: - vcpu->stat.instruction_sigp_stop++; + vcpu->common->stat.instruction_sigp_stop++; break; case SIGP_STOP_AND_STORE_STATUS: - vcpu->stat.instruction_sigp_stop_store_status++; + vcpu->common->stat.instruction_sigp_stop_store_status++; break; case SIGP_STORE_STATUS_AT_ADDRESS: - vcpu->stat.instruction_sigp_store_status++; + vcpu->common->stat.instruction_sigp_store_status++; break; case SIGP_STORE_ADDITIONAL_STATUS: - vcpu->stat.instruction_sigp_store_adtl_status++; + vcpu->common->stat.instruction_sigp_store_adtl_status++; break; case SIGP_SET_PREFIX: - vcpu->stat.instruction_sigp_prefix++; + vcpu->common->stat.instruction_sigp_prefix++; break; case SIGP_START: - vcpu->stat.instruction_sigp_start++; + vcpu->common->stat.instruction_sigp_start++; break; case SIGP_RESTART: - vcpu->stat.instruction_sigp_restart++; + vcpu->common->stat.instruction_sigp_restart++; break; case SIGP_INITIAL_CPU_RESET: - vcpu->stat.instruction_sigp_init_cpu_reset++; 
+ vcpu->common->stat.instruction_sigp_init_cpu_reset++; break; case SIGP_CPU_RESET: - vcpu->stat.instruction_sigp_cpu_reset++; + vcpu->common->stat.instruction_sigp_cpu_reset++; break; default: - vcpu->stat.instruction_sigp_unknown++; + vcpu->common->stat.instruction_sigp_unknown++; } VCPU_EVENT(vcpu, 3, "SIGP: order %u for CPU %d handled in userspace", order_code, cpu_addr); @@ -427,7 +427,7 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu) int r1 =3D (vcpu->arch.sie_block->ipa & 0x00f0) >> 4; int r3 =3D vcpu->arch.sie_block->ipa & 0x000f; u32 parameter; - u16 cpu_addr =3D vcpu->run->s.regs.gprs[r3]; + u16 cpu_addr =3D vcpu->common->run->s.regs.gprs[r3]; u8 order_code; int rc; =20 @@ -440,21 +440,21 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu) return -EOPNOTSUPP; =20 if (r1 % 2) - parameter =3D vcpu->run->s.regs.gprs[r1]; + parameter =3D vcpu->common->run->s.regs.gprs[r1]; else - parameter =3D vcpu->run->s.regs.gprs[r1 + 1]; + parameter =3D vcpu->common->run->s.regs.gprs[r1 + 1]; =20 trace_kvm_s390_handle_sigp(vcpu, order_code, cpu_addr, parameter); switch (order_code) { case SIGP_SET_ARCHITECTURE: - vcpu->stat.instruction_sigp_arch++; + vcpu->common->stat.instruction_sigp_arch++; rc =3D __sigp_set_arch(vcpu, parameter, - &vcpu->run->s.regs.gprs[r1]); + &vcpu->common->run->s.regs.gprs[r1]); break; default: rc =3D handle_sigp_dst(vcpu, order_code, cpu_addr, parameter, - &vcpu->run->s.regs.gprs[r1]); + &vcpu->common->run->s.regs.gprs[r1]); } =20 if (rc < 0) @@ -476,7 +476,7 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu) int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu) { int r3 =3D vcpu->arch.sie_block->ipa & 0x000f; - u16 cpu_addr =3D vcpu->run->s.regs.gprs[r3]; + u16 cpu_addr =3D vcpu->common->run->s.regs.gprs[r3]; struct kvm_vcpu *dest_vcpu; u8 order_code =3D kvm_s390_get_base_disp_rs(vcpu, NULL); =20 diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c index 89cafea4c41f..6a9a0fbfc41d 100644 --- a/arch/s390/kvm/vsie.c +++ b/arch/s390/kvm/vsie.c @@ -1028,7 +1028,7 @@ static u64 vsie_get_register(struct kvm_vcpu *vcpu, s= truct vsie_page *vsie_page, case 14: return vsie_page->scb_s.gg14; default: - return vcpu->run->s.regs.gprs[reg]; + return vcpu->common->run->s.regs.gprs[reg]; } } =20 @@ -1150,7 +1150,7 @@ static int do_vsie_run(struct kvm_vcpu *vcpu, struct = vsie_page *vsie_page) vcpu->arch.sie_block->prog0c |=3D PROG_IN_SIE; barrier(); if (!kvm_s390_vcpu_sie_inhibited(vcpu)) - rc =3D sie64a(scb_s, vcpu->run->s.regs.gprs, gmap_get_enabled()->asce); + rc =3D sie64a(scb_s, vcpu->common->run->s.regs.gprs, gmap_get_enabled()-= >asce); barrier(); vcpu->arch.sie_block->prog0c &=3D ~PROG_IN_SIE; =20 @@ -1426,7 +1426,7 @@ int kvm_s390_handle_vsie(struct kvm_vcpu *vcpu) unsigned long scb_addr; int rc; =20 - vcpu->stat.instruction_sie++; + vcpu->common->stat.instruction_sie++; if (!test_kvm_cpu_feat(vcpu->kvm, KVM_S390_VM_CPU_FEAT_SIEF2)) return -EOPNOTSUPP; if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c index 999227fc7c66..2c71b422238a 100644 --- a/arch/x86/kvm/debugfs.c +++ b/arch/x86/kvm/debugfs.c @@ -24,7 +24,7 @@ DEFINE_SIMPLE_ATTRIBUTE(vcpu_timer_advance_ns_fops, vcpu_= get_timer_advance_ns, N static int vcpu_get_guest_mode(void *data, u64 *val) { struct kvm_vcpu *vcpu =3D (struct kvm_vcpu *) data; - *val =3D vcpu->stat.guest_mode; + *val =3D vcpu->common->stat.guest_mode; return 0; } =20 diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 4f0a94346d00..d95beecd818d 100644 --- 
a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -337,9 +337,9 @@ static int kvm_hv_syndbg_complete_userspace(struct kvm_= vcpu *vcpu) { struct kvm_hv *hv =3D to_kvm_hv(vcpu->kvm); =20 - if (vcpu->run->hyperv.u.syndbg.msr =3D=3D HV_X64_MSR_SYNDBG_CONTROL) + if (vcpu->common->run->hyperv.u.syndbg.msr =3D=3D HV_X64_MSR_SYNDBG_CONTR= OL) hv->hv_syndbg.control.status =3D - vcpu->run->hyperv.u.syndbg.status; + vcpu->common->run->hyperv.u.syndbg.status; return 1; } =20 @@ -1988,7 +1988,7 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu) for (j =3D 0; j < (entries[i] & ~PAGE_MASK) + 1; j++) kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE); =20 - ++vcpu->stat.tlb_flush; + ++vcpu->common->stat.tlb_flush; } return 0; =20 @@ -2387,7 +2387,7 @@ static int kvm_hv_hypercall_complete(struct kvm_vcpu = *vcpu, u64 result) =20 trace_kvm_hv_hypercall_done(result); kvm_hv_hypercall_set_result(vcpu, result); - ++vcpu->stat.hypercalls; + ++vcpu->common->stat.hypercalls; =20 ret =3D kvm_skip_emulated_instruction(vcpu); =20 @@ -2399,7 +2399,7 @@ static int kvm_hv_hypercall_complete(struct kvm_vcpu = *vcpu, u64 result) =20 static int kvm_hv_hypercall_complete_userspace(struct kvm_vcpu *vcpu) { - return kvm_hv_hypercall_complete(vcpu, vcpu->run->hyperv.u.hcall.result); + return kvm_hv_hypercall_complete(vcpu, vcpu->common->run->hyperv.u.hcall.= result); } =20 static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, struct kvm_hv_hc= all *hc) @@ -2678,11 +2678,11 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu) return kvm_hv_hypercall_complete(vcpu, ret); =20 hypercall_userspace_exit: - vcpu->run->exit_reason =3D KVM_EXIT_HYPERV; - vcpu->run->hyperv.type =3D KVM_EXIT_HYPERV_HCALL; - vcpu->run->hyperv.u.hcall.input =3D hc.param; - vcpu->run->hyperv.u.hcall.params[0] =3D hc.ingpa; - vcpu->run->hyperv.u.hcall.params[1] =3D hc.outgpa; + vcpu->common->run->exit_reason =3D KVM_EXIT_HYPERV; + vcpu->common->run->hyperv.type =3D KVM_EXIT_HYPERV_HCALL; + vcpu->common->run->hyperv.u.hcall.input =3D hc.param; + vcpu->common->run->hyperv.u.hcall.params[0] =3D hc.ingpa; + vcpu->common->run->hyperv.u.hcall.params[1] =3D hc.outgpa; vcpu->arch.complete_userspace_io =3D kvm_hv_hypercall_complete_userspace; return 0; } diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h index b1eb46e26b2e..6d99076e570f 100644 --- a/arch/x86/kvm/kvm_cache_regs.h +++ b/arch/x86/kvm/kvm_cache_regs.h @@ -208,7 +208,7 @@ static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcp= u) static inline void enter_guest_mode(struct kvm_vcpu *vcpu) { vcpu->arch.hflags |=3D HF_GUEST_MASK; - vcpu->stat.guest_mode =3D 1; + vcpu->common->stat.guest_mode =3D 1; } =20 static inline void leave_guest_mode(struct kvm_vcpu *vcpu) @@ -220,7 +220,7 @@ static inline void leave_guest_mode(struct kvm_vcpu *vc= pu) kvm_make_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu); } =20 - vcpu->stat.guest_mode =3D 0; + vcpu->common->stat.guest_mode =3D 0; } =20 static inline bool is_guest_mode(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 4915acdbfcd8..e2dd573e4f2d 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -173,7 +173,7 @@ bool kvm_can_use_hv_timer(struct kvm_vcpu *vcpu) =20 static bool kvm_use_posted_timer_interrupt(struct kvm_vcpu *vcpu) { - return kvm_can_post_timer_interrupt(vcpu) && vcpu->mode =3D=3D IN_GUEST_M= ODE; + return kvm_can_post_timer_interrupt(vcpu) && vcpu->common->mode =3D=3D IN= _GUEST_MODE; } =20 static inline u32 kvm_apic_calc_x2apic_ldr(u32 id) @@ -1564,7 +1564,7 @@ static u32 
apic_get_tmcct(struct kvm_lapic *apic) static void __report_tpr_access(struct kvm_lapic *apic, bool write) { struct kvm_vcpu *vcpu =3D apic->vcpu; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 kvm_make_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu); run->tpr_access.rip =3D kvm_rip_read(vcpu); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 928cf84778b0..a61e68249620 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -658,9 +658,9 @@ static void walk_shadow_page_lockless_begin(struct kvm_= vcpu *vcpu) =20 /* * Make sure a following spte read is not reordered ahead of the write - * to vcpu->mode. + * to vcpu->common->mode. */ - smp_store_mb(vcpu->mode, READING_SHADOW_PAGE_TABLES); + smp_store_mb(vcpu->common->mode, READING_SHADOW_PAGE_TABLES); } } =20 @@ -670,11 +670,11 @@ static void walk_shadow_page_lockless_end(struct kvm_= vcpu *vcpu) kvm_tdp_mmu_walk_lockless_end(); } else { /* - * Make sure the write to vcpu->mode is not reordered in front of + * Make sure the write to vcpu->common->mode is not reordered in front of * reads to sptes. If it does, kvm_mmu_commit_zap_page() can see us * OUTSIDE_GUEST_MODE and proceed to free the shadow page table. */ - smp_store_release(&vcpu->mode, OUTSIDE_GUEST_MODE); + smp_store_release(&vcpu->common->mode, OUTSIDE_GUEST_MODE); local_irq_enable(); } } @@ -2609,7 +2609,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm, =20 /* * We need to make sure everyone sees our modifications to - * the page tables and see changes to vcpu->mode here. The barrier + * the page tables and see changes to vcpu->common->mode here. The barrier * in the kvm_flush_remote_tlbs() achieves this. This pairs * with vcpu_enter_guest and walk_shadow_page_lockless_begin/end. * @@ -2880,7 +2880,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct= kvm_memory_slot *slot, bool write_fault =3D fault && fault->write; =20 if (unlikely(is_noslot_pfn(pfn))) { - vcpu->stat.pf_mmio_spte_created++; + vcpu->common->stat.pf_mmio_spte_created++; mark_mmio_spte(vcpu, sptep, gfn, pte_access); return RET_PF_EMULATE; } @@ -3547,7 +3547,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, str= uct kvm_page_fault *fault) walk_shadow_page_lockless_end(vcpu); =20 if (ret !=3D RET_PF_INVALID) - vcpu->stat.pf_fast++; + vcpu->common->stat.pf_fast++; =20 return ret; } @@ -4299,7 +4299,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu,= struct kvm_async_pf *work) * truly spurious and never trigger emulation */ if (r =3D=3D RET_PF_FIXED) - vcpu->stat.pf_fixed++; + vcpu->common->stat.pf_fixed++; } =20 static inline u8 kvm_max_level_for_order(int order) @@ -5997,7 +5997,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu= , gpa_t cr2_or_gpa, u64 err } =20 if (r =3D=3D RET_PF_INVALID) { - vcpu->stat.pf_taken++; + vcpu->common->stat.pf_taken++; =20 r =3D kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false, &emulation_type, NULL); @@ -6009,11 +6009,11 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vc= pu, gpa_t cr2_or_gpa, u64 err return r; =20 if (r =3D=3D RET_PF_FIXED) - vcpu->stat.pf_fixed++; + vcpu->common->stat.pf_fixed++; else if (r =3D=3D RET_PF_EMULATE) - vcpu->stat.pf_emulate++; + vcpu->common->stat.pf_emulate++; else if (r =3D=3D RET_PF_SPURIOUS) - vcpu->stat.pf_spurious++; + vcpu->common->stat.pf_spurious++; =20 if (r !=3D RET_PF_EMULATE) return 1; @@ -6145,7 +6145,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva) * done here for them. 
*/ kvm_mmu_invalidate_addr(vcpu, vcpu->arch.walk_mmu, gva, KVM_MMU_ROOTS_ALL= ); - ++vcpu->stat.invlpg; + ++vcpu->common->stat.invlpg; } EXPORT_SYMBOL_GPL(kvm_mmu_invlpg); =20 @@ -6167,7 +6167,7 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t= gva, unsigned long pcid) =20 if (roots) kvm_mmu_invalidate_addr(vcpu, mmu, gva, roots); - ++vcpu->stat.invlpg; + ++vcpu->common->stat.invlpg; =20 /* * Mappings not reachable via the current cr3 or the prev_roots will be diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index c7dc49ee7388..edc6d0594d1c 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1053,7 +1053,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm= _vcpu *vcpu, =20 /* If a MMIO SPTE is installed, the MMIO will need to be emulated. */ if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) { - vcpu->stat.pf_mmio_spte_created++; + vcpu->common->stat.pf_mmio_spte_created++; trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn, new_spte); ret =3D RET_PF_EMULATE; diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 47a46283c866..599827b14fcd 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -503,7 +503,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu) =20 /* * The reprogramming bitmap can be written asynchronously by something - * other than the task that holds vcpu->mutex, take care to clear only + * other than the task that holds vcpu->common->mutex, take care to clear= only * the bits that will actually processed. */ BUILD_BUG_ON(sizeof(bitmap) !=3D sizeof(atomic64_t)); diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 6f704c1037e5..521b3e9ce60a 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1795,10 +1795,10 @@ static bool svm_get_nested_state_pages(struct kvm_v= cpu *vcpu) return false; =20 if (!nested_svm_vmrun_msrpm(svm)) { - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; - vcpu->run->internal.suberror =3D + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->internal.suberror =3D KVM_INTERNAL_ERROR_EMULATION; - vcpu->run->internal.ndata =3D 0; + vcpu->common->run->internal.ndata =3D 0; return false; } =20 diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 25d5fe0dab5a..2ad1b9b497e0 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -825,7 +825,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm) int i; =20 /* Check some debug related fields before encrypting the VMSA */ - if (svm->vcpu.guest_debug || (svm->vmcb->save.dr7 & ~DR7_FIXED_1)) + if (svm->vcpu.common->guest_debug || (svm->vmcb->save.dr7 & ~DR7_FIXED_1)) return -EINVAL; =20 /* @@ -916,7 +916,7 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, st= ruct kvm_vcpu *vcpu, struct vcpu_svm *svm =3D to_svm(vcpu); int ret; =20 - if (vcpu->guest_debug) { + if (vcpu->common->guest_debug) { pr_warn_once("KVM_SET_GUEST_DEBUG for SEV-ES guest is not supported"); return -EINVAL; } @@ -970,13 +970,13 @@ static int sev_launch_update_vmsa(struct kvm *kvm, st= ruct kvm_sev_cmd *argp) return -ENOTTY; =20 kvm_for_each_vcpu(i, vcpu, kvm) { - ret =3D mutex_lock_killable(&vcpu->mutex); + ret =3D mutex_lock_killable(&vcpu->common->mutex); if (ret) return ret; =20 ret =3D __sev_launch_update_vmsa(kvm, vcpu, &argp->error); =20 - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); if (ret) return ret; } @@ -1931,7 +1931,7 @@ static int sev_lock_vcpus_for_migration(struct kvm *k= vm, unsigned long i, j; =20 kvm_for_each_vcpu(i, 
vcpu, kvm) { - if (mutex_lock_killable_nested(&vcpu->mutex, role)) + if (mutex_lock_killable_nested(&vcpu->common->mutex, role)) goto out_unlock; =20 #ifdef CONFIG_PROVE_LOCKING @@ -1942,7 +1942,7 @@ static int sev_lock_vcpus_for_migration(struct kvm *k= vm, */ role =3D SEV_NR_MIGRATION_ROLES; else - mutex_release(&vcpu->mutex.dep_map, _THIS_IP_); + mutex_release(&vcpu->common->mutex.dep_map, _THIS_IP_); #endif } =20 @@ -1956,10 +1956,10 @@ static int sev_lock_vcpus_for_migration(struct kvm = *kvm, =20 #ifdef CONFIG_PROVE_LOCKING if (j) - mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_); + mutex_acquire(&vcpu->common->mutex.dep_map, role, 0, _THIS_IP_); #endif =20 - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); } return -EINTR; } @@ -1974,10 +1974,10 @@ static void sev_unlock_vcpus_for_migration(struct k= vm *kvm) if (first) first =3D false; else - mutex_acquire(&vcpu->mutex.dep_map, + mutex_acquire(&vcpu->common->mutex.dep_map, SEV_NR_MIGRATION_ROLES, 0, _THIS_IP_); =20 - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); } } =20 @@ -3697,7 +3697,7 @@ static int snp_complete_psc_msr(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm =3D to_svm(vcpu); =20 - if (vcpu->run->hypercall.ret) + if (vcpu->common->run->hypercall.ret) set_ghcb_msr(svm, GHCB_MSR_PSC_RESP_ERROR); else set_ghcb_msr(svm, GHCB_MSR_PSC_RESP); @@ -3721,14 +3721,14 @@ static int snp_begin_psc_msr(struct vcpu_svm *svm, = u64 ghcb_msr) return 1; /* resume guest */ } =20 - vcpu->run->exit_reason =3D KVM_EXIT_HYPERCALL; - vcpu->run->hypercall.nr =3D KVM_HC_MAP_GPA_RANGE; - vcpu->run->hypercall.args[0] =3D gpa; - vcpu->run->hypercall.args[1] =3D 1; - vcpu->run->hypercall.args[2] =3D (op =3D=3D SNP_PAGE_STATE_PRIVATE) + vcpu->common->run->exit_reason =3D KVM_EXIT_HYPERCALL; + vcpu->common->run->hypercall.nr =3D KVM_HC_MAP_GPA_RANGE; + vcpu->common->run->hypercall.args[0] =3D gpa; + vcpu->common->run->hypercall.args[1] =3D 1; + vcpu->common->run->hypercall.args[2] =3D (op =3D=3D SNP_PAGE_STATE_PRIVAT= E) ? KVM_MAP_GPA_RANGE_ENCRYPTED : KVM_MAP_GPA_RANGE_DECRYPTED; - vcpu->run->hypercall.args[2] |=3D KVM_MAP_GPA_RANGE_PAGE_SZ_4K; + vcpu->common->run->hypercall.args[2] |=3D KVM_MAP_GPA_RANGE_PAGE_SZ_4K; =20 vcpu->arch.complete_userspace_io =3D snp_complete_psc_msr; =20 @@ -3777,7 +3777,7 @@ static int snp_complete_one_psc(struct kvm_vcpu *vcpu) struct vcpu_svm *svm =3D to_svm(vcpu); struct psc_buffer *psc =3D svm->sev_es.ghcb_sa; =20 - if (vcpu->run->hypercall.ret) { + if (vcpu->common->run->hypercall.ret) { snp_complete_psc(svm, VMGEXIT_PSC_ERROR_GENERIC); return 1; /* resume guest */ } @@ -3884,14 +3884,14 @@ static int snp_begin_psc(struct vcpu_svm *svm, stru= ct psc_buffer *psc) switch (entry_start.operation) { case VMGEXIT_PSC_OP_PRIVATE: case VMGEXIT_PSC_OP_SHARED: - vcpu->run->exit_reason =3D KVM_EXIT_HYPERCALL; - vcpu->run->hypercall.nr =3D KVM_HC_MAP_GPA_RANGE; - vcpu->run->hypercall.args[0] =3D gfn_to_gpa(gfn); - vcpu->run->hypercall.args[1] =3D npages; - vcpu->run->hypercall.args[2] =3D entry_start.operation =3D=3D VMGEXIT_PS= C_OP_PRIVATE + vcpu->common->run->exit_reason =3D KVM_EXIT_HYPERCALL; + vcpu->common->run->hypercall.nr =3D KVM_HC_MAP_GPA_RANGE; + vcpu->common->run->hypercall.args[0] =3D gfn_to_gpa(gfn); + vcpu->common->run->hypercall.args[1] =3D npages; + vcpu->common->run->hypercall.args[2] =3D entry_start.operation =3D=3D VM= GEXIT_PSC_OP_PRIVATE ? 
KVM_MAP_GPA_RANGE_ENCRYPTED : KVM_MAP_GPA_RANGE_DECRYPTED; - vcpu->run->hypercall.args[2] |=3D entry_start.pagesize + vcpu->common->run->hypercall.args[2] |=3D entry_start.pagesize ? KVM_MAP_GPA_RANGE_PAGE_SZ_2M : KVM_MAP_GPA_RANGE_PAGE_SZ_4K; vcpu->arch.complete_userspace_io =3D snp_complete_one_psc; @@ -4578,10 +4578,10 @@ static int sev_handle_vmgexit_msr_protocol(struct v= cpu_svm *svm) return ret; =20 out_terminate: - vcpu->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; - vcpu->run->system_event.type =3D KVM_SYSTEM_EVENT_SEV_TERM; - vcpu->run->system_event.ndata =3D 1; - vcpu->run->system_event.data[0] =3D control->ghcb_gpa; + vcpu->common->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; + vcpu->common->run->system_event.type =3D KVM_SYSTEM_EVENT_SEV_TERM; + vcpu->common->run->system_event.ndata =3D 1; + vcpu->common->run->system_event.data[0] =3D control->ghcb_gpa; =20 return 0; } @@ -4656,7 +4656,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) svm->sev_es.ghcb_sa); break; case SVM_VMGEXIT_NMI_COMPLETE: - ++vcpu->stat.nmi_window_exits; + ++vcpu->common->stat.nmi_window_exits; svm->nmi_masked =3D false; kvm_make_request(KVM_REQ_EVENT, vcpu); ret =3D 1; @@ -4695,10 +4695,10 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) case SVM_VMGEXIT_TERM_REQUEST: pr_info("SEV-ES guest requested termination: reason %#llx info %#llx\n", control->exit_info_1, control->exit_info_2); - vcpu->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; - vcpu->run->system_event.type =3D KVM_SYSTEM_EVENT_SEV_TERM; - vcpu->run->system_event.ndata =3D 1; - vcpu->run->system_event.data[0] =3D control->ghcb_gpa; + vcpu->common->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; + vcpu->common->run->system_event.type =3D KVM_SYSTEM_EVENT_SEV_TERM; + vcpu->common->run->system_event.ndata =3D 1; + vcpu->common->run->system_event.data[0] =3D control->ghcb_gpa; break; case SVM_VMGEXIT_PSC: ret =3D setup_vmgexit_scratch(svm, true, control->exit_info_2); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 632c74cb41f4..478cd15bb9f2 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1064,7 +1064,7 @@ void disable_nmi_singlestep(struct vcpu_svm *svm) { svm->nmi_singlestep =3D false; =20 - if (!(svm->vcpu.guest_debug & KVM_GUESTDBG_SINGLESTEP)) { + if (!(svm->vcpu.common->guest_debug & KVM_GUESTDBG_SINGLESTEP)) { /* Clear our flags if they were not set by the guest */ if (!(svm->nmi_singlestep_guest_rflags & X86_EFLAGS_TF)) svm->vmcb->save.rflags &=3D ~X86_EFLAGS_TF; @@ -1554,7 +1554,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int = cpu) struct vcpu_svm *svm =3D to_svm(vcpu); struct svm_cpu_data *sd =3D per_cpu_ptr(&svm_data, cpu); =20 - if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm)) + if (vcpu->common->scheduled_out && !kvm_pause_in_guest(vcpu->kvm)) shrink_ple_window(vcpu); =20 if (sd->current_vmcb !=3D svm->vmcb) { @@ -1574,7 +1574,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu) =20 svm_prepare_host_switch(vcpu); =20 - ++vcpu->stat.host_state_reload; + ++vcpu->common->stat.host_state_reload; } =20 static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu) @@ -1976,8 +1976,8 @@ static void svm_update_exception_bitmap(struct kvm_vc= pu *vcpu) =20 clr_exception_intercept(svm, BP_VECTOR); =20 - if (vcpu->guest_debug & KVM_GUESTDBG_ENABLE) { - if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) + if (vcpu->common->guest_debug & KVM_GUESTDBG_ENABLE) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_SW_BP) set_exception_intercept(svm, BP_VECTOR); } } @@ -2087,10 +2087,10 @@ static int 
npf_interception(struct kvm_vcpu *vcpu) =20 static int db_interception(struct kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; struct vcpu_svm *svm =3D to_svm(vcpu); =20 - if (!(vcpu->guest_debug & + if (!(vcpu->common->guest_debug & (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) && !svm->nmi_singlestep) { u32 payload =3D svm->vmcb->save.dr6 ^ DR6_ACTIVE_LOW; @@ -2104,7 +2104,7 @@ static int db_interception(struct kvm_vcpu *vcpu) kvm_make_request(KVM_REQ_EVENT, vcpu); } =20 - if (vcpu->guest_debug & + if (vcpu->common->guest_debug & (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) { kvm_run->exit_reason =3D KVM_EXIT_DEBUG; kvm_run->debug.arch.dr6 =3D svm->vmcb->save.dr6; @@ -2121,7 +2121,7 @@ static int db_interception(struct kvm_vcpu *vcpu) static int bp_interception(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm =3D to_svm(vcpu); - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; =20 kvm_run->exit_reason =3D KVM_EXIT_DEBUG; kvm_run->debug.arch.pc =3D svm->vmcb->save.cs.base + svm->vmcb->save.rip; @@ -2207,7 +2207,7 @@ static int mc_interception(struct kvm_vcpu *vcpu) =20 static int shutdown_interception(struct kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; struct vcpu_svm *svm =3D to_svm(vcpu); =20 =20 @@ -2238,7 +2238,7 @@ static int io_interception(struct kvm_vcpu *vcpu) int size, in, string; unsigned port; =20 - ++vcpu->stat.io_exits; + ++vcpu->common->stat.io_exits; string =3D (io_info & SVM_IOIO_STR_MASK) !=3D 0; in =3D (io_info & SVM_IOIO_TYPE_MASK) !=3D 0; port =3D io_info >> 16; @@ -2268,7 +2268,7 @@ static int smi_interception(struct kvm_vcpu *vcpu) =20 static int intr_interception(struct kvm_vcpu *vcpu) { - ++vcpu->stat.irq_exits; + ++vcpu->common->stat.irq_exits; return 1; } =20 @@ -2592,7 +2592,7 @@ static int iret_interception(struct kvm_vcpu *vcpu) =20 WARN_ON_ONCE(sev_es_guest(vcpu->kvm)); =20 - ++vcpu->stat.nmi_window_exits; + ++vcpu->common->stat.nmi_window_exits; svm->awaiting_iret_completion =3D true; =20 svm_clr_iret_intercept(svm); @@ -2767,7 +2767,7 @@ static int dr_interception(struct kvm_vcpu *vcpu) if (sev_es_guest(vcpu->kvm)) return 1; =20 - if (vcpu->guest_debug =3D=3D 0) { + if (vcpu->common->guest_debug =3D=3D 0) { /* * No more DR vmexits; force a reload of the debug registers * and reenter on this instruction. 
The next vmexit will @@ -2804,7 +2804,7 @@ static int cr8_write_interception(struct kvm_vcpu *vc= pu) return r; if (cr8_prev <=3D kvm_get_cr8(vcpu)) return r; - vcpu->run->exit_reason =3D KVM_EXIT_SET_TPR; + vcpu->common->run->exit_reason =3D KVM_EXIT_SET_TPR; return 0; } =20 @@ -3231,7 +3231,7 @@ static int interrupt_window_interception(struct kvm_v= cpu *vcpu) */ kvm_clear_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_IRQWIN); =20 - ++vcpu->stat.irq_window_exits; + ++vcpu->common->stat.irq_window_exits; return 1; } =20 @@ -3482,11 +3482,11 @@ static int svm_handle_invalid_exit(struct kvm_vcpu = *vcpu, u64 exit_code) { vcpu_unimpl(vcpu, "svm: unexpected exit reason 0x%llx\n", exit_code); dump_vmcb(vcpu); - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; - vcpu->run->internal.suberror =3D KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASO= N; - vcpu->run->internal.ndata =3D 2; - vcpu->run->internal.data[0] =3D exit_code; - vcpu->run->internal.data[1] =3D vcpu->arch.last_vmentry_cpu; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->internal.suberror =3D KVM_INTERNAL_ERROR_UNEXPECTED_EX= IT_REASON; + vcpu->common->run->internal.ndata =3D 2; + vcpu->common->run->internal.data[0] =3D exit_code; + vcpu->common->run->internal.data[1] =3D vcpu->arch.last_vmentry_cpu; return 0; } =20 @@ -3530,7 +3530,7 @@ static void svm_get_exit_info(struct kvm_vcpu *vcpu, = u32 *reason, static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath) { struct vcpu_svm *svm =3D to_svm(vcpu); - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; u32 exit_code =3D svm->vmcb->control.exit_code; =20 /* SEV-ES guests must use the CR write traps to track CR registers. */ @@ -3612,7 +3612,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu) svm->nmi_masked =3D true; svm_set_iret_intercept(svm); } - ++vcpu->stat.nmi_injections; + ++vcpu->common->stat.nmi_injections; } =20 static bool svm_is_vnmi_pending(struct kvm_vcpu *vcpu) @@ -3643,7 +3643,7 @@ static bool svm_set_vnmi_pending(struct kvm_vcpu *vcp= u) * the NMI is "injected", but for all intents and purposes, passing the * NMI off to hardware counts as injection. */ - ++vcpu->stat.nmi_injections; + ++vcpu->common->stat.nmi_injections; =20 return true; } @@ -3664,7 +3664,7 @@ static void svm_inject_irq(struct kvm_vcpu *vcpu, boo= l reinjected) =20 trace_kvm_inj_virq(vcpu->arch.interrupt.nr, vcpu->arch.interrupt.soft, reinjected); - ++vcpu->stat.irq_injections; + ++vcpu->common->stat.irq_injections; =20 svm->vmcb->control.event_inj =3D vcpu->arch.interrupt.nr | SVM_EVTINJ_VALID | type; @@ -3674,10 +3674,10 @@ void svm_complete_interrupt_delivery(struct kvm_vcp= u *vcpu, int delivery_mode, int trig_mode, int vector) { /* - * apic->apicv_active must be read after vcpu->mode. + * apic->apicv_active must be read after vcpu->common->mode. * Pairs with smp_store_release in vcpu_enter_guest. */ - bool in_guest_mode =3D (smp_load_acquire(&vcpu->mode) =3D=3D IN_GUEST_MOD= E); + bool in_guest_mode =3D (smp_load_acquire(&vcpu->common->mode) =3D=3D IN_G= UEST_MODE); =20 /* Note, this is called iff the local APIC is in-kernel. 
*/ if (!READ_ONCE(vcpu->arch.apic->apicv_active)) { @@ -4292,7 +4292,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_= vcpu *vcpu, /* Track VMRUNs that have made past consistency checking */ if (svm->nested.nested_run_pending && svm->vmcb->control.exit_code !=3D SVM_EXIT_ERR) - ++vcpu->stat.nested_run; + ++vcpu->common->stat.nested_run; =20 svm->nested.nested_run_pending =3D 0; } diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 2392a7ef254d..9e02425a9c4e 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3281,10 +3281,10 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu= *vcpu) } else { pr_debug_ratelimited("%s: no backing for APIC-access address in vmcs12\= n", __func__); - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; - vcpu->run->internal.suberror =3D + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->internal.suberror =3D KVM_INTERNAL_ERROR_EMULATION; - vcpu->run->internal.ndata =3D 0; + vcpu->common->run->internal.ndata =3D 0; return false; } } @@ -3355,10 +3355,10 @@ static bool vmx_get_nested_state_pages(struct kvm_v= cpu *vcpu) if (!nested_get_evmcs_page(vcpu)) { pr_debug_ratelimited("%s: enlightened vmptrld failed\n", __func__); - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; - vcpu->run->internal.suberror =3D + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->internal.suberror =3D KVM_INTERNAL_ERROR_EMULATION; - vcpu->run->internal.ndata =3D 0; + vcpu->common->run->internal.ndata =3D 0; =20 return false; } @@ -4733,7 +4733,7 @@ static void nested_vmx_restore_host_state(struct kvm_= vcpu *vcpu) * and vcpu->arch.dr7 is not squirreled away before the * nested VMENTER (not worth adding a variable in nested_vmx). */ - if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP) kvm_set_dr(vcpu, 7, DR7_FIXED_1); else WARN_ON(kvm_set_dr(vcpu, 7, vmcs_readl(GUEST_DR7))); @@ -6261,11 +6261,11 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcp= u *vcpu, return vcpu->arch.apf.host_apf_flags || vmx_need_pf_intercept(vcpu); else if (is_debug(intr_info) && - vcpu->guest_debug & + vcpu->common->guest_debug & (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) return true; else if (is_breakpoint(intr_info) && - vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) + vcpu->common->guest_debug & KVM_GUESTDBG_USE_SW_BP) return true; else if (is_alignment_check(intr_info) && !vmx_guest_inject_ac(vcpu)) diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c index ec08fa3caf43..64fff7e2d487 100644 --- a/arch/x86/kvm/vmx/posted_intr.c +++ b/arch/x86/kvm/vmx/posted_intr.c @@ -208,7 +208,7 @@ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu) * as blocking and preempted, e.g. if it's preempted between setting * its wait state and manually scheduling out. 
*/ - if (vcpu->preempted) + if (vcpu->common->preempted) pi_set_sn(pi_desc); } =20 diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c index 6fef01e0536e..5487a4abfe0f 100644 --- a/arch/x86/kvm/vmx/sgx.c +++ b/arch/x86/kvm/vmx/sgx.c @@ -394,8 +394,8 @@ int handle_encls(struct kvm_vcpu *vcpu) if (leaf =3D=3D EINIT) return handle_encls_einit(vcpu); WARN_ONCE(1, "unexpected exit on ENCLS[%u]", leaf); - vcpu->run->exit_reason =3D KVM_EXIT_UNKNOWN; - vcpu->run->hw.hardware_exit_reason =3D EXIT_REASON_ENCLS; + vcpu->common->run->exit_reason =3D KVM_EXIT_UNKNOWN; + vcpu->common->run->hw.hardware_exit_reason =3D EXIT_REASON_ENCLS; return 0; } return 1; diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index f18c2d8c7476..3af12f593f57 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -887,7 +887,7 @@ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu) */ if (enable_vmware_backdoor) eb |=3D (1u << GP_VECTOR); - if ((vcpu->guest_debug & + if ((vcpu->common->guest_debug & (KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP)) =3D=3D (KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP)) eb |=3D 1u << BP_VECTOR; @@ -1362,7 +1362,7 @@ static void vmx_prepare_switch_to_host(struct vcpu_vm= x *vmx) =20 host_state =3D &vmx->loaded_vmcs->host_state; =20 - ++vmx->vcpu.stat.host_state_reload; + ++vmx->vcpu.common->stat.host_state_reload; =20 #ifdef CONFIG_X86_64 rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base); @@ -1519,7 +1519,7 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) { struct vcpu_vmx *vmx =3D to_vmx(vcpu); =20 - if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm)) + if (vcpu->common->scheduled_out && !kvm_pause_in_guest(vcpu->kvm)) shrink_ple_window(vcpu); =20 vmx_vcpu_load_vmcs(vcpu, cpu, NULL); @@ -4174,7 +4174,7 @@ static inline void kvm_vcpu_trigger_posted_interrupt(= struct kvm_vcpu *vcpu, int pi_vec) { #ifdef CONFIG_SMP - if (vcpu->mode =3D=3D IN_GUEST_MODE) { + if (vcpu->common->mode =3D=3D IN_GUEST_MODE) { /* * The vector of the virtual has already been set in the PIR. * Send a notification event to deliver the virtual interrupt @@ -4229,14 +4229,14 @@ static int vmx_deliver_nested_posted_interrupt(stru= ct kvm_vcpu *vcpu, kvm_make_request(KVM_REQ_EVENT, vcpu); =20 /* - * This pairs with the smp_mb_*() after setting vcpu->mode in + * This pairs with the smp_mb_*() after setting vcpu->common->mode in * vcpu_enter_guest() to guarantee the vCPU sees the event * request if triggering a posted interrupt "fails" because - * vcpu->mode !=3D IN_GUEST_MODE. The extra barrier is needed as + * vcpu->common->mode !=3D IN_GUEST_MODE. The extra barrier is needed as * the smb_wmb() in kvm_make_request() only ensures everything * done before making the request is visible when the request * is visible, it doesn't ensure ordering between the store to - * vcpu->requests and the load from vcpu->mode. + * vcpu->requests and the load from vcpu->common->mode. */ smp_mb__after_atomic(); =20 @@ -4275,9 +4275,9 @@ static int vmx_deliver_posted_interrupt(struct kvm_vc= pu *vcpu, int vector) =20 /* * The implied barrier in pi_test_and_set_on() pairs with the smp_mb_*() - * after setting vcpu->mode in vcpu_enter_guest(), thus the vCPU is + * after setting vcpu->common->mode in vcpu_enter_guest(), thus the vCPU = is * guaranteed to see PID.ON=3D1 and sync the PIR to IRR if triggering a - * posted interrupt "fails" because vcpu->mode !=3D IN_GUEST_MODE. + * posted interrupt "fails" because vcpu->common->mode !=3D IN_GUEST_MODE. 
*/ kvm_vcpu_trigger_posted_interrupt(vcpu, POSTED_INTR_VECTOR); return 0; @@ -4953,7 +4953,7 @@ void vmx_inject_irq(struct kvm_vcpu *vcpu, bool reinj= ected) =20 trace_kvm_inj_virq(irq, vcpu->arch.interrupt.soft, reinjected); =20 - ++vcpu->stat.irq_injections; + ++vcpu->common->stat.irq_injections; if (vmx->rmode.vm86_active) { int inc_eip =3D 0; if (vcpu->arch.interrupt.soft) @@ -4990,7 +4990,7 @@ void vmx_inject_nmi(struct kvm_vcpu *vcpu) vmx->loaded_vmcs->vnmi_blocked_time =3D 0; } =20 - ++vcpu->stat.nmi_injections; + ++vcpu->common->stat.nmi_injections; vmx->loaded_vmcs->nmi_known_unmasked =3D false; =20 if (vmx->rmode.vm86_active) { @@ -5129,11 +5129,11 @@ static bool rmode_exception(struct kvm_vcpu *vcpu, = int vec) */ to_vmx(vcpu)->vcpu.arch.event_exit_inst_len =3D vmcs_read32(VM_EXIT_INSTRUCTION_LEN); - if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_SW_BP) return false; fallthrough; case DB_VECTOR: - return !(vcpu->guest_debug & + return !(vcpu->common->guest_debug & (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)); case DE_VECTOR: case OF_VECTOR: @@ -5204,7 +5204,7 @@ bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu) static int handle_exception_nmi(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx =3D to_vmx(vcpu); - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; u32 intr_info, ex_no, error_code; unsigned long cr2, dr6; u32 vect_info; @@ -5270,13 +5270,13 @@ static int handle_exception_nmi(struct kvm_vcpu *vc= pu) */ if ((vect_info & VECTORING_INFO_VALID_MASK) && !(is_page_fault(intr_info) && !(error_code & PFERR_RSVD_MASK))) { - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; - vcpu->run->internal.suberror =3D KVM_INTERNAL_ERROR_SIMUL_EX; - vcpu->run->internal.ndata =3D 4; - vcpu->run->internal.data[0] =3D vect_info; - vcpu->run->internal.data[1] =3D intr_info; - vcpu->run->internal.data[2] =3D error_code; - vcpu->run->internal.data[3] =3D vcpu->arch.last_vmentry_cpu; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->internal.suberror =3D KVM_INTERNAL_ERROR_SIMUL_EX; + vcpu->common->run->internal.ndata =3D 4; + vcpu->common->run->internal.data[0] =3D vect_info; + vcpu->common->run->internal.data[1] =3D intr_info; + vcpu->common->run->internal.data[2] =3D error_code; + vcpu->common->run->internal.data[3] =3D vcpu->arch.last_vmentry_cpu; return 0; } =20 @@ -5302,7 +5302,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu) switch (ex_no) { case DB_VECTOR: dr6 =3D vmx_get_exit_qual(vcpu); - if (!(vcpu->guest_debug & + if (!(vcpu->common->guest_debug & (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))) { /* * If the #DB was due to ICEBP, a.k.a. 
INT1, skip the @@ -5377,14 +5377,14 @@ static int handle_exception_nmi(struct kvm_vcpu *vc= pu) =20 static __always_inline int handle_external_interrupt(struct kvm_vcpu *vcpu) { - ++vcpu->stat.irq_exits; + ++vcpu->common->stat.irq_exits; return 1; } =20 static int handle_triple_fault(struct kvm_vcpu *vcpu) { - vcpu->run->exit_reason =3D KVM_EXIT_SHUTDOWN; - vcpu->mmio_needed =3D 0; + vcpu->common->run->exit_reason =3D KVM_EXIT_SHUTDOWN; + vcpu->common->mmio_needed =3D 0; return 0; } =20 @@ -5397,7 +5397,7 @@ static int handle_io(struct kvm_vcpu *vcpu) exit_qualification =3D vmx_get_exit_qual(vcpu); string =3D (exit_qualification & 16) !=3D 0; =20 - ++vcpu->stat.io_exits; + ++vcpu->common->stat.io_exits; =20 if (string) return kvm_emulate_instruction(vcpu, 0); @@ -5516,7 +5516,7 @@ static int handle_cr(struct kvm_vcpu *vcpu) * KVM_GUESTDBG_SINGLESTEP-triggered * KVM_EXIT_DEBUG here. */ - vcpu->run->exit_reason =3D KVM_EXIT_SET_TPR; + vcpu->common->run->exit_reason =3D KVM_EXIT_SET_TPR; return 0; } } @@ -5549,7 +5549,7 @@ static int handle_cr(struct kvm_vcpu *vcpu) default: break; } - vcpu->run->exit_reason =3D 0; + vcpu->common->run->exit_reason =3D 0; vcpu_unimpl(vcpu, "unhandled control register: op %d cr %d\n", (int)(exit_qualification >> 4) & 3, cr); return 0; @@ -5578,12 +5578,12 @@ static int handle_dr(struct kvm_vcpu *vcpu) * need to emulate the latter, either for the host or the * guest debugging itself. */ - if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) { - vcpu->run->debug.arch.dr6 =3D DR6_BD | DR6_ACTIVE_LOW; - vcpu->run->debug.arch.dr7 =3D dr7; - vcpu->run->debug.arch.pc =3D kvm_get_linear_rip(vcpu); - vcpu->run->debug.arch.exception =3D DB_VECTOR; - vcpu->run->exit_reason =3D KVM_EXIT_DEBUG; + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP) { + vcpu->common->run->debug.arch.dr6 =3D DR6_BD | DR6_ACTIVE_LOW; + vcpu->common->run->debug.arch.dr7 =3D dr7; + vcpu->common->run->debug.arch.pc =3D kvm_get_linear_rip(vcpu); + vcpu->common->run->debug.arch.exception =3D DB_VECTOR; + vcpu->common->run->exit_reason =3D KVM_EXIT_DEBUG; return 0; } else { kvm_queue_exception_p(vcpu, DB_VECTOR, DR6_BD); @@ -5591,7 +5591,7 @@ static int handle_dr(struct kvm_vcpu *vcpu) } } =20 - if (vcpu->guest_debug =3D=3D 0) { + if (vcpu->common->guest_debug =3D=3D 0) { exec_controls_clearbit(to_vmx(vcpu), CPU_BASED_MOV_DR_EXITING); =20 /* @@ -5651,7 +5651,7 @@ static int handle_interrupt_window(struct kvm_vcpu *v= cpu) =20 kvm_make_request(KVM_REQ_EVENT, vcpu); =20 - ++vcpu->stat.irq_window_exits; + ++vcpu->common->stat.irq_window_exits; return 1; } =20 @@ -5848,7 +5848,7 @@ static int handle_nmi_window(struct kvm_vcpu *vcpu) return -EIO; =20 exec_controls_clearbit(to_vmx(vcpu), CPU_BASED_NMI_WINDOW_EXITING); - ++vcpu->stat.nmi_window_exits; + ++vcpu->common->stat.nmi_window_exits; kvm_make_request(KVM_REQ_EVENT, vcpu); =20 return 1; @@ -6075,7 +6075,7 @@ static int handle_notify(struct kvm_vcpu *vcpu) unsigned long exit_qual =3D vmx_get_exit_qual(vcpu); bool context_invalid =3D exit_qual & NOTIFY_VM_CONTEXT_INVALID; =20 - ++vcpu->stat.notify_window_exits; + ++vcpu->common->stat.notify_window_exits; =20 /* * Notify VM exit happened while executing iret from NMI, @@ -6087,8 +6087,8 @@ static int handle_notify(struct kvm_vcpu *vcpu) =20 if (vcpu->kvm->arch.notify_vmexit_flags & KVM_X86_NOTIFY_VMEXIT_USER || context_invalid) { - vcpu->run->exit_reason =3D KVM_EXIT_NOTIFY; - vcpu->run->notify.flags =3D context_invalid ? 
+ vcpu->common->run->exit_reason =3D KVM_EXIT_NOTIFY; + vcpu->common->run->notify.flags =3D context_invalid ? KVM_NOTIFY_CONTEXT_INVALID : 0; return 0; } @@ -6516,19 +6516,19 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu,= fastpath_t exit_fastpath) =20 if (exit_reason.failed_vmentry) { dump_vmcs(vcpu); - vcpu->run->exit_reason =3D KVM_EXIT_FAIL_ENTRY; - vcpu->run->fail_entry.hardware_entry_failure_reason + vcpu->common->run->exit_reason =3D KVM_EXIT_FAIL_ENTRY; + vcpu->common->run->fail_entry.hardware_entry_failure_reason =3D exit_reason.full; - vcpu->run->fail_entry.cpu =3D vcpu->arch.last_vmentry_cpu; + vcpu->common->run->fail_entry.cpu =3D vcpu->arch.last_vmentry_cpu; return 0; } =20 if (unlikely(vmx->fail)) { dump_vmcs(vcpu); - vcpu->run->exit_reason =3D KVM_EXIT_FAIL_ENTRY; - vcpu->run->fail_entry.hardware_entry_failure_reason + vcpu->common->run->exit_reason =3D KVM_EXIT_FAIL_ENTRY; + vcpu->common->run->fail_entry.hardware_entry_failure_reason =3D vmcs_read32(VM_INSTRUCTION_ERROR); - vcpu->run->fail_entry.cpu =3D vcpu->arch.last_vmentry_cpu; + vcpu->common->run->fail_entry.cpu =3D vcpu->arch.last_vmentry_cpu; return 0; } =20 @@ -6548,17 +6548,17 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu,= fastpath_t exit_fastpath) exit_reason.basic !=3D EXIT_REASON_NOTIFY)) { int ndata =3D 3; =20 - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; - vcpu->run->internal.suberror =3D KVM_INTERNAL_ERROR_DELIVERY_EV; - vcpu->run->internal.data[0] =3D vectoring_info; - vcpu->run->internal.data[1] =3D exit_reason.full; - vcpu->run->internal.data[2] =3D vmx_get_exit_qual(vcpu); + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->internal.suberror =3D KVM_INTERNAL_ERROR_DELIVERY_EV; + vcpu->common->run->internal.data[0] =3D vectoring_info; + vcpu->common->run->internal.data[1] =3D exit_reason.full; + vcpu->common->run->internal.data[2] =3D vmx_get_exit_qual(vcpu); if (exit_reason.basic =3D=3D EXIT_REASON_EPT_MISCONFIG) { - vcpu->run->internal.data[ndata++] =3D + vcpu->common->run->internal.data[ndata++] =3D vmcs_read64(GUEST_PHYSICAL_ADDRESS); } - vcpu->run->internal.data[ndata++] =3D vcpu->arch.last_vmentry_cpu; - vcpu->run->internal.ndata =3D ndata; + vcpu->common->run->internal.data[ndata++] =3D vcpu->arch.last_vmentry_cp= u; + vcpu->common->run->internal.ndata =3D ndata; return 0; } =20 @@ -6612,12 +6612,12 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu,= fastpath_t exit_fastpath) vcpu_unimpl(vcpu, "vmx: unexpected exit reason 0x%x\n", exit_reason.full); dump_vmcs(vcpu); - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; - vcpu->run->internal.suberror =3D + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->internal.suberror =3D KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON; - vcpu->run->internal.ndata =3D 2; - vcpu->run->internal.data[0] =3D exit_reason.full; - vcpu->run->internal.data[1] =3D vcpu->arch.last_vmentry_cpu; + vcpu->common->run->internal.ndata =3D 2; + vcpu->common->run->internal.data[0] =3D exit_reason.full; + vcpu->common->run->internal.data[1] =3D vcpu->arch.last_vmentry_cpu; return 0; } =20 @@ -6631,9 +6631,9 @@ int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t= exit_fastpath) */ if (to_vmx(vcpu)->exit_reason.bus_lock_detected) { if (ret > 0) - vcpu->run->exit_reason =3D KVM_EXIT_X86_BUS_LOCK; + vcpu->common->run->exit_reason =3D KVM_EXIT_X86_BUS_LOCK; =20 - vcpu->run->flags |=3D KVM_RUN_X86_BUS_LOCK; + vcpu->common->run->flags |=3D KVM_RUN_X86_BUS_LOCK; return 0; } return ret; @@ -6680,7 
+6680,7 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vc= pu) return; } =20 - vcpu->stat.l1d_flush++; + vcpu->common->stat.l1d_flush++; =20 if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) { native_wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH); @@ -7401,7 +7401,7 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool f= orce_immediate_exit) * vmentry fails as it then expects bit 14 (BS) in pending debug * exceptions being set, but that's not correct for the guest debugging * case. */ - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) vmx_set_interrupt_shadow(vcpu, 0); =20 kvm_load_guest_xsave_state(vcpu); @@ -7458,7 +7458,7 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool f= orce_immediate_exit) */ if (vmx->nested.nested_run_pending && !vmx->exit_reason.failed_vmentry) - ++vcpu->stat.nested_run; + ++vcpu->common->stat.nested_run; =20 vmx->nested.nested_run_pending =3D 0; } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 3efc3a89499c..e646b4042963 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -764,7 +764,7 @@ static int complete_emulated_insn_gp(struct kvm_vcpu *v= cpu, int err) =20 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fa= ult) { - ++vcpu->stat.pf_guest; + ++vcpu->common->stat.pf_guest; =20 /* * Async #PF in L2 is always forwarded to L1 as a VM-Exit regardless of @@ -1318,7 +1318,7 @@ static void kvm_update_dr0123(struct kvm_vcpu *vcpu) { int i; =20 - if (!(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP)) { + if (!(vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP)) { for (i =3D 0; i < KVM_NR_DB_REGS; i++) vcpu->arch.eff_db[i] =3D vcpu->arch.db[i]; } @@ -1328,7 +1328,7 @@ void kvm_update_dr7(struct kvm_vcpu *vcpu) { unsigned long dr7; =20 - if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP) dr7 =3D vcpu->arch.guest_debug_dr7; else dr7 =3D vcpu->arch.dr7; @@ -1358,7 +1358,7 @@ int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigne= d long val) switch (dr) { case 0 ... 
3: vcpu->arch.db[array_index_nospec(dr, size)] =3D val; - if (!(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP)) + if (!(vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP)) vcpu->arch.eff_db[dr] =3D val; break; case 4: @@ -1970,15 +1970,15 @@ EXPORT_SYMBOL_GPL(kvm_set_msr); =20 static void complete_userspace_rdmsr(struct kvm_vcpu *vcpu) { - if (!vcpu->run->msr.error) { - kvm_rax_write(vcpu, (u32)vcpu->run->msr.data); - kvm_rdx_write(vcpu, vcpu->run->msr.data >> 32); + if (!vcpu->common->run->msr.error) { + kvm_rax_write(vcpu, (u32)vcpu->common->run->msr.data); + kvm_rdx_write(vcpu, vcpu->common->run->msr.data >> 32); } } =20 static int complete_emulated_msr_access(struct kvm_vcpu *vcpu) { - return complete_emulated_insn_gp(vcpu, vcpu->run->msr.error); + return complete_emulated_insn_gp(vcpu, vcpu->common->run->msr.error); } =20 static int complete_emulated_rdmsr(struct kvm_vcpu *vcpu) @@ -1989,7 +1989,7 @@ static int complete_emulated_rdmsr(struct kvm_vcpu *v= cpu) =20 static int complete_fast_msr_access(struct kvm_vcpu *vcpu) { - return kvm_x86_call(complete_emulated_msr)(vcpu, vcpu->run->msr.error); + return kvm_x86_call(complete_emulated_msr)(vcpu, vcpu->common->run->msr.e= rror); } =20 static int complete_fast_rdmsr(struct kvm_vcpu *vcpu) @@ -2021,12 +2021,12 @@ static int kvm_msr_user_space(struct kvm_vcpu *vcpu= , u32 index, if (!(vcpu->kvm->arch.user_space_msr_mask & msr_reason)) return 0; =20 - vcpu->run->exit_reason =3D exit_reason; - vcpu->run->msr.error =3D 0; - memset(vcpu->run->msr.pad, 0, sizeof(vcpu->run->msr.pad)); - vcpu->run->msr.reason =3D msr_reason; - vcpu->run->msr.index =3D index; - vcpu->run->msr.data =3D data; + vcpu->common->run->exit_reason =3D exit_reason; + vcpu->common->run->msr.error =3D 0; + memset(vcpu->common->run->msr.pad, 0, sizeof(vcpu->common->run->msr.pad)); + vcpu->common->run->msr.reason =3D msr_reason; + vcpu->common->run->msr.index =3D index; + vcpu->common->run->msr.data =3D data; vcpu->arch.complete_userspace_io =3D completion; =20 return 1; @@ -2126,7 +2126,7 @@ EXPORT_SYMBOL_GPL(kvm_emulate_monitor); static inline bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu) { xfer_to_guest_mode_prepare(); - return vcpu->mode =3D=3D EXITING_GUEST_MODE || kvm_request_pending(vcpu) = || + return vcpu->common->mode =3D=3D EXITING_GUEST_MODE || kvm_request_pendin= g(vcpu) || xfer_to_guest_mode_work_pending(); } =20 @@ -3596,7 +3596,7 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu) =20 static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu) { - ++vcpu->stat.tlb_flush; + ++vcpu->common->stat.tlb_flush; kvm_x86_call(flush_tlb_all)(vcpu); =20 /* Flushing all ASIDs flushes the current ASID... 
*/ @@ -3605,7 +3605,7 @@ static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *v= cpu) =20 static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu) { - ++vcpu->stat.tlb_flush; + ++vcpu->common->stat.tlb_flush; =20 if (!tdp_enabled) { /* @@ -3630,7 +3630,7 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu = *vcpu) =20 static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu) { - ++vcpu->stat.tlb_flush; + ++vcpu->common->stat.tlb_flush; kvm_x86_call(flush_tlb_current)(vcpu); } =20 @@ -4993,7 +4993,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cp= u) =20 vcpu->arch.l1tf_flush_l1d =3D true; =20 - if (vcpu->scheduled_out && pmu->version && pmu->event_count) { + if (vcpu->common->scheduled_out && pmu->version && pmu->event_count) { pmu->need_cleanup =3D true; kvm_make_request(KVM_REQ_PMU, vcpu); } @@ -5065,11 +5065,11 @@ static void kvm_steal_time_set_preempted(struct kvm= _vcpu *vcpu) * preempted if and only if the VM-Exit was due to a host interrupt. */ if (!vcpu->arch.at_instruction_boundary) { - vcpu->stat.preemption_other++; + vcpu->common->stat.preemption_other++; return; } =20 - vcpu->stat.preemption_reported++; + vcpu->common->stat.preemption_reported++; if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) return; =20 @@ -5100,7 +5100,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { int idx; =20 - if (vcpu->preempted) { + if (vcpu->common->preempted) { vcpu->arch.preempted_in_kernel =3D kvm_arch_vcpu_in_kernel(vcpu); =20 /* @@ -7867,10 +7867,10 @@ struct read_write_emulator_ops { =20 static int read_prepare(struct kvm_vcpu *vcpu, void *val, int bytes) { - if (vcpu->mmio_read_completed) { + if (vcpu->common->mmio_read_completed) { trace_kvm_mmio(KVM_TRACE_MMIO_READ, bytes, - vcpu->mmio_fragments[0].gpa, val); - vcpu->mmio_read_completed =3D 0; + vcpu->common->mmio_fragments[0].gpa, val); + vcpu->common->mmio_read_completed =3D 0; return 1; } =20 @@ -7905,9 +7905,9 @@ static int read_exit_mmio(struct kvm_vcpu *vcpu, gpa_= t gpa, static int write_exit_mmio(struct kvm_vcpu *vcpu, gpa_t gpa, void *val, int bytes) { - struct kvm_mmio_fragment *frag =3D &vcpu->mmio_fragments[0]; + struct kvm_mmio_fragment *frag =3D &vcpu->common->mmio_fragments[0]; =20 - memcpy(vcpu->run->mmio.data, frag->data, min(8u, frag->len)); + memcpy(vcpu->common->run->mmio.data, frag->data, min(8u, frag->len)); return X86EMUL_CONTINUE; } =20 @@ -7968,8 +7968,8 @@ static int emulator_read_write_onepage(unsigned long = addr, void *val, bytes -=3D handled; val +=3D handled; =20 - WARN_ON(vcpu->mmio_nr_fragments >=3D KVM_MAX_MMIO_FRAGMENTS); - frag =3D &vcpu->mmio_fragments[vcpu->mmio_nr_fragments++]; + WARN_ON(vcpu->common->mmio_nr_fragments >=3D KVM_MAX_MMIO_FRAGMENTS); + frag =3D &vcpu->common->mmio_fragments[vcpu->common->mmio_nr_fragments++]; frag->gpa =3D gpa; frag->data =3D val; frag->len =3D bytes; @@ -7990,7 +7990,7 @@ static int emulator_read_write(struct x86_emulate_ctx= t *ctxt, ops->read_write_prepare(vcpu, val, bytes)) return X86EMUL_CONTINUE; =20 - vcpu->mmio_nr_fragments =3D 0; + vcpu->common->mmio_nr_fragments =3D 0; =20 /* Crossing a page boundary? 
*/ if (((addr + bytes - 1) ^ addr) & PAGE_MASK) { @@ -8014,18 +8014,18 @@ static int emulator_read_write(struct x86_emulate_c= txt *ctxt, if (rc !=3D X86EMUL_CONTINUE) return rc; =20 - if (!vcpu->mmio_nr_fragments) + if (!vcpu->common->mmio_nr_fragments) return rc; =20 - gpa =3D vcpu->mmio_fragments[0].gpa; + gpa =3D vcpu->common->mmio_fragments[0].gpa; =20 - vcpu->mmio_needed =3D 1; - vcpu->mmio_cur_fragment =3D 0; + vcpu->common->mmio_needed =3D 1; + vcpu->common->mmio_cur_fragment =3D 0; =20 - vcpu->run->mmio.len =3D min(8u, vcpu->mmio_fragments[0].len); - vcpu->run->mmio.is_write =3D vcpu->mmio_is_write =3D ops->write; - vcpu->run->exit_reason =3D KVM_EXIT_MMIO; - vcpu->run->mmio.phys_addr =3D gpa; + vcpu->common->run->mmio.len =3D min(8u, vcpu->common->mmio_fragments[0].l= en); + vcpu->common->run->mmio.is_write =3D vcpu->common->mmio_is_write =3D ops-= >write; + vcpu->common->run->exit_reason =3D KVM_EXIT_MMIO; + vcpu->common->run->mmio.phys_addr =3D gpa; =20 return ops->read_write_exit_mmio(vcpu, gpa, val, bytes); } @@ -8178,12 +8178,12 @@ static int emulator_pio_in_out(struct kvm_vcpu *vcp= u, int size, else memcpy(vcpu->arch.pio_data, data, size * count); =20 - vcpu->run->exit_reason =3D KVM_EXIT_IO; - vcpu->run->io.direction =3D in ? KVM_EXIT_IO_IN : KVM_EXIT_IO_OUT; - vcpu->run->io.size =3D size; - vcpu->run->io.data_offset =3D KVM_PIO_PAGE_OFFSET * PAGE_SIZE; - vcpu->run->io.count =3D count; - vcpu->run->io.port =3D port; + vcpu->common->run->exit_reason =3D KVM_EXIT_IO; + vcpu->common->run->io.direction =3D in ? KVM_EXIT_IO_IN : KVM_EXIT_IO_OUT; + vcpu->common->run->io.size =3D size; + vcpu->common->run->io.data_offset =3D KVM_PIO_PAGE_OFFSET * PAGE_SIZE; + vcpu->common->run->io.count =3D count; + vcpu->common->run->io.port =3D port; return 0; } =20 @@ -8767,7 +8767,7 @@ EXPORT_SYMBOL_GPL(kvm_inject_realmode_interrupt); static void prepare_emulation_failure_exit(struct kvm_vcpu *vcpu, u64 *dat= a, u8 ndata, u8 *insn_bytes, u8 insn_size) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; u64 info[5]; u8 info_start; =20 @@ -8839,7 +8839,7 @@ static int handle_emulation_failure(struct kvm_vcpu *= vcpu, int emulation_type) { struct kvm *kvm =3D vcpu->kvm; =20 - ++vcpu->stat.insn_emulation_fail; + ++vcpu->common->stat.insn_emulation_fail; trace_kvm_emulate_insn_failed(vcpu); =20 if (emulation_type & EMULTYPE_VMWARE_GP) { @@ -8998,9 +8998,9 @@ static int kvm_vcpu_check_hw_bp(unsigned long addr, u= 32 type, u32 dr7, =20 static int kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; =20 - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) { kvm_run->debug.arch.dr6 =3D DR6_BS | DR6_ACTIVE_LOW; kvm_run->debug.arch.pc =3D kvm_get_linear_rip(vcpu); kvm_run->debug.arch.exception =3D DB_VECTOR; @@ -9074,9 +9074,9 @@ static bool kvm_vcpu_check_code_breakpoint(struct kvm= _vcpu *vcpu, EMULTYPE_TRAP_UD | EMULTYPE_VMWARE_GP | EMULTYPE_PF)) return false; =20 - if (unlikely(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) && + if (unlikely(vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP) && (vcpu->arch.guest_debug_dr7 & DR7_BP_EN_MASK)) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; unsigned long eip =3D kvm_get_linear_rip(vcpu); u32 dr6 =3D kvm_vcpu_check_hw_bp(eip, 0, vcpu->arch.guest_debug_dr7, @@ -9161,7 +9161,7 @@ int x86_decode_emulated_instruction(struct kvm_vcpu *= vcpu, 
int emulation_type, r =3D x86_decode_insn(ctxt, insn, insn_len, emulation_type); =20 trace_kvm_emulate_insn_start(vcpu); - ++vcpu->stat.insn_emulation; + ++vcpu->common->stat.insn_emulation; =20 return r; } @@ -9290,8 +9290,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gp= a_t cr2_or_gpa, } =20 if (ctxt->have_exception) { - WARN_ON_ONCE(vcpu->mmio_needed && !vcpu->mmio_is_write); - vcpu->mmio_needed =3D false; + WARN_ON_ONCE(vcpu->common->mmio_needed && !vcpu->common->mmio_is_write); + vcpu->common->mmio_needed =3D false; r =3D 1; inject_emulated_exception(vcpu); } else if (vcpu->arch.pio.count) { @@ -9303,10 +9303,10 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, = gpa_t cr2_or_gpa, vcpu->arch.complete_userspace_io =3D complete_emulated_pio; } r =3D 0; - } else if (vcpu->mmio_needed) { - ++vcpu->stat.mmio_exits; + } else if (vcpu->common->mmio_needed) { + ++vcpu->common->stat.mmio_exits; =20 - if (!vcpu->mmio_is_write) + if (!vcpu->common->mmio_is_write) writeback =3D false; r =3D 0; vcpu->arch.complete_userspace_io =3D complete_emulated_mmio; @@ -9335,7 +9335,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gp= a_t cr2_or_gpa, if (ctxt->is_branch) kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIR= ED); kvm_rip_write(vcpu, ctxt->eip); - if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP))) + if (r && (ctxt->tf || (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLES= TEP))) r =3D kvm_vcpu_do_singlestep(vcpu); kvm_x86_call(update_emulated_instruction)(vcpu); __kvm_set_rflags(vcpu, ctxt->eflags); @@ -9933,12 +9933,12 @@ static int __kvm_emulate_halt(struct kvm_vcpu *vcpu= , int state, int reason) * managed by userspace, in which case userspace is responsible for * handling wake events. */ - ++vcpu->stat.halt_exits; + ++vcpu->common->stat.halt_exits; if (lapic_in_kernel(vcpu)) { vcpu->arch.mp_state =3D state; return 1; } else { - vcpu->run->exit_reason =3D reason; + vcpu->common->run->exit_reason =3D reason; return 0; } } @@ -10073,7 +10073,7 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, = unsigned long dest_id) struct kvm_vcpu *target =3D NULL; struct kvm_apic_map *map; =20 - vcpu->stat.directed_yield_attempted++; + vcpu->common->stat.directed_yield_attempted++; =20 if (single_task_running()) goto no_yield; @@ -10086,7 +10086,7 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, = unsigned long dest_id) =20 rcu_read_unlock(); =20 - if (!target || !READ_ONCE(target->ready)) + if (!target || !READ_ONCE(target->common->ready)) goto no_yield; =20 /* Ignore requests to yield to self */ @@ -10096,7 +10096,7 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, = unsigned long dest_id) if (kvm_vcpu_yield_to(target) <=3D 0) goto no_yield; =20 - vcpu->stat.directed_yield_successful++; + vcpu->common->stat.directed_yield_successful++; =20 no_yield: return; @@ -10104,12 +10104,12 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu= , unsigned long dest_id) =20 static int complete_hypercall_exit(struct kvm_vcpu *vcpu) { - u64 ret =3D vcpu->run->hypercall.ret; + u64 ret =3D vcpu->common->run->hypercall.ret; =20 if (!is_64_bit_mode(vcpu)) ret =3D (u32)ret; kvm_rax_write(vcpu, ret); - ++vcpu->stat.hypercalls; + ++vcpu->common->stat.hypercalls; return kvm_skip_emulated_instruction(vcpu); } =20 @@ -10180,16 +10180,16 @@ unsigned long __kvm_emulate_hypercall(struct kvm_= vcpu *vcpu, unsigned long nr, break; } =20 - vcpu->run->exit_reason =3D KVM_EXIT_HYPERCALL; - vcpu->run->hypercall.nr =3D KVM_HC_MAP_GPA_RANGE; - 
vcpu->run->hypercall.args[0] =3D gpa; - vcpu->run->hypercall.args[1] =3D npages; - vcpu->run->hypercall.args[2] =3D attrs; - vcpu->run->hypercall.flags =3D 0; + vcpu->common->run->exit_reason =3D KVM_EXIT_HYPERCALL; + vcpu->common->run->hypercall.nr =3D KVM_HC_MAP_GPA_RANGE; + vcpu->common->run->hypercall.args[0] =3D gpa; + vcpu->common->run->hypercall.args[1] =3D npages; + vcpu->common->run->hypercall.args[2] =3D attrs; + vcpu->common->run->hypercall.flags =3D 0; if (op_64_bit) - vcpu->run->hypercall.flags |=3D KVM_EXIT_HYPERCALL_LONG_MODE; + vcpu->common->run->hypercall.flags |=3D KVM_EXIT_HYPERCALL_LONG_MODE; =20 - WARN_ON_ONCE(vcpu->run->hypercall.flags & KVM_EXIT_HYPERCALL_MBZ); + WARN_ON_ONCE(vcpu->common->run->hypercall.flags & KVM_EXIT_HYPERCALL_MBZ= ); vcpu->arch.complete_userspace_io =3D complete_hypercall_exit; /* stat is incremented on completion. */ return 0; @@ -10200,7 +10200,7 @@ unsigned long __kvm_emulate_hypercall(struct kvm_vc= pu *vcpu, unsigned long nr, } =20 out: - ++vcpu->stat.hypercalls; + ++vcpu->common->stat.hypercalls; return ret; } EXPORT_SYMBOL_GPL(__kvm_emulate_hypercall); @@ -10263,14 +10263,14 @@ static int emulator_fix_hypercall(struct x86_emul= ate_ctxt *ctxt) =20 static int dm_request_for_irq_injection(struct kvm_vcpu *vcpu) { - return vcpu->run->request_interrupt_window && + return vcpu->common->run->request_interrupt_window && likely(!pic_in_kernel(vcpu->kvm)); } =20 /* Called within kvm->srcu read side. */ static void post_kvm_run_save(struct kvm_vcpu *vcpu) { - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; =20 kvm_run->if_flag =3D kvm_x86_call(get_if_flag)(vcpu); kvm_run->cr8 =3D kvm_get_cr8(vcpu); @@ -10496,7 +10496,7 @@ static int kvm_check_and_inject_events(struct kvm_v= cpu *vcpu, } =20 /* Don't inject interrupts if the user asked to avoid doing so */ - if (vcpu->guest_debug & KVM_GUESTDBG_BLOCKIRQ) + if (vcpu->common->guest_debug & KVM_GUESTDBG_BLOCKIRQ) return 0; =20 /* @@ -10884,7 +10884,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) #endif =20 if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) { - vcpu->run->exit_reason =3D KVM_EXIT_TPR_ACCESS; + vcpu->common->run->exit_reason =3D KVM_EXIT_TPR_ACCESS; r =3D 0; goto out; } @@ -10893,8 +10893,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) kvm_x86_ops.nested_ops->triple_fault(vcpu); =20 if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) { - vcpu->run->exit_reason =3D KVM_EXIT_SHUTDOWN; - vcpu->mmio_needed =3D 0; + vcpu->common->run->exit_reason =3D KVM_EXIT_SHUTDOWN; + vcpu->common->mmio_needed =3D 0; r =3D 0; goto out; } @@ -10921,8 +10921,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) BUG_ON(vcpu->arch.pending_ioapic_eoi > 255); if (test_bit(vcpu->arch.pending_ioapic_eoi, vcpu->arch.ioapic_handled_vectors)) { - vcpu->run->exit_reason =3D KVM_EXIT_IOAPIC_EOI; - vcpu->run->eoi.vector =3D + vcpu->common->run->exit_reason =3D KVM_EXIT_IOAPIC_EOI; + vcpu->common->run->eoi.vector =3D vcpu->arch.pending_ioapic_eoi; r =3D 0; goto out; @@ -10936,24 +10936,24 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) kvm_vcpu_reload_apic_access_page(vcpu); #ifdef CONFIG_KVM_HYPERV if (kvm_check_request(KVM_REQ_HV_CRASH, vcpu)) { - vcpu->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; - vcpu->run->system_event.type =3D KVM_SYSTEM_EVENT_CRASH; - vcpu->run->system_event.ndata =3D 0; + vcpu->common->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; + vcpu->common->run->system_event.type =3D KVM_SYSTEM_EVENT_CRASH; + 
vcpu->common->run->system_event.ndata =3D 0; r =3D 0; goto out; } if (kvm_check_request(KVM_REQ_HV_RESET, vcpu)) { - vcpu->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; - vcpu->run->system_event.type =3D KVM_SYSTEM_EVENT_RESET; - vcpu->run->system_event.ndata =3D 0; + vcpu->common->run->exit_reason =3D KVM_EXIT_SYSTEM_EVENT; + vcpu->common->run->system_event.type =3D KVM_SYSTEM_EVENT_RESET; + vcpu->common->run->system_event.ndata =3D 0; r =3D 0; goto out; } if (kvm_check_request(KVM_REQ_HV_EXIT, vcpu)) { struct kvm_vcpu_hv *hv_vcpu =3D to_hv_vcpu(vcpu); =20 - vcpu->run->exit_reason =3D KVM_EXIT_HYPERV; - vcpu->run->hyperv =3D hv_vcpu->exit; + vcpu->common->run->exit_reason =3D KVM_EXIT_HYPERV; + vcpu->common->run->hyperv =3D hv_vcpu->exit; r =3D 0; goto out; } @@ -10987,7 +10987,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) =20 if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win || kvm_xen_has_interrupt(vcpu)) { - ++vcpu->stat.req_event; + ++vcpu->common->stat.req_event; r =3D kvm_apic_accept_events(vcpu); if (r < 0) { r =3D 0; @@ -11028,8 +11028,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) */ local_irq_disable(); =20 - /* Store vcpu->apicv_active before vcpu->mode. */ - smp_store_release(&vcpu->mode, IN_GUEST_MODE); + /* Store vcpu->apicv_active before vcpu->common->mode. */ + smp_store_release(&vcpu->common->mode, IN_GUEST_MODE); =20 kvm_vcpu_srcu_read_unlock(vcpu); =20 @@ -11058,7 +11058,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) kvm_x86_call(sync_pir_to_irr)(vcpu); =20 if (kvm_vcpu_exit_request(vcpu)) { - vcpu->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; smp_wmb(); local_irq_enable(); preempt_enable(); @@ -11113,7 +11113,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) } =20 /* Note, VM-Exits that go down the "slow" path are accounted below. */ - ++vcpu->stat.exits; + ++vcpu->common->stat.exits; } =20 /* @@ -11123,7 +11123,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) * KVM_DEBUGREG_WONT_EXIT again. 
*/ if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) { - WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP); + WARN_ON(vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP); kvm_x86_call(sync_dirty_debug_regs)(vcpu); kvm_update_dr0123(vcpu); kvm_update_dr7(vcpu); @@ -11142,7 +11142,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) vcpu->arch.last_vmentry_cpu =3D vcpu->cpu; vcpu->arch.last_guest_tsc =3D kvm_read_l1_tsc(vcpu, rdtsc()); =20 - vcpu->mode =3D OUTSIDE_GUEST_MODE; + vcpu->common->mode =3D OUTSIDE_GUEST_MODE; smp_wmb(); =20 /* @@ -11167,7 +11167,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) */ kvm_before_interrupt(vcpu, KVM_HANDLING_IRQ); local_irq_enable(); - ++vcpu->stat.exits; + ++vcpu->common->stat.exits; local_irq_disable(); kvm_after_interrupt(vcpu); =20 @@ -11300,7 +11300,7 @@ static int vcpu_run(struct kvm_vcpu *vcpu) { int r; =20 - vcpu->run->exit_reason =3D KVM_EXIT_UNKNOWN; + vcpu->common->run->exit_reason =3D KVM_EXIT_UNKNOWN; =20 for (;;) { /* @@ -11329,8 +11329,8 @@ static int vcpu_run(struct kvm_vcpu *vcpu) if (dm_request_for_irq_injection(vcpu) && kvm_vcpu_ready_for_interrupt_injection(vcpu)) { r =3D 0; - vcpu->run->exit_reason =3D KVM_EXIT_IRQ_WINDOW_OPEN; - ++vcpu->stat.request_irq_exits; + vcpu->common->run->exit_reason =3D KVM_EXIT_IRQ_WINDOW_OPEN; + ++vcpu->common->stat.request_irq_exits; break; } =20 @@ -11378,22 +11378,22 @@ static int complete_emulated_pio(struct kvm_vcpu = *vcpu) */ static int complete_emulated_mmio(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_mmio_fragment *frag; unsigned len; =20 - BUG_ON(!vcpu->mmio_needed); + BUG_ON(!vcpu->common->mmio_needed); =20 /* Complete previous fragment */ - frag =3D &vcpu->mmio_fragments[vcpu->mmio_cur_fragment]; + frag =3D &vcpu->common->mmio_fragments[vcpu->common->mmio_cur_fragment]; len =3D min(8u, frag->len); - if (!vcpu->mmio_is_write) + if (!vcpu->common->mmio_is_write) memcpy(frag->data, run->mmio.data, len); =20 if (frag->len <=3D 8) { /* Switch to the next fragment. */ frag++; - vcpu->mmio_cur_fragment++; + vcpu->common->mmio_cur_fragment++; } else { /* Go forward to the next mmio piece. */ frag->data +=3D len; @@ -11401,22 +11401,22 @@ static int complete_emulated_mmio(struct kvm_vcpu= *vcpu) frag->len -=3D len; } =20 - if (vcpu->mmio_cur_fragment >=3D vcpu->mmio_nr_fragments) { - vcpu->mmio_needed =3D 0; + if (vcpu->common->mmio_cur_fragment >=3D vcpu->common->mmio_nr_fragments)= { + vcpu->common->mmio_needed =3D 0; =20 /* FIXME: return into emulator if single-stepping. 
*/ - if (vcpu->mmio_is_write) + if (vcpu->common->mmio_is_write) return 1; - vcpu->mmio_read_completed =3D 1; + vcpu->common->mmio_read_completed =3D 1; return complete_emulated_io(vcpu); } =20 run->exit_reason =3D KVM_EXIT_MMIO; run->mmio.phys_addr =3D frag->gpa; - if (vcpu->mmio_is_write) + if (vcpu->common->mmio_is_write) memcpy(run->mmio.data, frag->data, min(8u, frag->len)); run->mmio.len =3D min(8u, frag->len); - run->mmio.is_write =3D vcpu->mmio_is_write; + run->mmio.is_write =3D vcpu->common->mmio_is_write; vcpu->arch.complete_userspace_io =3D complete_emulated_mmio; return 0; } @@ -11433,14 +11433,14 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *v= cpu) static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) { fpu_swap_kvm_fpstate(&vcpu->arch.guest_fpu, false); - ++vcpu->stat.fpu_reload; + ++vcpu->common->stat.fpu_reload; trace_kvm_fpu(0); } =20 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) { struct kvm_queued_exception *ex =3D &vcpu->arch.exception; - struct kvm_run *kvm_run =3D vcpu->run; + struct kvm_run *kvm_run =3D vcpu->common->run; int r; =20 vcpu_load(vcpu); @@ -11450,7 +11450,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) =20 kvm_vcpu_srcu_read_lock(vcpu); if (unlikely(vcpu->arch.mp_state =3D=3D KVM_MP_STATE_UNINITIALIZED)) { - if (!vcpu->wants_to_run) { + if (!vcpu->common->wants_to_run) { r =3D -EINTR; goto out; } @@ -11486,7 +11486,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) if (signal_pending(current)) { r =3D -EINTR; kvm_run->exit_reason =3D KVM_EXIT_INTR; - ++vcpu->stat.signal_exits; + ++vcpu->common->stat.signal_exits; } goto out; } @@ -11534,10 +11534,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) goto out; } else { WARN_ON_ONCE(vcpu->arch.pio.count); - WARN_ON_ONCE(vcpu->mmio_needed); + WARN_ON_ONCE(vcpu->common->mmio_needed); } =20 - if (!vcpu->wants_to_run) { + if (!vcpu->common->wants_to_run) { r =3D -EINTR; goto out; } @@ -11820,11 +11820,11 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 ts= s_selector, int idt_index, * Report an error userspace if MMIO is needed, as KVM doesn't support * MMIO during a task switch (or any other complex operation). 
*/ - if (ret || vcpu->mmio_needed) { - vcpu->mmio_needed =3D false; - vcpu->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; - vcpu->run->internal.suberror =3D KVM_INTERNAL_ERROR_EMULATION; - vcpu->run->internal.ndata =3D 0; + if (ret || vcpu->common->mmio_needed) { + vcpu->common->mmio_needed =3D false; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTERNAL_ERROR; + vcpu->common->run->internal.suberror =3D KVM_INTERNAL_ERROR_EMULATION; + vcpu->common->run->internal.ndata =3D 0; return 0; } =20 @@ -12018,7 +12018,7 @@ static void kvm_arch_vcpu_guestdbg_update_apicv_inh= ibit(struct kvm *kvm) down_write(&kvm->arch.apicv_update_lock); =20 kvm_for_each_vcpu(i, vcpu, kvm) { - if (vcpu->guest_debug & KVM_GUESTDBG_BLOCKIRQ) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_BLOCKIRQ) { set =3D true; break; } @@ -12054,11 +12054,11 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kv= m_vcpu *vcpu, */ rflags =3D kvm_get_rflags(vcpu); =20 - vcpu->guest_debug =3D dbg->control; - if (!(vcpu->guest_debug & KVM_GUESTDBG_ENABLE)) - vcpu->guest_debug =3D 0; + vcpu->common->guest_debug =3D dbg->control; + if (!(vcpu->common->guest_debug & KVM_GUESTDBG_ENABLE)) + vcpu->common->guest_debug =3D 0; =20 - if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) { + if (vcpu->common->guest_debug & KVM_GUESTDBG_USE_HW_BP) { for (i =3D 0; i < KVM_NR_DB_REGS; ++i) vcpu->arch.eff_db[i] =3D dbg->arch.debugreg[i]; vcpu->arch.guest_debug_dr7 =3D dbg->arch.debugreg[7]; @@ -12068,7 +12068,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_= vcpu *vcpu, } kvm_update_dr7(vcpu); =20 - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) vcpu->arch.singlestep_rip =3D kvm_get_linear_rip(vcpu); =20 /* @@ -12163,40 +12163,40 @@ static void store_regs(struct kvm_vcpu *vcpu) { BUILD_BUG_ON(sizeof(struct kvm_sync_regs) > SYNC_REGS_SIZE_BYTES); =20 - if (vcpu->run->kvm_valid_regs & KVM_SYNC_X86_REGS) - __get_regs(vcpu, &vcpu->run->s.regs.regs); + if (vcpu->common->run->kvm_valid_regs & KVM_SYNC_X86_REGS) + __get_regs(vcpu, &vcpu->common->run->s.regs.regs); =20 - if (vcpu->run->kvm_valid_regs & KVM_SYNC_X86_SREGS) - __get_sregs(vcpu, &vcpu->run->s.regs.sregs); + if (vcpu->common->run->kvm_valid_regs & KVM_SYNC_X86_SREGS) + __get_sregs(vcpu, &vcpu->common->run->s.regs.sregs); =20 - if (vcpu->run->kvm_valid_regs & KVM_SYNC_X86_EVENTS) + if (vcpu->common->run->kvm_valid_regs & KVM_SYNC_X86_EVENTS) kvm_vcpu_ioctl_x86_get_vcpu_events( - vcpu, &vcpu->run->s.regs.events); + vcpu, &vcpu->common->run->s.regs.events); } =20 static int sync_regs(struct kvm_vcpu *vcpu) { - if (vcpu->run->kvm_dirty_regs & KVM_SYNC_X86_REGS) { - __set_regs(vcpu, &vcpu->run->s.regs.regs); - vcpu->run->kvm_dirty_regs &=3D ~KVM_SYNC_X86_REGS; + if (vcpu->common->run->kvm_dirty_regs & KVM_SYNC_X86_REGS) { + __set_regs(vcpu, &vcpu->common->run->s.regs.regs); + vcpu->common->run->kvm_dirty_regs &=3D ~KVM_SYNC_X86_REGS; } =20 - if (vcpu->run->kvm_dirty_regs & KVM_SYNC_X86_SREGS) { - struct kvm_sregs sregs =3D vcpu->run->s.regs.sregs; + if (vcpu->common->run->kvm_dirty_regs & KVM_SYNC_X86_SREGS) { + struct kvm_sregs sregs =3D vcpu->common->run->s.regs.sregs; =20 if (__set_sregs(vcpu, &sregs)) return -EINVAL; =20 - vcpu->run->kvm_dirty_regs &=3D ~KVM_SYNC_X86_SREGS; + vcpu->common->run->kvm_dirty_regs &=3D ~KVM_SYNC_X86_SREGS; } =20 - if (vcpu->run->kvm_dirty_regs & KVM_SYNC_X86_EVENTS) { - struct kvm_vcpu_events events =3D vcpu->run->s.regs.events; + if (vcpu->common->run->kvm_dirty_regs & KVM_SYNC_X86_EVENTS) { + struct 
kvm_vcpu_events events =3D vcpu->common->run->s.regs.events; =20 if (kvm_vcpu_ioctl_x86_set_vcpu_events(vcpu, &events)) return -EINVAL; =20 - vcpu->run->kvm_dirty_regs &=3D ~KVM_SYNC_X86_EVENTS; + vcpu->common->run->kvm_dirty_regs &=3D ~KVM_SYNC_X86_EVENTS; } =20 return 0; @@ -12320,7 +12320,7 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) { struct kvm *kvm =3D vcpu->kvm; =20 - if (mutex_lock_killable(&vcpu->mutex)) + if (mutex_lock_killable(&vcpu->common->mutex)) return; vcpu_load(vcpu); kvm_synchronize_tsc(vcpu, NULL); @@ -12329,7 +12329,7 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) /* poll control enabled by default */ vcpu->arch.msr_kvm_poll_control =3D 1; =20 - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); =20 if (kvmclock_periodic_sync && vcpu->vcpu_idx =3D=3D 0) schedule_delayed_work(&kvm->arch.kvmclock_sync_work, @@ -13171,7 +13171,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, =20 static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu) { - if (!list_empty_careful(&vcpu->async_pf.done)) + if (!list_empty_careful(&vcpu->common->async_pf.done)) return true; =20 if (kvm_apic_has_pending_init_or_sipi(vcpu) && @@ -13297,7 +13297,7 @@ unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu) unsigned long rflags; =20 rflags =3D kvm_x86_call(get_rflags)(vcpu); - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP) rflags &=3D ~X86_EFLAGS_TF; return rflags; } @@ -13305,7 +13305,7 @@ EXPORT_SYMBOL_GPL(kvm_get_rflags); =20 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags) { - if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP && + if (vcpu->common->guest_debug & KVM_GUESTDBG_SINGLESTEP && kvm_is_linear_rip(vcpu, vcpu->arch.singlestep_rip)) rflags |=3D X86_EFLAGS_TF; kvm_x86_call(set_rflags)(vcpu, rflags); @@ -13813,22 +13813,22 @@ EXPORT_SYMBOL_GPL(kvm_handle_invpcid); =20 static int complete_sev_es_emulated_mmio(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_mmio_fragment *frag; unsigned int len; =20 - BUG_ON(!vcpu->mmio_needed); + BUG_ON(!vcpu->common->mmio_needed); =20 /* Complete previous fragment */ - frag =3D &vcpu->mmio_fragments[vcpu->mmio_cur_fragment]; + frag =3D &vcpu->common->mmio_fragments[vcpu->common->mmio_cur_fragment]; len =3D min(8u, frag->len); - if (!vcpu->mmio_is_write) + if (!vcpu->common->mmio_is_write) memcpy(frag->data, run->mmio.data, len); =20 if (frag->len <=3D 8) { /* Switch to the next fragment. */ frag++; - vcpu->mmio_cur_fragment++; + vcpu->common->mmio_cur_fragment++; } else { /* Go forward to the next mmio piece. 
*/ frag->data +=3D len; @@ -13836,8 +13836,8 @@ static int complete_sev_es_emulated_mmio(struct kvm= _vcpu *vcpu) frag->len -=3D len; } =20 - if (vcpu->mmio_cur_fragment >=3D vcpu->mmio_nr_fragments) { - vcpu->mmio_needed =3D 0; + if (vcpu->common->mmio_cur_fragment >=3D vcpu->common->mmio_nr_fragments)= { + vcpu->common->mmio_needed =3D 0; =20 // VMG change, at this point, we're always done // RIP has already been advanced @@ -13847,7 +13847,7 @@ static int complete_sev_es_emulated_mmio(struct kvm= _vcpu *vcpu) // More MMIO is needed run->mmio.phys_addr =3D frag->gpa; run->mmio.len =3D min(8u, frag->len); - run->mmio.is_write =3D vcpu->mmio_is_write; + run->mmio.is_write =3D vcpu->common->mmio_is_write; if (run->mmio.is_write) memcpy(run->mmio.data, frag->data, min(8u, frag->len)); run->exit_reason =3D KVM_EXIT_MMIO; @@ -13875,20 +13875,20 @@ int kvm_sev_es_mmio_write(struct kvm_vcpu *vcpu, = gpa_t gpa, unsigned int bytes, data +=3D handled; =20 /*TODO: Check if need to increment number of frags */ - frag =3D vcpu->mmio_fragments; - vcpu->mmio_nr_fragments =3D 1; + frag =3D vcpu->common->mmio_fragments; + vcpu->common->mmio_nr_fragments =3D 1; frag->len =3D bytes; frag->gpa =3D gpa; frag->data =3D data; =20 - vcpu->mmio_needed =3D 1; - vcpu->mmio_cur_fragment =3D 0; + vcpu->common->mmio_needed =3D 1; + vcpu->common->mmio_cur_fragment =3D 0; =20 - vcpu->run->mmio.phys_addr =3D gpa; - vcpu->run->mmio.len =3D min(8u, frag->len); - vcpu->run->mmio.is_write =3D 1; - memcpy(vcpu->run->mmio.data, frag->data, min(8u, frag->len)); - vcpu->run->exit_reason =3D KVM_EXIT_MMIO; + vcpu->common->run->mmio.phys_addr =3D gpa; + vcpu->common->run->mmio.len =3D min(8u, frag->len); + vcpu->common->run->mmio.is_write =3D 1; + memcpy(vcpu->common->run->mmio.data, frag->data, min(8u, frag->len)); + vcpu->common->run->exit_reason =3D KVM_EXIT_MMIO; =20 vcpu->arch.complete_userspace_io =3D complete_sev_es_emulated_mmio; =20 @@ -13914,19 +13914,19 @@ int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, g= pa_t gpa, unsigned int bytes, data +=3D handled; =20 /*TODO: Check if need to increment number of frags */ - frag =3D vcpu->mmio_fragments; - vcpu->mmio_nr_fragments =3D 1; + frag =3D vcpu->common->mmio_fragments; + vcpu->common->mmio_nr_fragments =3D 1; frag->len =3D bytes; frag->gpa =3D gpa; frag->data =3D data; =20 - vcpu->mmio_needed =3D 1; - vcpu->mmio_cur_fragment =3D 0; + vcpu->common->mmio_needed =3D 1; + vcpu->common->mmio_cur_fragment =3D 0; =20 - vcpu->run->mmio.phys_addr =3D gpa; - vcpu->run->mmio.len =3D min(8u, frag->len); - vcpu->run->mmio.is_write =3D 0; - vcpu->run->exit_reason =3D KVM_EXIT_MMIO; + vcpu->common->run->mmio.phys_addr =3D gpa; + vcpu->common->run->mmio.len =3D min(8u, frag->len); + vcpu->common->run->mmio.is_write =3D 0; + vcpu->common->run->exit_reason =3D KVM_EXIT_MMIO; =20 vcpu->arch.complete_userspace_io =3D complete_sev_es_emulated_mmio; =20 diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index 622fe24da910..57a699db05f2 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -1360,7 +1360,7 @@ static int kvm_xen_hypercall_set_result(struct kvm_vc= pu *vcpu, u64 result) =20 static int kvm_xen_hypercall_complete_userspace(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.xen.hypercall_rip))) return 1; @@ -1696,17 +1696,17 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu) return kvm_xen_hypercall_set_result(vcpu, r); =20 handle_in_userspace: - vcpu->run->exit_reason =3D 
KVM_EXIT_XEN; - vcpu->run->xen.type =3D KVM_EXIT_XEN_HCALL; - vcpu->run->xen.u.hcall.longmode =3D longmode; - vcpu->run->xen.u.hcall.cpl =3D cpl; - vcpu->run->xen.u.hcall.input =3D input; - vcpu->run->xen.u.hcall.params[0] =3D params[0]; - vcpu->run->xen.u.hcall.params[1] =3D params[1]; - vcpu->run->xen.u.hcall.params[2] =3D params[2]; - vcpu->run->xen.u.hcall.params[3] =3D params[3]; - vcpu->run->xen.u.hcall.params[4] =3D params[4]; - vcpu->run->xen.u.hcall.params[5] =3D params[5]; + vcpu->common->run->exit_reason =3D KVM_EXIT_XEN; + vcpu->common->run->xen.type =3D KVM_EXIT_XEN_HCALL; + vcpu->common->run->xen.u.hcall.longmode =3D longmode; + vcpu->common->run->xen.u.hcall.cpl =3D cpl; + vcpu->common->run->xen.u.hcall.input =3D input; + vcpu->common->run->xen.u.hcall.params[0] =3D params[0]; + vcpu->common->run->xen.u.hcall.params[1] =3D params[1]; + vcpu->common->run->xen.u.hcall.params[2] =3D params[2]; + vcpu->common->run->xen.u.hcall.params[3] =3D params[3]; + vcpu->common->run->xen.u.hcall.params[4] =3D params[4]; + vcpu->common->run->xen.u.hcall.params[5] =3D params[5]; vcpu->arch.xen.hypercall_rip =3D kvm_get_linear_rip(vcpu); vcpu->arch.complete_userspace_io =3D kvm_xen_hypercall_complete_userspace; diff --git a/arch/x86/kvm/xen.h b/arch/x86/kvm/xen.h index f5841d9000ae..ce0d90767df8 100644 --- a/arch/x86/kvm/xen.h +++ b/arch/x86/kvm/xen.h @@ -184,7 +184,7 @@ static inline void kvm_xen_runstate_set_preempted(struc= t kvm_vcpu *vcpu) * behalf of the vCPU. Only if the VMM does actually block * does it need to enter RUNSTATE_blocked. */ - if (WARN_ON_ONCE(!vcpu->preempted)) + if (WARN_ON_ONCE(!vcpu->common->preempted)) return; =20 kvm_xen_update_runstate(vcpu, RUNSTATE_runnable); diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_a= p_ops.c index 9f76f2d7b66e..b851c7650bbb 100644 --- a/drivers/s390/crypto/vfio_ap_ops.c +++ b/drivers/s390/crypto/vfio_ap_ops.c @@ -350,7 +350,7 @@ static struct ap_queue_status vfio_ap_irq_disable(struc= t vfio_ap_queue *q) */ static int vfio_ap_validate_nib(struct kvm_vcpu *vcpu, dma_addr_t *nib) { - *nib =3D vcpu->run->s.regs.gprs[2]; + *nib =3D vcpu->common->run->s.regs.gprs[2]; =20 if (!*nib) return -EINVAL; @@ -576,7 +576,7 @@ static int handle_pqap(struct kvm_vcpu *vcpu) .response_code =3D AP_RESPONSE_Q_NOT_AVAIL, }; struct ap_matrix_mdev *matrix_mdev; =20 - apqn =3D vcpu->run->s.regs.gprs[0] & 0xffff; + apqn =3D vcpu->common->run->s.regs.gprs[0] & 0xffff; =20 /* If we do not use the AIV facility just go to userland */ if (!(vcpu->arch.sie_block->eca & ECA_AIV)) { @@ -615,7 +615,7 @@ static int handle_pqap(struct kvm_vcpu *vcpu) goto out_unlock; } =20 - status =3D vcpu->run->s.regs.gprs[1]; + status =3D vcpu->common->run->s.regs.gprs[1]; =20 /* If IR bit(16) is set we enable the interrupt */ if ((status >> (63 - 16)) & 0x01) @@ -624,8 +624,8 @@ static int handle_pqap(struct kvm_vcpu *vcpu) qstatus =3D vfio_ap_irq_disable(q); =20 out_unlock: - memcpy(&vcpu->run->s.regs.gprs[1], &qstatus, sizeof(qstatus)); - vcpu->run->s.regs.gprs[1] >>=3D 32; + memcpy(&vcpu->common->run->s.regs.gprs[1], &qstatus, sizeof(qstatus)); + vcpu->common->run->s.regs.gprs[1] >>=3D 32; mutex_unlock(&matrix_dev->mdevs_lock); return 0; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 79a6b1a63027..fb5c58c90975 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -179,13 +179,13 @@ static inline bool is_error_page(struct page *page) * OUTSIDE_GUEST_MODE. 
KVM_REQ_OUTSIDE_GUEST_MODE differs from a vCPU "ki= ck" * in that it ensures the vCPU has reached OUTSIDE_GUEST_MODE before conti= nuing * on. A kick only guarantees that the vCPU is on its way out, e.g. a pre= vious - * kick may have set vcpu->mode to EXITING_GUEST_MODE, and so there's no + * kick may have set vcpu->common->mode to EXITING_GUEST_MODE, and so ther= e's no * guarantee the vCPU received an IPI and has actually exited guest mode. */ #define KVM_REQ_OUTSIDE_GUEST_MODE (KVM_REQUEST_NO_ACTION | KVM_REQUEST_WA= IT | KVM_REQUEST_NO_WAKEUP) =20 #define KVM_ARCH_REQ_FLAGS(nr, flags) ({ \ - BUILD_BUG_ON((unsigned)(nr) >=3D (sizeof_field(struct kvm_vcpu, requests)= * 8) - KVM_REQUEST_ARCH_BASE); \ + BUILD_BUG_ON((unsigned)(nr) >=3D (sizeof_field(struct kvm_vcpu, _common.r= equests) * 8) - KVM_REQUEST_ARCH_BASE); \ (unsigned)(((nr) + KVM_REQUEST_ARCH_BASE) | (flags)); \ }) #define KVM_ARCH_REQ(nr) KVM_ARCH_REQ_FLAGS(nr, 0) @@ -328,73 +328,80 @@ struct kvm_vcpu { int cpu; int vcpu_id; /* id given by userspace at creation */ int vcpu_idx; /* index into kvm->vcpu_array */ - int ____srcu_idx; /* Don't use this directly. You've been warned. */ -#ifdef CONFIG_PROVE_RCU - int srcu_depth; -#endif - int mode; - u64 requests; - unsigned long guest_debug; =20 - struct mutex mutex; - struct kvm_run *run; + struct kvm_vcpu_arch arch; =20 -#ifndef __KVM_HAVE_ARCH_WQP - struct rcuwait wait; + struct kvm_vcpu_common { + int ____srcu_idx; /* Don't use this directly. You've been warned. */ +#ifdef CONFIG_PROVE_RCU + int srcu_depth; #endif - struct pid __rcu *pid; - int sigset_active; - sigset_t sigset; - unsigned int halt_poll_ns; - bool valid_wakeup; + int mode; + u64 requests; + unsigned long guest_debug; +=09 + struct mutex mutex; + + struct kvm_run *run; + + #ifndef __KVM_HAVE_ARCH_WQP + struct rcuwait wait; + #endif + struct pid __rcu *pid; + int sigset_active; + sigset_t sigset; + unsigned int halt_poll_ns; + bool valid_wakeup; =20 #ifdef CONFIG_HAS_IOMEM - int mmio_needed; - int mmio_read_completed; - int mmio_is_write; - int mmio_cur_fragment; - int mmio_nr_fragments; - struct kvm_mmio_fragment mmio_fragments[KVM_MAX_MMIO_FRAGMENTS]; + int mmio_needed; + int mmio_read_completed; + int mmio_is_write; + int mmio_cur_fragment; + int mmio_nr_fragments; + struct kvm_mmio_fragment mmio_fragments[KVM_MAX_MMIO_FRAGMENTS]; #endif =20 #ifdef CONFIG_KVM_ASYNC_PF - struct { - u32 queued; - struct list_head queue; - struct list_head done; - spinlock_t lock; - } async_pf; + struct { + u32 queued; + struct list_head queue; + struct list_head done; + spinlock_t lock; + } async_pf; #endif =20 #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT - /* - * Cpu relax intercept or pause loop exit optimization - * in_spin_loop: set when a vcpu does a pause loop exit - * or cpu relax intercepted. - * dy_eligible: indicates whether vcpu is eligible for directed yield. - */ - struct { - bool in_spin_loop; - bool dy_eligible; - } spin_loop; + /* + * Cpu relax intercept or pause loop exit optimization + * in_spin_loop: set when a vcpu does a pause loop exit + * or cpu relax intercepted. + * dy_eligible: indicates whether vcpu is eligible for directed yield. 
+ */ + struct { + bool in_spin_loop; + bool dy_eligible; + } spin_loop; #endif - bool wants_to_run; - bool preempted; - bool ready; - bool scheduled_out; - struct kvm_vcpu_arch arch; - struct kvm_vcpu_stat stat; - char stats_id[KVM_STATS_NAME_SIZE]; - struct kvm_dirty_ring dirty_ring; + bool wants_to_run; + bool preempted; + bool ready; + bool scheduled_out; + struct kvm_vcpu_stat stat; + char stats_id[KVM_STATS_NAME_SIZE]; + struct kvm_dirty_ring dirty_ring; =20 - /* - * The most recently used memslot by this vCPU and the slots generation - * for which it is valid. - * No wraparound protection is needed since generations won't overflow in - * thousands of years, even assuming 1M memslot operations per second. - */ - struct kvm_memory_slot *last_used_slot; - u64 last_used_slot_gen; + /* + * The most recently used memslot by this vCPU and the slots generation + * for which it is valid. + * No wraparound protection is needed since generations won't overflow in + * thousands of years, even assuming 1M memslot operations per second. + */ + struct kvm_memory_slot *last_used_slot; + u64 last_used_slot_gen; + } _common; + + struct kvm_vcpu_common *common; }; =20 /* @@ -550,11 +557,11 @@ static inline int kvm_vcpu_exiting_guest_mode(struct = kvm_vcpu *vcpu) { /* * The memory barrier ensures a previous write to vcpu->requests cannot - * be reordered with the read of vcpu->mode. It pairs with the general - * memory barrier following the write of vcpu->mode in VCPU RUN. + * be reordered with the read of vcpu->common->mode. It pairs with the g= eneral + * memory barrier following the write of vcpu->common->mode in VCPU RUN. */ smp_mb__before_atomic(); - return cmpxchg(&vcpu->mode, IN_GUEST_MODE, EXITING_GUEST_MODE); + return cmpxchg(&vcpu->common->mode, IN_GUEST_MODE, EXITING_GUEST_MODE); } =20 /* @@ -923,19 +930,19 @@ static inline void kvm_vm_bugged(struct kvm *kvm) static inline void kvm_vcpu_srcu_read_lock(struct kvm_vcpu *vcpu) { #ifdef CONFIG_PROVE_RCU - WARN_ONCE(vcpu->srcu_depth++, - "KVM: Illegal vCPU srcu_idx LOCK, depth=3D%d", vcpu->srcu_depth - 1); + WARN_ONCE(vcpu->common->srcu_depth++, + "KVM: Illegal vCPU srcu_idx LOCK, depth=3D%d", vcpu->common->srcu_dept= h - 1); #endif - vcpu->____srcu_idx =3D srcu_read_lock(&vcpu->kvm->srcu); + vcpu->common->____srcu_idx =3D srcu_read_lock(&vcpu->kvm->srcu); } =20 static inline void kvm_vcpu_srcu_read_unlock(struct kvm_vcpu *vcpu) { - srcu_read_unlock(&vcpu->kvm->srcu, vcpu->____srcu_idx); + srcu_read_unlock(&vcpu->kvm->srcu, vcpu->common->____srcu_idx); =20 #ifdef CONFIG_PROVE_RCU - WARN_ONCE(--vcpu->srcu_depth, - "KVM: Illegal vCPU srcu_idx UNLOCK, depth=3D%d", vcpu->srcu_depth); + WARN_ONCE(--vcpu->common->srcu_depth, + "KVM: Illegal vCPU srcu_idx UNLOCK, depth=3D%d", vcpu->common->srcu_de= pth); #endif } =20 @@ -1611,7 +1618,7 @@ static inline struct rcuwait *kvm_arch_vcpu_get_wait(= struct kvm_vcpu *vcpu) #ifdef __KVM_HAVE_ARCH_WQP return vcpu->arch.waitp; #else - return &vcpu->wait; + return &vcpu->common->wait; #endif } =20 @@ -2148,7 +2155,7 @@ static inline void __kvm_make_request(int req, struct= kvm_vcpu *vcpu) * caller. Paired with the smp_mb__after_atomic in kvm_check_request. 
*/ smp_wmb(); - set_bit(req & KVM_REQUEST_MASK, (void *)&vcpu->requests); + set_bit(req & KVM_REQUEST_MASK, (void *)&vcpu->common->requests); } =20 static __always_inline void kvm_make_request(int req, struct kvm_vcpu *vcp= u) @@ -2166,17 +2173,17 @@ static __always_inline void kvm_make_request(int re= q, struct kvm_vcpu *vcpu) =20 static inline bool kvm_request_pending(struct kvm_vcpu *vcpu) { - return READ_ONCE(vcpu->requests); + return READ_ONCE(vcpu->common->requests); } =20 static inline bool kvm_test_request(int req, struct kvm_vcpu *vcpu) { - return test_bit(req & KVM_REQUEST_MASK, (void *)&vcpu->requests); + return test_bit(req & KVM_REQUEST_MASK, (void *)&vcpu->common->requests); } =20 static inline void kvm_clear_request(int req, struct kvm_vcpu *vcpu) { - clear_bit(req & KVM_REQUEST_MASK, (void *)&vcpu->requests); + clear_bit(req & KVM_REQUEST_MASK, (void *)&vcpu->common->requests); } =20 static inline bool kvm_check_request(int req, struct kvm_vcpu *vcpu) @@ -2267,11 +2274,11 @@ extern struct kvm_device_ops kvm_arm_vgic_v3_ops; =20 static inline void kvm_vcpu_set_in_spin_loop(struct kvm_vcpu *vcpu, bool v= al) { - vcpu->spin_loop.in_spin_loop =3D val; + vcpu->common->spin_loop.in_spin_loop =3D val; } static inline void kvm_vcpu_set_dy_eligible(struct kvm_vcpu *vcpu, bool va= l) { - vcpu->spin_loop.dy_eligible =3D val; + vcpu->common->spin_loop.dy_eligible =3D val; } =20 #else /* !CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT */ @@ -2312,7 +2319,7 @@ bool kvm_arch_irqfd_route_changed(struct kvm_kernel_i= rq_routing_entry *, /* If we wakeup during the poll time, was it a sucessful poll? */ static inline bool vcpu_valid_wakeup(struct kvm_vcpu *vcpu) { - return vcpu->valid_wakeup; + return vcpu->common->valid_wakeup; } =20 #else @@ -2364,8 +2371,8 @@ int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_= vm_thread_fn_t thread_fn, #ifdef CONFIG_KVM_XFER_TO_GUEST_WORK static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu) { - vcpu->run->exit_reason =3D KVM_EXIT_INTR; - vcpu->stat.signal_exits++; + vcpu->common->run->exit_reason =3D KVM_EXIT_INTR; + vcpu->common->stat.signal_exits++; } #endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */ =20 @@ -2397,14 +2404,14 @@ static inline void kvm_prepare_memory_fault_exit(st= ruct kvm_vcpu *vcpu, bool is_write, bool is_exec, bool is_private) { - vcpu->run->exit_reason =3D KVM_EXIT_MEMORY_FAULT; - vcpu->run->memory_fault.gpa =3D gpa; - vcpu->run->memory_fault.size =3D size; + vcpu->common->run->exit_reason =3D KVM_EXIT_MEMORY_FAULT; + vcpu->common->run->memory_fault.gpa =3D gpa; + vcpu->common->run->memory_fault.size =3D size; =20 /* RWX flags are not (yet) defined or communicated to userspace. 
*/ - vcpu->run->memory_fault.flags =3D 0; + vcpu->common->run->memory_fault.flags =3D 0; if (is_private) - vcpu->run->memory_fault.flags |=3D KVM_MEMORY_EXIT_FLAG_PRIVATE; + vcpu->common->run->memory_fault.flags |=3D KVM_MEMORY_EXIT_FLAG_PRIVATE; } =20 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES diff --git a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c b/tools= /testing/selftests/kvm/aarch64/debug-exceptions.c index 2582c49e525a..b162545286b0 100644 --- a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c +++ b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c @@ -474,7 +474,7 @@ void test_single_step_from_userspace(int test_cnt) struct kvm_guest_debug debug =3D {}; =20 vm =3D vm_create_with_one_vcpu(&vcpu, guest_code_ss); - run =3D vcpu->run; + run =3D vcpu->common->run; vcpu_args_set(vcpu, 1, test_cnt); =20 while (1) { diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/= testing/selftests/kvm/aarch64/page_fault_test.c index d29b08198b42..357e842a3328 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -660,7 +660,7 @@ static void vcpu_run_loop(struct kvm_vm *vm, struct kvm= _vcpu *vcpu, struct ucall uc; int ret; =20 - run =3D vcpu->run; + run =3D vcpu->common->run; =20 for (;;) { ret =3D _vcpu_run(vcpu); diff --git a/tools/testing/selftests/kvm/aarch64/smccc_filter.c b/tools/tes= ting/selftests/kvm/aarch64/smccc_filter.c index 2d189f3da228..bbcc1a1a9c1a 100644 --- a/tools/testing/selftests/kvm/aarch64/smccc_filter.c +++ b/tools/testing/selftests/kvm/aarch64/smccc_filter.c @@ -207,7 +207,7 @@ static void test_filter_denied(void) static void expect_call_fwd_to_user(struct kvm_vcpu *vcpu, uint32_t func_i= d, enum smccc_conduit conduit) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 TEST_ASSERT(run->exit_reason =3D=3D KVM_EXIT_HYPERCALL, "Unexpected exit reason: %u", run->exit_reason); diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testi= ng/selftests/kvm/demand_paging_test.c index 0202b78f8680..be98b5c3d629 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -33,7 +33,7 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_= args) { struct kvm_vcpu *vcpu =3D vcpu_args->vcpu; int vcpu_idx =3D vcpu_args->vcpu_idx; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct timespec start; struct timespec ts_diff; int ret; diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/test= ing/selftests/kvm/dirty_log_perf_test.c index 9f24303acb8c..1e194fbc87b5 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -77,7 +77,7 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_= args) struct timespec avg; int ret; =20 - run =3D vcpu->run; + run =3D vcpu->common->run; =20 while (!READ_ONCE(host_quit)) { int current_iteration =3D READ_ONCE(iteration); diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/s= elftests/kvm/dirty_log_test.c index aacf80f57439..35890dcd8d6f 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -253,7 +253,7 @@ static void vcpu_handle_sync_stop(void) =20 static void default_after_vcpu_run(struct kvm_vcpu *vcpu, int ret, int err) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 
TEST_ASSERT(ret =3D=3D 0 || (ret =3D=3D -1 && err =3D=3D EINTR), "vcpu run failed: errno=3D%d", err); @@ -391,7 +391,7 @@ static void dirty_ring_collect_dirty_pages(struct kvm_v= cpu *vcpu, int slot, =20 static void dirty_ring_after_vcpu_run(struct kvm_vcpu *vcpu, int ret, int = err) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 /* A ucall-sync or ring-full event is allowed */ if (get_ucall(vcpu, NULL) =3D=3D UCALL_SYNC) { diff --git a/tools/testing/selftests/kvm/guest_print_test.c b/tools/testing= /selftests/kvm/guest_print_test.c index 8092c2d0f5d6..bbb910363120 100644 --- a/tools/testing/selftests/kvm/guest_print_test.c +++ b/tools/testing/selftests/kvm/guest_print_test.c @@ -110,7 +110,7 @@ static void ucall_abort(const char *assert_msg, const c= har *expected_assert_msg) static void run_test(struct kvm_vcpu *vcpu, const char *expected_printf, const char *expected_assert) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct ucall uc; =20 while (1) { @@ -158,7 +158,7 @@ static void test_limits(void) struct ucall uc; =20 vm =3D vm_create_with_one_vcpu(&vcpu, guest_code_limits); - run =3D vcpu->run; + run =3D vcpu->common->run; vcpu_run(vcpu); =20 TEST_ASSERT(run->exit_reason =3D=3D UCALL_EXIT_REASON, diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c b/tools/te= sting/selftests/kvm/hardware_disable_test.c index bce73bcb973c..805363b9e706 100644 --- a/tools/testing/selftests/kvm/hardware_disable_test.c +++ b/tools/testing/selftests/kvm/hardware_disable_test.c @@ -34,7 +34,7 @@ static void guest_code(void) static void *run_vcpu(void *arg) { struct kvm_vcpu *vcpu =3D arg; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 vcpu_run(vcpu); =20 diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/test= ing/selftests/kvm/kvm_page_table_test.c index dd8b12f626d3..12ae35b635ca 100644 --- a/tools/testing/selftests/kvm/kvm_page_table_test.c +++ b/tools/testing/selftests/kvm/kvm_page_table_test.c @@ -205,7 +205,7 @@ static void *vcpu_worker(void *data) TEST_ASSERT(ret =3D=3D 0, "vcpu_run failed: %d", ret); TEST_ASSERT(get_ucall(vcpu, NULL) =3D=3D UCALL_SYNC, "Invalid guest sync status: exit_reason=3D%s", - exit_reason_str(vcpu->run->exit_reason)); + exit_reason_str(vcpu->common->run->exit_reason)); =20 pr_debug("Got sync event from vCPU %d\n", vcpu->id); stage =3D READ_ONCE(*current_stage); diff --git a/tools/testing/selftests/kvm/lib/aarch64/ucall.c b/tools/testin= g/selftests/kvm/lib/aarch64/ucall.c index ddab0ce89d4d..7146625553ce 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/ucall.c +++ b/tools/testing/selftests/kvm/lib/aarch64/ucall.c @@ -21,7 +21,7 @@ void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_g= pa) =20 void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 if (run->exit_reason =3D=3D KVM_EXIT_MMIO && run->mmio.phys_addr =3D=3D vcpu->vm->ucall_mmio_addr) { diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/sel= ftests/kvm/lib/kvm_util.c index 56b170b725b3..520e820d7683 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -684,7 +684,7 @@ static void vm_vcpu_rm(struct kvm_vm *vm, struct kvm_vc= pu *vcpu) vcpu->dirty_gfns =3D NULL; } =20 - ret =3D munmap(vcpu->run, vcpu_mmap_sz()); + ret =3D munmap(vcpu->common->run, vcpu_mmap_sz()); TEST_ASSERT(!ret, 
__KVM_SYSCALL_ERROR("munmap()", ret)); =20 ret =3D close(vcpu->fd); @@ -1349,12 +1349,12 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u= int32_t vcpu_id) vcpu->fd =3D __vm_ioctl(vm, KVM_CREATE_VCPU, (void *)(unsigned long)vcpu_= id); TEST_ASSERT_VM_VCPU_IOCTL(vcpu->fd >=3D 0, KVM_CREATE_VCPU, vcpu->fd, vm); =20 - TEST_ASSERT(vcpu_mmap_sz() >=3D sizeof(*vcpu->run), "vcpu mmap size " + TEST_ASSERT(vcpu_mmap_sz() >=3D sizeof(*vcpu->common->run), "vcpu mmap si= ze " "smaller than expected, vcpu_mmap_sz: %i expected_min: %zi", - vcpu_mmap_sz(), sizeof(*vcpu->run)); - vcpu->run =3D (struct kvm_run *) mmap(NULL, vcpu_mmap_sz(), + vcpu_mmap_sz(), sizeof(*vcpu->common->run)); + vcpu->common->run =3D (struct kvm_run *) mmap(NULL, vcpu_mmap_sz(), PROT_READ | PROT_WRITE, MAP_SHARED, vcpu->fd, 0); - TEST_ASSERT(vcpu->run !=3D MAP_FAILED, + TEST_ASSERT(vcpu->common->run !=3D MAP_FAILED, __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); =20 /* Add to linked-list of VCPUs. */ @@ -1739,9 +1739,9 @@ void vcpu_run_complete_io(struct kvm_vcpu *vcpu) { int ret; =20 - vcpu->run->immediate_exit =3D 1; + vcpu->common->run->immediate_exit =3D 1; ret =3D __vcpu_run(vcpu); - vcpu->run->immediate_exit =3D 0; + vcpu->common->run->immediate_exit =3D 0; =20 TEST_ASSERT(ret =3D=3D -1 && errno =3D=3D EINTR, "KVM_RUN IOCTL didn't exit immediately, rc: %i, errno: %i", diff --git a/tools/testing/selftests/kvm/lib/riscv/ucall.c b/tools/testing/= selftests/kvm/lib/riscv/ucall.c index b5035c63d516..2f9c327eace1 100644 --- a/tools/testing/selftests/kvm/lib/riscv/ucall.c +++ b/tools/testing/selftests/kvm/lib/riscv/ucall.c @@ -13,7 +13,7 @@ =20 void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 if (run->exit_reason =3D=3D KVM_EXIT_RISCV_SBI && run->riscv_sbi.extension_id =3D=3D KVM_RISCV_SELFTESTS_SBI_EXT) { diff --git a/tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c b= /tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c index 2c432fa164f1..f9207a502eab 100644 --- a/tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c +++ b/tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c @@ -33,7 +33,7 @@ static uint64_t diag318_handler(void) =20 vm =3D vm_create_with_one_vcpu(&vcpu, guest_code); vcpu_run(vcpu); - run =3D vcpu->run; + run =3D vcpu->common->run; =20 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_S390_SIEIC); TEST_ASSERT(run->s390_sieic.icptcode =3D=3D ICPT_INSTRUCTION, diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/test= ing/selftests/kvm/lib/s390x/processor.c index 4ad4492eea1d..f717e8bcd03c 100644 --- a/tools/testing/selftests/kvm/lib/s390x/processor.c +++ b/tools/testing/selftests/kvm/lib/s390x/processor.c @@ -157,7 +157,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, ui= nt8_t indent) =20 void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code) { - vcpu->run->psw_addr =3D (uintptr_t)guest_code; + vcpu->common->run->psw_addr =3D (uintptr_t)guest_code; } =20 struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id) @@ -187,7 +187,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, ui= nt32_t vcpu_id) sregs.crs[1] =3D vm->pgd | 0xf; /* Primary region table */ vcpu_sregs_set(vcpu, &sregs); =20 - vcpu->run->psw_mask =3D 0x0400000180000000ULL; /* DAT enabled + 64 bit m= ode */ + vcpu->common->run->psw_mask =3D 0x0400000180000000ULL; /* DAT enabled + = 64 bit mode */ =20 return vcpu; } @@ -215,7 +215,7 @@ void 
vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int = num, ...) void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent) { fprintf(stream, "%*spstate: psw: 0x%.16llx:0x%.16llx\n", - indent, "", vcpu->run->psw_mask, vcpu->run->psw_addr); + indent, "", vcpu->common->run->psw_mask, vcpu->common->run->psw_addr); } =20 void assert_on_unhandled_exception(struct kvm_vcpu *vcpu) diff --git a/tools/testing/selftests/kvm/lib/s390x/ucall.c b/tools/testing/= selftests/kvm/lib/s390x/ucall.c index cca98734653d..bd6822a78ab7 100644 --- a/tools/testing/selftests/kvm/lib/s390x/ucall.c +++ b/tools/testing/selftests/kvm/lib/s390x/ucall.c @@ -8,7 +8,7 @@ =20 void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 if (run->exit_reason =3D=3D KVM_EXIT_S390_SIEIC && run->s390_sieic.icptcode =3D=3D 4 && diff --git a/tools/testing/selftests/kvm/lib/x86_64/ucall.c b/tools/testing= /selftests/kvm/lib/x86_64/ucall.c index 1265cecc7dd1..b40e55525897 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/ucall.c +++ b/tools/testing/selftests/kvm/lib/x86_64/ucall.c @@ -44,7 +44,7 @@ void ucall_arch_do_ucall(vm_vaddr_t uc) =20 void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 if (run->exit_reason =3D=3D KVM_EXIT_IO && run->io.port =3D=3D UCALL_PIO_= PORT) { struct kvm_regs regs; diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c= b/tools/testing/selftests/kvm/memslot_modification_stress_test.c index 49f162573126..7d8957ddb8a1 100644 --- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c +++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c @@ -37,7 +37,7 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_= args) struct kvm_run *run; int ret; =20 - run =3D vcpu->run; + run =3D vcpu->common->run; =20 /* Let the guest access its memory until a stop signal is received */ while (!READ_ONCE(memstress_args.stop_vcpus)) { diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testin= g/selftests/kvm/memslot_perf_test.c index 579a64f97333..ab419ad6bb41 100644 --- a/tools/testing/selftests/kvm/memslot_perf_test.c +++ b/tools/testing/selftests/kvm/memslot_perf_test.c @@ -137,7 +137,7 @@ static void *vcpu_worker(void *__data) { struct vm_data *data =3D __data; struct kvm_vcpu *vcpu =3D data->vcpu; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct ucall uc; =20 while (1) { diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/te= sting/selftests/kvm/pre_fault_memory_test.c index 0350a8896a2f..a2f956821d1e 100644 --- a/tools/testing/selftests/kvm/pre_fault_memory_test.c +++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c @@ -104,7 +104,7 @@ static void __test_pre_fault_memory(unsigned long vm_ty= pe, bool private) vcpu_args_set(vcpu, 1, guest_test_virt_mem); vcpu_run(vcpu); =20 - run =3D vcpu->run; + run =3D vcpu->common->run; TEST_ASSERT(run->exit_reason =3D=3D KVM_EXIT_IO, "Wanted KVM_EXIT_IO, got exit reason: %u (%s)", run->exit_reason, exit_reason_str(run->exit_reason)); diff --git a/tools/testing/selftests/kvm/s390x/cmma_test.c b/tools/testing/= selftests/kvm/s390x/cmma_test.c index b39033844756..dad56f823d8e 100644 --- a/tools/testing/selftests/kvm/s390x/cmma_test.c +++ b/tools/testing/selftests/kvm/s390x/cmma_test.c @@ -262,10 +262,10 @@ static void test_get_cmma_basic(void) =20 static void 
assert_exit_was_hypercall(struct kvm_vcpu *vcpu) { - TEST_ASSERT_EQ(vcpu->run->exit_reason, 13); - TEST_ASSERT_EQ(vcpu->run->s390_sieic.icptcode, 4); - TEST_ASSERT_EQ(vcpu->run->s390_sieic.ipa, 0x8300); - TEST_ASSERT_EQ(vcpu->run->s390_sieic.ipb, 0x5010000); + TEST_ASSERT_EQ(vcpu->common->run->exit_reason, 13); + TEST_ASSERT_EQ(vcpu->common->run->s390_sieic.icptcode, 4); + TEST_ASSERT_EQ(vcpu->common->run->s390_sieic.ipa, 0x8300); + TEST_ASSERT_EQ(vcpu->common->run->s390_sieic.ipb, 0x5010000); } =20 static void test_migration_mode(void) @@ -287,7 +287,7 @@ static void test_migration_mode(void) =20 enable_cmma(vm); vcpu =3D vm_vcpu_add(vm, 1, guest_do_one_essa); - orig_psw =3D vcpu->run->psw_addr; + orig_psw =3D vcpu->common->run->psw_addr; =20 /* * Execute one essa instruction in the guest. Otherwise the guest will @@ -313,7 +313,7 @@ static void test_migration_mode(void) errno =3D 0; =20 /* execute another ESSA instruction to see this goes fine */ - vcpu->run->psw_addr =3D orig_psw; + vcpu->common->run->psw_addr =3D orig_psw; vcpu_run(vcpu); assert_exit_was_hypercall(vcpu); =20 @@ -334,7 +334,7 @@ static void test_migration_mode(void) ); =20 /* ESSA instructions should still execute fine */ - vcpu->run->psw_addr =3D orig_psw; + vcpu->common->run->psw_addr =3D orig_psw; vcpu_run(vcpu); assert_exit_was_hypercall(vcpu); =20 @@ -359,7 +359,7 @@ static void test_migration_mode(void) ); =20 /* ESSA instructions should still execute fine */ - vcpu->run->psw_addr =3D orig_psw; + vcpu->common->run->psw_addr =3D orig_psw; vcpu_run(vcpu); assert_exit_was_hypercall(vcpu); =20 @@ -510,7 +510,7 @@ static void test_get_skip_holes(void) enable_cmma(vm); vcpu =3D vm_vcpu_add(vm, 1, guest_dirty_test_data); =20 - orig_psw =3D vcpu->run->psw_addr; + orig_psw =3D vcpu->common->run->psw_addr; =20 /* * Execute some essa instructions in the guest. 
Otherwise the guest will @@ -526,7 +526,7 @@ static void test_get_skip_holes(void) assert_all_slots_cmma_dirty(vm); =20 /* Then, dirty just the TEST_DATA memslot */ - vcpu->run->psw_addr =3D orig_psw; + vcpu->common->run->psw_addr =3D orig_psw; vcpu_run(vcpu); =20 gfn_offset =3D TEST_DATA_START_GFN; diff --git a/tools/testing/selftests/kvm/s390x/debug_test.c b/tools/testing= /selftests/kvm/s390x/debug_test.c index 84313fb27529..bc9acb6fb068 100644 --- a/tools/testing/selftests/kvm/s390x/debug_test.c +++ b/tools/testing/selftests/kvm/s390x/debug_test.c @@ -47,8 +47,8 @@ static void test_step_int(void *guest_code, size_t new_ps= w_off) =20 vm =3D test_step_int_1(&vcpu, guest_code, new_psw_off, new_psw); TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_DEBUG); - TEST_ASSERT_EQ(vcpu->run->psw_mask, new_psw[0]); - TEST_ASSERT_EQ(vcpu->run->psw_addr, new_psw[1]); + TEST_ASSERT_EQ(vcpu->common->run->psw_mask, new_psw[0]); + TEST_ASSERT_EQ(vcpu->common->run->psw_addr, new_psw[1]); kvm_vm_free(vm); } =20 @@ -85,13 +85,13 @@ static void test_step_pgm_diag(void) vm =3D test_step_int_1(&vcpu, test_step_pgm_diag_guest_code, __LC_PGM_NEW_PSW, new_psw); TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_S390_SIEIC); - TEST_ASSERT_EQ(vcpu->run->s390_sieic.icptcode, ICPT_INSTRUCTION); - TEST_ASSERT_EQ(vcpu->run->s390_sieic.ipa & 0xff00, IPA0_DIAG); + TEST_ASSERT_EQ(vcpu->common->run->s390_sieic.icptcode, ICPT_INSTRUCTION); + TEST_ASSERT_EQ(vcpu->common->run->s390_sieic.ipa & 0xff00, IPA0_DIAG); vcpu_ioctl(vcpu, KVM_S390_IRQ, &irq); vcpu_run(vcpu); TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_DEBUG); - TEST_ASSERT_EQ(vcpu->run->psw_mask, new_psw[0]); - TEST_ASSERT_EQ(vcpu->run->psw_addr, new_psw[1]); + TEST_ASSERT_EQ(vcpu->common->run->psw_mask, new_psw[0]); + TEST_ASSERT_EQ(vcpu->common->run->psw_addr, new_psw[1]); kvm_vm_free(vm); } =20 diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/self= tests/kvm/s390x/memop.c index f2df7416be84..77cd0549d9d6 100644 --- a/tools/testing/selftests/kvm/s390x/memop.c +++ b/tools/testing/selftests/kvm/s390x/memop.c @@ -252,7 +252,7 @@ static struct test_default test_default_init(void *gues= t_code) t.kvm_vm =3D vm_create_with_one_vcpu(&vcpu, guest_code); t.vm =3D (struct test_info) { t.kvm_vm, NULL }; t.vcpu =3D (struct test_info) { t.kvm_vm, vcpu }; - t.run =3D vcpu->run; + t.run =3D vcpu->common->run; return t; } =20 diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/sel= ftests/kvm/s390x/resets.c index 357943f2bea8..ecb8526339f5 100644 --- a/tools/testing/selftests/kvm/s390x/resets.c +++ b/tools/testing/selftests/kvm/s390x/resets.c @@ -84,7 +84,7 @@ static void assert_noirq(struct kvm_vcpu *vcpu) =20 static void assert_clear(struct kvm_vcpu *vcpu) { - struct kvm_sync_regs *sync_regs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *sync_regs =3D &vcpu->common->run->s.regs; struct kvm_sregs sregs; struct kvm_regs regs; struct kvm_fpu fpu; @@ -111,7 +111,7 @@ static void assert_clear(struct kvm_vcpu *vcpu) =20 static void assert_initial_noclear(struct kvm_vcpu *vcpu) { - struct kvm_sync_regs *sync_regs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *sync_regs =3D &vcpu->common->run->s.regs; =20 TEST_ASSERT(sync_regs->gprs[0] =3D=3D 0xffff000000000000UL, "gpr0 =3D=3D 0xffff000000000000 (sync_regs)"); @@ -128,7 +128,7 @@ static void assert_initial_noclear(struct kvm_vcpu *vcp= u) =20 static void assert_initial(struct kvm_vcpu *vcpu) { - struct kvm_sync_regs *sync_regs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *sync_regs =3D 
&vcpu->common->run->s.regs; struct kvm_sregs sregs; struct kvm_fpu fpu; =20 @@ -156,8 +156,8 @@ static void assert_initial(struct kvm_vcpu *vcpu) TEST_ASSERT(sync_regs->gbea =3D=3D 1, "gbea =3D=3D 1 (sync_regs)"); =20 /* kvm_run */ - TEST_ASSERT(vcpu->run->psw_addr =3D=3D 0, "psw_addr =3D=3D 0 (kvm_run)"); - TEST_ASSERT(vcpu->run->psw_mask =3D=3D 0, "psw_mask =3D=3D 0 (kvm_run)"); + TEST_ASSERT(vcpu->common->run->psw_addr =3D=3D 0, "psw_addr =3D=3D 0 (kvm= _run)"); + TEST_ASSERT(vcpu->common->run->psw_mask =3D=3D 0, "psw_mask =3D=3D 0 (kvm= _run)"); =20 vcpu_fpu_get(vcpu, &fpu); TEST_ASSERT(!fpu.fpc, "fpc =3D=3D 0"); @@ -171,7 +171,7 @@ static void assert_initial(struct kvm_vcpu *vcpu) =20 static void assert_normal_noclear(struct kvm_vcpu *vcpu) { - struct kvm_sync_regs *sync_regs =3D &vcpu->run->s.regs; + struct kvm_sync_regs *sync_regs =3D &vcpu->common->run->s.regs; =20 TEST_ASSERT(sync_regs->crs[2] =3D=3D 0x10, "cr2 =3D=3D 10 (sync_regs)"); TEST_ASSERT(sync_regs->crs[8] =3D=3D 1, "cr10 =3D=3D 1 (sync_regs)"); @@ -182,7 +182,7 @@ static void assert_normal_noclear(struct kvm_vcpu *vcpu) static void assert_normal(struct kvm_vcpu *vcpu) { test_one_reg(vcpu, KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID); - TEST_ASSERT(vcpu->run->s.regs.pft =3D=3D KVM_S390_PFAULT_TOKEN_INVALID, + TEST_ASSERT(vcpu->common->run->s.regs.pft =3D=3D KVM_S390_PFAULT_TOKEN_IN= VALID, "pft =3D=3D 0xff..... (sync_regs)"); assert_noirq(vcpu); } diff --git a/tools/testing/selftests/kvm/s390x/sync_regs_test.c b/tools/tes= ting/selftests/kvm/s390x/sync_regs_test.c index 53def355ccba..cd843306788c 100644 --- a/tools/testing/selftests/kvm/s390x/sync_regs_test.c +++ b/tools/testing/selftests/kvm/s390x/sync_regs_test.c @@ -73,7 +73,7 @@ static void compare_sregs(struct kvm_sregs *left, struct = kvm_sync_regs *right) =20 void test_read_invalid(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int rv; =20 /* Request reading invalid register set from VCPU. */ @@ -94,7 +94,7 @@ void test_read_invalid(struct kvm_vcpu *vcpu) =20 void test_set_invalid(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int rv; =20 /* Request setting invalid register set into VCPU. 
*/ @@ -115,7 +115,7 @@ void test_set_invalid(struct kvm_vcpu *vcpu) =20 void test_req_and_verify_all_valid_regs(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_sregs sregs; struct kvm_regs regs; int rv; @@ -141,7 +141,7 @@ void test_req_and_verify_all_valid_regs(struct kvm_vcpu= *vcpu) =20 void test_set_and_verify_various_reg_values(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_sregs sregs; struct kvm_regs regs; int rv; @@ -180,7 +180,7 @@ void test_set_and_verify_various_reg_values(struct kvm_= vcpu *vcpu) =20 void test_clear_kvm_dirty_regs_bits(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int rv; =20 /* Clear kvm_dirty_regs bits, verify new s.regs values are diff --git a/tools/testing/selftests/kvm/s390x/tprot.c b/tools/testing/self= tests/kvm/s390x/tprot.c index 7a742a673b7c..d181e999c74c 100644 --- a/tools/testing/selftests/kvm/s390x/tprot.c +++ b/tools/testing/selftests/kvm/s390x/tprot.c @@ -214,7 +214,7 @@ int main(int argc, char *argv[]) ksft_set_plan(STAGE_END); =20 vm =3D vm_create_with_one_vcpu(&vcpu, guest_code); - run =3D vcpu->run; + run =3D vcpu->common->run; =20 HOST_SYNC(vcpu, STAGE_INIT_SIMPLE); mprotect(addr_gva2hva(vm, (vm_vaddr_t)pages), PAGE_SIZE * 2, PROT_READ); diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/t= esting/selftests/kvm/set_memory_region_test.c index bb8002084f52..458e6072c263 100644 --- a/tools/testing/selftests/kvm/set_memory_region_test.c +++ b/tools/testing/selftests/kvm/set_memory_region_test.c @@ -52,7 +52,7 @@ static inline uint64_t guest_spin_on_val(uint64_t spin_va= l) static void *vcpu_worker(void *data) { struct kvm_vcpu *vcpu =3D data; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct ucall uc; uint64_t cmd; =20 @@ -298,7 +298,7 @@ static void test_delete_memory_region(void) =20 pthread_join(vcpu_thread, NULL); =20 - run =3D vcpu->run; + run =3D vcpu->common->run; =20 TEST_ASSERT(run->exit_reason =3D=3D KVM_EXIT_SHUTDOWN || run->exit_reason =3D=3D KVM_EXIT_INTERNAL_ERROR, diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selft= ests/kvm/steal_time.c index a8d3afa0b86b..0e49c153c5c1 100644 --- a/tools/testing/selftests/kvm/steal_time.c +++ b/tools/testing/selftests/kvm/steal_time.c @@ -334,7 +334,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu) REPORT_GUEST_ASSERT(uc); default: TEST_ASSERT(false, "Unexpected exit: %s", - exit_reason_str(vcpu->run->exit_reason)); + exit_reason_str(vcpu->common->run->exit_reason)); } } =20 diff --git a/tools/testing/selftests/kvm/x86_64/cpuid_test.c b/tools/testin= g/selftests/kvm/x86_64/cpuid_test.c index 8c579ce714e9..89ae4537f160 100644 --- a/tools/testing/selftests/kvm/x86_64/cpuid_test.c +++ b/tools/testing/selftests/kvm/x86_64/cpuid_test.c @@ -119,7 +119,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, int stage) REPORT_GUEST_ASSERT(uc); default: TEST_ASSERT(false, "Unexpected exit: %s", - exit_reason_str(vcpu->run->exit_reason)); + exit_reason_str(vcpu->common->run->exit_reason)); } } =20 diff --git a/tools/testing/selftests/kvm/x86_64/debug_regs.c b/tools/testin= g/selftests/kvm/x86_64/debug_regs.c index f6b295e0b2d2..ee391b07ada3 100644 --- a/tools/testing/selftests/kvm/x86_64/debug_regs.c +++ b/tools/testing/selftests/kvm/x86_64/debug_regs.c @@ -98,7 +98,7 @@ int main(void) 
TEST_REQUIRE(kvm_has_cap(KVM_CAP_SET_GUEST_DEBUG)); =20 vm =3D vm_create_with_one_vcpu(&vcpu, guest_code); - run =3D vcpu->run; + run =3D vcpu->common->run; =20 /* Test software BPs - int3 */ memset(&debug, 0, sizeof(debug)); diff --git a/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c b/tool= s/testing/selftests/kvm/x86_64/fix_hypercall_test.c index 762628f7d4ba..b76a020b740b 100644 --- a/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c +++ b/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c @@ -88,7 +88,7 @@ KVM_ONE_VCPU_TEST_SUITE(fix_hypercall); =20 static void enter_guest(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct ucall uc; =20 vcpu_run(vcpu); diff --git a/tools/testing/selftests/kvm/x86_64/flds_emulation.h b/tools/te= sting/selftests/kvm/x86_64/flds_emulation.h index 37b1a9f52864..ebc81c02e719 100644 --- a/tools/testing/selftests/kvm/x86_64/flds_emulation.h +++ b/tools/testing/selftests/kvm/x86_64/flds_emulation.h @@ -19,7 +19,7 @@ static inline void flds(uint64_t address) =20 static inline void handle_flds_emulation_failure_exit(struct kvm_vcpu *vcp= u) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_regs regs; uint8_t *insn_bytes; uint64_t flags; diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.= c b/tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.c index 949e08e98f31..5562f19a6b60 100644 --- a/tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.c +++ b/tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.c @@ -53,7 +53,7 @@ int main(void) } =20 vm =3D vm_create_with_one_vcpu(&vcpu, guest_code); - run =3D vcpu->run; + run =3D vcpu->common->run; vcpu_set_hv_cpuid(vcpu); =20 /* Hypercall input */ diff --git a/tools/testing/selftests/kvm/x86_64/nested_exceptions_test.c b/= tools/testing/selftests/kvm/x86_64/nested_exceptions_test.c index 3eb0313ffa39..342a71434586 100644 --- a/tools/testing/selftests/kvm/x86_64/nested_exceptions_test.c +++ b/tools/testing/selftests/kvm/x86_64/nested_exceptions_test.c @@ -238,7 +238,7 @@ int main(int argc, char *argv[]) =20 /* Pend #SS and request immediate exit. #SS should still be pending. */ queue_ss_exception(vcpu, false); - vcpu->run->immediate_exit =3D true; + vcpu->common->run->immediate_exit =3D true; vcpu_run_complete_io(vcpu); =20 /* Verify the pending events comes back out the same as it went in. */ @@ -254,7 +254,7 @@ int main(int argc, char *argv[]) * Run for real with the pending #SS, L1 should get a VM-Exit due to * #SS interception and re-enter L2 to request #GP (via injected #SS). 
*/ - vcpu->run->immediate_exit =3D false; + vcpu->common->run->immediate_exit =3D false; vcpu_run(vcpu); assert_ucall_vector(vcpu, GP_VECTOR); =20 diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_tes= t.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c index 82a8d88b5338..fe887798811d 100644 --- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c +++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c @@ -288,7 +288,7 @@ static void guest_code(uint64_t base_gpa) =20 static void handle_exit_hypercall(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; uint64_t gpa =3D run->hypercall.args[0]; uint64_t size =3D run->hypercall.args[1] * PAGE_SIZE; bool set_attributes =3D run->hypercall.args[2] & MAP_GPA_SET_ATTRIBUTES; @@ -314,7 +314,7 @@ static bool run_vcpus; static void *__test_mem_conversions(void *__vcpu) { struct kvm_vcpu *vcpu =3D __vcpu; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_vm *vm =3D vcpu->vm; struct ucall uc; =20 diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.= c b/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c index 13e72fcec8dd..ad1d63aeca97 100644 --- a/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c +++ b/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c @@ -34,9 +34,9 @@ static uint32_t run_vcpu_get_exit_reason(struct kvm_vcpu = *vcpu) r =3D _vcpu_run(vcpu); if (r) { TEST_ASSERT(errno =3D=3D EFAULT, KVM_IOCTL_ERROR(KVM_RUN, r)); - TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_MEMORY_FAULT); + TEST_ASSERT_EQ(vcpu->common->run->exit_reason, KVM_EXIT_MEMORY_FAULT); } - return vcpu->run->exit_reason; + return vcpu->common->run->exit_reason; } =20 const struct vm_shape protected_vm_shape =3D { @@ -75,9 +75,9 @@ static void test_private_access_memslot_deleted(void) exit_reason =3D (uint32_t)(uint64_t)thread_return; =20 TEST_ASSERT_EQ(exit_reason, KVM_EXIT_MEMORY_FAULT); - TEST_ASSERT_EQ(vcpu->run->memory_fault.flags, KVM_MEMORY_EXIT_FLAG_PRIVAT= E); - TEST_ASSERT_EQ(vcpu->run->memory_fault.gpa, EXITS_TEST_GPA); - TEST_ASSERT_EQ(vcpu->run->memory_fault.size, EXITS_TEST_SIZE); + TEST_ASSERT_EQ(vcpu->common->run->memory_fault.flags, KVM_MEMORY_EXIT_FLA= G_PRIVATE); + TEST_ASSERT_EQ(vcpu->common->run->memory_fault.gpa, EXITS_TEST_GPA); + TEST_ASSERT_EQ(vcpu->common->run->memory_fault.size, EXITS_TEST_SIZE); =20 kvm_vm_free(vm); } @@ -104,9 +104,9 @@ static void test_private_access_memslot_not_private(voi= d) exit_reason =3D run_vcpu_get_exit_reason(vcpu); =20 TEST_ASSERT_EQ(exit_reason, KVM_EXIT_MEMORY_FAULT); - TEST_ASSERT_EQ(vcpu->run->memory_fault.flags, KVM_MEMORY_EXIT_FLAG_PRIVAT= E); - TEST_ASSERT_EQ(vcpu->run->memory_fault.gpa, EXITS_TEST_GPA); - TEST_ASSERT_EQ(vcpu->run->memory_fault.size, EXITS_TEST_SIZE); + TEST_ASSERT_EQ(vcpu->common->run->memory_fault.flags, KVM_MEMORY_EXIT_FLA= G_PRIVATE); + TEST_ASSERT_EQ(vcpu->common->run->memory_fault.gpa, EXITS_TEST_GPA); + TEST_ASSERT_EQ(vcpu->common->run->memory_fault.size, EXITS_TEST_SIZE); =20 kvm_vm_free(vm); } diff --git a/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c b/tools/t= esting/selftests/kvm/x86_64/set_boot_cpu_id.c index 49913784bc82..581b98df7cdd 100644 --- a/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c +++ b/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c @@ -81,7 +81,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu) REPORT_GUEST_ASSERT(uc); 
default: TEST_ASSERT(false, "Unexpected exit: %s", - exit_reason_str(vcpu->run->exit_reason)); + exit_reason_str(vcpu->common->run->exit_reason)); } } } diff --git a/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c b/tools/te= sting/selftests/kvm/x86_64/sev_smoke_test.c index 7c70c0da4fb7..d807e758ad7c 100644 --- a/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c +++ b/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c @@ -106,12 +106,12 @@ static void test_sync_vmsa(uint32_t policy) =20 vcpu_run(vcpu); =20 - TEST_ASSERT(vcpu->run->exit_reason =3D=3D KVM_EXIT_SYSTEM_EVENT, + TEST_ASSERT(vcpu->common->run->exit_reason =3D=3D KVM_EXIT_SYSTEM_EVENT, "Wanted SYSTEM_EVENT, got %s", - exit_reason_str(vcpu->run->exit_reason)); - TEST_ASSERT_EQ(vcpu->run->system_event.type, KVM_SYSTEM_EVENT_SEV_TERM); - TEST_ASSERT_EQ(vcpu->run->system_event.ndata, 1); - TEST_ASSERT_EQ(vcpu->run->system_event.data[0], GHCB_MSR_TERM_REQ); + exit_reason_str(vcpu->common->run->exit_reason)); + TEST_ASSERT_EQ(vcpu->common->run->system_event.type, KVM_SYSTEM_EVENT_SEV= _TERM); + TEST_ASSERT_EQ(vcpu->common->run->system_event.ndata, 1); + TEST_ASSERT_EQ(vcpu->common->run->system_event.data[0], GHCB_MSR_TERM_REQ= ); =20 compare_xsave((u8 *)&xsave, (u8 *)hva); =20 @@ -135,12 +135,12 @@ static void test_sev(void *guest_code, uint64_t polic= y) vcpu_run(vcpu); =20 if (policy & SEV_POLICY_ES) { - TEST_ASSERT(vcpu->run->exit_reason =3D=3D KVM_EXIT_SYSTEM_EVENT, + TEST_ASSERT(vcpu->common->run->exit_reason =3D=3D KVM_EXIT_SYSTEM_EVENT, "Wanted SYSTEM_EVENT, got %s", - exit_reason_str(vcpu->run->exit_reason)); - TEST_ASSERT_EQ(vcpu->run->system_event.type, KVM_SYSTEM_EVENT_SEV_TERM); - TEST_ASSERT_EQ(vcpu->run->system_event.ndata, 1); - TEST_ASSERT_EQ(vcpu->run->system_event.data[0], GHCB_MSR_TERM_REQ); + exit_reason_str(vcpu->common->run->exit_reason)); + TEST_ASSERT_EQ(vcpu->common->run->system_event.type, KVM_SYSTEM_EVENT_S= EV_TERM); + TEST_ASSERT_EQ(vcpu->common->run->system_event.ndata, 1); + TEST_ASSERT_EQ(vcpu->common->run->system_event.data[0], GHCB_MSR_TERM_R= EQ); break; } =20 @@ -153,7 +153,7 @@ static void test_sev(void *guest_code, uint64_t policy) REPORT_GUEST_ASSERT(uc); default: TEST_FAIL("Unexpected exit: %s", - exit_reason_str(vcpu->run->exit_reason)); + exit_reason_str(vcpu->common->run->exit_reason)); } } =20 diff --git a/tools/testing/selftests/kvm/x86_64/sync_regs_test.c b/tools/te= sting/selftests/kvm/x86_64/sync_regs_test.c index 8fa3948b0170..a2089abfd29a 100644 --- a/tools/testing/selftests/kvm/x86_64/sync_regs_test.c +++ b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c @@ -162,7 +162,7 @@ static void race_sync_regs(struct kvm_vcpu *vcpu, void = *racer) pthread_t thread; time_t t; =20 - run =3D vcpu->run; + run =3D vcpu->common->run; =20 run->kvm_valid_regs =3D KVM_SYNC_X86_SREGS; vcpu_run(vcpu); @@ -207,7 +207,7 @@ static void race_sync_regs(struct kvm_vcpu *vcpu, void = *racer) =20 KVM_ONE_VCPU_TEST(sync_regs_test, read_invalid, guest_code) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int rv; =20 /* Request reading invalid register set from VCPU. */ @@ -228,7 +228,7 @@ KVM_ONE_VCPU_TEST(sync_regs_test, read_invalid, guest_c= ode) =20 KVM_ONE_VCPU_TEST(sync_regs_test, set_invalid, guest_code) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int rv; =20 /* Request setting invalid register set into VCPU. 
*/ @@ -249,7 +249,7 @@ KVM_ONE_VCPU_TEST(sync_regs_test, set_invalid, guest_co= de) =20 KVM_ONE_VCPU_TEST(sync_regs_test, req_and_verify_all_valid, guest_code) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_vcpu_events events; struct kvm_sregs sregs; struct kvm_regs regs; @@ -272,7 +272,7 @@ KVM_ONE_VCPU_TEST(sync_regs_test, req_and_verify_all_va= lid, guest_code) =20 KVM_ONE_VCPU_TEST(sync_regs_test, set_and_verify_various, guest_code) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_vcpu_events events; struct kvm_sregs sregs; struct kvm_regs regs; @@ -310,7 +310,7 @@ KVM_ONE_VCPU_TEST(sync_regs_test, set_and_verify_variou= s, guest_code) =20 KVM_ONE_VCPU_TEST(sync_regs_test, clear_kvm_dirty_regs_bits, guest_code) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 /* Clear kvm_dirty_regs bits, verify new s.regs values are * overwritten with existing guest values. @@ -327,7 +327,7 @@ KVM_ONE_VCPU_TEST(sync_regs_test, clear_kvm_dirty_regs_= bits, guest_code) =20 KVM_ONE_VCPU_TEST(sync_regs_test, clear_kvm_valid_and_dirty_regs, guest_co= de) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_regs regs; =20 /* Run once to get register set */ @@ -358,7 +358,7 @@ KVM_ONE_VCPU_TEST(sync_regs_test, clear_kvm_valid_and_d= irty_regs, guest_code) =20 KVM_ONE_VCPU_TEST(sync_regs_test, clear_kvm_valid_regs_bits, guest_code) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; struct kvm_regs regs; =20 /* Run once to get register set */ diff --git a/tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c b= /tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c index 56306a19144a..bbd8344046e3 100644 --- a/tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c +++ b/tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c @@ -86,7 +86,7 @@ int main(void) } =20 vm_enable_cap(vm, KVM_CAP_X86_TRIPLE_FAULT_EVENT, 1); - run =3D vcpu->run; + run =3D vcpu->common->run; vcpu_run(vcpu); =20 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO); diff --git a/tools/testing/selftests/kvm/x86_64/tsc_msrs_test.c b/tools/tes= ting/selftests/kvm/x86_64/tsc_msrs_test.c index 12b0964f4f13..9356a4da7cec 100644 --- a/tools/testing/selftests/kvm/x86_64/tsc_msrs_test.c +++ b/tools/testing/selftests/kvm/x86_64/tsc_msrs_test.c @@ -87,7 +87,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, int stage) REPORT_GUEST_ASSERT(uc); default: TEST_ASSERT(false, "Unexpected exit: %s", - exit_reason_str(vcpu->run->exit_reason)); + exit_reason_str(vcpu->common->run->exit_reason)); } } =20 diff --git a/tools/testing/selftests/kvm/x86_64/userspace_io_test.c b/tools= /testing/selftests/kvm/x86_64/userspace_io_test.c index 9481cbcf284f..d2165b76c93a 100644 --- a/tools/testing/selftests/kvm/x86_64/userspace_io_test.c +++ b/tools/testing/selftests/kvm/x86_64/userspace_io_test.c @@ -59,7 +59,7 @@ int main(int argc, char *argv[]) struct ucall uc; =20 vm =3D vm_create_with_one_vcpu(&vcpu, guest_code); - run =3D vcpu->run; + run =3D vcpu->common->run; =20 memset(®s, 0, sizeof(regs)); =20 diff --git a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c b= /tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c index 32b2794b78fe..dc2ab9a7fc3d 100644 --- a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c +++ b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c @@ -379,7 +379,7 @@ static void 
check_for_guest_assert(struct kvm_vcpu *vcp= u) { struct ucall uc; =20 - if (vcpu->run->exit_reason =3D=3D KVM_EXIT_IO && + if (vcpu->common->run->exit_reason =3D=3D KVM_EXIT_IO && get_ucall(vcpu, &uc) =3D=3D UCALL_ABORT) { REPORT_GUEST_ASSERT(uc); } @@ -387,7 +387,7 @@ static void check_for_guest_assert(struct kvm_vcpu *vcp= u) =20 static void process_rdmsr(struct kvm_vcpu *vcpu, uint32_t msr_index) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 check_for_guest_assert(vcpu); =20 @@ -419,7 +419,7 @@ static void process_rdmsr(struct kvm_vcpu *vcpu, uint32= _t msr_index) =20 static void process_wrmsr(struct kvm_vcpu *vcpu, uint32_t msr_index) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 check_for_guest_assert(vcpu); =20 @@ -626,7 +626,7 @@ static void handle_wrmsr(struct kvm_run *run) KVM_ONE_VCPU_TEST(user_msr, msr_filter_deny, guest_code_filter_deny) { struct kvm_vm *vm =3D vcpu->vm; - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; int rc; =20 rc =3D kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR); diff --git a/tools/testing/selftests/kvm/x86_64/vmx_apic_access_test.c b/to= ols/testing/selftests/kvm/x86_64/vmx_apic_access_test.c index a81a24761aac..e36ea3afd548 100644 --- a/tools/testing/selftests/kvm/x86_64/vmx_apic_access_test.c +++ b/tools/testing/selftests/kvm/x86_64/vmx_apic_access_test.c @@ -91,7 +91,7 @@ int main(int argc, char *argv[]) vcpu_args_set(vcpu, 2, vmx_pages_gva, high_gpa); =20 while (!done) { - volatile struct kvm_run *run =3D vcpu->run; + volatile struct kvm_run *run =3D vcpu->common->run; struct ucall uc; =20 vcpu_run(vcpu); diff --git a/tools/testing/selftests/kvm/x86_64/vmx_close_while_nested_test= .c b/tools/testing/selftests/kvm/x86_64/vmx_close_while_nested_test.c index dad988351493..e2b5ac2af396 100644 --- a/tools/testing/selftests/kvm/x86_64/vmx_close_while_nested_test.c +++ b/tools/testing/selftests/kvm/x86_64/vmx_close_while_nested_test.c @@ -60,7 +60,7 @@ int main(int argc, char *argv[]) vcpu_args_set(vcpu, 1, vmx_pages_gva); =20 for (;;) { - volatile struct kvm_run *run =3D vcpu->run; + volatile struct kvm_run *run =3D vcpu->common->run; struct ucall uc; =20 vcpu_run(vcpu); diff --git a/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_= guest_state.c b/tools/testing/selftests/kvm/x86_64/vmx_exception_with_inval= id_guest_state.c index 3fd6eceab46f..f7cb3eec4a17 100644 --- a/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_s= tate.c +++ b/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_s= tate.c @@ -22,7 +22,7 @@ static void guest_code(void) =20 static void __run_vcpu_with_invalid_state(struct kvm_vcpu *vcpu) { - struct kvm_run *run =3D vcpu->run; + struct kvm_run *run =3D vcpu->common->run; =20 vcpu_run(vcpu); =20 diff --git a/tools/testing/selftests/kvm/x86_64/vmx_invalid_nested_guest_st= ate.c b/tools/testing/selftests/kvm/x86_64/vmx_invalid_nested_guest_state.c index a100ee5f0009..8006d6d11ccd 100644 --- a/tools/testing/selftests/kvm/x86_64/vmx_invalid_nested_guest_state.c +++ b/tools/testing/selftests/kvm/x86_64/vmx_invalid_nested_guest_state.c @@ -68,7 +68,7 @@ int main(int argc, char *argv[]) =20 vcpu_run(vcpu); =20 - run =3D vcpu->run; + run =3D vcpu->common->run; =20 /* * The first exit to L0 userspace should be an I/O access from L2. 
diff --git a/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c b/tools/t= esting/selftests/kvm/x86_64/xcr0_cpuid_test.c index 95ce192d0753..b1427e8835de 100644 --- a/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c +++ b/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c @@ -107,7 +107,7 @@ int main(int argc, char *argv[]) TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_XSAVE)); =20 vm =3D vm_create_with_one_vcpu(&vcpu, guest_code); - run =3D vcpu->run; + run =3D vcpu->common->run; =20 while (1) { vcpu_run(vcpu); diff --git a/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c b/tools/t= esting/selftests/kvm/x86_64/xen_vmcall_test.c index e149d0574961..30f2aee1f1e9 100644 --- a/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c +++ b/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c @@ -102,7 +102,7 @@ int main(int argc, char *argv[]) virt_map(vm, HCALL_REGION_GPA, HCALL_REGION_GPA, 2); =20 for (;;) { - volatile struct kvm_run *run =3D vcpu->run; + volatile struct kvm_run *run =3D vcpu->common->run; struct ucall uc; =20 vcpu_run(vcpu); diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c index 0ee4816b079a..a54c27b0f8c9 100644 --- a/virt/kvm/async_pf.c +++ b/virt/kvm/async_pf.c @@ -37,9 +37,9 @@ void kvm_async_pf_deinit(void) =20 void kvm_async_pf_vcpu_init(struct kvm_vcpu *vcpu) { - INIT_LIST_HEAD(&vcpu->async_pf.done); - INIT_LIST_HEAD(&vcpu->async_pf.queue); - spin_lock_init(&vcpu->async_pf.lock); + INIT_LIST_HEAD(&vcpu->common->async_pf.done); + INIT_LIST_HEAD(&vcpu->common->async_pf.queue); + spin_lock_init(&vcpu->common->async_pf.lock); } =20 static void async_pf_execute(struct work_struct *work) @@ -77,10 +77,10 @@ static void async_pf_execute(struct work_struct *work) if (IS_ENABLED(CONFIG_KVM_ASYNC_PF_SYNC)) kvm_arch_async_page_present(vcpu, apf); =20 - spin_lock(&vcpu->async_pf.lock); - first =3D list_empty(&vcpu->async_pf.done); - list_add_tail(&apf->link, &vcpu->async_pf.done); - spin_unlock(&vcpu->async_pf.lock); + spin_lock(&vcpu->common->async_pf.lock); + first =3D list_empty(&vcpu->common->async_pf.done); + list_add_tail(&apf->link, &vcpu->common->async_pf.done); + spin_unlock(&vcpu->common->async_pf.lock); =20 /* * The apf struct may be freed by kvm_check_async_pf_completion() as @@ -120,9 +120,9 @@ static void kvm_flush_and_free_async_pf_work(struct kvm= _async_pf *work) void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu) { /* cancel outstanding work queue item */ - while (!list_empty(&vcpu->async_pf.queue)) { + while (!list_empty(&vcpu->common->async_pf.queue)) { struct kvm_async_pf *work =3D - list_first_entry(&vcpu->async_pf.queue, + list_first_entry(&vcpu->common->async_pf.queue, typeof(*work), queue); list_del(&work->queue); =20 @@ -134,40 +134,40 @@ void kvm_clear_async_pf_completion_queue(struct kvm_v= cpu *vcpu) #endif } =20 - spin_lock(&vcpu->async_pf.lock); - while (!list_empty(&vcpu->async_pf.done)) { + spin_lock(&vcpu->common->async_pf.lock); + while (!list_empty(&vcpu->common->async_pf.done)) { struct kvm_async_pf *work =3D - list_first_entry(&vcpu->async_pf.done, + list_first_entry(&vcpu->common->async_pf.done, typeof(*work), link); list_del(&work->link); =20 - spin_unlock(&vcpu->async_pf.lock); + spin_unlock(&vcpu->common->async_pf.lock); kvm_flush_and_free_async_pf_work(work); - spin_lock(&vcpu->async_pf.lock); + spin_lock(&vcpu->common->async_pf.lock); } - spin_unlock(&vcpu->async_pf.lock); + spin_unlock(&vcpu->common->async_pf.lock); =20 - vcpu->async_pf.queued =3D 0; + vcpu->common->async_pf.queued =3D 0; } =20 void 
kvm_check_async_pf_completion(struct kvm_vcpu *vcpu) { struct kvm_async_pf *work; =20 - while (!list_empty_careful(&vcpu->async_pf.done) && + while (!list_empty_careful(&vcpu->common->async_pf.done) && kvm_arch_can_dequeue_async_page_present(vcpu)) { - spin_lock(&vcpu->async_pf.lock); - work =3D list_first_entry(&vcpu->async_pf.done, typeof(*work), + spin_lock(&vcpu->common->async_pf.lock); + work =3D list_first_entry(&vcpu->common->async_pf.done, typeof(*work), link); list_del(&work->link); - spin_unlock(&vcpu->async_pf.lock); + spin_unlock(&vcpu->common->async_pf.lock); =20 kvm_arch_async_page_ready(vcpu, work); if (!IS_ENABLED(CONFIG_KVM_ASYNC_PF_SYNC)) kvm_arch_async_page_present(vcpu, work); =20 list_del(&work->queue); - vcpu->async_pf.queued--; + vcpu->common->async_pf.queued--; kvm_flush_and_free_async_pf_work(work); } } @@ -181,7 +181,7 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr= 2_or_gpa, { struct kvm_async_pf *work; =20 - if (vcpu->async_pf.queued >=3D ASYNC_PF_PER_VCPU) + if (vcpu->common->async_pf.queued >=3D ASYNC_PF_PER_VCPU) return false; =20 /* Arch specific code should not do async PF in this case */ @@ -204,8 +204,8 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr= 2_or_gpa, =20 INIT_WORK(&work->work, async_pf_execute); =20 - list_add_tail(&work->queue, &vcpu->async_pf.queue); - vcpu->async_pf.queued++; + list_add_tail(&work->queue, &vcpu->common->async_pf.queue); + vcpu->common->async_pf.queued++; work->notpresent_injected =3D kvm_arch_async_page_not_present(vcpu, work); =20 schedule_work(&work->work); @@ -218,7 +218,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu) struct kvm_async_pf *work; bool first; =20 - if (!list_empty_careful(&vcpu->async_pf.done)) + if (!list_empty_careful(&vcpu->common->async_pf.done)) return 0; =20 work =3D kmem_cache_zalloc(async_pf_cache, GFP_ATOMIC); @@ -228,14 +228,14 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu) work->wakeup_all =3D true; INIT_LIST_HEAD(&work->queue); /* for list_del to work */ =20 - spin_lock(&vcpu->async_pf.lock); - first =3D list_empty(&vcpu->async_pf.done); - list_add_tail(&work->link, &vcpu->async_pf.done); - spin_unlock(&vcpu->async_pf.lock); + spin_lock(&vcpu->common->async_pf.lock); + first =3D list_empty(&vcpu->common->async_pf.done); + list_add_tail(&work->link, &vcpu->common->async_pf.done); + spin_unlock(&vcpu->common->async_pf.lock); =20 if (!IS_ENABLED(CONFIG_KVM_ASYNC_PF_SYNC) && first) kvm_arch_async_page_present_queued(vcpu); =20 - vcpu->async_pf.queued++; + vcpu->common->async_pf.queued++; return 0; } diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c index 7bc74969a819..370e64680548 100644 --- a/virt/kvm/dirty_ring.c +++ b/virt/kvm/dirty_ring.c @@ -171,7 +171,7 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_di= rty_ring *ring) =20 void kvm_dirty_ring_push(struct kvm_vcpu *vcpu, u32 slot, u64 offset) { - struct kvm_dirty_ring *ring =3D &vcpu->dirty_ring; + struct kvm_dirty_ring *ring =3D &vcpu->common->dirty_ring; struct kvm_dirty_gfn *entry; =20 /* It should never get full */ @@ -203,9 +203,9 @@ bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu) * the dirty ring is reset by userspace. 
*/ if (kvm_check_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu) && - kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) { + kvm_dirty_ring_soft_full(&vcpu->common->dirty_ring)) { kvm_make_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu); - vcpu->run->exit_reason =3D KVM_EXIT_DIRTY_RING_FULL; + vcpu->common->run->exit_reason =3D KVM_EXIT_DIRTY_RING_FULL; trace_kvm_dirty_ring_exit(vcpu); return true; } diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 92901656a0d4..d1874848862d 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -343,7 +343,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm) * - powerpc: smp_mb in kvmppc_prepare_to_enter. * * There is already an smp_mb__after_atomic() before - * kvm_make_all_cpus_request() reads vcpu->mode. We reuse that + * kvm_make_all_cpus_request() reads vcpu->common->mode. We reuse that * barrier here. */ if (!kvm_arch_flush_remote_tlbs(kvm) @@ -481,41 +481,43 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memor= y_cache *mc) =20 static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned= id) { - mutex_init(&vcpu->mutex); + vcpu->common =3D &vcpu->_common; + + mutex_init(&vcpu->common->mutex); vcpu->cpu =3D -1; vcpu->kvm =3D kvm; vcpu->vcpu_id =3D id; - vcpu->pid =3D NULL; + vcpu->common->pid =3D NULL; #ifndef __KVM_HAVE_ARCH_WQP - rcuwait_init(&vcpu->wait); + rcuwait_init(&vcpu->common->wait); #endif kvm_async_pf_vcpu_init(vcpu); =20 kvm_vcpu_set_in_spin_loop(vcpu, false); kvm_vcpu_set_dy_eligible(vcpu, false); - vcpu->preempted =3D false; - vcpu->ready =3D false; + vcpu->common->preempted =3D false; + vcpu->common->ready =3D false; preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops); - vcpu->last_used_slot =3D NULL; + vcpu->common->last_used_slot =3D NULL; =20 /* Fill the stats id string for the vcpu */ - snprintf(vcpu->stats_id, sizeof(vcpu->stats_id), "kvm-%d/vcpu-%d", + snprintf(vcpu->common->stats_id, sizeof(vcpu->common->stats_id), "kvm-%d/= vcpu-%d", task_pid_nr(current), id); } =20 static void kvm_vcpu_destroy(struct kvm_vcpu *vcpu) { kvm_arch_vcpu_destroy(vcpu); - kvm_dirty_ring_free(&vcpu->dirty_ring); + kvm_dirty_ring_free(&vcpu->common->dirty_ring); =20 /* * No need for rcu_read_lock as VCPU_RUN is the only place that changes * the vcpu->pid pointer, and at destruction time all file descriptors * are already gone. */ - put_pid(rcu_dereference_protected(vcpu->pid, 1)); + put_pid(rcu_dereference_protected(vcpu->common->pid, 1)); =20 - free_page((unsigned long)vcpu->run); + free_page((unsigned long)vcpu->common->run); kmem_cache_free(kvm_vcpu_cache, vcpu); } =20 @@ -2606,12 +2608,12 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(str= uct kvm_vcpu *vcpu, gfn_t gfn * This also protects against using a memslot from a different address sp= ace, * since different address spaces have different generation numbers. 
*/ - if (unlikely(gen !=3D vcpu->last_used_slot_gen)) { - vcpu->last_used_slot =3D NULL; - vcpu->last_used_slot_gen =3D gen; + if (unlikely(gen !=3D vcpu->common->last_used_slot_gen)) { + vcpu->common->last_used_slot =3D NULL; + vcpu->common->last_used_slot_gen =3D gen; } =20 - slot =3D try_get_memslot(vcpu->last_used_slot, gfn); + slot =3D try_get_memslot(vcpu->common->last_used_slot, gfn); if (slot) return slot; =20 @@ -2622,7 +2624,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struc= t kvm_vcpu *vcpu, gfn_t gfn */ slot =3D search_memslots(slots, gfn, false); if (slot) { - vcpu->last_used_slot =3D slot; + vcpu->common->last_used_slot =3D slot; return slot; } =20 @@ -3638,7 +3640,7 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty); =20 void kvm_sigset_activate(struct kvm_vcpu *vcpu) { - if (!vcpu->sigset_active) + if (!vcpu->common->sigset_active) return; =20 /* @@ -3647,12 +3649,12 @@ void kvm_sigset_activate(struct kvm_vcpu *vcpu) * ->real_blocked don't care as long ->real_blocked is always a subset * of ->blocked. */ - sigprocmask(SIG_SETMASK, &vcpu->sigset, ¤t->real_blocked); + sigprocmask(SIG_SETMASK, &vcpu->common->sigset, ¤t->real_blocked); } =20 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu) { - if (!vcpu->sigset_active) + if (!vcpu->common->sigset_active) return; =20 sigprocmask(SIG_SETMASK, ¤t->real_blocked, NULL); @@ -3663,7 +3665,7 @@ static void grow_halt_poll_ns(struct kvm_vcpu *vcpu) { unsigned int old, val, grow, grow_start; =20 - old =3D val =3D vcpu->halt_poll_ns; + old =3D val =3D vcpu->common->halt_poll_ns; grow_start =3D READ_ONCE(halt_poll_ns_grow_start); grow =3D READ_ONCE(halt_poll_ns_grow); if (!grow) @@ -3673,7 +3675,7 @@ static void grow_halt_poll_ns(struct kvm_vcpu *vcpu) if (val < grow_start) val =3D grow_start; =20 - vcpu->halt_poll_ns =3D val; + vcpu->common->halt_poll_ns =3D val; out: trace_kvm_halt_poll_ns_grow(vcpu->vcpu_id, val, old); } @@ -3682,7 +3684,7 @@ static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu) { unsigned int old, val, shrink, grow_start; =20 - old =3D val =3D vcpu->halt_poll_ns; + old =3D val =3D vcpu->common->halt_poll_ns; shrink =3D READ_ONCE(halt_poll_ns_shrink); grow_start =3D READ_ONCE(halt_poll_ns_grow_start); if (shrink =3D=3D 0) @@ -3693,7 +3695,7 @@ static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu) if (val < grow_start) val =3D 0; =20 - vcpu->halt_poll_ns =3D val; + vcpu->common->halt_poll_ns =3D val; trace_kvm_halt_poll_ns_shrink(vcpu->vcpu_id, val, old); } =20 @@ -3727,7 +3729,7 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu) struct rcuwait *wait =3D kvm_arch_vcpu_get_wait(vcpu); bool waited =3D false; =20 - vcpu->stat.generic.blocking =3D 1; + vcpu->common->stat.generic.blocking =3D 1; =20 preempt_disable(); kvm_arch_vcpu_blocking(vcpu); @@ -3749,7 +3751,7 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu) kvm_arch_vcpu_unblocking(vcpu); preempt_enable(); =20 - vcpu->stat.generic.blocking =3D 0; + vcpu->common->stat.generic.blocking =3D 0; =20 return waited; } @@ -3757,16 +3759,16 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu) static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t s= tart, ktime_t end, bool success) { - struct kvm_vcpu_stat_generic *stats =3D &vcpu->stat.generic; + struct kvm_vcpu_stat_generic *stats =3D &vcpu->common->stat.generic; u64 poll_ns =3D ktime_to_ns(ktime_sub(end, start)); =20 - ++vcpu->stat.generic.halt_attempted_poll; + ++vcpu->common->stat.generic.halt_attempted_poll; =20 if (success) { - ++vcpu->stat.generic.halt_successful_poll; + 
++vcpu->common->stat.generic.halt_successful_poll; =20 if (!vcpu_valid_wakeup(vcpu)) - ++vcpu->stat.generic.halt_poll_invalid; + ++vcpu->common->stat.generic.halt_poll_invalid; =20 stats->halt_poll_success_ns +=3D poll_ns; KVM_STATS_LOG_HIST_UPDATE(stats->halt_poll_success_hist, poll_ns); @@ -3809,14 +3811,14 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu) bool do_halt_poll; u64 halt_ns; =20 - if (vcpu->halt_poll_ns > max_halt_poll_ns) - vcpu->halt_poll_ns =3D max_halt_poll_ns; + if (vcpu->common->halt_poll_ns > max_halt_poll_ns) + vcpu->common->halt_poll_ns =3D max_halt_poll_ns; =20 - do_halt_poll =3D halt_poll_allowed && vcpu->halt_poll_ns; + do_halt_poll =3D halt_poll_allowed && vcpu->common->halt_poll_ns; =20 start =3D cur =3D poll_end =3D ktime_get(); if (do_halt_poll) { - ktime_t stop =3D ktime_add_ns(start, vcpu->halt_poll_ns); + ktime_t stop =3D ktime_add_ns(start, vcpu->common->halt_poll_ns); =20 do { if (kvm_vcpu_check_block(vcpu) < 0) @@ -3830,9 +3832,9 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu) =20 cur =3D ktime_get(); if (waited) { - vcpu->stat.generic.halt_wait_ns +=3D + vcpu->common->stat.generic.halt_wait_ns +=3D ktime_to_ns(cur) - ktime_to_ns(poll_end); - KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.generic.halt_wait_hist, + KVM_STATS_LOG_HIST_UPDATE(vcpu->common->stat.generic.halt_wait_hist, ktime_to_ns(cur) - ktime_to_ns(poll_end)); } out: @@ -3854,18 +3856,18 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu) if (!vcpu_valid_wakeup(vcpu)) { shrink_halt_poll_ns(vcpu); } else if (max_halt_poll_ns) { - if (halt_ns <=3D vcpu->halt_poll_ns) + if (halt_ns <=3D vcpu->common->halt_poll_ns) ; /* we had a long block, shrink polling */ - else if (vcpu->halt_poll_ns && + else if (vcpu->common->halt_poll_ns && halt_ns > max_halt_poll_ns) shrink_halt_poll_ns(vcpu); /* we had a short halt and our poll time is too small */ - else if (vcpu->halt_poll_ns < max_halt_poll_ns && + else if (vcpu->common->halt_poll_ns < max_halt_poll_ns && halt_ns < max_halt_poll_ns) grow_halt_poll_ns(vcpu); } else { - vcpu->halt_poll_ns =3D 0; + vcpu->common->halt_poll_ns =3D 0; } } =20 @@ -3876,8 +3878,8 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_halt); bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu) { if (__kvm_vcpu_wake_up(vcpu)) { - WRITE_ONCE(vcpu->ready, true); - ++vcpu->stat.generic.halt_wakeup; + WRITE_ONCE(vcpu->common->ready, true); + ++vcpu->common->stat.generic.halt_wakeup; return true; } =20 @@ -3904,8 +3906,8 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu) * within the vCPU thread itself. 
*/ if (vcpu =3D=3D __this_cpu_read(kvm_running_vcpu)) { - if (vcpu->mode =3D=3D IN_GUEST_MODE) - WRITE_ONCE(vcpu->mode, EXITING_GUEST_MODE); + if (vcpu->common->mode =3D=3D IN_GUEST_MODE) + WRITE_ONCE(vcpu->common->mode, EXITING_GUEST_MODE); goto out; } =20 @@ -3934,7 +3936,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target) int ret =3D 0; =20 rcu_read_lock(); - pid =3D rcu_dereference(target->pid); + pid =3D rcu_dereference(target->common->pid); if (pid) task =3D get_pid_task(pid, PIDTYPE_PID); rcu_read_unlock(); @@ -3974,11 +3976,11 @@ static bool kvm_vcpu_eligible_for_directed_yield(st= ruct kvm_vcpu *vcpu) #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT bool eligible; =20 - eligible =3D !vcpu->spin_loop.in_spin_loop || - vcpu->spin_loop.dy_eligible; + eligible =3D !vcpu->common->spin_loop.in_spin_loop || + vcpu->common->spin_loop.dy_eligible; =20 - if (vcpu->spin_loop.in_spin_loop) - kvm_vcpu_set_dy_eligible(vcpu, !vcpu->spin_loop.dy_eligible); + if (vcpu->common->spin_loop.in_spin_loop) + kvm_vcpu_set_dy_eligible(vcpu, !vcpu->common->spin_loop.dy_eligible); =20 return eligible; #else @@ -4002,7 +4004,7 @@ static bool vcpu_dy_runnable(struct kvm_vcpu *vcpu) return true; =20 #ifdef CONFIG_KVM_ASYNC_PF - if (!list_empty_careful(&vcpu->async_pf.done)) + if (!list_empty_careful(&vcpu->common->async_pf.done)) return true; #endif =20 @@ -4052,7 +4054,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield= _to_kernel_mode) continue; } else if (pass && i > last_boosted_vcpu) break; - if (!READ_ONCE(vcpu->ready)) + if (!READ_ONCE(vcpu->common->ready)) continue; if (vcpu =3D=3D me) continue; @@ -4065,7 +4067,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield= _to_kernel_mode) * be spinning waiting on IPI delivery, i.e. the target * vCPU is in-kernel for the purposes of directed yield. 
*/ - if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode && + if (READ_ONCE(vcpu->common->preempted) && yield_to_kernel_mode && !kvm_arch_dy_has_pending_interrupt(vcpu) && !kvm_arch_vcpu_preempted_in_kernel(vcpu)) continue; @@ -4107,7 +4109,7 @@ static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf) struct page *page; =20 if (vmf->pgoff =3D=3D 0) - page =3D virt_to_page(vcpu->run); + page =3D virt_to_page(vcpu->common->run); #ifdef CONFIG_X86 else if (vmf->pgoff =3D=3D KVM_PIO_PAGE_OFFSET) page =3D virt_to_page(vcpu->arch.pio_data); @@ -4118,7 +4120,7 @@ static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf) #endif else if (kvm_page_in_dirty_ring(vcpu->kvm, vmf->pgoff)) page =3D kvm_dirty_ring_get_page( - &vcpu->dirty_ring, + &vcpu->common->dirty_ring, vmf->pgoff - KVM_DIRTY_LOG_PAGE_OFFSET); else return kvm_arch_vcpu_fault(vcpu, vmf); @@ -4178,7 +4180,7 @@ static int vcpu_get_pid(void *data, u64 *val) struct kvm_vcpu *vcpu =3D data; =20 rcu_read_lock(); - *val =3D pid_nr(rcu_dereference(vcpu->pid)); + *val =3D pid_nr(rcu_dereference(vcpu->common->pid)); rcu_read_unlock(); return 0; } @@ -4245,22 +4247,22 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm= , unsigned long id) goto vcpu_decrement; } =20 + kvm_vcpu_init(vcpu, kvm, id); + BUILD_BUG_ON(sizeof(struct kvm_run) > PAGE_SIZE); page =3D alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); if (!page) { r =3D -ENOMEM; goto vcpu_free; } - vcpu->run =3D page_address(page); - - kvm_vcpu_init(vcpu, kvm, id); + vcpu->common->run =3D page_address(page); =20 r =3D kvm_arch_vcpu_create(vcpu); if (r) goto vcpu_free_run_page; =20 if (kvm->dirty_ring_size) { - r =3D kvm_dirty_ring_alloc(&vcpu->dirty_ring, + r =3D kvm_dirty_ring_alloc(&vcpu->common->dirty_ring, id, kvm->dirty_ring_size); if (r) goto arch_vcpu_destroy; @@ -4269,9 +4271,9 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, = unsigned long id) mutex_lock(&kvm->lock); =20 #ifdef CONFIG_LOCKDEP - /* Ensure that lockdep knows vcpu->mutex is taken *inside* kvm->lock */ - mutex_lock(&vcpu->mutex); - mutex_unlock(&vcpu->mutex); + /* Ensure that lockdep knows vcpu->common->mutex is taken *inside* kvm->l= ock */ + mutex_lock(&vcpu->common->mutex); + mutex_unlock(&vcpu->common->mutex); #endif =20 if (kvm_get_vcpu_by_id(kvm, id)) { @@ -4312,11 +4314,11 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm= , unsigned long id) xa_release(&kvm->vcpu_array, vcpu->vcpu_idx); unlock_vcpu_destroy: mutex_unlock(&kvm->lock); - kvm_dirty_ring_free(&vcpu->dirty_ring); + kvm_dirty_ring_free(&vcpu->common->dirty_ring); arch_vcpu_destroy: kvm_arch_vcpu_destroy(vcpu); vcpu_free_run_page: - free_page((unsigned long)vcpu->run); + free_page((unsigned long)vcpu->common->run); vcpu_free: kmem_cache_free(kvm_vcpu_cache, vcpu); vcpu_decrement: @@ -4330,10 +4332,10 @@ static int kvm_vcpu_ioctl_set_sigmask(struct kvm_vc= pu *vcpu, sigset_t *sigset) { if (sigset) { sigdelsetmask(sigset, sigmask(SIGKILL)|sigmask(SIGSTOP)); - vcpu->sigset_active =3D 1; - vcpu->sigset =3D *sigset; + vcpu->common->sigset_active =3D 1; + vcpu->common->sigset =3D *sigset; } else - vcpu->sigset_active =3D 0; + vcpu->common->sigset_active =3D 0; return 0; } =20 @@ -4342,9 +4344,9 @@ static ssize_t kvm_vcpu_stats_read(struct file *file,= char __user *user_buffer, { struct kvm_vcpu *vcpu =3D file->private_data; =20 - return kvm_stats_read(vcpu->stats_id, &kvm_vcpu_stats_header, - &kvm_vcpu_stats_desc[0], &vcpu->stat, - sizeof(vcpu->stat), user_buffer, size, offset); + return kvm_stats_read(vcpu->common->stats_id, &kvm_vcpu_stats_header, + 
&kvm_vcpu_stats_desc[0], &vcpu->common->stat, + sizeof(vcpu->common->stat), user_buffer, size, offset); } =20 static int kvm_vcpu_stats_release(struct inode *inode, struct file *file) @@ -4457,7 +4459,7 @@ static long kvm_vcpu_ioctl(struct file *filp, if (r !=3D -ENOIOCTLCMD) return r; =20 - if (mutex_lock_killable(&vcpu->mutex)) + if (mutex_lock_killable(&vcpu->common->mutex)) return -EINTR; switch (ioctl) { case KVM_RUN: { @@ -4465,7 +4467,7 @@ static long kvm_vcpu_ioctl(struct file *filp, r =3D -EINVAL; if (arg) goto out; - oldpid =3D rcu_access_pointer(vcpu->pid); + oldpid =3D rcu_access_pointer(vcpu->common->pid); if (unlikely(oldpid !=3D task_pid(current))) { /* The thread running this VCPU changed. */ struct pid *newpid; @@ -4475,16 +4477,16 @@ static long kvm_vcpu_ioctl(struct file *filp, break; =20 newpid =3D get_task_pid(current, PIDTYPE_PID); - rcu_assign_pointer(vcpu->pid, newpid); + rcu_assign_pointer(vcpu->common->pid, newpid); if (oldpid) synchronize_rcu(); put_pid(oldpid); } - vcpu->wants_to_run =3D !READ_ONCE(vcpu->run->immediate_exit__unsafe); + vcpu->common->wants_to_run =3D !READ_ONCE(vcpu->common->run->immediate_e= xit__unsafe); r =3D kvm_arch_vcpu_ioctl_run(vcpu); - vcpu->wants_to_run =3D false; + vcpu->common->wants_to_run =3D false; =20 - trace_kvm_userspace_exit(vcpu->run->exit_reason, r); + trace_kvm_userspace_exit(vcpu->common->run->exit_reason, r); break; } case KVM_GET_REGS: { @@ -4655,7 +4657,7 @@ static long kvm_vcpu_ioctl(struct file *filp, r =3D kvm_arch_vcpu_ioctl(filp, ioctl, arg); } out: - mutex_unlock(&vcpu->mutex); + mutex_unlock(&vcpu->common->mutex); kfree(fpu); kfree(kvm_sregs); return r; @@ -4993,7 +4995,7 @@ static int kvm_vm_ioctl_reset_dirty_pages(struct kvm = *kvm) mutex_lock(&kvm->slots_lock); =20 kvm_for_each_vcpu(i, vcpu, kvm) - cleared +=3D kvm_dirty_ring_reset(vcpu->kvm, &vcpu->dirty_ring); + cleared +=3D kvm_dirty_ring_reset(vcpu->kvm, &vcpu->common->dirty_ring); =20 mutex_unlock(&kvm->slots_lock); =20 @@ -6122,7 +6124,7 @@ static int kvm_get_stat_per_vcpu(struct kvm *kvm, siz= e_t offset, u64 *val) *val =3D 0; =20 kvm_for_each_vcpu(i, vcpu, kvm) - *val +=3D *(u64 *)((void *)(&vcpu->stat) + offset); + *val +=3D *(u64 *)((void *)(&vcpu->common->stat) + offset); =20 return 0; } @@ -6133,7 +6135,7 @@ static int kvm_clear_stat_per_vcpu(struct kvm *kvm, s= ize_t offset) struct kvm_vcpu *vcpu; =20 kvm_for_each_vcpu(i, vcpu, kvm) - *(u64 *)((void *)(&vcpu->stat) + offset) =3D 0; + *(u64 *)((void *)(&vcpu->common->stat) + offset) =3D 0; =20 return 0; } @@ -6359,13 +6361,13 @@ static void kvm_sched_in(struct preempt_notifier *p= n, int cpu) { struct kvm_vcpu *vcpu =3D preempt_notifier_to_vcpu(pn); =20 - WRITE_ONCE(vcpu->preempted, false); - WRITE_ONCE(vcpu->ready, false); + WRITE_ONCE(vcpu->common->preempted, false); + WRITE_ONCE(vcpu->common->ready, false); =20 __this_cpu_write(kvm_running_vcpu, vcpu); kvm_arch_vcpu_load(vcpu, cpu); =20 - WRITE_ONCE(vcpu->scheduled_out, false); + WRITE_ONCE(vcpu->common->scheduled_out, false); } =20 static void kvm_sched_out(struct preempt_notifier *pn, @@ -6373,11 +6375,11 @@ static void kvm_sched_out(struct preempt_notifier *= pn, { struct kvm_vcpu *vcpu =3D preempt_notifier_to_vcpu(pn); =20 - WRITE_ONCE(vcpu->scheduled_out, true); + WRITE_ONCE(vcpu->common->scheduled_out, true); =20 - if (current->on_rq && vcpu->wants_to_run) { - WRITE_ONCE(vcpu->preempted, true); - WRITE_ONCE(vcpu->ready, true); + if (current->on_rq && vcpu->common->wants_to_run) { + WRITE_ONCE(vcpu->common->preempted, true); + 
WRITE_ONCE(vcpu->common->ready, true); } kvm_arch_vcpu_put(vcpu); __this_cpu_write(kvm_running_vcpu, NULL); @@ -6477,7 +6479,7 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align,= struct module *module) kmem_cache_create_usercopy("kvm_vcpu", vcpu_size, vcpu_align, SLAB_ACCOUNT, offsetof(struct kvm_vcpu, arch), - offsetofend(struct kvm_vcpu, stats_id) + offsetofend(struct kvm_vcpu, _common) - offsetof(struct kvm_vcpu, arch), NULL); if (!kvm_vcpu_cache) { --=20 2.43.0 From nobody Fri Nov 29 18:32:48 2024 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A723915852F; Mon, 16 Sep 2024 18:18:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726510693; cv=none; b=Wm9duvPcSMvxsqlBrSB5MhfvSfWQycNe9FgJfgNASkzr7krorYsDdvjgWLKePK7g6BeXScxOFQDgrvnutmyQUwFCLAtsrMWh+TZl6NRLyyBdoEYFIgmFJLS+M5Dt2YwdG/zIvDAngDsa1Ql8tRUHwtstrf6Sdu90ntZOjeNiaZ8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726510693; c=relaxed/simple; bh=r9BF4AQb3kqUUUTA/9bwhkryT/pPkfeJ8RGINnvT0Y4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=UhfVHg9HDv72nbia/YPGcVYjscXMZsPLpWvXfeejVQh40P0mY4Ahsuhbm2vCQahVYg6+XTafucy1Ea/5mxlKDD68AgLuywS+jdTn6uYxw9ncAK46NVjGzj2rHcQ4t4p1bL4S8SKMM4BHhyx8+4sr0SPmtq/kn5BplwYgJsWPPVw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=BKEMIx7U; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=SeELWg/o; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="BKEMIx7U"; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="SeELWg/o" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id E58D421C27; Mon, 16 Sep 2024 18:18:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1726510689; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=o85+3T4JTGkyZbZYuD7FdFCNPlWLeI8TY7nbTcQx2e8=; b=BKEMIx7UrL8npzWipxAo2ZseNdZ6Dclhz/LZABOYJKg3TnT9vPbdGYDeyzv/4kbJoAtAUs qaivZ0e+2phYAJSxSrjZJj2i0BiiVcRX7GB15024Bm6t/2rxCpqIuQvnWq0R4OBbqB5EJU hPLsdSYve3cAXtxC9A+e7IVZTuvr2EI= Authentication-Results: smtp-out1.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1726510688; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=o85+3T4JTGkyZbZYuD7FdFCNPlWLeI8TY7nbTcQx2e8=; 
b=SeELWg/odUOEW6TFlBs7+tNHwWnIWZLbeA3NEGdtQt7Sy6jpu+SXHghhWJXsi5er3QL2up L1BQ0XOaou13FTMS5TU9pv26ANETsnFTiVVr87LP2fPCKq3sp6GYJ7hsek79l3b9rGniQZ bAl4cFpNvdoBsISlI8ZH09W6dYhXREQ= Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 50F65139CE; Mon, 16 Sep 2024 18:18:08 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id uCG6EWB26GbveAAAD6G6ig (envelope-from ); Mon, 16 Sep 2024 18:18:08 +0000 From: Roy Hopkins To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev Cc: Roy Hopkins , Paolo Bonzini , Sean Christopherson , Borislav Petkov , Dave Hansen , Ingo Molnar , Thomas Gleixner , Michael Roth , Ashish Kalra , Joerg Roedel , Tom Lendacky Subject: [RFC PATCH 2/5] x86/kvm: Create a child struct kvm_vcpu for each VMPL Date: Mon, 16 Sep 2024 19:17:54 +0100 Message-ID: <7f939d682286d2abc5b22f7f21baa0e23f9f18ec.1726506534.git.roy.hopkins@suse.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Spam-Level: X-Spamd-Result: default: False [-6.80 / 50.00]; REPLY(-4.00)[]; BAYES_HAM(-3.00)[100.00%]; MID_CONTAINS_FROM(1.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_MISSING_CHARSET(0.50)[]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; RCVD_COUNT_TWO(0.00)[2]; FROM_HAS_DN(0.00)[]; ARC_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[15]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,suse.com:email,suse.com:mid]; FUZZY_BLOCKED(0.00)[rspamd.com]; FROM_EQ_ENVFROM(0.00)[]; DKIM_SIGNED(0.00)[suse.com:s=susede1]; R_RATELIMIT(0.00)[to_ip_from(RLh8t8sqpgocps1pdp1zxxqsw5)]; RCVD_TLS_ALL(0.00)[] X-Spam-Score: -6.80 X-Spam-Flag: NO Content-Type: text/plain; charset="utf-8" Isolation technologies such as SEV-SNP introduce the concept of virtual machine privilege levels (VMPLs), separate to the processor CPL. A guest runs in the context of one of these VMPLs which allows for a different register context, memory privileges, etc. KVM must maintain state for each supported VMPL and switch between these states before entering the guest based on guest requests or other factors. This patch introduces the ability to create multiple struct kvm_cpus: one for each VMPL related to a single vCPU. This is achieved by introducing a new structure, struct kvm_vcpu_vmpl_state that is included in struct kvm_vcpu to track the state of each VMPL for supported platforms (currently only SEV-SNP). The state for each VMPL is then stored in its own struct kvm_vcpu. State that is common to all VMPL kvm_vcpus is managed by vcpu->common, allowing a pointer to the common fields to be shared amongst all VMPL kvm_vcpu's for a single vCPU id. The patch supports switching VMPLs by changing the target_vmpl in the state structure. However, no code to generate a VMPL switch invokes this at present. 
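For illustration, below is a minimal, self-contained sketch of the arrangement described above: one kvm_vcpu instance per VMPL, all sharing a single common state block, with an outer run loop that re-enters the guest whenever target_vmpl differs from current_vmpl. The vcpu_vmpl[], max_vmpl, current_vmpl and target_vmpl names mirror the fields this patch introduces; the sketch_* types, enter_guest() and run_vcpu() helpers are simplified stand-ins for the real kvm_vcpu, guest entry and kvm_arch_vcpu_ioctl_run() paths, not kernel code.

#include <stdio.h>

#define MAX_VMPL 4

struct sketch_vcpu {
	int vmpl;		/* privilege level this kvm_vcpu instance backs */
	void *common;		/* state shared by all VMPLs of one vCPU id */
};

struct sketch_vmpl_state {
	struct sketch_vcpu *vcpu_vmpl[MAX_VMPL];	/* one kvm_vcpu per VMPL */
	int max_vmpl;
	int current_vmpl;	/* VMPL the guest last ran at */
	int target_vmpl;	/* VMPL requested for the next guest entry */
};

/* Stand-in for one guest entry/exit at a single VMPL; a real exit
 * handler would update state->target_vmpl when a switch is requested. */
static int enter_guest(struct sketch_vmpl_state *state, struct sketch_vcpu *vcpu)
{
	printf("entering guest at VMPL%d\n", vcpu->vmpl);
	return 0;
}

/* Outer run loop: keep re-entering until no further VMPL switch is pending. */
static int run_vcpu(struct sketch_vmpl_state *state)
{
	int r;

	for (;;) {
		struct sketch_vcpu *vcpu;

		/* Select the kvm_vcpu backing the requested VMPL. */
		state->current_vmpl = state->target_vmpl;
		vcpu = state->vcpu_vmpl[state->current_vmpl];

		r = enter_guest(state, vcpu);
		if (r < 0 || state->target_vmpl == state->current_vmpl)
			break;
	}
	return r;
}

int main(void)
{
	struct sketch_vcpu vmpl0 = { .vmpl = 0 };
	struct sketch_vcpu vmpl1 = { .vmpl = 1 };
	struct sketch_vmpl_state state = {
		.vcpu_vmpl = { &vmpl0, &vmpl1 },
		.max_vmpl = 1,
	};

	return run_vcpu(&state);
}

In the patch itself this loop lives in kvm_arch_vcpu_ioctl_run(), which additionally synchronises shared state from VMPL0 before entering a non-zero VMPL; as noted above, nothing yet changes target_vmpl, so the loop currently runs a single iteration.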
Signed-off-by: Roy Hopkins --- arch/x86/kvm/cpuid.c | 78 +++++++++++++------------ arch/x86/kvm/trace.h | 12 +++- arch/x86/kvm/x86.c | 58 +++++++++++++++++-- include/linux/kvm_host.h | 17 ++++++ include/trace/events/kvm.h | 48 +++++++++++++++ virt/kvm/kvm_main.c | 116 +++++++++++++++++++++++++------------ 6 files changed, 249 insertions(+), 80 deletions(-) diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 2617be544480..4dc5ac431e97 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -437,51 +437,55 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struc= t kvm_cpuid_entry2 *e2, int nent) { int r; + int vtl; + struct kvm_vcpu_vmpl_state *vcpu_parent =3D vcpu->vcpu_parent; =20 - __kvm_update_cpuid_runtime(vcpu, e2, nent); + for (vtl =3D 0; vtl <=3D vcpu_parent->max_vmpl; ++vtl) { + vcpu =3D vcpu_parent->vcpu_vmpl[vtl]; + __kvm_update_cpuid_runtime(vcpu, e2, nent); =20 - /* - * KVM does not correctly handle changing guest CPUID after KVM_RUN, as - * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't - * tracked in kvm_mmu_page_role. As a result, KVM may miss guest page - * faults due to reusing SPs/SPTEs. In practice no sane VMM mucks with - * the core vCPU model on the fly. It would've been better to forbid any - * KVM_SET_CPUID{,2} calls after KVM_RUN altogether but unfortunately - * some VMMs (e.g. QEMU) reuse vCPU fds for CPU hotplug/unplug and do - * KVM_SET_CPUID{,2} again. To support this legacy behavior, check - * whether the supplied CPUID data is equal to what's already set. - */ - if (kvm_vcpu_has_run(vcpu)) { - r =3D kvm_cpuid_check_equal(vcpu, e2, nent); - if (r) - return r; - - kvfree(e2); - return 0; - } + /* + * KVM does not correctly handle changing guest CPUID after KVM_RUN, as + * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't + * tracked in kvm_mmu_page_role. As a result, KVM may miss guest page + * faults due to reusing SPs/SPTEs. In practice no sane VMM mucks with + * the core vCPU model on the fly. It would've been better to forbid any + * KVM_SET_CPUID{,2} calls after KVM_RUN altogether but unfortunately + * some VMMs (e.g. QEMU) reuse vCPU fds for CPU hotplug/unplug and do + * KVM_SET_CPUID{,2} again. To support this legacy behavior, check + * whether the supplied CPUID data is equal to what's already set. 
+ */ + if (kvm_vcpu_has_run(vcpu)) { + r =3D kvm_cpuid_check_equal(vcpu, e2, nent); + if (r) + return r; + + kvfree(e2); + return 0; + } =20 #ifdef CONFIG_KVM_HYPERV - if (kvm_cpuid_has_hyperv(e2, nent)) { - r =3D kvm_hv_vcpu_init(vcpu); - if (r) - return r; - } + if (kvm_cpuid_has_hyperv(e2, nent)) { + r =3D kvm_hv_vcpu_init(vcpu); + if (r) + return r; + } #endif =20 - r =3D kvm_check_cpuid(vcpu, e2, nent); - if (r) - return r; - - kvfree(vcpu->arch.cpuid_entries); - vcpu->arch.cpuid_entries =3D e2; - vcpu->arch.cpuid_nent =3D nent; + r =3D kvm_check_cpuid(vcpu, e2, nent); + if (r) + return r; =20 - vcpu->arch.kvm_cpuid =3D kvm_get_hypervisor_cpuid(vcpu, KVM_SIGNATURE); -#ifdef CONFIG_KVM_XEN - vcpu->arch.xen.cpuid =3D kvm_get_hypervisor_cpuid(vcpu, XEN_SIGNATURE); -#endif - kvm_vcpu_after_set_cpuid(vcpu); + kvfree(vcpu->arch.cpuid_entries); + vcpu->arch.cpuid_entries =3D e2; + vcpu->arch.cpuid_nent =3D nent; =20 + vcpu->arch.kvm_cpuid =3D kvm_get_hypervisor_cpuid(vcpu, KVM_SIGNATURE); + #ifdef CONFIG_KVM_XEN + vcpu->arch.xen.cpuid =3D kvm_get_hypervisor_cpuid(vcpu, XEN_SIGNATURE); + #endif + kvm_vcpu_after_set_cpuid(vcpu); + } return 0; } =20 diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h index d3aeffd6ae75..882b8d2356c6 100644 --- a/arch/x86/kvm/trace.h +++ b/arch/x86/kvm/trace.h @@ -20,18 +20,26 @@ TRACE_EVENT(kvm_entry, =20 TP_STRUCT__entry( __field( unsigned int, vcpu_id ) + __field( unsigned int, vcpu_vmpl ) + __field( unsigned int, current_vmpl ) + __field( unsigned int, target_vmpl ) __field( unsigned long, rip ) __field( bool, immediate_exit ) ), =20 TP_fast_assign( __entry->vcpu_id =3D vcpu->vcpu_id; + __entry->vcpu_vmpl =3D vcpu->vmpl; + __entry->current_vmpl =3D vcpu->vcpu_parent->current_vmpl; + __entry->target_vmpl =3D vcpu->vcpu_parent->target_vmpl; __entry->rip =3D kvm_rip_read(vcpu); __entry->immediate_exit =3D force_immediate_exit; ), =20 - TP_printk("vcpu %u, rip 0x%lx%s", __entry->vcpu_id, __entry->rip, - __entry->immediate_exit ? "[immediate exit]" : "") + TP_printk("vcpu %u, rip 0x%lx%s, vcpu_vmpl %d, current_vmpl %d, target_vm= pl %d", + __entry->vcpu_id, __entry->rip, __entry->immediate_exit ? "[immediate = exit]" : "", + __entry->vcpu_vmpl, __entry->current_vmpl, __entry->target_vmpl + ) ); =20 /* diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index e646b4042963..cc2f62b4cf76 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -11299,6 +11299,7 @@ static inline bool kvm_vcpu_running(struct kvm_vcpu= *vcpu) static int vcpu_run(struct kvm_vcpu *vcpu) { int r; + struct kvm_vcpu_vmpl_state *vcpu_parent =3D vcpu->vcpu_parent; =20 vcpu->common->run->exit_reason =3D KVM_EXIT_UNKNOWN; =20 @@ -11341,6 +11342,10 @@ static int vcpu_run(struct kvm_vcpu *vcpu) if (r) return r; } + + /* If the exit code results in a VTL switch then let the caller handle i= t */ + if (vcpu_parent->target_vmpl !=3D vcpu_parent->current_vmpl) + break; } =20 return r; @@ -11437,7 +11442,7 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) trace_kvm_fpu(0); } =20 -int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) +static int kvm_arch_vcpu_ioctl_run_vtl(struct kvm_vcpu *vcpu) { struct kvm_queued_exception *ex =3D &vcpu->arch.exception; struct kvm_run *kvm_run =3D vcpu->common->run; @@ -11560,6 +11565,43 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) return r; } =20 +static void kvm_sync_vcpu(struct kvm_vcpu *src, struct kvm_vcpu *dst) { + /*=20 + * TODO: This sync should not be necessary if VMPL common fields + * have been setup correctly. 
This is just a workaround for now. + */ + dst->arch.cpuid_nent =3D src->arch.cpuid_nent; + dst->arch.cpuid_entries =3D src->arch.cpuid_entries; + dst->arch.kvm_cpuid =3D src->arch.kvm_cpuid; +} + +int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) +{ + int r; + struct kvm_vcpu_vmpl_state *vcpu_parent =3D vcpu->vcpu_parent; + struct kvm_vcpu *vcpu_current_vtl; + + for (;;) { + /* Select the correct structure for the current VTL */ + vcpu_parent->current_vmpl =3D vcpu_parent->target_vmpl; + vcpu_current_vtl =3D vcpu_parent->vcpu_vmpl[vcpu_parent->current_vmpl]; + + /* Synchronise shared state from VTL0 to non-zero VTLs */ + if (vcpu_parent->vcpu_vmpl[0] !=3D vcpu_current_vtl) { + kvm_sync_vcpu(vcpu_parent->vcpu_vmpl[0], vcpu_current_vtl); + } + + r =3D kvm_arch_vcpu_ioctl_run_vtl(vcpu_current_vtl); + if ((r < 0) || (vcpu_parent->current_vmpl =3D=3D vcpu_parent->target_vmp= l)) { + break; + } + /* Continue around again if there is a VTL switch */ + trace_kvm_arch_vcpu_ioctl_run_vtl_switch(vcpu_parent); + }=20 + + return r; +} + static void __get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { if (vcpu->arch.emulate_regs_need_sync_to_vcpu) { @@ -12246,7 +12288,10 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) page =3D alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); if (!page) goto fail_free_lapic; - vcpu->arch.pio_data =3D page_address(page); + if (vcpu->vmpl =3D=3D 0) + vcpu->arch.pio_data =3D page_address(page); + else + vcpu->arch.pio_data =3D vcpu->vcpu_parent->vcpu_vmpl[0]->arch.pio_data; =20 vcpu->arch.mce_banks =3D kcalloc(KVM_MAX_MCE_BANKS * 4, sizeof(u64), GFP_KERNEL_ACCOUNT); @@ -12308,7 +12353,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) fail_free_mce_banks: kfree(vcpu->arch.mce_banks); kfree(vcpu->arch.mci_ctl2_banks); - free_page((unsigned long)vcpu->arch.pio_data); + if (vcpu->vmpl =3D=3D 0) + free_page((unsigned long)vcpu->arch.pio_data); fail_free_lapic: kvm_free_lapic(vcpu); fail_mmu_destroy: @@ -12357,8 +12403,10 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) idx =3D srcu_read_lock(&vcpu->kvm->srcu); kvm_mmu_destroy(vcpu); srcu_read_unlock(&vcpu->kvm->srcu, idx); - free_page((unsigned long)vcpu->arch.pio_data); - kvfree(vcpu->arch.cpuid_entries); + if (vcpu->vmpl =3D=3D 0) { + free_page((unsigned long)vcpu->arch.pio_data); + kvfree(vcpu->arch.cpuid_entries); + } } =20 void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index fb5c58c90975..806b7ba869cb 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -320,6 +320,8 @@ struct kvm_mmio_fragment { unsigned len; }; =20 +struct kvm_vcpu_vmpl_state; + struct kvm_vcpu { struct kvm *kvm; #ifdef CONFIG_PREEMPT_NOTIFIERS @@ -402,6 +404,21 @@ struct kvm_vcpu { } _common; =20 struct kvm_vcpu_common *common; + + struct kvm_vcpu_vmpl_state *vcpu_parent; + int vmpl; +}; + +struct kvm_vcpu_vmpl_state { + /* + * TODO: This array needs to be dynamically allocated to store the + * required number of VMPLs based on the architecture. This has been + * hardcoded to 4 for this RFC for SEV-SNP. 
+ */ + struct kvm_vcpu *vcpu_vmpl[4]; + int max_vmpl; + int current_vmpl; + int target_vmpl; }; =20 /* diff --git a/include/trace/events/kvm.h b/include/trace/events/kvm.h index 74e40d5d4af4..f81187642347 100644 --- a/include/trace/events/kvm.h +++ b/include/trace/events/kvm.h @@ -84,6 +84,30 @@ TRACE_EVENT(kvm_set_irq, ); #endif /* defined(CONFIG_HAVE_KVM_IRQCHIP) */ =20 +TRACE_EVENT(kvm_rdh2, + TP_PROTO(const char *msg, struct kvm_vcpu* vcpu), + TP_ARGS(msg, vcpu), + + TP_STRUCT__entry( + __string(msg, msg) + __field(u32, vcpu_id) + __field(u32, vcpu_vmpl) + __field(u32, current_vmpl) + __field(u32, target_vmpl) + ), + + TP_fast_assign( + __assign_str(msg); + __entry->vcpu_id =3D vcpu ? vcpu->vcpu_id : 0xffffffff; + __entry->vcpu_vmpl =3D vcpu ? vcpu->vmpl : 0xffffffff; + __entry->current_vmpl =3D vcpu ? vcpu->vcpu_parent->current_vmpl : 0xfff= fffff; + __entry->target_vmpl =3D vcpu ? vcpu->vcpu_parent->target_vmpl : 0xfffff= fff; + ), + + TP_printk("%s vcpu_id %X vcpu_vmpl %X current_vmpl %X target_vmpl %X", __= get_str(msg),=20 + __entry->vcpu_id, __entry->vcpu_vmpl, __entry->current_vmpl, __entry->ta= rget_vmpl) +); + #if defined(__KVM_HAVE_IOAPIC) #define kvm_deliver_mode \ {0x0, "Fixed"}, \ @@ -489,6 +513,30 @@ TRACE_EVENT(kvm_test_age_hva, TP_printk("mmu notifier test age hva: %#016lx", __entry->hva) ); =20 + +TRACE_EVENT(kvm_arch_vcpu_ioctl_run_vtl_switch, + TP_PROTO(struct kvm_vcpu_vmpl_state *vcpu_parent), + TP_ARGS(vcpu_parent), + + TP_STRUCT__entry( + __field(unsigned int, vcpu_id) + __field(unsigned int, current_vmpl) + __field(unsigned int, target_vmpl) + ), + + TP_fast_assign( + __entry->vcpu_id =3D vcpu_parent->vcpu_vmpl[0]->vcpu_id; + __entry->current_vmpl =3D vcpu_parent->current_vmpl; + __entry->target_vmpl =3D vcpu_parent->target_vmpl; + ), + + TP_printk("vcpu %u: current_vmpl %d, target vtl %d", + __entry->vcpu_id, + __entry->current_vmpl, + __entry->target_vmpl) +); + + #endif /* _TRACE_KVM_MAIN_H */ =20 /* This part must be outside protection */ diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index d1874848862d..09687ac0455a 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -481,7 +481,10 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory= _cache *mc) =20 static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned= id) { - vcpu->common =3D &vcpu->_common; + if (vcpu->vmpl =3D=3D 0) + vcpu->common =3D &vcpu->_common; + else + vcpu->common =3D vcpu->vcpu_parent->vcpu_vmpl[0]->common; =20 mutex_init(&vcpu->common->mutex); vcpu->cpu =3D -1; @@ -507,18 +510,28 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, stru= ct kvm *kvm, unsigned id) =20 static void kvm_vcpu_destroy(struct kvm_vcpu *vcpu) { - kvm_arch_vcpu_destroy(vcpu); - kvm_dirty_ring_free(&vcpu->common->dirty_ring); + int vmpl; + struct kvm_vcpu_vmpl_state *vcpu_parent =3D vcpu->vcpu_parent; + for (vmpl =3D 0; vmpl <=3D vcpu_parent->max_vmpl; ++vmpl) { + struct kvm_vcpu *vcpu_free =3D vcpu_parent->vcpu_vmpl[vmpl]; =20 - /* - * No need for rcu_read_lock as VCPU_RUN is the only place that changes - * the vcpu->pid pointer, and at destruction time all file descriptors - * are already gone. - */ - put_pid(rcu_dereference_protected(vcpu->common->pid, 1)); + if (vmpl =3D=3D 0) { + /* + * No need for rcu_read_lock as VCPU_RUN is the only place that changes + * the vcpu->pid pointer, and at destruction time all file descriptors + * are already gone. 
+ */ + put_pid(rcu_dereference_protected(vcpu_free->common->pid, 1)); + + free_page((unsigned long)vcpu_free->common->run); + kvm_dirty_ring_free(&vcpu_free->common->dirty_ring); + } + + kvm_arch_vcpu_destroy(vcpu_free); =20 - free_page((unsigned long)vcpu->common->run); - kmem_cache_free(kvm_vcpu_cache, vcpu); + kmem_cache_free(kvm_vcpu_cache, vcpu_free); + } + kfree(vcpu_parent); } =20 void kvm_destroy_vcpus(struct kvm *kvm) @@ -3742,7 +3755,11 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu) if (kvm_vcpu_check_block(vcpu) < 0) break; =20 + if (vcpu->vcpu_parent->current_vmpl !=3D vcpu->vcpu_parent->target_vmpl) + break; + waited =3D true; + schedule(); } =20 @@ -4211,8 +4228,10 @@ static void kvm_create_vcpu_debugfs(struct kvm_vcpu = *vcpu) static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id) { int r; - struct kvm_vcpu *vcpu; - struct page *page; + struct kvm_vcpu_vmpl_state *vcpu_parent; + struct page *kvm_run_page; + int vmpl; + int vcpu_idx; =20 /* * KVM tracks vCPU IDs as 'int', be kind to userspace and reject @@ -4241,29 +4260,45 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm= , unsigned long id) kvm->created_vcpus++; mutex_unlock(&kvm->lock); =20 - vcpu =3D kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL_ACCOUNT); - if (!vcpu) { + vcpu_parent =3D kzalloc(sizeof(struct kvm_vcpu_vmpl_state), GFP_KERNEL_AC= COUNT); + if (!vcpu_parent) { r =3D -ENOMEM; goto vcpu_decrement; } =20 - kvm_vcpu_init(vcpu, kvm, id); + /*=20 + * TODO: The max_vmpl needs to be determined for the current architecture= . This + * has been hardcoded to 3 for this RFC to match the maximum VMPL for SEV= -SNP + */ + vcpu_parent->max_vmpl =3D 3; + for (vmpl =3D 0; vmpl <=3D vcpu_parent->max_vmpl; ++vmpl) { + vcpu_parent->vcpu_vmpl[vmpl] =3D kmem_cache_zalloc(kvm_vcpu_cache, GFP_K= ERNEL_ACCOUNT); + // TODO: Fix cleanup here + if (!vcpu_parent->vcpu_vmpl[vmpl]) { + r =3D -ENOMEM; + goto vcpu_decrement; + } + vcpu_parent->vcpu_vmpl[vmpl]->vcpu_parent =3D vcpu_parent; + vcpu_parent->vcpu_vmpl[vmpl]->vmpl =3D vmpl; + + kvm_vcpu_init(vcpu_parent->vcpu_vmpl[vmpl], kvm, id); + + r =3D kvm_arch_vcpu_create(vcpu_parent->vcpu_vmpl[vmpl]); + if (r) + goto vcpu_free_run_page; + } =20 BUILD_BUG_ON(sizeof(struct kvm_run) > PAGE_SIZE); - page =3D alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); - if (!page) { + kvm_run_page =3D alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO); + if (!kvm_run_page) { r =3D -ENOMEM; goto vcpu_free; } - vcpu->common->run =3D page_address(page); - - r =3D kvm_arch_vcpu_create(vcpu); - if (r) - goto vcpu_free_run_page; + vcpu_parent->vcpu_vmpl[0]->common->run =3D page_address(kvm_run_page); =20 if (kvm->dirty_ring_size) { - r =3D kvm_dirty_ring_alloc(&vcpu->common->dirty_ring, - id, kvm->dirty_ring_size); + r =3D kvm_dirty_ring_alloc(&vcpu_parent->vcpu_vmpl[0]->common->dirty_rin= g, + id, kvm->dirty_ring_size); if (r) goto arch_vcpu_destroy; } @@ -4281,18 +4316,21 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm= , unsigned long id) goto unlock_vcpu_destroy; } =20 - vcpu->vcpu_idx =3D atomic_read(&kvm->online_vcpus); - r =3D xa_reserve(&kvm->vcpu_array, vcpu->vcpu_idx, GFP_KERNEL_ACCOUNT); + vcpu_idx =3D atomic_read(&kvm->online_vcpus); + for (vmpl =3D 0; vmpl <=3D vcpu_parent->max_vmpl; ++vmpl) { + vcpu_parent->vcpu_vmpl[vmpl]->vcpu_idx =3D vcpu_idx; + } + r =3D xa_reserve(&kvm->vcpu_array, vcpu_idx, GFP_KERNEL_ACCOUNT); if (r) goto unlock_vcpu_destroy; =20 /* Now it's all set up, let userspace reach it */ kvm_get_kvm(kvm); - r =3D create_vcpu_fd(vcpu); + r =3D 
create_vcpu_fd(vcpu_parent->vcpu_vmpl[0]); if (r < 0) goto kvm_put_xa_release; =20 - if (KVM_BUG_ON(xa_store(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, 0), kvm))= { + if (KVM_BUG_ON(xa_store(&kvm->vcpu_array, vcpu_idx, vcpu_parent->vcpu_vmp= l[0], 0), kvm)) { r =3D -EINVAL; goto kvm_put_xa_release; } @@ -4305,22 +4343,28 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm= , unsigned long id) atomic_inc(&kvm->online_vcpus); =20 mutex_unlock(&kvm->lock); - kvm_arch_vcpu_postcreate(vcpu); - kvm_create_vcpu_debugfs(vcpu); + for (vmpl =3D 0; vmpl <=3D vcpu_parent->max_vmpl; ++vmpl) { + kvm_arch_vcpu_postcreate(vcpu_parent->vcpu_vmpl[vmpl]); + } + kvm_create_vcpu_debugfs(vcpu_parent->vcpu_vmpl[0]); return r; =20 kvm_put_xa_release: kvm_put_kvm_no_destroy(kvm); - xa_release(&kvm->vcpu_array, vcpu->vcpu_idx); + xa_release(&kvm->vcpu_array, vcpu_idx); unlock_vcpu_destroy: mutex_unlock(&kvm->lock); - kvm_dirty_ring_free(&vcpu->common->dirty_ring); + kvm_dirty_ring_free(&vcpu_parent->vcpu_vmpl[0]->common->dirty_ring); arch_vcpu_destroy: - kvm_arch_vcpu_destroy(vcpu); + for (vmpl =3D 0; vmpl <=3D vcpu_parent->max_vmpl; ++vmpl) { + kvm_arch_vcpu_destroy(vcpu_parent->vcpu_vmpl[vmpl]); + } vcpu_free_run_page: - free_page((unsigned long)vcpu->common->run); + free_page((unsigned long)vcpu_parent->vcpu_vmpl[0]->common->run); vcpu_free: - kmem_cache_free(kvm_vcpu_cache, vcpu); + for (vmpl =3D 0; vmpl <=3D vcpu_parent->max_vmpl; ++vmpl) { + kmem_cache_free(kvm_vcpu_cache, vcpu_parent->vcpu_vmpl[vmpl]); + } vcpu_decrement: mutex_lock(&kvm->lock); kvm->created_vcpus--; --=20 2.43.0 From nobody Fri Nov 29 18:32:48 2024 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DAB0315CD7A; Mon, 16 Sep 2024 18:18:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726510696; cv=none; b=X7QO/jEVHjfBcuCJain82Z4meQnDVFSYbYo+nefx288b8HQyeeL+rT1R5/23uZe6L8bUfrs34KgDbeXpt8VvF2N/rcPdck87oRT6ccSqp7SR6YM7/2WXn18yCNEXO9uxzh3oRcXoovUXVEepoBLoItmUD3aKPcgES/yyIbPQtlQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726510696; c=relaxed/simple; bh=NXZtOR7R/mzaQTpWNwKoHUmF20fvOZtMqvuyxa1N1HU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=K7zY0vLIAaCOobuOK9IhNMDAQS1VBqyTld6PfUUe5j5pTI57QBHtoEp6Q+u6Ag9/q6LzV8JNCYMNx5C0zvt9aUNVCNpW+fxjPvKlP4ACkp+ONzm2MylFecl9gupkLVQcB+KskmSrXZ21InUcbT5S8e/T729Lr7eIp76v9y9Cv1A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.com Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 92B571F8C4; Mon, 16 Sep 2024 18:18:09 +0000 (UTC) Authentication-Results: smtp-out2.suse.de; none Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 
with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id F36E413A3A; Mon, 16 Sep 2024 18:18:08 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id eKFrOWB26GbveAAAD6G6ig (envelope-from ); Mon, 16 Sep 2024 18:18:08 +0000 From: Roy Hopkins To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev Cc: Roy Hopkins , Paolo Bonzini , Sean Christopherson , Borislav Petkov , Dave Hansen , Ingo Molnar , Thomas Gleixner , Michael Roth , Ashish Kalra , Joerg Roedel , Tom Lendacky Subject: [RFC PATCH 3/5] kvm/sev: Update SEV VMPL handling to use multiple struct kvm_vcpus Date: Mon, 16 Sep 2024 19:17:55 +0100 Message-ID: <918c29e916eff38e46090b01714e3d1d5df5cb32.1726506534.git.roy.hopkins@suse.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Rspamd-Pre-Result: action=no action; module=replies; Message is reply to one we originated X-Spamd-Result: default: False [-4.00 / 50.00]; REPLY(-4.00)[] X-Spam-Flag: NO X-Spam-Score: -4.00 X-Rspamd-Queue-Id: 92B571F8C4 X-Rspamd-Pre-Result: action=no action; module=replies; Message is reply to one we originated X-Rspamd-Action: no action X-Rspamd-Server: rspamd1.dmz-prg2.suse.org X-Spam-Level: Content-Type: text/plain; charset="utf-8" This commit builds on Tom Lendacky's SEV-SNP support RFC patch series and reworks the handling of VMPL switching to use multiple struct kvm_vcpus to store VMPL context. Signed-off-by: Roy Hopkins --- arch/x86/kvm/svm/sev.c | 159 +++++++++++++++-------------------------- arch/x86/kvm/svm/svm.c | 66 ++++++++++------- arch/x86/kvm/svm/svm.h | 36 +++------- 3 files changed, 109 insertions(+), 152 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 2ad1b9b497e0..3fbb1ce5195d 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -147,7 +147,7 @@ static bool sev_vcpu_has_debug_swap(struct vcpu_svm *sv= m) struct kvm_vcpu *vcpu =3D &svm->vcpu; struct kvm_sev_info *sev =3D &to_kvm_svm(vcpu->kvm)->sev_info; =20 - return sev->vmsa_features[cur_vmpl(svm)] & SVM_SEV_FEAT_DEBUG_SWAP; + return sev->vmsa_features[vcpu->vmpl] & SVM_SEV_FEAT_DEBUG_SWAP; } =20 /* Must be called with the sev_bitmap_lock held */ @@ -818,7 +818,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm) { struct kvm_vcpu *vcpu =3D &svm->vcpu; struct kvm_sev_info *sev =3D &to_kvm_svm(vcpu->kvm)->sev_info; - struct sev_es_save_area *save =3D vmpl_vmsa(svm, SVM_SEV_VMPL0); + struct sev_es_save_area *save =3D vmpl_vmsa(svm); struct xregs_state *xsave; const u8 *s; u8 *d; @@ -931,11 +931,11 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, = struct kvm_vcpu *vcpu, * the VMSA memory content (i.e it will write the same memory region * with the guest's key), so invalidate it first. 
*/ - clflush_cache_range(vmpl_vmsa(svm, SVM_SEV_VMPL0), PAGE_SIZE); + clflush_cache_range(vmpl_vmsa(svm), PAGE_SIZE); =20 vmsa.reserved =3D 0; vmsa.handle =3D to_kvm_sev_info(kvm)->handle; - vmsa.address =3D __sme_pa(vmpl_vmsa(svm, SVM_SEV_VMPL0)); + vmsa.address =3D __sme_pa(vmpl_vmsa(svm)); vmsa.len =3D PAGE_SIZE; ret =3D sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error); if (ret) @@ -2504,7 +2504,7 @@ static int snp_launch_update_vmsa(struct kvm *kvm, st= ruct kvm_sev_cmd *argp) =20 kvm_for_each_vcpu(i, vcpu, kvm) { struct vcpu_svm *svm =3D to_svm(vcpu); - u64 pfn =3D __pa(vmpl_vmsa(svm, SVM_SEV_VMPL0)) >> PAGE_SHIFT; + u64 pfn =3D __pa(vmpl_vmsa(svm)) >> PAGE_SHIFT; =20 /* If SVSM support is requested, only measure the boot vCPU */ if ((sev->snp_init_flags & KVM_SEV_SNP_SVSM) && vcpu->vcpu_id !=3D 0) @@ -2520,7 +2520,7 @@ static int snp_launch_update_vmsa(struct kvm *kvm, st= ruct kvm_sev_cmd *argp) return ret; =20 /* Issue the SNP command to encrypt the VMSA */ - data.address =3D __sme_pa(vmpl_vmsa(svm, SVM_SEV_VMPL0)); + data.address =3D __sme_pa(vmpl_vmsa(svm)); ret =3D __sev_issue_cmd(argp->sev_fd, SEV_CMD_SNP_LAUNCH_UPDATE, &data, &argp->error); if (ret) { @@ -3242,16 +3242,16 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu) * releasing it back to the system. */ if (sev_snp_guest(vcpu->kvm)) { - u64 pfn =3D __pa(vmpl_vmsa(svm, SVM_SEV_VMPL0)) >> PAGE_SHIFT; + u64 pfn =3D __pa(vmpl_vmsa(svm)) >> PAGE_SHIFT; =20 if (kvm_rmp_make_shared(vcpu->kvm, pfn, PG_LEVEL_4K)) goto skip_vmsa_free; } =20 if (vcpu->arch.guest_state_protected) - sev_flush_encrypted_page(vcpu, vmpl_vmsa(svm, SVM_SEV_VMPL0)); + sev_flush_encrypted_page(vcpu, vmpl_vmsa(svm)); =20 - __free_page(virt_to_page(vmpl_vmsa(svm, SVM_SEV_VMPL0))); + __free_page(virt_to_page(vmpl_vmsa(svm))); =20 skip_vmsa_free: if (svm->sev_es.ghcb_sa_free) @@ -3924,13 +3924,18 @@ static int __sev_snp_update_protected_guest_state(s= truct kvm_vcpu *vcpu) =20 /* Clear use of the VMSA */ svm->vmcb->control.vmsa_pa =3D INVALID_PAGE; - tgt_vmpl_vmsa_hpa(svm) =3D INVALID_PAGE; + vmpl_vmsa_hpa(svm) =3D INVALID_PAGE; =20 - if (VALID_PAGE(tgt_vmpl_vmsa_gpa(svm))) { - gfn_t gfn =3D gpa_to_gfn(tgt_vmpl_vmsa_gpa(svm)); + if (VALID_PAGE(vmpl_vmsa_gpa(svm))) { + gfn_t gfn =3D gpa_to_gfn(vmpl_vmsa_gpa(svm)); struct kvm_memory_slot *slot; kvm_pfn_t pfn; =20 + if (vcpu->vmpl !=3D 0) { + svm->vmcb->control.asid =3D to_svm(vcpu->vcpu_parent->vcpu_vmpl[0])->vm= cb->control.asid; + svm->vmcb->control.nested_cr3 =3D to_svm(vcpu->vcpu_parent->vcpu_vmpl[0= ])->vmcb->control.nested_cr3; + } + slot =3D gfn_to_memslot(vcpu->kvm, gfn); if (!slot) return -EINVAL; @@ -3952,11 +3957,11 @@ static int __sev_snp_update_protected_guest_state(s= truct kvm_vcpu *vcpu) * guest boot. Deferring that also allows the existing logic for * SEV-ES VMSAs to be re-used with minimal SNP-specific changes. 
*/ - tgt_vmpl_has_guest_vmsa(svm) =3D true; + vmpl_has_guest_vmsa(svm) =3D true; =20 /* Use the new VMSA */ svm->vmcb->control.vmsa_pa =3D pfn_to_hpa(pfn); - tgt_vmpl_vmsa_hpa(svm) =3D pfn_to_hpa(pfn); + vmpl_vmsa_hpa(svm) =3D pfn_to_hpa(pfn); =20 /* * Since the vCPU may not have gone through the LAUNCH_UPDATE_VMSA path, @@ -3969,7 +3974,7 @@ static int __sev_snp_update_protected_guest_state(str= uct kvm_vcpu *vcpu) vcpu->arch.pv.pv_unhalted =3D false; vcpu->arch.mp_state =3D KVM_MP_STATE_RUNNABLE; =20 - tgt_vmpl_vmsa_gpa(svm) =3D INVALID_PAGE; + vmpl_vmsa_gpa(svm) =3D INVALID_PAGE; =20 /* * gmem pages aren't currently migratable, but if this ever @@ -3980,25 +3985,6 @@ static int __sev_snp_update_protected_guest_state(st= ruct kvm_vcpu *vcpu) kvm_release_pfn_clean(pfn); } =20 - if (cur_vmpl(svm) !=3D tgt_vmpl(svm)) { - /* Unmap the current GHCB */ - sev_es_unmap_ghcb(svm); - - /* Save the GHCB GPA of the current VMPL */ - svm->sev_es.ghcb_gpa[cur_vmpl(svm)] =3D svm->vmcb->control.ghcb_gpa; - - /* Set the GHCB_GPA for the target VMPL and make it the current VMPL */ - svm->vmcb->control.ghcb_gpa =3D svm->sev_es.ghcb_gpa[tgt_vmpl(svm)]; - - cur_vmpl(svm) =3D tgt_vmpl(svm); - } - - /* - * When replacing the VMSA during SEV-SNP AP creation, - * mark the VMCB dirty so that full state is always reloaded. - */ - vmcb_mark_all_dirty(svm->vmcb); - return 0; } =20 @@ -4017,12 +4003,12 @@ bool sev_snp_init_protected_guest_state(struct kvm_= vcpu *vcpu) =20 mutex_lock(&svm->sev_es.snp_vmsa_mutex); =20 - if (!tgt_vmpl_ap_waiting_for_reset(svm)) + if (!vmpl_ap_waiting_for_reset(svm)) goto unlock; =20 init =3D true; =20 - tgt_vmpl_ap_waiting_for_reset(svm) =3D false; + vmpl_ap_waiting_for_reset(svm) =3D false; =20 ret =3D __sev_snp_update_protected_guest_state(vcpu); if (ret) @@ -4067,6 +4053,8 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm) apic_id); return -EINVAL; } + /* Ensure we have the target CPU for the correct VMPL */ + target_vcpu =3D target_vcpu->vcpu_parent->vcpu_vmpl[vmpl]; =20 ret =3D 0; =20 @@ -4081,13 +4069,13 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm) =20 mutex_lock(&target_svm->sev_es.snp_vmsa_mutex); =20 - vmpl_vmsa_gpa(target_svm, vmpl) =3D INVALID_PAGE; - vmpl_ap_waiting_for_reset(target_svm, vmpl) =3D true; + vmpl_vmsa_gpa(target_svm) =3D INVALID_PAGE; + vmpl_ap_waiting_for_reset(target_svm) =3D true; =20 /* VMPL0 can only be replaced by another vCPU running VMPL0 */ if (vmpl =3D=3D SVM_SEV_VMPL0 && (vcpu =3D=3D target_vcpu || - vmpl_vmsa_hpa(svm, SVM_SEV_VMPL0) !=3D svm->vmcb->control.vmsa_pa)) { + vmpl_vmsa_hpa(svm) !=3D svm->vmcb->control.vmsa_pa)) { ret =3D -EINVAL; goto out; } @@ -4145,9 +4133,7 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm) kick =3D false; fallthrough; case SVM_VMGEXIT_AP_CREATE: - /* Switch to new VMSA on the next VMRUN */ - target_svm->sev_es.snp_target_vmpl =3D vmpl; - vmpl_vmsa_gpa(target_svm, vmpl) =3D svm->vmcb->control.exit_info_2 & PAG= E_MASK; + vmpl_vmsa_gpa(target_svm) =3D svm->vmcb->control.exit_info_2 & PAGE_MASK; break; case SVM_VMGEXIT_AP_DESTROY: break; @@ -4161,6 +4147,17 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm) out: if (kick) { kvm_make_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, target_vcpu); + + /* + * SNP APs can be initially started using the APIC INIT/SIPI sequence or + * via a GHCB call. In the case of the GHCB call the sequence is not mov= ed + * from KVM_MP_STATE_UNINITIALIZED to KVM_MP_STATE_RUNNABLE, so we need + * to unblock the VCPU. 
+ */ + if (target_vcpu->vcpu_parent->vcpu_vmpl[0]->arch.mp_state =3D=3D KVM_MP_= STATE_UNINITIALIZED) { + target_vcpu->vcpu_parent->vcpu_vmpl[0]->arch.mp_state =3D KVM_MP_STATE_= RUNNABLE; + } + kvm_vcpu_kick(target_vcpu); } =20 @@ -4339,59 +4336,33 @@ static void sev_get_apic_ids(struct vcpu_svm *svm) =20 static int __sev_run_vmpl_vmsa(struct vcpu_svm *svm, unsigned int new_vmpl) { - struct kvm_vcpu *vcpu =3D &svm->vcpu; - struct vmpl_switch_sa *old_vmpl_sa; - struct vmpl_switch_sa *new_vmpl_sa; - unsigned int old_vmpl; + struct kvm_vcpu_vmpl_state *vcpu_parent =3D svm->vcpu.vcpu_parent; + struct kvm_vcpu *vcpu_current =3D &svm->vcpu; + struct kvm_vcpu *vcpu_target =3D vcpu_parent->vcpu_vmpl[new_vmpl]; + struct vcpu_svm *svm_current =3D svm; + struct vcpu_svm *svm_target =3D to_svm(vcpu_target); =20 if (new_vmpl >=3D SVM_SEV_VMPL_MAX) return -EINVAL; new_vmpl =3D array_index_nospec(new_vmpl, SVM_SEV_VMPL_MAX); + vcpu_current->vcpu_parent->target_vmpl =3D new_vmpl; =20 - old_vmpl =3D svm->sev_es.snp_current_vmpl; - svm->sev_es.snp_target_vmpl =3D new_vmpl; + if (new_vmpl =3D=3D vcpu_parent->current_vmpl) { + return 0; + } =20 - if (svm->sev_es.snp_target_vmpl =3D=3D svm->sev_es.snp_current_vmpl || - sev_snp_init_protected_guest_state(vcpu)) + if (sev_snp_init_protected_guest_state(vcpu_target)) { return 0; + } =20 /* If the VMSA is not valid, return an error */ - if (!VALID_PAGE(vmpl_vmsa_hpa(svm, new_vmpl))) + if (!VALID_PAGE(vmpl_vmsa_hpa(svm_target))) return -EINVAL; =20 /* Unmap the current GHCB */ - sev_es_unmap_ghcb(svm); - - /* Save some current VMCB values */ - svm->sev_es.ghcb_gpa[old_vmpl] =3D svm->vmcb->control.ghcb_gpa; - - old_vmpl_sa =3D &svm->sev_es.vssa[old_vmpl]; - old_vmpl_sa->int_state =3D svm->vmcb->control.int_state; - old_vmpl_sa->exit_int_info =3D svm->vmcb->control.exit_int_info; - old_vmpl_sa->exit_int_info_err =3D svm->vmcb->control.exit_int_info_err; - old_vmpl_sa->cr0 =3D vcpu->arch.cr0; - old_vmpl_sa->cr2 =3D vcpu->arch.cr2; - old_vmpl_sa->cr4 =3D vcpu->arch.cr4; - old_vmpl_sa->cr8 =3D vcpu->arch.cr8; - old_vmpl_sa->efer =3D vcpu->arch.efer; - - /* Restore some previous VMCB values */ - svm->vmcb->control.vmsa_pa =3D vmpl_vmsa_hpa(svm, new_vmpl); - svm->vmcb->control.ghcb_gpa =3D svm->sev_es.ghcb_gpa[new_vmpl]; - - new_vmpl_sa =3D &svm->sev_es.vssa[new_vmpl]; - svm->vmcb->control.int_state =3D new_vmpl_sa->int_state; - svm->vmcb->control.exit_int_info =3D new_vmpl_sa->exit_int_info; - svm->vmcb->control.exit_int_info_err =3D new_vmpl_sa->exit_int_info_err; - vcpu->arch.cr0 =3D new_vmpl_sa->cr0; - vcpu->arch.cr2 =3D new_vmpl_sa->cr2; - vcpu->arch.cr4 =3D new_vmpl_sa->cr4; - vcpu->arch.cr8 =3D new_vmpl_sa->cr8; - vcpu->arch.efer =3D new_vmpl_sa->efer; - - svm->sev_es.snp_current_vmpl =3D new_vmpl; - - vmcb_mark_all_dirty(svm->vmcb); + sev_es_unmap_ghcb(svm_current); + + vcpu_parent->target_vmpl =3D new_vmpl; =20 return 0; } @@ -4520,7 +4491,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcp= u_svm *svm) gfn =3D get_ghcb_msr_bits(svm, GHCB_MSR_GPA_VALUE_MASK, GHCB_MSR_GPA_VALUE_POS); =20 - svm->sev_es.ghcb_registered_gpa[cur_vmpl(svm)] =3D gfn_to_gpa(gfn); + svm->sev_es.ghcb_registered_gpa =3D gfn_to_gpa(gfn); =20 set_ghcb_msr_bits(svm, gfn, GHCB_MSR_GPA_VALUE_MASK, GHCB_MSR_GPA_VALUE_POS); @@ -4825,8 +4796,8 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm) * the VMSA will be NULL if this vCPU is the destination for intrahost * migration, and will be copied later. 
*/ - if (cur_vmpl_vmsa(svm) && !cur_vmpl_has_guest_vmsa(svm)) - svm->vmcb->control.vmsa_pa =3D __pa(cur_vmpl_vmsa(svm)); + if (vmpl_vmsa(svm) && !vmpl_has_guest_vmsa(svm)) + svm->vmcb->control.vmsa_pa =3D __pa(vmpl_vmsa(svm)); =20 /* Can't intercept CR register access, HV can't modify CR registers */ svm_clr_intercept(svm, INTERCEPT_CR0_READ); @@ -4889,7 +4860,6 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm) { struct kvm_vcpu *vcpu =3D &svm->vcpu; struct kvm_sev_info *sev =3D &to_kvm_svm(vcpu->kvm)->sev_info; - unsigned int i; u64 sev_info; =20 /* @@ -4899,20 +4869,9 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm) sev_info =3D GHCB_MSR_SEV_INFO((__u64)sev->ghcb_version, GHCB_VERSION_MIN, sev_enc_bit); set_ghcb_msr(svm, sev_info); - svm->sev_es.ghcb_gpa[SVM_SEV_VMPL0] =3D sev_info; + svm->sev_es.ghcb_gpa =3D sev_info; =20 mutex_init(&svm->sev_es.snp_vmsa_mutex); - - /* - * When not running under SNP, the "current VMPL" tracking for a guest - * is always 0 and the base tracking of GPAs and SPAs will be as before - * multiple VMPL support. However, under SNP, multiple VMPL levels can - * be run, so initialize these values appropriately. - */ - for (i =3D 1; i < SVM_SEV_VMPL_MAX; i++) { - svm->sev_es.vmsa_info[i].hpa =3D INVALID_PAGE; - svm->sev_es.ghcb_gpa[i] =3D sev_info; - } } =20 void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_sa= ve_area *hostsa) @@ -5301,7 +5260,7 @@ bool sev_snp_is_rinj_active(struct kvm_vcpu *vcpu) return false; =20 sev =3D &to_kvm_svm(vcpu->kvm)->sev_info; - vmpl =3D to_svm(vcpu)->sev_es.snp_current_vmpl; + vmpl =3D vcpu->vcpu_parent->current_vmpl; =20 return sev->vmsa_features[vmpl] & SVM_SEV_FEAT_RESTRICTED_INJECTION; } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 478cd15bb9f2..22a189910ba1 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -996,40 +996,54 @@ void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb = *from_vmcb) =20 void svm_enable_lbrv(struct kvm_vcpu *vcpu) { - struct vcpu_svm *svm =3D to_svm(vcpu); + int vtl; + struct vcpu_svm *svm; + struct kvm_vcpu_vmpl_state *vcpu_parent =3D vcpu->vcpu_parent; =20 - svm->vmcb->control.virt_ext |=3D LBR_CTL_ENABLE_MASK; - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1); - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1); - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1); - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1); + for (vtl =3D 0; vtl <=3D vcpu_parent->max_vmpl; ++vtl) { + vcpu =3D vcpu_parent->vcpu_vmpl[vtl]; + svm =3D to_svm(vcpu); =20 - if (sev_es_guest(vcpu->kvm)) - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_DEBUGCTLMSR, 1, 1); + svm->vmcb->control.virt_ext |=3D LBR_CTL_ENABLE_MASK; + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1); + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1); + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1); + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1); =20 - /* Move the LBR msrs to the vmcb02 so that the guest can see them. */ - if (is_guest_mode(vcpu)) - svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr); + if (sev_es_guest(vcpu->kvm)) + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_DEBUGCTLMSR, 1, 1); + + /* Move the LBR msrs to the vmcb02 so that the guest can see them. 
*/ + if (is_guest_mode(vcpu)) + svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr); + } } =20 static void svm_disable_lbrv(struct kvm_vcpu *vcpu) { - struct vcpu_svm *svm =3D to_svm(vcpu); + int vtl; + struct vcpu_svm *svm; + struct kvm_vcpu_vmpl_state *vcpu_parent =3D vcpu->vcpu_parent; =20 - KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm); + for (vtl =3D 0; vtl <=3D vcpu_parent->max_vmpl; ++vtl) { + vcpu =3D vcpu_parent->vcpu_vmpl[vtl]; + svm =3D to_svm(vcpu); =20 - svm->vmcb->control.virt_ext &=3D ~LBR_CTL_ENABLE_MASK; - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 0, 0); - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 0, 0); - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 0, 0); - set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 0, 0); + KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm); =20 - /* - * Move the LBR msrs back to the vmcb01 to avoid copying them - * on nested guest entries. - */ - if (is_guest_mode(vcpu)) - svm_copy_lbrs(svm->vmcb01.ptr, svm->vmcb); + svm->vmcb->control.virt_ext &=3D ~LBR_CTL_ENABLE_MASK; + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 0, 0); + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 0, 0); + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 0, 0); + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 0, 0); + + /* + * Move the LBR msrs back to the vmcb01 to avoid copying them + * on nested guest entries. + */ + if (is_guest_mode(vcpu)) + svm_copy_lbrs(svm->vmcb01.ptr, svm->vmcb); + } } =20 static struct vmcb *svm_get_lbr_vmcb(struct vcpu_svm *svm) @@ -1464,8 +1478,8 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu) svm_switch_vmcb(svm, &svm->vmcb01); =20 if (vmsa_page) { - vmpl_vmsa(svm, SVM_SEV_VMPL0) =3D page_address(vmsa_page); - vmpl_vmsa_hpa(svm, SVM_SEV_VMPL0) =3D __pa(page_address(vmsa_page)); + vmpl_vmsa(svm) =3D page_address(vmsa_page); + vmpl_vmsa_hpa(svm) =3D __pa(page_address(vmsa_page)); } =20 svm->guest_state_loaded =3D false; diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 97a1b1b4cb5f..de7f92ba55c3 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -211,25 +211,11 @@ struct vmpl_switch_sa { u64 efer; }; =20 -#define vmpl_vmsa(s, v) ((s)->sev_es.vmsa_info[(v)].vmsa) -#define vmpl_vmsa_gpa(s, v) ((s)->sev_es.vmsa_info[(v)].gpa) -#define vmpl_vmsa_hpa(s, v) ((s)->sev_es.vmsa_info[(v)].hpa) -#define vmpl_ap_waiting_for_reset(s, v) ((s)->sev_es.vmsa_info[(v)].ap_wa= iting_for_reset) -#define vmpl_has_guest_vmsa(s, v) ((s)->sev_es.vmsa_info[(v)].has_guest_v= msa) - -#define cur_vmpl(s) ((s)->sev_es.snp_current_vmpl) -#define cur_vmpl_vmsa(s) vmpl_vmsa((s), cur_vmpl(s)) -#define cur_vmpl_vmsa_gpa(s) vmpl_vmsa_gpa((s), cur_vmpl(s)) -#define cur_vmpl_vmsa_hpa(s) vmpl_vmsa_hpa((s), cur_vmpl(s)) -#define cur_vmpl_ap_waiting_for_reset(s) vmpl_ap_waiting_for_reset((s), cu= r_vmpl(s)) -#define cur_vmpl_has_guest_vmsa(s) vmpl_has_guest_vmsa((s), cur_vmpl(s)) - -#define tgt_vmpl(s) ((s)->sev_es.snp_target_vmpl) -#define tgt_vmpl_vmsa(s) vmpl_vmsa((s), tgt_vmpl(s)) -#define tgt_vmpl_vmsa_gpa(s) vmpl_vmsa_gpa((s), tgt_vmpl(s)) -#define tgt_vmpl_vmsa_hpa(s) vmpl_vmsa_hpa((s), tgt_vmpl(s)) -#define tgt_vmpl_ap_waiting_for_reset(s) vmpl_ap_waiting_for_reset((s), tg= t_vmpl(s)) -#define tgt_vmpl_has_guest_vmsa(s) vmpl_has_guest_vmsa((s), tgt_vmpl(s)) +#define vmpl_vmsa(s) ((s)->sev_es.vmsa_info.vmsa) +#define vmpl_vmsa_gpa(s) ((s)->sev_es.vmsa_info.gpa) +#define vmpl_vmsa_hpa(s) 
((s)->sev_es.vmsa_info.hpa) +#define vmpl_ap_waiting_for_reset(s) ((s)->sev_es.vmsa_info.ap_waiting_for= _reset) +#define vmpl_has_guest_vmsa(s) ((s)->sev_es.vmsa_info.has_guest_vmsa) =20 struct sev_vmsa_info { /* SEV-ES and SEV-SNP */ @@ -262,15 +248,13 @@ struct vcpu_sev_es_state { u16 psc_inflight; bool psc_2m; =20 - gpa_t ghcb_gpa[SVM_SEV_VMPL_MAX]; - u64 ghcb_registered_gpa[SVM_SEV_VMPL_MAX]; - struct sev_vmsa_info vmsa_info[SVM_SEV_VMPL_MAX]; + gpa_t ghcb_gpa; + u64 ghcb_registered_gpa; + struct sev_vmsa_info vmsa_info; =20 struct mutex snp_vmsa_mutex; /* Used to handle concurrent updates of VMSA= . */ - unsigned int snp_current_vmpl; - unsigned int snp_target_vmpl; =20 - struct vmpl_switch_sa vssa[SVM_SEV_VMPL_MAX]; + struct vmpl_switch_sa vssa; }; =20 struct vcpu_svm { @@ -426,7 +410,7 @@ static __always_inline bool sev_snp_guest(struct kvm *k= vm) =20 static inline bool ghcb_gpa_is_registered(struct vcpu_svm *svm, u64 val) { - return svm->sev_es.ghcb_registered_gpa[cur_vmpl(svm)] =3D=3D val; + return svm->sev_es.ghcb_registered_gpa =3D=3D val; } =20 static inline void vmcb_mark_all_dirty(struct vmcb *vmcb) --=20 2.43.0 From nobody Fri Nov 29 18:32:48 2024 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DAA5915CD64; Mon, 16 Sep 2024 18:18:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726510696; cv=none; b=oZisqhk8/FbjlViF7UsOX+2CdJpDYRsGXTP1gtli9vCzPSqeCWxTJDi6D+SHg6mBRhUaIItzWkDVu/LXgyyqEZFYuqZVYePLJA+DeHkiAdjXz5dE4RoBmkZpqig2ULzBZqcK3qeJhsDyDb/TXBsvhIRzYBJqNghpArJm4lgNzqg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726510696; c=relaxed/simple; bh=NaYD4z/PMvcd35hhct3ZHR9P9LQY2eeQF++fxr464lc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=h6bEyIQun6u8RXs+9zj8pPE4PqyDzkiXCjmDCtsKmWL9wGoUNOgHD7CwBQOZqSmPU9evVATSNlVLjUeikWTrd5xY+8UQvVbFBafRvXCr9mS2ccd/KwhUL6bqqzZZeJrIXWc79gVoAU5c7C6dlPgTQWtYRaGjzxOjAoty8oPUFeM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=V71zJMy3; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=V71zJMy3; arc=none smtp.client-ip=195.135.223.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="V71zJMy3"; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="V71zJMy3" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 404D71F8C5; Mon, 16 Sep 2024 18:18:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1726510690; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=WCm95XJJnBEDcbDJOtrx9fOx+lx0IhmYpGAcLnQXA+s=; b=V71zJMy3Js2dFPMYYzngiEvzIBoh3x26RyoESwZ0ytSjpoXeRG591t3PYyRUA7QqaumpN8 aoRMVtDqqj6tJqN0N2D3a9CvfxTdeTCq/01husX2aT3gy5Zny9AWpX2T8BXRa8hnL3a8EL xRxqi158r+A/OFU+zReCEMttFTBW0Hk= Authentication-Results: smtp-out2.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1726510690; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=WCm95XJJnBEDcbDJOtrx9fOx+lx0IhmYpGAcLnQXA+s=; b=V71zJMy3Js2dFPMYYzngiEvzIBoh3x26RyoESwZ0ytSjpoXeRG591t3PYyRUA7QqaumpN8 aoRMVtDqqj6tJqN0N2D3a9CvfxTdeTCq/01husX2aT3gy5Zny9AWpX2T8BXRa8hnL3a8EL xRxqi158r+A/OFU+zReCEMttFTBW0Hk= Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id A0D65139CE; Mon, 16 Sep 2024 18:18:09 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id 0L8nJWF26GbveAAAD6G6ig (envelope-from ); Mon, 16 Sep 2024 18:18:09 +0000 From: Roy Hopkins To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev Cc: Roy Hopkins , Paolo Bonzini , Sean Christopherson , Borislav Petkov , Dave Hansen , Ingo Molnar , Thomas Gleixner , Michael Roth , Ashish Kalra , Joerg Roedel , Tom Lendacky Subject: [RFC PATCH 4/5] x86/kvm: Add x86 field to find the default VMPL that IRQs should target Date: Mon, 16 Sep 2024 19:17:56 +0100 Message-ID: X-Mailer: git-send-email 2.43.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Spam-Level: X-Spamd-Result: default: False [-6.80 / 50.00]; REPLY(-4.00)[]; BAYES_HAM(-3.00)[100.00%]; MID_CONTAINS_FROM(1.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_MISSING_CHARSET(0.50)[]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; RCVD_COUNT_TWO(0.00)[2]; FROM_HAS_DN(0.00)[]; ARC_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[15]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,suse.com:email,suse.com:mid]; FUZZY_BLOCKED(0.00)[rspamd.com]; FROM_EQ_ENVFROM(0.00)[]; DKIM_SIGNED(0.00)[suse.com:s=susede1]; R_RATELIMIT(0.00)[to_ip_from(RLh8t8sqpgocps1pdp1zxxqsw5)]; RCVD_TLS_ALL(0.00)[] X-Spam-Score: -6.80 X-Spam-Flag: NO Content-Type: text/plain; charset="utf-8" When a CPU supports multiple VMPLs, injected interrupts need to be sent to the correct context. This commit adds an operation that determines the VMPL number that IRQs should be sent to in the absence of an explicit target VMPL. 
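To make the selection rule concrete, here is a stand-alone sketch of how the new default_irq_vmpl field is maintained. Everything prefixed toy_ is invented purely for illustration and is not KVM code; only the field name default_irq_vmpl and the "highest VMPL number seen so far wins" heuristic come from the diff below (see the TEMP comment in __sev_snp_update_protected_guest_state()).

#include <stdio.h>

struct toy_kvm_arch {
	unsigned int default_irq_vmpl;	/* default VMPL for IRQs with no explicit target */
};

static void toy_vmsa_created(struct toy_kvm_arch *arch, unsigned int vmpl)
{
	/*
	 * Mirrors the TEMP heuristic in the diff below: a newly created VMSA
	 * for a less-privileged (higher-numbered) VMPL becomes the default
	 * IRQ target.
	 */
	if (vmpl > arch->default_irq_vmpl)
		arch->default_irq_vmpl = vmpl;
}

int main(void)
{
	struct toy_kvm_arch arch = { .default_irq_vmpl = 0 };

	toy_vmsa_created(&arch, 2);	/* e.g. an OS image is started at a lower-privilege VMPL */
	toy_vmsa_created(&arch, 0);	/* a VMPL0 VMSA does not lower the default */

	printf("default_irq_vmpl = %u\n", arch.default_irq_vmpl);	/* prints 2 */
	return 0;
}

As the TEMP comment notes, this is only an interim policy for the RFC.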
Signed-off-by: Roy Hopkins --- arch/x86/include/asm/kvm_host.h | 7 +++++++ arch/x86/kvm/svm/sev.c | 8 ++++++++ 2 files changed, 15 insertions(+) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 94e7b5a4fafe..3dd3a5ff0cec 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1537,6 +1537,13 @@ struct kvm_arch { */ #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1) struct kvm_mmu_memory_cache split_desc_cache; + + /* + * When a system supports multiple VMPLs, injected interrupts need to be + * sent to the correct context. The default VMPL that IRQs should be sent + * to is indicated in this variable. + */ + unsigned int default_irq_vmpl; }; =20 struct kvm_vm_stat { diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 3fbb1ce5195d..ed91aa93da6e 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -3983,6 +3983,14 @@ static int __sev_snp_update_protected_guest_state(st= ruct kvm_vcpu *vcpu) * means. */ kvm_release_pfn_clean(pfn); + + /* + * TEMP: If the newly created VMSA is for a lower VMPL then + * set this VMPL to be the default for sending IRQs to. + */ + if (vcpu->vmpl > vcpu->kvm->arch.default_irq_vmpl) { + vcpu->kvm->arch.default_irq_vmpl =3D vcpu->vmpl; + } } =20 return 0; --=20 2.43.0 From nobody Fri Nov 29 18:32:48 2024 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A1AD215C144; Mon, 16 Sep 2024 18:18:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726510696; cv=none; b=SJGVewkw2glxYFG/mi3ha7jEskWaksXLRmHQJPQa2lb8FCtPJdOd3Tr10zv+D/wEtyW1h2vWPZdNV2bCFCjhg+cX1I1GPP/5k8SoaJkYH4EotllJb3Oj9baFYweOZCrb3kKLom7Y1SLNwRN/T3CuMcSWfoL3b3hmqK4aLmKqOpE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726510696; c=relaxed/simple; bh=3Fw+K0hQmf1BM7ZUV56V8ZoBg80KB09mMA/nO1nJv50=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=awVXhUKOdvjnWiDRUsyOQ6LdXt9VvUph9TpMfKDQp9o2KUQL/a0L5Te9dbaVHU9svRW0a0XY4xA4xgBVP5D0rX1sZU46oxg9YOwfir05dYMZfH8zsBnMgnkj7Ho/lkt4bmx2/AAPQya/Em/O0eC7mgM47DKzKbdBzLtWqOwrDlk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=rN1w5HsK; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=rN1w5HsK; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="rN1w5HsK"; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="rN1w5HsK" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id E1D6221C29; Mon, 16 Sep 2024 18:18:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=suse.com; s=susede1; t=1726510690; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=cmWFWeD/k+yoNDkLlzMQjwNocrg/PxJwLUSs9viTQhw=; b=rN1w5HsK48Bgtd/u6hO6M/taq110M6zoZAubwmeu0EzlpqMyrMXDfDc4q85DzVxZD9a8v0 xKP3HgcyUlPy3ku4OoH2wvAdq793eVOAaD+HTX/rg4jFrU6oTXVijMLgtnbXhRfJpuaji0 RXiGt/K5WRGurqytB2T00nc9A9aKNCs= Authentication-Results: smtp-out1.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1726510690; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=cmWFWeD/k+yoNDkLlzMQjwNocrg/PxJwLUSs9viTQhw=; b=rN1w5HsK48Bgtd/u6hO6M/taq110M6zoZAubwmeu0EzlpqMyrMXDfDc4q85DzVxZD9a8v0 xKP3HgcyUlPy3ku4OoH2wvAdq793eVOAaD+HTX/rg4jFrU6oTXVijMLgtnbXhRfJpuaji0 RXiGt/K5WRGurqytB2T00nc9A9aKNCs= Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 4E71913A3A; Mon, 16 Sep 2024 18:18:10 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id wPgSEWJ26GbveAAAD6G6ig (envelope-from ); Mon, 16 Sep 2024 18:18:10 +0000 From: Roy Hopkins To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-coco@lists.linux.dev Cc: Roy Hopkins , Paolo Bonzini , Sean Christopherson , Borislav Petkov , Dave Hansen , Ingo Molnar , Thomas Gleixner , Michael Roth , Ashish Kalra , Joerg Roedel , Tom Lendacky Subject: [RFC PATCH 5/5] x86/kvm: Add target VMPL to IRQs and send to APIC for VMPL Date: Mon, 16 Sep 2024 19:17:57 +0100 Message-ID: <67ffe577fe98f53068d68c053fa209e8d4b4bae9.1726506534.git.roy.hopkins@suse.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Spam-Level: X-Spamd-Result: default: False [-6.80 / 50.00]; REPLY(-4.00)[]; BAYES_HAM(-3.00)[100.00%]; MID_CONTAINS_FROM(1.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_MISSING_CHARSET(0.50)[]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; RCVD_COUNT_TWO(0.00)[2]; FROM_HAS_DN(0.00)[]; ARC_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; TO_DN_SOME(0.00)[]; RCPT_COUNT_TWELVE(0.00)[15]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.com:email,suse.com:mid,imap1.dmz-prg2.suse.org:helo]; FUZZY_BLOCKED(0.00)[rspamd.com]; FROM_EQ_ENVFROM(0.00)[]; DKIM_SIGNED(0.00)[suse.com:s=susede1]; R_RATELIMIT(0.00)[to_ip_from(RLh8t8sqpgocps1pdp1zxxqsw5)]; RCVD_TLS_ALL(0.00)[] X-Spam-Score: -6.80 X-Spam-Flag: NO Content-Type: text/plain; charset="utf-8" Systems that support VMPLs need to decide which VMPL each IRQ is destined for; each VMPL can support its own set of hardware devices that generate interrupts. This commit extends kvm_lapic_irq to include a target_vmpl field that is used to send the IRQ to the APIC instance at the target VMPL.
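Before the diff, a short stand-alone model of the delivery side: kvm_apic_set_irq() below selects the APIC belonging to the vCPU instance that backs irq->target_vmpl via vcpu->vcpu_parent->vcpu_vmpl[]. The toy_* types and functions here are illustrative stand-ins, not KVM's; only the dispatch logic mirrors the lapic.c hunk further down.

#include <stdio.h>

#define TOY_MAX_VMPL 4

struct toy_vcpu_parent;

struct toy_apic {
	int id;
};

struct toy_vcpu {
	unsigned int vmpl;
	struct toy_apic *apic;			/* NULL if this VMPL has no APIC yet */
	struct toy_vcpu_parent *parent;
};

struct toy_vcpu_parent {
	struct toy_vcpu *vcpu_vmpl[TOY_MAX_VMPL];
};

struct toy_lapic_irq {
	unsigned int vector;
	unsigned int target_vmpl;		/* the new field: which VMPL receives the IRQ */
};

/*
 * Deliver via the APIC of the vCPU instance that backs irq->target_vmpl,
 * mirroring the kvm_apic_set_irq() change in the lapic.c hunk below;
 * -1 stands in for -EINVAL when that VMPL has no APIC.
 */
static int toy_apic_set_irq(struct toy_vcpu *vcpu, struct toy_lapic_irq *irq)
{
	struct toy_vcpu *target = vcpu->parent->vcpu_vmpl[irq->target_vmpl];

	if (!target || !target->apic)
		return -1;

	printf("vector 0x%x -> APIC of VMPL%u\n", irq->vector, target->vmpl);
	return 0;
}

int main(void)
{
	struct toy_vcpu_parent parent = { { 0 } };
	struct toy_apic apic0 = { 0 }, apic2 = { 2 };
	struct toy_vcpu v0 = { .vmpl = 0, .apic = &apic0, .parent = &parent };
	struct toy_vcpu v2 = { .vmpl = 2, .apic = &apic2, .parent = &parent };
	struct toy_lapic_irq irq = { .vector = 0x30, .target_vmpl = 2 };

	parent.vcpu_vmpl[0] = &v0;
	parent.vcpu_vmpl[2] = &v2;

	return toy_apic_set_irq(&v0, &irq);	/* delivered to the VMPL2 APIC */
}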
Signed-off-by: Roy Hopkins --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/ioapic.c | 3 +++ arch/x86/kvm/irq_comm.c | 1 + arch/x86/kvm/lapic.c | 6 +++++- 4 files changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 3dd3a5ff0cec..d0febb67dabf 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1624,6 +1624,7 @@ struct kvm_lapic_irq { u32 shorthand; u32 dest_id; bool msi_redir_hint; + unsigned int target_vmpl; }; =20 static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical) diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c index 995eb5054360..7b835a192561 100644 --- a/arch/x86/kvm/ioapic.c +++ b/arch/x86/kvm/ioapic.c @@ -413,6 +413,8 @@ static void ioapic_write_indirect(struct kvm_ioapic *io= apic, u32 val) irq.shorthand =3D APIC_DEST_NOSHORT; irq.dest_id =3D e->fields.dest_id; irq.msi_redir_hint =3D false; + irq.target_vmpl =3D ioapic->kvm->arch.default_irq_vmpl; + bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS); kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq, vcpu_bitmap); @@ -458,6 +460,7 @@ static int ioapic_service(struct kvm_ioapic *ioapic, in= t irq, bool line_status) irqe.level =3D 1; irqe.shorthand =3D APIC_DEST_NOSHORT; irqe.msi_redir_hint =3D false; + irqe.target_vmpl =3D ioapic->kvm->arch.default_irq_vmpl; =20 if (irqe.trig_mode =3D=3D IOAPIC_EDGE_TRIG) ioapic->irr_delivered |=3D 1 << irq; diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c index 8136695f7b96..6bd4a78dddba 100644 --- a/arch/x86/kvm/irq_comm.c +++ b/arch/x86/kvm/irq_comm.c @@ -119,6 +119,7 @@ void kvm_set_msi_irq(struct kvm *kvm, struct kvm_kernel= _irq_routing_entry *e, irq->msi_redir_hint =3D msg.arch_addr_lo.redirect_hint; irq->level =3D 1; irq->shorthand =3D APIC_DEST_NOSHORT; + irq->target_vmpl =3D kvm->arch.default_irq_vmpl; } EXPORT_SYMBOL_GPL(kvm_set_msi_irq); =20 diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index e2dd573e4f2d..20b433a78457 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -836,7 +836,9 @@ static int __apic_accept_irq(struct kvm_lapic *apic, in= t delivery_mode, int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq, struct dest_map *dest_map) { - struct kvm_lapic *apic =3D vcpu->arch.apic; + struct kvm_lapic *apic =3D vcpu->vcpu_parent->vcpu_vmpl[irq->target_vmpl]= ->arch.apic; + if (!apic) + return -EINVAL; =20 return __apic_accept_irq(apic, irq->delivery_mode, irq->vector, irq->level, irq->trig_mode, dest_map); @@ -1528,6 +1530,8 @@ void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 ic= r_low, u32 icr_high) irq.trig_mode =3D icr_low & APIC_INT_LEVELTRIG; irq.shorthand =3D icr_low & APIC_SHORT_MASK; irq.msi_redir_hint =3D false; + /* IPIs always target the same VMPL as the source */ + irq.target_vmpl =3D apic->vcpu->vmpl; if (apic_x2apic_mode(apic)) irq.dest_id =3D icr_high; else --=20 2.43.0
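As a closing note on how target_vmpl is populated by this patch: IOAPIC- and MSI-originated interrupts fall back to kvm->arch.default_irq_vmpl, while IPIs always target the same VMPL as the sending vCPU. A stand-alone sketch of those two rules follows; all toy_* names are invented for illustration and the VMPL numbers in main() are just an example.

#include <stdio.h>

struct toy_kvm {
	unsigned int default_irq_vmpl;
};

struct toy_vcpu {
	unsigned int vmpl;
};

struct toy_lapic_irq {
	unsigned int target_vmpl;
};

/* IOAPIC- and MSI-originated interrupts fall back to the VM-wide default. */
static void toy_set_external_irq(struct toy_kvm *kvm, struct toy_lapic_irq *irq)
{
	irq->target_vmpl = kvm->default_irq_vmpl;
}

/* IPIs always stay within the privilege level of the sending vCPU. */
static void toy_send_ipi(struct toy_vcpu *src, struct toy_lapic_irq *irq)
{
	irq->target_vmpl = src->vmpl;
}

int main(void)
{
	struct toy_kvm kvm = { .default_irq_vmpl = 2 };
	struct toy_vcpu vmpl0_vcpu = { .vmpl = 0 };
	struct toy_lapic_irq ext = { 0 }, ipi = { 0 };

	toy_set_external_irq(&kvm, &ext);	/* routed to the default (VMPL2 here) */
	toy_send_ipi(&vmpl0_vcpu, &ipi);	/* stays at the sender's VMPL0 */

	printf("external IRQ -> VMPL%u, IPI -> VMPL%u\n",
	       ext.target_vmpl, ipi.target_vmpl);
	return 0;
}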