From nobody Thu Oct 2 07:45:10 2025
From: Sean Christopherson
To: Madhavan Srinivasan, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson, Paolo Bonzini,
    Vitaly Kuznetsov, Tony Krowiak, Halil Pasic, Jason Herne,
    Harald Freudenberger, Holger Dengler
Cc: linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 18 Sep 2025 17:32:59 -0700
Message-ID: <20250919003303.1355064-2-seanjc@google.com>
In-Reply-To: <20250919003303.1355064-1-seanjc@google.com>
References: <20250919003303.1355064-1-seanjc@google.com>
Subject: [PATCH v2 1/5] KVM: s390/vfio-ap: Use kvm_is_gpa_in_memslot() instead of open coded equivalent

Use kvm_is_gpa_in_memslot() to check the validity of the notification
indicator byte address instead of open coding equivalent logic in the
VFIO AP driver.  Opportunistically use a dedicated wrapper,
kvm_s390_is_gpa_in_memslot(), that exists and is exported expressly for
the VFIO AP module.

kvm_is_gpa_in_memslot() is generally unsuitable for use outside of KVM;
other drivers typically shouldn't rely on KVM's memslots, and using the
API requires kvm->srcu (or slots_lock) to be held for the entire
duration of the usage, e.g. to avoid TOCTOU bugs.  handle_pqap() is a
bit of a special case, as it's explicitly invoked from KVM with
kvm->srcu already held, and the VFIO AP driver is in many ways an
extension of KVM that happens to live in a separate module.

Providing a dedicated API for the VFIO AP driver will allow restricting
the vast majority of generic KVM's exports to KVM submodules (e.g. to
x86's kvm-{amd,intel}.ko vendor modules).

No functional change intended.
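
As background for the export mechanism used below: EXPORT_SYMBOL_FOR_MODULES(),
as used throughout this series, ties a symbol to the named module(s) so that
only those modules can resolve it at load time.  A minimal sketch of the
pattern (the helper and module name here are hypothetical, purely for
illustration):

	#include <linux/export.h>

	/* Only "some_driver.ko" may link against this symbol. */
	int my_subsystem_helper(void)
	{
		return 0;
	}
	EXPORT_SYMBOL_FOR_MODULES(my_subsystem_helper, "some_driver");

Any other module that references my_subsystem_helper() should fail to load
with an unresolved-symbol error, which is the containment this series relies
on.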
Acked-by: Anthony Krowiak
Signed-off-by: Sean Christopherson
Reviewed-by: Christian Borntraeger
---
 arch/s390/include/asm/kvm_host.h  | 2 ++
 arch/s390/kvm/priv.c              | 8 ++++++++
 drivers/s390/crypto/vfio_ap_ops.c | 2 +-
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index f870d09515cc..ee25eeda12fd 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -722,6 +722,8 @@ extern int kvm_s390_enter_exit_sie(struct kvm_s390_sie_block *scb,
 extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
 extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
 
+bool kvm_s390_is_gpa_in_memslot(struct kvm *kvm, gpa_t gpa);
+
 static inline void kvm_arch_free_memslot(struct kvm *kvm,
					  struct kvm_memory_slot *slot) {}
 static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 9253c70897a8..9a71b6e00948 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -605,6 +605,14 @@ static int handle_io_inst(struct kvm_vcpu *vcpu)
 	}
 }
 
+#if IS_ENABLED(CONFIG_VFIO_AP)
+bool kvm_s390_is_gpa_in_memslot(struct kvm *kvm, gpa_t gpa)
+{
+	return kvm_is_gpa_in_memslot(kvm, gpa);
+}
+EXPORT_SYMBOL_FOR_MODULES(kvm_s390_is_gpa_in_memslot, "vfio_ap");
+#endif
+
 /*
  * handle_pqap: Handling pqap interception
  * @vcpu: the vcpu having issue the pqap instruction
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index 766557547f83..eb5ff49f6fe7 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -354,7 +354,7 @@ static int vfio_ap_validate_nib(struct kvm_vcpu *vcpu, dma_addr_t *nib)
 
 	if (!*nib)
 		return -EINVAL;
-	if (kvm_is_error_hva(gfn_to_hva(vcpu->kvm, *nib >> PAGE_SHIFT)))
+	if (!kvm_s390_is_gpa_in_memslot(vcpu->kvm, *nib))
 		return -EINVAL;
 
 	return 0;
-- 
2.51.0.470.ga7dc726c21-goog

From nobody Thu Oct 2 07:45:10 2025
From: Sean Christopherson
To: Madhavan Srinivasan, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson, Paolo Bonzini,
    Vitaly Kuznetsov, Tony Krowiak, Halil Pasic, Jason Herne,
    Harald Freudenberger, Holger Dengler
Cc: linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 18 Sep 2025 17:33:00 -0700
Message-ID: <20250919003303.1355064-3-seanjc@google.com>
In-Reply-To: <20250919003303.1355064-1-seanjc@google.com>
References: <20250919003303.1355064-1-seanjc@google.com>
Subject: [PATCH v2 2/5] KVM: Export KVM-internal symbols for sub-modules only

Rework the vast majority of KVM's exports to expose symbols only to KVM
submodules, i.e. to x86's kvm-{amd,intel}.ko and PPC's kvm-{pr,hv}.ko.
With few exceptions, KVM's exported APIs are intended (and safe) for
KVM-internal usage only.
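
As a sketch of the intended behavior (the macro and config names are taken
from the hunks below; the exported symbol is just an example):

	/*
	 * With CONFIG_KVM_AMD=m and CONFIG_KVM_INTEL=m, asm/kvm_types.h
	 * defines:
	 *
	 *	#define KVM_SUB_MODULES kvm-amd,kvm-intel
	 *
	 * so this:
	 */
	EXPORT_SYMBOL_FOR_KVM_INTERNAL(vcpu_load);
	/*
	 * expands to:
	 *
	 *	EXPORT_SYMBOL_FOR_MODULES(vcpu_load, "kvm-amd,kvm-intel")
	 *
	 * and when KVM_SUB_MODULES is undefined, e.g. with everything
	 * built in, the macro expands to nothing, i.e. no export at all.
	 */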
Keep kvm_get_kvm(), kvm_get_kvm_safe(), and kvm_put_kvm() as normal
exports, as they are needed by VFIO, and are generally safe for external
usage (though ideally even the get/put APIs would be KVM-internal, and
VFIO would pin a VM by grabbing a reference to its associated file).

Implement the framework in kvm_types.h, providing a macro to restrict
KVM-specific kernel exports, i.e. to provide symbol exports for KVM if
and only if KVM is built as one or more modules.

Signed-off-by: Sean Christopherson
---
 arch/powerpc/include/asm/kvm_types.h |  15 ++++
 arch/x86/include/asm/kvm_types.h     |  10 +++
 include/linux/kvm_types.h            |  25 ++++--
 virt/kvm/eventfd.c                   |   2 +-
 virt/kvm/guest_memfd.c               |   4 +-
 virt/kvm/kvm_main.c                  | 128 +++++++++++++--------------
 6 files changed, 110 insertions(+), 74 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kvm_types.h

diff --git a/arch/powerpc/include/asm/kvm_types.h b/arch/powerpc/include/asm/kvm_types.h
new file mode 100644
index 000000000000..656b498ed3b6
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_types.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_PPC_KVM_TYPES_H
+#define _ASM_PPC_KVM_TYPES_H
+
+#if IS_MODULE(CONFIG_KVM_BOOK3S_64_PR) && IS_MODULE(CONFIG_KVM_BOOK3S_64_HV)
+#define KVM_SUB_MODULES kvm-pr,kvm-hv
+#elif IS_MODULE(CONFIG_KVM_BOOK3S_64_PR)
+#define KVM_SUB_MODULES kvm-pr
+#elif IS_MODULE(CONFIG_KVM_BOOK3S_64_HV)
+#define KVM_SUB_MODULES kvm-hv
+#else
+#undef KVM_SUB_MODULES
+#endif
+
+#endif
diff --git a/arch/x86/include/asm/kvm_types.h b/arch/x86/include/asm/kvm_types.h
index 08f1b57d3b62..23268a188e70 100644
--- a/arch/x86/include/asm/kvm_types.h
+++ b/arch/x86/include/asm/kvm_types.h
@@ -2,6 +2,16 @@
 #ifndef _ASM_X86_KVM_TYPES_H
 #define _ASM_X86_KVM_TYPES_H
 
+#if IS_MODULE(CONFIG_KVM_AMD) && IS_MODULE(CONFIG_KVM_INTEL)
+#define KVM_SUB_MODULES kvm-amd,kvm-intel
+#elif IS_MODULE(CONFIG_KVM_AMD)
+#define KVM_SUB_MODULES kvm-amd
+#elif IS_MODULE(CONFIG_KVM_INTEL)
+#define KVM_SUB_MODULES kvm-intel
+#else
+#undef KVM_SUB_MODULES
+#endif
+
 #define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
 
 #endif /* _ASM_X86_KVM_TYPES_H */
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 827ecc0b7e10..490464c205b4 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -3,6 +3,23 @@
 #ifndef __KVM_TYPES_H__
 #define __KVM_TYPES_H__
 
+#include
+#include
+#include
+#include
+
+#ifdef KVM_SUB_MODULES
+#define EXPORT_SYMBOL_FOR_KVM_INTERNAL(symbol)				\
+	EXPORT_SYMBOL_FOR_MODULES(symbol, __stringify(KVM_SUB_MODULES))
+#else
+#define EXPORT_SYMBOL_FOR_KVM_INTERNAL(symbol)
+#endif
+
+#ifndef __ASSEMBLER__
+
+#include
+#include
+
 struct kvm;
 struct kvm_async_pf;
 struct kvm_device_ops;
@@ -19,13 +36,6 @@ struct kvm_memslots;
 
 enum kvm_mr_change;
 
-#include
-#include
-#include
-#include
-
-#include
-
 /*
  * Address types:
  *
@@ -116,5 +126,6 @@ struct kvm_vcpu_stat_generic {
 };
 
 #define KVM_STATS_NAME_SIZE 48
+#endif /* !__ASSEMBLER__ */
 
 #endif /* __KVM_TYPES_H__ */
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index 6b1133a6617f..a7794ffdb976 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -525,7 +525,7 @@ bool kvm_irq_has_notifier(struct kvm *kvm, unsigned irqchip, unsigned pin)
 
 	return false;
 }
-EXPORT_SYMBOL_GPL(kvm_irq_has_notifier);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_irq_has_notifier);
 
 void kvm_notify_acked_gsi(struct kvm *kvm, int gsi)
 {
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 08a6bc7d25b6..4c26000f4d36 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -702,7 +702,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 	fput(file);
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_gmem_get_pfn);
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
@@ -784,5 +784,5 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 	fput(file);
 	return ret && !i ? ret : i;
 }
-EXPORT_SYMBOL_GPL(kvm_gmem_populate);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_gmem_populate);
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fee108988028..83a1b4dbbbd8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -77,22 +77,22 @@ MODULE_LICENSE("GPL");
 /* Architectures should define their poll value according to the halt latency */
 unsigned int halt_poll_ns = KVM_HALT_POLL_NS_DEFAULT;
 module_param(halt_poll_ns, uint, 0644);
-EXPORT_SYMBOL_GPL(halt_poll_ns);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(halt_poll_ns);
 
 /* Default doubles per-vcpu halt_poll_ns. */
 unsigned int halt_poll_ns_grow = 2;
 module_param(halt_poll_ns_grow, uint, 0644);
-EXPORT_SYMBOL_GPL(halt_poll_ns_grow);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(halt_poll_ns_grow);
 
 /* The start value to grow halt_poll_ns from */
 unsigned int halt_poll_ns_grow_start = 10000; /* 10us */
 module_param(halt_poll_ns_grow_start, uint, 0644);
-EXPORT_SYMBOL_GPL(halt_poll_ns_grow_start);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(halt_poll_ns_grow_start);
 
 /* Default halves per-vcpu halt_poll_ns. */
 unsigned int halt_poll_ns_shrink = 2;
 module_param(halt_poll_ns_shrink, uint, 0644);
-EXPORT_SYMBOL_GPL(halt_poll_ns_shrink);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(halt_poll_ns_shrink);
 
 /*
  * Allow direct access (from KVM or the CPU) without MMU notifier protection
@@ -170,7 +170,7 @@ void vcpu_load(struct kvm_vcpu *vcpu)
 	kvm_arch_vcpu_load(vcpu, cpu);
 	put_cpu();
 }
-EXPORT_SYMBOL_GPL(vcpu_load);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(vcpu_load);
 
 void vcpu_put(struct kvm_vcpu *vcpu)
 {
@@ -180,7 +180,7 @@ void vcpu_put(struct kvm_vcpu *vcpu)
 	__this_cpu_write(kvm_running_vcpu, NULL);
 	preempt_enable();
 }
-EXPORT_SYMBOL_GPL(vcpu_put);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(vcpu_put);
 
 /* TODO: merge with kvm_arch_vcpu_should_kick */
 static bool kvm_request_needs_ipi(struct kvm_vcpu *vcpu, unsigned req)
@@ -288,7 +288,7 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 
 	return called;
 }
-EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_make_all_cpus_request);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
@@ -309,7 +309,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	    || kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.generic.remote_tlb_flush;
 }
-EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_flush_remote_tlbs);
 
 void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
@@ -499,7 +499,7 @@ void kvm_destroy_vcpus(struct kvm *kvm)
 
 	atomic_set(&kvm->online_vcpus, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_destroy_vcpus);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_destroy_vcpus);
 
 #ifdef CONFIG_KVM_GENERIC_MMU_NOTIFIER
 static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
@@ -1356,7 +1356,7 @@ void kvm_put_kvm_no_destroy(struct kvm *kvm)
 {
 	WARN_ON(refcount_dec_and_test(&kvm->users_count));
 }
-EXPORT_SYMBOL_GPL(kvm_put_kvm_no_destroy);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_put_kvm_no_destroy);
 
 static int kvm_vm_release(struct inode *inode, struct file *filp)
 {
@@ -1388,7 +1388,7 @@ int kvm_trylock_all_vcpus(struct kvm *kvm)
 	}
 	return -EINTR;
 }
-EXPORT_SYMBOL_GPL(kvm_trylock_all_vcpus);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_trylock_all_vcpus);
 
 int kvm_lock_all_vcpus(struct kvm *kvm)
 {
@@ -1413,7 +1413,7 @@ int kvm_lock_all_vcpus(struct kvm *kvm)
 	}
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_lock_all_vcpus);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lock_all_vcpus);
 
 void kvm_unlock_all_vcpus(struct kvm *kvm)
 {
@@ -1425,7 +1425,7 @@ void kvm_unlock_all_vcpus(struct kvm *kvm)
 	kvm_for_each_vcpu(i, vcpu, kvm)
 		mutex_unlock(&vcpu->mutex);
 }
-EXPORT_SYMBOL_GPL(kvm_unlock_all_vcpus);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_unlock_all_vcpus);
 
 /*
  * Allocation size is twice as large as the actual dirty bitmap size.
@@ -2133,7 +2133,7 @@ int kvm_set_internal_memslot(struct kvm *kvm,
 
 	return kvm_set_memory_region(kvm, mem);
 }
-EXPORT_SYMBOL_GPL(kvm_set_internal_memslot);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_internal_memslot);
 
 static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
					   struct kvm_userspace_memory_region2 *mem)
@@ -2192,7 +2192,7 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 		*is_dirty = 1;
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_dirty_log);
 
 #else /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
 /**
@@ -2627,7 +2627,7 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
 {
 	return __gfn_to_memslot(kvm_memslots(kvm), gfn);
 }
-EXPORT_SYMBOL_GPL(gfn_to_memslot);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(gfn_to_memslot);
 
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
@@ -2661,7 +2661,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
 
 	return NULL;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_memslot);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_gfn_to_memslot);
 
 bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
 {
@@ -2669,7 +2669,7 @@ bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
 
 	return kvm_is_visible_memslot(memslot);
 }
-EXPORT_SYMBOL_GPL(kvm_is_visible_gfn);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_is_visible_gfn);
 
 bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
@@ -2677,7 +2677,7 @@ bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 
 	return kvm_is_visible_memslot(memslot);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_is_visible_gfn);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_is_visible_gfn);
 
 unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
@@ -2734,19 +2734,19 @@ unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot,
 {
 	return gfn_to_hva_many(slot, gfn, NULL);
 }
-EXPORT_SYMBOL_GPL(gfn_to_hva_memslot);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(gfn_to_hva_memslot);
 
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
 {
 	return gfn_to_hva_many(gfn_to_memslot(kvm, gfn), gfn, NULL);
 }
-EXPORT_SYMBOL_GPL(gfn_to_hva);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(gfn_to_hva);
 
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
 	return gfn_to_hva_many(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, NULL);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_hva);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_gfn_to_hva);
 
 /*
  * Return the hva of a @gfn and the R/W attribute if possible.
@@ -2810,7 +2810,7 @@ void kvm_release_page_clean(struct page *page)
 	kvm_set_page_accessed(page);
 	put_page(page);
 }
-EXPORT_SYMBOL_GPL(kvm_release_page_clean);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_release_page_clean);
 
 void kvm_release_page_dirty(struct page *page)
 {
@@ -2820,7 +2820,7 @@ void kvm_release_page_dirty(struct page *page)
 	kvm_set_page_dirty(page);
 	kvm_release_page_clean(page);
 }
-EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_release_page_dirty);
 
 static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
				 struct follow_pfnmap_args *map, bool writable)
@@ -3064,7 +3064,7 @@ kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
 
 	return kvm_follow_pfn(&kfp);
 }
-EXPORT_SYMBOL_GPL(__kvm_faultin_pfn);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_faultin_pfn);
 
 int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
		       struct page **pages, int nr_pages)
@@ -3081,7 +3081,7 @@ int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
 
 	return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
 }
-EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_prefetch_pages);
 
 /*
  * Don't use this API unless you are absolutely, positively certain that KVM
@@ -3103,7 +3103,7 @@ struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write)
 	(void)kvm_follow_pfn(&kfp);
 	return refcounted_page;
 }
-EXPORT_SYMBOL_GPL(__gfn_to_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__gfn_to_page);
 
 int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
		   bool writable)
@@ -3137,7 +3137,7 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
 
 	return map->hva ? 0 : -EFAULT;
 }
-EXPORT_SYMBOL_GPL(__kvm_vcpu_map);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_vcpu_map);
 
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
 {
@@ -3165,7 +3165,7 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
 	map->page = NULL;
 	map->pinned_page = NULL;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_unmap);
 
 static int next_segment(unsigned long len, int offset)
 {
@@ -3201,7 +3201,7 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 
 	return __kvm_read_guest_page(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest_page);
 
 int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
			     int offset, int len)
@@ -3210,7 +3210,7 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
 
 	return __kvm_read_guest_page(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_read_guest_page);
 
 int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len)
 {
@@ -3230,7 +3230,7 @@ int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest);
 
 int kvm_vcpu_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa, void *data, unsigned long len)
 {
@@ -3250,7 +3250,7 @@ int kvm_vcpu_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa, void *data, unsigned l
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_read_guest);
 
 static int __kvm_read_guest_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
				   void *data, int offset,
				   unsigned long len)
@@ -3281,7 +3281,7 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 
 	return __kvm_read_guest_atomic(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_read_guest_atomic);
 
 /* Copy @len bytes from @data into guest memory at '(@gfn * PAGE_SIZE) + @offset' */
 static int __kvm_write_guest_page(struct kvm *kvm,
@@ -3311,7 +3311,7 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
 
 	return __kvm_write_guest_page(kvm, slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest_page);
 
 int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
			      const void *data, int offset, int len)
@@ -3320,7 +3320,7 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 
 	return __kvm_write_guest_page(vcpu->kvm, slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_write_guest_page);
 
 int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
		    unsigned long len)
@@ -3341,7 +3341,7 @@ int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest);
 
 int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
			 unsigned long len)
@@ -3362,7 +3362,7 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_write_guest);
 
 static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
				       struct gfn_to_hva_cache *ghc,
@@ -3411,7 +3411,7 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	return __kvm_gfn_to_hva_cache_init(slots, ghc, gpa, len);
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_hva_cache_init);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_gfn_to_hva_cache_init);
 
 int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
				  void *data, unsigned int offset,
@@ -3442,14 +3442,14 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_offset_cached);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest_offset_cached);
 
 int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
			   void *data, unsigned long len)
 {
 	return kvm_write_guest_offset_cached(kvm, ghc, data, 0, len);
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_cached);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest_cached);
 
 int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
				 void *data, unsigned int offset,
@@ -3479,14 +3479,14 @@ int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_offset_cached);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest_offset_cached);
 
 int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
			  void *data, unsigned long len)
 {
 	return kvm_read_guest_offset_cached(kvm, ghc, data, 0, len);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_cached);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest_cached);
 
 int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 {
@@ -3506,7 +3506,7 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_clear_guest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_clear_guest);
 
 void mark_page_dirty_in_slot(struct kvm *kvm,
			     const struct kvm_memory_slot *memslot,
@@ -3531,7 +3531,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
 			set_bit_le(rel_gfn, memslot->dirty_bitmap);
 	}
 }
-EXPORT_SYMBOL_GPL(mark_page_dirty_in_slot);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(mark_page_dirty_in_slot);
 
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 {
@@ -3540,7 +3540,7 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 	memslot = gfn_to_memslot(kvm, gfn);
 	mark_page_dirty_in_slot(kvm, memslot, gfn);
 }
-EXPORT_SYMBOL_GPL(mark_page_dirty);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(mark_page_dirty);
 
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
@@ -3549,7 +3549,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 	memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	mark_page_dirty_in_slot(vcpu->kvm, memslot, gfn);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_mark_page_dirty);
 
 void kvm_sigset_activate(struct kvm_vcpu *vcpu)
 {
@@ -3786,7 +3786,7 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 
 	trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu));
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_halt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_halt);
 
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 {
@@ -3798,7 +3798,7 @@ bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 
 	return false;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_wake_up);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_wake_up);
 
 #ifndef CONFIG_S390
 /*
@@ -3850,7 +3850,7 @@ void __kvm_vcpu_kick(struct kvm_vcpu *vcpu, bool wait)
 out:
 	put_cpu();
 }
-EXPORT_SYMBOL_GPL(__kvm_vcpu_kick);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_vcpu_kick);
 #endif /* !CONFIG_S390 */
 
 int kvm_vcpu_yield_to(struct kvm_vcpu *target)
@@ -3873,7 +3873,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_yield_to);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_yield_to);
 
 /*
  * Helper that checks whether a VCPU is eligible for directed yield.
@@ -4028,7 +4028,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 	/* Ensure vcpu is not eligible during next spinloop */
 	kvm_vcpu_set_dy_eligible(me, false);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_on_spin);
 
 static bool kvm_page_in_dirty_ring(struct kvm *kvm, unsigned long pgoff)
 {
@@ -5010,7 +5010,7 @@ bool kvm_are_all_memslots_empty(struct kvm *kvm)
 
 	return true;
 }
-EXPORT_SYMBOL_GPL(kvm_are_all_memslots_empty);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_are_all_memslots_empty);
 
 static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
					   struct kvm_enable_cap *cap)
@@ -5465,7 +5465,7 @@ bool file_is_kvm(struct file *file)
 {
 	return file && file->f_op == &kvm_vm_fops;
 }
-EXPORT_SYMBOL_GPL(file_is_kvm);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(file_is_kvm);
 
 static int kvm_dev_ioctl_create_vm(unsigned long type)
 {
@@ -5560,10 +5560,10 @@ static struct miscdevice kvm_dev = {
 #ifdef CONFIG_KVM_GENERIC_HARDWARE_ENABLING
 bool enable_virt_at_load = true;
 module_param(enable_virt_at_load, bool, 0444);
-EXPORT_SYMBOL_GPL(enable_virt_at_load);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_virt_at_load);
 
 __visible bool kvm_rebooting;
-EXPORT_SYMBOL_GPL(kvm_rebooting);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_rebooting);
 
 static DEFINE_PER_CPU(bool, virtualization_enabled);
 static DEFINE_MUTEX(kvm_usage_lock);
@@ -5714,7 +5714,7 @@ int kvm_enable_virtualization(void)
 	--kvm_usage_count;
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_enable_virtualization);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_enable_virtualization);
 
 void kvm_disable_virtualization(void)
 {
@@ -5727,7 +5727,7 @@ void kvm_disable_virtualization(void)
 	cpuhp_remove_state(CPUHP_AP_KVM_ONLINE);
 	kvm_arch_disable_virtualization();
 }
-EXPORT_SYMBOL_GPL(kvm_disable_virtualization);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_disable_virtualization);
 
 static int kvm_init_virtualization(void)
 {
@@ -5864,7 +5864,7 @@ int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
 	r = __kvm_io_bus_write(vcpu, bus, &range, val);
 	return r < 0 ? r : 0;
 }
-EXPORT_SYMBOL_GPL(kvm_io_bus_write);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_io_bus_write);
 
 int kvm_io_bus_write_cookie(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
			    gpa_t addr, int len, const void *val, long cookie)
@@ -5933,7 +5933,7 @@ int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
 	r = __kvm_io_bus_read(vcpu, bus, &range, val);
 	return r < 0 ? r : 0;
 }
-EXPORT_SYMBOL_GPL(kvm_io_bus_read);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_io_bus_read);
 
 int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
			    int len, struct kvm_io_device *dev)
@@ -6051,7 +6051,7 @@ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
 
 	return iodev;
 }
-EXPORT_SYMBOL_GPL(kvm_io_bus_get_dev);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_io_bus_get_dev);
 
 static int kvm_debugfs_open(struct inode *inode, struct file *file,
			    int (*get)(void *, u64 *), int (*set)(void *, u64),
@@ -6388,7 +6388,7 @@ struct kvm_vcpu *kvm_get_running_vcpu(void)
 
 	return vcpu;
 }
-EXPORT_SYMBOL_GPL(kvm_get_running_vcpu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_running_vcpu);
 
 /**
 * kvm_get_running_vcpus - get the per-CPU array of currently running vcpus.
@@ -6523,7 +6523,7 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
 	kmem_cache_destroy(kvm_vcpu_cache);
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_init);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init);
 
 void kvm_exit(void)
 {
@@ -6546,4 +6546,4 @@ void kvm_exit(void)
 	kvm_async_pf_deinit();
 	kvm_irqfd_exit();
 }
-EXPORT_SYMBOL_GPL(kvm_exit);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_exit);
-- 
2.51.0.470.ga7dc726c21-goog

From nobody Thu Oct 2 07:45:10 2025
From: Sean Christopherson
To: Madhavan Srinivasan, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson, Paolo Bonzini,
    Vitaly Kuznetsov, Tony Krowiak, Halil Pasic, Jason Herne,
    Harald Freudenberger, Holger Dengler
Cc: linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 18 Sep 2025 17:33:01 -0700
Message-ID: <20250919003303.1355064-4-seanjc@google.com>
In-Reply-To: <20250919003303.1355064-1-seanjc@google.com>
References: <20250919003303.1355064-1-seanjc@google.com>
Subject: [PATCH v2 3/5] KVM: x86: Move kvm_intr_is_single_vcpu() to lapic.c

Move kvm_intr_is_single_vcpu() to lapic.c, drop its export, and make its
"fast" helper local to lapic.c.  kvm_intr_is_single_vcpu() is only
usable if the local APIC is in-kernel, i.e. it most definitely belongs
in the local APIC code.

No functional change intended.

Fixes: 2f5fb6b965b3 ("KVM: x86: Dedup AVIC vs. PI code for identifying target vCPU")
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  3 ---
 arch/x86/kvm/irq.c              | 28 ----------------------------
 arch/x86/kvm/lapic.c            | 33 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/lapic.h            |  4 ++--
 4 files changed, 33 insertions(+), 35 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 17772513b9cc..00a210130fba 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2412,9 +2412,6 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu);
 
-bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
-			     struct kvm_vcpu **dest_vcpu);
-
 static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
 {
 	/* We can only post Fixed and LowPrio IRQs */
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index a6b122f732be..153134893301 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -354,34 +354,6 @@ int kvm_set_routing_entry(struct kvm *kvm,
 	return 0;
 }
 
-bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
-			     struct kvm_vcpu **dest_vcpu)
-{
-	int r = 0;
-	unsigned long i;
-	struct kvm_vcpu *vcpu;
-
-	if (kvm_intr_is_single_vcpu_fast(kvm, irq, dest_vcpu))
-		return true;
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (!kvm_apic_present(vcpu))
-			continue;
-
-		if (!kvm_apic_match_dest(vcpu, NULL, irq->shorthand,
-					 irq->dest_id, irq->dest_mode))
-			continue;
-
-		if (++r == 2)
-			return false;
-
-		*dest_vcpu = vcpu;
-	}
-
-	return r == 1;
-}
-EXPORT_SYMBOL_GPL(kvm_intr_is_single_vcpu);
-
 void kvm_scan_ioapic_irq(struct kvm_vcpu *vcpu, u32 dest_id, u16 dest_mode,
			 u8 vector, unsigned long *ioapic_handled_vectors)
 {
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 3b76192b24e9..b5e47c523164 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1237,8 +1237,9 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
 *   interrupt.
 * - Otherwise, use remapped mode to inject the interrupt.
 */
-bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq,
-				  struct kvm_vcpu **dest_vcpu)
+static bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm,
+					 struct kvm_lapic_irq *irq,
+					 struct kvm_vcpu **dest_vcpu)
 {
 	struct kvm_apic_map *map;
 	unsigned long bitmap;
@@ -1265,6 +1266,34 @@ bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq,
 	return ret;
 }
 
+bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
+			     struct kvm_vcpu **dest_vcpu)
+{
+	int r = 0;
+	unsigned long i;
+	struct kvm_vcpu *vcpu;
+
+	if (kvm_intr_is_single_vcpu_fast(kvm, irq, dest_vcpu))
+		return true;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!kvm_apic_present(vcpu))
+			continue;
+
+		if (!kvm_apic_match_dest(vcpu, NULL, irq->shorthand,
+					 irq->dest_id, irq->dest_mode))
+			continue;
+
+		if (++r == 2)
+			return false;
+
+		*dest_vcpu = vcpu;
+	}
+
+	return r == 1;
+}
+EXPORT_SYMBOL_GPL(kvm_intr_is_single_vcpu);
+
 int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
		struct kvm_lapic_irq *irq, struct dest_map *dest_map)
 {
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 50123fe7f58f..282b9b7da98c 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -236,8 +236,8 @@ void kvm_wait_lapic_expire(struct kvm_vcpu *vcpu);
 void kvm_bitmap_or_dest_vcpus(struct kvm *kvm, struct kvm_lapic_irq *irq,
			      unsigned long *vcpu_bitmap);
 
-bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq,
-				  struct kvm_vcpu **dest_vcpu);
+bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
+			     struct kvm_vcpu **dest_vcpu);
 void kvm_lapic_switch_to_sw_timer(struct kvm_vcpu *vcpu);
 void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu);
 void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu);
-- 
2.51.0.470.ga7dc726c21-goog

From nobody Thu Oct 2 07:45:10 2025
From: Sean Christopherson
To: Madhavan Srinivasan, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson, Paolo Bonzini,
    Vitaly Kuznetsov, Tony Krowiak, Halil Pasic, Jason Herne,
    Harald Freudenberger, Holger Dengler
Cc: linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 18 Sep 2025 17:33:02 -0700
Message-ID: <20250919003303.1355064-5-seanjc@google.com>
In-Reply-To: <20250919003303.1355064-1-seanjc@google.com>
References: <20250919003303.1355064-1-seanjc@google.com>
Subject: [PATCH v2 4/5] KVM: x86: Drop pointless exports of kvm_arch_xxx() hooks

Drop the exporting of several kvm_arch_xxx() hooks that are only called
from arch-neutral code, i.e. that are only called from kvm.ko.
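
For context, a sketch of why no export is needed (the exact object list is
an assumption about the x86 KVM build, where the arch code is linked into
kvm.ko itself; the callee body below is taken from the hunk that follows):

	/*
	 * kvm.ko on x86 is roughly:
	 *
	 *	virt/kvm/kvm_main.o + ... + arch/x86/kvm/x86.o + ...
	 *
	 * i.e. the arch-neutral caller and the kvm_arch_xxx() callee land
	 * in the same module, and EXPORT_SYMBOL*() only matters for
	 * references that cross a module boundary.
	 */

	/* arch/x86/kvm/x86.c, also part of kvm.ko, hence no export: */
	bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
	{
		return (vcpu->arch.msr_kvm_poll_control & 1) == 0;
	}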
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e07936efacd4..ea0fffb24d4d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13542,14 +13542,12 @@ void kvm_arch_register_noncoherent_dma(struct kvm *kvm)
 	if (atomic_inc_return(&kvm->arch.noncoherent_dma_count) == 1)
 		kvm_noncoherent_dma_assignment_start_or_stop(kvm);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_register_noncoherent_dma);
 
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm)
 {
 	if (!atomic_dec_return(&kvm->arch.noncoherent_dma_count))
 		kvm_noncoherent_dma_assignment_start_or_stop(kvm);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_unregister_noncoherent_dma);
 
 bool kvm_arch_has_noncoherent_dma(struct kvm *kvm)
 {
@@ -13561,7 +13559,6 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
 {
 	return (vcpu->arch.msr_kvm_poll_control & 1) == 0;
 }
-EXPORT_SYMBOL_GPL(kvm_arch_no_poll);
 
 #ifdef CONFIG_KVM_GUEST_MEMFD
 /*
-- 
2.51.0.470.ga7dc726c21-goog

From nobody Thu Oct 2 07:45:10 2025
From: Sean Christopherson
To: Madhavan Srinivasan, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson, Paolo Bonzini,
    Vitaly Kuznetsov, Tony Krowiak, Halil Pasic, Jason Herne,
    Harald Freudenberger, Holger Dengler
Cc: linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 18 Sep 2025 17:33:03 -0700
Message-ID: <20250919003303.1355064-6-seanjc@google.com>
In-Reply-To: <20250919003303.1355064-1-seanjc@google.com>
References: <20250919003303.1355064-1-seanjc@google.com>
Subject: [PATCH v2 5/5] KVM: x86: Export KVM-internal symbols for sub-modules only

Rework almost all of KVM x86's exports to expose symbols only to KVM's
vendor modules, i.e. to kvm-{amd,intel}.ko.

Keep the generic exports that are guarded by
CONFIG_KVM_EXTERNAL_WRITE_TRACKING=y, as they're explicitly
designed/intended for external usage.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/cpuid.c        |  10 +-
 arch/x86/kvm/hyperv.c       |   4 +-
 arch/x86/kvm/irq.c          |   6 +-
 arch/x86/kvm/kvm_onhyperv.c |   6 +-
 arch/x86/kvm/lapic.c        |  40 +++----
 arch/x86/kvm/mmu/mmu.c      |  36 +++---
 arch/x86/kvm/mmu/spte.c     |  10 +-
 arch/x86/kvm/mmu/tdp_mmu.c  |   2 +-
 arch/x86/kvm/pmu.c          |  10 +-
 arch/x86/kvm/smm.c          |   2 +-
 arch/x86/kvm/x86.c          | 216 ++++++++++++++++++------------------
 11 files changed, 171 insertions(+), 171 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index efee08fad72e..b5ba207f1aa5 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -34,7 +34,7 @@
 * aligned to sizeof(unsigned long) because it's not accessed via bitops.
 arch/x86/kvm/cpuid.c        |  10 +-
 arch/x86/kvm/hyperv.c       |   4 +-
 arch/x86/kvm/irq.c          |   6 +-
 arch/x86/kvm/kvm_onhyperv.c |   6 +-
 arch/x86/kvm/lapic.c        |  40 +++----
 arch/x86/kvm/mmu/mmu.c      |  36 +++---
 arch/x86/kvm/mmu/spte.c     |  10 +-
 arch/x86/kvm/mmu/tdp_mmu.c  |   2 +-
 arch/x86/kvm/pmu.c          |  10 +-
 arch/x86/kvm/smm.c          |   2 +-
 arch/x86/kvm/x86.c          | 216 ++++++++++++++++++------------------
 11 files changed, 171 insertions(+), 171 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index efee08fad72e..b5ba207f1aa5 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -34,7 +34,7 @@
  * aligned to sizeof(unsigned long) because it's not accessed via bitops.
  */
 u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
-EXPORT_SYMBOL_GPL(kvm_cpu_caps);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_caps);

 struct cpuid_xstate_sizes {
 	u32 eax;
@@ -131,7 +131,7 @@ struct kvm_cpuid_entry2 *kvm_find_cpuid_entry2(

 	return NULL;
 }
-EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry2);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_find_cpuid_entry2);

 static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
 {
@@ -1228,7 +1228,7 @@ void kvm_set_cpu_caps(void)
 		kvm_cpu_cap_clear(X86_FEATURE_RDPID);
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_set_cpu_caps);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_cpu_caps);

 #undef F
 #undef SCATTERED_F
@@ -2052,7 +2052,7 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
 		       used_max_basic);
 	return exact;
 }
-EXPORT_SYMBOL_GPL(kvm_cpuid);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpuid);

 int kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
 {
@@ -2070,4 +2070,4 @@ int kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
 	kvm_rdx_write(vcpu, edx);
 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_cpuid);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_cpuid);
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index a471900c7325..38595ecb990d 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -923,7 +923,7 @@ bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu)
 		return false;
 	return vcpu->arch.pv_eoi.msr_val & KVM_MSR_ENABLED;
 }
-EXPORT_SYMBOL_GPL(kvm_hv_assist_page_enabled);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_hv_assist_page_enabled);

 int kvm_hv_get_assist_page(struct kvm_vcpu *vcpu)
 {
@@ -935,7 +935,7 @@ int kvm_hv_get_assist_page(struct kvm_vcpu *vcpu)
 	return kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data,
 				     &hv_vcpu->vp_assist_page,
 				     sizeof(struct hv_vp_assist_page));
 }
-EXPORT_SYMBOL_GPL(kvm_hv_get_assist_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_hv_get_assist_page);

 static void stimer_prepare_msg(struct kvm_vcpu_hv_stimer *stimer)
 {
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index 153134893301..7cc8950005b6 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -103,7 +103,7 @@ int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v)

 	return kvm_apic_has_interrupt(v) != -1; /* LAPIC */
 }
-EXPORT_SYMBOL_GPL(kvm_cpu_has_injectable_intr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_has_injectable_intr);

 /*
  * check if there is pending interrupt without
@@ -119,7 +119,7 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *v)

 	return kvm_apic_has_interrupt(v) != -1; /* LAPIC */
 }
-EXPORT_SYMBOL_GPL(kvm_cpu_has_interrupt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_has_interrupt);

 /*
  * Read pending interrupt(from non-APIC source)
@@ -148,7 +148,7 @@ int kvm_cpu_get_extint(struct kvm_vcpu *v)
 	WARN_ON_ONCE(!irqchip_split(v->kvm));
 	return get_userspace_extint(v);
 }
-EXPORT_SYMBOL_GPL(kvm_cpu_get_extint);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_get_extint);

 /*
  * Read pending interrupt vector and intack.
diff --git a/arch/x86/kvm/kvm_onhyperv.c b/arch/x86/kvm/kvm_onhyperv.c
index ded0bd688c65..ee53e75a60cb 100644
--- a/arch/x86/kvm/kvm_onhyperv.c
+++ b/arch/x86/kvm/kvm_onhyperv.c
@@ -101,13 +101,13 @@ int hv_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, gfn_t nr_pages)

 	return __hv_flush_remote_tlbs_range(kvm, &range);
 }
-EXPORT_SYMBOL_GPL(hv_flush_remote_tlbs_range);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(hv_flush_remote_tlbs_range);

 int hv_flush_remote_tlbs(struct kvm *kvm)
 {
 	return __hv_flush_remote_tlbs_range(kvm, NULL);
 }
-EXPORT_SYMBOL_GPL(hv_flush_remote_tlbs);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(hv_flush_remote_tlbs);

 void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
 {
@@ -121,4 +121,4 @@ void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
 		spin_unlock(&kvm_arch->hv_root_tdp_lock);
 	}
 }
-EXPORT_SYMBOL_GPL(hv_track_root_tdp);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(hv_track_root_tdp);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index b5e47c523164..0ae7f913d782 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -106,7 +106,7 @@ bool kvm_apic_pending_eoi(struct kvm_vcpu *vcpu, int vector)
 }

 __read_mostly DEFINE_STATIC_KEY_FALSE(kvm_has_noapic_vcpu);
-EXPORT_SYMBOL_GPL(kvm_has_noapic_vcpu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_has_noapic_vcpu);

 __read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_hw_disabled, HZ);
 __read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_sw_disabled, HZ);
@@ -646,7 +646,7 @@ bool __kvm_apic_update_irr(unsigned long *pir, void *regs, int *max_irr)
 	return ((max_updated_irr != -1) &&
 		(max_updated_irr == *max_irr));
 }
-EXPORT_SYMBOL_GPL(__kvm_apic_update_irr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_apic_update_irr);

 bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, unsigned long *pir, int *max_irr)
 {
@@ -657,7 +657,7 @@ bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, unsigned long *pir, int *max_irr
 		apic->irr_pending = true;
 	return irr_updated;
 }
-EXPORT_SYMBOL_GPL(kvm_apic_update_irr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_update_irr);

 static inline int apic_search_irr(struct kvm_lapic *apic)
 {
@@ -697,7 +697,7 @@ void kvm_apic_clear_irr(struct kvm_vcpu *vcpu, int vec)
 {
 	apic_clear_irr(vec, vcpu->arch.apic);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_clear_irr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_clear_irr);

 static void *apic_vector_to_isr(int vec, struct kvm_lapic *apic)
 {
@@ -779,7 +779,7 @@ void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu)

 	kvm_x86_call(hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
 }
-EXPORT_SYMBOL_GPL(kvm_apic_update_hwapic_isr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_update_hwapic_isr);

 int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
 {
@@ -790,7 +790,7 @@ int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
 	 */
 	return apic_find_highest_irr(vcpu->arch.apic);
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_find_highest_irr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lapic_find_highest_irr);

 static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 			     int vector, int level, int trig_mode,
@@ -954,7 +954,7 @@ void kvm_apic_update_ppr(struct kvm_vcpu *vcpu)
 {
 	apic_update_ppr(vcpu->arch.apic);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_update_ppr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_update_ppr);

 static void apic_set_tpr(struct kvm_lapic *apic, u32 tpr)
 {
@@ -1065,7 +1065,7 @@ bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
 		return false;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_apic_match_dest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_match_dest);

 static int kvm_vector_to_index(u32 vector, u32 dest_vcpus,
 			       const unsigned long *bitmap, u32 bitmap_size)
@@ -1292,7 +1292,7 @@ bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,

 	return r == 1;
 }
-EXPORT_SYMBOL_GPL(kvm_intr_is_single_vcpu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_intr_is_single_vcpu);

 int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
 			     struct kvm_lapic_irq *irq, struct dest_map *dest_map)
@@ -1569,7 +1569,7 @@ void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector)
 	kvm_ioapic_send_eoi(apic, vector);
 	kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_set_eoi_accelerated);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_set_eoi_accelerated);

 static void kvm_icr_to_lapic_irq(struct kvm_lapic *apic, u32 icr_low,
 				 u32 icr_high, struct kvm_lapic_irq *irq)
@@ -1600,7 +1600,7 @@ void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high)

 	kvm_irq_delivery_to_apic(apic->vcpu->kvm, apic, &irq, NULL);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_send_ipi);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_send_ipi);

 static u32 apic_get_tmcct(struct kvm_lapic *apic)
 {
@@ -1717,7 +1717,7 @@ u64 kvm_lapic_readable_reg_mask(struct kvm_lapic *apic)

 	return valid_reg_mask;
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_readable_reg_mask);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lapic_readable_reg_mask);

 static int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
 			      void *data)
@@ -1958,7 +1958,7 @@ void kvm_wait_lapic_expire(struct kvm_vcpu *vcpu)
 	    lapic_timer_int_injected(vcpu))
 		__kvm_wait_lapic_expire(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_wait_lapic_expire);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_wait_lapic_expire);

 static void kvm_apic_inject_pending_timer_irqs(struct kvm_lapic *apic)
 {
@@ -2272,7 +2272,7 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
 out:
 	preempt_enable();
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lapic_expired_hv_timer);

 void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu)
 {
@@ -2525,7 +2525,7 @@ void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
 {
 	kvm_lapic_reg_write(vcpu->arch.apic, APIC_EOI, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lapic_set_eoi);

 #define X2APIC_ICR_RESERVED_BITS (GENMASK_ULL(31, 20) | GENMASK_ULL(17, 16) | BIT(13))

@@ -2608,7 +2608,7 @@ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
 	else
 		kvm_lapic_reg_write(apic, offset, kvm_lapic_get_reg(apic, offset));
 }
-EXPORT_SYMBOL_GPL(kvm_apic_write_nodecode);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_write_nodecode);

 void kvm_free_lapic(struct kvm_vcpu *vcpu)
 {
@@ -2746,7 +2746,7 @@ int kvm_apic_set_base(struct kvm_vcpu *vcpu, u64 value, bool host_initiated)
 	kvm_recalculate_apic_map(vcpu->kvm);
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_apic_set_base);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_set_base);

 void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
 {
@@ -2794,7 +2794,7 @@ int kvm_alloc_apic_access_page(struct kvm *kvm)

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_alloc_apic_access_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_alloc_apic_access_page);

 void kvm_inhibit_apic_access_page(struct kvm_vcpu *vcpu)
 {
@@ -3058,7 +3058,7 @@ int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu)
 	__apic_update_ppr(apic, &ppr);
 	return apic_has_interrupt_for_ppr(apic, ppr);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_has_interrupt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_has_interrupt);

 int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu)
 {
@@ -3117,7 +3117,7 @@ void kvm_apic_ack_interrupt(struct kvm_vcpu *vcpu, int vector)
 	}

 }
-EXPORT_SYMBOL_GPL(kvm_apic_ack_interrupt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_ack_interrupt);

 static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
 				struct kvm_lapic_state *s, bool set)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 55335dbd70ce..667d66cf76d5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -110,7 +110,7 @@ static bool __ro_after_init tdp_mmu_allowed;
 #ifdef CONFIG_X86_64
 bool __read_mostly tdp_mmu_enabled = true;
 module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444);
-EXPORT_SYMBOL_GPL(tdp_mmu_enabled);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(tdp_mmu_enabled);
 #endif

 static int max_huge_page_level __read_mostly;
@@ -3865,7 +3865,7 @@ void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu,
 		write_unlock(&kvm->mmu_lock);
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_free_roots);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_free_roots);

 void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu)
 {
@@ -3892,7 +3892,7 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu)

 	kvm_mmu_free_roots(kvm, mmu, roots_to_free);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_free_guest_mode_roots);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_free_guest_mode_roots);

 static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
 			    u8 level)
@@ -4876,7 +4876,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,

 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_handle_page_fault);

 #ifdef CONFIG_X86_64
 static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
@@ -4966,7 +4966,7 @@ int kvm_tdp_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code, u8 *level
 		return -EIO;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_tdp_map_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_tdp_map_page);

 long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
 				    struct kvm_pre_fault_memory *range)
@@ -5162,7 +5162,7 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
 		__clear_sp_write_flooding_count(sp);
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_new_pgd);

 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access)
@@ -5808,7 +5808,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
 	shadow_mmu_init_context(vcpu, context, cpu_role, root_role);
 	kvm_mmu_new_pgd(vcpu, nested_cr3);
 }
-EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init_shadow_npt_mmu);

 static union kvm_cpu_role
 kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
@@ -5862,7 +5862,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,

 	kvm_mmu_new_pgd(vcpu, new_eptp);
 }
-EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init_shadow_ept_mmu);

 static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
 			     union kvm_cpu_role cpu_role)
@@ -5927,7 +5927,7 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu)
 	else
 		init_kvm_softmmu(vcpu, cpu_role);
 }
-EXPORT_SYMBOL_GPL(kvm_init_mmu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init_mmu);

 void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {
@@ -5963,7 +5963,7 @@ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
 	kvm_mmu_unload(vcpu);
 	kvm_init_mmu(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_reset_context);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_reset_context);

 int kvm_mmu_load(struct kvm_vcpu *vcpu)
 {
@@ -5997,7 +5997,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 out:
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_load);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_load);

 void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 {
@@ -6059,7 +6059,7 @@ void kvm_mmu_free_obsolete_roots(struct kvm_vcpu *vcpu)
 	__kvm_mmu_free_obsolete_roots(vcpu->kvm, &vcpu->arch.root_mmu);
 	__kvm_mmu_free_obsolete_roots(vcpu->kvm, &vcpu->arch.guest_mmu);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_free_obsolete_roots);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_free_obsolete_roots);

 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
 				    int *bytes)
@@ -6385,7 +6385,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	return x86_emulate_instruction(vcpu, cr2_or_gpa, emulation_type, insn,
 				       insn_len);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_page_fault);

 void kvm_mmu_print_sptes(struct kvm_vcpu *vcpu, gpa_t gpa, const char *msg)
 {
@@ -6401,7 +6401,7 @@ void kvm_mmu_print_sptes(struct kvm_vcpu *vcpu, gpa_t gpa, const char *msg)
 		pr_cont(", spte[%d] = 0x%llx", level, sptes[level]);
 	pr_cont("\n");
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_print_sptes);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_print_sptes);

 static void __kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 				      u64 addr, hpa_t root_hpa)
@@ -6467,7 +6467,7 @@ void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			__kvm_mmu_invalidate_addr(vcpu, mmu, addr, mmu->prev_roots[i].hpa);
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_addr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_invalidate_addr);

 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
@@ -6484,7 +6484,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 	kvm_mmu_invalidate_addr(vcpu, vcpu->arch.walk_mmu, gva, KVM_MMU_ROOTS_ALL);
 	++vcpu->stat.invlpg;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_invlpg);


 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
@@ -6537,7 +6537,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 	else
 		max_huge_page_level = PG_LEVEL_2M;
 }
-EXPORT_SYMBOL_GPL(kvm_configure_mmu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_configure_mmu);

 static void free_mmu_pages(struct kvm_mmu *mmu)
 {
@@ -7204,7 +7204,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,

 	return need_tlb_flush;
 }
-EXPORT_SYMBOL_GPL(kvm_zap_gfn_range);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_zap_gfn_range);

 static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
 					   const struct kvm_memory_slot *slot)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index df31039b5d63..37647afde7d3 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -22,7 +22,7 @@
 bool __read_mostly enable_mmio_caching = true;
 static bool __ro_after_init allow_mmio_caching;
 module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);
-EXPORT_SYMBOL_GPL(enable_mmio_caching);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_mmio_caching);

 bool __read_mostly kvm_ad_enabled;

@@ -470,13 +470,13 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 	shadow_mmio_mask = mmio_mask;
 	shadow_mmio_access_mask = access_mask;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_mmio_spte_mask);

 void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value)
 {
 	kvm->arch.shadow_mmio_value = mmio_value;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_value);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_mmio_spte_value);

 void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
 {
@@ -487,7 +487,7 @@ void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
 	shadow_me_value = me_value;
 	shadow_me_mask = me_mask;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_set_me_spte_mask);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_me_spte_mask);

 void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
 {
@@ -513,7 +513,7 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
 	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE,
 				   VMX_EPT_RWX_MASK | VMX_EPT_SUPPRESS_VE_BIT, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_masks);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_ept_masks);

 void kvm_mmu_reset_all_pte_masks(void)
 {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7059ac9d58e2..c5734ca5c17d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1982,7 +1982,7 @@ bool kvm_tdp_mmu_gpa_is_mapped(struct kvm_vcpu *vcpu, u64 gpa)
 	spte = sptes[leaf];
 	return is_shadow_present_pte(spte) && is_last_spte(spte, leaf);
 }
-EXPORT_SYMBOL_GPL(kvm_tdp_mmu_gpa_is_mapped);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_tdp_mmu_gpa_is_mapped);

 /*
  * Returns the last level spte pointer of the shadow page walk for the given
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b7dc5bd981ba..40ac4cb44ed2 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -31,7 +31,7 @@ static struct x86_pmu_capability __read_mostly kvm_host_pmu;

 /* KVM's PMU capabilities, i.e. the intersection of KVM and hardware support. */
 struct x86_pmu_capability __read_mostly kvm_pmu_cap;
-EXPORT_SYMBOL_GPL(kvm_pmu_cap);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_cap);

 struct kvm_pmu_emulated_event_selectors {
 	u64 INSTRUCTIONS_RETIRED;
@@ -373,7 +373,7 @@ void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
 	pmc->counter &= pmc_bitmask(pmc);
 	pmc_update_sample_period(pmc);
 }
-EXPORT_SYMBOL_GPL(pmc_write_counter);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(pmc_write_counter);

 static int filter_cmp(const void *pa, const void *pb, u64 mask)
 {
@@ -581,7 +581,7 @@ void kvm_pmu_recalc_pmc_emulation(struct kvm_pmu *pmu, struct kvm_pmc *pmc)
 	if (pmc_is_event_match(pmc, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED))
 		bitmap_set(pmu->pmc_counting_branches, pmc->idx, 1);
 }
-EXPORT_SYMBOL_GPL(kvm_pmu_recalc_pmc_emulation);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_recalc_pmc_emulation);

 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
@@ -996,13 +996,13 @@ void kvm_pmu_instruction_retired(struct kvm_vcpu *vcpu)
 {
 	kvm_pmu_trigger_event(vcpu, vcpu_to_pmu(vcpu)->pmc_counting_instructions);
 }
-EXPORT_SYMBOL_GPL(kvm_pmu_instruction_retired);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_instruction_retired);

 void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
 {
 	kvm_pmu_trigger_event(vcpu, vcpu_to_pmu(vcpu)->pmc_counting_branches);
 }
-EXPORT_SYMBOL_GPL(kvm_pmu_branch_retired);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);

 static bool is_masked_filter_valid(const struct kvm_x86_pmu_event_filter *filter)
 {
diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
index 5dd8a1646800..f04674cad9ef 100644
--- a/arch/x86/kvm/smm.c
+++ b/arch/x86/kvm/smm.c
@@ -131,7 +131,7 @@ void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)

 	kvm_mmu_reset_context(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_smm_changed);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_smm_changed);

 void process_smi(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ea0fffb24d4d..69934531cc1c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -97,10 +97,10 @@
  * vendor module being reloaded with different module parameters.
  */
 struct kvm_caps kvm_caps __read_mostly;
-EXPORT_SYMBOL_GPL(kvm_caps);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_caps);

 struct kvm_host_values kvm_host __read_mostly;
-EXPORT_SYMBOL_GPL(kvm_host);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_host);

 #define ERR_PTR_USR(e) ((void __user *)ERR_PTR(e))

@@ -152,7 +152,7 @@ module_param(ignore_msrs, bool, 0644);

 bool __read_mostly report_ignored_msrs = true;
 module_param(report_ignored_msrs, bool, 0644);
-EXPORT_SYMBOL_GPL(report_ignored_msrs);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(report_ignored_msrs);

 unsigned int min_timer_period_us = 200;
 module_param(min_timer_period_us, uint, 0644);
@@ -166,7 +166,7 @@ module_param(tsc_tolerance_ppm, uint, 0644);

 bool __read_mostly enable_vmware_backdoor = false;
 module_param(enable_vmware_backdoor, bool, 0444);
-EXPORT_SYMBOL_GPL(enable_vmware_backdoor);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_vmware_backdoor);

 /*
  * Flags to manipulate forced emulation behavior (any non-zero value will
@@ -181,7 +181,7 @@ module_param(pi_inject_timer, bint, 0644);

 /* Enable/disable PMU virtualization */
 bool __read_mostly enable_pmu = true;
-EXPORT_SYMBOL_GPL(enable_pmu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_pmu);
 module_param(enable_pmu, bool, 0444);

 bool __read_mostly eager_page_split = true;
@@ -208,7 +208,7 @@ struct kvm_user_return_msrs {
 };

 u32 __read_mostly kvm_nr_uret_msrs;
-EXPORT_SYMBOL_GPL(kvm_nr_uret_msrs);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_nr_uret_msrs);
 static u32 __read_mostly kvm_uret_msrs_list[KVM_MAX_NR_USER_RETURN_MSRS];
 static struct kvm_user_return_msrs __percpu *user_return_msrs;

@@ -218,16 +218,16 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
 				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)

 bool __read_mostly allow_smaller_maxphyaddr = 0;
-EXPORT_SYMBOL_GPL(allow_smaller_maxphyaddr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(allow_smaller_maxphyaddr);

 bool __read_mostly enable_apicv = true;
-EXPORT_SYMBOL_GPL(enable_apicv);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_apicv);

 bool __read_mostly enable_ipiv = true;
-EXPORT_SYMBOL_GPL(enable_ipiv);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_ipiv);

 bool __read_mostly enable_device_posted_irqs = true;
-EXPORT_SYMBOL_GPL(enable_device_posted_irqs);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_device_posted_irqs);

 const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
 	KVM_GENERIC_VM_STATS(),
@@ -612,7 +612,7 @@ int kvm_add_user_return_msr(u32 msr)
 	kvm_uret_msrs_list[kvm_nr_uret_msrs] = msr;
 	return kvm_nr_uret_msrs++;
 }
-EXPORT_SYMBOL_GPL(kvm_add_user_return_msr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_add_user_return_msr);

 int kvm_find_user_return_msr(u32 msr)
 {
@@ -624,7 +624,7 @@ int kvm_find_user_return_msr(u32 msr)
 	}
 	return -1;
 }
-EXPORT_SYMBOL_GPL(kvm_find_user_return_msr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_find_user_return_msr);

 static void kvm_user_return_msr_cpu_online(void)
 {
@@ -664,7 +664,7 @@ int kvm_set_user_return_msr(unsigned slot, u64 value, u64 mask)
 	kvm_user_return_register_notifier(msrs);
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_user_return_msr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_user_return_msr);

 void kvm_user_return_msr_update_cache(unsigned int slot, u64 value)
 {
@@ -673,7 +673,7 @@ void kvm_user_return_msr_update_cache(unsigned int slot, u64 value)
 	msrs->values[slot].curr = value;
 	kvm_user_return_register_notifier(msrs);
 }
-EXPORT_SYMBOL_GPL(kvm_user_return_msr_update_cache);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_user_return_msr_update_cache);

 static void drop_user_return_notifiers(void)
 {
@@ -695,7 +695,7 @@ noinstr void kvm_spurious_fault(void)
 	/* Fault while not rebooting. We want the trace. */
 	BUG_ON(!kvm_rebooting);
 }
-EXPORT_SYMBOL_GPL(kvm_spurious_fault);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_spurious_fault);

 #define EXCPT_BENIGN		0
 #define EXCPT_CONTRIBUTORY	1
@@ -800,7 +800,7 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu,
 	ex->has_payload = false;
 	ex->payload = 0;
 }
-EXPORT_SYMBOL_GPL(kvm_deliver_exception_payload);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_deliver_exception_payload);

 static void kvm_queue_exception_vmexit(struct kvm_vcpu *vcpu, unsigned int vector,
 				       bool has_error_code, u32 error_code,
@@ -884,7 +884,7 @@ void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr)
 {
 	kvm_multiple_exception(vcpu, nr, false, 0, false, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_queue_exception);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_queue_exception);


 void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr,
@@ -892,7 +892,7 @@ void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr,
 {
 	kvm_multiple_exception(vcpu, nr, false, 0, true, payload);
 }
-EXPORT_SYMBOL_GPL(kvm_queue_exception_p);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_queue_exception_p);

 static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr,
 				    u32 error_code, unsigned long payload)
@@ -927,7 +927,7 @@ void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
 	vcpu->arch.exception.has_payload = false;
 	vcpu->arch.exception.payload = 0;
 }
-EXPORT_SYMBOL_GPL(kvm_requeue_exception);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_requeue_exception);

 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err)
 {
@@ -938,7 +938,7 @@ int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err)

 	return 1;
 }
-EXPORT_SYMBOL_GPL(kvm_complete_insn_gp);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_complete_insn_gp);

 static int complete_emulated_insn_gp(struct kvm_vcpu *vcpu, int err)
 {
@@ -988,7 +988,7 @@ void kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,

 	fault_mmu->inject_page_fault(vcpu, fault);
 }
-EXPORT_SYMBOL_GPL(kvm_inject_emulated_page_fault);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_inject_emulated_page_fault);

 void kvm_inject_nmi(struct kvm_vcpu *vcpu)
 {
@@ -1000,7 +1000,7 @@ void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code)
 {
 	kvm_multiple_exception(vcpu, nr, true, error_code, false, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_queue_exception_e);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_queue_exception_e);

 /*
  * Checks if cpl <= required_cpl; if true, return true.  Otherwise queue
@@ -1022,7 +1022,7 @@ bool kvm_require_dr(struct kvm_vcpu *vcpu, int dr)
 	kvm_queue_exception(vcpu, UD_VECTOR);
 	return false;
 }
-EXPORT_SYMBOL_GPL(kvm_require_dr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_require_dr);

 static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
 {
@@ -1077,7 +1077,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)

 	return 1;
 }
-EXPORT_SYMBOL_GPL(load_pdptrs);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(load_pdptrs);

 static bool kvm_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
@@ -1130,7 +1130,7 @@ void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned lon
 	if ((cr0 ^ old_cr0) & KVM_MMU_CR0_ROLE_BITS)
 		kvm_mmu_reset_context(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_post_set_cr0);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_post_set_cr0);

 int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
@@ -1171,13 +1171,13 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr0);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_cr0);

 void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
 {
 	(void)kvm_set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~0x0eul) | (msw & 0x0f));
 }
-EXPORT_SYMBOL_GPL(kvm_lmsw);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lmsw);

 void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 {
@@ -1200,7 +1200,7 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 	    kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE)))
 		wrpkru(vcpu->arch.pkru);
 }
-EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_load_guest_xsave_state);

 void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 {
@@ -1226,7 +1226,7 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 	}

 }
-EXPORT_SYMBOL_GPL(kvm_load_host_xsave_state);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_load_host_xsave_state);

 #ifdef CONFIG_X86_64
 static inline u64 kvm_guest_supported_xfd(struct kvm_vcpu *vcpu)
@@ -1291,7 +1291,7 @@ int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu)

 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_xsetbv);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_xsetbv);

 static bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
@@ -1339,7 +1339,7 @@ void kvm_post_set_cr4(struct kvm_vcpu *vcpu, unsigned long old_cr4, unsigned lon
 		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);

 }
-EXPORT_SYMBOL_GPL(kvm_post_set_cr4);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_post_set_cr4);

 int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
@@ -1370,7 +1370,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr4);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_cr4);

 static void kvm_invalidate_pcid(struct kvm_vcpu *vcpu, unsigned long pcid)
 {
@@ -1462,7 +1462,7 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr3);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_cr3);

 int kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8)
 {
@@ -1474,7 +1474,7 @@ int kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8)
 	vcpu->arch.cr8 = cr8;
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr8);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_cr8);

 unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu)
 {
@@ -1483,7 +1483,7 @@ unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu)
 	else
 		return vcpu->arch.cr8;
 }
-EXPORT_SYMBOL_GPL(kvm_get_cr8);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_cr8);

 static void kvm_update_dr0123(struct kvm_vcpu *vcpu)
 {
@@ -1508,7 +1508,7 @@ void kvm_update_dr7(struct kvm_vcpu *vcpu)
 	if (dr7 & DR7_BP_EN_MASK)
 		vcpu->arch.switch_db_regs |= KVM_DEBUGREG_BP_ENABLED;
 }
-EXPORT_SYMBOL_GPL(kvm_update_dr7);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_update_dr7);

 static u64 kvm_dr6_fixed(struct kvm_vcpu *vcpu)
 {
@@ -1549,7 +1549,7 @@ int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_dr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_dr);

 unsigned long kvm_get_dr(struct kvm_vcpu *vcpu, int dr)
 {
@@ -1566,7 +1566,7 @@ unsigned long kvm_get_dr(struct kvm_vcpu *vcpu, int dr)
 		return vcpu->arch.dr7;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_get_dr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_dr);

 int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu)
 {
@@ -1582,7 +1582,7 @@ int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu)
 	kvm_rdx_write(vcpu, data >> 32);
 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_rdpmc);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_rdpmc);

 /*
  * Some IA32_ARCH_CAPABILITIES bits have dependencies on MSRs that KVM
@@ -1721,7 +1721,7 @@ bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)

 	return __kvm_valid_efer(vcpu, efer);
 }
-EXPORT_SYMBOL_GPL(kvm_valid_efer);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_valid_efer);

 static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
@@ -1764,7 +1764,7 @@ void kvm_enable_efer_bits(u64 mask)
 {
 	efer_reserved_bits &= ~mask;
 }
-EXPORT_SYMBOL_GPL(kvm_enable_efer_bits);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_enable_efer_bits);

 bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type)
 {
@@ -1807,7 +1807,7 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type)

 	return allowed;
 }
-EXPORT_SYMBOL_GPL(kvm_msr_allowed);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_msr_allowed);

 /*
  * Write @data into the MSR specified by @index.  Select MSR specific fault
@@ -1944,13 +1944,13 @@ int __kvm_emulate_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data)
 {
 	return kvm_get_msr_ignored_check(vcpu, index, data, false);
 }
-EXPORT_SYMBOL_GPL(__kvm_emulate_msr_read);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_emulate_msr_read);

 int __kvm_emulate_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data)
 {
 	return kvm_set_msr_ignored_check(vcpu, index, data, false);
 }
-EXPORT_SYMBOL_GPL(__kvm_emulate_msr_write);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_emulate_msr_write);

 int kvm_emulate_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data)
 {
@@ -1959,7 +1959,7 @@ int kvm_emulate_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data)

 	return __kvm_emulate_msr_read(vcpu, index, data);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_msr_read);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_msr_read);

 int kvm_emulate_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data)
 {
@@ -1968,7 +1968,7 @@ int kvm_emulate_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data)

 	return __kvm_emulate_msr_write(vcpu, index, data);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_msr_write);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_msr_write);


 static void complete_userspace_rdmsr(struct kvm_vcpu *vcpu)
@@ -2077,7 +2077,7 @@ int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
 	return __kvm_emulate_rdmsr(vcpu, kvm_rcx_read(vcpu), -1,
 				   complete_fast_rdmsr);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_rdmsr);

 int kvm_emulate_rdmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
 {
@@ -2085,7 +2085,7 @@ int kvm_emulate_rdmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)

 	return __kvm_emulate_rdmsr(vcpu, msr, reg, complete_fast_rdmsr_imm);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr_imm);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_rdmsr_imm);

 static int __kvm_emulate_wrmsr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 {
@@ -2113,13 +2113,13 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
 	return __kvm_emulate_wrmsr(vcpu, kvm_rcx_read(vcpu),
 				   kvm_read_edx_eax(vcpu));
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_wrmsr);

 int kvm_emulate_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
 {
 	return __kvm_emulate_wrmsr(vcpu, msr, kvm_register_read(vcpu, reg));
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr_imm);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_wrmsr_imm);

 int kvm_emulate_as_nop(struct kvm_vcpu *vcpu)
 {
@@ -2131,7 +2131,7 @@ int kvm_emulate_invd(struct kvm_vcpu *vcpu)
 	/* Treat an INVD instruction as a NOP and just skip it. */
 	return kvm_emulate_as_nop(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_invd);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_invd);

 fastpath_t handle_fastpath_invd(struct kvm_vcpu *vcpu)
 {
@@ -2140,14 +2140,14 @@ fastpath_t handle_fastpath_invd(struct kvm_vcpu *vcpu)

 	return EXIT_FASTPATH_REENTER_GUEST;
 }
-EXPORT_SYMBOL_GPL(handle_fastpath_invd);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(handle_fastpath_invd);

 int kvm_handle_invalid_op(struct kvm_vcpu *vcpu)
 {
 	kvm_queue_exception(vcpu, UD_VECTOR);
 	return 1;
 }
-EXPORT_SYMBOL_GPL(kvm_handle_invalid_op);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_handle_invalid_op);


 static int kvm_emulate_monitor_mwait(struct kvm_vcpu *vcpu, const char *insn)
@@ -2173,13 +2173,13 @@ int kvm_emulate_mwait(struct kvm_vcpu *vcpu)
 {
 	return kvm_emulate_monitor_mwait(vcpu, "MWAIT");
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_mwait);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_mwait);

 int kvm_emulate_monitor(struct kvm_vcpu *vcpu)
 {
 	return kvm_emulate_monitor_mwait(vcpu, "MONITOR");
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_monitor);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_monitor);

 static inline bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu)
 {
@@ -2217,13 +2217,13 @@ fastpath_t handle_fastpath_wrmsr(struct kvm_vcpu *vcpu)
 	return __handle_fastpath_wrmsr(vcpu, kvm_rcx_read(vcpu),
 				       kvm_read_edx_eax(vcpu));
 }
-EXPORT_SYMBOL_GPL(handle_fastpath_wrmsr);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(handle_fastpath_wrmsr);

 fastpath_t handle_fastpath_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
 {
 	return __handle_fastpath_wrmsr(vcpu, msr, kvm_register_read(vcpu, reg));
 }
-EXPORT_SYMBOL_GPL(handle_fastpath_wrmsr_imm);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(handle_fastpath_wrmsr_imm);

 /*
  * Adapt set_msr() to msr_io()'s calling convention
@@ -2589,7 +2589,7 @@ u64 kvm_read_l1_tsc(struct kvm_vcpu *vcpu, u64 host_tsc)
 	return vcpu->arch.l1_tsc_offset +
 	       kvm_scale_tsc(host_tsc, vcpu->arch.l1_tsc_scaling_ratio);
 }
-EXPORT_SYMBOL_GPL(kvm_read_l1_tsc);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_l1_tsc);

 u64 kvm_calc_nested_tsc_offset(u64 l1_offset, u64 l2_offset, u64 l2_multiplier)
 {
@@ -2604,7 +2604,7 @@ u64 kvm_calc_nested_tsc_offset(u64 l1_offset, u64 l2_offset, u64 l2_multiplier)
 		nested_offset += l2_offset;
 	return nested_offset;
 }
-EXPORT_SYMBOL_GPL(kvm_calc_nested_tsc_offset);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_calc_nested_tsc_offset);

 u64 kvm_calc_nested_tsc_multiplier(u64 l1_multiplier, u64 l2_multiplier)
 {
@@ -2614,7 +2614,7 @@ u64 kvm_calc_nested_tsc_multiplier(u64 l1_multiplier, u64 l2_multiplier)

 	return l1_multiplier;
 }
-EXPORT_SYMBOL_GPL(kvm_calc_nested_tsc_multiplier);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_calc_nested_tsc_multiplier);

 static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
 {
@@ -3692,7 +3692,7 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
 		kvm_vcpu_flush_tlb_guest(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_service_local_tlb_flush_requests);

 static void record_steal_time(struct kvm_vcpu *vcpu)
 {
@@ -4184,7 +4184,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_msr_common);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_msr_common);

 static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host)
 {
@@ -4533,7 +4533,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_get_msr_common);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_msr_common);

 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
@@ -7521,7 +7521,7 @@ gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 	u64 access = (kvm_x86_call(get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_gva_to_gpa_read);

 gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 			       struct x86_exception *exception)
@@ -7532,7 +7532,7 @@ gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 	access |= PFERR_WRITE_MASK;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_write);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_gva_to_gpa_write);

 /* uses this to access any guest's mapped memory without checking CPL */
 gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva,
@@ -7618,7 +7618,7 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
 					  exception);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_virt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest_virt);

 static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
 			     gva_t addr, void *val, unsigned int bytes,
@@ -7690,7 +7690,7 @@ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
 					   PFERR_WRITE_MASK, exception);
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest_virt_system);

 static int kvm_check_emulate_insn(struct kvm_vcpu *vcpu, int emul_type,
 				  void *insn, int insn_len)
@@ -7724,7 +7724,7 @@ int handle_ud(struct kvm_vcpu *vcpu)

 	return kvm_emulate_instruction(vcpu, emul_type);
 }
-EXPORT_SYMBOL_GPL(handle_ud);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(handle_ud);

 static int vcpu_is_mmio_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 			    gpa_t gpa, bool write)
@@ -8203,7 +8203,7 @@ int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
 	kvm_emulate_wbinvd_noskip(vcpu);
 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_wbinvd);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_wbinvd);



@@ -8692,7 +8692,7 @@ void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip)
 		kvm_set_rflags(vcpu, ctxt->eflags);
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_inject_realmode_interrupt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_inject_realmode_interrupt);

 static void prepare_emulation_failure_exit(struct kvm_vcpu *vcpu, u64 *data,
 					   u8 ndata, u8 *insn_bytes, u8 insn_size)
@@ -8757,13 +8757,13 @@ void __kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu, u64 *data,
 {
 	prepare_emulation_failure_exit(vcpu, data, ndata, NULL, 0);
 }
-EXPORT_SYMBOL_GPL(__kvm_prepare_emulation_failure_exit);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_prepare_emulation_failure_exit);

 void kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu)
 {
 	__kvm_prepare_emulation_failure_exit(vcpu, NULL, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_prepare_emulation_failure_exit);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_prepare_emulation_failure_exit);

 void kvm_prepare_event_vectoring_exit(struct kvm_vcpu *vcpu, gpa_t gpa)
 {
@@ -8785,7 +8785,7 @@ void kvm_prepare_event_vectoring_exit(struct kvm_vcpu *vcpu, gpa_t gpa)
 	run->internal.suberror = KVM_INTERNAL_ERROR_DELIVERY_EV;
 	run->internal.ndata = ndata;
 }
-EXPORT_SYMBOL_GPL(kvm_prepare_event_vectoring_exit);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_prepare_event_vectoring_exit);

 static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type)
 {
@@ -8909,7 +8909,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 		r = kvm_vcpu_do_singlestep(vcpu);
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_skip_emulated_instruction);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_skip_emulated_instruction);

 static bool kvm_is_code_breakpoint_inhibited(struct kvm_vcpu *vcpu)
 {
@@ -9040,7 +9040,7 @@ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,

 	return r;
 }
-EXPORT_SYMBOL_GPL(x86_decode_emulated_instruction);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(x86_decode_emulated_instruction);

 int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			    int emulation_type, void *insn, int insn_len)
@@ -9264,14 +9264,14 @@ int kvm_emulate_instruction(struct kvm_vcpu *vcpu, int emulation_type)
 {
 	return x86_emulate_instruction(vcpu, 0, emulation_type, NULL, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_instruction);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_instruction);

 int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
 					void *insn, int insn_len)
 {
 	return x86_emulate_instruction(vcpu, 0, 0, insn, insn_len);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_instruction_from_buffer);

 static int complete_fast_pio_out_port_0x7e(struct kvm_vcpu *vcpu)
 {
@@ -9366,7 +9366,7 @@ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in)
 		ret = kvm_fast_pio_out(vcpu, size, port);
 	return ret && kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_fast_pio);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_fast_pio);

 static int kvmclock_cpu_down_prep(unsigned int cpu)
 {
@@ -9798,7 +9798,7 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 	kmem_cache_destroy(x86_emulator_cache);
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_x86_vendor_init);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_x86_vendor_init);

 void kvm_x86_vendor_exit(void)
 {
@@ -9832,7 +9832,7 @@ void kvm_x86_vendor_exit(void)
 	kvm_x86_ops.enable_virtualization_cpu = NULL;
 	mutex_unlock(&vendor_module_lock);
 }
-EXPORT_SYMBOL_GPL(kvm_x86_vendor_exit);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_x86_vendor_exit);

 #ifdef CONFIG_X86_64
 static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
@@ -9896,7 +9896,7 @@ bool kvm_apicv_activated(struct kvm *kvm)
 {
 	return (READ_ONCE(kvm->arch.apicv_inhibit_reasons) == 0);
 }
-EXPORT_SYMBOL_GPL(kvm_apicv_activated);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apicv_activated);

 bool kvm_vcpu_apicv_activated(struct kvm_vcpu *vcpu)
 {
@@ -9906,7 +9906,7 @@ bool kvm_vcpu_apicv_activated(struct kvm_vcpu *vcpu)

 	return (vm_reasons | vcpu_reasons) == 0;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_apicv_activated);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_apicv_activated);

 static void set_or_clear_apicv_inhibit(unsigned long *inhibits,
 				       enum kvm_apicv_inhibit reason, bool set)
@@ -10082,7 +10082,7 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl,
 	vcpu->run->hypercall.ret = ret;
 	return 1;
 }
-EXPORT_SYMBOL_GPL(____kvm_emulate_hypercall);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(____kvm_emulate_hypercall);

 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
@@ -10095,7 +10095,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	return __kvm_emulate_hypercall(vcpu, kvm_x86_call(get_cpl)(vcpu),
 				       complete_hypercall_exit);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_hypercall);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_hypercall);

 static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
 {
@@ -10538,7 +10538,7 @@ void __kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 	preempt_enable();
 	up_read(&vcpu->kvm->arch.apicv_update_lock);
 }
-EXPORT_SYMBOL_GPL(__kvm_vcpu_update_apicv);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_vcpu_update_apicv);

 static void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 {
@@ -10614,7 +10614,7 @@ void kvm_set_or_clear_apicv_inhibit(struct kvm *kvm,
 	__kvm_set_or_clear_apicv_inhibit(kvm, reason, set);
 	up_write(&kvm->arch.apicv_update_lock);
 }
-EXPORT_SYMBOL_GPL(kvm_set_or_clear_apicv_inhibit);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_or_clear_apicv_inhibit);

 static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 {
@@ -11159,7 +11159,7 @@ bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)

 	return false;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_has_events);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_has_events);

 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 {
@@ -11312,7 +11312,7 @@ int kvm_emulate_halt_noskip(struct kvm_vcpu *vcpu)
 {
 	return __kvm_emulate_halt(vcpu, KVM_MP_STATE_HALTED, KVM_EXIT_HLT);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_halt_noskip);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_halt_noskip);

 int kvm_emulate_halt(struct kvm_vcpu *vcpu)
 {
@@ -11323,7 +11323,7 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
 	 */
 	return kvm_emulate_halt_noskip(vcpu) && ret;
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_halt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_halt);

 fastpath_t handle_fastpath_hlt(struct kvm_vcpu *vcpu)
 {
@@ -11335,7 +11335,7 @@ fastpath_t handle_fastpath_hlt(struct kvm_vcpu *vcpu)

 	return EXIT_FASTPATH_EXIT_HANDLED;
 }
-EXPORT_SYMBOL_GPL(handle_fastpath_hlt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(handle_fastpath_hlt);

 int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu)
 {
@@ -11344,7 +11344,7 @@ int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu)
 	return __kvm_emulate_halt(vcpu, KVM_MP_STATE_AP_RESET_HOLD,
 				  KVM_EXIT_AP_RESET_HOLD) && ret;
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_ap_reset_hold);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_ap_reset_hold);

 bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
 {
@@ -11876,7 +11876,7 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int idt_index,
 	kvm_set_rflags(vcpu, ctxt->eflags);
 	return 1;
 }
-EXPORT_SYMBOL_GPL(kvm_task_switch);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_task_switch);

 static bool kvm_is_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 {
@@ -12576,7 +12576,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	if (init_event)
 		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_reset);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_reset);

 void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 {
@@ -12588,7 +12588,7 @@ void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 	kvm_set_segment(vcpu, &cs, VCPU_SREG_CS);
 	kvm_rip_write(vcpu, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_deliver_sipi_vector);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_deliver_sipi_vector);

 void kvm_arch_enable_virtualization(void)
 {
@@ -12706,7 +12706,7 @@ bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
 {
 	return vcpu->kvm->arch.bsp_vcpu_id == vcpu->vcpu_id;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_is_reset_bsp);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_is_reset_bsp);

 bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
 {
@@ -12870,7 +12870,7 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,

 	return (void __user *)hva;
 }
-EXPORT_SYMBOL_GPL(__x86_set_memory_region);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__x86_set_memory_region);

 void kvm_arch_pre_destroy_vm(struct kvm *kvm)
 {
@@ -13278,13 +13278,13 @@ unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu)
 	return (u32)(get_segment_base(vcpu, VCPU_SREG_CS) +
 		     kvm_rip_read(vcpu));
 }
-EXPORT_SYMBOL_GPL(kvm_get_linear_rip);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_linear_rip);

 bool kvm_is_linear_rip(struct kvm_vcpu *vcpu, unsigned long linear_rip)
 {
 	return kvm_get_linear_rip(vcpu) == linear_rip;
 }
-EXPORT_SYMBOL_GPL(kvm_is_linear_rip);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_is_linear_rip);

 unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu)
 {
@@ -13295,7 +13295,7 @@ unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu)
 		rflags &= ~X86_EFLAGS_TF;
 	return rflags;
 }
-EXPORT_SYMBOL_GPL(kvm_get_rflags);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_rflags);

 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 {
@@ -13310,7 +13310,7 @@ void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	__kvm_set_rflags(vcpu, rflags);
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_set_rflags);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_rflags);

 static inline u32 kvm_async_pf_hash_fn(gfn_t gfn)
 {
@@ -13553,7 +13553,7 @@ bool kvm_arch_has_noncoherent_dma(struct kvm *kvm)
 {
 	return atomic_read(&kvm->arch.noncoherent_dma_count);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_has_noncoherent_dma);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_arch_has_noncoherent_dma);

 bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
 {
@@ -13609,7 +13609,7 @@ int kvm_spec_ctrl_test_value(u64 value)

 	return ret;
 }
-EXPORT_SYMBOL_GPL(kvm_spec_ctrl_test_value);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_spec_ctrl_test_value);

 void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_code)
 {
@@ -13634,7 +13634,7 @@ void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_c
 	}
 	vcpu->arch.walk_mmu->inject_page_fault(vcpu, &fault);
 }
-EXPORT_SYMBOL_GPL(kvm_fixup_and_inject_pf_error);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_fixup_and_inject_pf_error);

 /*
  * Handles kvm_read/write_guest_virt*() result and either injects #PF or returns
@@ -13663,7 +13663,7 @@ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_handle_memory_failure);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_handle_memory_failure);

 int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
 {
@@ -13727,7 +13727,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
 		return 1;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_handle_invpcid);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_handle_invpcid);

 static int complete_sev_es_emulated_mmio(struct kvm_vcpu *vcpu)
 {
@@ -13812,7 +13812,7 @@ int kvm_sev_es_mmio_write(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned int bytes,

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_sev_es_mmio_write);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_sev_es_mmio_write);

 int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned int bytes,
 			 void *data)
@@ -13850,7 +13850,7 @@ int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned int bytes,

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_sev_es_mmio_read);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_sev_es_mmio_read);

 static void advance_sev_es_emulated_pio(struct kvm_vcpu *vcpu, unsigned count, int size)
 {
@@ -13938,7 +13938,7 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 	return in ? kvm_sev_es_ins(vcpu, size, port)
 		  : kvm_sev_es_outs(vcpu, size, port);
 }
-EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_sev_es_string_io);

 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
-- 
2.51.0.470.ga7dc726c21-goog