Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
---
target/i386/kvm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index e4b4f5756a34..b57f873ec9e8 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -1714,7 +1714,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
 
         env->nested_state->size = max_nested_state_len;
 
-        if (IS_INTEL_CPU(env)) {
+        if (cpu_has_vmx(env)) {
             struct kvm_vmx_nested_state_hdr *vmx_hdr =
                 &env->nested_state->hdr.vmx;
--
2.20.1
On 7/5/2019 2:06 PM, Liran Alon wrote:
> [...]
Reviewed-by: Maran Wilson <maran.wilson@oracle.com>
Thanks,
-Maran
On 05/07/19 23:06, Liran Alon wrote:
> - if (IS_INTEL_CPU(env)) {
> + if (cpu_has_vmx(env)) {
> struct kvm_vmx_nested_state_hdr *vmx_hdr =
> &env->nested_state->hdr.vmx;
>
I am not sure this is enough, because kvm_get_nested_state and kvm_put_nested_state would run anyway later. If we want to cull them completely for a non-VMX virtual machine, I'd do something like this:
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 5035092..73ab102 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -1748,14 +1748,13 @@ int kvm_arch_init_vcpu(CPUState *cs)
     max_nested_state_len = kvm_max_nested_state_length();
     if (max_nested_state_len > 0) {
         assert(max_nested_state_len >= offsetof(struct kvm_nested_state, data));
-        env->nested_state = g_malloc0(max_nested_state_len);
-        env->nested_state->size = max_nested_state_len;
-
-        if (IS_INTEL_CPU(env)) {
-            struct kvm_vmx_nested_state_hdr *vmx_hdr =
-                &env->nested_state->hdr.vmx;
+        if (cpu_has_vmx(env)) {
+            struct kvm_vmx_nested_state_hdr *vmx_hdr;
+
+            env->nested_state = g_malloc0(max_nested_state_len);
+            env->nested_state->size = max_nested_state_len;
+            vmx_hdr = &env->nested_state->hdr.vmx;
             env->nested_state->format = KVM_STATE_NESTED_FORMAT_VMX;
             vmx_hdr->vmxon_pa = -1ull;
             vmx_hdr->vmcs12_pa = -1ull;
@@ -3682,7 +3681,7 @@ static int kvm_put_nested_state(X86CPU *cpu)
     CPUX86State *env = &cpu->env;
     int max_nested_state_len = kvm_max_nested_state_length();
 
-    if (max_nested_state_len <= 0) {
+    if (!env->nested_state) {
         return 0;
     }
@@ -3696,7 +3695,7 @@ static int kvm_get_nested_state(X86CPU *cpu)
     int max_nested_state_len = kvm_max_nested_state_length();
     int ret;
 
-    if (max_nested_state_len <= 0) {
+    if (!env->nested_state) {
         return 0;
     }
What do you think? (As a side effect, this completely disables
KVM_GET/SET_NESTED_STATE on SVM, which I think is safer since it
will have to save at least the NPT root and the paging mode. So we
could remove vmstate_svm_nested_state as well).
Paolo
> On 11 Jul 2019, at 16:45, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> [...]
I like your suggestion better than my commit. It is indeed more elegant and correct. :)
The code change above looks good to me, as nested_state_needed() will return false anyway when env->nested_state is NULL.
Will you submit a new patch or should I?
-Liran
On 11/07/19 16:36, Liran Alon wrote:
> Will you submit a new patch or should I?

I've just sent it; I was waiting for you to comment on the idea. I forgot to CC you, though.

Paolo