From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:30 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-2-seanjc@google.com>
Subject: [PATCH v4 01/21] KVM: selftests: Make __vm_get_page_table_entry() static
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Yosry Ahmed

From: Yosry Ahmed

The function is only used in processor.c; drop the declaration in
processor.h and make it static.

No functional change intended.
Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86/processor.h | 2 --
 tools/testing/selftests/kvm/lib/x86/processor.c     | 4 ++--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 57d62a425109..c00c0fbe62cd 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1367,8 +1367,6 @@ static inline bool kvm_is_ignore_msrs(void)
 	return get_kvm_param_bool("ignore_msrs");
 }
 
-uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
-				    int *level);
 uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr);
 
 uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 36104d27f3d9..c14bf2b5f28f 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -306,8 +306,8 @@ static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
 	return *level == current_level;
 }
 
-uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
-				    int *level)
+static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
+					   int *level)
 {
 	int va_width = 12 + (vm->pgtable_levels) * 9;
 	uint64_t *pte = &vm->pgd;
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:31 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-3-seanjc@google.com>
Subject: [PATCH v4 02/21] KVM: selftests: Stop passing a memslot to
 nested_map_memslot()
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Yosry Ahmed

From: Yosry Ahmed

On x86, KVM selftests use memslot 0 for all the default regions used by
the test infrastructure. This is an implementation detail.
nested_map_memslot() is currently used to map the default regions by
explicitly passing slot 0, which leaks the library implementation into
the caller.

Rename the function to the (admittedly verbose)
nested_identity_map_default_memslots() to reflect what it actually does,
and add an assertion that only memslot 0 is being used so that the
implementation does not change out from under us.

No functional change intended.
Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86/vmx.h        |  4 ++--
 tools/testing/selftests/kvm/lib/x86/vmx.c            | 12 ++++++++----
 tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c |  2 +-
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index 96e2b4c630a9..91916b8aa94b 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -563,8 +563,8 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
 		   uint64_t nested_paddr, uint64_t paddr);
 void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
 		uint64_t nested_paddr, uint64_t paddr, uint64_t size);
-void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
-			uint32_t memslot);
+void nested_identity_map_default_memslots(struct vmx_pages *vmx,
+					  struct kvm_vm *vm);
 void nested_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
 			    uint64_t addr, uint64_t size);
 bool kvm_cpu_has_ept(void);
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index 29b082a58daa..eec33ec63811 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -494,12 +494,16 @@ void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
 /* Prepare an identity extended page table that maps all the
  * physical pages in VM.
  */
-void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
-			uint32_t memslot)
+void nested_identity_map_default_memslots(struct vmx_pages *vmx,
+					  struct kvm_vm *vm)
 {
+	uint32_t s, memslot = 0;
 	sparsebit_idx_t i, last;
-	struct userspace_mem_region *region =
-		memslot2region(vm, memslot);
+	struct userspace_mem_region *region = memslot2region(vm, memslot);
+
+	/* Only memslot 0 is mapped here, ensure it's the only one being used */
+	for (s = 0; s < NR_MEM_REGIONS; s++)
+		TEST_ASSERT_EQ(vm->memslots[s], 0);
 
 	i = (region->region.guest_phys_addr >> vm->page_shift) - 1;
 	last = i + (region->region.memory_size >> vm->page_shift);
diff --git a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
index 98cb6bdab3e6..aab7333aaef0 100644
--- a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
@@ -121,7 +121,7 @@ static void test_vmx_dirty_log(bool enable_ept)
 	 */
 	if (enable_ept) {
 		prepare_eptp(vmx, vm);
-		nested_map_memslot(vmx, vm, 0);
+		nested_identity_map_default_memslots(vmx, vm);
 		nested_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, PAGE_SIZE);
 		nested_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, PAGE_SIZE);
 	}
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:32 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-4-seanjc@google.com>
Subject: [PATCH v4 03/21] KVM: selftests: Rename nested TDP mapping functions
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc:
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Yosry Ahmed

From: Yosry Ahmed

Rename the functions from nested_* to tdp_* to make their purpose
clearer.

No functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86/vmx.h | 16 +++---
 .../testing/selftests/kvm/lib/x86/memstress.c |  4 +-
 tools/testing/selftests/kvm/lib/x86/vmx.c     | 50 +++++++++----------
 .../selftests/kvm/x86/vmx_dirty_log_test.c    |  6 +--
 4 files changed, 37 insertions(+), 39 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index 91916b8aa94b..04b8231d032a 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -559,14 +559,14 @@ bool load_vmcs(struct vmx_pages *vmx);
 
 bool ept_1g_pages_supported(void);
 
-void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-		   uint64_t nested_paddr, uint64_t paddr);
-void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-		uint64_t nested_paddr, uint64_t paddr, uint64_t size);
-void nested_identity_map_default_memslots(struct vmx_pages *vmx,
-					  struct kvm_vm *vm);
-void nested_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
-			    uint64_t addr, uint64_t size);
+void tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr,
+		uint64_t paddr);
+void tdp_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr,
+	     uint64_t paddr, uint64_t size);
+void tdp_identity_map_default_memslots(struct vmx_pages *vmx,
+				       struct kvm_vm *vm);
+void tdp_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
+			 uint64_t addr, uint64_t size);
 bool kvm_cpu_has_ept(void);
 void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm);
 void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/x86/memstress.c b/tools/testing/selftests/kvm/lib/x86/memstress.c
index 0b1f288ad556..1928b00bde51 100644
--- a/tools/testing/selftests/kvm/lib/x86/memstress.c
+++ b/tools/testing/selftests/kvm/lib/x86/memstress.c
@@ -70,11 +70,11 @@ void memstress_setup_ept(struct vmx_pages *vmx, struct kvm_vm *vm)
 	 * KVM can shadow the EPT12 with the maximum huge page size supported
 	 * by the backing source.
 	 */
-	nested_identity_map_1g(vmx, vm, 0, 0x100000000ULL);
+	tdp_identity_map_1g(vmx, vm, 0, 0x100000000ULL);
 
 	start = align_down(memstress_args.gpa, PG_SIZE_1G);
 	end = align_up(memstress_args.gpa + memstress_args.size, PG_SIZE_1G);
-	nested_identity_map_1g(vmx, vm, start, end - start);
+	tdp_identity_map_1g(vmx, vm, start, end - start);
 }
 
 void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[])
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index eec33ec63811..1954ccdfc353 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -362,12 +362,12 @@ void prepare_vmcs(struct vmx_pages *vmx, void *guest_rip, void *guest_rsp)
 	init_vmcs_guest_state(guest_rip, guest_rsp);
 }
 
-static void nested_create_pte(struct kvm_vm *vm,
-			      struct eptPageTableEntry *pte,
-			      uint64_t nested_paddr,
-			      uint64_t paddr,
-			      int current_level,
-			      int target_level)
+static void tdp_create_pte(struct kvm_vm *vm,
+			   struct eptPageTableEntry *pte,
+			   uint64_t nested_paddr,
+			   uint64_t paddr,
+			   int current_level,
+			   int target_level)
 {
 	if (!pte->readable) {
 		pte->writable = true;
@@ -394,8 +394,8 @@ static void nested_create_pte(struct kvm_vm *vm,
 }
 
 
-void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-		     uint64_t nested_paddr, uint64_t paddr, int target_level)
+void __tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
+		  uint64_t nested_paddr, uint64_t paddr, int target_level)
 {
 	const uint64_t page_size = PG_LEVEL_SIZE(target_level);
 	struct eptPageTableEntry *pt = vmx->eptp_hva, *pte;
@@ -428,7 +428,7 @@ void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
 		index = (nested_paddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu;
 		pte = &pt[index];
 
-		nested_create_pte(vm, pte, nested_paddr, paddr, level, target_level);
+		tdp_create_pte(vm, pte, nested_paddr, paddr, level, target_level);
 
 		if (pte->page_size)
 			break;
@@ -445,10 +445,10 @@ void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
 
 }
 
-void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-		   uint64_t nested_paddr, uint64_t paddr)
+void tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
+		uint64_t nested_paddr, uint64_t paddr)
 {
-	__nested_pg_map(vmx, vm, nested_paddr, paddr, PG_LEVEL_4K);
+	__tdp_pg_map(vmx, vm, nested_paddr, paddr, PG_LEVEL_4K);
 }
 
 /*
@@ -468,8 +468,8 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
  * Within the VM given by vm, creates a nested guest translation for the
  * page range starting at nested_paddr to the page range starting at paddr.
  */
-void __nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-		  uint64_t nested_paddr, uint64_t paddr, uint64_t size,
+void __tdp_map(struct vmx_pages *vmx, struct kvm_vm *vm,
+	       uint64_t nested_paddr, uint64_t paddr, uint64_t size,
 		  int level)
 {
 	size_t page_size = PG_LEVEL_SIZE(level);
@@ -479,23 +479,23 @@ void __nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
 	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
 
 	while (npages--) {
-		__nested_pg_map(vmx, vm, nested_paddr, paddr, level);
+		__tdp_pg_map(vmx, vm, nested_paddr, paddr, level);
 		nested_paddr += page_size;
 		paddr += page_size;
 	}
 }
 
-void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-		uint64_t nested_paddr, uint64_t paddr, uint64_t size)
+void tdp_map(struct vmx_pages *vmx, struct kvm_vm *vm,
+	     uint64_t nested_paddr, uint64_t paddr, uint64_t size)
 {
-	__nested_map(vmx, vm, nested_paddr, paddr, size, PG_LEVEL_4K);
+	__tdp_map(vmx, vm, nested_paddr, paddr, size, PG_LEVEL_4K);
 }
 
 /* Prepare an identity extended page table that maps all the
  * physical pages in VM.
  */
-void nested_identity_map_default_memslots(struct vmx_pages *vmx,
-					  struct kvm_vm *vm)
+void tdp_identity_map_default_memslots(struct vmx_pages *vmx,
+				       struct kvm_vm *vm)
 {
 	uint32_t s, memslot = 0;
 	sparsebit_idx_t i, last;
@@ -512,18 +512,16 @@ void nested_identity_map_default_memslots(struct vmx_pages *vmx,
 		if (i > last)
 			break;
 
-		nested_map(vmx, vm,
-			   (uint64_t)i << vm->page_shift,
-			   (uint64_t)i << vm->page_shift,
-			   1 << vm->page_shift);
+		tdp_map(vmx, vm, (uint64_t)i << vm->page_shift,
+			(uint64_t)i << vm->page_shift, 1 << vm->page_shift);
 	}
 }
 
 /* Identity map a region with 1GiB Pages.
  */
-void nested_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
+void tdp_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
 			 uint64_t addr, uint64_t size)
 {
-	__nested_map(vmx, vm, addr, addr, size, PG_LEVEL_1G);
+	__tdp_map(vmx, vm, addr, addr, size, PG_LEVEL_1G);
 }
 
 bool kvm_cpu_has_ept(void)
diff --git a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
index aab7333aaef0..e7d0c08ba29d 100644
--- a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
@@ -121,9 +121,9 @@ static void test_vmx_dirty_log(bool enable_ept)
 	 */
 	if (enable_ept) {
 		prepare_eptp(vmx, vm);
-		nested_identity_map_default_memslots(vmx, vm);
-		nested_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, PAGE_SIZE);
-		nested_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, PAGE_SIZE);
+		tdp_identity_map_default_memslots(vmx, vm);
+		tdp_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, PAGE_SIZE);
+		tdp_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, PAGE_SIZE);
 	}
 
 	bmap = bitmap_zalloc(TEST_MEM_PAGES);
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:33 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-5-seanjc@google.com>
Subject: [PATCH v4 04/21] KVM: selftests: Kill eptPageTablePointer
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
  linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Yosry Ahmed

Replace the struct overlay with explicit bitmasks, which is clearer and
less error-prone.  See commit f18b4aebe107 ("kvm: selftests: do not use
bitfields larger than 32-bits for PTEs") for an example of why bitfields
are not preferable.

Remove the unused PAGE_SHIFT_4K definition while at it.

No functional change intended.

Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/lib/x86/vmx.c | 35 +++++++++++------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index 1954ccdfc353..85043bb1ec4d 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -10,10 +10,16 @@
 #include "processor.h"
 #include "vmx.h"
 
-#define PAGE_SHIFT_4K 12
-
 #define KVM_EPT_PAGE_TABLE_MIN_PADDR 0x1c0000
 
+#define EPTP_MT_SHIFT		0	/* EPTP memtype bits 2:0 */
+#define EPTP_PWL_SHIFT		3	/* EPTP page walk length bits 5:3 */
+#define EPTP_AD_ENABLED_SHIFT	6	/* EPTP AD enabled bit 6 */
+
+#define EPTP_WB			(X86_MEMTYPE_WB << EPTP_MT_SHIFT)
+#define EPTP_PWL_4		(3ULL << EPTP_PWL_SHIFT)	/* PWL is (levels - 1) */
+#define EPTP_AD_ENABLED		(1ULL << EPTP_AD_ENABLED_SHIFT)
+
 bool enable_evmcs;
 
 struct hv_enlightened_vmcs *current_evmcs;
@@ -34,14 +40,6 @@ struct eptPageTableEntry {
 	uint64_t suppress_ve:1;
 };
 
-struct eptPageTablePointer {
-	uint64_t memory_type:3;
-	uint64_t page_walk_length:3;
-	uint64_t ad_enabled:1;
-	uint64_t reserved_11_07:5;
-	uint64_t address:40;
-	uint64_t reserved_63_52:12;
-};
 int vcpu_enable_evmcs(struct kvm_vcpu *vcpu)
 {
 	uint16_t evmcs_ver;
@@ -196,16 +194,15 @@ static inline void init_vmcs_control_fields(struct vmx_pages *vmx)
 	vmwrite(PIN_BASED_VM_EXEC_CONTROL, rdmsr(MSR_IA32_VMX_TRUE_PINBASED_CTLS));
 
 	if (vmx->eptp_gpa) {
-		uint64_t ept_paddr;
-		struct eptPageTablePointer eptp = {
-			.memory_type = X86_MEMTYPE_WB,
-			.page_walk_length = 3, /* + 1 */
-			.ad_enabled = ept_vpid_cap_supported(VMX_EPT_VPID_CAP_AD_BITS),
-			.address = vmx->eptp_gpa >> PAGE_SHIFT_4K,
-		};
+		uint64_t eptp = vmx->eptp_gpa | EPTP_WB | EPTP_PWL_4;
 
-		memcpy(&ept_paddr, &eptp, sizeof(ept_paddr));
-		vmwrite(EPT_POINTER, ept_paddr);
+		TEST_ASSERT((vmx->eptp_gpa & ~PHYSICAL_PAGE_MASK) == 0,
+			    "Illegal bits set in vmx->eptp_gpa");
+
+		if (ept_vpid_cap_supported(VMX_EPT_VPID_CAP_AD_BITS))
+			eptp |= EPTP_AD_ENABLED;
+
+		vmwrite(EPT_POINTER, eptp);
 		sec_exec_ctl |= SECONDARY_EXEC_ENABLE_EPT;
 	}
 
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:34 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20251230230150.4150236-1-seanjc@google.com>
X-Mailer: git-send-email 2.52.0.351.gbe84eed79e-goog
Message-ID: <20251230230150.4150236-6-seanjc@google.com>
Subject: [PATCH v4 05/21] KVM: selftests: Stop setting A/D bits when creating EPT PTEs
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Yosry Ahmed

Stop setting Accessed/Dirty bits when creating EPT entries for L2 so that
the stage-1 and stage-2 (a.k.a. TDP) page table APIs can use common code
without bleeding the EPT hack into the common APIs.
While commit 094444204570 ("selftests: kvm: add test for dirty logging
inside nested guests") is _very_ light on details, the most likely
explanation is that vmx_dirty_log_test was attempting to avoid taking an
EPT Violation on the first _write_ from L2.

  static void l2_guest_code(u64 *a, u64 *b)
  {
	READ_ONCE(*a);
	WRITE_ONCE(*a, 1);	<===
	GUEST_SYNC(true);
	...
  }

When handling read faults in the shadow MMU, KVM opportunistically creates
a writable SPTE if the mapping can be writable *and* the gPTE is dirty (or
doesn't support the Dirty bit), i.e. if KVM doesn't need to intercept
writes in order to emulate Dirty-bit updates.  By setting A/D bits in the
test's EPT entries, the above READ+WRITE will fault only on the read, and
in theory expose the bug fixed by KVM commit 1f4e5fc83a42 ("KVM: x86: fix
nested guest live migration with PML").  If the Dirty bit is NOT set, the
test will get a false pass; though again, in theory.

However, the test is flawed (and always was, at least in the versions
posted publicly), as KVM (correctly) marks the corresponding L1 GFN as
dirty (in the dirty bitmap) when creating the writable SPTE.  I.e. without
a check on the dirty bitmap after the READ_ONCE(), the check after the
first WRITE_ONCE() will get a false pass due to the dirty bitmap/log
having been updated by the read fault, not by PML.

Furthermore, the subsequent behavior in the test's l2_guest_code()
effectively hides the flawed test behavior, as the straight writes to a
new L2 GPA also trigger the KVM bug, and so the test will still detect the
failure due to lack of isolation between the two testcases (Read=>Write
vs. Write=>Write).
  WRITE_ONCE(*b, 1);
  GUEST_SYNC(true);
  WRITE_ONCE(*b, 1);
  GUEST_SYNC(true);
  GUEST_SYNC(false);

Punt on fixing vmx_dirty_log_test for the moment as it will be easier to
properly fix the test once the TDP code uses the common MMU APIs, at which
point it will be trivially easy for the test to retrieve the EPT PTE and
set the Dirty bit as needed.

Signed-off-by: Yosry Ahmed
[sean: rewrite changelog to explain the situation]
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/lib/x86/vmx.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index 85043bb1ec4d..a3e2eae981da 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -432,14 +432,6 @@ void __tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
 
 		pt = addr_gpa2hva(vm, pte->address * vm->page_size);
 	}
-
-	/*
-	 * For now mark these as accessed and dirty because the only
-	 * testcase we have needs that. Can be reconsidered later.
-	 */
-	pte->accessed = true;
-	pte->dirty = true;
-
 }
 
 void tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:35 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20251230230150.4150236-1-seanjc@google.com>
X-Mailer: git-send-email 2.52.0.351.gbe84eed79e-goog
Message-ID: <20251230230150.4150236-7-seanjc@google.com>
Subject: [PATCH v4 06/21] KVM: selftests: Add "struct kvm_mmu" to track a given MMU instance
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Add a "struct kvm_mmu" to track a given MMU instance, e.g. a VM's stage-1
MMU versus a VM's stage-2 MMU, so that x86 can share MMU functionality for
both stage-1 and stage-2 MMUs, without creating the potential for subtle
bugs, e.g. due to consuming vm->pgtable_levels when operating on a
stage-2 MMU.

Encapsulate the existing de facto MMU in "struct kvm_vm", e.g. instead of
burying the MMU details in "struct kvm_vm_arch", to avoid more #ifdefs in
____vm_create(), and in the hopes that other architectures can utilize the
formalized MMU structure if/when they too support stage-2 page tables.

No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Yosry Ahmed
---
 .../testing/selftests/kvm/include/kvm_util.h  | 11 ++++--
 .../selftests/kvm/lib/arm64/processor.c       | 38 +++++++++----------
 tools/testing/selftests/kvm/lib/kvm_util.c    | 28 +++++++-------
 .../selftests/kvm/lib/loongarch/processor.c   | 28 +++++++-------
 .../selftests/kvm/lib/riscv/processor.c       | 31 +++++++--------
 .../selftests/kvm/lib/s390/processor.c        | 16 ++++----
 .../testing/selftests/kvm/lib/x86/processor.c | 28 +++++++-------
 .../kvm/x86/vmx_nested_la57_state_test.c      |  2 +-
 8 files changed, 94 insertions(+), 88 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 81f4355ff28a..39558c05c0bf 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -88,12 +88,17 @@ enum kvm_mem_region_type {
 	NR_MEM_REGIONS,
 };
 
+struct kvm_mmu {
+	bool pgd_created;
+	uint64_t pgd;
+	int pgtable_levels;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
 	int kvm_fd;
 	int fd;
-	unsigned int pgtable_levels;
 	unsigned int page_size;
 	unsigned int page_shift;
 	unsigned int pa_bits;
@@ -104,13 +109,13 @@ struct kvm_vm {
 	struct sparsebit *vpages_valid;
 	struct sparsebit *vpages_mapped;
 	bool has_irqchip;
-	bool pgd_created;
 	vm_paddr_t ucall_mmio_addr;
-	vm_paddr_t pgd;
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
 	uint64_t gpa_tag_mask;
 
+	struct kvm_mmu mmu;
+
 	struct kvm_vm_arch arch;
 
 	struct kvm_binary_stats stats;
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index d46e4b13b92c..c40f59d48311 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -28,7 +28,7 @@ static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
 
 static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
 {
-	unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
+	unsigned int shift = (vm->mmu.pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
 	uint64_t mask = (1UL << (vm->va_bits - shift)) - 1;
 
 	return (gva >> shift) & mask;
@@ -39,7 +39,7 @@ static uint64_t pud_index(struct kvm_vm *vm, vm_vaddr_t gva)
 	unsigned int shift = 2 * (vm->page_shift - 3) + vm->page_shift;
 	uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
 
-	TEST_ASSERT(vm->pgtable_levels == 4,
+	TEST_ASSERT(vm->mmu.pgtable_levels == 4,
 		    "Mode %d does not have 4 page table levels", vm->mode);
 
 	return (gva >> shift) & mask;
@@ -50,7 +50,7 @@ static uint64_t pmd_index(struct kvm_vm *vm, vm_vaddr_t gva)
 	unsigned int shift = (vm->page_shift - 3) + vm->page_shift;
 	uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
 
-	TEST_ASSERT(vm->pgtable_levels >= 3,
+	TEST_ASSERT(vm->mmu.pgtable_levels >= 3,
 		    "Mode %d does not have >= 3 page table levels", vm->mode);
 
 	return (gva >> shift) & mask;
@@ -104,7 +104,7 @@ static uint64_t pte_addr(struct kvm_vm *vm, uint64_t pte)
 
 static uint64_t ptrs_per_pgd(struct kvm_vm *vm)
 {
-	unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
+	unsigned int shift = (vm->mmu.pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
 	return 1 << (vm->va_bits - shift);
 }
 
@@ -117,13 +117,13 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 {
 	size_t nr_pages = page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size;
 
-	if (vm->pgd_created)
+	if (vm->mmu.pgd_created)
 		return;
 
-	vm->pgd = vm_phy_pages_alloc(vm, nr_pages,
-				     KVM_GUEST_PAGE_TABLE_MIN_PADDR,
-				     vm->memslots[MEM_REGION_PT]);
-	vm->pgd_created = true;
+	vm->mmu.pgd = vm_phy_pages_alloc(vm, nr_pages,
+					 KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+					 vm->memslots[MEM_REGION_PT]);
+	vm->mmu.pgd_created = true;
 }
 
 static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
@@ -147,12 +147,12 @@ static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
 
-	ptep = addr_gpa2hva(vm, vm->pgd) + pgd_index(vm, vaddr) * 8;
+	ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pgd_index(vm, vaddr) * 8;
 	if (!*ptep)
 		*ptep = addr_pte(vm, vm_alloc_page_table(vm),
 				 PGD_TYPE_TABLE | PTE_VALID);
 
-	switch (vm->pgtable_levels) {
+	switch (vm->mmu.pgtable_levels) {
 	case 4:
 		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, vaddr) * 8;
 		if (!*ptep)
@@ -190,16 +190,16 @@ uint64_t *virt_get_pte_hva_at_level(struct kvm_vm *vm, vm_vaddr_t gva, int level
 {
 	uint64_t *ptep;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		goto unmapped_gva;
 
-	ptep = addr_gpa2hva(vm, vm->pgd) + pgd_index(vm, gva) * 8;
+	ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pgd_index(vm, gva) * 8;
 	if (!ptep)
 		goto unmapped_gva;
 	if (level == 0)
 		return ptep;
 
-	switch (vm->pgtable_levels) {
+	switch (vm->mmu.pgtable_levels) {
 	case 4:
 		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, gva) * 8;
 		if (!ptep)
@@ -263,13 +263,13 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t p
 
 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
-	int level = 4 - (vm->pgtable_levels - 1);
+	int level = 4 - (vm->mmu.pgtable_levels - 1);
 	uint64_t pgd, *ptep;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		return;
 
-	for (pgd = vm->pgd; pgd < vm->pgd + ptrs_per_pgd(vm) * 8; pgd += 8) {
+	for (pgd = vm->mmu.pgd; pgd < vm->mmu.pgd + ptrs_per_pgd(vm) * 8; pgd += 8) {
 		ptep = addr_gpa2hva(vm, pgd);
 		if (!*ptep)
 			continue;
@@ -350,7 +350,7 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
 		TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
 	}
 
-	ttbr0_el1 = vm->pgd & GENMASK(47, vm->page_shift);
+	ttbr0_el1 = vm->mmu.pgd & GENMASK(47, vm->page_shift);
 
 	/* Configure output size */
 	switch (vm->mode) {
@@ -358,7 +358,7 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
 	case VM_MODE_P52V48_16K:
 	case VM_MODE_P52V48_64K:
 		tcr_el1 |= TCR_IPS_52_BITS;
-		ttbr0_el1 |= FIELD_GET(GENMASK(51, 48), vm->pgd) << 2;
+		ttbr0_el1 |= FIELD_GET(GENMASK(51, 48), vm->mmu.pgd) << 2;
 		break;
 	case VM_MODE_P48V48_4K:
 	case VM_MODE_P48V48_16K:
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 8279b6ced8d2..65752daeed90 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -281,34 +281,34 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
 	/* Setup mode specific traits. */
 	switch (vm->mode) {
 	case VM_MODE_P52V48_4K:
-		vm->pgtable_levels = 4;
+		vm->mmu.pgtable_levels = 4;
 		break;
 	case VM_MODE_P52V48_64K:
-		vm->pgtable_levels = 3;
+		vm->mmu.pgtable_levels = 3;
 		break;
 	case VM_MODE_P48V48_4K:
-		vm->pgtable_levels = 4;
+		vm->mmu.pgtable_levels = 4;
 		break;
 	case VM_MODE_P48V48_64K:
-		vm->pgtable_levels = 3;
+		vm->mmu.pgtable_levels = 3;
 		break;
 	case VM_MODE_P40V48_4K:
 	case VM_MODE_P36V48_4K:
-		vm->pgtable_levels = 4;
+		vm->mmu.pgtable_levels = 4;
 		break;
 	case VM_MODE_P40V48_64K:
 	case VM_MODE_P36V48_64K:
-		vm->pgtable_levels = 3;
+		vm->mmu.pgtable_levels = 3;
 		break;
 	case VM_MODE_P52V48_16K:
 	case VM_MODE_P48V48_16K:
 	case VM_MODE_P40V48_16K:
 	case VM_MODE_P36V48_16K:
-		vm->pgtable_levels = 4;
+		vm->mmu.pgtable_levels = 4;
 		break;
 	case VM_MODE_P47V47_16K:
 	case VM_MODE_P36V47_16K:
-		vm->pgtable_levels = 3;
+		vm->mmu.pgtable_levels = 3;
 		break;
 	case VM_MODE_PXXVYY_4K:
 #ifdef __x86_64__
@@ -321,22 +321,22 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
 			    vm->va_bits);
 
 		if (vm->va_bits == 57) {
-			vm->pgtable_levels = 5;
+			vm->mmu.pgtable_levels = 5;
 		} else {
 			TEST_ASSERT(vm->va_bits == 48,
 				    "Unexpected guest virtual address width: %d",
 				    vm->va_bits);
-			vm->pgtable_levels = 4;
+			vm->mmu.pgtable_levels = 4;
 		}
 #else
 		TEST_FAIL("VM_MODE_PXXVYY_4K not supported on non-x86 platforms");
#endif
 		break;
 	case VM_MODE_P47V64_4K:
-		vm->pgtable_levels = 5;
+		vm->mmu.pgtable_levels = 5;
 		break;
 	case VM_MODE_P44V64_4K:
-		vm->pgtable_levels = 5;
+		vm->mmu.pgtable_levels = 5;
 		break;
 	default:
 		TEST_FAIL("Unknown guest mode: 0x%x", vm->mode);
@@ -1956,8 +1956,8 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 	fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
 	sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
 	fprintf(stream, "%*spgd_created: %u\n", indent, "",
-		vm->pgd_created);
-	if (vm->pgd_created) {
+		vm->mmu.pgd_created);
+	if (vm->mmu.pgd_created) {
 		fprintf(stream, "%*sVirtual Translation Tables:\n",
 			indent + 2, "");
 		virt_dump(stream, vm, indent + 4);
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index 07c103369ddb..17aa55a2047a 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -50,11 +50,11 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 	int i;
 	vm_paddr_t child, table;
 
-	if (vm->pgd_created)
+	if (vm->mmu.pgd_created)
 		return;
 
 	child = table = 0;
-	for (i = 0; i < vm->pgtable_levels; i++) {
+	for (i = 0; i < vm->mmu.pgtable_levels; i++) {
 		invalid_pgtable[i] = child;
 		table = vm_phy_page_alloc(vm, LOONGARCH_PAGE_TABLE_PHYS_MIN,
 					  vm->memslots[MEM_REGION_PT]);
@@ -62,8 +62,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 		virt_set_pgtable(vm, table, child);
 		child = table;
 	}
-	vm->pgd = table;
-	vm->pgd_created = true;
+	vm->mmu.pgd = table;
+	vm->mmu.pgd_created = true;
 }
 
 static int virt_pte_none(uint64_t *ptep, int level)
@@ -77,11 +77,11 @@ static uint64_t *virt_populate_pte(struct kvm_vm *vm, vm_vaddr_t gva, int alloc)
 	uint64_t *ptep;
 	vm_paddr_t child;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		goto unmapped_gva;
 
-	child = vm->pgd;
-	level = vm->pgtable_levels - 1;
+	child = vm->mmu.pgd;
+	level = vm->mmu.pgtable_levels - 1;
 	while (level > 0) {
 		ptep = addr_gpa2hva(vm, child) + virt_pte_index(vm, gva, level) * 8;
 		if (virt_pte_none(ptep, level)) {
@@ -161,11 +161,11 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
 	int level;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		return;
 
-	level = vm->pgtable_levels - 1;
-	pte_dump(stream, vm, indent, vm->pgd, level);
+	level = vm->mmu.pgtable_levels - 1;
+	pte_dump(stream, vm, indent, vm->mmu.pgd, level);
 }
 
 void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
@@ -297,7 +297,7 @@ static void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
 
 	width = vm->page_shift - 3;
 
-	switch (vm->pgtable_levels) {
+	switch (vm->mmu.pgtable_levels) {
 	case 4:
 		/* pud page shift and width */
 		val = (vm->page_shift + width * 2) << 20 | (width << 25);
@@ -309,15 +309,15 @@ static void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
 		val |= vm->page_shift | width << 5;
 		break;
 	default:
-		TEST_FAIL("Got %u page table levels, expected 3 or 4", vm->pgtable_levels);
+		TEST_FAIL("Got %u page table levels, expected 3 or 4", vm->mmu.pgtable_levels);
 	}
 
 	loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL0, val);
 
 	/* PGD page shift and width */
-	val = (vm->page_shift + width * (vm->pgtable_levels - 1)) | width << 6;
+	val = (vm->page_shift + width * (vm->mmu.pgtable_levels - 1)) | width << 6;
 	loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL1, val);
-	loongarch_set_csr(vcpu, LOONGARCH_CSR_PGDL, vm->pgd);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_PGDL, vm->mmu.pgd);
 
 	/*
	 * Refill exception runs on real mode
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 2eac7d4b59e9..e6ec7c224fc3 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -60,7 +60,7 @@ static uint64_t pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level)
 {
 	TEST_ASSERT(level > -1,
 		    "Negative page table level (%d) not possible", level);
-	TEST_ASSERT(level < vm->pgtable_levels,
+	TEST_ASSERT(level < vm->mmu.pgtable_levels,
 		    "Invalid page table level (%d)", level);
 
 	return (gva & pte_index_mask[level]) >> pte_index_shift[level];
@@ -70,19 +70,19 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 {
 	size_t nr_pages = page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size;
 
-	if (vm->pgd_created)
+	if (vm->mmu.pgd_created)
 		return;
 
-	vm->pgd = vm_phy_pages_alloc(vm, nr_pages,
-				     KVM_GUEST_PAGE_TABLE_MIN_PADDR,
-				     vm->memslots[MEM_REGION_PT]);
-	vm->pgd_created = true;
+	vm->mmu.pgd = vm_phy_pages_alloc(vm, nr_pages,
+					 KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+					 vm->memslots[MEM_REGION_PT]);
+	vm->mmu.pgd_created = true;
 }
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
 	uint64_t *ptep, next_ppn;
-	int level = vm->pgtable_levels - 1;
+	int level = vm->mmu.pgtable_levels - 1;
 
 	TEST_ASSERT((vaddr % vm->page_size) == 0,
 		    "Virtual address not on page boundary,\n"
@@ -98,7 +98,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
 
-	ptep = addr_gpa2hva(vm, vm->pgd) + pte_index(vm, vaddr, level) * 8;
+	ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pte_index(vm, vaddr, level) * 8;
 	if (!*ptep) {
 		next_ppn = vm_alloc_page_table(vm) >> PGTBL_PAGE_SIZE_SHIFT;
 		*ptep = (next_ppn << PGTBL_PTE_ADDR_SHIFT) |
@@ -126,12 +126,12 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 {
 	uint64_t *ptep;
-	int level = vm->pgtable_levels - 1;
+	int level = vm->mmu.pgtable_levels - 1;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		goto unmapped_gva;
 
-	ptep = addr_gpa2hva(vm, vm->pgd) + pte_index(vm, gva, level) * 8;
+	ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pte_index(vm, gva, level) * 8;
 	if (!ptep)
 		goto unmapped_gva;
 	level--;
@@ -176,13 +176,14 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent,
 
 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
-	int level = vm->pgtable_levels - 1;
+	struct kvm_mmu *mmu = &vm->mmu;
+	int level = mmu->pgtable_levels - 1;
 	uint64_t pgd, *ptep;
 
-	if (!vm->pgd_created)
+	if (!mmu->pgd_created)
 		return;
 
-	for (pgd = vm->pgd; pgd < vm->pgd + ptrs_per_pte(vm) * 8; pgd += 8) {
+	for (pgd = mmu->pgd; pgd < mmu->pgd + ptrs_per_pte(vm) * 8; pgd += 8) {
 		ptep = addr_gpa2hva(vm, pgd);
 		if (!*ptep)
 			continue;
@@ -211,7 +212,7 @@ void riscv_vcpu_mmu_setup(struct kvm_vcpu *vcpu)
 		TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
 	}
 
-	satp = (vm->pgd >> PGTBL_PAGE_SIZE_SHIFT) & SATP_PPN;
+	satp = (vm->mmu.pgd >> PGTBL_PAGE_SIZE_SHIFT) & SATP_PPN;
 	satp |= SATP_MODE_48;
 
 	vcpu_set_reg(vcpu, RISCV_GENERAL_CSR_REG(satp), satp);
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 8ceeb17c819a..6a9a660413a7 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -17,7 +17,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 	TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
 		    vm->page_size);
 
-	if (vm->pgd_created)
+	if (vm->mmu.pgd_created)
 		return;
 
 	paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
@@ -25,8 +25,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 				   vm->memslots[MEM_REGION_PT]);
 	memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size);
 
-	vm->pgd = paddr;
-	vm->pgd_created = true;
+	vm->mmu.pgd = paddr;
+	vm->mmu.pgd_created = true;
 }
 
 /*
@@ -70,7 +70,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
 		    gva, vm->max_gfn, vm->page_size);
 
 	/* Walk through region and segment tables */
-	entry = addr_gpa2hva(vm, vm->pgd);
+	entry = addr_gpa2hva(vm, vm->mmu.pgd);
 	for (ri = 1; ri <= 4; ri++) {
 		idx = (gva >> (64 - 11 * ri)) & 0x7ffu;
 		if (entry[idx] & REGION_ENTRY_INVALID)
@@ -94,7 +94,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
 		    vm->page_size);
 
-	entry = addr_gpa2hva(vm, vm->pgd);
+	entry = addr_gpa2hva(vm, vm->mmu.pgd);
 	for (ri = 1; ri <= 4; ri++) {
 		idx = (gva >> (64 - 11 * ri)) & 0x7ffu;
 		TEST_ASSERT(!(entry[idx] & REGION_ENTRY_INVALID),
@@ -149,10 +149,10 @@ static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
 
 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		return;
 
-	virt_dump_region(stream, vm, indent, vm->pgd);
+	virt_dump_region(stream, vm, indent, vm->mmu.pgd);
 }
 
 void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
@@ -184,7 +184,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
 
 	vcpu_sregs_get(vcpu, &sregs);
 	sregs.crs[0] |= 0x00040000;		/* Enable floating point regs */
-	sregs.crs[1] = vm->pgd | 0xf;		/* Primary region table */
+	sregs.crs[1] = vm->mmu.pgd | 0xf;	/* Primary region table */
 	vcpu_sregs_set(vcpu, &sregs);
 
 	vcpu->run->psw_mask = 0x0400000180000000ULL;  /* DAT enabled + 64 bit mode */
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index c14bf2b5f28f..f027f86d1535 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -162,9 +162,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 		    "Unknown or unsupported guest mode: 0x%x", vm->mode);
 
 	/* If needed, create the top-level page table. */
-	if (!vm->pgd_created) {
-		vm->pgd = vm_alloc_page_table(vm);
-		vm->pgd_created = true;
+	if (!vm->mmu.pgd_created) {
+		vm->mmu.pgd = vm_alloc_page_table(vm);
+		vm->mmu.pgd_created = true;
 	}
 }
 
@@ -175,7 +175,7 @@ static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte,
 	uint64_t *page_table = addr_gpa2hva(vm, pt_gpa);
 	int index = (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu;
 
-	TEST_ASSERT((*parent_pte & PTE_PRESENT_MASK) || parent_pte == &vm->pgd,
+	TEST_ASSERT((*parent_pte & PTE_PRESENT_MASK) || parent_pte == &vm->mmu.pgd,
 		    "Parent PTE (level %d) not PRESENT for gva: 0x%08lx",
 		    level + 1, vaddr);
 
@@ -218,7 +218,7 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 {
 	const uint64_t pg_size = PG_LEVEL_SIZE(level);
-	uint64_t *pte = &vm->pgd;
+	uint64_t *pte = &vm->mmu.pgd;
 	int current_level;
 
 	TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
@@ -243,7 +243,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	 * Allocate upper level page tables, if not already present.  Return
	 * early if a hugepage was created.
*/ - for (current_level =3D vm->pgtable_levels; + for (current_level =3D vm->mmu.pgtable_levels; current_level > PG_LEVEL_4K; current_level--) { pte =3D virt_create_upper_pte(vm, pte, vaddr, paddr, @@ -309,14 +309,14 @@ static bool vm_is_target_pte(uint64_t *pte, int *leve= l, int current_level) static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vad= dr, int *level) { - int va_width =3D 12 + (vm->pgtable_levels) * 9; - uint64_t *pte =3D &vm->pgd; + int va_width =3D 12 + (vm->mmu.pgtable_levels) * 9; + uint64_t *pte =3D &vm->mmu.pgd; int current_level; =20 TEST_ASSERT(!vm->arch.is_pt_protected, "Walking page tables of protected guests is impossible"); =20 - TEST_ASSERT(*level >=3D PG_LEVEL_NONE && *level <=3D vm->pgtable_levels, + TEST_ASSERT(*level >=3D PG_LEVEL_NONE && *level <=3D vm->mmu.pgtable_leve= ls, "Invalid PG_LEVEL_* '%d'", *level); =20 TEST_ASSERT(vm->mode =3D=3D VM_MODE_PXXVYY_4K, @@ -332,7 +332,7 @@ static uint64_t *__vm_get_page_table_entry(struct kvm_v= m *vm, uint64_t vaddr, (((int64_t)vaddr << (64 - va_width) >> (64 - va_width))), "Canonical check failed. 
The virtual address is invalid."); =20 - for (current_level =3D vm->pgtable_levels; + for (current_level =3D vm->mmu.pgtable_levels; current_level > PG_LEVEL_4K; current_level--) { pte =3D virt_get_pte(vm, pte, vaddr, current_level); @@ -357,7 +357,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, ui= nt8_t indent) uint64_t *pde, *pde_start; uint64_t *pte, *pte_start; =20 - if (!vm->pgd_created) + if (!vm->mmu.pgd_created) return; =20 fprintf(stream, "%*s " @@ -365,7 +365,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, ui= nt8_t indent) fprintf(stream, "%*s index hvaddr gpaddr " "addr w exec dirty\n", indent, ""); - pml4e_start =3D (uint64_t *) addr_gpa2hva(vm, vm->pgd); + pml4e_start =3D (uint64_t *) addr_gpa2hva(vm, vm->mmu.pgd); for (uint16_t n1 =3D 0; n1 <=3D 0x1ffu; n1++) { pml4e =3D &pml4e_start[n1]; if (!(*pml4e & PTE_PRESENT_MASK)) @@ -538,7 +538,7 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct k= vm_vcpu *vcpu) sregs.cr4 |=3D X86_CR4_PAE | X86_CR4_OSFXSR; if (kvm_cpu_has(X86_FEATURE_XSAVE)) sregs.cr4 |=3D X86_CR4_OSXSAVE; - if (vm->pgtable_levels =3D=3D 5) + if (vm->mmu.pgtable_levels =3D=3D 5) sregs.cr4 |=3D X86_CR4_LA57; sregs.efer |=3D (EFER_LME | EFER_LMA | EFER_NX); =20 @@ -549,7 +549,7 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct k= vm_vcpu *vcpu) kvm_seg_set_kernel_data_64bit(&sregs.gs); kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr); =20 - sregs.cr3 =3D vm->pgd; + sregs.cr3 =3D vm->mmu.pgd; vcpu_sregs_set(vcpu, &sregs); } =20 diff --git a/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c b= /tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c index cf1d2d1f2a8f..915c42001dba 100644 --- a/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c +++ b/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c @@ -90,7 +90,7 @@ int main(int argc, char *argv[]) * L1 needs to read its own PML5 table to set up L2. Identity map * the PML5 table to facilitate this. 
*/ - virt_map(vm, vm->pgd, vm->pgd, 1); + virt_map(vm, vm->mmu.pgd, vm->mmu.pgd, 1); =20 vcpu_alloc_vmx(vm, &vmx_pages_gva); vcpu_args_set(vcpu, 1, vmx_pages_gva); --=20 2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:36 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-8-seanjc@google.com>
Subject: [PATCH v4 07/21] KVM: selftests: Plumb "struct kvm_mmu" into x86's MMU APIs
From: Sean Christopherson
To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

In preparation for generalizing the x86 virt mapping APIs to work with TDP (stage-2) page tables, plumb "struct kvm_mmu" into all of the helper functions instead of operating on vm->mmu directly.

Opportunistically swap the order of the check in virt_get_pte() to first assert that the parent is the PGD, and then check that the PTE is present, as it makes more sense to check if the parent PTE is the PGD/root (i.e. not a PTE) before checking that the PTE is PRESENT.

No functional change intended.
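For illustration only (not part of the patch): the plumbing pattern above — inner helpers take an explicit "struct kvm_mmu *" while the arch-level entry points keep old behavior by passing the VM's default MMU — can be sketched in isolation. This is a minimal standalone sketch with simplified stand-in structs and hypothetical helper names, not the real selftest definitions from kvm_util.h:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified stand-ins for the selftest structures; the real definitions
 * in kvm_util.h carry far more state.
 */
struct kvm_mmu {
	bool pgd_created;
	uint64_t pgd;
	int pgtable_levels;
};

struct kvm_vm {
	struct kvm_mmu mmu;	/* default MMU for ordinary guest mappings */
};

/*
 * Inner helpers operate on whichever MMU they are handed, so the same
 * code can later walk a TDP (stage-2) MMU as well.
 */
static uint64_t mmu_root(struct kvm_mmu *mmu)
{
	assert(mmu->pgd_created);
	return mmu->pgd;
}

/*
 * The arch-level entry point keeps its old signature and behavior by
 * always passing the VM's default MMU, so existing callers are untouched.
 */
static uint64_t vm_root(struct kvm_vm *vm)
{
	return mmu_root(&vm->mmu);
}
```

The same shape is what the diff below applies to virt_get_pte(), virt_create_upper_pte(), __virt_pg_map(), and __vm_get_page_table_entry().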
Suggested-by: Sean Christopherson Signed-off-by: Yosry Ahmed [sean: rebase on common kvm_mmu structure, rewrite changelog] Signed-off-by: Sean Christopherson --- .../selftests/kvm/include/x86/processor.h | 3 +- .../testing/selftests/kvm/lib/x86/processor.c | 68 +++++++++++-------- 2 files changed, 41 insertions(+), 30 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/te= sting/selftests/kvm/include/x86/processor.h index c00c0fbe62cd..cbac9de29074 100644 --- a/tools/testing/selftests/kvm/include/x86/processor.h +++ b/tools/testing/selftests/kvm/include/x86/processor.h @@ -1449,7 +1449,8 @@ enum pg_level { #define PG_SIZE_2M PG_LEVEL_SIZE(PG_LEVEL_2M) #define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G) =20 -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int = level); +void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr, + uint64_t paddr, int level); void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, uint64_t nr_bytes, int level); =20 diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testin= g/selftests/kvm/lib/x86/processor.c index f027f86d1535..f25742a804b0 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -156,26 +156,31 @@ bool kvm_is_tdp_enabled(void) return get_kvm_amd_param_bool("npt"); } =20 -void virt_arch_pgd_alloc(struct kvm_vm *vm) +static void virt_mmu_init(struct kvm_vm *vm, struct kvm_mmu *mmu) { - TEST_ASSERT(vm->mode =3D=3D VM_MODE_PXXVYY_4K, - "Unknown or unsupported guest mode: 0x%x", vm->mode); - /* If needed, create the top-level page table. 
*/ - if (!vm->mmu.pgd_created) { - vm->mmu.pgd =3D vm_alloc_page_table(vm); - vm->mmu.pgd_created =3D true; + if (!mmu->pgd_created) { + mmu->pgd =3D vm_alloc_page_table(vm); + mmu->pgd_created =3D true; } } =20 -static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte, - uint64_t vaddr, int level) +void virt_arch_pgd_alloc(struct kvm_vm *vm) +{ + TEST_ASSERT(vm->mode =3D=3D VM_MODE_PXXVYY_4K, + "Unknown or unsupported guest mode: 0x%x", vm->mode); + + virt_mmu_init(vm, &vm->mmu); +} + +static void *virt_get_pte(struct kvm_vm *vm, struct kvm_mmu *mmu, + uint64_t *parent_pte, uint64_t vaddr, int level) { uint64_t pt_gpa =3D PTE_GET_PA(*parent_pte); uint64_t *page_table =3D addr_gpa2hva(vm, pt_gpa); int index =3D (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu; =20 - TEST_ASSERT((*parent_pte & PTE_PRESENT_MASK) || parent_pte =3D=3D &vm->mm= u.pgd, + TEST_ASSERT((*parent_pte =3D=3D mmu->pgd) || (*parent_pte & PTE_PRESENT_M= ASK), "Parent PTE (level %d) not PRESENT for gva: 0x%08lx", level + 1, vaddr); =20 @@ -183,13 +188,14 @@ static void *virt_get_pte(struct kvm_vm *vm, uint64_t= *parent_pte, } =20 static uint64_t *virt_create_upper_pte(struct kvm_vm *vm, + struct kvm_mmu *mmu, uint64_t *parent_pte, uint64_t vaddr, uint64_t paddr, int current_level, int target_level) { - uint64_t *pte =3D virt_get_pte(vm, parent_pte, vaddr, current_level); + uint64_t *pte =3D virt_get_pte(vm, mmu, parent_pte, vaddr, current_level); =20 paddr =3D vm_untag_gpa(vm, paddr); =20 @@ -215,10 +221,11 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm = *vm, return pte; } =20 -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int = level) +void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr, + uint64_t paddr, int level) { const uint64_t pg_size =3D PG_LEVEL_SIZE(level); - uint64_t *pte =3D &vm->mmu.pgd; + uint64_t *pte =3D &mmu->pgd; int current_level; =20 TEST_ASSERT(vm->mode =3D=3D VM_MODE_PXXVYY_4K, @@ -243,17 +250,17 @@ void 
__virt_pg_map(struct kvm_vm *vm, uint64_t vaddr,= uint64_t paddr, int level) * Allocate upper level page tables, if not already present. Return * early if a hugepage was created. */ - for (current_level =3D vm->mmu.pgtable_levels; + for (current_level =3D mmu->pgtable_levels; current_level > PG_LEVEL_4K; current_level--) { - pte =3D virt_create_upper_pte(vm, pte, vaddr, paddr, + pte =3D virt_create_upper_pte(vm, mmu, pte, vaddr, paddr, current_level, level); if (*pte & PTE_LARGE_MASK) return; } =20 /* Fill in page table entry. */ - pte =3D virt_get_pte(vm, pte, vaddr, PG_LEVEL_4K); + pte =3D virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K); TEST_ASSERT(!(*pte & PTE_PRESENT_MASK), "PTE already present for 4k page at vaddr: 0x%lx", vaddr); *pte =3D PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MA= SK); @@ -270,7 +277,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, u= int64_t paddr, int level) =20 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) { - __virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K); + __virt_pg_map(vm, &vm->mmu, vaddr, paddr, PG_LEVEL_4K); } =20 void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, @@ -285,7 +292,7 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, = uint64_t paddr, nr_bytes, pg_size); =20 for (i =3D 0; i < nr_pages; i++) { - __virt_pg_map(vm, vaddr, paddr, level); + __virt_pg_map(vm, &vm->mmu, vaddr, paddr, level); sparsebit_set_num(vm->vpages_mapped, vaddr >> vm->page_shift, nr_bytes / PAGE_SIZE); =20 @@ -294,7 +301,8 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, = uint64_t paddr, } } =20 -static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level) +static bool vm_is_target_pte(struct kvm_mmu *mmu, uint64_t *pte, + int *level, int current_level) { if (*pte & PTE_LARGE_MASK) { TEST_ASSERT(*level =3D=3D PG_LEVEL_NONE || @@ -306,17 +314,19 @@ static bool vm_is_target_pte(uint64_t *pte, int *leve= l, int current_level) return *level =3D=3D 
current_level; } =20 -static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vad= dr, +static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, + struct kvm_mmu *mmu, + uint64_t vaddr, int *level) { - int va_width =3D 12 + (vm->mmu.pgtable_levels) * 9; - uint64_t *pte =3D &vm->mmu.pgd; + int va_width =3D 12 + (mmu->pgtable_levels) * 9; + uint64_t *pte =3D &mmu->pgd; int current_level; =20 TEST_ASSERT(!vm->arch.is_pt_protected, "Walking page tables of protected guests is impossible"); =20 - TEST_ASSERT(*level >=3D PG_LEVEL_NONE && *level <=3D vm->mmu.pgtable_leve= ls, + TEST_ASSERT(*level >=3D PG_LEVEL_NONE && *level <=3D mmu->pgtable_levels, "Invalid PG_LEVEL_* '%d'", *level); =20 TEST_ASSERT(vm->mode =3D=3D VM_MODE_PXXVYY_4K, @@ -332,22 +342,22 @@ static uint64_t *__vm_get_page_table_entry(struct kvm= _vm *vm, uint64_t vaddr, (((int64_t)vaddr << (64 - va_width) >> (64 - va_width))), "Canonical check failed. The virtual address is invalid."); =20 - for (current_level =3D vm->mmu.pgtable_levels; + for (current_level =3D mmu->pgtable_levels; current_level > PG_LEVEL_4K; current_level--) { - pte =3D virt_get_pte(vm, pte, vaddr, current_level); - if (vm_is_target_pte(pte, level, current_level)) + pte =3D virt_get_pte(vm, mmu, pte, vaddr, current_level); + if (vm_is_target_pte(mmu, pte, level, current_level)) return pte; } =20 - return virt_get_pte(vm, pte, vaddr, PG_LEVEL_4K); + return virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K); } =20 uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr) { int level =3D PG_LEVEL_4K; =20 - return __vm_get_page_table_entry(vm, vaddr, &level); + return __vm_get_page_table_entry(vm, &vm->mmu, vaddr, &level); } =20 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent) @@ -497,7 +507,7 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_se= gment *segp) vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) { int level =3D PG_LEVEL_NONE; - uint64_t *pte =3D 
__vm_get_page_table_entry(vm, gva, &level); + uint64_t *pte =3D __vm_get_page_table_entry(vm, &vm->mmu, gva, &level); =20 TEST_ASSERT(*pte & PTE_PRESENT_MASK, "Leaf PTE not PRESENT for gva: 0x%08lx", gva); --=20 2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:37 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-9-seanjc@google.com>
Subject: [PATCH v4 08/21] KVM: selftests: Add a "struct kvm_mmu_arch arch" member to kvm_mmu
From: Sean Christopherson
To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Add an arch structure+field in "struct kvm_mmu" so that architectures can track arch-specific information for a given MMU.

No functional change intended.
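For illustration only (not part of the patch): embedding the arch member by value mirrors the existing "struct kvm_vm" / "struct kvm_vm_arch" split. A hypothetical sketch of the resulting layout — field names match the patch, everything else is simplified, and the init helper is invented purely to exercise it (the empty-struct form is a GNU C extension, as used throughout the kernel):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Per-arch MMU state; empty for architectures that need nothing extra
 * (empty structs are a GNU C extension with sizeof == 0).
 */
struct kvm_mmu_arch {};

/*
 * The common structure embeds the arch member by value, so arch code can
 * reach mmu->arch without casts or a separate allocation.
 */
struct kvm_mmu {
	bool pgd_created;
	uint64_t pgd;
	int pgtable_levels;

	struct kvm_mmu_arch arch;
};

/* Hypothetical init helper, just to exercise the layout. */
static void mmu_init(struct kvm_mmu *mmu, uint64_t pgd, int levels)
{
	mmu->pgd = pgd;
	mmu->pgtable_levels = levels;
	mmu->pgd_created = true;
}
```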
Signed-off-by: Sean Christopherson Reviewed-by: Yosry Ahmed --- tools/testing/selftests/kvm/include/arm64/kvm_util_arch.h | 2 ++ tools/testing/selftests/kvm/include/kvm_util.h | 2 ++ tools/testing/selftests/kvm/include/loongarch/kvm_util_arch.h | 1 + tools/testing/selftests/kvm/include/riscv/kvm_util_arch.h | 1 + tools/testing/selftests/kvm/include/s390/kvm_util_arch.h | 1 + tools/testing/selftests/kvm/include/x86/kvm_util_arch.h | 2 ++ 6 files changed, 9 insertions(+) diff --git a/tools/testing/selftests/kvm/include/arm64/kvm_util_arch.h b/to= ols/testing/selftests/kvm/include/arm64/kvm_util_arch.h index b973bb2c64a6..4a2033708227 100644 --- a/tools/testing/selftests/kvm/include/arm64/kvm_util_arch.h +++ b/tools/testing/selftests/kvm/include/arm64/kvm_util_arch.h @@ -2,6 +2,8 @@ #ifndef SELFTEST_KVM_UTIL_ARCH_H #define SELFTEST_KVM_UTIL_ARCH_H =20 +struct kvm_mmu_arch {}; + struct kvm_vm_arch { bool has_gic; int gic_fd; diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing= /selftests/kvm/include/kvm_util.h index 39558c05c0bf..c1497515fa6a 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -92,6 +92,8 @@ struct kvm_mmu { bool pgd_created; uint64_t pgd; int pgtable_levels; + + struct kvm_mmu_arch arch; }; =20 struct kvm_vm { diff --git a/tools/testing/selftests/kvm/include/loongarch/kvm_util_arch.h = b/tools/testing/selftests/kvm/include/loongarch/kvm_util_arch.h index e43a57d99b56..d5095900e442 100644 --- a/tools/testing/selftests/kvm/include/loongarch/kvm_util_arch.h +++ b/tools/testing/selftests/kvm/include/loongarch/kvm_util_arch.h @@ -2,6 +2,7 @@ #ifndef SELFTEST_KVM_UTIL_ARCH_H #define SELFTEST_KVM_UTIL_ARCH_H =20 +struct kvm_mmu_arch {}; struct kvm_vm_arch {}; =20 #endif // SELFTEST_KVM_UTIL_ARCH_H diff --git a/tools/testing/selftests/kvm/include/riscv/kvm_util_arch.h b/to= ols/testing/selftests/kvm/include/riscv/kvm_util_arch.h index e43a57d99b56..d5095900e442 100644 --- 
a/tools/testing/selftests/kvm/include/riscv/kvm_util_arch.h +++ b/tools/testing/selftests/kvm/include/riscv/kvm_util_arch.h @@ -2,6 +2,7 @@ #ifndef SELFTEST_KVM_UTIL_ARCH_H #define SELFTEST_KVM_UTIL_ARCH_H =20 +struct kvm_mmu_arch {}; struct kvm_vm_arch {}; =20 #endif // SELFTEST_KVM_UTIL_ARCH_H diff --git a/tools/testing/selftests/kvm/include/s390/kvm_util_arch.h b/too= ls/testing/selftests/kvm/include/s390/kvm_util_arch.h index e43a57d99b56..d5095900e442 100644 --- a/tools/testing/selftests/kvm/include/s390/kvm_util_arch.h +++ b/tools/testing/selftests/kvm/include/s390/kvm_util_arch.h @@ -2,6 +2,7 @@ #ifndef SELFTEST_KVM_UTIL_ARCH_H #define SELFTEST_KVM_UTIL_ARCH_H =20 +struct kvm_mmu_arch {}; struct kvm_vm_arch {}; =20 #endif // SELFTEST_KVM_UTIL_ARCH_H diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tool= s/testing/selftests/kvm/include/x86/kvm_util_arch.h index 972bb1c4ab4c..456e5ca170df 100644 --- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h +++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h @@ -10,6 +10,8 @@ =20 extern bool is_forced_emulation_enabled; =20 +struct kvm_mmu_arch {}; + struct kvm_vm_arch { vm_vaddr_t gdt; vm_vaddr_t tss; --=20 2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:38 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-10-seanjc@google.com>
Subject: [PATCH v4 09/21] KVM: selftests: Move PTE bitmasks to kvm_mmu
From: Sean Christopherson
To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Yosry Ahmed Move the PTE bitmasks into kvm_mmu to parameterize them for virt mapping functions. Introduce helpers to read/write different PTE bits given a kvm_mmu. Drop the 'global' bit definition as it's currently unused, but leave the 'user' bit as it will be used in coming changes. Opportunistically rename 'large' to 'huge' as it's more consistent with the kernel naming. Leave PHYSICAL_PAGE_MASK alone; it's fixed in all page table formats and a lot of other macros depend on it. It's tempting to move all the other macros to be per-struct instead, but it would be too much noise for little benefit. Keep c_bit and s_bit in vm->arch as they are used before the MMU is initialized, through __vm_create() -> vm_userspace_mem_region_add() -> vm_mem_add() -> vm_arch_has_protected_memory(). No functional change intended. 
Signed-off-by: Yosry Ahmed [sean: rename accessors to is_*_pte()] Signed-off-by: Sean Christopherson --- .../selftests/kvm/include/x86/kvm_util_arch.h | 16 ++++- .../selftests/kvm/include/x86/processor.h | 28 +++++--- .../testing/selftests/kvm/lib/x86/processor.c | 71 +++++++++++-------- 3 files changed, 76 insertions(+), 39 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tool= s/testing/selftests/kvm/include/x86/kvm_util_arch.h index 456e5ca170df..bad381d63b6a 100644 --- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h +++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h @@ -10,7 +10,21 @@ =20 extern bool is_forced_emulation_enabled; =20 -struct kvm_mmu_arch {}; +struct pte_masks { + uint64_t present; + uint64_t writable; + uint64_t user; + uint64_t accessed; + uint64_t dirty; + uint64_t huge; + uint64_t nx; + uint64_t c; + uint64_t s; +}; + +struct kvm_mmu_arch { + struct pte_masks pte_masks; +}; =20 struct kvm_vm_arch { vm_vaddr_t gdt; diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/te= sting/selftests/kvm/include/x86/processor.h index cbac9de29074..b2084434dd8b 100644 --- a/tools/testing/selftests/kvm/include/x86/processor.h +++ b/tools/testing/selftests/kvm/include/x86/processor.h @@ -362,16 +362,6 @@ static inline unsigned int x86_model(unsigned int eax) return ((eax >> 12) & 0xf0) | ((eax >> 4) & 0x0f); } =20 -/* Page table bitfield declarations */ -#define PTE_PRESENT_MASK BIT_ULL(0) -#define PTE_WRITABLE_MASK BIT_ULL(1) -#define PTE_USER_MASK BIT_ULL(2) -#define PTE_ACCESSED_MASK BIT_ULL(5) -#define PTE_DIRTY_MASK BIT_ULL(6) -#define PTE_LARGE_MASK BIT_ULL(7) -#define PTE_GLOBAL_MASK BIT_ULL(8) -#define PTE_NX_MASK BIT_ULL(63) - #define PHYSICAL_PAGE_MASK GENMASK_ULL(51, 12) =20 #define PAGE_SHIFT 12 @@ -1449,6 +1439,24 @@ enum pg_level { #define PG_SIZE_2M PG_LEVEL_SIZE(PG_LEVEL_2M) #define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G) =20 +#define PTE_PRESENT_MASK(mmu) 
((mmu)->arch.pte_masks.present) +#define PTE_WRITABLE_MASK(mmu) ((mmu)->arch.pte_masks.writable) +#define PTE_USER_MASK(mmu) ((mmu)->arch.pte_masks.user) +#define PTE_ACCESSED_MASK(mmu) ((mmu)->arch.pte_masks.accessed) +#define PTE_DIRTY_MASK(mmu) ((mmu)->arch.pte_masks.dirty) +#define PTE_HUGE_MASK(mmu) ((mmu)->arch.pte_masks.huge) +#define PTE_NX_MASK(mmu) ((mmu)->arch.pte_masks.nx) +#define PTE_C_BIT_MASK(mmu) ((mmu)->arch.pte_masks.c) +#define PTE_S_BIT_MASK(mmu) ((mmu)->arch.pte_masks.s) + +#define is_present_pte(mmu, pte) (!!(*(pte) & PTE_PRESENT_MASK(mmu))) +#define is_writable_pte(mmu, pte) (!!(*(pte) & PTE_WRITABLE_MASK(mmu))) +#define is_user_pte(mmu, pte) (!!(*(pte) & PTE_USER_MASK(mmu))) +#define is_accessed_pte(mmu, pte) (!!(*(pte) & PTE_ACCESSED_MASK(mmu))) +#define is_dirty_pte(mmu, pte) (!!(*(pte) & PTE_DIRTY_MASK(mmu))) +#define is_huge_pte(mmu, pte) (!!(*(pte) & PTE_HUGE_MASK(mmu))) +#define is_nx_pte(mmu, pte) (!!(*(pte) & PTE_NX_MASK(mmu))) + void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr, uint64_t paddr, int level); void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testin= g/selftests/kvm/lib/x86/processor.c index f25742a804b0..3800f4ff6770 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -156,12 +156,14 @@ bool kvm_is_tdp_enabled(void) return get_kvm_amd_param_bool("npt"); } =20 -static void virt_mmu_init(struct kvm_vm *vm, struct kvm_mmu *mmu) +static void virt_mmu_init(struct kvm_vm *vm, struct kvm_mmu *mmu, + struct pte_masks *pte_masks) { /* If needed, create the top-level page table. 
*/ if (!mmu->pgd_created) { mmu->pgd =3D vm_alloc_page_table(vm); mmu->pgd_created =3D true; + mmu->arch.pte_masks =3D *pte_masks; } } =20 @@ -170,7 +172,19 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) TEST_ASSERT(vm->mode =3D=3D VM_MODE_PXXVYY_4K, "Unknown or unsupported guest mode: 0x%x", vm->mode); =20 - virt_mmu_init(vm, &vm->mmu); + struct pte_masks pte_masks =3D (struct pte_masks){ + .present =3D BIT_ULL(0), + .writable =3D BIT_ULL(1), + .user =3D BIT_ULL(2), + .accessed =3D BIT_ULL(5), + .dirty =3D BIT_ULL(6), + .huge =3D BIT_ULL(7), + .nx =3D BIT_ULL(63), + .c =3D vm->arch.c_bit, + .s =3D vm->arch.s_bit, + }; + + virt_mmu_init(vm, &vm->mmu, &pte_masks); } =20 static void *virt_get_pte(struct kvm_vm *vm, struct kvm_mmu *mmu, @@ -180,7 +194,7 @@ static void *virt_get_pte(struct kvm_vm *vm, struct kvm= _mmu *mmu, uint64_t *page_table =3D addr_gpa2hva(vm, pt_gpa); int index =3D (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu; =20 - TEST_ASSERT((*parent_pte =3D=3D mmu->pgd) || (*parent_pte & PTE_PRESENT_M= ASK), + TEST_ASSERT((*parent_pte =3D=3D mmu->pgd) || is_present_pte(mmu, parent_p= te), "Parent PTE (level %d) not PRESENT for gva: 0x%08lx", level + 1, vaddr); =20 @@ -199,10 +213,10 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm = *vm, =20 paddr =3D vm_untag_gpa(vm, paddr); =20 - if (!(*pte & PTE_PRESENT_MASK)) { - *pte =3D PTE_PRESENT_MASK | PTE_WRITABLE_MASK; + if (!is_present_pte(mmu, pte)) { + *pte =3D PTE_PRESENT_MASK(mmu) | PTE_WRITABLE_MASK(mmu); if (current_level =3D=3D target_level) - *pte |=3D PTE_LARGE_MASK | (paddr & PHYSICAL_PAGE_MASK); + *pte |=3D PTE_HUGE_MASK(mmu) | (paddr & PHYSICAL_PAGE_MASK); else *pte |=3D vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK; } else { @@ -214,7 +228,7 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *v= m, TEST_ASSERT(current_level !=3D target_level, "Cannot create hugepage at level: %u, vaddr: 0x%lx", current_level, vaddr); - TEST_ASSERT(!(*pte & PTE_LARGE_MASK), + TEST_ASSERT(!is_huge_pte(mmu, pte), 
"Cannot create page table at level: %u, vaddr: 0x%lx", current_level, vaddr); } @@ -255,24 +269,24 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu = *mmu, uint64_t vaddr, current_level--) { pte =3D virt_create_upper_pte(vm, mmu, pte, vaddr, paddr, current_level, level); - if (*pte & PTE_LARGE_MASK) + if (is_huge_pte(mmu, pte)) return; } =20 /* Fill in page table entry. */ pte =3D virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K); - TEST_ASSERT(!(*pte & PTE_PRESENT_MASK), + TEST_ASSERT(!is_present_pte(mmu, pte), "PTE already present for 4k page at vaddr: 0x%lx", vaddr); - *pte =3D PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MA= SK); + *pte =3D PTE_PRESENT_MASK(mmu) | PTE_WRITABLE_MASK(mmu) | (paddr & PHYSIC= AL_PAGE_MASK); =20 /* * Neither SEV nor TDX supports shared page tables, so only the final * leaf PTE needs manually set the C/S-bit. */ if (vm_is_gpa_protected(vm, paddr)) - *pte |=3D vm->arch.c_bit; + *pte |=3D PTE_C_BIT_MASK(mmu); else - *pte |=3D vm->arch.s_bit; + *pte |=3D PTE_S_BIT_MASK(mmu); } =20 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) @@ -304,7 +318,7 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, = uint64_t paddr, static bool vm_is_target_pte(struct kvm_mmu *mmu, uint64_t *pte, int *level, int current_level) { - if (*pte & PTE_LARGE_MASK) { + if (is_huge_pte(mmu, pte)) { TEST_ASSERT(*level =3D=3D PG_LEVEL_NONE || *level =3D=3D current_level, "Unexpected hugepage at level %d", current_level); @@ -362,12 +376,13 @@ uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, = uint64_t vaddr) =20 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent) { + struct kvm_mmu *mmu =3D &vm->mmu; uint64_t *pml4e, *pml4e_start; uint64_t *pdpe, *pdpe_start; uint64_t *pde, *pde_start; uint64_t *pte, *pte_start; =20 - if (!vm->mmu.pgd_created) + if (!mmu->pgd_created) return; =20 fprintf(stream, "%*s " @@ -375,47 +390,47 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, = uint8_t indent) 
fprintf(stream, "%*s index hvaddr gpaddr " "addr w exec dirty\n", indent, ""); - pml4e_start =3D (uint64_t *) addr_gpa2hva(vm, vm->mmu.pgd); + pml4e_start =3D (uint64_t *) addr_gpa2hva(vm, mmu->pgd); for (uint16_t n1 =3D 0; n1 <=3D 0x1ffu; n1++) { pml4e =3D &pml4e_start[n1]; - if (!(*pml4e & PTE_PRESENT_MASK)) + if (!is_present_pte(mmu, pml4e)) continue; fprintf(stream, "%*spml4e 0x%-3zx %p 0x%-12lx 0x%-10llx %u " " %u\n", indent, "", pml4e - pml4e_start, pml4e, addr_hva2gpa(vm, pml4e), PTE_GET_PFN(*pml4e), - !!(*pml4e & PTE_WRITABLE_MASK), !!(*pml4e & PTE_NX_MASK)); + is_writable_pte(mmu, pml4e), is_nx_pte(mmu, pml4e)); =20 pdpe_start =3D addr_gpa2hva(vm, *pml4e & PHYSICAL_PAGE_MASK); for (uint16_t n2 =3D 0; n2 <=3D 0x1ffu; n2++) { pdpe =3D &pdpe_start[n2]; - if (!(*pdpe & PTE_PRESENT_MASK)) + if (!is_present_pte(mmu, pdpe)) continue; fprintf(stream, "%*spdpe 0x%-3zx %p 0x%-12lx 0x%-10llx " "%u %u\n", indent, "", pdpe - pdpe_start, pdpe, addr_hva2gpa(vm, pdpe), - PTE_GET_PFN(*pdpe), !!(*pdpe & PTE_WRITABLE_MASK), - !!(*pdpe & PTE_NX_MASK)); + PTE_GET_PFN(*pdpe), is_writable_pte(mmu, pdpe), + is_nx_pte(mmu, pdpe)); =20 pde_start =3D addr_gpa2hva(vm, *pdpe & PHYSICAL_PAGE_MASK); for (uint16_t n3 =3D 0; n3 <=3D 0x1ffu; n3++) { pde =3D &pde_start[n3]; - if (!(*pde & PTE_PRESENT_MASK)) + if (!is_present_pte(mmu, pde)) continue; fprintf(stream, "%*spde 0x%-3zx %p " "0x%-12lx 0x%-10llx %u %u\n", indent, "", pde - pde_start, pde, addr_hva2gpa(vm, pde), - PTE_GET_PFN(*pde), !!(*pde & PTE_WRITABLE_MASK), - !!(*pde & PTE_NX_MASK)); + PTE_GET_PFN(*pde), is_writable_pte(mmu, pde), + is_nx_pte(mmu, pde)); =20 pte_start =3D addr_gpa2hva(vm, *pde & PHYSICAL_PAGE_MASK); for (uint16_t n4 =3D 0; n4 <=3D 0x1ffu; n4++) { pte =3D &pte_start[n4]; - if (!(*pte & PTE_PRESENT_MASK)) + if (!is_present_pte(mmu, pte)) continue; fprintf(stream, "%*spte 0x%-3zx %p " "0x%-12lx 0x%-10llx %u %u " @@ -424,9 +439,9 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, ui= nt8_t indent) pte - 
pte_start, pte, addr_hva2gpa(vm, pte), PTE_GET_PFN(*pte), - !!(*pte & PTE_WRITABLE_MASK), - !!(*pte & PTE_NX_MASK), - !!(*pte & PTE_DIRTY_MASK), + is_writable_pte(mmu, pte), + is_nx_pte(mmu, pte), + is_dirty_pte(mmu, pte), ((uint64_t) n1 << 27) | ((uint64_t) n2 << 18) | ((uint64_t) n3 << 9) @@ -509,7 +524,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vadd= r_t gva) int level =3D PG_LEVEL_NONE; uint64_t *pte =3D __vm_get_page_table_entry(vm, &vm->mmu, gva, &level); =20 - TEST_ASSERT(*pte & PTE_PRESENT_MASK, + TEST_ASSERT(is_present_pte(&vm->mmu, pte), "Leaf PTE not PRESENT for gva: 0x%08lx", gva); =20 /* --=20 2.52.0.351.gbe84eed79e-goog From nobody Sat Feb 7 12:40:53 2026 Received: from mail-pj1-f74.google.com (mail-pj1-f74.google.com [209.85.216.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1E79728725A for ; Tue, 30 Dec 2025 23:02:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1767135733; cv=none; b=QWL2aVlhDkWzaAfp+lzGsyTT4g7vFepF6JJ60yjm9TKccEsIJMlsuWZp5gRZhmF/V1FXaabKBvEbt5WWrX1amBfKTXbdmcuqDz4Q0sLsWnpDsPfqB0E41duvebqPRSo3FeroX+Se5ogKr2r92ose/sSO616UyQriDaTjGB7bY9w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1767135733; c=relaxed/simple; bh=3mr7O/tvT5uy5dzcKxhclQgiEmpVIdBzfbqLItSl0DI=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=gtBwRw3qfL0RV9oZSkAap72xrg8jeS7AhXPhoQM8iljGPoko0E5GaSdPyyW8RycmXDRdgnENuNViYUpHNEjE3kVCUd7VCi1GMVGQvotNjBSfMHurx6oJLdAMSR5dDUFCA+32VQhhYgFTKIiVG8a/EA4tjGXTDS80pbx1OYHvSfA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com 
header.i=@google.com header.b=f1jCb/It; arc=none smtp.client-ip=209.85.216.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="f1jCb/It" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-34ab459c051so25537889a91.0 for ; Tue, 30 Dec 2025 15:02:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1767135730; x=1767740530; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=HoCo7aJ171z3a/CeRUy13gZIVr8qzfglNXk4q+wsZUs=; b=f1jCb/ItWHMs/h/oYxf/fQGLgiNnhYN5XLGo1PdKr7oVuj7G2/Ahhbwoj0AIPcTbT6 lhgERrwycBab8iflJUsmHDT4+KFDe/Cx2z8ZUqdBZ1qy0igCBllD9rvF+q5BzcNPmccH FJHPBDP6higQvfHWYItPhQQZU3K0pHo6+gSiQlhZAGEQlF12pO32q7xuSLKWm8wwcHyj FfwSLqUwuc8MNdJWv8lb4Q+ynwHNLJEdinF/Mel/QYGeP2910vM6OjNtIYzcJUDGBLNg T40vCQ4/Hyhbj3C6oh+8wXBuHMfNWXuiTzm/bmpBuqljMpfJHjWTG8OxIOtSqS+/vb6o 7wtA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1767135730; x=1767740530; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=HoCo7aJ171z3a/CeRUy13gZIVr8qzfglNXk4q+wsZUs=; b=tLqlOcxrVypy7M7e3qrssbs8mf+ZxtD/w2NFcRbwKf8fX3SsGOmi5kqdVORcAmjiN4 gsoTux+v56KKG0i6jljfAWFMzmLbuC04475phlbXTdEFhVcE/AAOU+i6DxiqTsgkSt4N kdq5Y/ZJJm5CKogkE9zlAxyGfUbZ3fvVkVLgFnXUB5FHIfGIuyqixqSHOxBerQ3FkFW0 2tV5EX+crsPkuezmfVinmENDkKb4Jcbix/LwMYdO3fHRueoZ9pH71uyTOxzAm9mEQi07 xPZHsL1ki/5v3S5Dd+LdNy2hWOitYQimAD7gceJG4f/svwPB4XRNiRES+UQ/A7wsRSbN I75Q== X-Forwarded-Encrypted: i=1; 
AJvYcCUlltxyD5leU8Bh3fqZP3h3/qI8g8LT412jaZTo8KjGVCDjV/g9NbADzs43o8o/KI+/eFjCdUkSiX2Sq88=@vger.kernel.org X-Gm-Message-State: AOJu0YwIuyurlNyuY1VdogLg0XuN8cM7bwudHSNZrqhCzsoRfQu8GWno uL1rplw1uh4vMr9Dlx0L1CQ1xg8/n/AnqL7+C84KkVioIrC2d+JKyqp7ebBXuyfTh/fHshH+XTe yC1Ca8Q== X-Google-Smtp-Source: AGHT+IHXyCRNTO0LITU2dGt9isyL83xRKJpOSaQ7I3QFob4WUrF/8KVGlxXnGM2mybHxASS/pACiFt+892M= X-Received: from pjo20.prod.google.com ([2002:a17:90b:5674:b0:34c:2124:a2b0]) (user=seanjc job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:28ce:b0:340:c64d:38d3 with SMTP id 98e67ed59e1d1-34e921448b2mr30537997a91.12.1767135730226; Tue, 30 Dec 2025 15:02:10 -0800 (PST) Reply-To: Sean Christopherson Date: Tue, 30 Dec 2025 15:01:39 -0800 In-Reply-To: <20251230230150.4150236-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20251230230150.4150236-1-seanjc@google.com> X-Mailer: git-send-email 2.52.0.351.gbe84eed79e-goog Message-ID: <20251230230150.4150236-11-seanjc@google.com> Subject: [PATCH v4 10/21] KVM: selftests: Use a TDP MMU to share EPT page tables between vCPUs From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Yosry Ahmed prepare_eptp() currently allocates new EPTs for each vCPU. memstress has its own hack to share the EPTs between vCPUs. Currently, there is no reason to have separate EPTs for each vCPU, and the complexity is significant. 
The only reason it doesn't matter now is because memstress is the only user with multiple vCPUs. Add vm_enable_ept() to allocate EPT page tables for an entire VM, and use it everywhere to replace prepare_eptp(). Drop 'eptp' and 'eptp_hva' from 'struct vmx_pages' as they serve no purpose (e.g. the EPTP can be built from the PGD), but keep 'eptp_gpa' so that the MMU structure doesn't need to be passed in along with vmx_pages. Dynamically allocate the TDP MMU structure to avoid a cyclical dependency between kvm_util_arch.h and kvm_util.h. Remove the workaround in memstress to copy the EPT root between vCPUs since that's now the default behavior. Name the MMU tdp_mmu instead of e.g. nested_mmu or nested.mmu to avoid recreating the same mess that KVM has with respect to "nested" MMUs, e.g. does nested refer to the stage-2 page tables created by L1, or the stage-1 page tables created by L2? Signed-off-by: Yosry Ahmed Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson --- .../selftests/kvm/include/x86/kvm_util_arch.h | 4 +++ .../selftests/kvm/include/x86/processor.h | 3 ++ tools/testing/selftests/kvm/include/x86/vmx.h | 8 ++--- .../testing/selftests/kvm/lib/x86/memstress.c | 19 ++++-------- .../testing/selftests/kvm/lib/x86/processor.c | 9 ++++++ tools/testing/selftests/kvm/lib/x86/vmx.c | 30 ++++++++++++------- .../selftests/kvm/x86/vmx_dirty_log_test.c | 7 ++--- 7 files changed, 48 insertions(+), 32 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tool= s/testing/selftests/kvm/include/x86/kvm_util_arch.h index bad381d63b6a..05a1fc1780f2 100644 --- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h +++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h @@ -26,6 +26,8 @@ struct kvm_mmu_arch { struct pte_masks pte_masks; }; =20 +struct kvm_mmu; + struct kvm_vm_arch { vm_vaddr_t gdt; vm_vaddr_t tss; @@ -35,6 +37,8 @@ struct kvm_vm_arch { uint64_t s_bit; int sev_fd; bool is_pt_protected; + + struct kvm_mmu 
*tdp_mmu; }; =20 static inline bool __vm_arch_has_protected_memory(struct kvm_vm_arch *arch) diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/te= sting/selftests/kvm/include/x86/processor.h index b2084434dd8b..973f2069cd3b 100644 --- a/tools/testing/selftests/kvm/include/x86/processor.h +++ b/tools/testing/selftests/kvm/include/x86/processor.h @@ -1457,6 +1457,9 @@ enum pg_level { #define is_huge_pte(mmu, pte) (!!(*(pte) & PTE_HUGE_MASK(mmu))) #define is_nx_pte(mmu, pte) (!!(*(pte) & PTE_NX_MASK(mmu))) =20 +void tdp_mmu_init(struct kvm_vm *vm, int pgtable_levels, + struct pte_masks *pte_masks); + void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr, uint64_t paddr, int level); void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/= selftests/kvm/include/x86/vmx.h index 04b8231d032a..1fd83c23529a 100644 --- a/tools/testing/selftests/kvm/include/x86/vmx.h +++ b/tools/testing/selftests/kvm/include/x86/vmx.h @@ -520,13 +520,11 @@ struct vmx_pages { uint64_t vmwrite_gpa; void *vmwrite; =20 - void *eptp_hva; - uint64_t eptp_gpa; - void *eptp; - void *apic_access_hva; uint64_t apic_access_gpa; void *apic_access; + + uint64_t eptp_gpa; }; =20 union vmx_basic { @@ -568,7 +566,7 @@ void tdp_identity_map_default_memslots(struct vmx_pages= *vmx, void tdp_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t addr, uint64_t size); bool kvm_cpu_has_ept(void); -void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm); +void vm_enable_ept(struct kvm_vm *vm); void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm= *vm); =20 #endif /* SELFTEST_KVM_VMX_H */ diff --git a/tools/testing/selftests/kvm/lib/x86/memstress.c b/tools/testin= g/selftests/kvm/lib/x86/memstress.c index 1928b00bde51..00f7f11e5f0e 100644 --- a/tools/testing/selftests/kvm/lib/x86/memstress.c +++ b/tools/testing/selftests/kvm/lib/x86/memstress.c 
@@ -59,12 +59,10 @@ uint64_t memstress_nested_pages(int nr_vcpus) return 513 + 10 * nr_vcpus; } =20 -void memstress_setup_ept(struct vmx_pages *vmx, struct kvm_vm *vm) +static void memstress_setup_ept_mappings(struct vmx_pages *vmx, struct kvm= _vm *vm) { uint64_t start, end; =20 - prepare_eptp(vmx, vm); - /* * Identity map the first 4G and the test region with 1G pages so that * KVM can shadow the EPT12 with the maximum huge page size supported @@ -79,7 +77,7 @@ void memstress_setup_ept(struct vmx_pages *vmx, struct kv= m_vm *vm) =20 void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vc= pu *vcpus[]) { - struct vmx_pages *vmx, *vmx0 =3D NULL; + struct vmx_pages *vmx; struct kvm_regs regs; vm_vaddr_t vmx_gva; int vcpu_id; @@ -87,18 +85,13 @@ void memstress_setup_nested(struct kvm_vm *vm, int nr_v= cpus, struct kvm_vcpu *vc TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX)); TEST_REQUIRE(kvm_cpu_has_ept()); =20 + vm_enable_ept(vm); for (vcpu_id =3D 0; vcpu_id < nr_vcpus; vcpu_id++) { vmx =3D vcpu_alloc_vmx(vm, &vmx_gva); =20 - if (vcpu_id =3D=3D 0) { - memstress_setup_ept(vmx, vm); - vmx0 =3D vmx; - } else { - /* Share the same EPT table across all vCPUs. 
*/ - vmx->eptp =3D vmx0->eptp; - vmx->eptp_hva =3D vmx0->eptp_hva; - vmx->eptp_gpa =3D vmx0->eptp_gpa; - } + /* The EPTs are shared across vCPUs, setup the mappings once */ + if (vcpu_id =3D=3D 0) + memstress_setup_ept_mappings(vmx, vm); =20 /* * Override the vCPU to run memstress_l1_guest_code() which will diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testin= g/selftests/kvm/lib/x86/processor.c index 3800f4ff6770..8a9298a72897 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -187,6 +187,15 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) virt_mmu_init(vm, &vm->mmu, &pte_masks); } =20 +void tdp_mmu_init(struct kvm_vm *vm, int pgtable_levels, + struct pte_masks *pte_masks) +{ + TEST_ASSERT(!vm->arch.tdp_mmu, "TDP MMU already initialized"); + + vm->arch.tdp_mmu =3D calloc(1, sizeof(*vm->arch.tdp_mmu)); + virt_mmu_init(vm, vm->arch.tdp_mmu, pte_masks); +} + static void *virt_get_pte(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t *parent_pte, uint64_t vaddr, int level) { diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/self= tests/kvm/lib/x86/vmx.c index a3e2eae981da..9d4e391fdf2c 100644 --- a/tools/testing/selftests/kvm/lib/x86/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86/vmx.c @@ -56,6 +56,21 @@ int vcpu_enable_evmcs(struct kvm_vcpu *vcpu) return evmcs_ver; } =20 +void vm_enable_ept(struct kvm_vm *vm) +{ + TEST_ASSERT(kvm_cpu_has_ept(), "KVM doesn't support nested EPT"); + if (vm->arch.tdp_mmu) + return; + + /* TODO: Drop eptPageTableEntry in favor of PTE masks. */ + struct pte_masks pte_masks =3D (struct pte_masks) { + + }; + + /* TODO: Add support for 5-level EPT. */ + tdp_mmu_init(vm, 4, &pte_masks); +} + /* Allocate memory regions for nested VMX tests. 
* * Input Args: @@ -105,6 +120,9 @@ vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva) vmx->vmwrite_gpa =3D addr_gva2gpa(vm, (uintptr_t)vmx->vmwrite); memset(vmx->vmwrite_hva, 0, getpagesize()); =20 + if (vm->arch.tdp_mmu) + vmx->eptp_gpa =3D vm->arch.tdp_mmu->pgd; + *p_vmx_gva =3D vmx_gva; return vmx; } @@ -395,7 +413,8 @@ void __tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm = *vm, uint64_t nested_paddr, uint64_t paddr, int target_level) { const uint64_t page_size =3D PG_LEVEL_SIZE(target_level); - struct eptPageTableEntry *pt =3D vmx->eptp_hva, *pte; + void *eptp_hva =3D addr_gpa2hva(vm, vm->arch.tdp_mmu->pgd); + struct eptPageTableEntry *pt =3D eptp_hva, *pte; uint16_t index; =20 TEST_ASSERT(vm->mode =3D=3D VM_MODE_PXXVYY_4K, @@ -525,15 +544,6 @@ bool kvm_cpu_has_ept(void) return ctrl & SECONDARY_EXEC_ENABLE_EPT; } =20 -void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm) -{ - TEST_ASSERT(kvm_cpu_has_ept(), "KVM doesn't support nested EPT"); - - vmx->eptp =3D (void *)vm_vaddr_alloc_page(vm); - vmx->eptp_hva =3D addr_gva2hva(vm, (uintptr_t)vmx->eptp); - vmx->eptp_gpa =3D addr_gva2gpa(vm, (uintptr_t)vmx->eptp); -} - void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm= *vm) { vmx->apic_access =3D (void *)vm_vaddr_alloc_page(vm); diff --git a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c b/tools/t= esting/selftests/kvm/x86/vmx_dirty_log_test.c index e7d0c08ba29d..5c8cf8ac42a2 100644 --- a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c +++ b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c @@ -93,6 +93,9 @@ static void test_vmx_dirty_log(bool enable_ept) =20 /* Create VM */ vm =3D vm_create_with_one_vcpu(&vcpu, l1_guest_code); + if (enable_ept) + vm_enable_ept(vm); + vmx =3D vcpu_alloc_vmx(vm, &vmx_pages_gva); vcpu_args_set(vcpu, 1, vmx_pages_gva); =20 @@ -113,14 +116,10 @@ static void test_vmx_dirty_log(bool enable_ept) * ... pages in the L2 GPA range [0xc0001000, 0xc0003000) will map to * 0xc0000000. 
* - * Note that prepare_eptp should be called only L1's GPA map is done, - * meaning after the last call to virt_map. - * * When EPT is disabled, the L2 guest code will still access the same L1 * GPAs as the EPT enabled case. */ if (enable_ept) { - prepare_eptp(vmx, vm); tdp_identity_map_default_memslots(vmx, vm); tdp_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, PAGE_SIZE); tdp_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, PAGE_SIZE); --=20 2.52.0.351.gbe84eed79e-goog From nobody Sat Feb 7 12:40:53 2026 Received: from mail-pj1-f74.google.com (mail-pj1-f74.google.com [209.85.216.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2FDFD3043B5 for ; Tue, 30 Dec 2025 23:02:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1767135736; cv=none; b=eEmRAPIxfY0uAgZvE9zGXz6kiW5XCecNTaYLuhSn4+5S+5Oad/dTZC8OBR2R5ULWtD5qRIJCOXnDlxUh8fGJujJFxY85GkybIglF1OFWq8Sjwxq0WqBO1+QaKrGYlD8pcw7ZiBJew/I6/oJH1NOHriWQI/fAAj0efEGGiSnvfc4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1767135736; c=relaxed/simple; bh=Yh8Qab0Gq4P0gx+ZK5aVX/2lf28QcgZxjaey8n2Vr4c=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=gbA6+AcAIH0y8RJw/yEhjhKlV5Iygwpxr3WIyDpA1LHxczmYpSVczy+BifElza46127s/wdYegv/0hwh5f1+4yETgXBiQg3Krp45RcFbXS6LSK2JBL70f2fwYN2lriDSgw5bDYnW9kkodnNW0NWJi3LQImknT9F7dlHNBbPmU/A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=NM4tcSTQ; arc=none smtp.client-ip=209.85.216.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com 
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="NM4tcSTQ" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-34c6e05af3bso22763793a91.3 for ; Tue, 30 Dec 2025 15:02:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1767135732; x=1767740532; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=ja2t3RddJQQ9WtlMGrZPDGGnut4pYDpTPLHSPL1hvlg=; b=NM4tcSTQsnog7kEFqij1HPgUespL5azjSD2YAmvQ16xjGL+BCaCYFxoIACilYFosNi J9YuTRwp53KWD5jEeSjdhngm7TqKnNngnr2iT4wfkODJPBNQwxOYpib1Sc6ypHSbsFjZ 5fdNN0Tk9b1bdBD1tdS7zQEYncVPcRiYZpSGe9S1lMj0K+AKs7FPcmv2u+2jc/b7QEQI 91vCaSlKw3lo9RIR0sOU9jdfw25Xfu7PhDhYvQSaiKPHYR1bKZ/izvJWuJDHja/pIH7H axIQZzcWnzm+oTOxVKxO6m1dtNiFaH9u7SIriNhgeZLU+BHXhzumlaykruDm1pu+Cuys v9+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1767135732; x=1767740532; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=ja2t3RddJQQ9WtlMGrZPDGGnut4pYDpTPLHSPL1hvlg=; b=XI0TzC+Gkmkr8JO6WtbAI4yRqCY0856tsS3X07S3yDmMs1Qivb+T4MjD1QIGkJvUKz 2jR6e8xr46FUYbQTLjNoCQB+Cps3esg33ANrsoI9HUWw2LMyrZHOgKy9DTcOMmQ5eWG3 mEVaNaxDYjhRoFfyx7dpxn10dy3+KeNZ3deb67SVahy15RALQbz5yXReEIdkpC1dRsYT Ip0D4jj7ZRvbdwvR2EIKD+D9lECt4LJDfHdYg6zYTa4hl/grOfwSRRiuA0aTWVKI75VV HctYBegjVxgDL+vzpXtqdZnMNBgrBXU+D4i3anC7WhTz2L5wVFOKmTyPm5mHJPqPqzue ZaMQ== X-Forwarded-Encrypted: i=1; AJvYcCXDiLA4NgVjDuulx0F1jp/7R3D3xcWs/+3Abt63O25D5Xu2kW7GYOoI57KP1rERzCXdWuHeuX8lONTuUXs=@vger.kernel.org X-Gm-Message-State: AOJu0YwOwE86q7u7FoxyFoOTj9FCvB3O+Q56scgU/v4nvGIHx7IzBlgw 9wdIHu80MLNS9Vl8538obTfLmIc7q1xgR7OWuu57Kl1I1AACKfqrs5UEgvJFK/DT/1S1raB3uPP 6A8SmCA== 
X-Google-Smtp-Source: AGHT+IFlmvuQfNmiHzFuNq9XlPth424WfhGaGaW+Waxyx9tMx4HEJVVNAsbuKBuWKPaCjxwrIKXmvslHERQ= X-Received: from pjbnw17.prod.google.com ([2002:a17:90b:2551:b0:34c:30f1:8d54]) (user=seanjc job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:3882:b0:340:d511:e167 with SMTP id 98e67ed59e1d1-34e9207378fmr28865851a91.0.1767135731850; Tue, 30 Dec 2025 15:02:11 -0800 (PST) Reply-To: Sean Christopherson Date: Tue, 30 Dec 2025 15:01:40 -0800 In-Reply-To: <20251230230150.4150236-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20251230230150.4150236-1-seanjc@google.com> X-Mailer: git-send-email 2.52.0.351.gbe84eed79e-goog Message-ID: <20251230230150.4150236-12-seanjc@google.com> Subject: [PATCH v4 11/21] KVM: selftests: Stop passing VMX metadata to TDP mapping functions From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Yosry Ahmed The root GPA can now be retrieved from the nested MMU, stop passing VMX metadata. This is in preparation for making these functions work for NPTs as well. Opportunistically drop tdp_pg_map() since it's unused. No functional change intended. 
Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86/vmx.h | 11 ++-----
 .../testing/selftests/kvm/lib/x86/memstress.c | 11 +++----
 tools/testing/selftests/kvm/lib/x86/vmx.c     | 33 +++++++------------
 .../selftests/kvm/x86/vmx_dirty_log_test.c    |  9 +++--
 4 files changed, 24 insertions(+), 40 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index 1fd83c23529a..4dd4c2094ee6 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -557,14 +557,9 @@ bool load_vmcs(struct vmx_pages *vmx);
 
 bool ept_1g_pages_supported(void);
 
-void tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr,
-                uint64_t paddr);
-void tdp_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr,
-             uint64_t paddr, uint64_t size);
-void tdp_identity_map_default_memslots(struct vmx_pages *vmx,
-                                       struct kvm_vm *vm);
-void tdp_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
-                         uint64_t addr, uint64_t size);
+void tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size);
+void tdp_identity_map_default_memslots(struct kvm_vm *vm);
+void tdp_identity_map_1g(struct kvm_vm *vm, uint64_t addr, uint64_t size);
 bool kvm_cpu_has_ept(void);
 void vm_enable_ept(struct kvm_vm *vm);
 void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/x86/memstress.c b/tools/testing/selftests/kvm/lib/x86/memstress.c
index 00f7f11e5f0e..3319cb57a78d 100644
--- a/tools/testing/selftests/kvm/lib/x86/memstress.c
+++ b/tools/testing/selftests/kvm/lib/x86/memstress.c
@@ -59,7 +59,7 @@ uint64_t memstress_nested_pages(int nr_vcpus)
         return 513 + 10 * nr_vcpus;
 }
 
-static void memstress_setup_ept_mappings(struct vmx_pages *vmx, struct kvm_vm *vm)
+static void memstress_setup_ept_mappings(struct kvm_vm *vm)
 {
         uint64_t start, end;
 
@@ -68,16 +68,15 @@ static void memstress_setup_ept_mappings(struct vmx_pages *vmx, struct kvm_vm *v
          * KVM can shadow the EPT12 with the maximum huge page size supported
          * by the backing source.
          */
-        tdp_identity_map_1g(vmx, vm, 0, 0x100000000ULL);
+        tdp_identity_map_1g(vm, 0, 0x100000000ULL);
 
         start = align_down(memstress_args.gpa, PG_SIZE_1G);
         end = align_up(memstress_args.gpa + memstress_args.size, PG_SIZE_1G);
-        tdp_identity_map_1g(vmx, vm, start, end - start);
+        tdp_identity_map_1g(vm, start, end - start);
 }
 
 void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[])
 {
-        struct vmx_pages *vmx;
         struct kvm_regs regs;
         vm_vaddr_t vmx_gva;
         int vcpu_id;
@@ -87,11 +86,11 @@ void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vc
 
         vm_enable_ept(vm);
         for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
-                vmx = vcpu_alloc_vmx(vm, &vmx_gva);
+                vcpu_alloc_vmx(vm, &vmx_gva);
 
                 /* The EPTs are shared across vCPUs, setup the mappings once */
                 if (vcpu_id == 0)
-                        memstress_setup_ept_mappings(vmx, vm);
+                        memstress_setup_ept_mappings(vm);
 
                 /*
                  * Override the vCPU to run memstress_l1_guest_code() which will
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index 9d4e391fdf2c..ea1c09f9e8ab 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -409,8 +409,8 @@ static void tdp_create_pte(struct kvm_vm *vm,
 }
 
 
-void __tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-                  uint64_t nested_paddr, uint64_t paddr, int target_level)
+void __tdp_pg_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
+                  int target_level)
 {
         const uint64_t page_size = PG_LEVEL_SIZE(target_level);
         void *eptp_hva = addr_gpa2hva(vm, vm->arch.tdp_mmu->pgd);
@@ -453,12 +453,6 @@ void __tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
         }
 }
 
-void tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-                uint64_t nested_paddr, uint64_t paddr)
-{
-        __tdp_pg_map(vmx, vm, nested_paddr, paddr, PG_LEVEL_4K);
-}
-
 /*
  * Map a range of EPT guest physical addresses to the VM's physical address
  *
@@ -476,9 +470,8 @@ void tdp_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
  * Within the VM given by vm, creates a nested guest translation for the
  * page range starting at nested_paddr to the page range starting at paddr.
  */
-void __tdp_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-               uint64_t nested_paddr, uint64_t paddr, uint64_t size,
-               int level)
+void __tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
+               uint64_t size, int level)
 {
         size_t page_size = PG_LEVEL_SIZE(level);
         size_t npages = size / page_size;
@@ -487,23 +480,22 @@ void __tdp_map(struct vmx_pages *vmx, struct kvm_vm *vm,
         TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
 
         while (npages--) {
-                __tdp_pg_map(vmx, vm, nested_paddr, paddr, level);
+                __tdp_pg_map(vm, nested_paddr, paddr, level);
                 nested_paddr += page_size;
                 paddr += page_size;
         }
 }
 
-void tdp_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-             uint64_t nested_paddr, uint64_t paddr, uint64_t size)
+void tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
+             uint64_t size)
 {
-        __tdp_map(vmx, vm, nested_paddr, paddr, size, PG_LEVEL_4K);
+        __tdp_map(vm, nested_paddr, paddr, size, PG_LEVEL_4K);
 }
 
 /* Prepare an identity extended page table that maps all the
  * physical pages in VM.
  */
-void tdp_identity_map_default_memslots(struct vmx_pages *vmx,
-                                       struct kvm_vm *vm)
+void tdp_identity_map_default_memslots(struct kvm_vm *vm)
 {
         uint32_t s, memslot = 0;
         sparsebit_idx_t i, last;
@@ -520,16 +512,15 @@ void tdp_identity_map_default_memslots(struct vmx_pages *vmx,
                 if (i > last)
                         break;
 
-                tdp_map(vmx, vm, (uint64_t)i << vm->page_shift,
+                tdp_map(vm, (uint64_t)i << vm->page_shift,
                         (uint64_t)i << vm->page_shift,
                         1 << vm->page_shift);
         }
 }
 
 /* Identity map a region with 1GiB Pages.
  */
-void tdp_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
-                         uint64_t addr, uint64_t size)
+void tdp_identity_map_1g(struct kvm_vm *vm, uint64_t addr, uint64_t size)
 {
-        __tdp_map(vmx, vm, addr, addr, size, PG_LEVEL_1G);
+        __tdp_map(vm, addr, addr, size, PG_LEVEL_1G);
 }
 
 bool kvm_cpu_has_ept(void)
diff --git a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
index 5c8cf8ac42a2..370f8d3117c2 100644
--- a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
@@ -80,7 +80,6 @@ void l1_guest_code(struct vmx_pages *vmx)
 static void test_vmx_dirty_log(bool enable_ept)
 {
         vm_vaddr_t vmx_pages_gva = 0;
-        struct vmx_pages *vmx;
         unsigned long *bmap;
         uint64_t *host_test_mem;
 
@@ -96,7 +95,7 @@ static void test_vmx_dirty_log(bool enable_ept)
         if (enable_ept)
                 vm_enable_ept(vm);
 
-        vmx = vcpu_alloc_vmx(vm, &vmx_pages_gva);
+        vcpu_alloc_vmx(vm, &vmx_pages_gva);
         vcpu_args_set(vcpu, 1, vmx_pages_gva);
 
         /* Add an extra memory slot for testing dirty logging */
@@ -120,9 +119,9 @@ static void test_vmx_dirty_log(bool enable_ept)
          * GPAs as the EPT enabled case.
          */
         if (enable_ept) {
-                tdp_identity_map_default_memslots(vmx, vm);
-                tdp_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, PAGE_SIZE);
-                tdp_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, PAGE_SIZE);
+                tdp_identity_map_default_memslots(vm);
+                tdp_map(vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, PAGE_SIZE);
+                tdp_map(vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, PAGE_SIZE);
         }
 
         bmap = bitmap_zalloc(TEST_MEM_PAGES);
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:41 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-13-seanjc@google.com>
Subject: [PATCH v4 12/21] KVM: selftests: Add a stage-2 MMU instance to kvm_vm
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Yosry Ahmed

Add a stage-2 MMU instance so that architectures that support nested
virtualization (more specifically, nested stage-2 page tables) can create
and track stage-2 page tables for running L2 guests.

Plumb the structure into common code to avoid cyclical dependencies, and
to provide some line of sight to having common APIs for creating stage-2
mappings.  As a bonus, putting the member in common code justifies using
stage2_mmu instead of tdp_mmu for x86.
Signed-off-by: Sean Christopherson
Reviewed-by: Yosry Ahmed
---
 tools/testing/selftests/kvm/include/kvm_util.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index c1497515fa6a..371d55e0366e 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -116,7 +116,12 @@ struct kvm_vm {
         uint32_t dirty_ring_size;
         uint64_t gpa_tag_mask;
 
+        /*
+         * "mmu" is the guest's stage-1, with a short name because the vast
+         * majority of tests only care about the stage-1 MMU.
+         */
         struct kvm_mmu mmu;
+        struct kvm_mmu stage2_mmu;
 
         struct kvm_vm_arch arch;
 
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:42 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-14-seanjc@google.com>
Subject: [PATCH v4 13/21] KVM: selftests: Reuse virt mapping functions for nested EPTs
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Yosry Ahmed

From: Yosry Ahmed

Rework tdp_map() and friends to use __virt_pg_map() and drop the custom
EPT code in __tdp_pg_map() and tdp_create_pte().
The EPT code and __virt_pg_map() are practically identical, the main
differences are:

- EPT uses the EPT struct overlay instead of the PTE masks.
- EPT always assumes 4-level EPTs.

To reuse __virt_pg_map(), extend the PTE masks to work with EPT's RWX and
X-only capabilities, and provide a tdp_mmu_init() API so that EPT can pass
in the EPT PTE masks along with the root page level (which is currently
hardcoded to '4').

Don't reuse KVM's insane overloading of the USER bit for EPT_R as there's
no reason to multiplex bits in the selftests, e.g. selftests aren't trying
to shadow guest PTEs and thus don't care about funnelling protections into
a common permissions check.

Another benefit of reusing the code is having separate handling for
upper-level PTEs vs 4K PTEs, which avoids some quirks like setting the
large bit on a 4K PTE in the EPTs.

For all intents and purposes, no functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: Yosry Ahmed
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86/kvm_util_arch.h |   4 +-
 .../selftests/kvm/include/x86/processor.h     |  16 ++-
 .../testing/selftests/kvm/lib/x86/processor.c |  21 +++-
 tools/testing/selftests/kvm/lib/x86/vmx.c     | 119 +++---------------
 4 files changed, 52 insertions(+), 108 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
index 05a1fc1780f2..1cf84b8212c6 100644
--- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
+++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
@@ -14,6 +14,8 @@ struct pte_masks {
         uint64_t present;
         uint64_t writable;
         uint64_t user;
+        uint64_t readable;
+        uint64_t executable;
         uint64_t accessed;
         uint64_t dirty;
         uint64_t huge;
@@ -37,8 +39,6 @@ struct kvm_vm_arch {
         uint64_t s_bit;
         int sev_fd;
         bool is_pt_protected;
-
-        struct kvm_mmu *tdp_mmu;
 };
 
 static inline bool __vm_arch_has_protected_memory(struct kvm_vm_arch *arch)
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 973f2069cd3b..4c0d2fc83c1c 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1442,6 +1442,8 @@ enum pg_level {
 #define PTE_PRESENT_MASK(mmu)   ((mmu)->arch.pte_masks.present)
 #define PTE_WRITABLE_MASK(mmu)  ((mmu)->arch.pte_masks.writable)
 #define PTE_USER_MASK(mmu)      ((mmu)->arch.pte_masks.user)
+#define PTE_READABLE_MASK(mmu)  ((mmu)->arch.pte_masks.readable)
+#define PTE_EXECUTABLE_MASK(mmu)        ((mmu)->arch.pte_masks.executable)
 #define PTE_ACCESSED_MASK(mmu)  ((mmu)->arch.pte_masks.accessed)
 #define PTE_DIRTY_MASK(mmu)     ((mmu)->arch.pte_masks.dirty)
 #define PTE_HUGE_MASK(mmu)      ((mmu)->arch.pte_masks.huge)
@@ -1449,13 +1451,23 @@ enum pg_level {
 #define PTE_C_BIT_MASK(mmu)     ((mmu)->arch.pte_masks.c)
 #define PTE_S_BIT_MASK(mmu)     ((mmu)->arch.pte_masks.s)
 
-#define is_present_pte(mmu, pte)        (!!(*(pte) & PTE_PRESENT_MASK(mmu)))
+/*
+ * For PTEs without a PRESENT bit (i.e. EPT entries), treat the PTE as present
+ * if it's executable or readable, as EPT supports execute-only PTEs, but not
+ * write-only PTEs.
+ */
+#define is_present_pte(mmu, pte)                                        \
+        (PTE_PRESENT_MASK(mmu) ?                                        \
+         !!(*(pte) & PTE_PRESENT_MASK(mmu)) :                           \
+         !!(*(pte) & (PTE_READABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu))))
+#define is_executable_pte(mmu, pte)     \
+        ((*(pte) & (PTE_EXECUTABLE_MASK(mmu) | PTE_NX_MASK(mmu))) == PTE_EXECUTABLE_MASK(mmu))
 #define is_writable_pte(mmu, pte)       (!!(*(pte) & PTE_WRITABLE_MASK(mmu)))
 #define is_user_pte(mmu, pte)           (!!(*(pte) & PTE_USER_MASK(mmu)))
 #define is_accessed_pte(mmu, pte)       (!!(*(pte) & PTE_ACCESSED_MASK(mmu)))
 #define is_dirty_pte(mmu, pte)          (!!(*(pte) & PTE_DIRTY_MASK(mmu)))
 #define is_huge_pte(mmu, pte)           (!!(*(pte) & PTE_HUGE_MASK(mmu)))
-#define is_nx_pte(mmu, pte)             (!!(*(pte) & PTE_NX_MASK(mmu)))
+#define is_nx_pte(mmu, pte)             (!is_executable_pte(mmu, pte))
 
 void tdp_mmu_init(struct kvm_vm *vm, int pgtable_levels,
                   struct pte_masks *pte_masks);
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 8a9298a72897..41316cac94e0 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -165,6 +165,10 @@ static void virt_mmu_init(struct kvm_vm *vm, struct kvm_mmu *mmu,
                 mmu->pgd_created = true;
                 mmu->arch.pte_masks = *pte_masks;
         }
+
+        TEST_ASSERT(mmu->pgtable_levels == 4 || mmu->pgtable_levels == 5,
+                    "Selftests MMU only supports 4-level and 5-level paging, not %u-level paging",
+                    mmu->pgtable_levels);
 }
 
 void virt_arch_pgd_alloc(struct kvm_vm *vm)
@@ -180,6 +184,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
                 .dirty          = BIT_ULL(6),
                 .huge           = BIT_ULL(7),
                 .nx             = BIT_ULL(63),
+                .executable     = 0,
                 .c              = vm->arch.c_bit,
                 .s              = vm->arch.s_bit,
         };
@@ -190,10 +195,10 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 void tdp_mmu_init(struct kvm_vm *vm, int pgtable_levels,
                   struct pte_masks *pte_masks)
 {
-        TEST_ASSERT(!vm->arch.tdp_mmu, "TDP MMU already initialized");
+        TEST_ASSERT(!vm->stage2_mmu.pgtable_levels, "TDP MMU already initialized");
 
-        vm->arch.tdp_mmu = calloc(1, sizeof(*vm->arch.tdp_mmu));
-        virt_mmu_init(vm, vm->arch.tdp_mmu, pte_masks);
+        vm->stage2_mmu.pgtable_levels = pgtable_levels;
+        virt_mmu_init(vm, &vm->stage2_mmu, pte_masks);
 }
 
 static void *virt_get_pte(struct kvm_vm *vm, struct kvm_mmu *mmu,
@@ -223,7 +228,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
         paddr = vm_untag_gpa(vm, paddr);
 
         if (!is_present_pte(mmu, pte)) {
-                *pte = PTE_PRESENT_MASK(mmu) | PTE_WRITABLE_MASK(mmu);
+                *pte = PTE_PRESENT_MASK(mmu) | PTE_READABLE_MASK(mmu) |
+                       PTE_WRITABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu);
                 if (current_level == target_level)
                         *pte |= PTE_HUGE_MASK(mmu) | (paddr & PHYSICAL_PAGE_MASK);
                 else
@@ -269,6 +275,9 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr,
         TEST_ASSERT(vm_untag_gpa(vm, paddr) == paddr,
                     "Unexpected bits in paddr: %lx", paddr);
 
+        TEST_ASSERT(!PTE_EXECUTABLE_MASK(mmu) || !PTE_NX_MASK(mmu),
+                    "X and NX bit masks cannot be used simultaneously");
+
         /*
          * Allocate upper level page tables, if not already present.  Return
          * early if a hugepage was created.
@@ -286,7 +295,9 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr,
         pte = virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K);
         TEST_ASSERT(!is_present_pte(mmu, pte),
                     "PTE already present for 4k page at vaddr: 0x%lx", vaddr);
-        *pte = PTE_PRESENT_MASK(mmu) | PTE_WRITABLE_MASK(mmu) | (paddr & PHYSICAL_PAGE_MASK);
+        *pte = PTE_PRESENT_MASK(mmu) | PTE_READABLE_MASK(mmu) |
+               PTE_WRITABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu) |
+               (paddr & PHYSICAL_PAGE_MASK);
 
         /*
          * Neither SEV nor TDX supports shared page tables, so only the final
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index ea1c09f9e8ab..e3737b3d9120 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -25,21 +25,6 @@ bool enable_evmcs;
 struct hv_enlightened_vmcs *current_evmcs;
 struct hv_vp_assist_page *current_vp_assist;
 
-struct eptPageTableEntry {
-        uint64_t readable:1;
-        uint64_t writable:1;
-        uint64_t executable:1;
-        uint64_t memory_type:3;
-        uint64_t ignore_pat:1;
-        uint64_t page_size:1;
-        uint64_t accessed:1;
-        uint64_t dirty:1;
-        uint64_t ignored_11_10:2;
-        uint64_t address:40;
-        uint64_t ignored_62_52:11;
-        uint64_t suppress_ve:1;
-};
-
 int vcpu_enable_evmcs(struct kvm_vcpu *vcpu)
 {
         uint16_t evmcs_ver;
@@ -58,13 +43,24 @@ int vcpu_enable_evmcs(struct kvm_vcpu *vcpu)
 
 void vm_enable_ept(struct kvm_vm *vm)
 {
+        struct pte_masks pte_masks;
+
         TEST_ASSERT(kvm_cpu_has_ept(), "KVM doesn't support nested EPT");
-        if (vm->arch.tdp_mmu)
-                return;
-
-        /* TODO: Drop eptPageTableEntry in favor of PTE masks. */
-        struct pte_masks pte_masks = (struct pte_masks) {
 
+        /*
+         * EPTs do not have 'present' or 'user' bits, instead bit 0 is the
+         * 'readable' bit.
+         */
+        pte_masks = (struct pte_masks) {
+                .present        = 0,
+                .user           = 0,
+                .readable       = BIT_ULL(0),
+                .writable       = BIT_ULL(1),
+                .executable     = BIT_ULL(2),
+                .huge           = BIT_ULL(7),
+                .accessed       = BIT_ULL(8),
+                .dirty          = BIT_ULL(9),
+                .nx             = 0,
         };
 
         /* TODO: Add support for 5-level EPT. */
@@ -120,8 +116,8 @@ vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva)
         vmx->vmwrite_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmwrite);
         memset(vmx->vmwrite_hva, 0, getpagesize());
 
-        if (vm->arch.tdp_mmu)
-                vmx->eptp_gpa = vm->arch.tdp_mmu->pgd;
+        if (vm->stage2_mmu.pgd_created)
+                vmx->eptp_gpa = vm->stage2_mmu.pgd;
 
         *p_vmx_gva = vmx_gva;
         return vmx;
@@ -377,82 +373,6 @@ void prepare_vmcs(struct vmx_pages *vmx, void *guest_rip, void *guest_rsp)
         init_vmcs_guest_state(guest_rip, guest_rsp);
 }
 
-static void tdp_create_pte(struct kvm_vm *vm,
-                           struct eptPageTableEntry *pte,
-                           uint64_t nested_paddr,
-                           uint64_t paddr,
-                           int current_level,
-                           int target_level)
-{
-        if (!pte->readable) {
-                pte->writable = true;
-                pte->readable = true;
-                pte->executable = true;
-                pte->page_size = (current_level == target_level);
-                if (pte->page_size)
-                        pte->address = paddr >> vm->page_shift;
-                else
-                        pte->address = vm_alloc_page_table(vm) >> vm->page_shift;
-        } else {
-                /*
-                 * Entry already present.  Assert that the caller doesn't want
-                 * a hugepage at this level, and that there isn't a hugepage at
-                 * this level.
-                 */
-                TEST_ASSERT(current_level != target_level,
-                            "Cannot create hugepage at level: %u, nested_paddr: 0x%lx",
-                            current_level, nested_paddr);
-                TEST_ASSERT(!pte->page_size,
-                            "Cannot create page table at level: %u, nested_paddr: 0x%lx",
-                            current_level, nested_paddr);
-        }
-}
-
-
-void __tdp_pg_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
-                  int target_level)
-{
-        const uint64_t page_size = PG_LEVEL_SIZE(target_level);
-        void *eptp_hva = addr_gpa2hva(vm, vm->arch.tdp_mmu->pgd);
-        struct eptPageTableEntry *pt = eptp_hva, *pte;
-        uint16_t index;
-
-        TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
-                    "Unknown or unsupported guest mode: 0x%x", vm->mode);
-
-        TEST_ASSERT((nested_paddr >> 48) == 0,
-                    "Nested physical address 0x%lx is > 48-bits and requires 5-level EPT",
-                    nested_paddr);
-        TEST_ASSERT((nested_paddr % page_size) == 0,
-                    "Nested physical address not on page boundary,\n"
-                    "  nested_paddr: 0x%lx page_size: 0x%lx",
-                    nested_paddr, page_size);
-        TEST_ASSERT((nested_paddr >> vm->page_shift) <= vm->max_gfn,
-                    "Physical address beyond beyond maximum supported,\n"
-                    "  nested_paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
-                    paddr, vm->max_gfn, vm->page_size);
-        TEST_ASSERT((paddr % page_size) == 0,
-                    "Physical address not on page boundary,\n"
-                    "  paddr: 0x%lx page_size: 0x%lx",
-                    paddr, page_size);
-        TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
-                    "Physical address beyond beyond maximum supported,\n"
-                    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
-                    paddr, vm->max_gfn, vm->page_size);
-
-        for (int level = PG_LEVEL_512G; level >= PG_LEVEL_4K; level--) {
-                index = (nested_paddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu;
-                pte = &pt[index];
-
-                tdp_create_pte(vm, pte, nested_paddr, paddr, level, target_level);
-
-                if (pte->page_size)
-                        break;
-
-                pt = addr_gpa2hva(vm, pte->address * vm->page_size);
-        }
-}
-
 /*
  * Map a range of EPT guest physical addresses to the VM's physical address
  *
@@ -473,6 +393,7 @@ void __tdp_pg_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
 void __tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
                uint64_t size, int level)
 {
+        struct kvm_mmu *mmu = &vm->stage2_mmu;
         size_t page_size = PG_LEVEL_SIZE(level);
         size_t npages = size / page_size;
 
@@ -480,7 +401,7 @@ void __tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
         TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
 
         while (npages--) {
-                __tdp_pg_map(vm, nested_paddr, paddr, level);
+                __virt_pg_map(vm, mmu, nested_paddr, paddr, level);
                 nested_paddr += page_size;
                 paddr += page_size;
         }
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
header.i=@google.com header.b=wqq54Den; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="wqq54Den" Received: by mail-pj1-f73.google.com with SMTP id 98e67ed59e1d1-34c6e05af3bso22763851a91.3 for ; Tue, 30 Dec 2025 15:02:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1767135737; x=1767740537; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=58nGY4ldtTkRgfnQQsIVSVuYVZGfnI/GIgKO3s4AINY=; b=wqq54DencKpEVKF+/ZRe/p6aFGcRxXQ2AWAyTBtOUhWqtLOVpHf4g17eew/ouX1yQh hUWd+AHN9Tua+aVq/+zRV+Rq/Ug3Tjc/w4pBSBPNokZ3oFx7Z/Cq2JZHX1NEGmPvKoJV bolg+2pSDEy2he4KgZHzfUpjvVt+JX5Rsz27J5eqVaV3h+aEE44/8jO/84ZwPhXDj34M A65cSeBOA+Sm2kxnjDR5cq9skNNcfOOYNkaT2iJN94SA9/dz4Xlkh9ZEQwZ+TeRzpwpC FQfKQxInWvosAA90Du2rTWAmlCU44ijo/ctjN/5KCsrT/mfAHr9eCDr+zwDR3JE7R3SG 7EZg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1767135737; x=1767740537; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=58nGY4ldtTkRgfnQQsIVSVuYVZGfnI/GIgKO3s4AINY=; b=q94EzsT1uWXN5YYTsvO3FH33rFULktAJBnl8ffxEIZrwnykfgnOED5lQkxt/SaKftm FCLrneiipGRL/lHRzmxLUw/10ZUHR7qZU6EBBsherflhbtODsrEd1R5sWzr/+hGI5sso URVU6CXwfiUqmsjK4x3XYAzV/vPc8kgTKfGHHH1ZSLIFQduRiY3ARZK0NDV2SpNsqotV JuejVtp5YuqCMDzDMhXAmtaniBFTxa5kR0kNcOPozF4GUwqysMpNOLxtXmTCqz9nxElO baDkhtYrVBtpfYVCUMSnQLng2NnAssr+nfDmYNYkgC2AQW7FNaKsG2PBmynmsxFROqiY tGmQ== X-Forwarded-Encrypted: i=1; 
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:43 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-15-seanjc@google.com>
Subject: [PATCH v4 14/21] KVM: selftests: Move TDP mapping functions outside of vmx.c
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Yosry Ahmed

Now that the functions are no longer VMX-specific, move them to
processor.c. Do a minor comment tweak, replacing 'EPT' with 'TDP'.

No functional change intended.
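
[For illustration only, not part of the patch: the shape of the mapping
loop being moved can be exercised standalone. `tdp_map_sketch()` and
`record_pg_map()` below are hypothetical stand-ins for `__tdp_map()` and
`__virt_pg_map()`, and `PG_SIZE_4K` is an assumed 4 KiB page size.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PG_SIZE_4K (1ULL << 12)	/* assumed 4 KiB page size */

/* Hypothetical stand-in for __virt_pg_map(): just records each mapping. */
static size_t nr_mapped;
static uint64_t last_nested, last_host;

static void record_pg_map(uint64_t nested_paddr, uint64_t paddr)
{
	nr_mapped++;
	last_nested = nested_paddr;
	last_host = paddr;
}

/*
 * Mirrors the shape of __tdp_map(): reject ranges that wrap, then split
 * [paddr, paddr + size) into page_size chunks, one mapping per chunk.
 */
static void tdp_map_sketch(uint64_t nested_paddr, uint64_t paddr,
			   uint64_t size, uint64_t page_size)
{
	size_t npages = size / page_size;

	assert(nested_paddr + size > nested_paddr);	/* "Vaddr overflow" */
	assert(paddr + size > paddr);			/* "Paddr overflow" */

	while (npages--) {
		record_pg_map(nested_paddr, paddr);
		nested_paddr += page_size;
		paddr += page_size;
	}
}
```

Mapping 16 KiB at 4 KiB granularity yields four calls, with the final
pair of addresses advanced by three pages from the starting addresses.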
Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86/processor.h     |  4 ++
 tools/testing/selftests/kvm/include/x86/vmx.h |  3 -
 .../testing/selftests/kvm/lib/x86/processor.c | 53 ++++++++++++++
 tools/testing/selftests/kvm/lib/x86/vmx.c     | 71 -------------------
 4 files changed, 57 insertions(+), 74 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 4c0d2fc83c1c..d134c886f280 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1477,6 +1477,10 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr,
 void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
		    uint64_t nr_bytes, int level);
 
+void tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size);
+void tdp_identity_map_default_memslots(struct kvm_vm *vm);
+void tdp_identity_map_1g(struct kvm_vm *vm, uint64_t addr, uint64_t size);
+
 /*
  * Basic CPU control in CR0
  */
diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index 4dd4c2094ee6..92b918700d24 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -557,9 +557,6 @@ bool load_vmcs(struct vmx_pages *vmx);
 
 bool ept_1g_pages_supported(void);
 
-void tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size);
-void tdp_identity_map_default_memslots(struct kvm_vm *vm);
-void tdp_identity_map_1g(struct kvm_vm *vm, uint64_t addr, uint64_t size);
 bool kvm_cpu_has_ept(void);
 void vm_enable_ept(struct kvm_vm *vm);
 void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 41316cac94e0..29e7d172f945 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -472,6 +472,59 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
	}
 }
 
+void __tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
+	       uint64_t size, int level)
+{
+	size_t page_size = PG_LEVEL_SIZE(level);
+	size_t npages = size / page_size;
+
+	TEST_ASSERT(nested_paddr + size > nested_paddr, "Vaddr overflow");
+	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+
+	while (npages--) {
+		__virt_pg_map(vm, &vm->stage2_mmu, nested_paddr, paddr, level);
+		nested_paddr += page_size;
+		paddr += page_size;
+	}
+}
+
+void tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
+	     uint64_t size)
+{
+	__tdp_map(vm, nested_paddr, paddr, size, PG_LEVEL_4K);
+}
+
+/* Prepare an identity extended page table that maps all the
+ * physical pages in VM.
+ */
+void tdp_identity_map_default_memslots(struct kvm_vm *vm)
+{
+	uint32_t s, memslot = 0;
+	sparsebit_idx_t i, last;
+	struct userspace_mem_region *region = memslot2region(vm, memslot);
+
+	/* Only memslot 0 is mapped here, ensure it's the only one being used */
+	for (s = 0; s < NR_MEM_REGIONS; s++)
+		TEST_ASSERT_EQ(vm->memslots[s], 0);
+
+	i = (region->region.guest_phys_addr >> vm->page_shift) - 1;
+	last = i + (region->region.memory_size >> vm->page_shift);
+	for (;;) {
+		i = sparsebit_next_clear(region->unused_phy_pages, i);
+		if (i > last)
+			break;
+
+		tdp_map(vm, (uint64_t)i << vm->page_shift,
+			(uint64_t)i << vm->page_shift, 1 << vm->page_shift);
+	}
+}
+
+/* Identity map a region with 1GiB Pages. */
+void tdp_identity_map_1g(struct kvm_vm *vm, uint64_t addr, uint64_t size)
+{
+	__tdp_map(vm, addr, addr, size, PG_LEVEL_1G);
+}
+
 /*
  * Set Unusable Segment
  *
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index e3737b3d9120..448a63457467 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -373,77 +373,6 @@ void prepare_vmcs(struct vmx_pages *vmx, void *guest_rip, void *guest_rsp)
	init_vmcs_guest_state(guest_rip, guest_rsp);
 }
 
-/*
- * Map a range of EPT guest physical addresses to the VM's physical address
- *
- * Input Args:
- *   vm - Virtual Machine
- *   nested_paddr - Nested guest physical address to map
- *   paddr - VM Physical Address
- *   size - The size of the range to map
- *   level - The level at which to map the range
- *
- * Output Args: None
- *
- * Return: None
- *
- * Within the VM given by vm, creates a nested guest translation for the
- * page range starting at nested_paddr to the page range starting at paddr.
- */
-void __tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
-	       uint64_t size, int level)
-{
-	struct kvm_mmu *mmu = &vm->stage2_mmu;
-	size_t page_size = PG_LEVEL_SIZE(level);
-	size_t npages = size / page_size;
-
-	TEST_ASSERT(nested_paddr + size > nested_paddr, "Vaddr overflow");
-	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
-
-	while (npages--) {
-		__virt_pg_map(vm, mmu, nested_paddr, paddr, level);
-		nested_paddr += page_size;
-		paddr += page_size;
-	}
-}
-
-void tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
-	     uint64_t size)
-{
-	__tdp_map(vm, nested_paddr, paddr, size, PG_LEVEL_4K);
-}
-
-/* Prepare an identity extended page table that maps all the
- * physical pages in VM.
- */
-void tdp_identity_map_default_memslots(struct kvm_vm *vm)
-{
-	uint32_t s, memslot = 0;
-	sparsebit_idx_t i, last;
-	struct userspace_mem_region *region = memslot2region(vm, memslot);
-
-	/* Only memslot 0 is mapped here, ensure it's the only one being used */
-	for (s = 0; s < NR_MEM_REGIONS; s++)
-		TEST_ASSERT_EQ(vm->memslots[s], 0);
-
-	i = (region->region.guest_phys_addr >> vm->page_shift) - 1;
-	last = i + (region->region.memory_size >> vm->page_shift);
-	for (;;) {
-		i = sparsebit_next_clear(region->unused_phy_pages, i);
-		if (i > last)
-			break;
-
-		tdp_map(vm, (uint64_t)i << vm->page_shift,
-			(uint64_t)i << vm->page_shift, 1 << vm->page_shift);
-	}
-}
-
-/* Identity map a region with 1GiB Pages. */
-void tdp_identity_map_1g(struct kvm_vm *vm, uint64_t addr, uint64_t size)
-{
-	__tdp_map(vm, addr, addr, size, PG_LEVEL_1G);
-}
-
 bool kvm_cpu_has_ept(void)
 {
	uint64_t ctrl;
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:44 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-16-seanjc@google.com>
Subject: [PATCH v4 15/21] KVM: selftests: Allow kvm_cpu_has_ept() to be called on AMD CPUs
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Yosry Ahmed
From: Yosry Ahmed

In preparation for generalizing the nested dirty logging test, it will be
necessary to check whether either EPT or NPT is enabled. To avoid gating
the kvm_cpu_has_ept() call on the CPU type, make the function return
false when VMX is not available instead of trying to read VMX-only MSRs.

No functional change intended.

Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/lib/x86/vmx.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index 448a63457467..c87b340362a9 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -377,6 +377,9 @@ bool kvm_cpu_has_ept(void)
 {
	uint64_t ctrl;
 
+	if (!kvm_cpu_has(X86_FEATURE_VMX))
+		return false;
+
	ctrl = kvm_get_feature_msr(MSR_IA32_VMX_TRUE_PROCBASED_CTLS) >> 32;
	if (!(ctrl & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS))
		return false;
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:45 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-17-seanjc@google.com>
Subject: [PATCH v4 16/21] KVM: selftests: Add support for nested NPTs
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Yosry Ahmed
From: Yosry Ahmed

Implement nCR3 and NPT initialization functions, similar to the EPT
equivalents, and create common TDP helpers for enablement checking and
initialization. Enable NPT for nested guests by default if the TDP MMU
was initialized, similar to VMX.

Reuse the PTE masks from the main MMU in the NPT MMU, except for the C
and S bits related to confidential VMs.

Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86/processor.h     |  2 ++
 .../selftests/kvm/include/x86/svm_util.h      |  9 ++++++++
 .../testing/selftests/kvm/lib/x86/memstress.c |  4 ++--
 .../testing/selftests/kvm/lib/x86/processor.c | 15 +++++++++++++
 tools/testing/selftests/kvm/lib/x86/svm.c     | 21 +++++++++++++++++++
 .../selftests/kvm/x86/vmx_dirty_log_test.c    |  4 ++--
 6 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index d134c886f280..deb471fb9b51 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1477,6 +1477,8 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr,
 void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
		    uint64_t nr_bytes, int level);
 
+void vm_enable_tdp(struct kvm_vm *vm);
+bool kvm_cpu_has_tdp(void);
 void tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size);
 void tdp_identity_map_default_memslots(struct kvm_vm *vm);
 void tdp_identity_map_1g(struct kvm_vm *vm, uint64_t addr, uint64_t size);
diff --git a/tools/testing/selftests/kvm/include/x86/svm_util.h b/tools/testing/selftests/kvm/include/x86/svm_util.h
index b74c6dcddcbd..5d7c42534bc4 100644
--- a/tools/testing/selftests/kvm/include/x86/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86/svm_util.h
@@ -27,6 +27,9 @@ struct svm_test_data {
	void *msr; /* gva */
	void *msr_hva;
	uint64_t msr_gpa;
+
+	/* NPT */
+	uint64_t ncr3_gpa;
 };
 
 static inline void vmmcall(void)
@@ -57,6 +60,12 @@ struct svm_test_data *vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva);
 void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp);
 void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa);
 
+static inline bool kvm_cpu_has_npt(void)
+{
+	return kvm_cpu_has(X86_FEATURE_NPT);
+}
+void vm_enable_npt(struct kvm_vm *vm);
+
 int open_sev_dev_path_or_exit(void);
 
 #endif /* SELFTEST_KVM_SVM_UTILS_H */
diff --git a/tools/testing/selftests/kvm/lib/x86/memstress.c b/tools/testing/selftests/kvm/lib/x86/memstress.c
index 3319cb57a78d..407abfc34909 100644
--- a/tools/testing/selftests/kvm/lib/x86/memstress.c
+++ b/tools/testing/selftests/kvm/lib/x86/memstress.c
@@ -82,9 +82,9 @@ void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vc
	int vcpu_id;
 
	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
-	TEST_REQUIRE(kvm_cpu_has_ept());
+	TEST_REQUIRE(kvm_cpu_has_tdp());
 
-	vm_enable_ept(vm);
+	vm_enable_tdp(vm);
	for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
		vcpu_alloc_vmx(vm, &vmx_gva);
 
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 29e7d172f945..a3a4c9a4cbcb 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -8,7 +8,9 @@
 #include "kvm_util.h"
 #include "pmu.h"
 #include "processor.h"
+#include "svm_util.h"
 #include "sev.h"
+#include "vmx.h"
 
 #ifndef NUM_INTERRUPTS
 #define NUM_INTERRUPTS 256
@@ -472,6 +474,19 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
	}
 }
 
+void vm_enable_tdp(struct kvm_vm *vm)
+{
+	if (kvm_cpu_has(X86_FEATURE_VMX))
+		vm_enable_ept(vm);
+	else
+		vm_enable_npt(vm);
+}
+
+bool kvm_cpu_has_tdp(void)
+{
+	return kvm_cpu_has_ept() || kvm_cpu_has_npt();
+}
+
 void __tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr,
	       uint64_t size, int level)
 {
diff --git a/tools/testing/selftests/kvm/lib/x86/svm.c b/tools/testing/selftests/kvm/lib/x86/svm.c
index d239c2097391..8e4795225595 100644
--- a/tools/testing/selftests/kvm/lib/x86/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86/svm.c
@@ -59,6 +59,22 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
	seg->base = base;
 }
 
+void vm_enable_npt(struct kvm_vm *vm)
+{
+	struct pte_masks pte_masks;
+
+	TEST_ASSERT(kvm_cpu_has_npt(), "KVM doesn't support nested NPT");
+
+	/*
+	 * NPTs use the same PTE format, but deliberately drop the C-bit as the
+	 * per-VM shared vs. private information is only meant for stage-1.
+	 */
+	pte_masks = vm->mmu.arch.pte_masks;
+	pte_masks.c = 0;
+
+	tdp_mmu_init(vm, vm->mmu.pgtable_levels, &pte_masks);
+}
+
 void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp)
 {
	struct vmcb *vmcb = svm->vmcb;
@@ -102,6 +118,11 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
	vmcb->save.rip = (u64)guest_rip;
	vmcb->save.rsp = (u64)guest_rsp;
	guest_regs.rdi = (u64)svm;
+
+	if (svm->ncr3_gpa) {
+		ctrl->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
+		ctrl->nested_cr3 = svm->ncr3_gpa;
+	}
 }
 
 /*
diff --git a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
index 370f8d3117c2..032ab8bf60a4 100644
--- a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
@@ -93,7 +93,7 @@ static void test_vmx_dirty_log(bool enable_ept)
	/* Create VM */
	vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
	if (enable_ept)
-		vm_enable_ept(vm);
+		vm_enable_tdp(vm);
 
	vcpu_alloc_vmx(vm, &vmx_pages_gva);
	vcpu_args_set(vcpu, 1, vmx_pages_gva);
@@ -170,7 +170,7 @@ int main(int argc, char *argv[])
 
	test_vmx_dirty_log(/*enable_ept=*/false);
 
-	if (kvm_cpu_has_ept())
+	if (kvm_cpu_has_tdp())
		test_vmx_dirty_log(/*enable_ept=*/true);
 
	return 0;
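
[For illustration only, not part of the patch: the dispatch introduced by
vm_enable_tdp()/kvm_cpu_has_tdp() reduces to vendor-neutral wrappers over
the per-vendor checks. A minimal sketch, with `has_vmx`/`has_ept`/`has_npt`
as hypothetical stand-ins for the `kvm_cpu_has()` feature queries:]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical feature flags standing in for kvm_cpu_has(X86_FEATURE_*). */
static bool has_vmx, has_ept, has_npt;

/* Mirrors kvm_cpu_has_tdp(): TDP is usable if either vendor flavor is. */
static bool cpu_has_tdp_sketch(void)
{
	return has_ept || has_npt;
}

/* Mirrors vm_enable_tdp(): pick EPT when VMX is present, NPT otherwise. */
static const char *enable_tdp_sketch(void)
{
	return has_vmx ? "ept" : "npt";
}
```

Callers such as memstress_setup_nested() then need only the generic pair,
regardless of whether the host is Intel or AMD.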
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:46 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-18-seanjc@google.com>
Subject: [PATCH v4 17/21] KVM: selftests: Set the user bit on nested NPT PTEs
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Yosry Ahmed

From: Yosry Ahmed

According to the APM, NPT walks are treated as user accesses. In
preparation for supporting NPT mappings, set the 'user' bit on NPTs by
adding a mask of bits to always be set on PTEs in kvm_mmu.
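
[For illustration only, not part of the patch: the effect of an
always-set mask on PTE construction can be sketched as below. The bit
positions and `make_pte_sketch()` are illustrative stand-ins, not the
selftest's actual PTE layout or helpers.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit positions; the real masks live in struct pte_masks. */
#define PTE_PRESENT	(1ULL << 0)
#define PTE_WRITABLE	(1ULL << 1)
#define PTE_USER	(1ULL << 2)

struct pte_masks_sketch {
	uint64_t user;
	uint64_t always_set;	/* bits unconditionally ORed into every PTE */
};

/* Mirrors the patched construction: OR the always-set mask into each PTE. */
static uint64_t make_pte_sketch(const struct pte_masks_sketch *m,
				uint64_t paddr)
{
	return PTE_PRESENT | PTE_WRITABLE | m->always_set | paddr;
}
```

With `always_set = user`, every nested PTE carries the user bit, so NPT
walks (which the APM treats as user accesses) are permitted; leaving
`always_set` zero reproduces the pre-patch behavior.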
Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86/kvm_util_arch.h | 2 ++
 tools/testing/selftests/kvm/include/x86/processor.h     | 1 +
 tools/testing/selftests/kvm/lib/x86/processor.c         | 5 +++--
 tools/testing/selftests/kvm/lib/x86/svm.c               | 3 +++
 4 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
index 1cf84b8212c6..be35d26bb320 100644
--- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
+++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
@@ -22,6 +22,8 @@ struct pte_masks {
	uint64_t nx;
	uint64_t c;
	uint64_t s;
+
+	uint64_t always_set;
 };
 
 struct kvm_mmu_arch {
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index deb471fb9b51..7b7d962244d6 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1450,6 +1450,7 @@ enum pg_level {
 #define PTE_NX_MASK(mmu)	((mmu)->arch.pte_masks.nx)
 #define PTE_C_BIT_MASK(mmu)	((mmu)->arch.pte_masks.c)
 #define PTE_S_BIT_MASK(mmu)	((mmu)->arch.pte_masks.s)
+#define PTE_ALWAYS_SET_MASK(mmu)	((mmu)->arch.pte_masks.always_set)
 
 /*
  * For PTEs without a PRESENT bit (i.e. EPT entries), treat the PTE as present
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index a3a4c9a4cbcb..5a3385d48902 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -231,7 +231,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 
	if (!is_present_pte(mmu, pte)) {
		*pte = PTE_PRESENT_MASK(mmu) | PTE_READABLE_MASK(mmu) |
-		       PTE_WRITABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu);
+		       PTE_WRITABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu) |
+		       PTE_ALWAYS_SET_MASK(mmu);
		if (current_level == target_level)
			*pte |= PTE_HUGE_MASK(mmu) | (paddr & PHYSICAL_PAGE_MASK);
		else
@@ -299,7 +300,7 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr,
		    "PTE already present for 4k page at vaddr: 0x%lx", vaddr);
	*pte = PTE_PRESENT_MASK(mmu) | PTE_READABLE_MASK(mmu) |
	       PTE_WRITABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu) |
-	       (paddr & PHYSICAL_PAGE_MASK);
+	       PTE_ALWAYS_SET_MASK(mmu) | (paddr & PHYSICAL_PAGE_MASK);
 
	/*
	 * Neither SEV nor TDX supports shared page tables, so only the final
diff --git a/tools/testing/selftests/kvm/lib/x86/svm.c b/tools/testing/selftests/kvm/lib/x86/svm.c
index 8e4795225595..18e9e9089643 100644
--- a/tools/testing/selftests/kvm/lib/x86/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86/svm.c
@@ -72,6 +72,9 @@ void vm_enable_npt(struct kvm_vm *vm)
	pte_masks = vm->mmu.arch.pte_masks;
	pte_masks.c = 0;
 
+	/* NPT walks are treated as user accesses, so set the 'user' bit. */
+	pte_masks.always_set = pte_masks.user;
+
	tdp_mmu_init(vm, vm->mmu.pgtable_levels, &pte_masks);
 }
 
-- 
2.52.0.351.gbe84eed79e-goog
30 Dec 2025 15:02:26 -0800 (PST)
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:47 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-19-seanjc@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: git-send-email 2.52.0.351.gbe84eed79e-goog
Subject: [PATCH v4 18/21] KVM: selftests: Extend vmx_dirty_log_test to cover SVM
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Type: text/plain; charset="utf-8"

From: Yosry Ahmed

Generalize the code in vmx_dirty_log_test.c by adding SVM-specific L1
code, doing some renaming (e.g. EPT -> TDP), and having setup code for
both SVM and VMX in test_dirty_log().
Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/Makefile.kvm      |  2 +-
 ...rty_log_test.c => nested_dirty_log_test.c} | 73 ++++++++++++++-----
 2 files changed, 54 insertions(+), 21 deletions(-)
 rename tools/testing/selftests/kvm/x86/{vmx_dirty_log_test.c => nested_dirty_log_test.c} (71%)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index ba5c2b643efa..8f14213ddef1 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -89,6 +89,7 @@ TEST_GEN_PROGS_x86 += x86/kvm_buslock_test
 TEST_GEN_PROGS_x86 += x86/monitor_mwait_test
 TEST_GEN_PROGS_x86 += x86/msrs_test
 TEST_GEN_PROGS_x86 += x86/nested_close_kvm_test
+TEST_GEN_PROGS_x86 += x86/nested_dirty_log_test
 TEST_GEN_PROGS_x86 += x86/nested_emulation_test
 TEST_GEN_PROGS_x86 += x86/nested_exceptions_test
 TEST_GEN_PROGS_x86 += x86/nested_invalid_cr3_test
@@ -115,7 +116,6 @@ TEST_GEN_PROGS_x86 += x86/ucna_injection_test
 TEST_GEN_PROGS_x86 += x86/userspace_io_test
 TEST_GEN_PROGS_x86 += x86/userspace_msr_exit_test
 TEST_GEN_PROGS_x86 += x86/vmx_apic_access_test
-TEST_GEN_PROGS_x86 += x86/vmx_dirty_log_test
 TEST_GEN_PROGS_x86 += x86/vmx_exception_with_invalid_guest_state
 TEST_GEN_PROGS_x86 += x86/vmx_msrs_test
 TEST_GEN_PROGS_x86 += x86/vmx_invalid_nested_guest_state
diff --git a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
similarity index 71%
rename from tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
rename to tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
index 032ab8bf60a4..89d2e86a0db9 100644
--- a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
@@ -12,6 +12,7 @@
 #include "test_util.h"
 #include "kvm_util.h"
 #include "processor.h"
+#include "svm_util.h"
 #include "vmx.h"
 
 /* The memory slot index to track dirty pages */
@@ -25,6 +26,8 @@
 #define NESTED_TEST_MEM1	0xc0001000
 #define NESTED_TEST_MEM2	0xc0002000
 
+#define L2_GUEST_STACK_SIZE 64
+
 static void l2_guest_code(u64 *a, u64 *b)
 {
 	READ_ONCE(*a);
@@ -42,20 +45,19 @@ static void l2_guest_code(u64 *a, u64 *b)
 	vmcall();
 }
 
-static void l2_guest_code_ept_enabled(void)
+static void l2_guest_code_tdp_enabled(void)
 {
 	l2_guest_code((u64 *)NESTED_TEST_MEM1, (u64 *)NESTED_TEST_MEM2);
 }
 
-static void l2_guest_code_ept_disabled(void)
+static void l2_guest_code_tdp_disabled(void)
 {
-	/* Access the same L1 GPAs as l2_guest_code_ept_enabled() */
+	/* Access the same L1 GPAs as l2_guest_code_tdp_enabled() */
 	l2_guest_code((u64 *)GUEST_TEST_MEM, (u64 *)GUEST_TEST_MEM);
 }
 
-void l1_guest_code(struct vmx_pages *vmx)
+void l1_vmx_code(struct vmx_pages *vmx)
 {
-#define L2_GUEST_STACK_SIZE 64
 	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
 	void *l2_rip;
 
@@ -64,22 +66,49 @@ void l1_guest_code(struct vmx_pages *vmx)
 	GUEST_ASSERT(load_vmcs(vmx));
 
 	if (vmx->eptp_gpa)
-		l2_rip = l2_guest_code_ept_enabled;
+		l2_rip = l2_guest_code_tdp_enabled;
 	else
-		l2_rip = l2_guest_code_ept_disabled;
+		l2_rip = l2_guest_code_tdp_disabled;
 
 	prepare_vmcs(vmx, l2_rip, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
 
 	GUEST_SYNC(false);
 	GUEST_ASSERT(!vmlaunch());
 	GUEST_SYNC(false);
-	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL);
+	GUEST_ASSERT_EQ(vmreadz(VM_EXIT_REASON), EXIT_REASON_VMCALL);
 	GUEST_DONE();
 }
 
-static void test_vmx_dirty_log(bool enable_ept)
+static void l1_svm_code(struct svm_test_data *svm)
 {
-	vm_vaddr_t vmx_pages_gva = 0;
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	void *l2_rip;
+
+	if (svm->ncr3_gpa)
+		l2_rip = l2_guest_code_tdp_enabled;
+	else
+		l2_rip = l2_guest_code_tdp_disabled;
+
+	generic_svm_setup(svm, l2_rip, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	GUEST_SYNC(false);
+	run_guest(svm->vmcb, svm->vmcb_gpa);
+	GUEST_SYNC(false);
+	GUEST_ASSERT_EQ(svm->vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+	GUEST_DONE();
+}
+
+static void l1_guest_code(void *data)
+{
+	if (this_cpu_has(X86_FEATURE_VMX))
+		l1_vmx_code(data);
+	else
+		l1_svm_code(data);
+}
+
+static void test_dirty_log(bool nested_tdp)
+{
+	vm_vaddr_t nested_gva = 0;
 	unsigned long *bmap;
 	uint64_t *host_test_mem;
 
@@ -88,15 +117,19 @@ static void test_vmx_dirty_log(bool enable_ept)
 	struct ucall uc;
 	bool done = false;
 
-	pr_info("Nested EPT: %s\n", enable_ept ? "enabled" : "disabled");
+	pr_info("Nested TDP: %s\n", nested_tdp ? "enabled" : "disabled");
 
 	/* Create VM */
 	vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
-	if (enable_ept)
+	if (nested_tdp)
 		vm_enable_tdp(vm);
 
-	vcpu_alloc_vmx(vm, &vmx_pages_gva);
-	vcpu_args_set(vcpu, 1, vmx_pages_gva);
+	if (kvm_cpu_has(X86_FEATURE_VMX))
+		vcpu_alloc_vmx(vm, &nested_gva);
+	else
+		vcpu_alloc_svm(vm, &nested_gva);
+
+	vcpu_args_set(vcpu, 1, nested_gva);
 
 	/* Add an extra memory slot for testing dirty logging */
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
@@ -115,10 +148,10 @@ static void test_vmx_dirty_log(bool enable_ept)
 	 * ... pages in the L2 GPA range [0xc0001000, 0xc0003000) will map to
 	 * 0xc0000000.
 	 *
-	 * When EPT is disabled, the L2 guest code will still access the same L1
-	 * GPAs as the EPT enabled case.
+	 * When TDP is disabled, the L2 guest code will still access the same L1
+	 * GPAs as the TDP enabled case.
 	 */
-	if (enable_ept) {
+	if (nested_tdp) {
 		tdp_identity_map_default_memslots(vm);
 		tdp_map(vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, PAGE_SIZE);
 		tdp_map(vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, PAGE_SIZE);
@@ -166,12 +199,12 @@ static void test_vmx_dirty_log(bool enable_ept)
 
 int main(int argc, char *argv[])
 {
-	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
+	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX) || kvm_cpu_has(X86_FEATURE_SVM));
 
-	test_vmx_dirty_log(/*enable_ept=*/false);
+	test_dirty_log(/*nested_tdp=*/false);
 
 	if (kvm_cpu_has_tdp())
-		test_vmx_dirty_log(/*enable_ept=*/true);
+		test_dirty_log(/*nested_tdp=*/true);
 
 	return 0;
 }
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) 
header.d=google.com header.i=@google.com header.b="KHJJfccE"
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:48 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-20-seanjc@google.com>
Subject: [PATCH v4 19/21] KVM: selftests: Extend memstress to run on nested SVM
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Type: text/plain; charset="utf-8"

From: Yosry Ahmed

Add L1 SVM code and generalize the setup code to work for both VMX and
SVM. This allows running 'dirty_log_perf_test -n' on AMD CPUs.
Signed-off-by: Yosry Ahmed
Signed-off-by: Sean Christopherson
---
 .../testing/selftests/kvm/lib/x86/memstress.c | 42 +++++++++++++++----
 1 file changed, 35 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86/memstress.c b/tools/testing/selftests/kvm/lib/x86/memstress.c
index 407abfc34909..86f4c5e4c430 100644
--- a/tools/testing/selftests/kvm/lib/x86/memstress.c
+++ b/tools/testing/selftests/kvm/lib/x86/memstress.c
@@ -13,6 +13,7 @@
 #include "kvm_util.h"
 #include "memstress.h"
 #include "processor.h"
+#include "svm_util.h"
 #include "vmx.h"
 
 void memstress_l2_guest_code(uint64_t vcpu_id)
@@ -29,9 +30,10 @@ __asm__(
 "       ud2;"
 );
 
-static void memstress_l1_guest_code(struct vmx_pages *vmx, uint64_t vcpu_id)
-{
 #define L2_GUEST_STACK_SIZE 64
+
+static void l1_vmx_code(struct vmx_pages *vmx, uint64_t vcpu_id)
+{
 	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
 	unsigned long *rsp;
 
@@ -45,10 +47,34 @@ static void memstress_l1_guest_code(struct vmx_pages *vmx, uint64_t vcpu_id)
 	prepare_vmcs(vmx, memstress_l2_guest_entry, rsp);
 
 	GUEST_ASSERT(!vmlaunch());
-	GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL);
+	GUEST_ASSERT_EQ(vmreadz(VM_EXIT_REASON), EXIT_REASON_VMCALL);
 	GUEST_DONE();
 }
 
+static void l1_svm_code(struct svm_test_data *svm, uint64_t vcpu_id)
+{
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	unsigned long *rsp;
+
+	rsp = &l2_guest_stack[L2_GUEST_STACK_SIZE - 1];
+	*rsp = vcpu_id;
+	generic_svm_setup(svm, memstress_l2_guest_entry, rsp);
+
+	run_guest(svm->vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT_EQ(svm->vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+	GUEST_DONE();
+}
+
+static void memstress_l1_guest_code(void *data, uint64_t vcpu_id)
+{
+	if (this_cpu_has(X86_FEATURE_VMX))
+		l1_vmx_code(data, vcpu_id);
+	else
+		l1_svm_code(data, vcpu_id);
+}
+
 uint64_t memstress_nested_pages(int nr_vcpus)
 {
 	/*
@@ -78,15 +104,17 @@ static void memstress_setup_ept_mappings(struct kvm_vm *vm)
 void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[])
 {
 	struct kvm_regs regs;
-	vm_vaddr_t vmx_gva;
+	vm_vaddr_t nested_gva;
 	int vcpu_id;
 
-	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
 	TEST_REQUIRE(kvm_cpu_has_tdp());
 
 	vm_enable_tdp(vm);
 	for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
-		vcpu_alloc_vmx(vm, &vmx_gva);
+		if (kvm_cpu_has(X86_FEATURE_VMX))
+			vcpu_alloc_vmx(vm, &nested_gva);
+		else
+			vcpu_alloc_svm(vm, &nested_gva);
 
 		/* The EPTs are shared across vCPUs, setup the mappings once */
 		if (vcpu_id == 0)
@@ -99,6 +127,6 @@ void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[])
 		vcpu_regs_get(vcpus[vcpu_id], &regs);
 		regs.rip = (unsigned long) memstress_l1_guest_code;
 		vcpu_regs_set(vcpus[vcpu_id], &regs);
-		vcpu_args_set(vcpus[vcpu_id], 2, vmx_gva, vcpu_id);
+		vcpu_args_set(vcpus[vcpu_id], 2, nested_gva, vcpu_id);
 	}
 }
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:49 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-21-seanjc@google.com>
Subject: [PATCH v4 20/21] KVM: selftests: Rename vm_get_page_table_entry() to vm_get_pte()
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Type: text/plain; charset="utf-8"

Shorten the API to get a PTE, as the "PTE" acronym is ubiquitous and
spelling out "page table entry" makes it unnecessarily difficult to
quickly understand what callers are doing.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Yosry Ahmed
---
 tools/testing/selftests/kvm/include/x86/processor.h       | 2 +-
 tools/testing/selftests/kvm/lib/x86/processor.c           | 2 +-
 tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c        | 2 +-
 .../selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c | 4 +---
 4 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 7b7d962244d6..ab29b1c7ed2d 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1357,7 +1357,7 @@ static inline bool kvm_is_ignore_msrs(void)
 	return get_kvm_param_bool("ignore_msrs");
 }
 
-uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr);
+uint64_t *vm_get_pte(struct kvm_vm *vm, uint64_t vaddr);
 
 uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
		       uint64_t a3);
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 5a3385d48902..ab869a98bbdc 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -390,7 +390,7 @@ static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm,
 	return virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K);
 }
 
-uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr)
+uint64_t *vm_get_pte(struct kvm_vm *vm, uint64_t vaddr)
 {
 	int level = PG_LEVEL_4K;
 
diff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
index a3b7ce155981..c542cc4762b1 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
@@ -619,7 +619,7 @@ int main(int argc, char *argv[])
 	 */
 	gva = vm_vaddr_unused_gap(vm, NTEST_PAGES * PAGE_SIZE, KVM_UTIL_MIN_VADDR);
 	for (i = 0; i < NTEST_PAGES; i++) {
-		pte = vm_get_page_table_entry(vm, data->test_pages + i * PAGE_SIZE);
+		pte = vm_get_pte(vm, data->test_pages + i * PAGE_SIZE);
 		gpa = addr_hva2gpa(vm, pte);
 		virt_pg_map(vm, gva + PAGE_SIZE * i, gpa & PAGE_MASK);
 		data->test_pages_pte[i] = gva + (gpa & ~PAGE_MASK);
diff --git a/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c b/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
index fabeeaddfb3a..0e8aec568010 100644
--- a/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
@@ -47,7 +47,6 @@ int main(int argc, char *argv[])
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 	struct ucall uc;
-	uint64_t *pte;
 	uint64_t *hva;
 	uint64_t gpa;
 	int rc;
@@ -73,8 +72,7 @@ int main(int argc, char *argv[])
 	hva = addr_gpa2hva(vm, MEM_REGION_GPA);
 	memset(hva, 0, PAGE_SIZE);
 
-	pte = vm_get_page_table_entry(vm, MEM_REGION_GVA);
-	*pte |= BIT_ULL(MAXPHYADDR);
+	*vm_get_pte(vm, MEM_REGION_GVA) |= BIT_ULL(MAXPHYADDR);
 
 	vcpu_run(vcpu);
 
-- 
2.52.0.351.gbe84eed79e-goog

From nobody Sat Feb 7 12:40:53 2026
b=RTcp47fdabMPzX8MtFJ4iVrC4Id264xRI3k916TqVrcTEl0ChMMIvhYJf038GDcnKbBj+1qHkrdWfjSPb16bgn/t14/gBdDad5Jx/bTlhqR0jpNICn8lyVA8VhDu9sU33mncmKUSblOXrDem5wD/48LI/Bi8xT8IVdm267sVUQU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1767135757; c=relaxed/simple; bh=/HuCjXMO8BYhY2cJwTr7wd/hfTFZHB16fDr6+9ntWyE=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=ExzglFWrvQf3zP2MnHmjE5dt8ZFYRdv1hbfa1mATOYXXX7R63mnyvr+EJMqUG8ILQ+mVS57NkPU0kNe9gTS/PDFyHDa/gIpTePyVJRNmiFyhWEI2CofVZd6cD4hvIhqSnBgciVe1oQSg79svmjP+NBLifMrqgVhPvVUPC0fG7SI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=gywlyQb+; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="gywlyQb+" Received: by mail-pf1-f202.google.com with SMTP id d2e1a72fcca58-7bf5cdef41dso17583256b3a.0 for ; Tue, 30 Dec 2025 15:02:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1767135751; x=1767740551; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=lvDGtOJlZOshVNKy7HhFV38ObB7SdC2/7YpvR2O0iOs=; b=gywlyQb+i1TiFGonSLX9ONNK0xwrR1f6v5bUGwf1DaC42S0MpXFzYCnYHufUI0zs+n ZVCc1GAW8EjN0vP9zomhqDvepfb5CI/uL4EOQrWC6s7mqirDp0mwsR6KsxsiII7OygWW sxD13lolCBSwNPahpqTFWJTa+1xRf6Q7tktxe1DO02CF7rj/ZWwIaYgJntAoBZBSsa25 CvGc5POwKVKbNa1o3dLwJkHi/mTh039sjRRVLJRBaY9KXNYwopLCq4Z34sjAMVZctceu 
ONk0pR47MGjlV7VcxiIvbkQytu1JeCUzl13kZBSigXwqZVvxP+roAyASGsAxF+yoxhoY bx2A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1767135751; x=1767740551; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=lvDGtOJlZOshVNKy7HhFV38ObB7SdC2/7YpvR2O0iOs=; b=HzrsKbwmOb0hHt4IySgcH2Mk6tAm/rVzWGePD9y2jMJ0gSyH1Y4ERdV5DOkvlpd9Jq VuNWQ3QFe2d99EF6eyo7+TYhO8UYF+w7SnMgI2Nr7xy6AngLwclXCvVr4+YYH8yOqJ7q G0+K6w2D6HHTX8ELbq0SBqW/zVTdQWEDjZHQR+XqaiTvxZNL6D7W8r7+06IXmLcqAKXK e+k5FMqoJOJ26TQ+HUzbdEzBa2QXrjgSVgBZfuAPb1no5iP/q3/m/xGUcobvzLXyH0lA ngrHrT1dzdU3oL/fLNFB3dmFD9IAK1nw+V7ndTcMHebFT4aGBprfsPBqRRoqJ/LOkbcK gjqQ== X-Forwarded-Encrypted: i=1; AJvYcCU9A9+QhiBpaduOYeOzfDIzSQsvQJ2VybYjobfUw/usWGUkwBQogTBAOm3jfBMpbMjHwiPP6VDjVq+Lgf0=@vger.kernel.org X-Gm-Message-State: AOJu0Ywhavnvwmz+CFYAkEXuwe9Bq8uvjq1XdfvaGAyHpaeE8yKHUiyH OW/pUPNlEptUeLRcnAJ2IlY2nYtB8+Jd2oRMVk7toj0Vifam7GxTBvsbLRiYcdHxW892of8ddpg cOVhRTA== X-Google-Smtp-Source: AGHT+IFl+E4iBtLvuy29ZNl15W5f+YM/yuQ6NLYMRsjb6K6O2MuWbmEY/dD/ZgE2AJbg2gFPgmlYtjYuvMQ= X-Received: from pfbem48.prod.google.com ([2002:a05:6a00:3770:b0:7f9:3450:d9b0]) (user=seanjc job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6a00:420b:b0:7e8:4471:ae55 with SMTP id d2e1a72fcca58-7ff6745678amr28709155b3a.33.1767135750526; Tue, 30 Dec 2025 15:02:30 -0800 (PST) Reply-To: Sean Christopherson Date: Tue, 30 Dec 2025 15:01:50 -0800 In-Reply-To: <20251230230150.4150236-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20251230230150.4150236-1-seanjc@google.com> X-Mailer: git-send-email 2.52.0.351.gbe84eed79e-goog Message-ID: <20251230230150.4150236-22-seanjc@google.com> Subject: [PATCH v4 21/21] KVM: selftests: Test READ=>WRITE dirty logging behavior for shadow MMU From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier 
, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Type: text/plain; charset="utf-8"

Update the nested dirty log test to validate KVM's handling of READ faults
when dirty logging is enabled.  Specifically, set the Dirty bit in the
guest PTEs used to map L2 GPAs, so that KVM will create writable SPTEs
when handling L2 read faults.

When handling read faults in the shadow MMU, KVM opportunistically creates
a writable SPTE if the mapping can be writable *and* the gPTE is dirty (or
doesn't support the Dirty bit), i.e. if KVM doesn't need to intercept
writes in order to emulate Dirty-bit updates.

To actually test the L2 READ=>WRITE sequence, e.g. without masking a false
pass by other test activity, route the READ=>WRITE and WRITE=>WRITE
sequences to separate L1 pages, and differentiate between "marked dirty
due to a WRITE access/fault" and "marked dirty due to creating a writable
SPTE for a READ access/fault".

The updated sequence exposes the bug fixed by KVM commit 1f4e5fc83a42
("KVM: x86: fix nested guest live migration with PML") when the guest
performs a READ=>WRITE sequence.
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86/processor.h       |   1 +
 tools/testing/selftests/kvm/lib/x86/processor.c |   7 ++
 .../selftests/kvm/x86/nested_dirty_log_test.c   | 115 +++++++++++++-----
 3 files changed, 90 insertions(+), 33 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index ab29b1c7ed2d..8945c9eea704 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1483,6 +1483,7 @@ bool kvm_cpu_has_tdp(void);
 void tdp_map(struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size);
 void tdp_identity_map_default_memslots(struct kvm_vm *vm);
 void tdp_identity_map_1g(struct kvm_vm *vm, uint64_t addr, uint64_t size);
+uint64_t *tdp_get_pte(struct kvm_vm *vm, uint64_t l2_gpa);
 
 /*
  * Basic CPU control in CR0
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index ab869a98bbdc..fab18e9be66c 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -390,6 +390,13 @@ static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm,
 	return virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K);
 }
 
+uint64_t *tdp_get_pte(struct kvm_vm *vm, uint64_t l2_gpa)
+{
+	int level = PG_LEVEL_4K;
+
+	return __vm_get_page_table_entry(vm, &vm->stage2_mmu, l2_gpa, &level);
+}
+
 uint64_t *vm_get_pte(struct kvm_vm *vm, uint64_t vaddr)
 {
 	int level = PG_LEVEL_4K;
diff --git a/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c b/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
index 89d2e86a0db9..1e7c1ed917e1 100644
--- a/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
@@ -17,29 +17,39 @@
 
 /* The memory slot index to track dirty pages */
 #define TEST_MEM_SLOT_INDEX	1
-#define TEST_MEM_PAGES		3
+#define TEST_MEM_PAGES		4
 
 /* L1 guest test virtual memory offset */
-#define GUEST_TEST_MEM		0xc0000000
+#define GUEST_TEST_MEM1		0xc0000000
+#define GUEST_TEST_MEM2		0xc0002000
 
 /* L2 guest test virtual memory offset */
 #define NESTED_TEST_MEM1	0xc0001000
-#define NESTED_TEST_MEM2	0xc0002000
+#define NESTED_TEST_MEM2	0xc0003000
 
 #define L2_GUEST_STACK_SIZE	64
 
+#define TEST_SYNC_PAGE_MASK	0xfull
+#define TEST_SYNC_READ_FAULT	BIT(4)
+#define TEST_SYNC_WRITE_FAULT	BIT(5)
+#define TEST_SYNC_NO_FAULT	BIT(6)
+
 static void l2_guest_code(u64 *a, u64 *b)
 {
 	READ_ONCE(*a);
+	GUEST_SYNC(0 | TEST_SYNC_READ_FAULT);
 	WRITE_ONCE(*a, 1);
-	GUEST_SYNC(true);
-	GUEST_SYNC(false);
+	GUEST_SYNC(0 | TEST_SYNC_WRITE_FAULT);
+	READ_ONCE(*a);
+	GUEST_SYNC(0 | TEST_SYNC_NO_FAULT);
 
 	WRITE_ONCE(*b, 1);
-	GUEST_SYNC(true);
+	GUEST_SYNC(2 | TEST_SYNC_WRITE_FAULT);
 	WRITE_ONCE(*b, 1);
-	GUEST_SYNC(true);
-	GUEST_SYNC(false);
+	GUEST_SYNC(2 | TEST_SYNC_WRITE_FAULT);
+	READ_ONCE(*b);
+	GUEST_SYNC(2 | TEST_SYNC_NO_FAULT);
+	GUEST_SYNC(2 | TEST_SYNC_NO_FAULT);
 
 	/* Exit to L1 and never come back.  */
 	vmcall();
@@ -53,7 +63,7 @@ static void l2_guest_code_tdp_enabled(void)
 static void l2_guest_code_tdp_disabled(void)
 {
 	/* Access the same L1 GPAs as l2_guest_code_tdp_enabled() */
-	l2_guest_code((u64 *)GUEST_TEST_MEM, (u64 *)GUEST_TEST_MEM);
+	l2_guest_code((u64 *)GUEST_TEST_MEM1, (u64 *)GUEST_TEST_MEM2);
 }
 
 void l1_vmx_code(struct vmx_pages *vmx)
@@ -72,9 +82,11 @@ void l1_vmx_code(struct vmx_pages *vmx)
 
 	prepare_vmcs(vmx, l2_rip, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
 
-	GUEST_SYNC(false);
+	GUEST_SYNC(0 | TEST_SYNC_NO_FAULT);
+	GUEST_SYNC(2 | TEST_SYNC_NO_FAULT);
 	GUEST_ASSERT(!vmlaunch());
-	GUEST_SYNC(false);
+	GUEST_SYNC(0 | TEST_SYNC_NO_FAULT);
+	GUEST_SYNC(2 | TEST_SYNC_NO_FAULT);
 	GUEST_ASSERT_EQ(vmreadz(VM_EXIT_REASON), EXIT_REASON_VMCALL);
 	GUEST_DONE();
 }
@@ -91,9 +103,11 @@ static void l1_svm_code(struct svm_test_data *svm)
 
 	generic_svm_setup(svm, l2_rip, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
 
-	GUEST_SYNC(false);
+	GUEST_SYNC(0 | TEST_SYNC_NO_FAULT);
+	GUEST_SYNC(2 | TEST_SYNC_NO_FAULT);
 	run_guest(svm->vmcb, svm->vmcb_gpa);
-	GUEST_SYNC(false);
+	GUEST_SYNC(0 | TEST_SYNC_NO_FAULT);
+	GUEST_SYNC(2 | TEST_SYNC_NO_FAULT);
 	GUEST_ASSERT_EQ(svm->vmcb->control.exit_code, SVM_EXIT_VMMCALL);
 	GUEST_DONE();
 }
@@ -106,6 +120,11 @@ static void l1_guest_code(void *data)
 		l1_svm_code(data);
 }
 
+static uint64_t test_read_host_page(uint64_t *host_test_mem, int page_nr)
+{
+	return host_test_mem[PAGE_SIZE * page_nr / sizeof(*host_test_mem)];
+}
+
 static void test_dirty_log(bool nested_tdp)
 {
 	vm_vaddr_t nested_gva = 0;
@@ -133,32 +152,45 @@
 
 	/* Add an extra memory slot for testing dirty logging */
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-				    GUEST_TEST_MEM,
+				    GUEST_TEST_MEM1,
 				    TEST_MEM_SLOT_INDEX,
 				    TEST_MEM_PAGES,
 				    KVM_MEM_LOG_DIRTY_PAGES);
 
 	/*
-	 * Add an identity map for GVA range [0xc0000000, 0xc0002000).  This
+	 * Add an identity map for GVA range [0xc0000000, 0xc0004000).  This
 	 * affects both L1 and L2.  However...
 	 */
-	virt_map(vm, GUEST_TEST_MEM, GUEST_TEST_MEM, TEST_MEM_PAGES);
+	virt_map(vm, GUEST_TEST_MEM1, GUEST_TEST_MEM1, TEST_MEM_PAGES);
 
 	/*
-	 * ... pages in the L2 GPA range [0xc0001000, 0xc0003000) will map to
-	 * 0xc0000000.
+	 * ... pages in the L2 GPA ranges [0xc0001000, 0xc0002000) and
+	 * [0xc0003000, 0xc0004000) will map to 0xc0000000 and 0xc0001000
+	 * respectively.
 	 *
 	 * When TDP is disabled, the L2 guest code will still access the same L1
 	 * GPAs as the TDP enabled case.
+	 *
+	 * Set the Dirty bit in the PTEs used by L2 so that KVM will create
+	 * writable SPTEs when handling read faults (if the Dirty bit isn't
+	 * set, KVM must intercept the next write to emulate the Dirty bit
+	 * update).
 	 */
 	if (nested_tdp) {
 		tdp_identity_map_default_memslots(vm);
-		tdp_map(vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, PAGE_SIZE);
-		tdp_map(vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, PAGE_SIZE);
+		tdp_map(vm, NESTED_TEST_MEM1, GUEST_TEST_MEM1, PAGE_SIZE);
+		tdp_map(vm, NESTED_TEST_MEM2, GUEST_TEST_MEM2, PAGE_SIZE);
+
+		*tdp_get_pte(vm, NESTED_TEST_MEM1) |= PTE_DIRTY_MASK(&vm->stage2_mmu);
+		*tdp_get_pte(vm, NESTED_TEST_MEM2) |= PTE_DIRTY_MASK(&vm->stage2_mmu);
+	} else {
+		*vm_get_pte(vm, GUEST_TEST_MEM1) |= PTE_DIRTY_MASK(&vm->mmu);
+		*vm_get_pte(vm, GUEST_TEST_MEM2) |= PTE_DIRTY_MASK(&vm->mmu);
 	}
 
 	bmap = bitmap_zalloc(TEST_MEM_PAGES);
-	host_test_mem = addr_gpa2hva(vm, GUEST_TEST_MEM);
+	host_test_mem = addr_gpa2hva(vm, GUEST_TEST_MEM1);
 
 	while (!done) {
 		memset(host_test_mem, 0xaa, TEST_MEM_PAGES * PAGE_SIZE);
@@ -169,25 +201,42 @@
 		case UCALL_ABORT:
 			REPORT_GUEST_ASSERT(uc);
 			/* NOT REACHED */
-		case UCALL_SYNC:
+		case UCALL_SYNC: {
+			int page_nr = uc.args[1] & TEST_SYNC_PAGE_MASK;
+			int i;
+
 			/*
 			 * The nested guest wrote at offset 0x1000 in the memslot, but the
 			 * dirty bitmap must be filled in according to L1 GPA, not L2.
 			 */
 			kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap);
-			if (uc.args[1]) {
-				TEST_ASSERT(test_bit(0, bmap), "Page 0 incorrectly reported clean");
-				TEST_ASSERT(host_test_mem[0] == 1, "Page 0 not written by guest");
-			} else {
-				TEST_ASSERT(!test_bit(0, bmap), "Page 0 incorrectly reported dirty");
-				TEST_ASSERT(host_test_mem[0] == 0xaaaaaaaaaaaaaaaaULL, "Page 0 written by guest");
+
+			/*
+			 * If a fault is expected, the page should be dirty
+			 * as the Dirty bit is set in the gPTE.  KVM should
+			 * create a writable SPTE even on a read fault, *and*
+			 * KVM must mark the GFN as dirty when doing so.
+			 */
+			TEST_ASSERT(test_bit(page_nr, bmap) == !(uc.args[1] & TEST_SYNC_NO_FAULT),
+				    "Page %u incorrectly reported %s on %s fault", page_nr,
+				    test_bit(page_nr, bmap) ? "dirty" : "clean",
+				    uc.args[1] & TEST_SYNC_NO_FAULT ? "no" :
+				    uc.args[1] & TEST_SYNC_READ_FAULT ? "read" : "write");
+
+			for (i = 0; i < TEST_MEM_PAGES; i++) {
+				if (i == page_nr && uc.args[1] & TEST_SYNC_WRITE_FAULT)
+					TEST_ASSERT(test_read_host_page(host_test_mem, i) == 1,
+						    "Page %u not written by guest", i);
+				else
+					TEST_ASSERT(test_read_host_page(host_test_mem, i) == 0xaaaaaaaaaaaaaaaaULL,
+						    "Page %u written by guest", i);
+
+				if (i != page_nr)
+					TEST_ASSERT(!test_bit(i, bmap),
+						    "Page %u incorrectly reported dirty", i);
 			}
-
-			TEST_ASSERT(!test_bit(1, bmap), "Page 1 incorrectly reported dirty");
-			TEST_ASSERT(host_test_mem[PAGE_SIZE / 8] == 0xaaaaaaaaaaaaaaaaULL, "Page 1 written by guest");
-			TEST_ASSERT(!test_bit(2, bmap), "Page 2 incorrectly reported dirty");
-			TEST_ASSERT(host_test_mem[PAGE_SIZE*2 / 8] == 0xaaaaaaaaaaaaaaaaULL, "Page 2 written by guest");
 			break;
+		}
 		case UCALL_DONE:
 			done = true;
 			break;
-- 
2.52.0.351.gbe84eed79e-goog