Date: Wed, 11 Sep 2024 13:41:58 -0700
From: Sean Christopherson
Reply-To: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
	Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	Sean Christopherson, James Houghton
Subject: [PATCH v2 13/13] KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ)
Message-ID: <20240911204158.2034295-14-seanjc@google.com>
In-Reply-To: <20240911204158.2034295-1-seanjc@google.com>
References: <20240911204158.2034295-1-seanjc@google.com>

Add two phases to mmu_stress_test to verify that KVM correctly handles
guest memory that was writable, then made read-only in the primary MMU,
and then made writable again.

Add bonus coverage for x86 and arm64 to verify that all of guest memory
was marked read-only.  Making forward progress (without making memory
writable) requires arch-specific code to skip over the faulting
instruction, but on other architectures the test can at least verify
that each vCPU's starting page was made read-only.

Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/mmu_stress_test.c | 104 +++++++++++++++++-
 1 file changed, 101 insertions(+), 3 deletions(-)
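For context: the guest hand-codes the x86 store as the 3-byte
"mov %rax, (%rax)" (0x48 0x89 0x00) precisely so that the host knows how
far to advance RIP when skipping the faulting instruction.  The sketch
below is a minimal userspace analogue of that trick, with a SIGSEGV
handler standing in for the host-side KVM_RUN/-EFAULT loop.  It is purely
illustrative and not part of the patch (file name and build line are made
up); it targets Linux x86-64 with glibc.

/*
 * Illustration only, NOT part of the patch.  Build: gcc -O2 skip_store.c
 * The store is hand-coded to a known 3-byte encoding so the handler can
 * skip it by advancing RIP by exactly 3.
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <ucontext.h>

static volatile sig_atomic_t faults;

static void segv_handler(int sig, siginfo_t *info, void *ucontext)
{
	ucontext_t *uc = ucontext;

	faults++;

	/* Skip the faulting store; it is known to be exactly 3 bytes. */
	uc->uc_mcontext.gregs[REG_RIP] += 3;
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = segv_handler,
		.sa_flags = SA_SIGINFO,
	};
	void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	sigaction(SIGSEGV, &sa, NULL);

	/* Revoke write permission, as the test does via mprotect(). */
	mprotect(mem, 4096, PROT_READ);

	/* Hand-coded "mov %rax, (%rax)", i.e. 0x48 0x89 0x00. */
	asm volatile(".byte 0x48,0x89,0x00" :: "a"(mem) : "memory");

	printf("faults = %d\n", (int)faults);	/* expect 1 */
	return 0;
}

Because the store has a fixed, known length, "rip += 3" in vcpu_worker()
is always correct, whereas a compiler-generated store could change size
between builds and silently break the skip.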
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 50c3a17418c4..c07c15d7cc9a 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -16,6 +16,8 @@
 #include "guest_modes.h"
 #include "processor.h"
 
+static bool mprotect_ro_done;
+
 static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 {
 	uint64_t gpa;
@@ -31,6 +33,42 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 	*((volatile uint64_t *)gpa);
 	GUEST_SYNC(2);
 
+	/*
+	 * Write to the region while mprotect(PROT_READ) is underway.  Keep
+	 * looping until the memory is guaranteed to be read-only, otherwise
+	 * vCPUs may complete their writes and advance to the next stage
+	 * prematurely.
+	 *
+	 * For architectures that support skipping the faulting instruction,
+	 * generate the store via inline assembly to ensure the exact length
+	 * of the instruction is known and stable (vcpu_arch_put_guest() on
+	 * fixed-length architectures should work, but the cost of paranoia
+	 * is low in this case).  For x86, hand-code the exact opcode so that
+	 * there is no room for variability in the generated instruction.
+	 */
+	do {
+		for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
+#ifdef __x86_64__
+			asm volatile(".byte 0x48,0x89,0x00" :: "a"(gpa) : "memory"); /* mov %rax, (%rax) */
+#elif defined(__aarch64__)
+			asm volatile("str %0, [%0]" :: "r" (gpa) : "memory");
+#else
+			vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+#endif
+	} while (!READ_ONCE(mprotect_ro_done));
+
+	/*
+	 * Only architectures that write the entire range can explicitly sync,
+	 * as other architectures will be stuck on the write fault.
+	 */
+#if defined(__x86_64__) || defined(__aarch64__)
+	GUEST_SYNC(3);
+#endif
+
+	for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
+		vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+	GUEST_SYNC(4);
+
 	GUEST_ASSERT(0);
 }
 
@@ -78,6 +116,7 @@ static void *vcpu_worker(void *data)
 	struct vcpu_info *info = data;
 	struct kvm_vcpu *vcpu = info->vcpu;
 	struct kvm_vm *vm = vcpu->vm;
+	int r;
 
 	vcpu_args_set(vcpu, 3, info->start_gpa, info->end_gpa, vm->page_size);
 
@@ -100,6 +139,57 @@ static void *vcpu_worker(void *data)
 
 	/* Stage 2, read all of guest memory, which is now read-only. */
 	run_vcpu(vcpu, 2);
+
+	/*
+	 * Stage 3, write guest memory and verify that KVM returns -EFAULT once
+	 * the mprotect(PROT_READ) lands.  Only architectures that support
+	 * validating *all* of guest memory sync for this stage, as vCPUs will
+	 * be stuck on the faulting instruction for other architectures.  Go to
+	 * stage 3 without a rendezvous.
+	 */
+	do {
+		r = _vcpu_run(vcpu);
+	} while (!r);
+	TEST_ASSERT(r == -1 && errno == EFAULT,
+		    "Expected EFAULT on write to RO memory, got r = %d, errno = %d", r, errno);
+
+#if defined(__x86_64__) || defined(__aarch64__)
+	/*
+	 * Verify that *all* writes from the guest hit EFAULT due to the VMA
+	 * now being read-only.  x86 and arm64 only at this time, as skipping
+	 * the instruction that hits the EFAULT requires advancing the program
+	 * counter, which is arch-specific and relies on inline assembly.
+	 */
+#ifdef __x86_64__
+	vcpu->run->kvm_valid_regs = KVM_SYNC_X86_REGS;
+#endif
+	for (;;) {
+		r = _vcpu_run(vcpu);
+		if (!r)
+			break;
+		TEST_ASSERT_EQ(errno, EFAULT);
+#if defined(__x86_64__)
+		WRITE_ONCE(vcpu->run->kvm_dirty_regs, KVM_SYNC_X86_REGS);
+		vcpu->run->s.regs.regs.rip += 3;
+#elif defined(__aarch64__)
+		vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc),
+			     vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc)) + 4);
+#endif
+
+	}
+	assert_sync_stage(vcpu, 3);
+#endif /* __x86_64__ || __aarch64__ */
+	rendezvous_with_boss();
+
+	/*
+	 * Stage 4.  Run to completion, waiting for mprotect(PROT_WRITE) to
+	 * make the memory writable again.
+	 */
+	do {
+		r = _vcpu_run(vcpu);
+	} while (r && errno == EFAULT);
+	TEST_ASSERT_EQ(r, 0);
+	assert_sync_stage(vcpu, 4);
 	rendezvous_with_boss();
 
 	return NULL;
 }
@@ -182,7 +272,7 @@ int main(int argc, char *argv[])
 	const uint64_t start_gpa = SZ_4G;
 	const int first_slot = 1;
 
-	struct timespec time_start, time_run1, time_reset, time_run2, time_ro;
+	struct timespec time_start, time_run1, time_reset, time_run2, time_ro, time_rw;
 	uint64_t max_gpa, gpa, slot_size, max_mem, i;
 	int max_slots, slot, opt, fd;
 	bool hugepages = false;
@@ -287,19 +377,27 @@ int main(int argc, char *argv[])
 	rendezvous_with_vcpus(&time_run2, "run 2");
 
 	mprotect(mem, slot_size, PROT_READ);
+	usleep(10);
+	mprotect_ro_done = true;
+	sync_global_to_guest(vm, mprotect_ro_done);
+
 	rendezvous_with_vcpus(&time_ro, "mprotect RO");
+	mprotect(mem, slot_size, PROT_READ | PROT_WRITE);
+	rendezvous_with_vcpus(&time_rw, "mprotect RW");
 
+	time_rw = timespec_sub(time_rw, time_ro);
 	time_ro = timespec_sub(time_ro, time_run2);
 	time_run2 = timespec_sub(time_run2, time_reset);
 	time_reset = timespec_sub(time_reset, time_run1);
 	time_run1 = timespec_sub(time_run1, time_start);
 
 	pr_info("run1 = %ld.%.9lds, reset = %ld.%.9lds, run2 = %ld.%.9lds, "
-		"ro = %ld.%.9lds\n",
+		"ro = %ld.%.9lds, rw = %ld.%.9lds\n",
 		time_run1.tv_sec, time_run1.tv_nsec,
 		time_reset.tv_sec, time_reset.tv_nsec,
 		time_run2.tv_sec, time_run2.tv_nsec,
-		time_ro.tv_sec, time_ro.tv_nsec);
+		time_ro.tv_sec, time_ro.tv_nsec,
+		time_rw.tv_sec, time_rw.tv_nsec);
 
 	/*
 	 * Delete even numbered slots (arbitrary) and unmap the first half of
-- 
2.46.0.598.g6f2099f65c-goog