From nobody Thu Sep 19 01:14:28 2024
Reply-To: Sean Christopherson <seanjc@google.com>
Date: Fri, 26 Jul 2024 16:51:38 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Mime-Version: 1.0
References: <20240726235234.228822-1-seanjc@google.com>
X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog
Message-ID: <20240726235234.228822-30-seanjc@google.com>
Subject: [PATCH v12 29/84] KVM: Pin (as in FOLL_PIN) pages during kvm_vcpu_map()
From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
	Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, David Matlack, David Stevens
Content-Type: text/plain; charset="utf-8"

Pin, as in FOLL_PIN, pages when mapping them for direct access by KVM.
As per Documentation/core-api/pin_user_pages.rst, writing to a page that
was gotten via FOLL_GET is explicitly disallowed.

  Correct (uses FOLL_PIN calls):
      pin_user_pages()
      write to the data within the pages
      unpin_user_pages()

  INCORRECT (uses FOLL_GET calls):
      get_user_pages()
      write to the data within the pages
      put_page()

Unfortunately, FOLL_PIN is a "private" flag, and so kvm_follow_pfn must
use a one-off bool instead of being able to piggyback the "flags" field.

Link: https://lwn.net/Articles/930667
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
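[ For readers who haven't internalized pin_user_pages.rst, below is a
  minimal sketch of the "Correct" FOLL_PIN pattern described above.
  The helper is hypothetical and purely illustrative; it is not part
  of this patch. ]

	#include <linux/mm.h>
	#include <linux/highmem.h>
	#include <linux/string.h>

	/* Pin one user page, write to it via a kernel mapping, then unpin. */
	static int example_write_user_page(unsigned long hva, u8 val)
	{
		struct page *page;
		void *kaddr;

		/* Takes a FOLL_PIN reference, as required for kernel writes. */
		if (pin_user_pages_fast(hva, 1, FOLL_WRITE, &page) != 1)
			return -EFAULT;

		kaddr = kmap_local_page(page);
		memset(kaddr, val, PAGE_SIZE);
		kunmap_local(kaddr);

		/* Mark the page dirty and drop the pin; never put_page() here. */
		unpin_user_pages_dirty_lock(&page, 1, true);
		return 0;
	}
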
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      | 54 +++++++++++++++++++++++++++++-----------
 virt/kvm/kvm_mm.h        |  7 ++++++
 3 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8b5ac3305b05..3d4094ece479 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -280,7 +280,7 @@ struct kvm_host_map {
	 * can be used as guest memory but they are not managed by host
	 * kernel).
	 */
-	struct page *refcounted_page;
+	struct page *pinned_page;
	struct page *page;
	void *hva;
	kvm_pfn_t pfn;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 255cbed83b40..4a9b99c11355 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2824,9 +2824,12 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
	 */
	if (pte) {
		pfn = pte_pfn(*pte);
-		page = kvm_pfn_to_refcounted_page(pfn);
-		if (page && !get_page_unless_zero(page))
-			return KVM_PFN_ERR_FAULT;
+
+		if (!kfp->pin) {
+			page = kvm_pfn_to_refcounted_page(pfn);
+			if (page && !get_page_unless_zero(page))
+				return KVM_PFN_ERR_FAULT;
+		}
	} else {
		pfn = page_to_pfn(page);
	}
@@ -2845,16 +2848,24 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
	struct page *page;
+	bool r;

	/*
-	 * Fast pin a writable pfn only if it is a write fault request
-	 * or the caller allows to map a writable pfn for a read fault
-	 * request.
+	 * Try the fast-only path when the caller wants to pin/get the page for
+	 * writing.  If the caller only wants to read the page, KVM must go
+	 * down the full, slow path in order to avoid racing an operation that
+	 * breaks Copy-on-Write (CoW), e.g. so that KVM doesn't end up pointing
+	 * at the old, read-only page while mm/ points at a new, writable page.
	 */
	if (!((kfp->flags & FOLL_WRITE) || kfp->map_writable))
		return false;

-	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page)) {
+	if (kfp->pin)
+		r = pin_user_pages_fast(kfp->hva, 1, FOLL_WRITE, &page) == 1;
+	else
+		r = get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page);
+
+	if (r) {
		*pfn = kvm_resolve_pfn(kfp, page, NULL, true);
		return true;
	}
@@ -2883,10 +2894,21 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
	struct page *page, *wpage;
	int npages;

-	npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
+	if (kfp->pin)
+		npages = pin_user_pages_unlocked(kfp->hva, 1, &page, flags);
+	else
+		npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
	if (npages != 1)
		return npages;

+	/*
+	 * Pinning is mutually exclusive with opportunistically mapping a read
+	 * fault as writable, as KVM should never pin pages when mapping memory
+	 * into the guest (pinning is only for direct accesses from KVM).
+	 */
+	if (WARN_ON_ONCE(kfp->map_writable && kfp->pin))
+		goto out;
+
	/* map read fault as writable if possible */
	if (!(flags & FOLL_WRITE) && kfp->map_writable &&
	    get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
@@ -2895,6 +2917,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
		flags |= FOLL_WRITE;
	}

+out:
	*pfn = kvm_resolve_pfn(kfp, page, NULL, flags & FOLL_WRITE);
	return npages;
 }
@@ -3119,10 +3142,11 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
		.slot = gfn_to_memslot(vcpu->kvm, gfn),
		.gfn = gfn,
		.flags = FOLL_WRITE,
-		.refcounted_page = &map->refcounted_page,
+		.refcounted_page = &map->pinned_page,
+		.pin = true,
	};

-	map->refcounted_page = NULL;
+	map->pinned_page = NULL;
	map->page = NULL;
	map->hva = NULL;
	map->gfn = gfn;
@@ -3159,16 +3183,16 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
	if (dirty)
		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);

-	if (map->refcounted_page) {
+	if (map->pinned_page) {
		if (dirty)
-			kvm_release_page_dirty(map->refcounted_page);
-		else
-			kvm_release_page_clean(map->refcounted_page);
+			kvm_set_page_dirty(map->pinned_page);
+		kvm_set_page_accessed(map->pinned_page);
+		unpin_user_page(map->pinned_page);
	}

	map->hva = NULL;
	map->page = NULL;
-	map->refcounted_page = NULL;
+	map->pinned_page = NULL;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);

diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index d3ac1ba8ba66..acef3f5c582a 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -30,6 +30,13 @@ struct kvm_follow_pfn {
	/* FOLL_* flags modifying lookup behavior, e.g. FOLL_WRITE. */
	unsigned int flags;

+	/*
+	 * Pin the page (effectively FOLL_PIN, which is an mm/ internal flag).
+	 * The page *must* be pinned if KVM will write to the page via a kernel
+	 * mapping, e.g. via kmap(), mremap(), etc.
+	 */
+	bool pin;
+
	/*
	 * If non-NULL, try to get a writable mapping even for a read fault.
	 * Set to true if a writable mapping was obtained.
-- 
2.46.0.rc1.232.g9752f9e123-goog
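[ Postscript, also purely illustrative: the shape of a KVM-internal
  caller after this change.  'vcpu', 'gfn', 'data', and 'len' are
  assumed to be in scope; error handling is elided. ]

	struct kvm_host_map map;

	if (kvm_vcpu_map(vcpu, gfn, &map))
		return -EFAULT;

	/* Writing via map.hva is legal: the backing page now holds a FOLL_PIN. */
	memcpy(map.hva, data, len);

	/* dirty == true marks the page dirty before the pin is dropped. */
	kvm_vcpu_unmap(vcpu, &map, true);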