From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:10 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-2-seanjc@google.com>
Subject: [PATCH v12 01/84] KVM: arm64: Release pfn, i.e. put page, if copying MTE tags hits ZONE_DEVICE
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, David Stevens

Put the page reference acquired by gfn_to_pfn_prot() if
kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory.  KVM's
less-than-stellar heuristics for dealing with pfn-mapped memory mean that
KVM can get a page reference to ZONE_DEVICE memory.

Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
Signed-off-by: Sean Christopherson
Reviewed-by: Catalin Marinas
Reviewed-by: Steven Price
Tested-by: Alex Bennée
---
 arch/arm64/kvm/guest.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 11098eb7eb44..e1f0ff08836a 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1059,6 +1059,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
                 page = pfn_to_online_page(pfn);
                 if (!page) {
                         /* Reject ZONE_DEVICE memory */
+                        kvm_release_pfn_clean(pfn);
                         ret = -EFAULT;
                         goto out;
                 }
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:11 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-3-seanjc@google.com>
Subject: [PATCH v12 02/84] KVM: arm64: Disallow copying MTE to guest memory while KVM is dirty logging
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, David Stevens

Disallow copying MTE tags to guest memory while KVM is dirty logging, as
writing guest memory without marking the gfn as dirty in the memslot could
result in userspace failing to migrate the updated page.  Ideally (maybe?),
KVM would simply mark the gfn as dirty, but there is no vCPU to work with,
and presumably the only use case for copying MTE tags _to_ the guest is
when restoring state on the target.

Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
Signed-off-by: Sean Christopherson
Reviewed-by: Catalin Marinas
Reviewed-by: Steven Price
Tested-by: Alex Bennée
---
 arch/arm64/kvm/guest.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index e1f0ff08836a..962f985977c2 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1045,6 +1045,11 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 
         mutex_lock(&kvm->slots_lock);
 
+        if (write && atomic_read(&kvm->nr_memslots_dirty_logging)) {
+                ret = -EBUSY;
+                goto out;
+        }
+
         while (length > 0) {
                 kvm_pfn_t pfn = gfn_to_pfn_prot(kvm, gfn, write, NULL);
                 void *maddr;
-- 
2.46.0.rc1.232.g9752f9e123-goog
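For readers wiring this up from the VMM side, a minimal userspace sketch of
reacting to the new -EBUSY follows.  The KVM_ARM_MTE_COPY_TAGS ioctl and
struct kvm_arm_copy_mte_tags are the existing ABI; restore_mte_tags_later()
and the surrounding flow are hypothetical and purely illustrative.

  #include <errno.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /*
   * Restore MTE tags on the migration target.  If KVM reports -EBUSY
   * because dirty logging is still enabled, defer the copy instead of
   * failing.  (A full implementation would also loop on partial copies,
   * as the ioctl returns the number of bytes copied on success.)
   */
  static int restore_mte_tags(int vm_fd, __u64 ipa, void *tags, __u64 len)
  {
          struct kvm_arm_copy_mte_tags copy = {
                  .guest_ipa = ipa,
                  .length    = len,
                  .addr      = tags,
                  .flags     = KVM_ARM_TAGS_TO_GUEST,
          };

          if (ioctl(vm_fd, KVM_ARM_MTE_COPY_TAGS, &copy) < 0) {
                  if (errno == EBUSY)
                          /* Hypothetical helper: retry once logging is off. */
                          return restore_mte_tags_later(vm_fd, ipa, tags, len);
                  return -errno;
          }

          return 0;
  }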
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:12 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-4-seanjc@google.com>
Subject: [PATCH v12 03/84] KVM: Drop KVM_ERR_PTR_BAD_PAGE and instead return NULL to indicate an error
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, David Stevens

Remove KVM_ERR_PTR_BAD_PAGE and instead return NULL, as "bad page" is just
a leftover bit of weirdness from days of old when KVM stuffed a "bad" page
into the guest instead of actually handling missing pages.  See commit
cea7bb21280e ("KVM: MMU: Make gfn_to_page() always safe").

Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
---
 arch/powerpc/kvm/book3s_pr.c          |  2 +-
 arch/powerpc/kvm/book3s_xive_native.c |  2 +-
 arch/s390/kvm/vsie.c                  |  2 +-
 arch/x86/kvm/lapic.c                  |  2 +-
 include/linux/kvm_host.h              |  7 -------
 virt/kvm/kvm_main.c                   | 15 ++++++---------
 6 files changed, 10 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index a7d7137ea0c8..1bdcd4ee4813 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -645,7 +645,7 @@ static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
         int i;
 
         hpage = gfn_to_page(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
-        if (is_error_page(hpage))
+        if (!hpage)
                 return;
 
         hpage_offset = pte->raddr & ~PAGE_MASK;
diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
index 6e2ebbd8aaac..d9bf1bc3ff61 100644
--- a/arch/powerpc/kvm/book3s_xive_native.c
+++ b/arch/powerpc/kvm/book3s_xive_native.c
@@ -654,7 +654,7 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
         }
 
         page = gfn_to_page(kvm, gfn);
-        if (is_error_page(page)) {
+        if (!page) {
                 srcu_read_unlock(&kvm->srcu, srcu_idx);
                 pr_err("Couldn't get queue page %llx!\n", kvm_eq.qaddr);
                 return -EINVAL;
diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index 54deafd0d698..566697ee37eb 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -661,7 +661,7 @@ static int pin_guest_page(struct kvm *kvm, gpa_t gpa, hpa_t *hpa)
         struct page *page;
 
         page = gfn_to_page(kvm, gpa_to_gfn(gpa));
-        if (is_error_page(page))
+        if (!page)
                 return -EINVAL;
         *hpa = (hpa_t)page_to_phys(page) + (gpa & ~PAGE_MASK);
         return 0;
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index a7172ba59ad2..6d65b36fac29 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2629,7 +2629,7 @@ int kvm_alloc_apic_access_page(struct kvm *kvm)
         }
 
         page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
-        if (is_error_page(page)) {
+        if (!page) {
                 ret = -EFAULT;
                 goto out;
         }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 689e8be873a7..3d9617d1de41 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -153,13 +153,6 @@ static inline bool kvm_is_error_gpa(gpa_t gpa)
         return gpa == INVALID_GPA;
 }
 
-#define KVM_ERR_PTR_BAD_PAGE    (ERR_PTR(-ENOENT))
-
-static inline bool is_error_page(struct page *page)
-{
-        return IS_ERR(page);
-}
-
 #define KVM_REQUEST_MASK           GENMASK(7,0)
 #define KVM_REQUEST_NO_WAKEUP      BIT(8)
 #define KVM_REQUEST_WAIT           BIT(9)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d0788d0a72cc..fd8c212b8de7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3085,19 +3085,14 @@ EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
  */
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 {
-        struct page *page;
         kvm_pfn_t pfn;
 
         pfn = gfn_to_pfn(kvm, gfn);
 
         if (is_error_noslot_pfn(pfn))
-                return KVM_ERR_PTR_BAD_PAGE;
+                return NULL;
 
-        page = kvm_pfn_to_refcounted_page(pfn);
-        if (!page)
-                return KVM_ERR_PTR_BAD_PAGE;
-
-        return page;
+        return kvm_pfn_to_refcounted_page(pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
@@ -3191,7 +3186,8 @@ static void kvm_set_page_accessed(struct page *page)
 
 void kvm_release_page_clean(struct page *page)
 {
-        WARN_ON(is_error_page(page));
+        if (WARN_ON(!page))
+                return;
 
         kvm_set_page_accessed(page);
         put_page(page);
@@ -3215,7 +3211,8 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
 void kvm_release_page_dirty(struct page *page)
 {
-        WARN_ON(is_error_page(page));
+        if (WARN_ON(!page))
+                return;
 
         kvm_set_page_dirty(page);
         kvm_release_page_clean(page);
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:13 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-5-seanjc@google.com>
Subject: [PATCH v12 04/84] KVM: Allow calling kvm_release_page_{clean,dirty}() on a NULL page pointer
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, David Stevens

Allow passing a NULL @page to kvm_release_page_{clean,dirty}(); there's no
tangible benefit to forcing the callers to pre-check @page, and it ends up
generating a lot of duplicate boilerplate code.

Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fd8c212b8de7..656e931ac39e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3186,7 +3186,7 @@ static void kvm_set_page_accessed(struct page *page)
 
 void kvm_release_page_clean(struct page *page)
 {
-        if (WARN_ON(!page))
+        if (!page)
                 return;
 
         kvm_set_page_accessed(page);
@@ -3211,7 +3211,7 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
 void kvm_release_page_dirty(struct page *page)
 {
-        if (WARN_ON(!page))
+        if (!page)
                 return;
 
         kvm_set_page_dirty(page);
-- 
2.46.0.rc1.232.g9752f9e123-goog
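Taken together with the previous patch, the caller-side simplification
looks roughly like the sketch below; kvm_do_something() and map_one_gfn()
are made-up names for illustration, the rest is the KVM API as modified by
this series.

  /*
   * Illustrative only: gfn_to_page() now returns NULL on failure and
   * kvm_release_page_clean() tolerates NULL, so a caller needs neither
   * is_error_page() nor a pre-release NULL check.
   */
  static int map_one_gfn(struct kvm *kvm, gfn_t gfn)
  {
          struct page *page = gfn_to_page(kvm, gfn);
          int ret = -EFAULT;

          if (page)
                  ret = kvm_do_something(page_address(page));

          /* Safe even if gfn_to_page() failed and page is NULL. */
          kvm_release_page_clean(page);
          return ret;
  }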
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:14 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-6-seanjc@google.com>
Subject: [PATCH v12 05/84] KVM: Add kvm_release_page_unused() API to put pages that KVM never consumes
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, David Stevens

Add an API to release an unused page, i.e. to put a page without marking
it accessed or dirty.  The API will be used when KVM faults-in a page but
bails before installing the guest mapping (and other similar flows).

Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3d9617d1de41..c5d39a337aa3 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1201,6 +1201,15 @@ unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable);
 unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
 unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
                                       bool *writable);
+
+static inline void kvm_release_page_unused(struct page *page)
+{
+        if (!page)
+                return;
+
+        put_page(page);
+}
+
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
-- 
2.46.0.rc1.232.g9752f9e123-goog
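For context, a simplified fault-in flow showing where the new helper is
meant to fit; faultin_page_for_gfn() and install_mapping() are hypothetical
stand-ins for KVM's real fault handling, only kvm_release_page_unused() and
mmu_invalidate_retry() are real.

  /*
   * Illustrative only: fault in a page, but bail without consuming it if
   * an mmu_notifier invalidation raced with the fault.  The page was never
   * mapped into the guest, so don't mark it accessed or dirty.
   */
  static int try_map_gfn(struct kvm *kvm, gfn_t gfn, unsigned long mmu_seq)
  {
          struct page *page = faultin_page_for_gfn(kvm, gfn);

          if (!page)
                  return -EFAULT;

          if (mmu_invalidate_retry(kvm, mmu_seq)) {
                  kvm_release_page_unused(page);
                  return -EAGAIN;
          }

          install_mapping(kvm, gfn, page);  /* marks accessed/dirty as needed */
          return 0;
  }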
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:15 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-7-seanjc@google.com>
Subject: [PATCH v12 06/84] KVM: x86/mmu: Skip the "try unsync" path iff the old SPTE was a leaf SPTE
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, David Stevens

Apply make_spte()'s optimization to skip trying to unsync shadow pages if
and only if the old SPTE was a leaf SPTE, as non-leaf SPTEs in direct MMUs
are always writable, i.e. could trigger a false positive and incorrectly
lead to KVM creating a SPTE without write-protecting or marking shadow
pages unsync.

This bug only affects the TDP MMU, as the shadow MMU only overwrites a
shadow-present SPTE when synchronizing SPTEs (and only 4KiB SPTEs can be
unsync).  Specifically, mmu_set_spte() drops any non-leaf SPTEs *before*
calling make_spte(), whereas the TDP MMU can do a direct replacement of a
page table with the leaf SPTE.

Opportunistically update the comment to explain why skipping the unsync
stuff is safe, as opposed to simply saying "it's someone else's problem".

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/spte.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index d4527965e48c..a3baf0cadbee 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -226,12 +226,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                 spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
 
                 /*
-                 * Optimization: for pte sync, if spte was writable the hash
-                 * lookup is unnecessary (and expensive). Write protection
-                 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
-                 * Same reasoning can be applied to dirty page accounting.
+                 * When overwriting an existing leaf SPTE, and the old SPTE was
+                 * writable, skip trying to unsync shadow pages as any relevant
+                 * shadow pages must already be unsync, i.e. the hash lookup is
+                 * unnecessary (and expensive).
+                 *
+                 * The same reasoning applies to dirty page/folio accounting;
+                 * KVM will mark the folio dirty using the old SPTE, thus
+                 * there's no need to immediately mark the new SPTE as dirty.
+                 *
+                 * Note, both cases rely on KVM not changing PFNs without first
+                 * zapping the old SPTE, which is guaranteed by both the shadow
+                 * MMU and the TDP MMU.
                 */
-                if (is_writable_pte(old_spte))
+                if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
                         goto out;
 
                 /*
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:16 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-8-seanjc@google.com>
Subject: [PATCH v12 07/84] KVM: x86/mmu: Mark folio dirty when creating SPTE, not when zapping/modifying
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, David Stevens

Mark pages/folios dirty when creating SPTEs to map PFNs into the guest,
not when zapping or modifying SPTEs, as marking folios dirty when zapping
or modifying SPTEs can be extremely inefficient.  E.g. when KVM is zapping
collapsible SPTEs to reconstitute a hugepage after disabling dirty logging,
KVM will mark every 4KiB pfn as dirty, even though _at least_ 512 pfns are
guaranteed to be in a single folio (the SPTE couldn't potentially be huge
if that weren't the case).

The problem only becomes worse for 1GiB HugeTLB pages, as KVM can mark a
single folio dirty 512*512 times.

Marking a folio dirty when mapping is functionally safe as KVM drops all
relevant SPTEs in response to an mmu_notifier invalidation, i.e. ensures
that the guest can't dirty a folio after access has been removed.

And because KVM already marks folios dirty when zapping/modifying SPTEs
for KVM reasons, i.e. not in response to an mmu_notifier invalidation,
there is no danger of "prematurely" marking a folio dirty.  E.g. if a
filesystem cleans a folio without first removing write access, then there
already exist races where KVM could mark a folio dirty before remote TLBs
are flushed, i.e. before guest writes are guaranteed to stop.

Furthermore, x86 is literally the only architecture that marks folios
dirty on the backend; every other KVM architecture marks folios dirty at
map time.

x86's unique behavior likely stems from the fact that x86's MMU predates
mmu_notifiers.  Long, long ago, before mmu_notifiers were added, marking
pages dirty when zapping SPTEs was logical, and perhaps even necessary, as
KVM held references to pages, i.e. kept a page's refcount elevated while
the page was mapped into the guest.  At the time, KVM's rmap_remove()
simply did:

	if (is_writeble_pte(*spte))
		kvm_release_pfn_dirty(pfn);
	else
		kvm_release_pfn_clean(pfn);

i.e. dropped the refcount and marked the page dirty at the same time.
After mmu_notifiers were introduced, commit acb66dd051d0 ("KVM: MMU: don't
hold pagecount reference for mapped sptes pages") removed the refcount
logic, but kept the dirty logic, i.e. converted the above to:

	if (is_writeble_pte(*spte))
		kvm_release_pfn_dirty(pfn);

And for KVM x86, that's essentially how things have stayed over the last
~15 years, without anyone revisiting *why* KVM marks pages/folios dirty at
zap/modification time, e.g. the behavior was blindly carried forward to
the TDP MMU.

Practically speaking, the only downside to marking a folio dirty during
mapping is that KVM could trigger writeback of memory that was never
actually written.  Except that can't actually happen if KVM marks folios
dirty if and only if a writable SPTE is created (as done here), because
KVM always marks writable SPTEs as dirty during make_spte().  See commit
9b51a63024bd ("KVM: MMU: Explicitly set D-bit for writable spte."), circa
2015.

Note, KVM's access tracking logic for prefetched SPTEs is a bit odd.  If a
guest PTE is dirty and writable, KVM will create a writable SPTE, but then
mark the SPTE for access tracking.  Which isn't wrong, just a bit odd, as
it results in _more_ precise dirty tracking for MMUs _without_ A/D bits.

To keep things simple, mark the folio dirty before access tracking comes
into play, as an access-tracked SPTE can be restored in the fast page
fault path, i.e. without holding mmu_lock.  While writing SPTEs and
accessing memslots outside of mmu_lock is safe, marking a folio dirty is
not.  E.g. if the fast path gets interrupted _just_ after setting a SPTE,
the primary MMU could theoretically invalidate and free a folio before KVM
marks it dirty.  Unlike the shadow MMU, which waits for CPUs to respond to
an IPI, the TDP MMU only guarantees the page tables themselves won't be
freed (via RCU).

Opportunistically update a few stale comments.

Cc: David Matlack
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/mmu.c         | 29 ++++-------------------------
 arch/x86/kvm/mmu/paging_tmpl.h |  6 +++---
 arch/x86/kvm/mmu/spte.c        | 20 ++++++++++++++++++--
 arch/x86/kvm/mmu/tdp_mmu.c     | 12 ------------
 4 files changed, 25 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 901be9e420a4..2e6daa6d1cc0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -547,10 +547,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
                 kvm_set_pfn_accessed(spte_to_pfn(old_spte));
         }
 
-        if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
+        if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte))
                 flush = true;
-                kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-        }
 
         return flush;
 }
@@ -593,9 +591,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
         if (is_accessed_spte(old_spte))
                 kvm_set_pfn_accessed(pfn);
 
-        if (is_dirty_spte(old_spte))
-                kvm_set_pfn_dirty(pfn);
-
         return old_spte;
 }
 
@@ -626,13 +621,6 @@ static bool mmu_spte_age(u64 *sptep)
                 clear_bit((ffs(shadow_accessed_mask) - 1),
                           (unsigned long *)sptep);
         } else {
-                /*
-                 * Capture the dirty status of the page, so that it doesn't get
-                 * lost when the SPTE is marked for access tracking.
-                 */
-                if (is_writable_pte(spte))
-                        kvm_set_pfn_dirty(spte_to_pfn(spte));
-
                 spte = mark_spte_for_access_track(spte);
                 mmu_spte_update_no_track(sptep, spte);
         }
@@ -1275,16 +1263,6 @@ static bool spte_clear_dirty(u64 *sptep)
         return mmu_spte_update(sptep, spte);
 }
 
-static bool spte_wrprot_for_clear_dirty(u64 *sptep)
-{
-        bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
-                                               (unsigned long *)sptep);
-        if (was_writable && !spte_ad_enabled(*sptep))
-                kvm_set_pfn_dirty(spte_to_pfn(*sptep));
-
-        return was_writable;
-}
-
 /*
  * Gets the GFN ready for another round of dirty logging by clearing the
  *      - D bit on ad-enabled SPTEs, and
@@ -1300,7 +1278,8 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
         for_each_rmap_spte(rmap_head, &iter, sptep)
                 if (spte_ad_need_write_protect(*sptep))
-                        flush |= spte_wrprot_for_clear_dirty(sptep);
+                        flush |= test_and_clear_bit(PT_WRITABLE_SHIFT,
+                                                    (unsigned long *)sptep);
                 else
                         flush |= spte_clear_dirty(sptep);
 
@@ -3381,7 +3360,7 @@ static bool fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu,
          * harm. This also avoids the TLB flush needed after setting dirty bit
          * so non-PML cases won't be impacted.
          *
-         * Compare with set_spte where instead shadow_dirty_mask is set.
+         * Compare with make_spte() where instead shadow_dirty_mask is set.
          */
         if (!try_cmpxchg64(sptep, &old_spte, new_spte))
                 return false;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 69941cebb3a8..ef0b3b213e5b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -891,9 +891,9 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 
 /*
  * Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn()) is
- * safe because:
- * - The spte has a reference to the struct page, so the pfn for a given gfn
- *   can't change unless all sptes pointing to it are nuked first.
+ * safe because SPTEs are protected by mmu_notifiers and memslot generations, so
+ * the pfn for a given gfn can't change unless all SPTEs pointing to the gfn are
+ * nuked first.
 *
 * Returns
 *  < 0: failed to sync spte
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index a3baf0cadbee..9b8795bd2f04 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -232,8 +232,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                  * unnecessary (and expensive).
                  *
                  * The same reasoning applies to dirty page/folio accounting;
-                 * KVM will mark the folio dirty using the old SPTE, thus
-                 * there's no need to immediately mark the new SPTE as dirty.
+                 * KVM marked the folio dirty when the old SPTE was created,
+                 * thus there's no need to mark the folio dirty again.
                  *
                  * Note, both cases rely on KVM not changing PFNs without first
                  * zapping the old SPTE, which is guaranteed by both the shadow
@@ -266,12 +266,28 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                   "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
                   get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
 
+        /*
+         * Mark the memslot dirty *after* modifying it for access tracking.
+         * Unlike folios, memslots can be safely marked dirty out of mmu_lock,
+         * i.e. in the fast page fault handler.
+         */
         if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
                 /* Enforced by kvm_mmu_hugepage_adjust. */
                 WARN_ON_ONCE(level > PG_LEVEL_4K);
                 mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
         }
 
+        /*
+         * If the page that KVM got from the primary MMU is writable, i.e. if
+         * it's host-writable, mark the page/folio dirty.  As alluded to above,
+         * folios can't be safely marked dirty in the fast page fault handler,
+         * and so KVM must (somewhat) speculatively mark the folio dirty even
+         * though it isn't guaranteed to be written as KVM won't mark the folio
+         * dirty if/when the SPTE is made writable.
+         */
+        if (host_writable)
+                kvm_set_pfn_dirty(pfn);
+
         *new_spte = spte;
         return wrprot;
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c7dc49ee7388..7ac43d1ce918 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -511,10 +511,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
         if (is_leaf != was_leaf)
                 kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
 
-        if (was_leaf && is_dirty_spte(old_spte) &&
-            (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-                kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-
         /*
          * Recursively handle child PTs if the change removed a subtree from
          * the paging structure. Note the WARN on the PFN changing without the
@@ -1248,13 +1244,6 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
                                                   iter->level);
                 new_spte = iter->old_spte & ~shadow_accessed_mask;
         } else {
-                /*
-                 * Capture the dirty status of the page, so that it doesn't get
-                 * lost when the SPTE is marked for access tracking.
-                 */
-                if (is_writable_pte(iter->old_spte))
-                        kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
-
                 new_spte = mark_spte_for_access_track(iter->old_spte);
                 iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
                                                         iter->old_spte, new_spte,
@@ -1595,7 +1584,6 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
                 trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
                                                iter.old_spte,
                                                iter.old_spte & ~dbit);
-                kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
         }
 
         rcu_read_unlock();
-- 
2.46.0.rc1.232.g9752f9e123-goog
Date: Fri, 26 Jul 2024 16:51:17 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-9-seanjc@google.com>
Subject: [PATCH v12 08/84] KVM: x86/mmu: Mark page/folio accessed only when zapping leaf SPTEs
From: Sean Christopherson

Mark folios as accessed only when zapping leaf SPTEs, which is a rough
heuristic for "only in response to an mmu_notifier invalidation".  Page
aging and LRUs are tolerant of false negatives, i.e. KVM doesn't need to
be precise for correctness, and re-marking folios as accessed when zapping
entire roots or when zapping collapsible SPTEs is expensive and adds very
little value.

E.g. when a VM is dying, all of its memory is being freed; marking folios
accessed at that time provides no known value.  Similarly, because KVM
marks folios as accessed when creating SPTEs, marking all folios as
accessed when userspace happens to delete a memslot doesn't add value.
The folio was marked accessed when the old SPTE was created, and will be
marked accessed yet again if a vCPU accesses the pfn again after reloading
a new root.  Zapping collapsible SPTEs is a similar story; marking folios
accessed just because userspace disabled dirty logging is a side effect of
KVM behavior, not a deliberate goal.

As an intermediate step, a.k.a. bisection point, towards *never* marking
folios accessed when dropping SPTEs, mark folios accessed when the primary
MMU might be invalidating mappings, as such zappings are not KVM initiated,
i.e. might actually be related to page aging and LRU activity.

Note, x86 is the only KVM architecture that "double dips"; every other arch
marks pfns as accessed only when mapping into the guest, not when mapping
into the guest _and_ when removing from the guest.
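The policy above, i.e. propagate Accessed state back to the primary MMU only
for targeted, invalidation-driven removals and skip it for bulk teardown, can
be illustrated outside of KVM.  The following is a minimal stand-alone sketch,
not KVM code: secondary_map, zap_one() and zap_all() are invented names used
purely for illustration.

/*
 * Toy model of "mark accessed only on targeted zaps": a secondary map
 * caches entries from a primary table and tracks an accessed bit.
 * Removing one entry (e.g. due to an invalidation) propagates the
 * accessed bit back; bulk teardown does not, because aging tolerates
 * false negatives and teardown-time feedback has no consumer.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_ENTRIES 4

struct entry {
	bool present;
	bool accessed;
};

static struct entry secondary_map[NR_ENTRIES];
static bool primary_accessed[NR_ENTRIES];

static void touch(int idx)
{
	if (secondary_map[idx].present)
		secondary_map[idx].accessed = true;
}

/* Targeted zap, e.g. in response to an invalidation of one entry. */
static void zap_one(int idx)
{
	if (secondary_map[idx].present && secondary_map[idx].accessed)
		primary_accessed[idx] = true;	/* feed page aging */
	secondary_map[idx] = (struct entry){ 0 };
}

/* Bulk teardown: drop everything, skip the accessed feedback. */
static void zap_all(void)
{
	for (int i = 0; i < NR_ENTRIES; i++)
		secondary_map[i] = (struct entry){ 0 };
}

int main(void)
{
	for (int i = 0; i < NR_ENTRIES; i++)
		secondary_map[i].present = true;

	touch(1);
	zap_one(1);	/* accessed bit survives in the primary table */
	zap_all();	/* nothing propagated, by design */

	for (int i = 0; i < NR_ENTRIES; i++)
		printf("entry %d: primary accessed = %d\n", i, primary_accessed[i]);
	return 0;
}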
Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- Documentation/virt/kvm/locking.rst | 76 +++++++++++++++--------------- arch/x86/kvm/mmu/mmu.c | 4 +- arch/x86/kvm/mmu/tdp_mmu.c | 7 ++- 3 files changed, 43 insertions(+), 44 deletions(-) diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/lo= cking.rst index 02880d5552d5..8b3bb9fe60bf 100644 --- a/Documentation/virt/kvm/locking.rst +++ b/Documentation/virt/kvm/locking.rst @@ -138,49 +138,51 @@ Then, we can ensure the dirty bitmaps is correctly se= t for a gfn. =20 2) Dirty bit tracking =20 -In the origin code, the spte can be fast updated (non-atomically) if the +In the original code, the spte can be fast updated (non-atomically) if the spte is read-only and the Accessed bit has already been set since the Accessed bit and Dirty bit can not be lost. =20 But it is not true after fast page fault since the spte can be marked writable between reading spte and updating spte. Like below case: =20 -+------------------------------------------------------------------------+ -| At the beginning:: | -| | -| spte.W =3D 0 | -| spte.Accessed =3D 1 | -+------------------------------------+-----------------------------------+ -| CPU 0: | CPU 1: | -+------------------------------------+-----------------------------------+ -| In mmu_spte_clear_track_bits():: | | -| | | -| old_spte =3D *spte; | = | -| | | -| | | -| /* 'if' condition is satisfied. */| | -| if (old_spte.Accessed =3D=3D 1 && | = | -| old_spte.W =3D=3D 0) | = | -| spte =3D 0ull; | = | -+------------------------------------+-----------------------------------+ -| | on fast page fault path:: | -| | | -| | spte.W =3D 1 = | -| | | -| | memory write on the spte:: | -| | | -| | spte.Dirty =3D 1 = | -+------------------------------------+-----------------------------------+ -| :: | | -| | | -| else | | -| old_spte =3D xchg(spte, 0ull) | = | -| if (old_spte.Accessed =3D=3D 1) | = | -| kvm_set_pfn_accessed(spte.pfn);| | -| if (old_spte.Dirty =3D=3D 1) | = | -| kvm_set_pfn_dirty(spte.pfn); | | -| OOPS!!! | | -+------------------------------------+-----------------------------------+ ++-------------------------------------------------------------------------+ +| At the beginning:: | +| | +| spte.W =3D 0 = | +| spte.Accessed =3D 1 = | ++-------------------------------------+-----------------------------------+ +| CPU 0: | CPU 1: | ++-------------------------------------+-----------------------------------+ +| In mmu_spte_update():: | | +| | | +| old_spte =3D *spte; | = | +| | | +| | | +| /* 'if' condition is satisfied. */ | | +| if (old_spte.Accessed =3D=3D 1 && | = | +| old_spte.W =3D=3D 0) | = | +| spte =3D new_spte; | = | ++-------------------------------------+-----------------------------------+ +| | on fast page fault path:: | +| | | +| | spte.W =3D 1 = | +| | | +| | memory write on the spte:: | +| | | +| | spte.Dirty =3D 1 = | ++-------------------------------------+-----------------------------------+ +| :: | | +| | | +| else | | +| old_spte =3D xchg(spte, new_spte);| = | +| if (old_spte.Accessed && | | +| !new_spte.Accessed) | | +| flush =3D true; | = | +| if (old_spte.Dirty && | | +| !new_spte.Dirty) | | +| flush =3D true; | = | +| OOPS!!! | | ++-------------------------------------+-----------------------------------+ =20 The Dirty bit is lost in this case. 
=20 diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 2e6daa6d1cc0..58b70328b20c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -542,10 +542,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte) * to guarantee consistency between TLB and page tables. */ =20 - if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) { + if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) flush =3D true; - kvm_set_pfn_accessed(spte_to_pfn(old_spte)); - } =20 if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) flush =3D true; diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 7ac43d1ce918..d1de5f28c445 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -520,10 +520,6 @@ static void handle_changed_spte(struct kvm *kvm, int a= s_id, gfn_t gfn, if (was_present && !was_leaf && (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed))) handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared); - - if (was_leaf && is_accessed_spte(old_spte) && - (!is_present || !is_accessed_spte(new_spte) || pfn_changed)) - kvm_set_pfn_accessed(spte_to_pfn(old_spte)); } =20 static inline int __must_check __tdp_mmu_set_spte_atomic(struct tdp_iter *= iter, @@ -865,6 +861,9 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct k= vm_mmu_page *root, =20 tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE); =20 + if (is_accessed_spte(iter.old_spte)) + kvm_set_pfn_accessed(spte_to_pfn(iter.old_spte)); + /* * Zappings SPTEs in invalid roots doesn't require a TLB flush, * see kvm_tdp_mmu_zap_invalidated_roots() for details. --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DE42B15AACA for ; Fri, 26 Jul 2024 23:52:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037979; cv=none; b=RVjjGRciQHn/0rZmirEwFCeyQPGOZIL4mNrJc2ae+Lvx61IOt82rueE5s8ym8v2A1OyfMlluFndvG7W1aPItUJyD9dZhGHTyL/7UtQdZ/MOlg2hPURGTYdpRrnSBK2bd4aLT7vtIvTKGAZn3RU7jjYrmPuniBn3J4LaG8pRxLpY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037979; c=relaxed/simple; bh=P7WAicOgvpnFP/OgYcMHNilB3cYbvZdOfqdCkUi/Hw0=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=UIPMMXlH+0ehL1YuhXvyYLOgar01mm9k5WlpvaHWhG/fsj+HuApgYZ+3dO6Y1EuFZswI5gJXTlkn7J+WdUKrPcz4I0c3vvoF9eupBUo7gMPRyDmJfhRRda+N6nnavhd/qrz4T7V02My0l3nMUqu5kGe6GMRauZi63l8BCNBaLrQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=ZoCQUIfK; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="ZoCQUIfK" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-66480c1a6b5so7085567b3.1 for ; Fri, 26 Jul 2024 16:52:56 -0700 (PDT) 
Date: Fri, 26 Jul 2024 16:51:18 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-10-seanjc@google.com>
Subject: [PATCH v12 09/84] KVM: x86/mmu: Don't force flush if SPTE update clears Accessed bit
From: Sean Christopherson

Don't force a TLB flush if mmu_spte_update() clears the Accessed bit, as
access tracking tolerates false negatives, as evidenced by the mmu_notifier
hooks that explicitly test and age SPTEs without doing a TLB flush.

In practice, this is very nearly a nop.  spte_write_protect() and
spte_clear_dirty() never clear the Accessed bit.  make_spte() always sets
the Accessed bit for !prefetch scenarios.
FNAME(sync_spte) only sets SPTE if the protection bits are changing, i.e. if a flush will be needed regardless of the Accessed bits. And FNAME(pte_prefetch) sets SPTE if and only if the old SPTE is !PRESENT. That leaves kvm_arch_async_page_ready() as the one path that will generate a !ACCESSED SPTE *and* overwrite a PRESENT SPTE. And that's very arguably a bug, as clobbering a valid SPTE in that case is nonsensical. Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/mmu.c | 31 +++++++++---------------------- 1 file changed, 9 insertions(+), 22 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 58b70328b20c..b7642f1f993f 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -518,37 +518,24 @@ static u64 mmu_spte_update_no_track(u64 *sptep, u64 n= ew_spte) * TLBs must be flushed. Otherwise rmap_write_protect will find a read-only * spte, even though the writable spte might be cached on a CPU's TLB. * + * Remote TLBs also need to be flushed if the Dirty bit is cleared, as fal= se + * negatives are not acceptable, e.g. if KVM is using D-bit based PML on V= MX. + * + * Don't flush if the Accessed bit is cleared, as access tracking tolerates + * false negatives, and the one path that does care about TLB flushes, + * kvm_mmu_notifier_clear_flush_young(), uses mmu_spte_update_no_track(). + * * Returns true if the TLB needs to be flushed */ static bool mmu_spte_update(u64 *sptep, u64 new_spte) { - bool flush =3D false; u64 old_spte =3D mmu_spte_update_no_track(sptep, new_spte); =20 if (!is_shadow_present_pte(old_spte)) return false; =20 - /* - * For the spte updated out of mmu-lock is safe, since - * we always atomically update it, see the comments in - * spte_has_volatile_bits(). - */ - if (is_mmu_writable_spte(old_spte) && - !is_writable_pte(new_spte)) - flush =3D true; - - /* - * Flush TLB when accessed/dirty states are changed in the page tables, - * to guarantee consistency between TLB and page tables. 
- */ - - if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) - flush =3D true; - - if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) - flush =3D true; - - return flush; + return (is_mmu_writable_spte(old_spte) && !is_writable_pte(new_spte)) || + (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)); } =20 /* --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-yb1-f201.google.com (mail-yb1-f201.google.com [209.85.219.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C40E015B541 for ; Fri, 26 Jul 2024 23:52:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037980; cv=none; b=Ta/WoMB8LX5ocD7wa2fTuxTlhHzYlRYF+KVlkiwrsxln8xcdv0twVVcZmXmVwAzD5nSULTPo2ka6wgd7FKE8YWEdrat9lC2aAEnZ1W4aKUY9Rar1q0qHm+wbAxexQnT3a5mTYvlXEpcgPVIKA+2/RfXnOIHXADxvOIh5Xmi7iTU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037980; c=relaxed/simple; bh=eYPpE7SnzshOjYhzmXkcsAp87ERqXcDPZzZJ9L8u3ZE=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=CbCWT1aivE6RcBY9o9ysljWDQE1QNu7B0Yo7cjTzyaCr3LvdDD+VjahY+T9W4Ws0u20QCQdV0AZl4Y3gvuTeoVtefIzTQ70D2ZyqodvcGUINFhaTQbC2FGu5xXzuzATteFuQSxcEKtXvewXJP8vjdSTqVzrgW75YuxzoctI+NQU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=nu6YJ8ah; arc=none smtp.client-ip=209.85.219.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="nu6YJ8ah" Received: by mail-yb1-f201.google.com with SMTP id 3f1490d57ef6-e08723e9d7cso412267276.0 for ; Fri, 26 Jul 2024 16:52:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722037978; x=1722642778; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=e1uaEfwlbucR9PkkwcUB/hlZrD43F2tYkTCYXf9cR5U=; b=nu6YJ8ahAfVxD3gSiVkVf2rbA0LtCA3av8e+T60YGtz4ztjNWZ8o5aOSn93X3kfAJk TR6hbY+cYWxFUpzhRMkajkVViLQLXFMuUroBPWdzyD4BobLFbl+lCSS4kc/61Ht55hjT QHqcGn0HcJ4pqbNqTOUjzzLvzaYCGlTYrnRjRAunLCg7NwZJ7fBiXIm382qdJMeehnIY P/30SKbgqTT4gaiecwWTPkVJIvAgLRYbxMGyFF/ocj/uexD5bqU2p/3/3nIsoaw6nBWc jDYN4AB7laan4qpa/AvdHPoOPeMXGa5FPcJ/c6kYtpfuAwHuJ+kiPwFceYlcGe6RS1SR yehw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722037978; x=1722642778; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=e1uaEfwlbucR9PkkwcUB/hlZrD43F2tYkTCYXf9cR5U=; b=Oltv1HZWdAEVZuN1fi7bKz2JUPDgdF+dM3IT6HaDlSxI+DItopWDsaET2oBVEcO67e +ddB8vT8SF4d7vAyusDqMVTu18MfoEgZU7dY3Vxrk2TBuh5bjCuN7IUygme8x3wtm+1e KcxLfdxMkUNeJoa+BaRWTRpFmrfQDyjxPJwVVOGsu6dhSfLq1nWCPwqD0qZYe3kHdsMB +P52yPDmFBCU3n70ZEnnDVOpldV026aF52uxp6u/e7U4mVH4dPdkevDLin0hlxaFPCF1 
Date: Fri, 26 Jul 2024 16:51:19 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-11-seanjc@google.com>
Subject: [PATCH v12 10/84] KVM: x86/mmu: Use gfn_to_page_many_atomic() when prefetching indirect PTEs
From: Sean Christopherson

Use gfn_to_page_many_atomic() instead of gfn_to_pfn_memslot_atomic() when
prefetching indirect PTEs (direct_pte_prefetch_many() already uses the
"to page" APIs).  Functionally, the two are subtly equivalent, as the
"to pfn" API short-circuits hva_to_pfn() if hva_to_pfn_fast() fails, i.e.
is just a wrapper for get_user_page_fast_only()/get_user_pages_fast_only().

Switching to the "to page" API will allow dropping the @atomic parameter
from the entire hva_to_pfn() callchain.
Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/paging_tmpl.h | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index ef0b3b213e5b..6b215a932158 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -535,8 +535,8 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_= mmu_page *sp, { struct kvm_memory_slot *slot; unsigned pte_access; + struct page *page; gfn_t gfn; - kvm_pfn_t pfn; =20 if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte)) return false; @@ -549,12 +549,11 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kv= m_mmu_page *sp, if (!slot) return false; =20 - pfn =3D gfn_to_pfn_memslot_atomic(slot, gfn); - if (is_error_pfn(pfn)) + if (gfn_to_page_many_atomic(slot, gfn, &page, 1) !=3D 1) return false; =20 - mmu_set_spte(vcpu, slot, spte, pte_access, gfn, pfn, NULL); - kvm_release_pfn_clean(pfn); + mmu_set_spte(vcpu, slot, spte, pte_access, gfn, page_to_pfn(page), NULL); + kvm_release_page_clean(page); return true; } =20 --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pl1-f202.google.com (mail-pl1-f202.google.com [209.85.214.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 705B915CD74 for ; Fri, 26 Jul 2024 23:53:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037982; cv=none; b=Z9qKk8AJ1wXFxL7ozuU57C0oME+8YGLspGGVV7KprZNph7o4thB0yFT8FwIQylKABRZXkswiLTL+tnZ0jgq67zsE5qQ2Q5RuMt31YkTee2CqmqIO1mgpHmvV+uFCKYvr23kGRv2xX0rIsqC9JTL8Ay2HAv1bs5YGFPQ5Xqwunq8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037982; c=relaxed/simple; bh=66rQYSeeqJfCAKMkX7WI4GHJ+JHcgyOho5Y4s7cGJ3c=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=GlmgcP6suKBMzjptp0Bjm2vJeB9Xty1OxjxRKJ4yfeBajXeoGIbwlySXzzjaHZmcTMOSN/3DtS4Ug2f9hoYH5NqQjUetFE3htJGvyMmZX0Rbit+Oa818NCxUsAmx+1iqzCRNmZdYNDEKlrZiKMOjE8fszBRpbJ/Qk7pKdD0ouo8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=xwcgZZWz; arc=none smtp.client-ip=209.85.214.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="xwcgZZWz" Received: by mail-pl1-f202.google.com with SMTP id d9443c01a7336-1fc58790766so11052075ad.3 for ; Fri, 26 Jul 2024 16:53:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722037980; x=1722642780; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=OCfqpOOOZn7LpwpHWWHUZjndC9uE6mObDYxnDRBBpAY=; b=xwcgZZWzaAnbOeWQO2etkn51kzgLxaOg55beKjXSpvYvfM5MIOscZc7VNHVmnhZwSB VTROrwqNTOmAtWB2TqyS+ohyiigPxQ8NEKWZOR3zl5AzQe5P2GW07NFNW62U+8UyRG+R 3eNVWgKTx2JlLuEkVErZd/JsFiV/AYVLPE/WTu0g5XRzBBPrRw7FRifzDJ3rabvjDb6G 
Date: Fri, 26 Jul 2024 16:51:20 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-12-seanjc@google.com>
Subject: [PATCH v12 11/84] KVM: Rename gfn_to_page_many_atomic() to kvm_prefetch_pages()
From: Sean Christopherson

Rename gfn_to_page_many_atomic() to kvm_prefetch_pages() to try and
communicate its true purpose, as the "atomic" aspect is essentially a
side effect of the fact that x86 uses the API while holding mmu_lock.
E.g. even if mmu_lock weren't held, KVM wouldn't want to fault-in pages,
as the goal is to opportunistically grab surrounding pages that have
already been accessed and/or dirtied by the host, and to do so quickly.
Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/paging_tmpl.h | 2 +- include/linux/kvm_host.h | 4 ++-- virt/kvm/kvm_main.c | 6 +++--- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b7642f1f993f..c1914f02c5e1 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2912,7 +2912,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *= vcpu, if (!slot) return -1; =20 - ret =3D gfn_to_page_many_atomic(slot, gfn, pages, end - start); + ret =3D kvm_prefetch_pages(slot, gfn, pages, end - start); if (ret <=3D 0) return -1; =20 diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 6b215a932158..bc801d454f41 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -549,7 +549,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_= mmu_page *sp, if (!slot) return false; =20 - if (gfn_to_page_many_atomic(slot, gfn, &page, 1) !=3D 1) + if (kvm_prefetch_pages(slot, gfn, &page, 1) !=3D 1) return false; =20 mmu_set_spte(vcpu, slot, spte, pte_access, gfn, page_to_pfn(page), NULL); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c5d39a337aa3..79fed9fea638 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1192,8 +1192,8 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm); void kvm_arch_flush_shadow_memslot(struct kvm *kvm, struct kvm_memory_slot *slot); =20 -int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn, - struct page **pages, int nr_pages); +int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn, + struct page **pages, int nr_pages); =20 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn); unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 656e931ac39e..803299778cf8 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -3060,8 +3060,8 @@ kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, = gfn_t gfn) } EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn); =20 -int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn, - struct page **pages, int nr_pages) +int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn, + struct page **pages, int nr_pages) { unsigned long addr; gfn_t entry =3D 0; @@ -3075,7 +3075,7 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *s= lot, gfn_t gfn, =20 return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages); } -EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic); +EXPORT_SYMBOL_GPL(kvm_prefetch_pages); =20 /* * Do not use this helper unless you are absolutely certain the gfn _must_= be --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-yw1-f202.google.com (mail-yw1-f202.google.com [209.85.128.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E3CD815EFC0 for ; Fri, 26 Jul 2024 23:53:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037984; cv=none; b=tBdsOdGJopzGmiRg2kyhFQkO1ArY6tMUHidAe6X2gqyV1G/P1VUhPhNxrJNeg5LYbadLRwok/yElkgADH8fZI88Z6KhVI7ZNpSodE0jxk1qpGlC5GcDxffq+vhx6FrZciKXXlVtUM4iaQkT94RkcCFoONBPxX+kZlRkX3cJfuZY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037984; c=relaxed/simple; 
Date: Fri, 26 Jul 2024 16:51:21 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-13-seanjc@google.com>
Subject: [PATCH v12 12/84] KVM: Drop @atomic param from gfn=>pfn and
hva=>pfn APIs From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Drop @atomic from the myriad "to_pfn" APIs now that all callers pass "false". No functional change intended. Signed-off-by: Sean Christopherson Reviewed-by: Alex Benn=C3=A9e Tested-by: Alex Benn=C3=A9e --- Documentation/virt/kvm/locking.rst | 4 +-- arch/arm64/kvm/mmu.c | 2 +- arch/powerpc/kvm/book3s_64_mmu_hv.c | 2 +- arch/powerpc/kvm/book3s_64_mmu_radix.c | 2 +- arch/x86/kvm/mmu/mmu.c | 12 ++++----- include/linux/kvm_host.h | 4 +-- virt/kvm/kvm_main.c | 36 +++++--------------------- virt/kvm/kvm_mm.h | 4 +-- virt/kvm/pfncache.c | 2 +- 9 files changed, 22 insertions(+), 46 deletions(-) diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/lo= cking.rst index 8b3bb9fe60bf..9af511e7aa53 100644 --- a/Documentation/virt/kvm/locking.rst +++ b/Documentation/virt/kvm/locking.rst @@ -126,8 +126,8 @@ We dirty-log for gfn1, that means gfn2 is lost in dirty= -bitmap. For direct sp, we can easily avoid it since the spte of direct sp is fixed to gfn. For indirect sp, we disabled fast page fault for simplicity. =20 -A solution for indirect sp could be to pin the gfn, for example via -kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning: +A solution for indirect sp could be to pin the gfn before the cmpxchg. Af= ter +the pinning: =20 - We have held the refcount of pfn; that means the pfn can not be freed and be reused for another gfn. 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 6981b1bc0946..30dd62f56a11 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1562,7 +1562,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, mmu_seq =3D vcpu->kvm->mmu_invalidate_seq; mmap_read_unlock(current->mm); =20 - pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL, + pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, NULL, write_fault, &writable, NULL); if (pfn =3D=3D KVM_PFN_ERR_HWPOISON) { kvm_send_hwpoison_signal(hva, vma_shift); diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_= 64_mmu_hv.c index 1b51b1c4713b..8cd02ca4b1b8 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -613,7 +613,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu, write_ok =3D true; } else { /* Call KVM generic code to do the slow-path check */ - pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL, + pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, NULL, writing, &write_ok, NULL); if (is_error_noslot_pfn(pfn)) return -EFAULT; diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book= 3s_64_mmu_radix.c index 408d98f8a514..26a969e935e3 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -852,7 +852,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcp= u, unsigned long pfn; =20 /* Call KVM generic code to do the slow-path check */ - pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL, + pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, NULL, writing, upgrade_p, NULL); if (is_error_noslot_pfn(pfn)) return -EFAULT; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index c1914f02c5e1..d76390ef49b2 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4334,9 +4334,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault return kvm_faultin_pfn_private(vcpu, fault); =20 async =3D false; - fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, false, - &async, fault->write, - &fault->map_writable, &fault->hva); + fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, &asyn= c, + fault->write, &fault->map_writable, + &fault->hva); if (!async) return RET_PF_CONTINUE; /* *pfn has correct page already */ =20 @@ -4356,9 +4356,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault * to wait for IO. Note, gup always bails if it is unable to quickly * get a page and a fatal signal, i.e. SIGKILL, is pending. 
*/ - fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true, - NULL, fault->write, - &fault->map_writable, &fault->hva); + fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, NULL, + fault->write, &fault->map_writable, + &fault->hva); return RET_PF_CONTINUE; } =20 diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 79fed9fea638..6d4503e8eabe 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1217,9 +1217,8 @@ kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn); kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable); kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn= ); -kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gf= n_t gfn); kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, - bool atomic, bool interruptible, bool *async, + bool interruptible, bool *async, bool write_fault, bool *writable, hva_t *hva); =20 void kvm_release_pfn_clean(kvm_pfn_t pfn); @@ -1300,7 +1299,6 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn); =20 struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn= _t gfn); -kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn); int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *ma= p); void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool = dirty); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 803299778cf8..84c73b4fc804 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2929,7 +2929,6 @@ static int hva_to_pfn_remapped(struct vm_area_struct = *vma, /* * Pin guest page in memory and return its pfn. * @addr: host virtual address which maps memory to the guest - * @atomic: whether this function is forbidden from sleeping * @interruptible: whether the process can be interrupted by non-fatal sig= nals * @async: whether this function need to wait IO complete if the * host page is not in the memory @@ -2941,22 +2940,16 @@ static int hva_to_pfn_remapped(struct vm_area_struc= t *vma, * 2): @write_fault =3D false && @writable, @writable will tell the caller * whether the mapping is writable. 
*/ -kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, - bool *async, bool write_fault, bool *writable) +kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async, + bool write_fault, bool *writable) { struct vm_area_struct *vma; kvm_pfn_t pfn; int npages, r; =20 - /* we can do it either atomically or asynchronously, not both */ - BUG_ON(atomic && async); - if (hva_to_pfn_fast(addr, write_fault, writable, &pfn)) return pfn; =20 - if (atomic) - return KVM_PFN_ERR_FAULT; - npages =3D hva_to_pfn_slow(addr, async, write_fault, interruptible, writable, &pfn); if (npages =3D=3D 1) @@ -2993,7 +2986,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic,= bool interruptible, } =20 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, - bool atomic, bool interruptible, bool *async, + bool interruptible, bool *async, bool write_fault, bool *writable, hva_t *hva) { unsigned long addr =3D __gfn_to_hva_many(slot, gfn, NULL, write_fault); @@ -3015,39 +3008,24 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_mem= ory_slot *slot, gfn_t gfn, writable =3D NULL; } =20 - return hva_to_pfn(addr, atomic, interruptible, async, write_fault, - writable); + return hva_to_pfn(addr, interruptible, async, write_fault, writable); } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); =20 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { - return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false, - NULL, write_fault, writable, NULL); + return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL, + write_fault, writable, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); =20 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true, - NULL, NULL); + return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); =20 -kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gf= n_t gfn) -{ - return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true, - NULL, NULL); -} -EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic); - -kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn) -{ - return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn); -} -EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic); - kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn) { return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn); diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h index 715f19669d01..a3fa86f60d6c 100644 --- a/virt/kvm/kvm_mm.h +++ b/virt/kvm/kvm_mm.h @@ -20,8 +20,8 @@ #define KVM_MMU_UNLOCK(kvm) spin_unlock(&(kvm)->mmu_lock) #endif /* KVM_HAVE_MMU_RWLOCK */ =20 -kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, - bool *async, bool write_fault, bool *writable); +kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async, + bool write_fault, bool *writable); =20 #ifdef CONFIG_HAVE_KVM_PFNCACHE void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index f0039efb9e1e..58c706a610e5 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -198,7 +198,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cac= he *gpc) } =20 /* We always request a writeable mapping */ - new_pfn =3D hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL); + new_pfn =3D hva_to_pfn(gpc->uhva, false, NULL, true, NULL); if (is_error_noslot_pfn(new_pfn)) goto out_error; =20 
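After this patch, hva_to_pfn() always runs in a context that is allowed to
sleep, so the fast path becomes a pure optimization rather than a correctness
requirement.  The snippet below is a stand-alone toy model of that
simplification under invented names (lookup_fast(), lookup_slow(), resolve());
it is not KVM code and assumes nothing about the real internals.

/*
 * Toy model of dropping an "atomic" (no-sleep) mode from a two-tier
 * lookup.  Previously, callers that could not sleep forced an early
 * bail-out after the fast path; with no such callers left, the mode
 * flag and the bail-out disappear and the fast path is only an
 * optimization.
 */
#include <stdbool.h>
#include <stdio.h>

static bool lookup_fast(unsigned long addr, unsigned long *pfn)
{
	/* Pretend only low addresses are already mapped. */
	if (addr < 0x10000) {
		*pfn = addr >> 12;
		return true;
	}
	return false;
}

static unsigned long lookup_slow(unsigned long addr)
{
	/* In a real implementation this path may block to fault in the page. */
	return addr >> 12;
}

static unsigned long resolve(unsigned long addr)
{
	unsigned long pfn;

	if (lookup_fast(addr, &pfn))
		return pfn;
	return lookup_slow(addr);
}

int main(void)
{
	printf("fast path: pfn %#lx\n", resolve(0x2000));
	printf("slow path: pfn %#lx\n", resolve(0x123000));
	return 0;
}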
--
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:22 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-14-seanjc@google.com>
Subject: [PATCH v12 13/84] KVM: Annotate that all paths in hva_to_pfn() might sleep
From: Sean Christopherson

Now that hva_to_pfn() no longer supports being called in atomic context,
move the might_sleep() annotation from hva_to_pfn_slow() to hva_to_pfn().

Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 84c73b4fc804..03af1a0090b1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2807,8 +2807,6 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 	struct page *page;
 	int npages;
 
-	might_sleep();
-
 	if (writable)
 		*writable = write_fault;
 
@@ -2947,6 +2945,8 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
 	kvm_pfn_t pfn;
 	int npages, r;
 
+	might_sleep();
+
 	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
 		return pfn;
 
--
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:23 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-15-seanjc@google.com>
Subject: [PATCH v12 14/84] KVM: Replace "async" pointer in gfn=>pfn with "no_wait" and error code
From: Sean Christopherson
Cc:
kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: David Stevens Add a pfn error code to communicate that hva_to_pfn() failed because I/O was needed and disallowed, and convert @async to a constant @no_wait boolean. This will allow eliminating the @no_wait param by having callers pass in FOLL_NOWAIT along with other FOLL_* flags. Signed-off-by: David Stevens Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/mmu.c | 18 +++++++++++------- include/linux/kvm_host.h | 3 ++- virt/kvm/kvm_main.c | 29 +++++++++++++++-------------- virt/kvm/kvm_mm.h | 2 +- virt/kvm/pfncache.c | 4 ++-- 5 files changed, 31 insertions(+), 25 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index d76390ef49b2..eb9ad0283fd5 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4328,17 +4328,21 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu = *vcpu, =20 static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault = *fault) { - bool async; - if (fault->is_private) return kvm_faultin_pfn_private(vcpu, fault); =20 - async =3D false; - fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, &asyn= c, + fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true, fault->write, &fault->map_writable, &fault->hva); - if (!async) - return RET_PF_CONTINUE; /* *pfn has correct page already */ + + /* + * If resolving the page failed because I/O is needed to fault-in the + * page, then either set up an asynchronous #PF to do the I/O, or if + * doing an async #PF isn't possible, retry with I/O allowed. All + * other failures are terminal, i.e. retrying won't help. + */ + if (fault->pfn !=3D KVM_PFN_ERR_NEEDS_IO) + return RET_PF_CONTINUE; =20 if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) { trace_kvm_try_async_get_page(fault->addr, fault->gfn); @@ -4356,7 +4360,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault * to wait for IO. Note, gup always bails if it is unable to quickly * get a page and a fatal signal, i.e. SIGKILL, is pending. 
*/ - fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, NULL, + fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, true, fault->write, &fault->map_writable, &fault->hva); return RET_PF_CONTINUE; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 6d4503e8eabe..92b2922e2216 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -97,6 +97,7 @@ #define KVM_PFN_ERR_HWPOISON (KVM_PFN_ERR_MASK + 1) #define KVM_PFN_ERR_RO_FAULT (KVM_PFN_ERR_MASK + 2) #define KVM_PFN_ERR_SIGPENDING (KVM_PFN_ERR_MASK + 3) +#define KVM_PFN_ERR_NEEDS_IO (KVM_PFN_ERR_MASK + 4) =20 /* * error pfns indicate that the gfn is in slot but faild to @@ -1218,7 +1219,7 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn,= bool write_fault, bool *writable); kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn= ); kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, - bool interruptible, bool *async, + bool interruptible, bool no_wait, bool write_fault, bool *writable, hva_t *hva); =20 void kvm_release_pfn_clean(kvm_pfn_t pfn); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 03af1a0090b1..c2efdfe26d5b 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2789,7 +2789,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool = write_fault, * The slow path to get the pfn of the specified host virtual address, * 1 indicates success, -errno is returned if error is detected. */ -static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fau= lt, +static int hva_to_pfn_slow(unsigned long addr, bool no_wait, bool write_fa= ult, bool interruptible, bool *writable, kvm_pfn_t *pfn) { /* @@ -2812,7 +2812,7 @@ static int hva_to_pfn_slow(unsigned long addr, bool *= async, bool write_fault, =20 if (write_fault) flags |=3D FOLL_WRITE; - if (async) + if (no_wait) flags |=3D FOLL_NOWAIT; if (interruptible) flags |=3D FOLL_INTERRUPTIBLE; @@ -2928,8 +2928,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct = *vma, * Pin guest page in memory and return its pfn. * @addr: host virtual address which maps memory to the guest * @interruptible: whether the process can be interrupted by non-fatal sig= nals - * @async: whether this function need to wait IO complete if the - * host page is not in the memory + * @no_wait: whether or not this function need to wait IO complete if the + * host page is not in the memory * @write_fault: whether we should get a writable host page * @writable: whether it allows to map a writable host page for !@write_fa= ult * @@ -2938,7 +2938,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct = *vma, * 2): @write_fault =3D false && @writable, @writable will tell the caller * whether the mapping is writable. 
*/ -kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async, +kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait, bool write_fault, bool *writable) { struct vm_area_struct *vma; @@ -2950,7 +2950,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interru= ptible, bool *async, if (hva_to_pfn_fast(addr, write_fault, writable, &pfn)) return pfn; =20 - npages =3D hva_to_pfn_slow(addr, async, write_fault, interruptible, + npages =3D hva_to_pfn_slow(addr, no_wait, write_fault, interruptible, writable, &pfn); if (npages =3D=3D 1) return pfn; @@ -2959,7 +2959,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interru= ptible, bool *async, =20 mmap_read_lock(current->mm); if (npages =3D=3D -EHWPOISON || - (!async && check_user_page_hwpoison(addr))) { + (!no_wait && check_user_page_hwpoison(addr))) { pfn =3D KVM_PFN_ERR_HWPOISON; goto exit; } @@ -2976,9 +2976,10 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interr= uptible, bool *async, if (r < 0) pfn =3D KVM_PFN_ERR_FAULT; } else { - if (async && vma_is_valid(vma, write_fault)) - *async =3D true; - pfn =3D KVM_PFN_ERR_FAULT; + if (no_wait && vma_is_valid(vma, write_fault)) + pfn =3D KVM_PFN_ERR_NEEDS_IO; + else + pfn =3D KVM_PFN_ERR_FAULT; } exit: mmap_read_unlock(current->mm); @@ -2986,7 +2987,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interru= ptible, bool *async, } =20 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, - bool interruptible, bool *async, + bool interruptible, bool no_wait, bool write_fault, bool *writable, hva_t *hva) { unsigned long addr =3D __gfn_to_hva_many(slot, gfn, NULL, write_fault); @@ -3008,21 +3009,21 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_mem= ory_slot *slot, gfn_t gfn, writable =3D NULL; } =20 - return hva_to_pfn(addr, interruptible, async, write_fault, writable); + return hva_to_pfn(addr, interruptible, no_wait, write_fault, writable); } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); =20 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { - return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL, + return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false, write_fault, writable, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); =20 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL); + return __gfn_to_pfn_memslot(slot, gfn, false, false, true, NULL, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); =20 diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h index a3fa86f60d6c..51f3fee4ca3f 100644 --- a/virt/kvm/kvm_mm.h +++ b/virt/kvm/kvm_mm.h @@ -20,7 +20,7 @@ #define KVM_MMU_UNLOCK(kvm) spin_unlock(&(kvm)->mmu_lock) #endif /* KVM_HAVE_MMU_RWLOCK */ =20 -kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async, +kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait, bool write_fault, bool *writable); =20 #ifdef CONFIG_HAVE_KVM_PFNCACHE diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 58c706a610e5..32dc61f48c81 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -197,8 +197,8 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cac= he *gpc) cond_resched(); } =20 - /* We always request a writeable mapping */ - new_pfn =3D hva_to_pfn(gpc->uhva, false, NULL, true, NULL); + /* We always request a writable mapping */ + new_pfn =3D hva_to_pfn(gpc->uhva, false, false, true, NULL); if (is_error_noslot_pfn(new_pfn)) goto out_error; 
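As an aside for readers following the API change in this patch: the sketch below is a standalone userspace model, not KVM code, and the names (fake_hva_to_pfn, PFN_ERR_NEEDS_IO, page_resident) are invented for illustration. It shows the shape of the new control flow: the resolver returns a dedicated "needs I/O" sentinel when waiting is disallowed, and the caller decides whether to retry with waiting permitted, instead of reporting that state through a "bool *async" out-parameter.

/*
 * Standalone model (plain userspace C, not KVM code): a sentinel return
 * value replaces the old "bool *async" out-parameter.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pfn_t;

#define PFN_ERR_FAULT		((pfn_t)-1)
#define PFN_ERR_NEEDS_IO	((pfn_t)-2)	/* I/O needed but waiting disallowed */

static bool page_resident;

/* Stand-in for the resolver: fail fast instead of sleeping when no_wait is set. */
static pfn_t fake_hva_to_pfn(unsigned long hva, bool no_wait)
{
	if (!page_resident) {
		if (no_wait)
			return PFN_ERR_NEEDS_IO;
		page_resident = true;	/* pretend the blocking fault-in succeeded */
	}
	return hva >> 12;
}

int main(void)
{
	/* First attempt mirrors the fault path: don't wait for I/O. */
	pfn_t pfn = fake_hva_to_pfn(0x7f00a000UL, true);

	if (pfn == PFN_ERR_NEEDS_IO) {
		/* Either queue asynchronous work or retry with waiting allowed. */
		pfn = fake_hva_to_pfn(0x7f00a000UL, false);
	}

	printf("resolved pfn: 0x%llx\n", (unsigned long long)pfn);
	return 0;
}

The design point matches the commit message: a single return channel is easier to extend with new failure modes than one extra out-parameter per mode.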
 
--
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:24 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-16-seanjc@google.com>
Subject: [PATCH v12 15/84] KVM: x86/mmu: Drop kvm_page_fault.hva, i.e. don't track intermediate hva
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack, David Stevens

Remove kvm_page_fault.hva as it is never read, only written. This will
allow removing the @hva param from __gfn_to_pfn_memslot().

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/mmu.c          | 5 ++---
 arch/x86/kvm/mmu/mmu_internal.h | 2 --
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index eb9ad0283fd5..e0bfbf95646c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3248,7 +3248,6 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
 	fault->slot = NULL;
 	fault->pfn = KVM_PFN_NOSLOT;
 	fault->map_writable = false;
-	fault->hva = KVM_HVA_ERR_BAD;
 
 	/*
 	 * If MMIO caching is disabled, emulate immediately without
@@ -4333,7 +4332,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 
 	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
 					  fault->write, &fault->map_writable,
-					  &fault->hva);
+					  NULL);
 
 	/*
 	 * If resolving the page failed because I/O is needed to fault-in the
@@ -4362,7 +4361,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 */
 	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, true,
 					  fault->write, &fault->map_writable,
-					  &fault->hva);
+					  NULL);
 	return RET_PF_CONTINUE;
 }
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1721d97743e9..f67396c435df 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -238,7 +238,6 @@ struct kvm_page_fault {
 	/* Outputs of kvm_faultin_pfn. */
 	unsigned long mmu_seq;
 	kvm_pfn_t pfn;
-	hva_t hva;
 	bool map_writable;
 
 	/*
@@ -310,7 +309,6 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.is_private = err & PFERR_PRIVATE_ACCESS,
 
 		.pfn = KVM_PFN_ERR_FAULT,
-		.hva = KVM_HVA_ERR_BAD,
 	};
 	int r;
 
--
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:25 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-17-seanjc@google.com>
Subject: [PATCH v12 16/84] KVM: Drop unused "hva" pointer from __gfn_to_pfn_memslot()
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack, David Stevens

Drop @hva from __gfn_to_pfn_memslot() now that all callers pass NULL.

No functional change intended.
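As an aside, this pair of cleanups (dropping the write-only fault->hva and then the @hva parameter itself) follows a common pattern. The toy below is a hypothetical, self-contained illustration rather than KVM code: once every caller either ignores an out-parameter or passes NULL for it, the parameter can be deleted outright with no behavior change.

/*
 * Toy illustration (not KVM code): an out-parameter nobody reads can go.
 */
#include <stddef.h>
#include <stdio.h>

/* Before: the helper also reports an intermediate value. */
static long lookup_with_hva(long gfn, long *hva)
{
	long addr = gfn * 4096;

	if (hva)
		*hva = addr;	/* written, but never read by any caller */
	return addr + 42;
}

/* After: same result, one less argument to thread through every caller. */
static long lookup(long gfn)
{
	return gfn * 4096 + 42;
}

int main(void)
{
	printf("%ld %ld\n", lookup_with_hva(3, NULL), lookup(3));
	return 0;
}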
Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/arm64/kvm/mmu.c | 2 +- arch/powerpc/kvm/book3s_64_mmu_hv.c | 2 +- arch/powerpc/kvm/book3s_64_mmu_radix.c | 2 +- arch/x86/kvm/mmu/mmu.c | 6 ++---- include/linux/kvm_host.h | 2 +- virt/kvm/kvm_main.c | 9 +++------ 6 files changed, 9 insertions(+), 14 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 30dd62f56a11..22ee37360c4e 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1563,7 +1563,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, mmap_read_unlock(current->mm); =20 pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - write_fault, &writable, NULL); + write_fault, &writable); if (pfn =3D=3D KVM_PFN_ERR_HWPOISON) { kvm_send_hwpoison_signal(hva, vma_shift); return 0; diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_= 64_mmu_hv.c index 8cd02ca4b1b8..2f1d58984b41 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -614,7 +614,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu, } else { /* Call KVM generic code to do the slow-path check */ pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - writing, &write_ok, NULL); + writing, &write_ok); if (is_error_noslot_pfn(pfn)) return -EFAULT; page =3D NULL; diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book= 3s_64_mmu_radix.c index 26a969e935e3..8304b6f8fe45 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -853,7 +853,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcp= u, =20 /* Call KVM generic code to do the slow-path check */ pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - writing, upgrade_p, NULL); + writing, upgrade_p); if (is_error_noslot_pfn(pfn)) return -EFAULT; page =3D NULL; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e0bfbf95646c..a201b56728ae 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4331,8 +4331,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault return kvm_faultin_pfn_private(vcpu, fault); =20 fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true, - fault->write, &fault->map_writable, - NULL); + fault->write, &fault->map_writable); =20 /* * If resolving the page failed because I/O is needed to fault-in the @@ -4360,8 +4359,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault * get a page and a fatal signal, i.e. SIGKILL, is pending. 
*/ fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, true, - fault->write, &fault->map_writable, - NULL); + fault->write, &fault->map_writable); return RET_PF_CONTINUE; } =20 diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 92b2922e2216..f42e030f69a4 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1220,7 +1220,7 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn,= bool write_fault, kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn= ); kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, bool interruptible, bool no_wait, - bool write_fault, bool *writable, hva_t *hva); + bool write_fault, bool *writable); =20 void kvm_release_pfn_clean(kvm_pfn_t pfn); void kvm_release_pfn_dirty(kvm_pfn_t pfn); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index c2efdfe26d5b..6e3bb202c1b3 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2988,13 +2988,10 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool inter= ruptible, bool no_wait, =20 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, bool interruptible, bool no_wait, - bool write_fault, bool *writable, hva_t *hva) + bool write_fault, bool *writable) { unsigned long addr =3D __gfn_to_hva_many(slot, gfn, NULL, write_fault); =20 - if (hva) - *hva =3D addr; - if (kvm_is_error_hva(addr)) { if (writable) *writable =3D false; @@ -3017,13 +3014,13 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gf= n, bool write_fault, bool *writable) { return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false, - write_fault, writable, NULL); + write_fault, writable); } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); =20 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, false, true, NULL, NULL); + return __gfn_to_pfn_memslot(slot, gfn, false, false, true, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); =20 --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pf1-f201.google.com (mail-pf1-f201.google.com [209.85.210.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7BBED17107A for ; Fri, 26 Jul 2024 23:53:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037994; cv=none; b=ZDtT8sTEds+03qs8PZE5L8keYT7oaW/xpI3HGaAWAi5ZCcXIpdonwphDAGlP+4tpkOZh3vVEujS9Y3E8O0TvF7SJLGfwiQV1NTKExjzCeiSwzgCZGNFuRveRbcCVm/PklqtN2firw0RmIDxlM8wn5ZhqvEbbU5MPhll0Ji4/AEE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037994; c=relaxed/simple; bh=J4pFwXscwUHwWUzewM7flw7KODfftvhwfJwFPSCTX7s=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=ONXQfUhJ4iS8Yi3H+Gx15XlX77BU4ROpTLrSRcbrnLGOrIbbEC5zG3IyiJc6Acg0sqyQHSUmkO5+4SrMQ2mWs47JqbQw7k/1FNSjBz7eIgEhkVos/3MwI8m0/LyQTC8GDOGQbaZjPfRoWmnccvpecfXQL1HeKicE4YVEnsQatlI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=cr2ICY4Z; arc=none smtp.client-ip=209.85.210.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com 
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:26 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-18-seanjc@google.com>
Subject: [PATCH v12 17/84] KVM: Introduce kvm_follow_pfn() to eventually replace "gfn_to_pfn" APIs
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack, David Stevens

From: David Stevens

Introduce
kvm_follow_pfn() to eventually supplant the various "gfn_to_pfn" APIs, albeit by adding more wrappers. The primary motivation of the new helper is to pass a structure instead of an ever changing set of parameters, e.g. so that tweaking the behavior, inputs, and/or outputs of the "to pfn" helpers doesn't require churning half of KVM. In the more distant future, the APIs exposed to arch code could also follow suit, e.g. by adding something akin to x86's "struct kvm_page_fault" when faulting in guest memory. But for now, the goal is purely to clean up KVM's "internal" MMU code. As part of the conversion, replace the write_fault, interruptible, and no-wait boolean flags with FOLL_WRITE, FOLL_INTERRUPTIBLE, and FOLL_NOWAIT respectively. Collecting the various FOLL_* flags into a single field will again ease the pain of passing new flags. Signed-off-by: David Stevens Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- virt/kvm/kvm_main.c | 166 +++++++++++++++++++++++--------------------- virt/kvm/kvm_mm.h | 20 +++++- virt/kvm/pfncache.c | 9 ++- 3 files changed, 111 insertions(+), 84 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 6e3bb202c1b3..56c2d11761e0 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2761,8 +2761,7 @@ static inline int check_user_page_hwpoison(unsigned l= ong addr) * true indicates success, otherwise false is returned. It's also the * only part that runs if we can in atomic context. */ -static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, - bool *writable, kvm_pfn_t *pfn) +static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn) { struct page *page[1]; =20 @@ -2771,14 +2770,13 @@ static bool hva_to_pfn_fast(unsigned long addr, boo= l write_fault, * or the caller allows to map a writable pfn for a read fault * request. */ - if (!(write_fault || writable)) + if (!((kfp->flags & FOLL_WRITE) || kfp->map_writable)) return false; =20 - if (get_user_page_fast_only(addr, FOLL_WRITE, page)) { + if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) { *pfn =3D page_to_pfn(page[0]); - - if (writable) - *writable =3D true; + if (kfp->map_writable) + *kfp->map_writable =3D true; return true; } =20 @@ -2789,8 +2787,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool = write_fault, * The slow path to get the pfn of the specified host virtual address, * 1 indicates success, -errno is returned if error is detected. */ -static int hva_to_pfn_slow(unsigned long addr, bool no_wait, bool write_fa= ult, - bool interruptible, bool *writable, kvm_pfn_t *pfn) +static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn) { /* * When a VCPU accesses a page that is not mapped into the secondary @@ -2803,34 +2800,30 @@ static int hva_to_pfn_slow(unsigned long addr, bool= no_wait, bool write_fault, * Note that get_user_page_fast_only() and FOLL_WRITE for now * implicitly honor NUMA hinting faults and don't need this flag. 
*/ - unsigned int flags =3D FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT; - struct page *page; + unsigned int flags =3D FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT | kfp->flags; + struct page *page, *wpage; int npages; =20 - if (writable) - *writable =3D write_fault; - - if (write_fault) - flags |=3D FOLL_WRITE; - if (no_wait) - flags |=3D FOLL_NOWAIT; - if (interruptible) - flags |=3D FOLL_INTERRUPTIBLE; - - npages =3D get_user_pages_unlocked(addr, 1, &page, flags); + npages =3D get_user_pages_unlocked(kfp->hva, 1, &page, flags); if (npages !=3D 1) return npages; =20 + if (!kfp->map_writable) + goto out; + + if (kfp->flags & FOLL_WRITE) { + *kfp->map_writable =3D true; + goto out; + } + /* map read fault as writable if possible */ - if (unlikely(!write_fault) && writable) { - struct page *wpage; - - if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) { - *writable =3D true; - put_page(page); - page =3D wpage; - } + if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) { + *kfp->map_writable =3D true; + put_page(page); + page =3D wpage; } + +out: *pfn =3D page_to_pfn(page); return npages; } @@ -2857,23 +2850,23 @@ static int kvm_try_get_pfn(kvm_pfn_t pfn) } =20 static int hva_to_pfn_remapped(struct vm_area_struct *vma, - unsigned long addr, bool write_fault, - bool *writable, kvm_pfn_t *p_pfn) + struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn) { kvm_pfn_t pfn; pte_t *ptep; pte_t pte; spinlock_t *ptl; + bool write_fault =3D kfp->flags & FOLL_WRITE; int r; =20 - r =3D follow_pte(vma, addr, &ptep, &ptl); + r =3D follow_pte(vma, kfp->hva, &ptep, &ptl); if (r) { /* * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does * not call the fault handler, so do it here. */ bool unlocked =3D false; - r =3D fixup_user_fault(current->mm, addr, + r =3D fixup_user_fault(current->mm, kfp->hva, (write_fault ? FAULT_FLAG_WRITE : 0), &unlocked); if (unlocked) @@ -2881,7 +2874,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct = *vma, if (r) return r; =20 - r =3D follow_pte(vma, addr, &ptep, &ptl); + r =3D follow_pte(vma, kfp->hva, &ptep, &ptl); if (r) return r; } @@ -2893,8 +2886,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct = *vma, goto out; } =20 - if (writable) - *writable =3D pte_write(pte); + if (kfp->map_writable) + *kfp->map_writable =3D pte_write(pte); pfn =3D pte_pfn(pte); =20 /* @@ -2924,22 +2917,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct= *vma, return r; } =20 -/* - * Pin guest page in memory and return its pfn. - * @addr: host virtual address which maps memory to the guest - * @interruptible: whether the process can be interrupted by non-fatal sig= nals - * @no_wait: whether or not this function need to wait IO complete if the - * host page is not in the memory - * @write_fault: whether we should get a writable host page - * @writable: whether it allows to map a writable host page for !@write_fa= ult - * - * The function will map a writable host page for these two cases: - * 1): @write_fault =3D true - * 2): @write_fault =3D false && @writable, @writable will tell the caller - * whether the mapping is writable. 
- */ -kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait, - bool write_fault, bool *writable) +kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp) { struct vm_area_struct *vma; kvm_pfn_t pfn; @@ -2947,11 +2925,10 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool inter= ruptible, bool no_wait, =20 might_sleep(); =20 - if (hva_to_pfn_fast(addr, write_fault, writable, &pfn)) + if (hva_to_pfn_fast(kfp, &pfn)) return pfn; =20 - npages =3D hva_to_pfn_slow(addr, no_wait, write_fault, interruptible, - writable, &pfn); + npages =3D hva_to_pfn_slow(kfp, &pfn); if (npages =3D=3D 1) return pfn; if (npages =3D=3D -EINTR) @@ -2959,24 +2936,25 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool inter= ruptible, bool no_wait, =20 mmap_read_lock(current->mm); if (npages =3D=3D -EHWPOISON || - (!no_wait && check_user_page_hwpoison(addr))) { + (!(kfp->flags & FOLL_NOWAIT) && check_user_page_hwpoison(kfp->hva))) { pfn =3D KVM_PFN_ERR_HWPOISON; goto exit; } =20 retry: - vma =3D vma_lookup(current->mm, addr); + vma =3D vma_lookup(current->mm, kfp->hva); =20 if (vma =3D=3D NULL) pfn =3D KVM_PFN_ERR_FAULT; else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) { - r =3D hva_to_pfn_remapped(vma, addr, write_fault, writable, &pfn); + r =3D hva_to_pfn_remapped(vma, kfp, &pfn); if (r =3D=3D -EAGAIN) goto retry; if (r < 0) pfn =3D KVM_PFN_ERR_FAULT; } else { - if (no_wait && vma_is_valid(vma, write_fault)) + if ((kfp->flags & FOLL_NOWAIT) && + vma_is_valid(vma, kfp->flags & FOLL_WRITE)) pfn =3D KVM_PFN_ERR_NEEDS_IO; else pfn =3D KVM_PFN_ERR_FAULT; @@ -2986,41 +2964,69 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool inter= ruptible, bool no_wait, return pfn; } =20 +static kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp) +{ + kfp->hva =3D __gfn_to_hva_many(kfp->slot, kfp->gfn, NULL, + kfp->flags & FOLL_WRITE); + + if (kfp->hva =3D=3D KVM_HVA_ERR_RO_BAD) + return KVM_PFN_ERR_RO_FAULT; + + if (kvm_is_error_hva(kfp->hva)) + return KVM_PFN_NOSLOT; + + if (memslot_is_readonly(kfp->slot) && kfp->map_writable) { + *kfp->map_writable =3D false; + kfp->map_writable =3D NULL; + } + + return hva_to_pfn(kfp); +} + kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t g= fn, bool interruptible, bool no_wait, bool write_fault, bool *writable) { - unsigned long addr =3D __gfn_to_hva_many(slot, gfn, NULL, write_fault); - - if (kvm_is_error_hva(addr)) { - if (writable) - *writable =3D false; - - return addr =3D=3D KVM_HVA_ERR_RO_BAD ? KVM_PFN_ERR_RO_FAULT : - KVM_PFN_NOSLOT; - } - - /* Do not map writable pfn in the readonly memslot. */ - if (writable && memslot_is_readonly(slot)) { - *writable =3D false; - writable =3D NULL; - } - - return hva_to_pfn(addr, interruptible, no_wait, write_fault, writable); + struct kvm_follow_pfn kfp =3D { + .slot =3D slot, + .gfn =3D gfn, + .map_writable =3D writable, + }; + + if (write_fault) + kfp.flags |=3D FOLL_WRITE; + if (no_wait) + kfp.flags |=3D FOLL_NOWAIT; + if (interruptible) + kfp.flags |=3D FOLL_INTERRUPTIBLE; + + return kvm_follow_pfn(&kfp); } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); =20 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { - return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false, - write_fault, writable); + struct kvm_follow_pfn kfp =3D { + .slot =3D gfn_to_memslot(kvm, gfn), + .gfn =3D gfn, + .flags =3D write_fault ? 
FOLL_WRITE : 0, + .map_writable =3D writable, + }; + + return kvm_follow_pfn(&kfp); } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); =20 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, false, true, NULL); + struct kvm_follow_pfn kfp =3D { + .slot =3D slot, + .gfn =3D gfn, + .flags =3D FOLL_WRITE, + }; + + return kvm_follow_pfn(&kfp); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); =20 diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h index 51f3fee4ca3f..d5a215958f06 100644 --- a/virt/kvm/kvm_mm.h +++ b/virt/kvm/kvm_mm.h @@ -20,8 +20,24 @@ #define KVM_MMU_UNLOCK(kvm) spin_unlock(&(kvm)->mmu_lock) #endif /* KVM_HAVE_MMU_RWLOCK */ =20 -kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait, - bool write_fault, bool *writable); + +struct kvm_follow_pfn { + const struct kvm_memory_slot *slot; + const gfn_t gfn; + + unsigned long hva; + + /* FOLL_* flags modifying lookup behavior, e.g. FOLL_WRITE. */ + unsigned int flags; + + /* + * If non-NULL, try to get a writable mapping even for a read fault. + * Set to true if a writable mapping was obtained. + */ + bool *map_writable; +}; + +kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp); =20 #ifdef CONFIG_HAVE_KVM_PFNCACHE void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 32dc61f48c81..067daf9ad6ef 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -159,6 +159,12 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_ca= che *gpc) kvm_pfn_t new_pfn =3D KVM_PFN_ERR_FAULT; void *new_khva =3D NULL; unsigned long mmu_seq; + struct kvm_follow_pfn kfp =3D { + .slot =3D gpc->memslot, + .gfn =3D gpa_to_gfn(gpc->gpa), + .flags =3D FOLL_WRITE, + .hva =3D gpc->uhva, + }; =20 lockdep_assert_held(&gpc->refresh_lock); =20 @@ -197,8 +203,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cac= he *gpc) cond_resched(); } =20 - /* We always request a writable mapping */ - new_pfn =3D hva_to_pfn(gpc->uhva, false, false, true, NULL); + new_pfn =3D hva_to_pfn(&kfp); if (is_error_noslot_pfn(new_pfn)) goto out_error; =20 --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9FBFB171670 for ; Fri, 26 Jul 2024 23:53:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037996; cv=none; b=NEZdxTgapah3UbgYcPD5t+6ZBomLwL8u27KGboC69v278cBYaMG+G8DAlmZ1Nzd6glW2x2kmj6BIr9F9lUKle4H2vEwiypYas/1RQy6dOQDLKDfn0Dsrac1VpefwzHKOVo6vdyNb5uqpqk9qeoXoHZaD2GKv79vfM7XnrI+sb/U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722037996; c=relaxed/simple; bh=ew26wDO2OFK9/ZL4S01JfkaAnmh//qDEW5lL4oKuI6U=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=fsQZ5sgD9Ami846PRq0UonOmFf+ZSuHqSnLzmQtS0A5fF3kWGmthxf9OTvpxxvEdlt3pqPHg5IdDQvn5ClN72ZwWGR7GgwIQWj/geTgDhKLonuVqervK7kzo1qxVN0ceWgiVRk+hw2A4Zv7aeIR8CihIahi1CMxqA12gFXx0mxM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=P8nc3QSi; arc=none 
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:27 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-19-seanjc@google.com>
Subject: [PATCH v12 18/84] KVM: Remove pointless sanity check on @map param to kvm_vcpu_(un)map()
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack, David Stevens

Drop kvm_vcpu_{,un}map()'s useless checks on @map being non-NULL. The map
is 100% kernel controlled, any caller that passes a NULL pointer is broken
and needs to be fixed, i.e. a crash due to a NULL pointer dereference is
desirable (though obviously not as desirable as not having a bug in the
first place).

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 56c2d11761e0..21ff0f4fa02c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3092,9 +3092,6 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 	void *hva = NULL;
 	struct page *page = KVM_UNMAPPED_PAGE;
 
-	if (!map)
-		return -EINVAL;
-
 	pfn = gfn_to_pfn(vcpu->kvm, gfn);
 	if (is_error_noslot_pfn(pfn))
 		return -EINVAL;
@@ -3122,9 +3119,6 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_map);
 
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 {
-	if (!map)
-		return;
-
 	if (!map->hva)
 		return;
 
--
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:28 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-20-seanjc@google.com>
Subject: [PATCH v12 19/84] KVM: Explicitly initialize all fields at the start of kvm_vcpu_map()
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack, David Stevens

Explicitly initialize the entire kvm_host_map structure when mapping a
pfn, as some callers declare their struct on the stack, i.e. don't
zero-initialize the struct, which makes the map->hva in kvm_vcpu_unmap()
*very* suspect.
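As an aside, the sketch below is a standalone userspace model of the bug class this patch closes, not the kernel implementation; struct host_map, map_init() and map_release() are invented stand-ins. A struct declared on the stack starts out holding garbage, so a later "is anything mapped?" test on ->hva is only meaningful if every field is assigned before any early return.

/*
 * Standalone sketch (not kernel code): initialize every field up front so
 * the release path can trust ->hva even after a failed setup.
 */
#include <stdbool.h>
#include <stdio.h>

struct host_map {
	void *page;
	void *hva;
	unsigned long gfn;
	unsigned long pfn;
};

static char backing[4096];	/* stand-in for a real mapping */

/* Mirrors the reworked flow: assign every field before any early return. */
static int map_init(struct host_map *map, unsigned long gfn, bool fail)
{
	map->page = NULL;
	map->hva = NULL;
	map->gfn = gfn;
	map->pfn = 0;

	if (fail)
		return -1;	/* ->hva is now reliably NULL for the release check */

	map->hva = backing;
	return 0;
}

static void map_release(struct host_map *map)
{
	if (!map->hva)	/* safe: hva is always either NULL or valid */
		return;
	printf("unmapping %p\n", map->hva);
	map->hva = NULL;
}

int main(void)
{
	struct host_map map;	/* on the stack, so not zeroed automatically */

	if (map_init(&map, 1234, true))
		map_release(&map);	/* no-op instead of acting on stack garbage */

	map_init(&map, 1234, false);
	map_release(&map);
	return 0;
}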
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 40 ++++++++++++++++------------------------
 1 file changed, 16 insertions(+), 24 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 21ff0f4fa02c..67a50b87bb87 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3088,32 +3088,24 @@ void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
 
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
-	kvm_pfn_t pfn;
-	void *hva = NULL;
-	struct page *page = KVM_UNMAPPED_PAGE;
-
-	pfn = gfn_to_pfn(vcpu->kvm, gfn);
-	if (is_error_noslot_pfn(pfn))
-		return -EINVAL;
-
-	if (pfn_valid(pfn)) {
-		page = pfn_to_page(pfn);
-		hva = kmap(page);
-#ifdef CONFIG_HAS_IOMEM
-	} else {
-		hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
-#endif
-	}
-
-	if (!hva)
-		return -EFAULT;
-
-	map->page = page;
-	map->hva = hva;
-	map->pfn = pfn;
+	map->page = KVM_UNMAPPED_PAGE;
+	map->hva = NULL;
 	map->gfn = gfn;
 
-	return 0;
+	map->pfn = gfn_to_pfn(vcpu->kvm, gfn);
+	if (is_error_noslot_pfn(map->pfn))
+		return -EINVAL;
+
+	if (pfn_valid(map->pfn)) {
+		map->page = pfn_to_page(map->pfn);
+		map->hva = kmap(map->page);
+#ifdef CONFIG_HAS_IOMEM
+	} else {
+		map->hva = memremap(pfn_to_hpa(map->pfn), PAGE_SIZE, MEMREMAP_WB);
+#endif
+	}
+
+	return map->hva ? 0 : -EFAULT;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_map);
 
--
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:29 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-21-seanjc@google.com>
Subject: [PATCH v12 20/84] KVM: Use NULL for struct page pointer to indicate mremapped memory
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack, David Stevens

Drop yet another unnecessary magic page value from KVM, as there's zero
reason to use a poisoned pointer to indicate "no page". If KVM uses a
NULL page pointer, the kernel will explode just as quickly as if KVM uses
a poisoned pointer. Never mind the fact that such usage would be a
blatant and egregious KVM bug.
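As an aside, here is a minimal standalone sketch (again not the kernel code; struct host_map and unmap() are illustrative stand-ins) of why NULL works just as well as a poisoned sentinel here: the release path only needs a plain pointer test to pick between the two unmapping flavors.

/* Illustrative sketch (not the kernel implementation). */
#include <stdio.h>

struct page { int dummy; };

struct host_map {
	struct page *page;	/* NULL when the pfn has no struct page */
	void *hva;
};

static void unmap(struct host_map *map)
{
	if (!map->hva)
		return;

	if (map->page)
		printf("kunmap(page)\n");	/* kernel-managed memory */
	else
		printf("memunmap(hva)\n");	/* e.g. a memremap()'d region */

	map->hva = NULL;
	map->page = NULL;
}

int main(void)
{
	struct page pg = { 0 };
	char buf[64];
	struct host_map a = { .page = &pg, .hva = buf };
	struct host_map b = { .page = NULL, .hva = buf };

	unmap(&a);
	unmap(&b);
	return 0;
}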
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h | 4 ----
 virt/kvm/kvm_main.c      | 4 ++--
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f42e030f69a4..a5dcb72bab00 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -273,16 +273,12 @@ enum {
 	READING_SHADOW_PAGE_TABLES,
 };
 
-#define KVM_UNMAPPED_PAGE	((void *) 0x500 + POISON_POINTER_DELTA)
-
 struct kvm_host_map {
 	/*
 	 * Only valid if the 'pfn' is managed by the host kernel (i.e. There is
 	 * a 'struct page' for it. When using mem= kernel parameter some memory
 	 * can be used as guest memory but they are not managed by host
 	 * kernel).
-	 * If 'pfn' is not managed by the host kernel, this field is
-	 * initialized to KVM_UNMAPPED_PAGE.
 	 */
 	struct page *page;
 	void *hva;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 67a50b87bb87..3d717a131906 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3088,7 +3088,7 @@ void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
 
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
-	map->page = KVM_UNMAPPED_PAGE;
+	map->page = NULL;
 	map->hva = NULL;
 	map->gfn = gfn;
 
@@ -3114,7 +3114,7 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 	if (!map->hva)
 		return;
 
-	if (map->page != KVM_UNMAPPED_PAGE)
+	if (map->page)
 		kunmap(map->page);
 #ifdef CONFIG_HAS_IOMEM
 	else
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:30 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-22-seanjc@google.com>
Subject: [PATCH v12 21/84] KVM: nVMX: Rely on kvm_vcpu_unmap() to track validity of eVMCS mapping
From: Sean Christopherson

Remove the explicit evmptr12 validity check when deciding whether or not
to unmap the eVMCS pointer, and instead rely on kvm_vcpu_unmap() to play
nice with a NULL map->hva, i.e. to do nothing if the map is invalid.

Note, vmx->nested.hv_evmcs_map is zero-allocated along with the rest of
vcpu_vmx, i.e. the map starts out invalid/NULL.
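
As a purely editorial sketch of why the unconditional unmap is safe (the
function name is made up; the early return on a NULL hva is visible in
the kvm_vcpu_unmap() context of the previous patch's diff):

static void example_unmap_is_a_nop(struct kvm_vcpu *vcpu)
{
	struct kvm_host_map map = {};	/* zero-init, i.e. hva == NULL */

	/* Returns immediately; no kunmap()/memunmap() is attempted. */
	kvm_vcpu_unmap(vcpu, &map, false);
}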
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/vmx/nested.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 2392a7ef254d..a34b49ea64b5 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -231,11 +231,8 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	if (nested_vmx_is_evmptr12_valid(vmx)) {
-		kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true);
-		vmx->nested.hv_evmcs = NULL;
-	}
-
+	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true);
+	vmx->nested.hv_evmcs = NULL;
 	vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
 
 	if (hv_vcpu) {
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:31 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-23-seanjc@google.com>
Subject: [PATCH v12 22/84] KVM: nVMX: Drop pointless msr_bitmap_map field from struct nested_vmx
From: Sean Christopherson

Remove vmx->nested.msr_bitmap_map and instead use an on-stack structure
in the one function that uses the map, nested_vmx_prepare_msr_bitmap().

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/vmx/nested.c | 8 ++++----
 arch/x86/kvm/vmx/vmx.h    | 2 --
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index a34b49ea64b5..372d005e09e7 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -621,7 +621,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	int msr;
 	unsigned long *msr_bitmap_l1;
 	unsigned long *msr_bitmap_l0 = vmx->nested.vmcs02.msr_bitmap;
-	struct kvm_host_map *map = &vmx->nested.msr_bitmap_map;
+	struct kvm_host_map msr_bitmap_map;
 
 	/* Nothing to do if the MSR bitmap is not in use. */
 	if (!cpu_has_vmx_msr_bitmap() ||
@@ -644,10 +644,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 		return true;
 	}
 
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map))
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), &msr_bitmap_map))
 		return false;
 
-	msr_bitmap_l1 = (unsigned long *)map->hva;
+	msr_bitmap_l1 = (unsigned long *)msr_bitmap_map.hva;
 
 	/*
 	 * To keep the control flow simple, pay eight 8-byte writes (sixteen
@@ -711,7 +711,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
 
-	kvm_vcpu_unmap(vcpu, &vmx->nested.msr_bitmap_map, false);
+	kvm_vcpu_unmap(vcpu, &msr_bitmap_map, false);
 
 	vmx->nested.force_msr_bitmap_recalc = false;
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 42498fa63abb..889c6c42ee27 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -204,8 +204,6 @@ struct nested_vmx {
 	struct kvm_host_map virtual_apic_map;
 	struct kvm_host_map pi_desc_map;
 
-	struct kvm_host_map msr_bitmap_map;
-
 	struct pi_desc *pi_desc;
 	bool pi_pending;
 	u16 posted_intr_nv;
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:32 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-24-seanjc@google.com>
Subject: [PATCH v12 23/84] KVM: nVMX: Add helper to put (unmap) vmcs12 pages
From: Sean Christopherson

Add a helper to dedup unmapping the vmcs12 pages.  This will reduce the
amount of churn when a future patch refactors the kvm_vcpu_unmap() API.

No functional change intended.
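
For context only (not part of the patch; the caller below is invented):
because kvm_vcpu_unmap() clears map->hva when it finishes and ignores maps
whose hva is already NULL, funneling both teardown paths through one
helper also leaves back-to-back calls harmless.

static void example_double_put_is_safe(struct kvm_vcpu *vcpu)
{
	nested_put_vmcs12_pages(vcpu);	/* unmaps and clears the maps */
	nested_put_vmcs12_pages(vcpu);	/* second call is a nop */
}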
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/vmx/nested.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 372d005e09e7..8d05d1d9f544 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -314,6 +314,21 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
 	vcpu->arch.regs_dirty = 0;
 }
 
+static void nested_put_vmcs12_pages(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	/*
+	 * Unpin physical memory we referred to in the vmcs02. The APIC access
+	 * page's backing page (yeah, confusing) shouldn't actually be accessed,
+	 * and if it is written, the contents are irrelevant.
+	 */
+	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, false);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
+	vmx->nested.pi_desc = NULL;
+}
+
 /*
  * Free whatever needs to be freed from vmx->nested when L1 goes down, or
  * just stops using VMX.
@@ -346,15 +361,8 @@ static void free_nested(struct kvm_vcpu *vcpu)
 	vmx->nested.cached_vmcs12 = NULL;
 	kfree(vmx->nested.cached_shadow_vmcs12);
 	vmx->nested.cached_shadow_vmcs12 = NULL;
-	/*
-	 * Unpin physical memory we referred to in the vmcs02. The APIC access
-	 * page's backing page (yeah, confusing) shouldn't actually be accessed,
-	 * and if it is written, the contents are irrelevant.
-	 */
-	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, false);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
-	vmx->nested.pi_desc = NULL;
+
+	nested_put_vmcs12_pages(vcpu);
 
 	kvm_mmu_free_roots(vcpu->kvm, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
 
@@ -4942,11 +4950,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 		vmx_update_cpu_dirty_logging(vcpu);
 	}
 
-	/* Unpin physical memory we referred to in vmcs02 */
-	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, false);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
-	vmx->nested.pi_desc = NULL;
+	nested_put_vmcs12_pages(vcpu);
 
 	if (vmx->nested.reload_vmcs01_apic_access_page) {
 		vmx->nested.reload_vmcs01_apic_access_page = false;
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:33 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-25-seanjc@google.com>
Subject: [PATCH v12 24/84] KVM: Use plain "struct page" pointer instead of single-entry array
From: Sean Christopherson

Use a single pointer instead of a single-entry array for the struct page
pointer in hva_to_pfn_fast().  Using an array makes the code unnecessarily
annoying to read and update.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3d717a131906..8e83d3f043f1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2763,7 +2763,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)
  */
 static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
-	struct page *page[1];
+	struct page *page;
 
 	/*
 	 * Fast pin a writable pfn only if it is a write fault request
@@ -2773,8 +2773,8 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 	if (!((kfp->flags & FOLL_WRITE) || kfp->map_writable))
 		return false;
 
-	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) {
-		*pfn = page_to_pfn(page[0]);
+	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page)) {
+		*pfn = page_to_pfn(page);
 		if (kfp->map_writable)
 			*kfp->map_writable = true;
 		return true;
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:34 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-26-seanjc@google.com>
Subject: [PATCH v12 25/84] KVM: Provide refcounted page as output field in struct kvm_follow_pfn
From: Sean Christopherson

Add kvm_follow_pfn.refcounted_page as an output for the "to pfn" APIs to
"return" the struct page that is associated with the returned pfn (if KVM
acquired a reference to the page).  This will eventually allow removing
KVM's hacky kvm_pfn_to_refcounted_page() code, which is error prone and
can't detect pfns that are valid, but aren't (currently) refcounted.
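
As a rough caller-side sketch (the wrapper and variable names here are
invented; the struct fields and release helper match the diffs in this
series), the new output field is consumed like so: the pfn is always
returned, while "page" is non-NULL only if KVM actually took a reference.

static kvm_pfn_t example_follow(struct kvm_memory_slot *slot, gfn_t gfn,
				unsigned long hva)
{
	struct page *page = NULL;
	struct kvm_follow_pfn kfp = {
		.slot = slot,
		.gfn = gfn,
		.flags = FOLL_WRITE,
		.hva = hva,
		.refcounted_page = &page,
	};
	kvm_pfn_t pfn = hva_to_pfn(&kfp);

	if (is_error_noslot_pfn(pfn))
		return pfn;

	/* page is NULL if the pfn isn't backed by a refcounted struct page. */
	if (page)
		kvm_release_page_clean(page);

	return pfn;
}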
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 100 +++++++++++++++++++++-----------------------
 virt/kvm/kvm_mm.h   |   9 ++++
 2 files changed, 56 insertions(+), 53 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8e83d3f043f1..31570c5627e3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2756,6 +2756,46 @@ static inline int check_user_page_hwpoison(unsigned long addr)
 	return rc == -EHWPOISON;
 }
 
+static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
+				 pte_t *pte, bool writable)
+{
+	kvm_pfn_t pfn;
+
+	WARN_ON_ONCE(!!page == !!pte);
+
+	if (kfp->map_writable)
+		*kfp->map_writable = writable;
+
+	/*
+	 * FIXME: Remove this once KVM no longer blindly calls put_page() on
+	 * every pfn that points at a struct page.
+	 *
+	 * Get a reference for follow_pte() pfns if they happen to point at a
+	 * struct page, as KVM will ultimately call kvm_release_pfn_clean() on
+	 * the returned pfn, i.e. KVM expects to have a reference.
+	 *
+	 * Certain IO or PFNMAP mappings can be backed with valid struct pages,
+	 * but be allocated without refcounting, e.g. tail pages of
+	 * non-compound higher order allocations. Grabbing and putting a
+	 * reference to such pages would cause KVM to prematurely free a page
+	 * it doesn't own (KVM gets and puts the one and only reference).
+	 * Don't allow those pages until the FIXME is resolved.
+	 */
+	if (pte) {
+		pfn = pte_pfn(*pte);
+		page = kvm_pfn_to_refcounted_page(pfn);
+		if (page && !get_page_unless_zero(page))
+			return KVM_PFN_ERR_FAULT;
+	} else {
+		pfn = page_to_pfn(page);
+	}
+
+	if (kfp->refcounted_page)
+		*kfp->refcounted_page = page;
+
+	return pfn;
+}
+
 /*
  * The fast path to get the writable pfn which will be stored in @pfn,
  * true indicates success, otherwise false is returned. It's also the
@@ -2774,9 +2814,7 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 		return false;
 
 	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page)) {
-		*pfn = page_to_pfn(page);
-		if (kfp->map_writable)
-			*kfp->map_writable = true;
+		*pfn = kvm_resolve_pfn(kfp, page, NULL, true);
 		return true;
 	}
 
@@ -2808,23 +2846,15 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 	if (npages != 1)
 		return npages;
 
-	if (!kfp->map_writable)
-		goto out;
-
-	if (kfp->flags & FOLL_WRITE) {
-		*kfp->map_writable = true;
-		goto out;
-	}
-
 	/* map read fault as writable if possible */
-	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
-		*kfp->map_writable = true;
+	if (!(flags & FOLL_WRITE) && kfp->map_writable &&
+	    get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
 		put_page(page);
 		page = wpage;
+		flags |= FOLL_WRITE;
 	}
 
-out:
-	*pfn = page_to_pfn(page);
+	*pfn = kvm_resolve_pfn(kfp, page, NULL, flags & FOLL_WRITE);
 	return npages;
 }
 
@@ -2839,20 +2869,9 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
 	return true;
 }
 
-static int kvm_try_get_pfn(kvm_pfn_t pfn)
-{
-	struct page *page = kvm_pfn_to_refcounted_page(pfn);
-
-	if (!page)
-		return 1;
-
-	return get_page_unless_zero(page);
-}
-
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 			       struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn)
 {
-	kvm_pfn_t pfn;
 	pte_t *ptep;
 	pte_t pte;
 	spinlock_t *ptl;
@@ -2882,38 +2901,13 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	pte = ptep_get(ptep);
 
 	if (write_fault && !pte_write(pte)) {
-		pfn = KVM_PFN_ERR_RO_FAULT;
+		*p_pfn = KVM_PFN_ERR_RO_FAULT;
 		goto out;
 	}
 
-	if (kfp->map_writable)
-		*kfp->map_writable = pte_write(pte);
-	pfn = pte_pfn(pte);
-
-	/*
-	 * Get a reference here because callers of *hva_to_pfn* and
-	 * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the
-	 * returned pfn. This is only needed if the VMA has VM_MIXEDMAP
-	 * set, but the kvm_try_get_pfn/kvm_release_pfn_clean pair will
-	 * simply do nothing for reserved pfns.
-	 *
-	 * Whoever called remap_pfn_range is also going to call e.g.
-	 * unmap_mapping_range before the underlying pages are freed,
-	 * causing a call to our MMU notifier.
-	 *
-	 * Certain IO or PFNMAP mappings can be backed with valid
-	 * struct pages, but be allocated without refcounting e.g.,
-	 * tail pages of non-compound higher order allocations, which
-	 * would then underflow the refcount when the caller does the
-	 * required put_page. Don't allow those pages here.
-	 */
-	if (!kvm_try_get_pfn(pfn))
-		r = -EFAULT;
-
+	*p_pfn = kvm_resolve_pfn(kfp, NULL, &pte, pte_write(pte));
 out:
 	pte_unmap_unlock(ptep, ptl);
-	*p_pfn = pfn;
-
 	return r;
 }
 
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index d5a215958f06..d3ac1ba8ba66 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -35,6 +35,15 @@ struct kvm_follow_pfn {
 	 * Set to true if a writable mapping was obtained.
 	 */
 	bool *map_writable;
+
+	/*
+	 * Optional output. Set to a valid "struct page" if the returned pfn
+	 * is for a refcounted or pinned struct page, NULL if the returned pfn
+	 * has no struct page or if the struct page is not being refcounted
+	 * (e.g. tail pages of non-compound higher order allocations from
+	 * IO/PFNMAP mappings).
+	 */
+	struct page **refcounted_page;
 };
 
 kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp);
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:35 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-27-seanjc@google.com>
Subject: [PATCH v12 26/84] KVM: Move kvm_{set,release}_page_{clean,dirty}() helpers up in kvm_main.c
From: Sean Christopherson

Hoist the kvm_{set,release}_page_{clean,dirty}() APIs further up in
kvm_main.c so that they can be used by the kvm_follow_pfn family of APIs.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 82 ++++++++++++++++++++++-----------------------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 31570c5627e3..48b626f1b5f3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2748,6 +2748,47 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable)
 	return gfn_to_hva_memslot_prot(slot, gfn, writable);
 }
 
+static bool kvm_is_ad_tracked_page(struct page *page)
+{
+	/*
+	 * Per page-flags.h, pages tagged PG_reserved "should in general not be
+	 * touched (e.g. set dirty) except by its owner".
+	 */
+	return !PageReserved(page);
+}
+
+static void kvm_set_page_dirty(struct page *page)
+{
+	if (kvm_is_ad_tracked_page(page))
+		SetPageDirty(page);
+}
+
+static void kvm_set_page_accessed(struct page *page)
+{
+	if (kvm_is_ad_tracked_page(page))
+		mark_page_accessed(page);
+}
+
+void kvm_release_page_clean(struct page *page)
+{
+	if (!page)
+		return;
+
+	kvm_set_page_accessed(page);
+	put_page(page);
+}
+EXPORT_SYMBOL_GPL(kvm_release_page_clean);
+
+void kvm_release_page_dirty(struct page *page)
+{
+	if (!page)
+		return;
+
+	kvm_set_page_dirty(page);
+	kvm_release_page_clean(page);
+}
+EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
+
 static inline int check_user_page_hwpoison(unsigned long addr)
 {
 	int rc, flags = FOLL_HWPOISON | FOLL_WRITE;
@@ -3125,37 +3166,6 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
-static bool kvm_is_ad_tracked_page(struct page *page)
-{
-	/*
-	 * Per page-flags.h, pages tagged PG_reserved "should in general not be
-	 * touched (e.g. set dirty) except by its owner".
-	 */
-	return !PageReserved(page);
-}
-
-static void kvm_set_page_dirty(struct page *page)
-{
-	if (kvm_is_ad_tracked_page(page))
-		SetPageDirty(page);
-}
-
-static void kvm_set_page_accessed(struct page *page)
-{
-	if (kvm_is_ad_tracked_page(page))
-		mark_page_accessed(page);
-}
-
-void kvm_release_page_clean(struct page *page)
-{
-	if (!page)
-		return;
-
-	kvm_set_page_accessed(page);
-	put_page(page);
-}
-EXPORT_SYMBOL_GPL(kvm_release_page_clean);
-
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
 	struct page *page;
@@ -3171,16 +3181,6 @@ void kvm_release_pfn_clean(kvm_pfn_t pfn)
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
-void kvm_release_page_dirty(struct page *page)
-{
-	if (!page)
-		return;
-
-	kvm_set_page_dirty(page);
-	kvm_release_page_clean(page);
-}
-EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
-
 void kvm_release_pfn_dirty(kvm_pfn_t pfn)
 {
 	struct page *page;
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:36 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-28-seanjc@google.com>
Subject: [PATCH v12 27/84] KVM: pfncache: Precisely track refcounted pages
From: Sean Christopherson

Track refcounted struct page memory using kvm_follow_pfn.refcounted_page
instead of relying on kvm_release_pfn_clean() to correctly detect that the
pfn is associated with a struct page.
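
Editorial sketch of the resulting acquire/release pairing (this is a
stripped-down stand-in for hva_to_pfn_retry(); the retry loop, mapping,
and mmu_notifier handling are deliberately elided).  The retry path in the
real code releases the just-acquired page with kvm_release_page_unused(),
the success path with kvm_release_page_clean(), as shown in the diff below.

static int example_acquire_and_release(struct gfn_to_pfn_cache *gpc)
{
	struct page *page;
	struct kvm_follow_pfn kfp = {
		.slot = gpc->memslot,
		.gfn = gpa_to_gfn(gpc->gpa),
		.flags = FOLL_WRITE,
		.hva = gpc->uhva,
		.refcounted_page = &page,
	};
	kvm_pfn_t pfn = hva_to_pfn(&kfp);

	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	/*
	 * The cache is kept coherent via mmu_notifier events, not via the
	 * page reference, so the reference (if one was taken) is dropped
	 * immediately.
	 */
	kvm_release_page_clean(page);
	return 0;
}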
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 virt/kvm/pfncache.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 067daf9ad6ef..728d2c1b488a 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -159,11 +159,14 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
 	void *new_khva = NULL;
 	unsigned long mmu_seq;
+	struct page *page;
+
 	struct kvm_follow_pfn kfp = {
 		.slot = gpc->memslot,
 		.gfn = gpa_to_gfn(gpc->gpa),
 		.flags = FOLL_WRITE,
 		.hva = gpc->uhva,
+		.refcounted_page = &page,
 	};
 
 	lockdep_assert_held(&gpc->refresh_lock);
@@ -198,7 +201,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 			if (new_khva != old_khva)
 				gpc_unmap(new_pfn, new_khva);
 
-			kvm_release_pfn_clean(new_pfn);
+			kvm_release_page_unused(page);
 
 			cond_resched();
 		}
@@ -218,7 +221,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 		new_khva = gpc_map(new_pfn);
 
 		if (!new_khva) {
-			kvm_release_pfn_clean(new_pfn);
+			kvm_release_page_unused(page);
 			goto out_error;
 		}
 
@@ -236,11 +239,11 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	gpc->khva = new_khva + offset_in_page(gpc->uhva);
 
 	/*
-	 * Put the reference to the _new_ pfn. The pfn is now tracked by the
+	 * Put the reference to the _new_ page. The page is now tracked by the
 	 * cache and can be safely migrated, swapped, etc... as the cache will
 	 * invalidate any mappings in response to relevant mmu_notifier events.
 	 */
-	kvm_release_pfn_clean(new_pfn);
+	kvm_release_page_clean(page);
 
 	return 0;
 
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:37 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-29-seanjc@google.com>
Subject: [PATCH v12 28/84] KVM: Migrate kvm_vcpu_map() to kvm_follow_pfn()
From: Sean Christopherson

From: David Stevens

Migrate kvm_vcpu_map() to kvm_follow_pfn(), and have it track whether or
not the map holds a refcounted struct page.  Precisely tracking struct
page references will eventually allow removing kvm_pfn_to_refcounted_page()
and its various wrappers.
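
For illustration (the caller below is made up; the map/unmap calls match
the API as modified in this patch), the refcounted_page bookkeeping stays
internal to the map, so callers are unchanged:

static int example_zero_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_host_map map;

	if (kvm_vcpu_map(vcpu, gfn, &map))
		return -EFAULT;

	memset(map.hva, 0, PAGE_SIZE);	/* access guest memory via the hva */

	/* "true" marks the gfn dirty and releases any page reference taken. */
	kvm_vcpu_unmap(vcpu, &map, true);
	return 0;
}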
Signed-off-by: David Stevens
[sean: use a pointer instead of a boolean]
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      | 26 ++++++++++++++++----------
 2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a5dcb72bab00..8b5ac3305b05 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -280,6 +280,7 @@ struct kvm_host_map {
 	 * can be used as guest memory but they are not managed by host
 	 * kernel).
 	 */
+	struct page *refcounted_page;
 	struct page *page;
 	void *hva;
 	kvm_pfn_t pfn;
@@ -1223,7 +1224,6 @@ void kvm_release_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_accessed(kvm_pfn_t pfn);
 
-void kvm_release_pfn(kvm_pfn_t pfn, bool dirty);
 int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 			int len);
 int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 48b626f1b5f3..255cbed83b40 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3113,21 +3113,21 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
-void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
-{
-	if (dirty)
-		kvm_release_pfn_dirty(pfn);
-	else
-		kvm_release_pfn_clean(pfn);
-}
-
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
+	struct kvm_follow_pfn kfp = {
+		.slot = gfn_to_memslot(vcpu->kvm, gfn),
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+		.refcounted_page = &map->refcounted_page,
+	};
+
+	map->refcounted_page = NULL;
 	map->page = NULL;
 	map->hva = NULL;
 	map->gfn = gfn;
 
-	map->pfn = gfn_to_pfn(vcpu->kvm, gfn);
+	map->pfn = kvm_follow_pfn(&kfp);
 	if (is_error_noslot_pfn(map->pfn))
 		return -EINVAL;
 
@@ -3159,10 +3159,16 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 	if (dirty)
 		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
 
-	kvm_release_pfn(map->pfn, dirty);
+	if (map->refcounted_page) {
+		if (dirty)
+			kvm_release_page_dirty(map->refcounted_page);
+		else
+			kvm_release_page_clean(map->refcounted_page);
+	}
 
 	map->hva = NULL;
 	map->page = NULL;
+	map->refcounted_page = NULL;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
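[Illustrative aside, not part of the patch: a minimal sketch of how a caller
is expected to use this API after the change.  demo_write_guest_u32() and its
locals are made up for illustration; kvm_vcpu_map(), kvm_vcpu_unmap(),
gpa_to_gfn() and struct kvm_host_map are the real pieces touched above.]

/*
 * Map a guest page, write to it through the kernel mapping, then unmap.
 * The map, not the caller, tracks whether a refcounted page needs to be
 * released, so the caller never deals with pfns or struct page directly.
 */
static int demo_write_guest_u32(struct kvm_vcpu *vcpu, gpa_t gpa, u32 val)
{
	struct kvm_host_map map;

	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
		return -EFAULT;

	*(u32 *)(map.hva + offset_in_page(gpa)) = val;

	/* "true" == dirty; the map knows whether a page reference is held. */
	kvm_vcpu_unmap(vcpu, &map, true);
	return 0;
}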
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:38 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-30-seanjc@google.com>
Subject: [PATCH v12 29/84] KVM: Pin (as in FOLL_PIN) pages during kvm_vcpu_map()
From: Sean Christopherson

Pin, as in FOLL_PIN, pages when mapping them for direct access by KVM.
As per Documentation/core-api/pin_user_pages.rst, writing to a page that
was gotten via FOLL_GET is explicitly disallowed.

  Correct (uses FOLL_PIN calls):
      pin_user_pages()
      write to the data within the pages
      unpin_user_pages()

  INCORRECT (uses FOLL_GET calls):
      get_user_pages()
      write to the data within the pages
      put_page()

Unfortunately, FOLL_PIN is a "private" flag, and so kvm_follow_pfn must
use a one-off bool instead of being able to piggyback the "flags" field.

Link: https://lwn.net/Articles/930667
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      | 54 +++++++++++++++++++++++++++++-----------
 virt/kvm/kvm_mm.h        |  7 ++++++
 3 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8b5ac3305b05..3d4094ece479 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -280,7 +280,7 @@ struct kvm_host_map {
 	 * can be used as guest memory but they are not managed by host
 	 * kernel).
 	 */
-	struct page *refcounted_page;
+	struct page *pinned_page;
 	struct page *page;
 	void *hva;
 	kvm_pfn_t pfn;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 255cbed83b40..4a9b99c11355 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2824,9 +2824,12 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 	 */
 	if (pte) {
 		pfn = pte_pfn(*pte);
-		page = kvm_pfn_to_refcounted_page(pfn);
-		if (page && !get_page_unless_zero(page))
-			return KVM_PFN_ERR_FAULT;
+
+		if (!kfp->pin) {
+			page = kvm_pfn_to_refcounted_page(pfn);
+			if (page && !get_page_unless_zero(page))
+				return KVM_PFN_ERR_FAULT;
+		}
 	} else {
 		pfn = page_to_pfn(page);
 	}
@@ -2845,16 +2848,24 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
 	struct page *page;
+	bool r;
 
 	/*
-	 * Fast pin a writable pfn only if it is a write fault request
-	 * or the caller allows to map a writable pfn for a read fault
-	 * request.
+	 * Try the fast-only path when the caller wants to pin/get the page for
+	 * writing.  If the caller only wants to read the page, KVM must go
+	 * down the full, slow path in order to avoid racing an operation that
+	 * breaks Copy-on-Write (CoW), e.g. so that KVM doesn't end up pointing
+	 * at the old, read-only page while mm/ points at a new, writable page.
 	 */
 	if (!((kfp->flags & FOLL_WRITE) || kfp->map_writable))
 		return false;
 
-	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page)) {
+	if (kfp->pin)
+		r = pin_user_pages_fast(kfp->hva, 1, FOLL_WRITE, &page) == 1;
+	else
+		r = get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page);
+
+	if (r) {
 		*pfn = kvm_resolve_pfn(kfp, page, NULL, true);
 		return true;
 	}
@@ -2883,10 +2894,21 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 	struct page *page, *wpage;
 	int npages;
 
-	npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
+	if (kfp->pin)
+		npages = pin_user_pages_unlocked(kfp->hva, 1, &page, flags);
+	else
+		npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
 	if (npages != 1)
 		return npages;
 
+	/*
+	 * Pinning is mutually exclusive with opportunistically mapping a read
+	 * fault as writable, as KVM should never pin pages when mapping memory
+	 * into the guest (pinning is only for direct accesses from KVM).
+	 */
+	if (WARN_ON_ONCE(kfp->map_writable && kfp->pin))
+		goto out;
+
 	/* map read fault as writable if possible */
 	if (!(flags & FOLL_WRITE) && kfp->map_writable &&
 	    get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
@@ -2895,6 +2917,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 		flags |= FOLL_WRITE;
 	}
 
+out:
 	*pfn = kvm_resolve_pfn(kfp, page, NULL, flags & FOLL_WRITE);
 	return npages;
 }
@@ -3119,10 +3142,11 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 		.slot = gfn_to_memslot(vcpu->kvm, gfn),
 		.gfn = gfn,
 		.flags = FOLL_WRITE,
-		.refcounted_page = &map->refcounted_page,
+		.refcounted_page = &map->pinned_page,
+		.pin = true,
 	};
 
-	map->refcounted_page = NULL;
+	map->pinned_page = NULL;
 	map->page = NULL;
 	map->hva = NULL;
 	map->gfn = gfn;
@@ -3159,16 +3183,16 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 	if (dirty)
 		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
 
-	if (map->refcounted_page) {
+	if (map->pinned_page) {
 		if (dirty)
-			kvm_release_page_dirty(map->refcounted_page);
-		else
-			kvm_release_page_clean(map->refcounted_page);
+			kvm_set_page_dirty(map->pinned_page);
+		kvm_set_page_accessed(map->pinned_page);
+		unpin_user_page(map->pinned_page);
 	}
 
 	map->hva = NULL;
 	map->page = NULL;
-	map->refcounted_page = NULL;
+	map->pinned_page = NULL;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index d3ac1ba8ba66..acef3f5c582a 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -30,6 +30,13 @@ struct kvm_follow_pfn {
 	/* FOLL_* flags modifying lookup behavior, e.g. FOLL_WRITE. */
 	unsigned int flags;
 
+	/*
+	 * Pin the page (effectively FOLL_PIN, which is an mm/ internal flag).
+	 * The page *must* be pinned if KVM will write to the page via a kernel
+	 * mapping, e.g. via kmap(), mremap(), etc.
+	 */
+	bool pin;
+
 	/*
 	 * If non-NULL, try to get a writable mapping even for a read fault.
 	 * Set to true if a writable mapping was obtained.
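[Illustrative aside, not part of the patch: the pin/write/unpin pattern that
the changelog and pin_user_pages.rst call out, sketched outside of KVM.
demo_pin_and_write() and its parameters are made up; the mm/ APIs used are
the standard FOLL_PIN entry points.]

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/string.h>

/* Pin a user page, write through a temporary kernel mapping, then mark it
 * dirty and unpin, i.e. the "Correct (uses FOLL_PIN calls)" sequence. */
static int demo_pin_and_write(unsigned long uaddr, u8 val)
{
	struct page *page;
	void *kaddr;

	if (pin_user_pages_fast(uaddr, 1, FOLL_WRITE, &page) != 1)
		return -EFAULT;

	kaddr = kmap_local_page(page);
	memcpy(kaddr + offset_in_page(uaddr), &val, sizeof(val));
	kunmap_local(kaddr);

	/* Dirty + unpin in one call; pairs with pin_user_pages_fast(). */
	unpin_user_pages_dirty_lock(&page, 1, true);
	return 0;
}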
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:39 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-31-seanjc@google.com>
Subject: [PATCH v12 30/84] KVM: nVMX: Mark vmcs12's APIC access page dirty when unmapping
From: Sean Christopherson

Mark the APIC access page as dirty when unmapping it from KVM.  The fact
that the page _shouldn't_ be written doesn't guarantee the page _won't_ be
written.  And while the contents are likely irrelevant, the values _are_
visible to the guest, i.e. dropping writes would be visible to the guest
(though obviously highly unlikely to be problematic in practice).

Marking the map dirty will allow specifying the write vs. read-only when
*mapping* the memory, which in turn will allow creating read-only maps.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/vmx/nested.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 8d05d1d9f544..3096f6f5ecdb 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -318,12 +318,7 @@ static void nested_put_vmcs12_pages(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	/*
-	 * Unpin physical memory we referred to in the vmcs02. The APIC access
-	 * page's backing page (yeah, confusing) shouldn't actually be accessed,
-	 * and if it is written, the contents are irrelevant.
-	 */
-	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, false);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, true);
 	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
 	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
 	vmx->nested.pi_desc = NULL;
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:40 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-32-seanjc@google.com>
Subject: [PATCH v12 31/84] KVM: Pass in write/dirty to kvm_vcpu_map(), not kvm_vcpu_unmap()
From: Sean Christopherson

Now that all kvm_vcpu_{,un}map() users pass "true" for @dirty, have them
pass "true" as a @writable param to kvm_vcpu_map(), and thus create a
read-only mapping when possible.

Note, creating read-only mappings can be theoretically slower, as they
don't play nice with fast GUP due to the need to break CoW before mapping
the underlying PFN.  But practically speaking, creating a mapping isn't a
super hot path, and getting a writable mapping for reading is weird and
confusing.
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/svm/nested.c |  4 ++--
 arch/x86/kvm/svm/sev.c    |  2 +-
 arch/x86/kvm/svm/svm.c    |  8 ++++----
 arch/x86/kvm/vmx/nested.c | 16 ++++++++--------
 include/linux/kvm_host.h  | 20 ++++++++++++++++++--
 virt/kvm/kvm_main.c       | 12 +++++++-----
 6 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 6f704c1037e5..23b3a228cd0a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -922,7 +922,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 		nested_svm_vmexit(svm);
 
 out:
-	kvm_vcpu_unmap(vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map);
 
 	return ret;
 }
@@ -1126,7 +1126,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 				       vmcb12->control.exit_int_info_err,
 				       KVM_ISA_SVM);
 
-	kvm_vcpu_unmap(vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map);
 
 	nested_svm_transition_tlb_flush(vcpu);
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a16c873b3232..62f63fd714df 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3466,7 +3466,7 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
 
 	sev_es_sync_to_ghcb(svm);
 
-	kvm_vcpu_unmap(&svm->vcpu, &svm->sev_es.ghcb_map, true);
+	kvm_vcpu_unmap(&svm->vcpu, &svm->sev_es.ghcb_map);
 	svm->sev_es.ghcb = NULL;
 }
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c115d26844f7..742a2cec04ce 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2299,7 +2299,7 @@ static int vmload_vmsave_interception(struct kvm_vcpu *vcpu, bool vmload)
 		svm_copy_vmloadsave_state(vmcb12, svm->vmcb);
 	}
 
-	kvm_vcpu_unmap(vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map);
 
 	return ret;
 }
@@ -4690,7 +4690,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
 	svm_copy_vmrun_state(map_save.hva + 0x400,
 			     &svm->vmcb01.ptr->save);
 
-	kvm_vcpu_unmap(vcpu, &map_save, true);
+	kvm_vcpu_unmap(vcpu, &map_save);
 	return 0;
 }
 
@@ -4750,9 +4750,9 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 	svm->nested.nested_run_pending = 1;
 
 unmap_save:
-	kvm_vcpu_unmap(vcpu, &map_save, true);
+	kvm_vcpu_unmap(vcpu, &map_save);
 unmap_map:
-	kvm_vcpu_unmap(vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map);
 	return ret;
 }
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 3096f6f5ecdb..f7dde74ff565 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -231,7 +231,7 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map);
 	vmx->nested.hv_evmcs = NULL;
 	vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
 
@@ -318,9 +318,9 @@ static void nested_put_vmcs12_pages(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, true);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map);
 	vmx->nested.pi_desc = NULL;
 }
 
@@ -624,7 +624,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	int msr;
 	unsigned long *msr_bitmap_l1;
 	unsigned long *msr_bitmap_l0 = vmx->nested.vmcs02.msr_bitmap;
-	struct kvm_host_map msr_bitmap_map;
+	struct kvm_host_map map;
 
 	/* Nothing to do if the MSR bitmap is not in use. */
 	if (!cpu_has_vmx_msr_bitmap() ||
@@ -647,10 +647,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 		return true;
 	}
 
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), &msr_bitmap_map))
+	if (kvm_vcpu_map_readonly(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), &map))
 		return false;
 
-	msr_bitmap_l1 = (unsigned long *)msr_bitmap_map.hva;
+	msr_bitmap_l1 = (unsigned long *)map.hva;
 
 	/*
 	 * To keep the control flow simple, pay eight 8-byte writes (sixteen
@@ -714,7 +714,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
 
-	kvm_vcpu_unmap(vcpu, &msr_bitmap_map, false);
+	kvm_vcpu_unmap(vcpu, &map);
 
 	vmx->nested.force_msr_bitmap_recalc = false;
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3d4094ece479..82ca0971c156 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -285,6 +285,7 @@ struct kvm_host_map {
 	void *hva;
 	kvm_pfn_t pfn;
 	kvm_pfn_t gfn;
+	bool writable;
 };
 
 /*
@@ -1297,8 +1298,23 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
-int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
-void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
+
+int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map,
+		   bool writable);
+void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map);
+
+static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       struct kvm_host_map *map)
+{
+	return __kvm_vcpu_map(vcpu, gpa, map, true);
+}
+
+static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gpa_t gpa,
+					struct kvm_host_map *map)
+{
+	return __kvm_vcpu_map(vcpu, gpa, map, false);
+}
+
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
 int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4a9b99c11355..a46c7bf1f902 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3136,7 +3136,8 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
-int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
+int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
+		   bool writable)
 {
 	struct kvm_follow_pfn kfp = {
 		.slot = gfn_to_memslot(vcpu->kvm, gfn),
@@ -3150,6 +3151,7 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 	map->page = NULL;
 	map->hva = NULL;
 	map->gfn = gfn;
+	map->writable = writable;
 
 	map->pfn = kvm_follow_pfn(&kfp);
 	if (is_error_noslot_pfn(map->pfn))
@@ -3166,9 +3168,9 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 
 	return map->hva ? 0 : -EFAULT;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_map);
+EXPORT_SYMBOL_GPL(__kvm_vcpu_map);
 
-void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
+void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
 {
 	if (!map->hva)
 		return;
@@ -3180,11 +3182,11 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 	memunmap(map->hva);
 #endif
 
-	if (dirty)
+	if (map->writable)
 		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
 
 	if (map->pinned_page) {
-		if (dirty)
+		if (map->writable)
 			kvm_set_page_dirty(map->pinned_page);
 		kvm_set_page_accessed(map->pinned_page);
 		unpin_user_page(map->pinned_page);
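[Illustrative aside, not part of the patch: what call sites look like with
the new convention, where writability is declared at map time and remembered
by the map.  The demo_*() functions are made up; kvm_vcpu_map(),
kvm_vcpu_map_readonly() and kvm_vcpu_unmap() are the helpers declared above.]

static void demo_scan_guest_page(struct kvm_vcpu *vcpu, gpa_t gpa)
{
	struct kvm_host_map map;

	if (kvm_vcpu_map_readonly(vcpu, gpa_to_gfn(gpa), &map))
		return;

	/* ... read via map.hva ... */

	kvm_vcpu_unmap(vcpu, &map);	/* map.writable decides dirty vs. clean */
}

static int demo_patch_guest_byte(struct kvm_vcpu *vcpu, gpa_t gpa, u8 val)
{
	struct kvm_host_map map;

	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
		return -EFAULT;

	*((u8 *)map.hva + offset_in_page(gpa)) = val;
	kvm_vcpu_unmap(vcpu, &map);
	return 0;
}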

-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:41 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-33-seanjc@google.com>
Subject: [PATCH v12 32/84] KVM: Get writable mapping for __kvm_vcpu_map() only when necessary
From: Sean Christopherson

When creating a memory map for read, don't request a writable pfn from the
primary MMU.  While creating read-only mappings can be theoretically slower,
as they don't play nice with fast GUP due to the need to break CoW before
mapping the underlying PFN, practically speaking, creating a mapping isn't
a super hot path, and getting a writable mapping for reading is weird and
confusing.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a46c7bf1f902..a28479629488 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3142,7 +3142,7 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
 	struct kvm_follow_pfn kfp = {
 		.slot = gfn_to_memslot(vcpu->kvm, gfn),
 		.gfn = gfn,
-		.flags = FOLL_WRITE,
+		.flags = writable ? FOLL_WRITE : 0,
 		.refcounted_page = &map->pinned_page,
 		.pin = true,
 	};
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:42 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-34-seanjc@google.com>
Subject: [PATCH v12 33/84] KVM: Disallow direct access (w/o mmu_notifier) to unpinned pfn by default
From: Sean Christopherson

Add an off-by-default module param to control whether or not KVM is allowed
to map memory that isn't pinned, i.e. that KVM can't guarantee won't be
freed while it is mapped into KVM and/or the guest.  Don't remove the
functionality entirely, as there are use cases where mapping unpinned memory
is safe (as defined by the platform owner), e.g. when memory is hidden from
the kernel and managed by userspace, in which case userspace is already
fully trusted to not muck with guest memory mappings.

But for more typical setups, mapping unpinned memory is wildly unsafe, and
unnecessary.  The APIs are used exclusively by x86's nested virtualization
support, and there is no known (or sane) use case for mapping PFN-mapped
memory a KVM guest _and_ letting the guest use it for virtualization
structures.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a28479629488..0b3c0bddaa07 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -94,6 +94,13 @@ unsigned int halt_poll_ns_shrink = 2;
 module_param(halt_poll_ns_shrink, uint, 0644);
 EXPORT_SYMBOL_GPL(halt_poll_ns_shrink);
 
+/*
+ * Allow direct access (from KVM or the CPU) without MMU notifier protection
+ * to unpinned pages.
+ */
+static bool allow_unsafe_mappings;
+module_param(allow_unsafe_mappings, bool, 0444);
+
 /*
  * Ordering of locks:
  *
@@ -2821,6 +2828,9 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 	 * reference to such pages would cause KVM to prematurely free a page
 	 * it doesn't own (KVM gets and puts the one and only reference).
 	 * Don't allow those pages until the FIXME is resolved.
+	 *
+	 * Don't grab a reference for pins, callers that pin pages are required
+	 * to check refcounted_page, i.e. must not blindly release the pfn.
 	 */
 	if (pte) {
 		pfn = pte_pfn(*pte);
@@ -2942,6 +2952,14 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	bool write_fault = kfp->flags & FOLL_WRITE;
 	int r;
 
+	/*
+	 * Remapped memory cannot be pinned in any meaningful sense.  Bail if
+	 * the caller wants to pin the page, i.e. access the page outside of
+	 * MMU notifier protection, and unsafe umappings are disallowed.
+	 */
+	if (kfp->pin && !allow_unsafe_mappings)
+		return -EINVAL;
+
 	r = follow_pte(vma, kfp->hva, &ptep, &ptl);
 	if (r) {
 		/*
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:58 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:51:43 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-35-seanjc@google.com>
Subject: [PATCH v12 34/84] KVM: Add a helper to lookup a pfn without grabbing a reference
From: Sean Christopherson

Add a kvm_follow_pfn() wrapper, kvm_lookup_pfn(), to allow looking up a
gfn=>pfn mapping without the caller getting a reference to any underlying
page.  The API will be used in flows that want to know if a gfn points at
a valid pfn, but don't actually need to do anything with the pfn.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h |  2 ++
 virt/kvm/kvm_main.c      | 16 ++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 82ca0971c156..5a572cef4adc 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1212,6 +1212,8 @@ static inline void kvm_release_page_unused(struct page *page)
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+kvm_pfn_t kvm_lookup_pfn(struct kvm *kvm, gfn_t gfn);
+
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0b3c0bddaa07..ad84dab8c5dc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3118,6 +3118,22 @@ kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
 
+kvm_pfn_t kvm_lookup_pfn(struct kvm *kvm, gfn_t gfn)
+{
+	struct page *refcounted_page = NULL;
+	struct kvm_follow_pfn kfp = {
+		.slot = gfn_to_memslot(kvm, gfn),
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+		.refcounted_page = &refcounted_page,
+	};
+	kvm_pfn_t pfn;
+
+	pfn = kvm_follow_pfn(&kfp);
+	kvm_release_page_unused(refcounted_page);
+	return pfn;
+}
+
 int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
 		       struct page **pages, int nr_pages)
 {
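[Illustrative aside, not part of the patch: the intended usage is a pure
validity probe.  demo_gfn_is_mapped() is made up for illustration and simply
mirrors the x86 conversion in the next patch.]

/* Ask whether a gfn currently resolves to a usable pfn, without taking or
 * keeping any reference on the backing page. */
static bool demo_gfn_is_mapped(struct kvm *kvm, gfn_t gfn)
{
	return !is_error_noslot_pfn(kvm_lookup_pfn(kvm, gfn));
}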
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=Xse5B7AOfcqAOlY7bR5VB/olShqjuvoHnQg+aILML2Y=; b=aoeOr1kEZUS2hBv2Rkn8ifZLZOsxsP1Bv1brvMyrBSyhc1VJSDS4GJD2qLZd4fKGqg GLGZb9CmxkUApyCT167eWggqwYfZgCM+z3d0UyGrz1dPbu3vwRhJ9EsymEIy2OPrtnTy c5yiogbRA6/8VfhbSRUmh4R5fLa5vHTPnY/26wDkDbIpeahfnjBynbQFFbERdKVId0XB zHdsNTMUL0ukBfXw6KG6hYi8pQtDWDOTzAbgbGIIpiudrp6I3noyejDaxyhbsWKqOjYW +VaCM/LSCPDW9Nsuvh3DjAIRJpvSHyjdfPC60R8xlWQm2R38kaLFybKV9YDFwk5e0bRG Rkbw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038030; x=1722642830; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=Xse5B7AOfcqAOlY7bR5VB/olShqjuvoHnQg+aILML2Y=; b=uTgBr3PuU6OVKwngn0xoISmIpq2zBvH9OZPxQiPMb98C/NeQkKqHgPwWg1SoZbVwQJ txiUuXRx8PPeHS9uZm4wGv+mWJd/X8PG+MIw0XXCMBtUFoYL/mez2w6HonNIZpaYqkBV dc1uEH6nEjlIFybQHm++rTwDCiQWlS6DnFmCe7rH8nFVQEJlyQ24pmNXLy9TM67lh6qZ 0eJ0tAgpBhhAYPD+HkY0B5vTqupoJd4gYk7aTITDTZdl169iFXGLGVRMOJB2k3aPuWYM gKF2mvjQRABJ1EXhG9P6Kyb7fZaUeik3tvIvOBgANQD7g4M1XBKCiFf8pu34m5FEPOU8 avmQ== X-Forwarded-Encrypted: i=1; AJvYcCVa/+3OGKhe9Brenye1E4s/j2xOQU0DpJLXUf0XzoNY+TqatjEHFBiq2zpytt9UVzpr+mTag9RK+G/ChY0tsicgbMkWZYlukzSJxvKp X-Gm-Message-State: AOJu0Yw8ZjwE8MU3PIcBPoRahdFvgUCYz+r33ZZNWP9CyyRJXiGIJhvg fj+6AN9YufYjHQTSCfxfCat0Zu1bNukgYR9biDsIjOdLr3JrXo78VZzZ2iG8sHMPL8uLOgYYFfq 0mA== X-Google-Smtp-Source: AGHT+IEFcNwoZPtxzQ32cLJ1vxa7gkQJVcgltXDXIuLlGv+WV+Ss10kIc9nwtzcNKm3zIIgT6vAGf5Ndk9o= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6a00:2e2a:b0:70d:9a0e:c13b with SMTP id d2e1a72fcca58-70ece7f0562mr55820b3a.3.1722038029490; Fri, 26 Jul 2024 16:53:49 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:44 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-36-seanjc@google.com> Subject: [PATCH v12 35/84] KVM: x86: Use kvm_lookup_pfn() to check if retrying #PF is useful From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use kvm_lookup_pfn() instead of an open coded equivalent when checking to see if KVM should exit to userspace or re-enter the guest after failed instruction emulation triggered by a guest page fault. Note, there is a small functional change as kvm_lookup_pfn() doesn't mark the page as accessed, whereas kvm_release_pfn_clean() does mark the page accessed (if the pfn is backed by a refcounted struct page). Neither behavior is wrong per se, e.g. 
querying the gfn=>pfn mapping doesn't actually
access the page, but the guest _did_ access the gfn, otherwise the fault
wouldn't have occurred.

That said, either KVM will exit to userspace and the guest will likely be
terminated, or KVM will re-enter the guest and, barring weirdness in the
guest, the guest will re-access the gfn, and KVM will fault-in the pfn and
mark it accessed.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/x86.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index af6c8cf6a37a..59501ad6e7f5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8867,7 +8867,6 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
                                   int emulation_type)
 {
     gpa_t gpa = cr2_or_gpa;
-    kvm_pfn_t pfn;
 
     if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF))
         return false;
@@ -8892,22 +8891,15 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
     }
 
     /*
-     * Do not retry the unhandleable instruction if it faults on the
-     * readonly host memory, otherwise it will goto a infinite loop:
+     * Do not retry the unhandleable instruction if emulation was triggered
+     * for emulated MMIO, e.g. by a readonly memslot or lack of a memslot,
+     * otherwise KVM will send the vCPU into an infinite loop:
      * retry instruction -> write #PF -> emulation fail -> retry
      * instruction -> ...
      */
-    pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
-
-    /*
-     * If the instruction failed on the error pfn, it can not be fixed,
-     * report the error to userspace.
-     */
-    if (is_error_noslot_pfn(pfn))
+    if (is_error_noslot_pfn(kvm_lookup_pfn(vcpu->kvm, gpa_to_gfn(gpa))))
         return false;
 
-    kvm_release_pfn_clean(pfn);
-
     /*
      * If emulation may have been triggered by a write to a shadowed page
      * table, unprotect the gfn (zap any relevant SPTEs) and re-enter the
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:45 -0700
Message-ID: <20240726235234.228822-37-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 36/84] KVM: x86: Use kvm_lookup_pfn() to check if APIC access page was installed
From: Sean Christopherson

Use kvm_lookup_pfn() to verify that the APIC access page was allocated and
installed as
expected.  The mapping is controlled by KVM, i.e. it's guaranteed to be
backed by struct page; the purpose of the check is purely to ensure the
page is allocated, i.e. that KVM doesn't point the guest at garbage.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/lapic.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 6d65b36fac29..88dc43660d23 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2612,8 +2612,8 @@ void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
 
 int kvm_alloc_apic_access_page(struct kvm *kvm)
 {
-    struct page *page;
     void __user *hva;
+    kvm_pfn_t pfn;
     int ret = 0;
 
     mutex_lock(&kvm->slots_lock);
@@ -2628,17 +2628,16 @@ int kvm_alloc_apic_access_page(struct kvm *kvm)
         goto out;
     }
 
-    page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
-    if (!page) {
-        ret = -EFAULT;
-        goto out;
-    }
-
     /*
      * Do not pin the page in memory, so that memory hot-unplug
      * is able to migrate it.
      */
-    put_page(page);
+    pfn = kvm_lookup_pfn(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+    if (is_error_noslot_pfn(pfn)) {
+        ret = -EFAULT;
+        goto out;
+    }
+
     kvm->arch.apic_access_memslot_enabled = true;
 out:
     mutex_unlock(&kvm->slots_lock);
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:46 -0700
Message-ID: <20240726235234.228822-38-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 37/84] KVM: x86/mmu: Add "mmu" prefix fault-in helpers to free up generic names
From: Sean Christopherson

Prefix x86's faultin_pfn helpers with "mmu" so that the mmu-less names can
be used by common KVM for similar APIs.

No functional change intended.
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/mmu.c          | 19 ++++++++++---------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a201b56728ae..4d30920f653d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4301,8 +4301,8 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
     return req_max_level;
 }
 
-static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
-                                   struct kvm_page_fault *fault)
+static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
+                                       struct kvm_page_fault *fault)
 {
     int max_order, r;
 
@@ -4325,10 +4325,11 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
     return RET_PF_CONTINUE;
 }
 
-static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
+                                 struct kvm_page_fault *fault)
 {
     if (fault->is_private)
-        return kvm_faultin_pfn_private(vcpu, fault);
+        return kvm_mmu_faultin_pfn_private(vcpu, fault);
 
     fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
                                       fault->write, &fault->map_writable);
@@ -4363,8 +4364,8 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
     return RET_PF_CONTINUE;
 }
 
-static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
-                           unsigned int access)
+static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
+                               struct kvm_page_fault *fault, unsigned int access)
 {
     struct kvm_memory_slot *slot = fault->slot;
     int ret;
@@ -4447,7 +4448,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
     if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn))
         return RET_PF_RETRY;
 
-    ret = __kvm_faultin_pfn(vcpu, fault);
+    ret = __kvm_mmu_faultin_pfn(vcpu, fault);
     if (ret != RET_PF_CONTINUE)
         return ret;
 
@@ -4524,7 +4525,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
     if (r)
         return r;
 
-    r = kvm_faultin_pfn(vcpu, fault, ACC_ALL);
+    r = kvm_mmu_faultin_pfn(vcpu, fault, ACC_ALL);
     if (r != RET_PF_CONTINUE)
         return r;
 
@@ -4617,7 +4618,7 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
     if (r)
         return r;
 
-    r = kvm_faultin_pfn(vcpu, fault, ACC_ALL);
+    r = kvm_mmu_faultin_pfn(vcpu, fault, ACC_ALL);
     if (r != RET_PF_CONTINUE)
         return r;
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index f67396c435df..a5113347bb12 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -235,7 +235,7 @@ struct kvm_page_fault {
     /* The memslot containing gfn. May be NULL. */
     struct kvm_memory_slot *slot;
 
-    /* Outputs of kvm_faultin_pfn. */
+    /* Outputs of kvm_mmu_faultin_pfn(). */
     unsigned long mmu_seq;
     kvm_pfn_t pfn;
     bool map_writable;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index bc801d454f41..b02d0abfca68 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -811,7 +811,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
     if (r)
         return r;
 
-    r = kvm_faultin_pfn(vcpu, fault, walker.pte_access);
+    r = kvm_mmu_faultin_pfn(vcpu, fault, walker.pte_access);
     if (r != RET_PF_CONTINUE)
         return r;
 
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:47 -0700
Message-ID: <20240726235234.228822-39-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 38/84] KVM: x86/mmu: Put direct prefetched pages via kvm_release_page_clean()
From: Sean Christopherson

Use kvm_release_page_clean() to put prefetched pages instead of calling
put_page() directly.  This will allow de-duplicating the prefetch code
between indirect and direct MMUs.

Note, there's a small functional change as kvm_release_page_clean() marks
the page/folio as accessed.  While it's not strictly guaranteed that the
guest will access the page, KVM won't intercept guest accesses, i.e. won't
mark the page accessed if it _is_ accessed by the guest (unless A/D bits
are disabled, but running without A/D bits is effectively limited to
pre-HSW Intel CPUs).
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4d30920f653d..0def1444c01c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2919,7 +2919,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
     for (i = 0; i < ret; i++, gfn++, start++) {
         mmu_set_spte(vcpu, slot, start, access, gfn,
                      page_to_pfn(pages[i]), NULL);
-        put_page(pages[i]);
+        kvm_release_page_clean(pages[i]);
     }
 
     return 0;
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:48 -0700
Message-ID: <20240726235234.228822-40-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 39/84] KVM: x86/mmu: Add common helper to handle prefetching SPTEs
From: Sean Christopherson

Deduplicate the prefetching code for indirect and direct MMUs.  The core
logic is the same; the only difference is that indirect MMUs need to
prefetch SPTEs one-at-a-time, as contiguous guest virtual addresses aren't
guaranteed to yield contiguous guest physical addresses.
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/mmu.c         | 40 +++++++++++++++++++++-------------
 arch/x86/kvm/mmu/paging_tmpl.h | 13 +----------
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0def1444c01c..e76f64f55c4a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2897,32 +2897,41 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
     return ret;
 }
 
-static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
-                                    struct kvm_mmu_page *sp,
-                                    u64 *start, u64 *end)
+static bool kvm_mmu_prefetch_sptes(struct kvm_vcpu *vcpu, gfn_t gfn, u64 *sptep,
+                                   int nr_pages, unsigned int access)
 {
     struct page *pages[PTE_PREFETCH_NUM];
     struct kvm_memory_slot *slot;
-    unsigned int access = sp->role.access;
-    int i, ret;
-    gfn_t gfn;
+    int i;
+
+    if (WARN_ON_ONCE(nr_pages > PTE_PREFETCH_NUM))
+        return false;
 
-    gfn = kvm_mmu_page_get_gfn(sp, spte_index(start));
     slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, access & ACC_WRITE_MASK);
     if (!slot)
-        return -1;
+        return false;
 
-    ret = kvm_prefetch_pages(slot, gfn, pages, end - start);
-    if (ret <= 0)
-        return -1;
+    nr_pages = kvm_prefetch_pages(slot, gfn, pages, nr_pages);
+    if (nr_pages <= 0)
+        return false;
 
-    for (i = 0; i < ret; i++, gfn++, start++) {
-        mmu_set_spte(vcpu, slot, start, access, gfn,
+    for (i = 0; i < nr_pages; i++, gfn++, sptep++) {
+        mmu_set_spte(vcpu, slot, sptep, access, gfn,
                      page_to_pfn(pages[i]), NULL);
         kvm_release_page_clean(pages[i]);
     }
 
-    return 0;
+    return true;
+}
+
+static bool direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
+                                     struct kvm_mmu_page *sp,
+                                     u64 *start, u64 *end)
+{
+    gfn_t gfn = kvm_mmu_page_get_gfn(sp, spte_index(start));
+    unsigned int access = sp->role.access;
+
+    return kvm_mmu_prefetch_sptes(vcpu, gfn, start, end - start, access);
 }
 
 static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
@@ -2940,8 +2949,9 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
         if (is_shadow_present_pte(*spte) || spte == sptep) {
             if (!start)
                 continue;
-            if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)
+            if (!direct_pte_prefetch_many(vcpu, sp, start, spte))
                 return;
+            start = NULL;
         } else if (!start)
             start = spte;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index b02d0abfca68..e1c2f098d9d5 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -533,9 +533,7 @@ static bool
 FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                      u64 *spte, pt_element_t gpte)
 {
-    struct kvm_memory_slot *slot;
     unsigned pte_access;
-    struct page *page;
     gfn_t gfn;
 
     if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
@@ -545,16 +543,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
     pte_access = sp->role.access & FNAME(gpte_access)(gpte);
     FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 
-    slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, pte_access & ACC_WRITE_MASK);
-    if (!slot)
-        return false;
-
-    if (kvm_prefetch_pages(slot, gfn, &page, 1) != 1)
-        return false;
-
-    mmu_set_spte(vcpu, slot, spte, pte_access, gfn, page_to_pfn(page), NULL);
-    kvm_release_page_clean(page);
-    return true;
+    return kvm_mmu_prefetch_sptes(vcpu, gfn, spte, 1, pte_access);
 }
 
 static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
-- 
2.46.0.rc1.232.g9752f9e123-goog
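To make the new calling convention concrete, the two call shapes from the diff above are shown side by side; this is an illustrative recap only, not additional code from the series.  The direct MMU batches a contiguous run of SPTEs, while the shadow MMU must prefetch one SPTE per guest PTE because adjacent guest virtual addresses need not map to adjacent guest physical addresses.

/* Direct MMU: one call covers the contiguous SPTE range [start, end). */
kvm_mmu_prefetch_sptes(vcpu, gfn, start, end - start, access);

/* Shadow MMU: exactly one SPTE per guest PTE, gfn taken from the gPTE. */
kvm_mmu_prefetch_sptes(vcpu, gfn, spte, 1, pte_access);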
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:49 -0700
Message-ID: <20240726235234.228822-41-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 40/84] KVM: x86/mmu: Add helper to "finish" handling a guest page fault
From: Sean Christopherson

Add a helper to finish/complete the handling of a guest page fault, e.g.
to mark the pages accessed and put any held references.  In the near
future, this will allow improving the logic without having to copy+paste
changes into all page fault paths.  And in the less near future, it will
allow sharing the "finish" API across all architectures.

No functional change intended.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/mmu.c         | 12 +++++++++---
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e76f64f55c4a..1cdd67707461 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4311,6 +4311,12 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
     return req_max_level;
 }
 
+static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
+                                      struct kvm_page_fault *fault, int r)
+{
+    kvm_release_pfn_clean(fault->pfn);
+}
+
 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
                                        struct kvm_page_fault *fault)
 {
@@ -4476,7 +4482,7 @@ static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
      * mmu_lock is acquired.
      */
     if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn)) {
-        kvm_release_pfn_clean(fault->pfn);
+        kvm_mmu_finish_page_fault(vcpu, fault, RET_PF_RETRY);
         return RET_PF_RETRY;
     }
 
@@ -4552,8 +4558,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
     r = direct_map(vcpu, fault);
 
 out_unlock:
+    kvm_mmu_finish_page_fault(vcpu, fault, r);
     write_unlock(&vcpu->kvm->mmu_lock);
-    kvm_release_pfn_clean(fault->pfn);
     return r;
 }
 
@@ -4641,8 +4647,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
     r = kvm_tdp_mmu_map(vcpu, fault);
 
 out_unlock:
+    kvm_mmu_finish_page_fault(vcpu, fault, r);
     read_unlock(&vcpu->kvm->mmu_lock);
-    kvm_release_pfn_clean(fault->pfn);
     return r;
 }
 #endif
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index e1c2f098d9d5..b6897916c76b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -835,8 +835,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
     r = FNAME(fetch)(vcpu, fault, &walker);
 
 out_unlock:
+    kvm_mmu_finish_page_fault(vcpu, fault, r);
     write_unlock(&vcpu->kvm->mmu_lock);
-    kvm_release_pfn_clean(fault->pfn);
     return r;
 }
 
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:50 -0700
Message-ID: <20240726235234.228822-42-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 41/84] KVM: x86/mmu: Mark pages/folios dirty at the origin of make_spte()
From: Sean Christopherson

Move the marking of folios dirty from make_spte() out to its callers,
which have access to the _struct page_, not just the underlying pfn.  Once
all architectures follow suit, this will allow removing KVM's ugly hack
where KVM elevates the refcount of VM_MIXEDMAP pfns that happen to be
struct page memory.
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/mmu.c         | 29 +++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h |  5 +++++
 arch/x86/kvm/mmu/spte.c        | 11 -----------
 3 files changed, 32 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1cdd67707461..7e7b855ce1e1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2918,7 +2918,16 @@ static bool kvm_mmu_prefetch_sptes(struct kvm_vcpu *vcpu, gfn_t gfn, u64 *sptep,
     for (i = 0; i < nr_pages; i++, gfn++, sptep++) {
         mmu_set_spte(vcpu, slot, sptep, access, gfn,
                      page_to_pfn(pages[i]), NULL);
-        kvm_release_page_clean(pages[i]);
+
+        /*
+         * KVM always prefetches writable pages from the primary MMU,
+         * and KVM can make its SPTE writable in the fast page, without
+         * notifying the primary MMU.  Mark pages/folios dirty now to
+         * ensure file data is written back if it ends up being written
+         * by the guest.  Because KVM's prefetching GUPs writable PTEs,
+         * the probability of unnecessary writeback is extremely low.
+         */
+        kvm_release_page_dirty(pages[i]);
     }
 
     return true;
@@ -4314,7 +4323,23 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
                                       struct kvm_page_fault *fault, int r)
 {
-    kvm_release_pfn_clean(fault->pfn);
+    lockdep_assert_once(lockdep_is_held(&vcpu->kvm->mmu_lock) ||
+                        r == RET_PF_RETRY);
+
+    /*
+     * If the page that KVM got from the *primary MMU* is writable, and KVM
+     * installed or reused a SPTE, mark the page/folio dirty.  Note, this
+     * may mark a folio dirty even if KVM created a read-only SPTE, e.g. if
+     * the GFN is write-protected.  Folios can't be safely marked dirty
+     * outside of mmu_lock as doing so could race with writeback on the
+     * folio.  As a result, KVM can't mark folios dirty in the fast page
+     * fault handler, and so KVM must (somewhat) speculatively mark the
+     * folio dirty if KVM could locklessly make the SPTE writable.
+     */
+    if (!fault->map_writable || r == RET_PF_RETRY)
+        kvm_release_pfn_clean(fault->pfn);
+    else
+        kvm_release_pfn_dirty(fault->pfn);
 }
 
 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index b6897916c76b..2e2d87a925ac 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -953,6 +953,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
                   spte_to_pfn(spte), spte, true, false,
                   host_writable, &spte);
 
+    /*
+     * There is no need to mark the pfn dirty, as the new protections must
+     * be a subset of the old protections, i.e. synchronizing a SPTE cannot
+     * change the SPTE from read-only to writable.
+     */
     return mmu_spte_update(sptep, spte);
 }
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 9b8795bd2f04..2c5650390d3b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -277,17 +277,6 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
         mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
     }
 
-    /*
-     * If the page that KVM got from the primary MMU is writable, i.e. if
-     * it's host-writable, mark the page/folio dirty.  As alluded to above,
-     * folios can't be safely marked dirty in the fast page fault handler,
-     * and so KVM must (somewhat) speculatively mark the folio dirty even
-     * though it isn't guaranteed to be written as KVM won't mark the folio
-     * dirty if/when the SPTE is made writable.
-     */
-    if (host_writable)
-        kvm_set_pfn_dirty(pfn);
-
     *new_spte = spte;
     return wrprot;
 }
-- 
2.46.0.rc1.232.g9752f9e123-goog
From nobody Mon Sep 16 19:16:58 2024
Date: Fri, 26 Jul 2024 16:51:51 -0700
Message-ID: <20240726235234.228822-43-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 42/84] KVM: Move declarations of memslot accessors up in kvm_host.h
From: Sean Christopherson

Move the memslot lookup helpers further up in kvm_host.h so that they can
be used by inlined "to pfn" wrappers.

No functional change intended.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 5a572cef4adc..ef0277b77375 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1153,6 +1153,10 @@ static inline bool kvm_memslot_iter_is_valid(struct kvm_memslot_iter *iter, gfn_t
          kvm_memslot_iter_is_valid(iter, end);            \
          kvm_memslot_iter_next(iter))
 
+struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
+struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
+struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
+
 /*
  * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
  * - create a new memory slot
@@ -1290,15 +1294,13 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 })
 
 int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len);
-struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
 bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn);
 bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn);
 void mark_page_dirty_in_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot, gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 
-struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
-struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
+
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 
 int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map,
-- 
2.46.0.rc1.232.g9752f9e123-goog
AGHT+IHHkDF9jDFipyHQqfPDWUmaQYRR7lKWyDIChzQG4y5VknOXhGJqf1L9qSRAYqKwiuxxpki7u8BCvjE= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a63:141f:0:b0:6e5:ef07:5922 with SMTP id 41be03b00d2f7-7ac8d9d818bmr4158a12.1.1722038046008; Fri, 26 Jul 2024 16:54:06 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:52 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-44-seanjc@google.com> Subject: [PATCH v12 43/84] KVM: Add kvm_faultin_pfn() to specifically service guest page faults From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add a new dedicated API, kvm_faultin_pfn(), for servicing guest page faults, i.e. for getting pages/pfns that will be mapped into the guest via an mmu_notifier-protected KVM MMU. Keep struct kvm_follow_pfn buried in internal code, as having __kvm_faultin_pfn() take "out" params is actually cleaner for several architectures, e.g. it allows the caller to have its own "page fault" structure without having to marshal data to/from kvm_follow_pfn. Long term, common KVM would ideally provide a kvm_page_fault structure, a la x86's struct of the same name. But all architectures need to be converted to a common API before that can happen. Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- include/linux/kvm_host.h | 11 +++++++++++ virt/kvm/kvm_main.c | 22 ++++++++++++++++++++++ 2 files changed, 33 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index ef0277b77375..e0548ae92659 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1217,6 +1217,17 @@ void kvm_release_page_clean(struct page *page); void kvm_release_page_dirty(struct page *page); =20 kvm_pfn_t kvm_lookup_pfn(struct kvm *kvm, gfn_t gfn); +kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn, + unsigned int foll, bool *writable, + struct page **refcounted_page); + +static inline kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, + bool write, bool *writable, + struct page **refcounted_page) +{ + return __kvm_faultin_pfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, + write ? 
FOLL_WRITE : 0, writable, refcounted_page); +} =20 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn); kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index ad84dab8c5dc..6dc448602751 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -3134,6 +3134,28 @@ kvm_pfn_t kvm_lookup_pfn(struct kvm *kvm, gfn_t gfn) return pfn; } =20 +kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn, + unsigned int foll, bool *writable, + struct page **refcounted_page) +{ + struct kvm_follow_pfn kfp =3D { + .slot =3D slot, + .gfn =3D gfn, + .flags =3D foll, + .map_writable =3D writable, + .refcounted_page =3D refcounted_page, + }; + + if (WARN_ON_ONCE(!writable || !refcounted_page)) + return KVM_PFN_ERR_FAULT; + + *writable =3D false; + *refcounted_page =3D NULL; + + return kvm_follow_pfn(&kfp); +} +EXPORT_SYMBOL_GPL(__kvm_faultin_pfn); + int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn, struct page **pages, int nr_pages) { --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pg1-f202.google.com (mail-pg1-f202.google.com [209.85.215.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AA1F71822C4 for ; Fri, 26 Jul 2024 23:54:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038051; cv=none; b=ElsTISFHn/X6lh4nSDvdp6rzjGzjcqhJUIuxYMfesekIz5CEVjhiVRxMGTYzi2h5azPIMNKBgZtigpzN6EIO6qoyxmlHd0AN7fZARKX23KbD56KMwZwPDEdwkBDGwHkLOjokWjVsqkX12lo7sIdCgkk6BUD8V6zedhP+7Ugn+NY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038051; c=relaxed/simple; bh=V+LqqqAUYhUM/5n5EbZbHeZYUdQNA5yUVRCdR+GfGK4=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=ViOZ1qElEXcHtuT4VseEA1psz6xsW3qr7Azgice2xiby1Xji3bf1DngXzBfmkw5vxeOr5vPLEUHxdAHCjbWp9uCwgHtJ3U+Wd7wcw+752r0AqzASMcrzG8/Pqu4de9xFcE281XpT31uM2MJzrJo3MgW5byQTE+YFco/1iR9Rc9k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=nE3pa4YJ; arc=none smtp.client-ip=209.85.215.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="nE3pa4YJ" Received: by mail-pg1-f202.google.com with SMTP id 41be03b00d2f7-76522d1dca5so1517879a12.0 for ; Fri, 26 Jul 2024 16:54:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038049; x=1722642849; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=kZSI1KNdqk//fOFhZYsFCszCldNWPUNapLOmZmlQ+ic=; b=nE3pa4YJfXtt227FHvwjhWHwzBZF2cjYR6sav8EMzbU5TswZVIHwB1W5nij6YbAAHr bYzgaDXjnIT7TBj5XMQiJoiRTlGOFuXn38jRBs6+7wXGQIrMSlIwzrUks/3ROPl5aLFk JPBe81hPcy81kvN7kJA5YeT2shitB0FbgradU4O2lQyV4ksyQt2+KZiCbdPiqRQJsrmE 6/58JNfGUQDZ149S86zQXqNnNPPeD9IcEIQXPgfEh55JYghY12MTGgGCbd34dO3dM6R/ 
PbkHhwEW6GMI33KSKDaqSkO7MvMilYpdmiJfwlhSahl3aKAbd2AECUIZXBWP60uALBhl WZzQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038049; x=1722642849; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=kZSI1KNdqk//fOFhZYsFCszCldNWPUNapLOmZmlQ+ic=; b=Jkb4ap7COsJLkQO3Jl1DmY7YgoliUJpeZ081Lfap7QM+RNCHYLhxr7za/I5NwsjAIW FZK/dEupvC07s7mE+vvpjT4ogWfyPRe3aPeZk5nfkLgTmSEivGQPuRTublA506mUwtgH uaCxZYGt32+wZqtK+E5ygOSl4VsZq7T3O46qeqR0vIdlsMJuohShJzyndwG79T6T36Er sXf67mQOvUiu5W7x9X4IUKABN3Rd55SwyI9DNCt1Q2HCGtvHw2BJRHss1FhxPVlVnofm OehAb/vVoyxogkVRPuooh6XBdtxAwp+N6s9OvyzuJ/BEeTSGdc9mgaVODPGRNbChqpvS gU9A== X-Forwarded-Encrypted: i=1; AJvYcCU9rYt1gNTb8rxloRCtKbdxbMA7xTx4/sb1HzeF/3qOzQ5ZIAZpx2o9lxecIei3uQBAhy6H2xtvtF5rlwQzS4cayXOJSHjbe2vW6fjb X-Gm-Message-State: AOJu0Yx4VVwOk9heqHTMHtuW4a0o7k2Bez5vbe1bk2MIKFtqarNlqXWj Hxcd6m8+1LiTK39OkAF44ULiSQfTLxE/eiW+86OWN8UI6209zD3SSCiGH3E2b4eazOJK0YHIGwi SxQ== X-Google-Smtp-Source: AGHT+IHwPJ/UM6h8kXfIVs09j5bz4zlDvOcU2rq/epWSbhMGMPOGMhlwxlAMNu8ktgd9HGr+mJ5fl7Shz2c= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a63:360a:0:b0:7a1:4462:412e with SMTP id 41be03b00d2f7-7ac8fd30864mr2218a12.9.1722038048851; Fri, 26 Jul 2024 16:54:08 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:53 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-45-seanjc@google.com> Subject: [PATCH v12 44/84] KVM: x86/mmu: Convert page fault paths to kvm_faultin_pfn() From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Convert KVM x86 to use the recently introduced __kvm_faultin_pfn(). Opportunstically capture the refcounted_page grabbed by KVM for use in future changes. No functional change intended. Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/mmu.c | 14 ++++++++++---- arch/x86/kvm/mmu/mmu_internal.h | 1 + 2 files changed, 11 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 7e7b855ce1e1..53555ea5e5bb 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4369,11 +4369,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_v= cpu *vcpu, static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { + unsigned int foll =3D fault->write ? 
FOLL_WRITE : 0; + if (fault->is_private) return kvm_mmu_faultin_pfn_private(vcpu, fault); =20 - fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true, - fault->write, &fault->map_writable); + foll |=3D FOLL_NOWAIT; + fault->pfn =3D __kvm_faultin_pfn(fault->slot, fault->gfn, foll, + &fault->map_writable, &fault->refcounted_page); =20 /* * If resolving the page failed because I/O is needed to fault-in the @@ -4400,8 +4403,11 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vc= pu, * to wait for IO. Note, gup always bails if it is unable to quickly * get a page and a fatal signal, i.e. SIGKILL, is pending. */ - fault->pfn =3D __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, true, - fault->write, &fault->map_writable); + foll |=3D FOLL_INTERRUPTIBLE; + foll &=3D ~FOLL_NOWAIT; + fault->pfn =3D __kvm_faultin_pfn(fault->slot, fault->gfn, foll, + &fault->map_writable, &fault->refcounted_page); + return RET_PF_CONTINUE; } =20 diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_interna= l.h index a5113347bb12..e1f8385105a5 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -238,6 +238,7 @@ struct kvm_page_fault { /* Outputs of kvm_mmu_faultin_pfn(). */ unsigned long mmu_seq; kvm_pfn_t pfn; + struct page *refcounted_page; bool map_writable; =20 /* --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CAE13183069 for ; Fri, 26 Jul 2024 23:54:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038053; cv=none; b=S5ZZ6Zo1czHJzTZGsCS9EhZFR0sZ3C074BX4f1CbwE7Rw4zXwjexKCToaIybaWcveMtPJdLWYNFhiapkz2NIKOhxzyb7f4ghrFbLvEvKqGJhvR69UEtsYbGZ64CIySidn84HxTdJbg7qMCLQwfqSW+UkEsyI1Smncrb8PlnZIpk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038053; c=relaxed/simple; bh=zcdjUASvjU1GC4/VXSe0LKQpv6RC30O1h12GjU1AZAU=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=ovl9gxbDzn8Q+DwaeX5dfulgjS2K+nrpWnllYWm48atp6I3XmzKeNkXCmLB8VCivoiUJR2KeY7knZTV0ocI/H2W/qkncjkt8tVHc4tNO64WBK20JNwtZ54fWF8h4v86lI3oEekVqTutWnbB+7lyZx90uJ0BD3Bo2XN5mB1ULW8w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=BbLEAvNd; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="BbLEAvNd" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-1fc52d3c76eso12299635ad.3 for ; Fri, 26 Jul 2024 16:54:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038051; x=1722642851; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; 
bh=RjeQeusXYf+NFoksbIMp6SJLG4xl7UVW4NR1eYCui2c=; b=BbLEAvNdaqogqLLo1NHabBSzth/9DkyEaKInFhQMAXfDWvmxh35EEFBkXNq5p9GvFS qcvDOcKpTetQfUeQzT8psdANz8oYmqYkFEoIPYtxkCXKsgbVtHaDJv+/+9eQQKnNdACb zF9n3gxLqZK0vsVohNLeaqWpY54u/h42ub/x5uHVgZl859tw3qTUnp4zK0+ZhLeTYzgs 7TFyvDAB6zHCOfqf+F/B4PtVyyHrX6Efk0E9c4edA1DHAIxIHe76QdUe60HrVNaz3TgX lpMKJa62iZekZYXLVtxWwPvNrnndnQlFWB/qOuU3ARzXLo9FyyMvE+oqzSPSD0B26FJ/ GQCw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038051; x=1722642851; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=RjeQeusXYf+NFoksbIMp6SJLG4xl7UVW4NR1eYCui2c=; b=CIUmcy9gTWPpwIw2iyhqT3vCS4JkMUK4vp9Ae5RPmYiMimQPxbevFae4rYxz0KKb8D ErWCobfA/cyQjvwWqEvbI0ZWqcLl9rNtq4Euq3A7jBumKy9AVa1O7lMO2NzBplvkeYdm a/uX6LyXmf7a+9VR5CE+HQO/foL36+/fDxh5w7Zxnt632oP0qZG6exCipsYdPWO7oGA8 h/nPZNDlIwcknF5sQ+J5mPT7VuGpwTz/SNb5/nKebni+X8Suf9xo/X1/te45VWZte4Xl ja6swIWZ6IbN7hOGLy462caYxnM+D0DWlvVkFau/QsJkUstnqDjaFA2+VIMHr0DSyiH+ twEg== X-Forwarded-Encrypted: i=1; AJvYcCWcFl4DMlK9cd/7XcslsbgRHCX73JOORj0MVggpMy2IPTDRYWBgkf7hxiYjdvp5u0mWho1ZrTEk/jcyraeSIBUdsDR/ZLG2r1N8xfNH X-Gm-Message-State: AOJu0Yx7qS4q72x+2zNO0epTOZGQ6lgw9lrq6A8yzMPDPm2f4L1jyA8e HKWITFRxoP6P2llfThAWe/afasOHonpvGpvRVBN4d978eIC6O/ZtPTu2hJseUEdtdM9x4HWnNU+ iBw== X-Google-Smtp-Source: AGHT+IH9Sleax7HjGR/xKBhc5+sBB7ouHNPkUkQToWM2on12s7r2/JIy8m8cUdV6C+LgY7aIUstVbYLK+Ts= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:903:244b:b0:1fb:82f5:6631 with SMTP id d9443c01a7336-1ff04898c7dmr609255ad.9.1722038050968; Fri, 26 Jul 2024 16:54:10 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:54 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-46-seanjc@google.com> Subject: [PATCH v12 45/84] KVM: guest_memfd: Provide "struct page" as output from kvm_gmem_get_pfn() From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Provide the "struct page" associated with a guest_memfd pfn as an output from __kvm_gmem_get_pfn() so that KVM guest page fault handlers can directly put the page instead of having to rely on kvm_pfn_to_refcounted_page(). 
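For illustration, a minimal caller-side sketch of the updated kvm_gmem_get_pfn() contract; the helper name and the trimmed error handling are assumptions, only the signature and the release call come from this patch:

	/*
	 * Hypothetical caller: fetch a pfn from guest_memfd, consume it, and
	 * drop the reference via the "struct page" the API now hands back,
	 * instead of round-tripping through pfn_to_page()/put_page().
	 */
	static int gmem_map_example(struct kvm *kvm, struct kvm_memory_slot *slot,
				    gfn_t gfn)
	{
		struct page *page;
		kvm_pfn_t pfn;
		int max_order, r;

		r = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, &page, &max_order);
		if (r)
			return r;

		/* ... map/consume the pfn here ... */

		kvm_release_page_clean(page);
		return 0;
	}
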
Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/svm/sev.c | 10 ++++++---- include/linux/kvm_host.h | 6 ++++-- virt/kvm/guest_memfd.c | 19 +++++++++++-------- 4 files changed, 22 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 53555ea5e5bb..146e57c9c86d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4353,7 +4353,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcp= u *vcpu, } =20 r =3D kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn, - &max_order); + &fault->refcounted_page, &max_order); if (r) { kvm_mmu_prepare_memory_fault_exit(vcpu, fault); return r; diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 62f63fd714df..5c125e4c1096 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -3847,6 +3847,7 @@ static int __sev_snp_update_protected_guest_state(str= uct kvm_vcpu *vcpu) if (VALID_PAGE(svm->sev_es.snp_vmsa_gpa)) { gfn_t gfn =3D gpa_to_gfn(svm->sev_es.snp_vmsa_gpa); struct kvm_memory_slot *slot; + struct page *page; kvm_pfn_t pfn; =20 slot =3D gfn_to_memslot(vcpu->kvm, gfn); @@ -3857,7 +3858,7 @@ static int __sev_snp_update_protected_guest_state(str= uct kvm_vcpu *vcpu) * The new VMSA will be private memory guest memory, so * retrieve the PFN from the gmem backend. */ - if (kvm_gmem_get_pfn(vcpu->kvm, slot, gfn, &pfn, NULL)) + if (kvm_gmem_get_pfn(vcpu->kvm, slot, gfn, &pfn, &page, NULL)) return -EINVAL; =20 /* @@ -3886,7 +3887,7 @@ static int __sev_snp_update_protected_guest_state(str= uct kvm_vcpu *vcpu) * changes then care should be taken to ensure * svm->sev_es.vmsa is pinned through some other means. */ - kvm_release_pfn_clean(pfn); + kvm_release_page_clean(page); } =20 /* @@ -4686,6 +4687,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_= t gpa, u64 error_code) struct kvm_memory_slot *slot; struct kvm *kvm =3D vcpu->kvm; int order, rmp_level, ret; + struct page *page; bool assigned; kvm_pfn_t pfn; gfn_t gfn; @@ -4712,7 +4714,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_= t gpa, u64 error_code) return; } =20 - ret =3D kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, &order); + ret =3D kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, &page, &order); if (ret) { pr_warn_ratelimited("SEV: Unexpected RMP fault, no backing page for priv= ate GPA 0x%llx\n", gpa); @@ -4770,7 +4772,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_= t gpa, u64 error_code) out: trace_kvm_rmp_fault(vcpu, gpa, pfn, error_code, rmp_level, ret); out_no_trace: - put_page(pfn_to_page(pfn)); + kvm_release_page_unused(page); } =20 static bool is_pfn_range_shared(kvm_pfn_t start, kvm_pfn_t end) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index e0548ae92659..9d2a97eb30e4 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2462,11 +2462,13 @@ static inline bool kvm_mem_is_private(struct kvm *k= vm, gfn_t gfn) =20 #ifdef CONFIG_KVM_PRIVATE_MEM int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, - gfn_t gfn, kvm_pfn_t *pfn, int *max_order); + gfn_t gfn, kvm_pfn_t *pfn, struct page **page, + int *max_order); #else static inline int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, - kvm_pfn_t *pfn, int *max_order) + kvm_pfn_t *pfn, struct page **page, + int *max_order) { KVM_BUG_ON(1, kvm); return -EIO; diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 1c509c351261..ad1f9e73cd13 100644 --- a/virt/kvm/guest_memfd.c +++ 
b/virt/kvm/guest_memfd.c @@ -542,12 +542,12 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot) } =20 static int __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *s= lot, - gfn_t gfn, kvm_pfn_t *pfn, int *max_order, bool prepare) + gfn_t gfn, kvm_pfn_t *pfn, struct page **page, + int *max_order, bool prepare) { pgoff_t index =3D gfn - slot->base_gfn + slot->gmem.pgoff; struct kvm_gmem *gmem =3D file->private_data; struct folio *folio; - struct page *page; int r; =20 if (file !=3D slot->gmem.file) { @@ -571,9 +571,9 @@ static int __kvm_gmem_get_pfn(struct file *file, struct= kvm_memory_slot *slot, return -EHWPOISON; } =20 - page =3D folio_file_page(folio, index); + *page =3D folio_file_page(folio, index); =20 - *pfn =3D page_to_pfn(page); + *pfn =3D page_to_pfn(*page); if (max_order) *max_order =3D 0; =20 @@ -585,7 +585,8 @@ static int __kvm_gmem_get_pfn(struct file *file, struct= kvm_memory_slot *slot, } =20 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, - gfn_t gfn, kvm_pfn_t *pfn, int *max_order) + gfn_t gfn, kvm_pfn_t *pfn, struct page **page, + int *max_order) { struct file *file =3D kvm_gmem_get_file(slot); int r; @@ -593,7 +594,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory= _slot *slot, if (!file) return -EFAULT; =20 - r =3D __kvm_gmem_get_pfn(file, slot, gfn, pfn, max_order, true); + r =3D __kvm_gmem_get_pfn(file, slot, gfn, pfn, page, max_order, true); fput(file); return r; } @@ -604,6 +605,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn= , void __user *src, long { struct file *file; struct kvm_memory_slot *slot; + struct page *page; void __user *p; =20 int ret =3D 0, max_order; @@ -633,7 +635,8 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn= , void __user *src, long break; } =20 - ret =3D __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &max_order, false); + ret =3D __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &page, + &max_order, false); if (ret) break; =20 @@ -644,7 +647,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn= , void __user *src, long p =3D src ? 
src + i * PAGE_SIZE : NULL; ret =3D post_populate(kvm, gfn, pfn, p, max_order, opaque); =20 - put_page(pfn_to_page(pfn)); + put_page(page); if (ret) break; } --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pl1-f202.google.com (mail-pl1-f202.google.com [209.85.214.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DA185183098 for ; Fri, 26 Jul 2024 23:54:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038055; cv=none; b=rmZI033w7WffM9uO/2xxjtKlgT32eyitQrEQHpHHe3NxXuyTq69AASGopqt/CL4lzHR4lsRNpHjhJRCjSe+csGe8tiIZAEdS28u2FDmCJH5lLBk4bSNi2tTI9HQRG5b6Z68ftXV6jndSozhUPhCSNsozqUwaiVdpB/n7iRwIg88= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038055; c=relaxed/simple; bh=PYmFTsup/AEmfy/4jq9PM2uJzzhDaOGf5jqGf3bldb4=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=POxjiU+akX7DHHHFZoiOt84OFR55ORrHHN47ICH8BAS5fCkFXZ4e2eafMzflKC3uwajW2Qv04lWV0T/w3nmyrsnrDqJqP0JFfpwr+9THwtLnlXG6Rfj7NboLIgQchiUV5d7KBj1bDyAJtJmxzVRhdtspqD+PZT+SBwU0xmAuhmA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=qG7lmeAW; arc=none smtp.client-ip=209.85.214.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="qG7lmeAW" Received: by mail-pl1-f202.google.com with SMTP id d9443c01a7336-1fc4fcaa2e8so10525225ad.1 for ; Fri, 26 Jul 2024 16:54:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038053; x=1722642853; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=IbuNIltU7tfSlkccHv78LPGh6NGiPV3XEW5ZkU/rXdY=; b=qG7lmeAWjCmRFPGjSdtFWnVF5e8lzWK6C1pA4O4yqJYKVqbeW/XxPr9BF1EsoWCNx3 nZXAhwOLGE14OsGoOoASVkjOct5HX7VAgUOvZYUZsfI+wiu/LTHsSMQ0QlOGUx0u+K4t 5d7wv9u+gzCxcGFynRlRTvqXTdzIv8Oi7Qa1sVFlYJvEdVkjltEmPNnSuaJ720AbYW4j AHfLc5kA1XsJ5+zLhnKBzhlGYUInpLqBjZwJQs3rllyeqf6gGjZXRf7DovuwQRxOHRMr BnElQcAGdyePpfgBajZF3tMjQIdnOgmIuCXMqbue6N6p2EKzKcRetpNAffxJDk2/fI2O m/Jw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038053; x=1722642853; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=IbuNIltU7tfSlkccHv78LPGh6NGiPV3XEW5ZkU/rXdY=; b=w2C0a3O4KRbCU4PxyvI6rP9a7hdNnJ2dTcagG9moVJjncR8uwdUm+EdPYR8pw9rqy1 omQv8PWR8hwuasT6LOhKlOfNQoc4WSPxUO6FhxA9khOL+fj4HcMAWM3PIxAXer2ySZOh E6kTIleu7Z8H8G6kpI5x95nZu1huHJ4FKHtocze9bcoDxN7FNrj3a/hnhegET/bnBXc4 m5Tl0nKDRg8sPIgJxtAwj78FeK1qCkvUa2GI95iAeLFSpTdfniCevPehtgj06kaHMD/b 6M5W8nAeK/HCAND4X+5CdwhawnPgkFqNezqixC9dIb8fzXmM+/ItcpEbcSfSPEbfnEvc P3bg== X-Forwarded-Encrypted: i=1; AJvYcCVglBZqXifDeiT92KRJNPWGnTR/f1h7CoK445bgeUGUkfB0cSP5lHeY0ba4BpCe1vwteJOtO3vNUQY+wPY1u4zgV9RpJwPK3dgRu9+Y 
X-Gm-Message-State: AOJu0Yyf3WwUdexn0IREB/8ZDwA5nnqFUTvLlHq8RlZHtIz3nsedsGYR PuVIYA6ZwL80E+l02ToyUXVWwRJEm/p8/uUQQsfH9qKp9w/FL8tGPEiWSifsRvNJ68atid8DYaO OAg== X-Google-Smtp-Source: AGHT+IF1WA3mVZXk9S93pEq2eXLardKUPi+yfLeicmT6epShCKMtCcCE54tvymeWkFnvdj0Qbejb/rqSUKM= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:903:41cc:b0:1fd:87a7:1445 with SMTP id d9443c01a7336-1ff0489344bmr935735ad.9.1722038053119; Fri, 26 Jul 2024 16:54:13 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:55 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-47-seanjc@google.com> Subject: [PATCH v12 46/84] KVM: x86/mmu: Put refcounted pages instead of blindly releasing pfns From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Now that all x86 page fault paths precisely track refcounted pages, use Use kvm_page_fault.refcounted_page to put references to struct page memory when finishing page faults. This is a baby step towards eliminating kvm_pfn_to_refcounted_page(). Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/mmu.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 146e57c9c86d..3cdb1bd80823 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4326,6 +4326,9 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu= *vcpu, lockdep_assert_once(lockdep_is_held(&vcpu->kvm->mmu_lock) || r =3D=3D RET_PF_RETRY); =20 + if (!fault->refcounted_page) + return; + /* * If the page that KVM got from the *primary MMU* is writable, and KVM * installed or reused a SPTE, mark the page/folio dirty. Note, this @@ -4337,9 +4340,9 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu= *vcpu, * folio dirty if KVM could locklessly make the SPTE writable. 
*/ if (!fault->map_writable || r =3D=3D RET_PF_RETRY) - kvm_release_pfn_clean(fault->pfn); + kvm_release_page_clean(fault->refcounted_page); else - kvm_release_pfn_dirty(fault->pfn); + kvm_release_page_dirty(fault->refcounted_page); } =20 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu, --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F165A1836E5 for ; Fri, 26 Jul 2024 23:54:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038057; cv=none; b=eZZVfEMQH2QJVcsaeTB18WFukIXnv5em8TtXoDUgNxX3/5Nv/RUDsaF46u4bctDNwBXeBgQQQ1hVx9swa7etKvhtR2wRu5vtMQkw3AlYcpqfrm2e8qP/okFZldGf7jw3m+u5lAftGBhRAGm/3ungVmdTcumDFQInfh09tEshTxs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038057; c=relaxed/simple; bh=irQbhT8vq6NMlHJubauhZeuUKNqR9u7Db5+BI1KIaQc=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=P/QRvHZ1Cwfp4NW9AFn7L/5Act0D8gqLWI4RMcyrrEgR9ifRe2/qtTyPsUrh/yXtiXOsYjvq6T6otQqyCX+IdvnFW+7V4cSkYhWPMdcCTCRCrF+0DlO4sKf1dzISzru17lJ4rUbYvNKJhR8zFjT73Se4lx4Wk/0PuTnOKE9Pzgg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=i2wuxh4O; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="i2wuxh4O" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-665a6dd38c8so7216647b3.1 for ; Fri, 26 Jul 2024 16:54:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038055; x=1722642855; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=gyzdwZRgQhOlsIfYTCemvzcmHdUAJy3wvQe/vDjl5Fg=; b=i2wuxh4O8bx9A2JsnlwAGN0/30VXy29eAXs6SU4rAdIjSv25BVJ7nBFsZ2JTmHUDkO Lu8lhdYqfUiLDwmeu5o+EONm6kH8hGlH12GxphVwVPyubgPD/37KO0jXDZhN5JR49ZFp FGlDoGO4NaWGZV7CRnSYbOdgWzIDikZMLxnxvVjB3nS9ZdqJ6fkSi+v/u9oh87t7MFE0 Eh9Nna3Gi6BuxJwKPApO1KoFPy5+8F6GZwYJMScnAUSiT45OemUr/LmlhwWYn+hJQHKQ dtELVrILPiyJPA5jGxNTAC0mFx+j08jFUkkDpfneszUsTTre8U3T8nrSXyVLZEmdAen+ 9ROg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038055; x=1722642855; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=gyzdwZRgQhOlsIfYTCemvzcmHdUAJy3wvQe/vDjl5Fg=; b=D24ojMkPuqYM68OM/HCRnaLUV2zGk0bk6/VfhxF/iuT9RbD+fJ/E+RDHEHKIdj4ixM dg6Ihs+z/lC9YDsytDWnWcjhxJ1ALdGM0g5Rnjv3NavCbBMi6Q+ES83JNv668qxCvB+C f4+s+qVfb9rYA4ZmmozCZ5sUESX5/YTpkLVq11+NI8DUMJ9tf0MbHuQ12LbOReLDTerN IEl+qhMn6PXtfi6H71lDWiEe2tHnX6XcG579zmPcxPZljDwtuhvnPvcA+AzLiWUzUS7f BL0nB8Mgqu/NHX/up0QIFWNHO/C0THRnhqnk4psV3mZIOVNyHhDuAEF+oY1l6fN4ooXi QDlA== 
X-Forwarded-Encrypted: i=1; AJvYcCUccwyL4fAXC16dJugZqtz5iVdmpVtcoLu/vIiZgqrkVDU6bXFFPLkQ7bUEGo/+1iGEhOM8M7YXG+nxqGQPF1RJgNRtGp8YMfLWfPZ2 X-Gm-Message-State: AOJu0YxWhumBkZcGJAlrAHANqdk6Oy8IoQKXfmRkG9VDzjX2MoiNivfF HAq28PixIVWraYMymVvhLxZmurPPLtoQWhoVIEwLVwSV0Y4XhvIg5AszZ8dW856TUlKiAnQVCtD Wgg== X-Google-Smtp-Source: AGHT+IFpeG9OHD4NFko0QV+s2bLKC/mOPhRZ2kdDMQUp9/Z6e7qKWnbHym6UcCLJKsXB8YuZDTrWfSN5tHI= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:690c:14:b0:66a:764f:e57f with SMTP id 00721157ae682-67a0abd50e9mr49107b3.7.1722038055008; Fri, 26 Jul 2024 16:54:15 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:56 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-48-seanjc@google.com> Subject: [PATCH v12 47/84] KVM: x86/mmu: Don't mark unused faultin pages as accessed From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" When finishing guest page faults, don't mark pages as accessed if KVM is resuming the guest _without_ installing a mapping, i.e. if the page isn't being used. While it's possible that marking the page accessed could avoid minor thrashing due to reclaiming a page that the guest is about to access, it's far more likely that the gfn=3D>pfn mapping was was invalidated, e.g. due a memslot change, or because the corresponding VMA is being modified. Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/mmu.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3cdb1bd80823..95beb50748fc 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4339,7 +4339,9 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu= *vcpu, * fault handler, and so KVM must (somewhat) speculatively mark the * folio dirty if KVM could locklessly make the SPTE writable. 
*/ - if (!fault->map_writable || r =3D=3D RET_PF_RETRY) + if (r =3D=3D RET_PF_RETRY) + kvm_release_page_unused(fault->refcounted_page); + else if (!fault->map_writable) kvm_release_page_clean(fault->refcounted_page); else kvm_release_page_dirty(fault->refcounted_page); --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AD3BF1836E9 for ; Fri, 26 Jul 2024 23:54:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038059; cv=none; b=lY9QOUQCCK0IItiVSh4HSGbm07urWvlbqasyRDxBF1VSs4Sc5706olcev11XgXeS8M5v5aH28XA+glHHkN4VT+PVDEl1/akZZJMxMavYrUpO/K7q5SSx9XQwN3cexhumo2OeC1y/zWxf3q6m/4y1g6I6z+vWTXYJXapJKnmCXPo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038059; c=relaxed/simple; bh=hkAHN3MSFE4PtSlKJPDsUtFG5ZVNRUuBMTGsadq3yME=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=HZuEBBvaHDWQLusMd1qaQ5EBXj5oSh930879uuWF6Uiuy97+ukjpo49Cz+p/CGo/MoAp1Dj7fWhoERd+bUZ6s1cUD19yoawWeU0YwD/v/94hWBLimbZpI0/7bMdREGnM+CXlcuXxrgjNhPfoTF01t6jUtO8TAJJgiOqMcOlLdCA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=LEeqY9bl; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="LEeqY9bl" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-1fd9a0efe4eso11319175ad.0 for ; Fri, 26 Jul 2024 16:54:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038057; x=1722642857; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=l9BW/dK5c/4ryaOpC0RxFiUBYVnD4MPO1UqGokFlyv8=; b=LEeqY9bltXOZWjLYoMQlFaufi9j1THXUXcixDH7CTGZDdWho8QRTPu8BDeSK93KfeM At5khpGNbLtc36mPK7S9WcFSO/Y0vDQuv8TM1Dlqrzh4txI0tNimo/GhV6DrS504/Yio oUuqKDtxGDdy6bpZMiwDq3gGcKMGS11jJBF2PWQ7oyAb+uEvXZ7a7fZLBpd9VYH8Nzh+ Lz8Pwtmwnh8d0zLKaTItr41SldeCJDQelAfHgPBCZwwf0QY3XG5DMZ2D33dI8tQViCK6 frW7Cs8DS57A0tq2rZwEF0bRCF34+eJAZA8ajX5dcSJlL40lDTfn07IEdMFMZ7wAl5ob c3fA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038057; x=1722642857; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=l9BW/dK5c/4ryaOpC0RxFiUBYVnD4MPO1UqGokFlyv8=; b=QjYGj+zcT8O4IRhgtHydQbiPpXMk4r8K5GTuQR2DmMNHSDcMnrRhSl+aJzAH0D91Va Iz8ndg35v7rIODfGOZliArGnwi2ffFNOtTIzpkks9zJt/NSVZhpV8qurNn4pzPEgXxqZ oawMMI1w3p3ejKgUGohKte7D061WKfseBFWIhH7keK5uYCvXwTks0m2c+MoRn9u0Dk69 O7YaeGBXQNG93K2I6EN6xR05R5Il6gvXEuxPrJ2gNzkjh0uv337M0tEbFBHuEf6NtZ2E iWhtcWUOdmeoz7TmTBWG0OtUbQAsTRAButEOFSjJYR/+w7y+UubfQ8AUqCal/Qgwc90f 4mVQ== X-Forwarded-Encrypted: i=1; 
AJvYcCV9Hj6A1p9vnCL7VAa5nzZjuyjg3TDiPsoFKZKhgXzBuF9Zad0+ItQsKb5kvpm38674197u1h9j5LXev2XZ4aD8ZJPPt5sAmJkScphl X-Gm-Message-State: AOJu0YzlsasCpA4jHgHiWSgYXTttiMcysNFZF9kxmQcEfCFtQCyVH+S9 76W/qhi6aHAiCYEaDdxjnDccm0CIjoKFtroIetPtu3RXVyRxUAFnzpDr8WIQuWFU5CPX/6sEZ2V deA== X-Google-Smtp-Source: AGHT+IH3COIaUp2rXSDrmiVRXNOzjzCRiof8RV2PU1lQ3pUgA2jXVKIIsmrd9aJoYDTjtMdA3IOsVNqBhcc= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:902:ea06:b0:1f9:cbe5:e422 with SMTP id d9443c01a7336-1ff0488dbf2mr744395ad.8.1722038056933; Fri, 26 Jul 2024 16:54:16 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:57 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-49-seanjc@google.com> Subject: [PATCH v12 48/84] KVM: Move x86's API to release a faultin page to common KVM From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Move KVM x86's helper that "finishes" the faultin process to common KVM so that the logic can be shared across all architectures. Note, not all architectures implement a fast page fault path, but the gist of the comment applies to all architectures. Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/mmu/mmu.c | 24 ++---------------------- include/linux/kvm_host.h | 26 ++++++++++++++++++++++++++ 2 files changed, 28 insertions(+), 22 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 95beb50748fc..2a0cfa225c8d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4323,28 +4323,8 @@ static u8 kvm_max_private_mapping_level(struct kvm *= kvm, kvm_pfn_t pfn, static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, int r) { - lockdep_assert_once(lockdep_is_held(&vcpu->kvm->mmu_lock) || - r =3D=3D RET_PF_RETRY); - - if (!fault->refcounted_page) - return; - - /* - * If the page that KVM got from the *primary MMU* is writable, and KVM - * installed or reused a SPTE, mark the page/folio dirty. Note, this - * may mark a folio dirty even if KVM created a read-only SPTE, e.g. if - * the GFN is write-protected. Folios can't be safely marked dirty - * outside of mmu_lock as doing so could race with writeback on the - * folio. As a result, KVM can't mark folios dirty in the fast page - * fault handler, and so KVM must (somewhat) speculatively mark the - * folio dirty if KVM could locklessly make the SPTE writable. 
- */ - if (r =3D=3D RET_PF_RETRY) - kvm_release_page_unused(fault->refcounted_page); - else if (!fault->map_writable) - kvm_release_page_clean(fault->refcounted_page); - else - kvm_release_page_dirty(fault->refcounted_page); + kvm_release_faultin_page(vcpu->kvm, fault->refcounted_page, + r =3D=3D RET_PF_RETRY, fault->map_writable); } =20 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu, diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 9d2a97eb30e4..91341cdc6562 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1216,6 +1216,32 @@ static inline void kvm_release_page_unused(struct pa= ge *page) void kvm_release_page_clean(struct page *page); void kvm_release_page_dirty(struct page *page); =20 +static inline void kvm_release_faultin_page(struct kvm *kvm, struct page *= page, + bool unused, bool dirty) +{ + lockdep_assert_once(lockdep_is_held(&kvm->mmu_lock) || unused); + + if (!page) + return; + + /* + * If the page that KVM got from the *primary MMU* is writable, and KVM + * installed or reused a SPTE, mark the page/folio dirty. Note, this + * may mark a folio dirty even if KVM created a read-only SPTE, e.g. if + * the GFN is write-protected. Folios can't be safely marked dirty + * outside of mmu_lock as doing so could race with writeback on the + * folio. As a result, KVM can't mark folios dirty in the fast page + * fault handler, and so KVM must (somewhat) speculatively mark the + * folio dirty if KVM could locklessly make the SPTE writable. + */ + if (unused) + kvm_release_page_unused(page); + else if (dirty) + kvm_release_page_dirty(page); + else + kvm_release_page_clean(page); +} + kvm_pfn_t kvm_lookup_pfn(struct kvm *kvm, gfn_t gfn); kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn, unsigned int foll, bool *writable, --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 51E011836E4 for ; Fri, 26 Jul 2024 23:54:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038062; cv=none; b=s6t3y6MyWeReaixpW6+H1pKz+4z/5vBjrZ8ruH9+Dv/VXBGIW2jbUUx+TKxRNEoXnjcZJg6PK9lA1UTwJowIaLNljY5R3C/z0bOZN3SCn2OpYVFYHjPs/BAWCLfi+n+TV1VILnyVSvecayZarXXDwv3XdL7ha0OiU35CDFFeOm4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038062; c=relaxed/simple; bh=bxK6qVpynswZ8R8KEFFqM/NCnF+uG5Y2NSLbD2Gkxm8=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=tCxhgQeaM70P0qAmXMvWZQflkNPugXhWYNaM0hwxbnpddS+aTJy4Mmv4Qrs0TwPavZippOHF7DqKZtu/1mRMD39RKiZKEDIYqCoqsqlVJ9QyEYnfU/rmtOTVDdEfqny/Fg+RDYckX7iGQLSWVolqcnTW8PAmVR9yTvnmijsvtLs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=r8Kfhn5Q; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit 
key) header.d=google.com header.i=@google.com header.b="r8Kfhn5Q" Received: by mail-pj1-f73.google.com with SMTP id 98e67ed59e1d1-2cb6c5f9810so1641843a91.2 for ; Fri, 26 Jul 2024 16:54:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038059; x=1722642859; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=TqUPaGEc7V7k122pX7jZuWDsKBXmCX3Y3KCoLsEzljY=; b=r8Kfhn5QIfw8bXRjb7GNZxVaDZZrB8VolGE04H9QaxdvwCj1ESAXkRroNpYJqR0pAp Q5iF/xFl3LdO7+u80Jg66zQqFZ43k6rBk/JhEW7FatxTVrGEoAi26/tTUzRMZbFEeung zMrUTPR9byVQRSjML4gH0TLU/dIw0APzpg68iWTZmx7bNuDPzDC9Qoi78KKVuVGF0Dfy MGgJugFMG/aln2uBkR0kGeoDC0Mn5TSx8v9g7W9wEAKg0H9F16IzFwB/bv0DZiQ+TXqy AJaPY4NiQgn4ht1taPocxt6RVaITjzL4O/OcZC6Wi2OG5LkIF6keYvh7Al+4kW/9QjT2 v8Bw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038059; x=1722642859; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=TqUPaGEc7V7k122pX7jZuWDsKBXmCX3Y3KCoLsEzljY=; b=C3d8c4XQPDw+7ouVdagbK1+vNlqDW3lGE8yodMfUy/OyuLg9pl9N9lE2DU0G5AW9Bz wE+P5KbdfvmRbEqBdYyaWE3wC73ccO4uwJ75n8GuU+R+LPzUffmDuidIzKBsiLuA/dMd qKSZF48qCvMPwkbBHG9psX3hMB/EJs1pvJPgE3qQ/HwM/U1w7G2lRR+2GeUw6M85Xixu Fm3CzIASAcE+qOgsGd2isdWqkfIxkRYZkMnp7VZ72YfraYp0hpf6y4+OLyC8YImpGOaw H4mQTLFEscGoUGWeqnsCW0uGIiDesdrfEHgHjV7Nbyjaj+TZh3O/yDTFp1kqEn0FmNpy lhsA== X-Forwarded-Encrypted: i=1; AJvYcCU2EtWrRCYK3MvP2bTUBeewYFywZ5R2+5Jn9qTNgVWdI43Iv6Fx1qWL/f11zYgf+UHutnZZYTjkrrxXt/oo7XkXpZF10R64MBnMjnHR X-Gm-Message-State: AOJu0Yw1vVefKft41qPbi9JpiXZjvskorMV1a2msVwLxx7eTcZU+A6kh TtVo4U5B/FzktE28bNkyxss5jHVAwpXg1LOhbEtZNwqsqhAAYT8+/FcNlvp0XGQ5exJzuB3p0fG YtQ== X-Google-Smtp-Source: AGHT+IH7R2Sq6ws3raxo/M+HjUZh8HrSGcUFKZNBVKBew2cgXvpkeEwYajfUFO1jgdCm33mambqvR7vew2A= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:90b:19c4:b0:2c8:8288:1f3c with SMTP id 98e67ed59e1d1-2cf7e08defcmr21966a91.1.1722038058548; Fri, 26 Jul 2024 16:54:18 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:58 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-50-seanjc@google.com> Subject: [PATCH v12 49/84] KVM: VMX: Hold mmu_lock until page is released when updating APIC access page From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Hold mmu_lock across kvm_release_pfn_clean() when refreshing the APIC access page address to ensure that KVM doesn't mark a page/folio as accessed after it has been unmapped. 
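In sketch form, the ordering this moves to (pfn and mmu_seq are assumed to have been set up earlier in vmx_set_apic_access_page_addr(), as in the diff below):

	read_lock(&vcpu->kvm->mmu_lock);
	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn))
		kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
	else
		vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn));

	/* Drop the reference while still under mmu_lock. */
	kvm_release_pfn_clean(pfn);
	read_unlock(&vcpu->kvm->mmu_lock);
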
Practically speaking marking a folio accesses is benign in this scenario, as KVM does hold a reference (it's really just marking folios dirty that is problematic), but there's no reason not to be paranoid (moving the APIC access page isn't a hot path), and no reason to be different from other mmu_notifier-protected flows in KVM. Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/vmx/vmx.c | 21 +++++++++------------ 1 file changed, 9 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index f18c2d8c7476..30032585f7dc 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6828,25 +6828,22 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu = *vcpu) return; =20 read_lock(&vcpu->kvm->mmu_lock); - if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) { + if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu); - read_unlock(&vcpu->kvm->mmu_lock); - goto out; - } + else + vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn)); =20 - vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn)); - read_unlock(&vcpu->kvm->mmu_lock); - - /* - * No need for a manual TLB flush at this point, KVM has already done a - * flush if there were SPTEs pointing at the previous page. - */ -out: /* * Do not pin apic access page in memory, the MMU notifier * will call us again if it is migrated or swapped out. */ kvm_release_pfn_clean(pfn); + + /* + * No need for a manual TLB flush at this point, KVM has already done a + * flush if there were SPTEs pointing at the previous page. + */ + read_unlock(&vcpu->kvm->mmu_lock); } =20 void vmx_hwapic_isr_update(int max_isr) --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 686D11849D9 for ; Fri, 26 Jul 2024 23:54:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038062; cv=none; b=QrMpfm38HRCbfmfk15r69ooWElKVVA0AIhwp1uhMlZysRa8SVdzc8jYj7JEcEj9ElH3lFpOrU/OxNB+Rp0lR0Hfmx/Wbz1ZCFCPFi2tx8toEcILy7ICkRPNqbwdCMD//eoyluyyeSNF+USK72SM8gKtQjrZUWG8EIqbCTxDH5PQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038062; c=relaxed/simple; bh=+A4HyT5mPj7Uehr+zQRb98wKpKpbg1KoeNdaEKS53g8=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=kjJiDTfYH1wjcOU4bQI2OAYmtThFfi9N4fok41lUKUcf+4UK9rPnFt+Nbhu2rEzIqCX4b/mvcBlnP9RdjA35xm1fa3Ze45jOsRMioF0yN4Y2yIfiStLE+pDQFXlhj5Cj01yeE8FOglRk2N4otUjjJXqkczrK/1OtFy1QmuKW9tk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=PvydeBsx; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="PvydeBsx" Received: by mail-pf1-f202.google.com with SMTP id d2e1a72fcca58-70d19a4137dso1423327b3a.1 
for ; Fri, 26 Jul 2024 16:54:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038061; x=1722642861; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=495sZLd3sCBTMIrtoKFKEohUXAwQ30LA4hbLQQBllaM=; b=PvydeBsx6Y6v7gtKxOzkkq3i7AtwdhgD2KevkRmejEmYtWZ6W6GtfA+IQH1JGO8hnn 4xlCCXpA3DDtzMyCdCZZjhG+IjXocFl8tFZCKyscxVC0FBOeoGL9jEUpOEcvxQtz5iaZ zK2C4QsXiVsuEdB83RNQb9BI7jgEZXGzwLOYw/Yp65bt7H3imYToCkoMV0uZXm1OeonR HbdHlQN6rkXyRHXacnzzR6EulxrKgfPiyVE1VyHzxpNFXTVEA428LfUtgJtcdM9GHNxd zXDZi+1fsvIOb1MXbk2dQ2CW33Mj58NH03uiinTuoF5T0W3PvwUtejJRoPx2eaw4r7w+ C6iA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038061; x=1722642861; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=495sZLd3sCBTMIrtoKFKEohUXAwQ30LA4hbLQQBllaM=; b=G/JHU1TJJuMtuwiGmxabuXrxa5fBWoPqgHyVYFlzVblwgX5kjln2df3IUPUN72xAYq F9zjBhakf0qMuio7mkwUm7WpefAP/K1/zydXXsTgqqF4LgtL+7CxnrTPbh4deygNVy9w VJvBbr9310WUbNFtTtOdiagZUNOTYMwM3H2XncGsPlejtgI3mSQr+JU+6JfChIgZR+ZW 0+qmTMl88TNA/s3TKiV7b8chA+lsBCS8uB8CexCsKRtk62B/Ez+xdbhT5mdvxZmJt9vL xYCMMBZPBR4H8bmJ1o/Zio6Xc4AdBR5PUkF9bjNdJlvJ21kbBwZyB95/EU4IfKMjfyAu E7Qw== X-Forwarded-Encrypted: i=1; AJvYcCU4W/XPY08EspsdLe9HpnJxuKTPeG0MhDyADrshtWDv4YU0DmuH1HCQeVUHytZPWYtqzW9yCsfq1PmYC8J0tazPYTlDdwoHyf/Ynq2M X-Gm-Message-State: AOJu0YxdtdFJHGVkjSBY0egcI6QGNNWoKrEfyxCFFqm6lCMwAI5mAjAG kBuQZN3fB+OEEBR3h27ga/v/0c6I109MW+MmpaSphekoEAGAQ1bMXwtscPkKgUN94HY6qItwZ1t Qzg== X-Google-Smtp-Source: AGHT+IEX18wdLlkp+2aMf2anoWcNTpOtSr1c0ECGrHfjhmBCNZQh1f0vS16HR3FEGEctU7qtPLZbGkdGSSc= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6a00:6f1c:b0:70d:138a:bee8 with SMTP id d2e1a72fcca58-70ece533146mr8925b3a.0.1722038060383; Fri, 26 Jul 2024 16:54:20 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:51:59 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-51-seanjc@google.com> Subject: [PATCH v12 50/84] KVM: VMX: Use __kvm_faultin_page() to get APIC access page/pfn From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use __kvm_faultin_page() get the APIC access page so that KVM can precisely release the refcounted page, i.e. to remove yet another user of kvm_pfn_to_refcounted_page(). While the path isn't handling a guest page fault, the semantics are effectively the same; KVM just happens to be mapping the pfn into a VMCS field instead of a secondary MMU. 
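Condensed, the call pattern this converts to looks roughly like the following (mirroring the vmx.c diff below; "ign" is just a dummy for the map-writable out-param, which isn't interesting for the APIC access page):

	struct page *refcounted_page;
	bool ign;

	pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, &ign, &refcounted_page);
	if (is_error_noslot_pfn(pfn))
		return;

	/* ... write the pfn into the VMCS under mmu_lock ... */

	if (!WARN_ON_ONCE(!refcounted_page))
		kvm_release_page_clean(refcounted_page);
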
Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/x86/kvm/vmx/vmx.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 30032585f7dc..b109bd282a52 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6786,8 +6786,10 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *= vcpu) struct kvm *kvm =3D vcpu->kvm; struct kvm_memslots *slots =3D kvm_memslots(kvm); struct kvm_memory_slot *slot; + struct page *refcounted_page; unsigned long mmu_seq; kvm_pfn_t pfn; + bool ign; =20 /* Defer reload until vmcs01 is the current VMCS. */ if (is_guest_mode(vcpu)) { @@ -6823,7 +6825,7 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *v= cpu) * controls the APIC-access page memslot, and only deletes the memslot * if APICv is permanently inhibited, i.e. the memslot won't reappear. */ - pfn =3D gfn_to_pfn_memslot(slot, gfn); + pfn =3D __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, &ign, &refcounted_page); if (is_error_noslot_pfn(pfn)) return; =20 @@ -6834,10 +6836,13 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu = *vcpu) vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn)); =20 /* - * Do not pin apic access page in memory, the MMU notifier - * will call us again if it is migrated or swapped out. + * Do not pin the APIC access page in memory so that it can be freely + * migrated, the MMU notifier will call us again if it is migrated or + * swapped out. KVM backs the memslot with anonymous memory, the pfn + * should always point at a refcounted page (if the pfn is valid). */ - kvm_release_pfn_clean(pfn); + if (!WARN_ON_ONCE(!refcounted_page)) + kvm_release_page_clean(refcounted_page); =20 /* * No need for a manual TLB flush at this point, KVM has already done a --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 823E31850A3 for ; Fri, 26 Jul 2024 23:54:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038065; cv=none; b=Uuu3DMNBFFqTNcfZSoNUW53kstq9Yg1HVhilh5dhhN2i66REhydhOAM9Vu9PZJVCQc0hYi9NBcoZZYjYQx5ne2xvwiSG8SN937XAVpyQsrO3Xqg3xoXqqYJANWbMyNwU7oKC90M/6QFYLBEv9feU5domK71jrmI+6IjcScht2yI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038065; c=relaxed/simple; bh=VbnqqafRlYMVLvo8/KpvTNYaOUVhm0RY4zV5HGgCgQE=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=KUzasNOEvDs00K7CopbuPlJaqZGIKNDHHKzv1zJPp3JB3q8UA5muRZ9G4p9e5IENpMqhGFl8PRoUBlzGx6AAJ/tj9DmbR6I4ZWWvBcew2pWnb/MWYfjovrnRUq/GHbeMMfo2uF7jWFma7HCgYw1q9+RE5Gj/9IGcpR1bjDRayxo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=F69ABjzb; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com 
header.i=@google.com header.b="F69ABjzb" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-66b8faa2a4aso7071877b3.0 for ; Fri, 26 Jul 2024 16:54:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038062; x=1722642862; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=cHTm12hU98ihrn/4YhoQtTNxFr6qD1GrOX0sqy5dmrg=; b=F69ABjzbLXFZ/ejnJdvIloSN3rsH/NEOPtgkPHhfyYn0YVTiqn7c1Yg8yhxjN3i/cx XNswgWuSZlajr6wW5FkEOEJHFQFRTcD0MAqFFartwEF9/JV5fPxXBAj9mCf26Ur7nqxc UtMHVYIHS+xELcBIJDCET7wtcppyGhkZWSXOMEeCzbR/kzY/PB+jZBM47cZDgaYcYVu5 G+GR5+xpvGBHtV6/GsY+tRtCFr538d3QH0xaou6GxzvB/9CZE9RP0tczxRyQotEW8KDH 39rHL5nO+qgYyQ4kywIMuRE0gXDQC+5K9FSQRtdzczWzF6PElTYhwdT8f5iMrZB9+CCD jHqw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038062; x=1722642862; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=cHTm12hU98ihrn/4YhoQtTNxFr6qD1GrOX0sqy5dmrg=; b=LdFzwA2uVlM8zwycd440UEtpqY8uNIHWFKrCAyAtw4MeqrAI70KvxFeROj1A8PVHL/ 1dVd+qxe8FZDlco1qM20t9Xdz+/x3AAIImuxG03HZqpmxvLGHBwAoViOstd75ksibx0A 46QoLy7Myyg8qVeXegL3TQT/Hr4GzCBMyDePHjcnjjR8bJhCB482gT9FzU5NnMZ5hQa3 VL4uRgyj8Hgx24db621np8V8aFD39ufpG8ekVADgDiv9gyeDDW9PXDhuXeoTA120aBxz IFU7a+4iSjcEKaA3PmpXmKbYNsdJCy+QdOukY0smoGbvjtS7TisLMaWR81ZiKsZdskvA RFGQ== X-Forwarded-Encrypted: i=1; AJvYcCWW9G9Dti6jUAcnDHyho58DMzeXodz6TMeDaTAje5vsectGbUuY/aFMNh/GXf1+IjCYEADyRT49ud1PF3sqi9FrbDBrQpPzE99hIE0J X-Gm-Message-State: AOJu0YxdRhBrch9v5m18hjBfNBMmzwwuKQIy04uWwnqVeoc9IWQjxLNb aCx1F5XbC0nPvnccu2kiAhA1yBFB6mGFeiCr7zbXnjbczIhdmQ0ppvshrMb5PqQUhdrdQQDl2/l L7g== X-Google-Smtp-Source: AGHT+IE591qeJqHACb7CkRANjpP2e5907DfMapi1wU/6IAMP+NJSB5aPgL5hiS9fj7k9JDiJNu7gQfM60rI= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6902:1005:b0:e0b:f93:fe8c with SMTP id 3f1490d57ef6-e0b5427fa67mr79886276.0.1722038062601; Fri, 26 Jul 2024 16:54:22 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:52:00 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-52-seanjc@google.com> Subject: [PATCH v12 51/84] KVM: PPC: e500: Mark "struct page" dirty in kvmppc_e500_shadow_map() From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Mark the underlying page as dirty in kvmppc_e500_ref_setup()'s sole caller, kvmppc_e500_shadow_map(), which will allow converting e500 to __kvm_faultin_pfn() + kvm_release_faultin_page() without having to do a weird dance between 
ref_setup() and shadow_map(). Opportunistically drop the redundant kvm_set_pfn_accessed(), as shadow_map() puts the page via kvm_release_pfn_clean(). Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/powerpc/kvm/e500_mmu_host.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_h= ost.c index c664fdec75b1..5c2adfd19e12 100644 --- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -242,7 +242,7 @@ static inline int tlbe_is_writable(struct kvm_book3e_20= 6_tlb_entry *tlbe) return tlbe->mas7_3 & (MAS3_SW|MAS3_UW); } =20 -static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref, +static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref, struct kvm_book3e_206_tlb_entry *gtlbe, kvm_pfn_t pfn, unsigned int wimg) { @@ -252,11 +252,7 @@ static inline void kvmppc_e500_ref_setup(struct tlbe_r= ef *ref, /* Use guest supplied MAS2_G and MAS2_E */ ref->flags |=3D (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg; =20 - /* Mark the page accessed */ - kvm_set_pfn_accessed(pfn); - - if (tlbe_is_writable(gtlbe)) - kvm_set_pfn_dirty(pfn); + return tlbe_is_writable(gtlbe); } =20 static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref) @@ -337,6 +333,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_= vcpu_e500 *vcpu_e500, unsigned int wimg =3D 0; pgd_t *pgdir; unsigned long flags; + bool writable =3D false; =20 /* used to check for invalidations in progress */ mmu_seq =3D kvm->mmu_invalidate_seq; @@ -490,7 +487,9 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_= vcpu_e500 *vcpu_e500, goto out; } } - kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); + writable =3D kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); + if (writable) + kvm_set_pfn_dirty(pfn); =20 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ref, gvaddr, stlbe); --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 43B6B185614 for ; Fri, 26 Jul 2024 23:54:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038066; cv=none; b=ll2ZHnFIS8ODfb5NYGVabLHcnWWxQE2VxSEp1/eFsiJaIqfcClRn8VgPPuVo1MGaJ4uKCx2KLkWF5GOHiHTbR5/qg+jG8IsDOyKuOt1YSQyOyhmRjOVAcb1ET4j9RMgUljFaYbWg4MNGnwy3t7FZVMWLDsaO5bOt7pH4jcU2A8k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038066; c=relaxed/simple; bh=bY53gDaonSlIhoyFNY0EzYkmvr6M3Y58/Tqc7IOcPjk=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=aBsCO/nAZclyo4ZFEKWXxB+cY1f5+UF5jfSnvmU80G5Sh5XX59TusVBRMaf9T3WdSzGOkS38PRdeMwrTcA7FnWDnmwCRZYW4TBP1okHPQGD1D0edg0uaOyztPLm4+jzxsoMX142jbcJbPbB3HcxwbxZVJXh+KHNPDFpcwVa1fJk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=ugzgw1RF; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass 
smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="ugzgw1RF" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-1fc54c57a92so10629115ad.3 for ; Fri, 26 Jul 2024 16:54:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038065; x=1722642865; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=WWjK/C8AJGiWvc7o48+tH+q+w9m6pJN61mgGRKL0h6o=; b=ugzgw1RF782T/j6dLllGrcx+8tYLnUM0lW2mw7yYYEGaplx6WxvFixXYIN+oexYOlg geRXk3tx4xYG3XoeSrFj27OJ7d5JxSM9Y+DMTGgXmj/nsAHqoe1/f9ujRff59eqLJVED /AF6WJRQgIq0Hjn9lMr34ANPMZwUAP23fB52788Fb3UtCHoWw/7cYZXellKcR8Yx+Trj R+JCn/zeEs7YljzI8jGxs72IEg+EEFoeIEzcLHMfUNlEUeo4Q//67RXnDdv1TGS3aRZt dz0z6jmMZzIvt/okahrqWsc0wAWncyhVVC38ephyElSxfs22AMHNlmp3Wvcin10/gfa5 jFPQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038065; x=1722642865; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=WWjK/C8AJGiWvc7o48+tH+q+w9m6pJN61mgGRKL0h6o=; b=H3OPzJq0d9ilEnu3IwvB0QwKiV/+AkDwTjz4GKdS3ZPn4TwYKIurFg5FVsYJ97DC3e Tc7iYdkAXJzuNDDc2NF15bmsqyv24UHym/xZQ/YUQtZIMofPm2x7yonCPfV328qyF00N gtmQafFOwmAaC8/LzV159IebAU9E7qz+8zj47RH/5gVdD+sL52WMIiDWT/TVTnj7EVSo 61gNJ9aLtS5cTtzcOMofEd9uh/4vP9/iBI4UHJn+GxvonmRMOJAvwO8iTkQbXNxdGVoz sTQZaeVVvo+RHMSIec7AEFk+yT9oj0HdsqsWVRjHYwp2iIqbdNArUsEyi8VAorG0UVFM 2yKA== X-Forwarded-Encrypted: i=1; AJvYcCW+vOj0lPm6j3Wmt7mVbj+VMLWJzUtvtw8/1pV3AiwXH4pYtfPNYxaMHE9oFhBI0RGNCdqqgT2x+y7zzkmBnoHAzzdRc0l2fbaCUB4l X-Gm-Message-State: AOJu0YxIG/VFUVk1BJHy5QFRPOocGer0yrUBTaEXenFzxDbAJWI+7tJM mF3xiAsjAn1yoMIn23XEeuA5scYF6fu0iGfykLmXicslrHa87xy6SytsHkOQFM11jYIT7Jb8RPr OVQ== X-Google-Smtp-Source: AGHT+IHjueGi4XHTR80VLKYNSmezTNqicqTjy1hjWkoeqfVxP7xL/6dlCTgQugjOLvFflviFEVZzvXY7ZzM= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:903:41c6:b0:1fb:80c5:ce5d with SMTP id d9443c01a7336-1ff047b2dd6mr400915ad.4.1722038064523; Fri, 26 Jul 2024 16:54:24 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:52:01 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-53-seanjc@google.com> Subject: [PATCH v12 52/84] KVM: PPC: e500: Mark "struct page" pfn accessed before dropping mmu_lock From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Mark pages accessed before dropping mmu_lock when faulting in guest memory so that 
shadow_map() can convert to kvm_release_faultin_page() without tripping its lockdep assertion on mmu_lock being held. Marking pages accessed outside of mmu_lock is ok (not great, but safe), but marking pages _dirty_ outside of mmu_lock can make filesystems unhappy. Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/powerpc/kvm/e500_mmu_host.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_h= ost.c index 5c2adfd19e12..334dd96f8081 100644 --- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -498,11 +498,9 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc= _vcpu_e500 *vcpu_e500, kvmppc_mmu_flush_icache(pfn); =20 out: - spin_unlock(&kvm->mmu_lock); - /* Drop refcount on page, so that mmu notifiers can clear it */ kvm_release_pfn_clean(pfn); - + spin_unlock(&kvm->mmu_lock); return ret; } =20 --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 892ED185E63 for ; Fri, 26 Jul 2024 23:54:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038069; cv=none; b=QDbiqtNCMYPVRsXyA/JBeeiyN5WNuERgNVyn6y/h2uOxAdWI8kp0gB3VL0SvIuDqjHVyEiiMC4L4fdnZXkM4FUDm/JUFl5z8NoSi49ptJpv9dMdkNlea9Jk/eOxYcgQij6C4bOKkhweqScJVDe0YSC8GQIJlxgpm4QPVGiSVnls= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038069; c=relaxed/simple; bh=5OcQ4yMNFCHP+jiDtiAE4Jxg5Z1H9MhX9ShMIrwVrSo=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=h4QDkXV1D67ZnY/mm/c8WOYrK2BnfXL1RQPcWQVSRdYz3c5oBaIJcHnS0CBKQRsBTX1m62/UBFSaQZ+Cx4kordJn0fvAAg/pr8NmXYcBjWjPZf3jx3LJNTls0G3TOY7ocHopV5tEYOYT/jEu1mMpQcmJtON0jByd6N8ihkexFyk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=22bCu+sQ; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="22bCu+sQ" Received: by mail-pf1-f202.google.com with SMTP id d2e1a72fcca58-70e93462241so1461933b3a.3 for ; Fri, 26 Jul 2024 16:54:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038067; x=1722642867; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=zSr/S7cQW+jg6+5Yur5d69GSxQ0D+3E/XB18qDPP3eA=; b=22bCu+sQaujahuWtEOl2dfPGGGuZEjgPzT+8vB0ggXHrmvuN/nhHQ/dI4lgdXFHsCH cVxqOlUPUAr/qrZcxvyCxEMkO5u9VdoFNzh5tuDoq2r5GaNxpAfDu2LMgsn5vN9CkbRw G1WkAE+9k7bfUaSMBgOCrG5DjzESxFAWfAOvs4hLBjrh0pWpXIor6J/lhRPfufvpM1it icGbTgJuhtQms1hCD6YeB0GUBBP70Z5GEcWUHqnWIS4jHURwilDSbsIPDxCJD+9l+Zv1 H+PBTOVJ8Qxy4AxrWyA+kSARtgB20evKMrpr90SttZf6wfp6JdwaebYVKATevtgaH9JM C2YQ== X-Google-DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038067; x=1722642867; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=zSr/S7cQW+jg6+5Yur5d69GSxQ0D+3E/XB18qDPP3eA=; b=UPKamhjF/por2PYDSo2D61ZBT1mKksE35mlqufdex7rFmvxCiSnpkoEVCSU6U3WTTI FR7S7/xHAGToEEzysytiAbvVZ3kkf3+yLqUntDvsc2jUmD33NOLWOlardvJNgKpt2Hhz RyWCyEGdq/Sh7pCI/QyWxmweUgmVjrXnqXSy+APL3sTUEnP9lhJhg3mTHfJUn1pNuHne HLVeelA3kd8dxgX1Cs4F2mbHX3K6ZFTv22RO7nPKMDVjM34XXrLkAMlSCtX4IuLOJpRK jIA9kLjJ1EHi0QA8UpXzcu5L+4GmsnpS8/Tds4sUrwghieoOjT8VuWpcErYMQUG2jYFK ub9g== X-Forwarded-Encrypted: i=1; AJvYcCWFQJPRaPjptYdB+6Q46xenMbXWMuDUI/6GT9/6a/Biy+X7+Ozdzm4NNMZXL+L5pm5DFo/Nhm1HRe85LH62gcohMXKmYQcSVTE0NO34 X-Gm-Message-State: AOJu0YxaKc/VekD7w82WFX+m8etkijRfE+91HX2Vxeeq+06/2yrY7up3 3PUETfynPShTM3hs5XUr1P+QZOuwIGyppwZvmqLnRyJWcJaeKA/c+YR/epM+KiRDCuE3mGA8twt XvA== X-Google-Smtp-Source: AGHT+IEvvOLEi17q5BnKGFU80vyNCkxL+K4ME2VW/zA0UgmW3Jn60esm+Gg/1wQhRB2++SmF9ZjJ4yJ/TXU= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6a00:238d:b0:70e:98e2:fdae with SMTP id d2e1a72fcca58-70ecee6db73mr19809b3a.0.1722038066705; Fri, 26 Jul 2024 16:54:26 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:52:02 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-54-seanjc@google.com> Subject: [PATCH v12 53/84] KVM: PPC: e500: Use __kvm_faultin_pfn() to handle page faults From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Convert PPC e500 to use __kvm_faultin_pfn()+kvm_release_faultin_page(), and continue the inexorable march towards the demise of kvm_pfn_to_refcounted_page(). 
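For reference, the fault-in/release pattern kvmppc_e500_shadow_map() is converted to looks roughly like the sketch below (pieced together from the hunks that follow, with the shadow-TLB setup elided; not the verbatim function):

	struct page *page = NULL;
	bool writable = false;
	kvm_pfn_t pfn;
	int ret = 0;

	/* Fault in the pfn; a reference to the backing "struct page", if any,
	 * is returned via @page. */
	pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	spin_lock(&kvm->mmu_lock);

	/* ... set up the shadow TLB entry, computing 'writable' and 'ret' ... */

	/* Mark the page accessed/dirty as appropriate and drop KVM's reference,
	 * all while mmu_lock is still held. */
	kvm_release_faultin_page(kvm, page, !!ret, writable);
	spin_unlock(&kvm->mmu_lock);
	return ret;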
Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/powerpc/kvm/e500_mmu_host.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_h= ost.c index 334dd96f8081..e5a145b578a4 100644 --- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -322,6 +322,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_= vcpu_e500 *vcpu_e500, { struct kvm_memory_slot *slot; unsigned long pfn =3D 0; /* silence GCC warning */ + struct page *page =3D NULL; unsigned long hva; int pfnmap =3D 0; int tsize =3D BOOK3E_PAGESZ_4K; @@ -443,7 +444,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_= vcpu_e500 *vcpu_e500, =20 if (likely(!pfnmap)) { tsize_pages =3D 1UL << (tsize + 10 - PAGE_SHIFT); - pfn =3D gfn_to_pfn_memslot(slot, gfn); + pfn =3D __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page); if (is_error_noslot_pfn(pfn)) { if (printk_ratelimit()) pr_err("%s: real page not found for gfn %lx\n", @@ -488,8 +489,6 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_= vcpu_e500 *vcpu_e500, } } writable =3D kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); - if (writable) - kvm_set_pfn_dirty(pfn); =20 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ref, gvaddr, stlbe); @@ -498,8 +497,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_= vcpu_e500 *vcpu_e500, kvmppc_mmu_flush_icache(pfn); =20 out: - /* Drop refcount on page, so that mmu notifiers can clear it */ - kvm_release_pfn_clean(pfn); + kvm_release_faultin_page(kvm, page, !!ret, writable); spin_unlock(&kvm->mmu_lock); return ret; } --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CB64E1862A9 for ; Fri, 26 Jul 2024 23:54:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038071; cv=none; b=LZLyBBCBCHYlPlqnAlRhck1yqAEfCaRwWXU3OwTNDskwFI0HMAIODEk8g18C3vrDClJcsWU2/hHXvGAh5bht2kztLMrvyYjfbM5WUskUaNjLWGDg7q02sOWY6suKMt8m1TU5jU93ueDvXXJBliDlmT8uTGo2SBs47DO51uZnqrA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038071; c=relaxed/simple; bh=t1qRb0jq3mEdjTFuwQYR/NAnoG1UlhS5OUQ+Ga/OUT0=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=s3LdssSvrmaUJQrOn8f3XIt3p8aekS4JwZEGlm5Vk2MwSTtmIo2VmUGSNMzH4hDVXM9qrpSH/gQOQiPeaBnbosjUISWYCdqHOyYSDCbw/17tyxtIPSQkPibFAb2K0QEERAwhCEiuZZsr2Rg4zw1BaujsQC3QhXY/Gzons2zADCM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=mjCfZSh0; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="mjCfZSh0" Received: by mail-pf1-f202.google.com with SMTP id d2e1a72fcca58-70d14d11f42so1334293b3a.2 for ; Fri, 26 Jul 2024 
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:03 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20240726235234.228822-1-seanjc@google.com>
X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog
Message-ID: <20240726235234.228822-55-seanjc@google.com>
Subject: [PATCH v12 54/84] KVM: arm64: Mark "struct page" pfns accessed/dirty before dropping mmu_lock
From: Sean Christopherson
To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Mark pages/folios accessed+dirty prior to dropping mmu_lock, as marking a page/folio dirty after it has been written back can make some filesystems unhappy (backing KVM guests with such filesystem files is uncommon, and the race is minuscule, hence the lack of complaints). See the link below for details.
This will also allow converting arm64 to kvm_release_faultin_page(), which requires that mmu_lock be held (for the aforementioned reason). Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/arm64/kvm/mmu.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 22ee37360c4e..ce13c3d884d5 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1685,15 +1685,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, ph= ys_addr_t fault_ipa, } =20 out_unlock: + if (writable && !ret) + kvm_set_pfn_dirty(pfn); + else + kvm_release_pfn_clean(pfn); + read_unlock(&kvm->mmu_lock); =20 /* Mark the page dirty only if the fault is handled successfully */ - if (writable && !ret) { - kvm_set_pfn_dirty(pfn); + if (writable && !ret) mark_page_dirty_in_slot(kvm, memslot, gfn); - } =20 - kvm_release_pfn_clean(pfn); return ret !=3D -EAGAIN ? ret : 0; } =20 --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pf1-f201.google.com (mail-pf1-f201.google.com [209.85.210.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 091981862AF for ; Fri, 26 Jul 2024 23:54:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038073; cv=none; b=Lg5KJRLP393jAsjmKLIT9CD0aHMtldRR637Cm3owIzXNd1HEhtIFkhR6Q+GXAmfNqTZQXVtv9pZAJ1JrIFYPLcGU4bWS67EtzW6ECVoTccs1gC3aCrn39jrgdzHcHz0Wnv5V8W6p0j+xY3y/IDd6ZAzDB/sVlDI5d6jGiR2Vq8E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038073; c=relaxed/simple; bh=+aOTq/dN+ZtKhcKidsUn1n9XU82CWPIY26uxDS8eRrg=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=UzZpGbKWhhjIpvieavS/6ECbnktk46F8CfGTql0aE7Y3XtP8u7uZIbCY6pNgaWRtxlSmU116pCSK5h8vro7zrmDMfgFGTT/wxylO9objz1C924fzToX0Fcf/7kj2KTjnaeBjNGDp78mgMai4PcWD+I3oMVH01M7G4N+gELU8LEQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=CE1myMVp; arc=none smtp.client-ip=209.85.210.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="CE1myMVp" Received: by mail-pf1-f201.google.com with SMTP id d2e1a72fcca58-70ec1039600so1046824b3a.0 for ; Fri, 26 Jul 2024 16:54:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038071; x=1722642871; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=caIg6eZHDZieCZ2y8TnIGsflH+xvFOT1i8qMSVM8X2U=; b=CE1myMVp3Dl5k96tRgaVKwsJg1Hh5+4ubjoPOGHVYcU2nAp2bUAJp0xVcmUlG7Eq7T kX3FUVviN/xlCGE2VHhx4zvzCFdIXTTAeGebohLc0WJQy0S0QtKCndHVGf8rXyWhZZvK K6WBTEDu89MDh9neQNH/5VcA2kCk0bVCH1D5EZe+FSSWztgn1toecH/12k8k7EFV9IFW BG3oU0/hq36mZarPCztbdXApyZYytdiU/OQE2AlAAZob+lwGFksyoSApae8+/540p4Yu 
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:04 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20240726235234.228822-1-seanjc@google.com>
X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog
Message-ID: <20240726235234.228822-56-seanjc@google.com>
Subject: [PATCH v12 55/84] KVM: arm64: Use __kvm_faultin_pfn() to handle memory aborts
From: Sean Christopherson
To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Convert arm64 to use __kvm_faultin_pfn()+kvm_release_faultin_page(). Three down, six to go.
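For reference, the converted tail of user_mem_abort() reduces to roughly the following (a sketch reconstructed from the hunks that follow, with the stage-2 fault handling in between elided):

	struct page *page;
	bool writable;
	kvm_pfn_t pfn;

	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
				&writable, &page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	read_lock(&kvm->mmu_lock);

	/* ... install the stage-2 mapping, setting 'ret' ... */

	/* Accessed/dirty updates and the final put happen with mmu_lock held. */
	kvm_release_faultin_page(kvm, page, !!ret, writable);
	read_unlock(&kvm->mmu_lock);

	if (writable && !ret)
		mark_page_dirty_in_slot(kvm, memslot, gfn);

	return ret != -EAGAIN ? ret : 0;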
Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/arm64/kvm/mmu.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index ce13c3d884d5..756fc856ab44 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1439,6 +1439,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, long vma_pagesize, fault_granule; enum kvm_pgtable_prot prot =3D KVM_PGTABLE_PROT_R; struct kvm_pgtable *pgt; + struct page *page; =20 if (fault_is_perm) fault_granule =3D kvm_vcpu_trap_get_perm_fault_granule(vcpu); @@ -1553,7 +1554,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, =20 /* * Read mmu_invalidate_seq so that KVM can detect if the results of - * vma_lookup() or __gfn_to_pfn_memslot() become stale prior to + * vma_lookup() or __kvm_faultin_pfn() become stale prior to * acquiring kvm->mmu_lock. * * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs @@ -1562,8 +1563,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, mmu_seq =3D vcpu->kvm->mmu_invalidate_seq; mmap_read_unlock(current->mm); =20 - pfn =3D __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - write_fault, &writable); + pfn =3D __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0, + &writable, &page); if (pfn =3D=3D KVM_PFN_ERR_HWPOISON) { kvm_send_hwpoison_signal(hva, vma_shift); return 0; @@ -1576,7 +1577,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, * If the page was identified as device early by looking at * the VMA flags, vma_pagesize is already representing the * largest quantity we can map. If instead it was mapped - * via gfn_to_pfn_prot(), vma_pagesize is set to PAGE_SIZE + * via __kvm_faultin_pfn(), vma_pagesize is set to PAGE_SIZE * and must not be upgraded. 
* * In both cases, we don't let transparent_hugepage_adjust() @@ -1685,11 +1686,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phy= s_addr_t fault_ipa, } =20 out_unlock: - if (writable && !ret) - kvm_set_pfn_dirty(pfn); - else - kvm_release_pfn_clean(pfn); - + kvm_release_faultin_page(kvm, page, !!ret, writable); read_unlock(&kvm->mmu_lock); =20 /* Mark the page dirty only if the fault is handled successfully */ --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pj1-f74.google.com (mail-pj1-f74.google.com [209.85.216.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 57945156F20 for ; Fri, 26 Jul 2024 23:54:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038076; cv=none; b=Z3IImX4oHpNcrGPa8lhzpXJP1U34X6UnwgcT77WW6H4Gp/Cr9v5/w1o8zRuLqrKu6R3KiU/C4jHSO2BovgxUqSgPM2W3zESh/1rZVob8zQ/is1xG6nN4oG9YuuO40O6/z4HL93yEh0zyzwBSuKLcXuxFOZg14xxoyH5IiCKpFDs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038076; c=relaxed/simple; bh=ZzVpAxjr1daPEmxzWMhozlSzguFi8sPCSDoFWGHBH/s=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=OluKqLW0lPG/R2Iia505QirmoQ1TRXboBG5/DNguQ9rDy6IZYbZEegmW8+IBdX1Z1qw6rqsP6yQTNFQaR1PNlHBZoHIOOS0irrMBRdxqWRE7GtnWOMuElxznVZWXFFr2PrHAvDXroel7kqAiMIb+HC/gQ/cEqzgBvCC2+1n98C0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=1IUbQ259; arc=none smtp.client-ip=209.85.216.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="1IUbQ259" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-2cb7364bac9so1380488a91.0 for ; Fri, 26 Jul 2024 16:54:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038074; x=1722642874; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=EmJI7mCwitfWH+6LwHkJyXugWKEFSS3lkF1NSC2iA8Y=; b=1IUbQ2590lBkKDIMrUE72/4/QrD2gtK4G7LqSwbXtf7nN1kGCeezBiWMAvbJQjcA5V FVTHr2gDDgj62Qk59YHk/Sv4nV97eiZzlF72+WGfvKBtV1LEzeIYd+HbQtLquX2MHNzc 5dMvOajXb5YYLHpeMLW92DU7mq1fQs/g/pyws9mO2GHtVureshhkb2qwZS2vX7pGsAbW YLHGi95xH65eB939kEBmpdpGtvAJkCVil2QvpiR4nnxJ8BfE1COmkgoij4igZUBOuQ79 eBNTsiBJhYmN2XzZBIMo8YXqODCVZq09XfjIbJp2HmSZsgFj6pR4kPKAx95tr/IWt41a YvGA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038074; x=1722642874; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=EmJI7mCwitfWH+6LwHkJyXugWKEFSS3lkF1NSC2iA8Y=; b=eG8Vj2aMgs7WIwuhLt++28btO0/iNew3P0QtL0H3xmawZ35aEit80DnHt33xHgSd22 0LeOj6fiOpGXRKxGEECx2EsZwbSh6VKgOq47gQXIgSb7j2UbczyPdqqZ1qNESMnFMj1Q LFbDtj7QWQVcSVWZOXgQguR0YIt68saBYna8ChCpksssWC9cg/2XoRYgAKrQhyJhzSoI 
qQ864MR46cwjzQcUbo6xDc+NZgxXXJsZmbj6maCWw+LFkoHz2QeXynCZ5TLOwRagw6ed e8Rf9L3+a0tZui0xy6+gGUnPpNytwB82rZ6y16bPSvMVnhOrM+PatBv3smpmv4t9z3Vy ymng== X-Forwarded-Encrypted: i=1; AJvYcCXvEZAPWndk4Oq7DHPVp4OWb7xoN0jsnREHhh9WIXjKjXha5xI91vJT9ZE7s2lb0yR/LNEbpNSy6vJg1/fiKXPDk5YM3qXALwDZJqEo X-Gm-Message-State: AOJu0YzQ3oNossA/LKqS6KaY3Cdw6dK17GaLKpfXICfaxhXy/y2gIXI+ BLpXeBkbjE7NFSvf3AwkI96JBMdYZXWQJl6WF7rjnO5VItMa12ukode/QV+S5dTT7FPh1F4u2y/ lKw== X-Google-Smtp-Source: AGHT+IG4DskQJpICSO/7ZhzoCwUBvTBXkkJhFyAvydTVcsfBboV7ccXE3z2KESMy5r+D4xCglFGTF0FLhYg= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:90a:470e:b0:2c8:632:7efe with SMTP id 98e67ed59e1d1-2cf7d1673aamr10630a91.4.1722038073062; Fri, 26 Jul 2024 16:54:33 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:52:05 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-57-seanjc@google.com> Subject: [PATCH v12 56/84] KVM: RISC-V: Mark "struct page" pfns dirty iff a stage-2 PTE is installed From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Don't mark pages dirty if KVM bails from the page fault handler without installing a stage-2 mapping, i.e. if the page is guaranteed to not be written by the guest. In addition to being a (very) minor fix, this paves the way for converting RISC-V to use kvm_release_faultin_page(). 
Signed-off-by: Sean Christopherson Acked-by: Anup Patel Reviewed-by: Andrew Jones Tested-by: Alex Benn=C3=A9e --- arch/riscv/kvm/mmu.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index b63650f9b966..06aa5a0d056d 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -669,7 +669,6 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, goto out_unlock; =20 if (writable) { - kvm_set_pfn_dirty(hfn); mark_page_dirty(kvm, gfn); ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, vma_pagesize, false, true); @@ -682,6 +681,9 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, kvm_err("Failed to map in G-stage\n"); =20 out_unlock: + if ((!ret || ret =3D=3D -EEXIST) && writable) + kvm_set_pfn_dirty(hfn); + spin_unlock(&kvm->mmu_lock); kvm_set_pfn_accessed(hfn); kvm_release_pfn_clean(hfn); --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-pf1-f201.google.com (mail-pf1-f201.google.com [209.85.210.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4F56E187355 for ; Fri, 26 Jul 2024 23:54:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038080; cv=none; b=L1okLEDPdvLtxdiAPYzOY2kxZsW4WJJwbdwV+0yE1Dq6Z0fv34mJ6UuqpTl21JMhgSTDrChiza6Bel3FdCGqp15WZSO+LvIgWIceth+0Nd9zPE+5b5GvMSTaIouxn+XoLpVqq/5MRmHOYMS3nJ7l7rIbbLBZsvMmQWnoTVY8XVU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038080; c=relaxed/simple; bh=xUjZCVI6Pn1FEhfDqaMS78eNTpmQssm1VgccWbaB3YU=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=J8VRAS0Pf4WXU7eRANHQMO+dQhjY9DW7O3Y1F454/T7PD1zkarboHP2bWx73VuRSpyrd5zskJJhF60rbS6PJ4J8NcUNXqsSeA/D0v0KVMMf/VZj6fgX9FJIYSu09VJ8cxTag9yVegrjls3NO4H7s2x89B9//Z81yGI0nDJGUdWw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=qzSz+eiy; arc=none smtp.client-ip=209.85.210.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="qzSz+eiy" Received: by mail-pf1-f201.google.com with SMTP id d2e1a72fcca58-70eab26e146so1297653b3a.3 for ; Fri, 26 Jul 2024 16:54:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038077; x=1722642877; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=8WWxSeEhwRY8Kz5lKN5dweAMeNFVe6itBQweyIKsQVg=; b=qzSz+eiy51sE136ltcrx3c05MDtPwdpTXtK/OxPx/ZqpeqXQUatNO/3ilT4X+Bg6pY U4MffWjrgYt1qnYkrERvyZfWIo6Vu/aU1gEVvq/IGFJk/w9YyvrrZinzsHJdmzXL7DHP hYjVan3DoE/LpgsOClo8KWhM3wP0UZSAFYTFU5iG5UAAaHidkH3vtwOuKeGFnGkoX1Nn FqCdqiQVEfQqlxoBOH2WyJtrVjkCSTTAAQmPCpDSas7b/tNs8S+ek/ys8p9RCOK6csdr /+rBmQvH8jE1+5B9BskkLoUaDryHEsTpDmdh4wv8YhJjSbtL9Y8lgGpaxYUoljNuCVqu mT5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; 
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:06 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20240726235234.228822-1-seanjc@google.com>
X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog
Message-ID: <20240726235234.228822-58-seanjc@google.com>
Subject: [PATCH v12 57/84] KVM: RISC-V: Mark "struct page" pfns accessed before dropping mmu_lock
From: Sean Christopherson
To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Mark pages accessed before dropping mmu_lock when faulting in guest memory so that RISC-V can convert to kvm_release_faultin_page() without tripping its lockdep assertion on mmu_lock being held. Marking pages accessed outside of mmu_lock is ok (not great, but safe), but marking pages _dirty_ outside of mmu_lock can make filesystems unhappy.
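The helper in question is not quoted in this excerpt; going only by how it is invoked elsewhere in the series, kvm_release_faultin_page(kvm, page, unused, dirty), and by the lockdep and dirty-vs-clean behavior described above, its shape is roughly the following (an illustrative sketch, not the series' actual implementation):

	/* Sketch: release a page obtained via __kvm_faultin_pfn().  Callers
	 * are expected to hold mmu_lock (the lockdep assertion mentioned
	 * above); a NULL page, i.e. a non-refcounted pfn, is a nop. */
	static inline void kvm_release_faultin_page(struct kvm *kvm, struct page *page,
						    bool unused, bool dirty)
	{
		if (!page)
			return;

		if (dirty)
			kvm_release_page_dirty(page);
		else
			kvm_release_page_clean(page);
	}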
Signed-off-by: Sean Christopherson Acked-by: Anup Patel Reviewed-by: Andrew Jones Tested-by: Alex Benn=C3=A9e --- arch/riscv/kvm/mmu.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 06aa5a0d056d..806f68e70642 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -683,10 +683,10 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, out_unlock: if ((!ret || ret =3D=3D -EEXIST) && writable) kvm_set_pfn_dirty(hfn); + else + kvm_release_pfn_clean(hfn); =20 spin_unlock(&kvm->mmu_lock); - kvm_set_pfn_accessed(hfn); - kvm_release_pfn_clean(hfn); return ret; } =20 --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:58 2024 Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D3C45187849 for ; Fri, 26 Jul 2024 23:54:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038081; cv=none; b=ZYFLPdOd1RiIrLiuws1cOFc+vK9oMViXcc1yL7PL5RhgmqnRWp9qAbVSR77a4LlruJ3g3dg12LwZZvaECu9H8LnrxDIzXKmoy9pITurmussbe95hFEb14xmLdhv6x4qqGy91bCmLf8Ek8vSjOBlgtlhhX8JoLza0e8txHooiILM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038081; c=relaxed/simple; bh=QKdqSeDurXwkoaj80ZjKrbgXs0BzbdNHTlCFTKbvwVA=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=mdoGEeaEJlJuQ7DbT7vY32SdLshPJRf4zHLPpSzYn9WClh9/pqRwXKv8uJmo/0j5GGltHFrJIMa8+ldQ7i3j8hXIZFjuZAM6KVSe6YcFqumN6ccKmTCg3PA1JW2OwrTP3OFDMYW+BXSZ8CDRbBJle+OSN1uL3wYISqCVVESdrqA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Z1GHr4iw; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Z1GHr4iw" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-650ab31aabdso5339107b3.3 for ; Fri, 26 Jul 2024 16:54:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038079; x=1722642879; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=lKpu7mXAQaRbYxbEDOwpqeV9V2sn6fCg4Pm2+RclqqE=; b=Z1GHr4iwyaZgScL7x3Qd9YOawsiE8lfDmyYEQLoLQHM2UyzIZOxvhUHiyF9qM2lo5z o5iB8yPaFYoB6jqPBVslKHR+G+7Q4f8VSWiTt0fV3+r5VgiE8/YKLxN193I8RhrImfom MPWbZeByHAuKAG1vUDjUEYEYxqFJ6RIeQyfEIcLEU1v1lxnocm7f8+JsVEVBHJlAfdk+ FnOw0Kidl0+sE8lhy5ccYILsUSK+lWaHJlLdQr5ntFDGcWYEARl6+w2fZb7SXyXoHzBJ HyFEiibuf5GxvRN9UbNg161NSLgHsZ5dDSBUPxl0Wzszb7Z2b4i2Zp0c6jiLTEHxm7+R IY7Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038079; x=1722642879; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=lKpu7mXAQaRbYxbEDOwpqeV9V2sn6fCg4Pm2+RclqqE=; 
b=r8NKxLglKCPJn1OkB5ectvtBiLmGoAhFPfTz++xx7cDKAsXmadbk1zxgLDkvvZAYeQ k9rDaKH3qQzeWuqqf8bM7x9ZvL4bI2XfOvaOUyvQxkySOPPHlUMXqY/76A6qBRZJ15s5 nlryrGqZSMiApMbbSbaeDze2Vzmw7JmX1vM7MzpmNTDmA8UeIlHXaIe+u38mhHcGai5g KRMjgtfgeMYsJUBTmzhfb9b99TQtQaL3xn3vavCmISKvMWNJ3kdLQIh089pSg9LVaibC PrnltkEkg0Z5we5lrynLqpB/YDe4uEFFLJtwkpaw46fZHngxABfnDYC91VX9AZ+VhOiD tfUQ== X-Forwarded-Encrypted: i=1; AJvYcCWrWoN7kytawrWqxT+ifnh/6uEV3POb8wXBnyMvncG/jwMmMFjowT+qRgfTBr6IXkbEHulgIpHoRQJA9K5ee1Berf5n+wjUZ0/tLNZj X-Gm-Message-State: AOJu0YwzkKe5Gkg04csh4rTKrts+JR8uWgmUdc9MYfdusGz7GHAEBken ugJGFYHuh4TCRtlIxZ8s4eJ15hicWhX7wLux8wXwWW8HiEDx2+OB02vwhMLxFWygrFLEUSRcJV7 veA== X-Google-Smtp-Source: AGHT+IHhzhjU3Z65Ec9zcL96WqENi83QMIcwPrprR++iZTSrPH87Hnrc6/v3lgb0Tgsr4AkAwll8yu22uIk= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:690c:ec3:b0:663:ddc1:eab8 with SMTP id 00721157ae682-67a088f07bfmr790257b3.4.1722038078515; Fri, 26 Jul 2024 16:54:38 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:52:07 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-59-seanjc@google.com> Subject: [PATCH v12 58/84] KVM: RISC-V: Use kvm_faultin_pfn() when mapping pfns into the guest From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Convert RISC-V to __kvm_faultin_pfn()+kvm_release_faultin_page(), which are new APIs to consolidate arch code and provide consistent behavior across all KVM architectures. Signed-off-by: Sean Christopherson Acked-by: Anup Patel Reviewed-by: Andrew Jones Tested-by: Alex Benn=C3=A9e --- arch/riscv/kvm/mmu.c | 11 ++++------- 1 file changed, 4 insertions(+), 7 deletions(-) diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 806f68e70642..f73d6a79a78c 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -601,6 +601,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, bool logging =3D (memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY)) ? true : false; unsigned long vma_pagesize, mmu_seq; + struct page *page; =20 /* We need minimum second+third level pages */ ret =3D kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels); @@ -631,7 +632,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, =20 /* * Read mmu_invalidate_seq so that KVM can detect if the results of - * vma_lookup() or gfn_to_pfn_prot() become stale priort to acquiring + * vma_lookup() or __kvm_faultin_pfn() become stale priort to acquiring * kvm->mmu_lock. 
* * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs @@ -647,7 +648,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, return -EFAULT; } =20 - hfn =3D gfn_to_pfn_prot(kvm, gfn, is_write, &writable); + hfn =3D kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page); if (hfn =3D=3D KVM_PFN_ERR_HWPOISON) { send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, vma_pageshift, current); @@ -681,11 +682,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, kvm_err("Failed to map in G-stage\n"); =20 out_unlock: - if ((!ret || ret =3D=3D -EEXIST) && writable) - kvm_set_pfn_dirty(hfn); - else - kvm_release_pfn_clean(hfn); - + kvm_release_faultin_page(kvm, page, ret && ret !=3D -EEXIST, writable); spin_unlock(&kvm->mmu_lock); return ret; } --=20 2.46.0.rc1.232.g9752f9e123-goog From nobody Mon Sep 16 19:16:59 2024 Received: from mail-pl1-f202.google.com (mail-pl1-f202.google.com [209.85.214.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C4307187860 for ; Fri, 26 Jul 2024 23:54:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038083; cv=none; b=RgcUUid0U4Ika/7P+MoAPY09dHt2x2hxCXSfa6NVNJStv2mQpXbDmAqgOpSxfqSlZC2Jr/+pF5mZ8+RXAjwOdDFbmGTfI+I6+topIHVERLwGN/bvdPOLKs/rfA7L2rE4OA0MgKMBKYJVrVZR+0syKlvLn7ccxcEUkDsKt53Mtbw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722038083; c=relaxed/simple; bh=WM0+vLgn5P7yqOKEb4OpGV1yF56+IgtBIdrStZDYJ+0=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=CsC0l6/UuLB3S+9OR8uOg5XvsrHACz1nBlquz6DX43HHJMJCY+/+0c2vsHCCTkwROdq7zkcKK9YluEBKYUpN6cr8B9hiz6uMvn/iH5HniH5RjZTi3PgueoL3ZnXj0gU3ABe3bBKX5m5+NRBm1hoUaB9FrQ/BH1OsSr93SxprXXI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=kVB0XwtW; arc=none smtp.client-ip=209.85.214.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="kVB0XwtW" Received: by mail-pl1-f202.google.com with SMTP id d9443c01a7336-1fc52d3c76eso12300885ad.3 for ; Fri, 26 Jul 2024 16:54:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722038081; x=1722642881; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=eM04pE2oItbps9DzmNWUdptdfO1yLAvOtXUsQdNxOmg=; b=kVB0XwtWlaIqWC2bciibmwyPY5lhm9EGl2h/NSBVjucKht2mKde+UXiItCBPkXUc2j ECsB5VQZYw83cZmPK0vSu2pOPS3g/noU5yo+0nZweKxVq60lznh2/zvobWhF1GwfrHt2 xwM2nYg4a2jKEiLuizfvNjA00XGNgIzL3UA8bsWHa12FFxmS0fAfnWho9uL7JP5vj/HR ++tAatE2sGXWg1/+a2yrih5nfc+uvTlG20K8pikZhBu1sSjEYNKYj298i38cpySiEOQg 0GPkh3qdAFU1U+J9epvzvrCjnNpIOo0bzg4AQO5tuvwyJZ8qvJ5Oz/NqEk3nV9QxIuBK WCZg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722038081; x=1722642881; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=eM04pE2oItbps9DzmNWUdptdfO1yLAvOtXUsQdNxOmg=; b=Bg9WWtLKZQoL3uW5xIhQ5Vr0meF1ChJw0qjWgS0WOKZCNN2jY/nvjH4o9ZjFxPmi9m YNrbBarTRf0WNNu+smgTzrrl0fzwCaezR7V2/klDnCvgUxbXgRCHaKAJ8DiusMoo2PI/ ks2axJXhmls2clv2XxKiztajd1aR9sy0n0FM1zQf1x64p67Y/MCpowQ9wMV5dF0KFA9v 3VwUin/M2+k/eWvR1Jo8dTujvFJV3wnlMJmETbr002TRNzNSi5K/yeu3R6ruMbdT+WGA E7+zhbQV+1QPAL5dECuc0xDbhHooEqLZhc9G4SiXKEgiIHhWnUuCWwIfEnXpwL17W/X+ 1kVA== X-Forwarded-Encrypted: i=1; AJvYcCVaos2bpuwwNJIqBQKZ0pp6o9UHI8pNymdZm99rihg0/F1KG1p3RZqQC9vQx6CxOlhGFduSifAjp69ztI8aBvJI3rOWYND68IJWNpxT X-Gm-Message-State: AOJu0Yyy1iql57zz0wE9miAG0Kc5tfXdHxoaElABQrAl+8yUEdfcYISQ MAMfG9byYNYRJBrP/1ZHsnup4gOhz8qu5ZQHC1GKlRyX1EESHxagEzDX251VWBX23hVjyctAscw aBg== X-Google-Smtp-Source: AGHT+IHO8zzuGkjlMSTl8Rqd7iW1pR8+tgiNg2ojfeAF3X7ugPbzXNjRAQyOFKQ91J/aSe/NwOtEVPH7LtU= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:902:db11:b0:1f8:44f4:efd9 with SMTP id d9443c01a7336-1ff047d0af0mr1044175ad.2.1722038080667; Fri, 26 Jul 2024 16:54:40 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:52:08 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-60-seanjc@google.com> Subject: [PATCH v12 59/84] KVM: PPC: Use __kvm_faultin_pfn() to handle page faults on Book3s HV From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Replace Book3s HV's homebrewed fault-in logic with __kvm_faultin_pfn(), which functionally does pretty much the exact same thing. Note, when the code was written, KVM indeed didn't do fast GUP without "!atomic && !async", but that has long since changed (KVM tries fast GUP for all writable mappings). Signed-off-by: Sean Christopherson Tested-by: Alex Benn=C3=A9e --- arch/powerpc/kvm/book3s_64_mmu_hv.c | 25 ++++--------------------- 1 file changed, 4 insertions(+), 21 deletions(-) diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_= 64_mmu_hv.c index 2f1d58984b41..f305395cf26e 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -603,27 +603,10 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu, write_ok =3D writing; hva =3D gfn_to_hva_memslot(memslot, gfn); =20 - /* - * Do a fast check first, since __gfn_to_pfn_memslot doesn't - * do it with !atomic && !async, which is how we call it. - * We always ask for write permission since the common case - * is that the page is writable. 

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 2f1d58984b41..f305395cf26e 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -603,27 +603,10 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
 	write_ok = writing;
 	hva = gfn_to_hva_memslot(memslot, gfn);
 
-	/*
-	 * Do a fast check first, since __gfn_to_pfn_memslot doesn't
-	 * do it with !atomic && !async, which is how we call it.
-	 * We always ask for write permission since the common case
-	 * is that the page is writable.
-	 */
-	if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
-		write_ok = true;
-	} else {
-		/* Call KVM generic code to do the slow-path check */
-		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-					   writing, &write_ok);
-		if (is_error_noslot_pfn(pfn))
-			return -EFAULT;
-		page = NULL;
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-			if (PageReserved(page))
-				page = NULL;
-		}
-	}
+	pfn = __kvm_faultin_pfn(memslot, gfn, writing ? FOLL_WRITE : 0,
+				&write_ok, &page);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
 
 	/*
 	 * Read the PTE from the process' radix tree and use that
-- 
2.46.0.rc1.232.g9752f9e123-goog
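
The remaining conversions in this section all land on the same shape, so a condensed sketch of the target pattern may be easier to keep in mind than any single diff. This is an illustrative sketch only, assembled from how the helpers are invoked in these patches; the arch-specific PTE installation and most error handling are elided, and it is not a verbatim copy of any handler.

/* Sketch of the common fault-in/release pattern used by the conversions. */
static int faultin_and_map_sketch(struct kvm_vcpu *vcpu,
				  struct kvm_memory_slot *memslot,
				  gfn_t gfn, bool writing)
{
	struct kvm *kvm = vcpu->kvm;
	unsigned long mmu_seq;
	struct page *page;
	bool writable;
	kvm_pfn_t pfn;
	int ret;

	/* Snapshot the invalidation sequence before faulting in the pfn. */
	mmu_seq = kvm->mmu_invalidate_seq;
	smp_rmb();

	/* Replaces the old fast-GUP + __gfn_to_pfn_memslot() combination. */
	pfn = __kvm_faultin_pfn(memslot, gfn, writing ? FOLL_WRITE : 0,
				&writable, &page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	spin_lock(&kvm->mmu_lock);
	if (mmu_invalidate_retry(kvm, mmu_seq))
		ret = -EAGAIN;		/* raced with an invalidation, refault */
	else
		ret = 0;		/* arch-specific: install the PTE here */

	/*
	 * Release while mmu_lock is still held; accessed/dirty updates are
	 * handled by the release helper based on "unused" and "writable".
	 */
	kvm_release_faultin_page(kvm, page, ret != 0, writable);
	spin_unlock(&kvm->mmu_lock);
	return ret;
}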

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:09 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-61-seanjc@google.com>
Subject: [PATCH v12 60/84] KVM: PPC: Use __kvm_faultin_pfn() to handle page faults on Book3s Radix
From: Sean Christopherson

Replace Book3s Radix's homebrewed (read: copy+pasted) fault-in logic with
__kvm_faultin_pfn(), which functionally does pretty much the exact same
thing.

Note, when the code was written, KVM indeed didn't do fast GUP without
"!atomic && !async", but that has long since changed (KVM tries fast GUP
for all writable mappings).

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 29 +++++---------------------
 1 file changed, 5 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 8304b6f8fe45..14891d0a3b73 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -829,40 +829,21 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	unsigned long mmu_seq;
 	unsigned long hva, gfn = gpa >> PAGE_SHIFT;
 	bool upgrade_write = false;
-	bool *upgrade_p = &upgrade_write;
 	pte_t pte, *ptep;
 	unsigned int shift, level;
 	int ret;
 	bool large_enable;
+	kvm_pfn_t pfn;
 
 	/* used to check for invalidations in progress */
 	mmu_seq = kvm->mmu_invalidate_seq;
 	smp_rmb();
 
-	/*
-	 * Do a fast check first, since __gfn_to_pfn_memslot doesn't
-	 * do it with !atomic && !async, which is how we call it.
-	 * We always ask for write permission since the common case
-	 * is that the page is writable.
-	 */
 	hva = gfn_to_hva_memslot(memslot, gfn);
-	if (!kvm_ro && get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
-		upgrade_write = true;
-	} else {
-		unsigned long pfn;
-
-		/* Call KVM generic code to do the slow-path check */
-		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-					   writing, upgrade_p);
-		if (is_error_noslot_pfn(pfn))
-			return -EFAULT;
-		page = NULL;
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-			if (PageReserved(page))
-				page = NULL;
-		}
-	}
+	pfn = __kvm_faultin_pfn(memslot, gfn, writing ? FOLL_WRITE : 0,
+				&upgrade_write, &page);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
 
 	/*
 	 * Read the PTE from the process' radix tree and use that
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:10 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-62-seanjc@google.com>
Subject: [PATCH v12 61/84] KVM: PPC: Drop unused @kvm_ro param from kvmppc_book3s_instantiate_page()
From: Sean Christopherson

Drop @kvm_ro from kvmppc_book3s_instantiate_page() as it is now only
written, and never read.

No functional change intended.
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/powerpc/include/asm/kvm_book3s.h  | 2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 6 ++----
 arch/powerpc/kvm/book3s_hv_nested.c    | 4 +---
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 3e1e2a698c9e..34e8f0b7b345 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -203,7 +203,7 @@ extern bool kvmppc_hv_handle_set_rc(struct kvm *kvm, bool nested,
 extern int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 			unsigned long gpa,
 			struct kvm_memory_slot *memslot,
-			bool writing, bool kvm_ro,
+			bool writing,
 			pte_t *inserted_pte, unsigned int *levelp);
 extern int kvmppc_init_vm_radix(struct kvm *kvm);
 extern void kvmppc_free_radix(struct kvm *kvm);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 14891d0a3b73..b3e6e73d6a08 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -821,7 +821,7 @@ bool kvmppc_hv_handle_set_rc(struct kvm *kvm, bool nested, bool writing,
 int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 				   unsigned long gpa,
 				   struct kvm_memory_slot *memslot,
-				   bool writing, bool kvm_ro,
+				   bool writing,
 				   pte_t *inserted_pte, unsigned int *levelp)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -931,7 +931,6 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 	struct kvm_memory_slot *memslot;
 	long ret;
 	bool writing = !!(dsisr & DSISR_ISSTORE);
-	bool kvm_ro = false;
 
 	/* Check for unusual errors */
 	if (dsisr & DSISR_UNSUPP_MMU) {
@@ -984,7 +983,6 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 					       ea, DSISR_ISSTORE | DSISR_PROTFAULT);
 			return RESUME_GUEST;
 		}
-		kvm_ro = true;
 	}
 
 	/* Failed to set the reference/change bits */
@@ -1002,7 +1000,7 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 
 	/* Try to insert a pte */
 	ret = kvmppc_book3s_instantiate_page(vcpu, gpa, memslot, writing,
-					     kvm_ro, NULL, NULL);
+					     NULL, NULL);
 
 	if (ret == 0 || ret == -EAGAIN)
 		ret = RESUME_GUEST;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 05f5220960c6..771173509617 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -1527,7 +1527,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 	unsigned long n_gpa, gpa, gfn, perm = 0UL;
 	unsigned int shift, l1_shift, level;
 	bool writing = !!(dsisr & DSISR_ISSTORE);
-	bool kvm_ro = false;
 	long int ret;
 
 	if (!gp->l1_gr_to_hr) {
@@ -1607,7 +1606,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 					       ea, DSISR_ISSTORE | DSISR_PROTFAULT);
 			return RESUME_GUEST;
 		}
-		kvm_ro = true;
 	}
 
 	/* 2. Find the host pte for this L1 guest real address */
@@ -1629,7 +1627,7 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 	if (!pte_present(pte) || (writing && !(pte_val(pte) & _PAGE_WRITE))) {
 		/* No suitable pte found -> try to insert a mapping */
 		ret = kvmppc_book3s_instantiate_page(vcpu, gpa, memslot,
-						     writing, kvm_ro, &pte, &level);
+						     writing, &pte, &level);
 		if (ret == -EAGAIN)
 			return RESUME_GUEST;
 		else if (ret)
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:11 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-63-seanjc@google.com>
Subject: [PATCH v12 62/84] KVM: PPC: Book3S: Mark "struct page" pfns dirty/accessed after installing PTE
From: Sean Christopherson

Mark pages/folios dirty/accessed after installing a PTE, and more
specifically after acquiring mmu_lock and checking for an mmu_notifier
invalidation.  Marking a page/folio dirty after it has been written back
can make some filesystems unhappy (backing KVM guests with such
filesystem files is uncommon, and the race is minuscule, hence the lack
of complaints).  See the link below for details.

This will also allow converting Book3S to kvm_release_faultin_page(),
which requires that mmu_lock be held (for the aforementioned reason).
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/powerpc/kvm/book3s_64_mmu_host.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index bc6a381b5346..d0e4f7bbdc3d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -121,13 +121,10 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 
 	vpn = hpt_vpn(orig_pte->eaddr, map->host_vsid, MMU_SEGSIZE_256M);
 
-	kvm_set_pfn_accessed(pfn);
 	if (!orig_pte->may_write || !writable)
 		rflags |= PP_RXRX;
-	else {
+	else
 		mark_page_dirty(vcpu->kvm, gfn);
-		kvm_set_pfn_dirty(pfn);
-	}
 
 	if (!orig_pte->may_execute)
 		rflags |= HPTE_R_N;
@@ -202,8 +199,11 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	}
 
 out_unlock:
+	if (!orig_pte->may_write || !writable)
+		kvm_release_pfn_clean(pfn);
+	else
+		kvm_release_pfn_dirty(pfn);
 	spin_unlock(&kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
 	if (cpte)
 		kvmppc_mmu_hpte_cache_free(cpte);
 
-- 
2.46.0.rc1.232.g9752f9e123-goog
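
The ordering is the point of the patch above: the struct-page side of dirty tracking (kvm_set_pfn_dirty()/kvm_release_pfn_dirty()) now happens only while mmu_lock is held, after the mmu_notifier invalidation check, whereas the memslot dirty-bitmap update (mark_page_dirty()) carries no such requirement. A minimal sketch of the resulting shape, assembled from the hunks above and the later LoongArch patches rather than copied from Book3S verbatim:

/* Sketch: pfn-side dirty state is finalized under mmu_lock, the bitmap is not. */
static void install_and_release_sketch(struct kvm *kvm, kvm_pfn_t pfn,
				       gfn_t gfn, bool writable)
{
	spin_lock(&kvm->mmu_lock);

	/* ... mmu_notifier retry check and PTE/HPTE insertion go here ... */

	if (writable)
		kvm_release_pfn_dirty(pfn);	/* struct page/folio state */
	else
		kvm_release_pfn_clean(pfn);

	spin_unlock(&kvm->mmu_lock);

	if (writable)
		mark_page_dirty(kvm, gfn);	/* dirty bitmap, fine outside the lock */
}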

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:12 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-64-seanjc@google.com>
Subject: [PATCH v12 63/84] KVM: PPC: Use kvm_faultin_pfn() to handle page faults on Book3s PR
From: Sean Christopherson

Convert Book3S PR to __kvm_faultin_pfn()+kvm_release_faultin_page(), which
are new APIs to consolidate arch code and provide consistent behavior
across all KVM architectures.
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/powerpc/include/asm/kvm_book3s.h |  2 +-
 arch/powerpc/kvm/book3s.c             |  7 ++++---
 arch/powerpc/kvm/book3s_32_mmu_host.c |  7 ++++---
 arch/powerpc/kvm/book3s_64_mmu_host.c | 10 +++++-----
 4 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 34e8f0b7b345..343c10dda80f 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -235,7 +235,7 @@ extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
 extern void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr);
 extern int kvmppc_emulate_paired_single(struct kvm_vcpu *vcpu);
 extern kvm_pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa,
-			bool writing, bool *writable);
+			bool writing, bool *writable, struct page **page);
 extern void kvmppc_add_revmap_chain(struct kvm *kvm, struct revmap_entry *rev,
 			unsigned long *rmap, long pte_index, int realmode);
 extern void kvmppc_update_dirty_map(const struct kvm_memory_slot *memslot,
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index ff6c38373957..d79c5d1098c0 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -422,7 +422,7 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
 EXPORT_SYMBOL_GPL(kvmppc_core_prepare_to_enter);
 
 kvm_pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa, bool writing,
-			bool *writable)
+			bool *writable, struct page **page)
 {
 	ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
@@ -437,13 +437,14 @@ kvm_pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa, bool writing,
 		kvm_pfn_t pfn;
 
 		pfn = (kvm_pfn_t)virt_to_phys((void*)shared_page) >> PAGE_SHIFT;
-		get_page(pfn_to_page(pfn));
+		*page = pfn_to_page(pfn);
+		get_page(*page);
 		if (writable)
 			*writable = true;
 		return pfn;
 	}
 
-	return gfn_to_pfn_prot(vcpu->kvm, gfn, writing, writable);
+	return kvm_faultin_pfn(vcpu, gfn, writing, writable, page);
 }
 EXPORT_SYMBOL_GPL(kvmppc_gpa_to_pfn);
 
diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index 4b3a8d80cfa3..5b7212edbb13 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -130,6 +130,7 @@ extern char etext[];
 int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 			bool iswrite)
 {
+	struct page *page;
 	kvm_pfn_t hpaddr;
 	u64 vpn;
 	u64 vsid;
@@ -145,7 +146,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	bool writable;
 
 	/* Get host physical address for gpa */
-	hpaddr = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable);
+	hpaddr = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable, &page);
 	if (is_error_noslot_pfn(hpaddr)) {
 		printk(KERN_INFO "Couldn't get guest page for gpa %lx!\n",
 				 orig_pte->raddr);
@@ -232,7 +233,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 
 	pte = kvmppc_mmu_hpte_cache_next(vcpu);
 	if (!pte) {
-		kvm_release_pfn_clean(hpaddr >> PAGE_SHIFT);
+		kvm_release_page_unused(page);
 		r = -EAGAIN;
 		goto out;
 	}
@@ -250,7 +251,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 
 	kvmppc_mmu_hpte_cache_map(vcpu, pte);
 
-	kvm_release_pfn_clean(hpaddr >> PAGE_SHIFT);
+	kvm_release_page_clean(page);
 out:
 	return r;
 }
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index d0e4f7bbdc3d..be20aee6fd7d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -88,13 +88,14 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	struct hpte_cache *cpte;
 	unsigned long gfn = orig_pte->raddr >> PAGE_SHIFT;
 	unsigned long pfn;
+	struct page *page;
 
 	/* used to check for invalidations in progress */
 	mmu_seq = kvm->mmu_invalidate_seq;
 	smp_rmb();
 
 	/* Get host physical address for gpa */
-	pfn = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable);
+	pfn = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable, &page);
 	if (is_error_noslot_pfn(pfn)) {
 		printk(KERN_INFO "Couldn't get guest page for gpa %lx!\n",
 		       orig_pte->raddr);
@@ -199,10 +200,9 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	}
 
 out_unlock:
-	if (!orig_pte->may_write || !writable)
-		kvm_release_pfn_clean(pfn);
-	else
-		kvm_release_pfn_dirty(pfn);
+	/* FIXME: Don't unconditionally pass unused=false. */
+	kvm_release_faultin_page(kvm, page, false,
+				 orig_pte->may_write && writable);
 	spin_unlock(&kvm->mmu_lock);
 	if (cpte)
 		kvmppc_mmu_hpte_cache_free(cpte);
-- 
2.46.0.rc1.232.g9752f9e123-goog
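
For callers, the contract after this conversion is: the faulted-in page comes back through an out-parameter and must be released exactly once, with the release variant recording how the page was used. A caller-side sketch modeled on the 32-bit host MMU hunk above; install_mapping_sketch() is a hypothetical stand-in for the arch-specific HPTE work, not a real function:

/* Caller-side sketch of the out-param + single-release contract. */
static int map_one_gpa_sketch(struct kvm_vcpu *vcpu, gpa_t gpa, bool writing)
{
	struct page *page;
	bool writable;
	kvm_pfn_t pfn;

	pfn = kvmppc_gpa_to_pfn(vcpu, gpa, writing, &writable, &page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;		/* no page was returned, nothing to release */

	if (install_mapping_sketch(vcpu, pfn, writable)) {	/* hypothetical helper */
		/* Bailed out before the mapping was used. */
		kvm_release_page_unused(page);
		return -EAGAIN;
	}

	/* Mapping installed; the page was not dirtied via this path. */
	kvm_release_page_clean(page);
	return 0;
}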

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:13 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-65-seanjc@google.com>
Subject: [PATCH v12 64/84] KVM: LoongArch: Mark "struct page" pfns dirty only in "slow" page fault path
From: Sean Christopherson

Mark pages/folios dirty only in the slow page fault path, i.e. only when
mmu_lock is held and the operation is mmu_notifier-protected, as marking a
page/folio dirty after it has been written back can make some filesystems
unhappy (backing KVM guests with such filesystem files is uncommon, and
the race is minuscule, hence the lack of complaints).  See the link below
for details.
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson
Reviewed-by: Bibo Mao
Tested-by: Alex Bennée
---
 arch/loongarch/kvm/mmu.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 2634a9e8d82c..364dd35e0557 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -608,13 +608,13 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 		if (kvm_pte_young(changed))
 			kvm_set_pfn_accessed(pfn);
 
-		if (kvm_pte_dirty(changed)) {
-			mark_page_dirty(kvm, gfn);
-			kvm_set_pfn_dirty(pfn);
-		}
 		if (page)
 			put_page(page);
 	}
+
+	if (kvm_pte_dirty(changed))
+		mark_page_dirty(kvm, gfn);
+
 	return ret;
 out:
 	spin_unlock(&kvm->mmu_lock);
@@ -915,12 +915,14 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 	else
 		++kvm->stat.pages;
 	kvm_set_pte(ptep, new_pte);
-	spin_unlock(&kvm->mmu_lock);
 
-	if (prot_bits & _PAGE_DIRTY) {
-		mark_page_dirty_in_slot(kvm, memslot, gfn);
+	if (writeable)
 		kvm_set_pfn_dirty(pfn);
-	}
+
+	spin_unlock(&kvm->mmu_lock);
+
+	if (prot_bits & _PAGE_DIRTY)
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
 
 	kvm_release_pfn_clean(pfn);
 out:
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:14 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-66-seanjc@google.com>
Subject: [PATCH v12 65/84] KVM: LoongArch: Mark "struct page" pfns accessed only in "slow" page fault path
From: Sean Christopherson

Mark pages accessed only in the slow path, before dropping mmu_lock when
faulting in guest memory so that LoongArch can convert to
kvm_release_faultin_page() without tripping its lockdep assertion on
mmu_lock being held.
Signed-off-by: Sean Christopherson
Reviewed-by: Bibo Mao
Tested-by: Alex Bennée
---
 arch/loongarch/kvm/mmu.c | 20 ++------------------
 1 file changed, 2 insertions(+), 18 deletions(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 364dd35e0557..52b5c16cf250 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -552,12 +552,10 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 {
 	int ret = 0;
-	kvm_pfn_t pfn = 0;
 	kvm_pte_t *ptep, changed, new;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_memory_slot *slot;
-	struct page *page;
 
 	spin_lock(&kvm->mmu_lock);
 
@@ -570,8 +568,6 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 
 	/* Track access to pages marked old */
 	new = kvm_pte_mkyoung(*ptep);
-	/* call kvm_set_pfn_accessed() after unlock */
-
 	if (write && !kvm_pte_dirty(new)) {
 		if (!kvm_pte_write(new)) {
 			ret = -EFAULT;
@@ -595,23 +591,11 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 	}
 
 	changed = new ^ (*ptep);
-	if (changed) {
+	if (changed)
 		kvm_set_pte(ptep, new);
-		pfn = kvm_pte_pfn(new);
-		page = kvm_pfn_to_refcounted_page(pfn);
-		if (page)
-			get_page(page);
-	}
+
 	spin_unlock(&kvm->mmu_lock);
 
-	if (changed) {
-		if (kvm_pte_young(changed))
-			kvm_set_pfn_accessed(pfn);
-
-		if (page)
-			put_page(page);
-	}
-
 	if (kvm_pte_dirty(changed))
 		mark_page_dirty(kvm, gfn);
 
-- 
2.46.0.rc1.232.g9752f9e123-goog
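
Both LoongArch reorderings serve the constraint called out in the changelogs: kvm_release_faultin_page() asserts via lockdep that mmu_lock is held, so accessed/dirty bookkeeping has to be finished before the lock is dropped. A rough sketch of that assumed shape, inferred from the changelog wording rather than copied from the helper:

/* Assumed shape of the release helper's locking contract; illustrative only. */
static inline void release_faultin_page_sketch(struct kvm *kvm, struct page *page,
					       bool unused, bool dirty)
{
	lockdep_assert_held(&kvm->mmu_lock);	/* why accessed/dirty updates move before unlock */

	if (!page)
		return;

	if (unused)
		kvm_release_page_unused(page);
	else if (dirty)
		kvm_release_page_dirty(page);
	else
		kvm_release_page_clean(page);
}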

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:15 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-67-seanjc@google.com>
Subject: [PATCH v12 66/84] KVM: LoongArch: Mark "struct page" pfn accessed before dropping mmu_lock
From: Sean Christopherson

Mark pages accessed before dropping mmu_lock when faulting in guest memory
so that LoongArch can convert to kvm_release_faultin_page() without
tripping its lockdep assertion on mmu_lock being held.
Signed-off-by: Sean Christopherson
Reviewed-by: Bibo Mao
Tested-by: Alex Bennée
---
 arch/loongarch/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 52b5c16cf250..230cafa178d7 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -902,13 +902,13 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 
 	if (writeable)
 		kvm_set_pfn_dirty(pfn);
+	kvm_release_pfn_clean(pfn);
 
 	spin_unlock(&kvm->mmu_lock);
 
 	if (prot_bits & _PAGE_DIRTY)
 		mark_page_dirty_in_slot(kvm, memslot, gfn);
 
-	kvm_release_pfn_clean(pfn);
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
 	return err;
-- 
2.46.0.rc1.232.g9752f9e123-goog
:reply-to; bh=T3Tq4gPotkzFIpSFFawcUtCGWBlTwHcibbE0IfTBXb4=; b=WbA1x+iCP7wo5hCw6NEeNrmlhnG8eQBIBSF3dLqO765rDMqFHjDCglJcj8ucMWmZYS igDiAwBgg865eRsgAVZUb7BzxyKVkMoZ2AFPPIInBOHkOPBFyJKRmFq1UDwNRCuod6PZ hpdNbJj/feLnE5cf7LYZIhfuv8b7ag2lMXvmrFGnln57a6uKTFiWMUDOuvJLRDyGooK/ wtcDFHyfzbCdXonFhqYMxl4zVR2pC3uiVYg74gAjOJjs5Kyl3s6/6rJtInvjgjsaXNve HBdDkguu2VCo96u+7oIJRNb8i81jbXXDWAYTxVQ5IQyRB/eoImdh+Q5XusydjkAyi1f2 D7UQ== X-Forwarded-Encrypted: i=1; AJvYcCWZRlpiviv1TjtNuT1+V8Hdv3bKYgwKNxcdfBfQLCmCBc4qd0LZOZarIrvQlCyDDBPGC5WLSNts3VtAPTu4CCw8NIkcmXYWkpV01U9u X-Gm-Message-State: AOJu0YxebYc24atmPvH/k6S2PYJoNicbbqNNSb3qfFb9r1BFK4a5Thsi iRxSilG1cz2PYbD8vOSqpmHzKQyotRTlkx4woi8e9p9wN2/eCn6yvBSlswqRBllKupirwtCQhRa tSw== X-Google-Smtp-Source: AGHT+IExCpdv69na3gU93uhAEfGJO0DgmCPrxQU8TmhxNm9ObFOOTchJQQycuTaWmcpCMOvBszJjylffDDY= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6a00:2d99:b0:70d:2b2a:60f7 with SMTP id d2e1a72fcca58-70ece928763mr9066b3a.0.1722038096309; Fri, 26 Jul 2024 16:54:56 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 26 Jul 2024 16:52:16 -0700 In-Reply-To: <20240726235234.228822-1-seanjc@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240726235234.228822-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240726235234.228822-68-seanjc@google.com> Subject: [PATCH v12 67/84] KVM: LoongArch: Use kvm_faultin_pfn() to map pfns into the guest From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack , David Stevens Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Convert LoongArch to kvm_faultin_pfn()+kvm_release_faultin_page(), which are new APIs to consolidate arch code and provide consistent behavior across all KVM architectures. 
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/loongarch/kvm/mmu.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 230cafa178d7..83e4376deabb 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -780,6 +780,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
         struct kvm *kvm = vcpu->kvm;
         struct kvm_memory_slot *memslot;
         struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+        struct page *page;
 
         /* Try the fast path to handle old / clean pages */
         srcu_idx = srcu_read_lock(&kvm->srcu);
@@ -807,7 +808,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
         mmu_seq = kvm->mmu_invalidate_seq;
         /*
          * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads in
-         * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
+         * kvm_faultin_pfn() (which calls get_user_pages()), so that we don't
          * risk the page we get a reference to getting unmapped before we have a
          * chance to grab the mmu_lock without mmu_invalidate_retry() noticing.
          *
@@ -819,7 +820,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
         smp_rmb();
 
         /* Slow path - ask KVM core whether we can access this GPA */
-        pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable);
+        pfn = kvm_faultin_pfn(vcpu, gfn, write, &writeable, &page);
         if (is_error_noslot_pfn(pfn)) {
                 err = -EFAULT;
                 goto out;
         }
@@ -831,10 +832,10 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
                 /*
                  * This can happen when mappings are changed asynchronously, but
                  * also synchronously if a COW is triggered by
-                 * gfn_to_pfn_prot().
+                 * kvm_faultin_pfn().
                  */
                 spin_unlock(&kvm->mmu_lock);
-                kvm_release_pfn_clean(pfn);
+                kvm_release_page_unused(page);
                 if (retry_no > 100) {
                         retry_no = 0;
                         schedule();
@@ -900,10 +901,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
         ++kvm->stat.pages;
         kvm_set_pte(ptep, new_pte);
 
-        if (writeable)
-                kvm_set_pfn_dirty(pfn);
-        kvm_release_pfn_clean(pfn);
-
+        kvm_release_faultin_page(kvm, page, false, writeable);
         spin_unlock(&kvm->mmu_lock);
 
         if (prot_bits & _PAGE_DIRTY)
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:17 -0700
Message-ID: <20240726235234.228822-69-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 68/84] KVM: MIPS: Mark "struct page" pfns dirty only in "slow" page fault path
From: Sean Christopherson

Mark pages/folios dirty only in the slow page fault path, i.e. only when
mmu_lock is held and the operation is mmu_notifier-protected, as marking a
page/folio dirty after it has been written back can make some filesystems
unhappy (backing KVM guests with such filesystem files is uncommon, and
the race is minuscule, hence the lack of complaints).  See the link below
for details.
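
For reference, a minimal sketch of the resulting ordering; the helpers are
the ones used in the diff below, and the surrounding PTE manipulation is
elided:

  /* Sketch: dirty the pfn only while mmu_lock is held (slow path only). */
  spin_lock(&kvm->mmu_lock);
  /* ... install/update the PTE, mark_page_dirty(kvm, gfn) on write faults ... */
  if (writeable)
          kvm_set_pfn_dirty(pfn);   /* only here, under mmu_lock */
  spin_unlock(&kvm->mmu_lock);
  kvm_release_pfn_clean(pfn);       /* the release itself stays outside, for now */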
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/mips/kvm/mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index c17157e700c0..4da9ce4eb54d 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -514,7 +514,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
                 set_pte(ptep, pte_mkdirty(*ptep));
                 pfn = pte_pfn(*ptep);
                 mark_page_dirty(kvm, gfn);
-                kvm_set_pfn_dirty(pfn);
         }
 
         if (out_entry)
@@ -628,7 +627,6 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
                 if (write_fault) {
                         prot_bits |= __WRITEABLE;
                         mark_page_dirty(kvm, gfn);
-                        kvm_set_pfn_dirty(pfn);
                 }
         }
         entry = pfn_pte(pfn, __pgprot(prot_bits));
@@ -642,6 +640,9 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
         if (out_buddy)
                 *out_buddy = *ptep_buddy(ptep);
 
+        if (writeable)
+                kvm_set_pfn_dirty(pfn);
+
         spin_unlock(&kvm->mmu_lock);
         kvm_release_pfn_clean(pfn);
         kvm_set_pfn_accessed(pfn);
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:18 -0700
Message-ID: <20240726235234.228822-70-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 69/84] KVM: MIPS: Mark "struct page" pfns accessed only in "slow" page fault path
From: Sean Christopherson

Mark pages accessed only in the slow page fault path in order to remove
an unnecessary user of kvm_pfn_to_refcounted_page().

Marking pages accessed in the primary MMU during KVM page fault handling
isn't harmful, but it's largely pointless and likely a waste of cycles
since the primary MMU will call into KVM via mmu_notifiers when aging
pages.  I.e. KVM participates in a "pull" model, so there's no need to
also "push" updates.
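
A sketch of the "pull" model referred to above, with deliberately
hypothetical names (the real mmu_notifier callback plumbing is not part of
this patch):

  /*
   * Sketch: the primary MMU "pulls" age information out of KVM via an
   * mmu_notifier aging callback, so the fault path need not "push" it.
   */
  static bool sketch_age_gfn(struct kvm *kvm, gfn_t gfn)
  {
          bool young = false;

          spin_lock(&kvm->mmu_lock);
          /* young = test-and-clear the accessed bit in the secondary PTE */
          spin_unlock(&kvm->mmu_lock);

          return young;
  }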
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/mips/kvm/mmu.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 4da9ce4eb54d..f1e4b618ec6d 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -484,8 +484,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
         struct kvm *kvm = vcpu->kvm;
         gfn_t gfn = gpa >> PAGE_SHIFT;
         pte_t *ptep;
-        kvm_pfn_t pfn = 0;        /* silence bogus GCC warning */
-        bool pfn_valid = false;
         int ret = 0;
 
         spin_lock(&kvm->mmu_lock);
@@ -498,12 +496,9 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
         }
 
         /* Track access to pages marked old */
-        if (!pte_young(*ptep)) {
+        if (!pte_young(*ptep))
                 set_pte(ptep, pte_mkyoung(*ptep));
-                pfn = pte_pfn(*ptep);
-                pfn_valid = true;
-                /* call kvm_set_pfn_accessed() after unlock */
-        }
+
         if (write_fault && !pte_dirty(*ptep)) {
                 if (!pte_write(*ptep)) {
                         ret = -EFAULT;
@@ -512,7 +507,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 
                 /* Track dirtying of writeable pages */
                 set_pte(ptep, pte_mkdirty(*ptep));
-                pfn = pte_pfn(*ptep);
                 mark_page_dirty(kvm, gfn);
         }
 
@@ -523,8 +517,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 
 out:
         spin_unlock(&kvm->mmu_lock);
-        if (pfn_valid)
-                kvm_set_pfn_accessed(pfn);
         return ret;
 }
 
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:19 -0700
Message-ID: <20240726235234.228822-71-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 70/84] KVM: MIPS: Mark "struct page" pfns accessed prior to dropping mmu_lock
From: Sean Christopherson

Mark pages accessed before dropping mmu_lock when faulting in guest
memory so that MIPS can convert to kvm_release_faultin_page() without
tripping its lockdep assertion on mmu_lock being held.
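
For reference, the before/after ordering in the diff below, sketched with
the helpers it uses:

  /* Before: release and accessed "push" happen after dropping mmu_lock. */
  spin_unlock(&kvm->mmu_lock);
  kvm_release_pfn_clean(pfn);
  kvm_set_pfn_accessed(pfn);

  /* After: release (and any accessed/dirty update) while mmu_lock is held. */
  kvm_release_pfn_clean(pfn);
  spin_unlock(&kvm->mmu_lock);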
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/mips/kvm/mmu.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index f1e4b618ec6d..69463ab24d97 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -634,10 +634,9 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 
         if (writeable)
                 kvm_set_pfn_dirty(pfn);
-
-        spin_unlock(&kvm->mmu_lock);
         kvm_release_pfn_clean(pfn);
-        kvm_set_pfn_accessed(pfn);
+
+        spin_unlock(&kvm->mmu_lock);
 out:
         srcu_read_unlock(&kvm->srcu, srcu_idx);
         return err;
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:20 -0700
Message-ID: <20240726235234.228822-72-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 71/84] KVM: MIPS: Use kvm_faultin_pfn() to map pfns into the guest
From: Sean Christopherson

Convert MIPS to kvm_faultin_pfn()+kvm_release_faultin_page(), which are
new APIs to consolidate arch code and provide consistent behavior across
all KVM architectures.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/mips/kvm/mmu.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 69463ab24d97..d2c3b6b41f18 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -557,6 +557,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
         bool writeable;
         unsigned long prot_bits;
         unsigned long mmu_seq;
+        struct page *page;
 
         /* Try the fast path to handle old / clean pages */
         srcu_idx = srcu_read_lock(&kvm->srcu);
@@ -578,7 +579,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
         mmu_seq = kvm->mmu_invalidate_seq;
         /*
          * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads
-         * in gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
+         * in kvm_faultin_pfn() (which calls get_user_pages()), so that we don't
          * risk the page we get a reference to getting unmapped before we have a
          * chance to grab the mmu_lock without mmu_invalidate_retry() noticing.
          *
@@ -590,7 +591,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
         smp_rmb();
 
         /* Slow path - ask KVM core whether we can access this GPA */
-        pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writeable);
+        pfn = kvm_faultin_pfn(vcpu, gfn, write_fault, &writeable, &page);
         if (is_error_noslot_pfn(pfn)) {
                 err = -EFAULT;
                 goto out;
         }
@@ -602,10 +603,10 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
                 /*
                  * This can happen when mappings are changed asynchronously, but
                  * also synchronously if a COW is triggered by
-                 * gfn_to_pfn_prot().
+                 * kvm_faultin_pfn().
                  */
                 spin_unlock(&kvm->mmu_lock);
-                kvm_release_pfn_clean(pfn);
+                kvm_release_page_unused(page);
                 goto retry;
         }
 
@@ -632,10 +633,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
         if (out_buddy)
                 *out_buddy = *ptep_buddy(ptep);
 
-        if (writeable)
-                kvm_set_pfn_dirty(pfn);
-        kvm_release_pfn_clean(pfn);
-
+        kvm_release_faultin_page(kvm, page, false, writeable);
         spin_unlock(&kvm->mmu_lock);
 out:
         srcu_read_unlock(&kvm->srcu, srcu_idx);
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:21 -0700
Message-ID: <20240726235234.228822-73-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 72/84] KVM: PPC: Remove extra get_page() to fix page refcount leak
From: Sean Christopherson

Don't manually do get_page() when patching dcbz, as gfn_to_page() gifts
the caller a reference.  I.e. doing get_page() will leak the page due to
not putting all references.
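
For reference, the reference-counting rule being fixed, sketched with the
calls involved (the surrounding dcbz-patching logic is elided):

  struct page *hpage = gfn_to_page(vcpu->kvm, gfn);   /* ref already taken */
  if (!hpage)
          return;
  /* get_page(hpage);  <-- the extra, never-balanced reference being removed */
  /* ... map and modify the page contents ... */
  put_page(hpage);                                    /* balances gfn_to_page() */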
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/powerpc/kvm/book3s_pr.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 1bdcd4ee4813..ae4757ac0848 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -652,7 +652,6 @@ static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
         hpage_offset &= ~0xFFFULL;
         hpage_offset /= 4;
 
-        get_page(hpage);
         page = kmap_atomic(hpage);
 
         /* patch dcbz into reserved instruction, so we trap */
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:22 -0700
Message-ID: <20240726235234.228822-74-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 73/84] KVM: PPC: Use kvm_vcpu_map() to map guest memory to patch dcbz instructions
From: Sean Christopherson

Use kvm_vcpu_map() when patching dcbz in guest memory, as a regular GUP
isn't technically sufficient when writing to data in the target pages.
As per Documentation/core-api/pin_user_pages.rst:

  Correct (uses FOLL_PIN calls):
      pin_user_pages()
      write to the data within the pages
      unpin_user_pages()

  INCORRECT (uses FOLL_GET calls):
      get_user_pages()
      write to the data within the pages
      put_page()

As a happy bonus, using kvm_vcpu_{,un}map() takes care of creating a
mapping and marking the page dirty.
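
For reference, a minimal sketch of the kvm_vcpu_map()/kvm_vcpu_unmap()
pattern the diff below adopts; the gfn derivation and error handling are
simplified relative to the real code:

  struct kvm_host_map map;
  u32 *insns;

  if (kvm_vcpu_map(vcpu, gpa >> PAGE_SHIFT, &map))
          return;                     /* mapping failed, nothing to patch */

  insns = map.hva;                    /* host-virtual view of the guest page */
  /* ... scan and patch instructions via insns[] ... */

  kvm_vcpu_unmap(vcpu, &map);         /* unmaps and marks the page dirty */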
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/powerpc/kvm/book3s_pr.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index ae4757ac0848..393c18958a5b 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -639,28 +639,27 @@ static void kvmppc_set_pvr_pr(struct kvm_vcpu *vcpu, u32 pvr)
  */
 static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
 {
-        struct page *hpage;
+        struct kvm_host_map map;
         u64 hpage_offset;
         u32 *page;
-        int i;
+        int i, r;
 
-        hpage = gfn_to_page(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
-        if (!hpage)
+        r = kvm_vcpu_map(vcpu, pte->raddr >> PAGE_SHIFT, &map);
+        if (r)
                 return;
 
         hpage_offset = pte->raddr & ~PAGE_MASK;
         hpage_offset &= ~0xFFFULL;
         hpage_offset /= 4;
 
-        page = kmap_atomic(hpage);
+        page = map.hva;
 
         /* patch dcbz into reserved instruction, so we trap */
         for (i=hpage_offset; i < hpage_offset + (HW_PAGE_SIZE / 4); i++)
                 if ((be32_to_cpu(page[i]) & 0xff0007ff) == INS_DCBZ)
                         page[i] &= cpu_to_be32(0xfffffff7);
 
-        kunmap_atomic(page);
-        put_page(hpage);
+        kvm_vcpu_unmap(vcpu, &map);
 }
 
 static bool kvmppc_visible_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:23 -0700
Message-ID: <20240726235234.228822-75-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 74/84] KVM: Convert gfn_to_page() to use kvm_follow_pfn()
From: Sean Christopherson

Convert gfn_to_page() to the new kvm_follow_pfn() internal API, which
will eventually allow removing gfn_to_pfn() and
kvm_pfn_to_refcounted_page().
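
The caller-visible contract is unchanged; for reference, a sketch of
typical usage around gfn_to_page(), using only release helpers that appear
elsewhere in this series:

  struct page *page = gfn_to_page(kvm, gfn);
  if (!page)
          return -EFAULT;
  /* ... access the page contents ... */
  kvm_release_page_dirty(page);   /* or kvm_release_page_clean() if untouched */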
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6dc448602751..d0f55a6ecb31 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3181,14 +3181,16 @@ EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
  */
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 {
-        kvm_pfn_t pfn;
+        struct page *refcounted_page = NULL;
+        struct kvm_follow_pfn kfp = {
+                .slot = gfn_to_memslot(kvm, gfn),
+                .gfn = gfn,
+                .flags = FOLL_WRITE,
+                .refcounted_page = &refcounted_page,
+        };
 
-        pfn = gfn_to_pfn(kvm, gfn);
-
-        if (is_error_noslot_pfn(pfn))
-                return NULL;
-
-        return kvm_pfn_to_refcounted_page(pfn);
+        (void)kvm_follow_pfn(&kfp);
+        return refcounted_page;
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Date: Fri, 26 Jul 2024 16:52:24 -0700
Message-ID: <20240726235234.228822-76-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 75/84] KVM: Add support for read-only usage of gfn_to_page()
From: Sean Christopherson

Rework gfn_to_page() to support read-only accesses so that it can be used
by arm64 to get MTE tags out of guest memory.

Opportunistically rewrite the comment to be even more stern about using
gfn_to_page(), as there are very few scenarios where requiring a struct
page is actually the right thing to do (though there are such scenarios).
Add a FIXME to call out that KVM probably should be pinning pages, not
just getting pages.
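
For reference, a sketch of the read-only usage this enables; the calls
match the new helper's signature in the diff below, everything else is
illustrative:

  struct page *page = __gfn_to_page(kvm, gfn, false);  /* no FOLL_WRITE */
  if (!page)
          return -EFAULT;
  /* ... read-only access, e.g. copying data out to userspace ... */
  kvm_release_page_clean(page);                         /* page not dirtied */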
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h |  7 ++++++-
 virt/kvm/kvm_main.c      | 15 ++++++++-------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 91341cdc6562..f2d3c3c436cc 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1198,7 +1198,12 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
                        struct page **pages, int nr_pages);
 
-struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
+struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write);
+static inline struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
+{
+        return __gfn_to_page(kvm, gfn, true);
+}
+
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable);
 unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d0f55a6ecb31..16bc3ac3ff84 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3174,25 +3174,26 @@ int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
 EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
 
 /*
- * Do not use this helper unless you are absolutely certain the gfn _must_ be
- * backed by 'struct page'.  A valid example is if the backing memslot is
- * controlled by KVM.  Note, if the returned page is valid, it's refcount has
- * been elevated by gfn_to_pfn().
+ * Don't use this API unless you are absolutely, positively certain that KVM
+ * needs to get a struct page, e.g. to pin the page for firmware DMA.
+ *
+ * FIXME: Users of this API likely need to FOLL_PIN the page, not just elevate
+ * its refcount.
  */
-struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
+struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write)
 {
         struct page *refcounted_page = NULL;
         struct kvm_follow_pfn kfp = {
                 .slot = gfn_to_memslot(kvm, gfn),
                 .gfn = gfn,
-                .flags = FOLL_WRITE,
+                .flags = write ? FOLL_WRITE : 0,
                 .refcounted_page = &refcounted_page,
         };
 
         (void)kvm_follow_pfn(&kfp);
         return refcounted_page;
 }
-EXPORT_SYMBOL_GPL(gfn_to_page);
+EXPORT_SYMBOL_GPL(__gfn_to_page);
 
 int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
                    bool writable)
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
From: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:25 -0700
Message-ID: <20240726235234.228822-77-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 76/84] KVM: arm64: Use __gfn_to_page() when copying MTE tags to/from userspace

Use __gfn_to_page() instead when copying MTE tags between guest and
userspace.  This will eventually allow removing gfn_to_pfn_prot(),
gfn_to_pfn(), kvm_pfn_to_refcounted_page(), and related APIs.
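For illustration only (none of the code below is taken from this patch, and
the helper name is invented), a converted caller follows the same shape as
the MTE ioctl after this change: acquire a refcounted struct page, reject
anything that is not online struct-page memory, and release it with the
page-based helpers:

#include <linux/kvm_host.h>
#include <linux/memory_hotplug.h>

/* Hypothetical example, mirroring the pattern used by the MTE ioctl. */
static int demo_touch_gfn(struct kvm *kvm, gfn_t gfn, bool write)
{
	/* Returns a refcounted struct page on success, NULL on failure. */
	struct page *page = __gfn_to_page(kvm, gfn, write);

	if (!page)
		return -EFAULT;

	/* Reject pfns without an online struct page, e.g. ZONE_DEVICE. */
	if (!pfn_to_online_page(page_to_pfn(page))) {
		kvm_release_page_unused(page);
		return -EFAULT;
	}

	/* ... map and access the page here ... */

	if (write)
		kvm_release_page_dirty(page);
	else
		kvm_release_page_clean(page);
	return 0;
}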
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/arm64/kvm/guest.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 962f985977c2..4cd7ffa76794 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1051,20 +1051,18 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 	}
 
 	while (length > 0) {
-		kvm_pfn_t pfn = gfn_to_pfn_prot(kvm, gfn, write, NULL);
+		struct page *page = __gfn_to_page(kvm, gfn, write);
 		void *maddr;
 		unsigned long num_tags;
-		struct page *page;
 
-		if (is_error_noslot_pfn(pfn)) {
-			ret = -EFAULT;
-			goto out;
-		}
-
-		page = pfn_to_online_page(pfn);
 		if (!page) {
+			ret = -EFAULT;
+			goto out;
+		}
+
+		if (!pfn_to_online_page(page_to_pfn(page))) {
 			/* Reject ZONE_DEVICE memory */
-			kvm_release_pfn_clean(pfn);
+			kvm_release_page_unused(page);
 			ret = -EFAULT;
 			goto out;
 		}
@@ -1078,7 +1076,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			/* No tags in memory, so write zeros */
 			num_tags = MTE_GRANULES_PER_PAGE -
 				clear_user(tags, MTE_GRANULES_PER_PAGE);
-			kvm_release_pfn_clean(pfn);
+			kvm_release_page_clean(page);
 		} else {
 			/*
 			 * Only locking to serialise with a concurrent
@@ -1093,8 +1091,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			if (num_tags != MTE_GRANULES_PER_PAGE)
 				mte_clear_page_tags(maddr);
 			set_page_mte_tagged(page);
-
-			kvm_release_pfn_dirty(pfn);
+			kvm_release_page_dirty(page);
 		}
 
 		if (num_tags != MTE_GRANULES_PER_PAGE) {
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
From: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:26 -0700
Message-ID: <20240726235234.228822-78-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 77/84] KVM: PPC: Explicitly require struct page memory for Ultravisor sharing

Explicitly require "struct page" memory when sharing memory between guest
and host via an Ultravisor.  Given the number of pfn_to_page() calls in the
code, it's safe to assume that KVM already requires that the pfn returned
by gfn_to_pfn() is backed by struct page, i.e. this is likely a bug fix,
not a reduction in KVM capabilities.

Switching to gfn_to_page() will eventually allow removing gfn_to_pfn()
and kvm_pfn_to_refcounted_page().
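As a rough sketch of the calling convention this conversion enforces (not
part of the patch; demo_uv_share() is a made-up stand-in for the real
uv_page_in() calls): the caller gets a struct page, derives the pfn/real
address only for the firmware call, and releases the page afterwards:

#include <linux/kvm_host.h>

/* Made-up firmware hook, standing in for uv_page_in() and friends. */
static int demo_uv_share(unsigned long real_addr, unsigned int page_shift);

static int demo_share_gfn(struct kvm *kvm, gfn_t gfn, unsigned int page_shift)
{
	struct page *page = gfn_to_page(kvm, gfn);	/* implies FOLL_WRITE */
	int ret;

	if (!page)
		return -EFAULT;

	/* Derive the real address from the struct page, not from a bare pfn. */
	ret = demo_uv_share(page_to_pfn(page) << page_shift, page_shift);

	kvm_release_page_clean(page);
	return ret;
}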
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/powerpc/kvm/book3s_hv_uvmem.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 92f33115144b..3a6592a31a10 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -879,9 +879,8 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 {
 
 	int ret = H_PARAMETER;
-	struct page *uvmem_page;
+	struct page *page, *uvmem_page;
 	struct kvmppc_uvmem_page_pvt *pvt;
-	unsigned long pfn;
 	unsigned long gfn = gpa >> page_shift;
 	int srcu_idx;
 	unsigned long uvmem_pfn;
@@ -901,8 +900,8 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 
 retry:
 	mutex_unlock(&kvm->arch.uvmem_lock);
-	pfn = gfn_to_pfn(kvm, gfn);
-	if (is_error_noslot_pfn(pfn))
+	page = gfn_to_page(kvm, gfn);
+	if (!page)
 		goto out;
 
 	mutex_lock(&kvm->arch.uvmem_lock);
@@ -911,16 +910,16 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 		pvt = uvmem_page->zone_device_data;
 		pvt->skip_page_out = true;
 		pvt->remove_gfn = false; /* it continues to be a valid GFN */
-		kvm_release_pfn_clean(pfn);
+		kvm_release_page_unused(page);
 		goto retry;
 	}
 
-	if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0,
+	if (!uv_page_in(kvm->arch.lpid, page_to_pfn(page) << page_shift, gpa, 0,
 				page_shift)) {
 		kvmppc_gfn_shared(gfn, kvm);
 		ret = H_SUCCESS;
 	}
-	kvm_release_pfn_clean(pfn);
+	kvm_release_page_clean(page);
 	mutex_unlock(&kvm->arch.uvmem_lock);
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
@@ -1083,21 +1082,21 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 
 int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
 {
-	unsigned long pfn;
+	struct page *page;
 	int ret = U_SUCCESS;
 
-	pfn = gfn_to_pfn(kvm, gfn);
-	if (is_error_noslot_pfn(pfn))
+	page = gfn_to_page(kvm, gfn);
+	if (!page)
 		return -EFAULT;
 
 	mutex_lock(&kvm->arch.uvmem_lock);
 	if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL))
 		goto out;
 
-	ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gfn << PAGE_SHIFT,
-			 0, PAGE_SHIFT);
+	ret = uv_page_in(kvm->arch.lpid, page_to_pfn(page) << PAGE_SHIFT,
+			 gfn << PAGE_SHIFT, 0, PAGE_SHIFT);
 out:
-	kvm_release_pfn_clean(pfn);
+	kvm_release_page_clean(page);
 	mutex_unlock(&kvm->arch.uvmem_lock);
 	return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT;
 }
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
From: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:27 -0700
Message-ID: <20240726235234.228822-79-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 78/84] KVM: Drop gfn_to_pfn() APIs now that all users are gone

Drop gfn_to_pfn() and all its variants now that all users are gone.

No functional change intended.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h | 11 --------
 virt/kvm/kvm_main.c      | 59 ----------------------------------------
 2 files changed, 70 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f2d3c3c436cc..34a1cadb1b80 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1260,14 +1260,6 @@ static inline kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 				 write ? FOLL_WRITE : 0, writable, refcounted_page);
 }
 
-kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
-kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
-			  bool *writable);
-kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn);
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool interruptible, bool no_wait,
-			       bool write_fault, bool *writable);
-
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
 void kvm_release_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_dirty(kvm_pfn_t pfn);
@@ -1342,9 +1334,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn);
 void mark_page_dirty_in_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot, gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 
-
-kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
-
 int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map,
 		   bool writable);
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 16bc3ac3ff84..5dcf3561b829 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3059,65 +3059,6 @@ static kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp)
 	return hva_to_pfn(kfp);
 }
 
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool interruptible, bool no_wait,
-			       bool write_fault, bool *writable)
-{
-	struct kvm_follow_pfn kfp = {
-		.slot = slot,
-		.gfn = gfn,
-		.map_writable = writable,
-	};
-
-	if (write_fault)
-		kfp.flags |= FOLL_WRITE;
-	if (no_wait)
-		kfp.flags |= FOLL_NOWAIT;
-	if (interruptible)
-		kfp.flags |= FOLL_INTERRUPTIBLE;
-
-	return kvm_follow_pfn(&kfp);
-}
-EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
-
-kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
-			  bool *writable)
-{
-	struct kvm_follow_pfn kfp = {
-		.slot = gfn_to_memslot(kvm, gfn),
-		.gfn = gfn,
-		.flags = write_fault ? FOLL_WRITE : 0,
-		.map_writable = writable,
-	};
-
-	return kvm_follow_pfn(&kfp);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
-
-kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
-{
-	struct kvm_follow_pfn kfp = {
-		.slot = slot,
-		.gfn = gfn,
-		.flags = FOLL_WRITE,
-	};
-
-	return kvm_follow_pfn(&kfp);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
-
-kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
-{
-	return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn);
-
-kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
-{
-	return gfn_to_pfn_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
-}
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
-
 kvm_pfn_t kvm_lookup_pfn(struct kvm *kvm, gfn_t gfn)
 {
 	struct page *refcounted_page = NULL;
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
From: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:28 -0700
Message-ID: <20240726235234.228822-80-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 79/84] KVM: s390: Use kvm_release_page_dirty() to unpin "struct page" memory

Use kvm_release_page_dirty() when unpinning guest pages, as the pfn was
retrieved via pin_guest_page(), i.e. is guaranteed to be backed by struct
page memory.  This will allow dropping kvm_release_pfn_dirty() and friends.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/s390/kvm/vsie.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index 566697ee37eb..f6f1569be1cc 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -670,7 +670,7 @@ static int pin_guest_page(struct kvm *kvm, gpa_t gpa, hpa_t *hpa)
 /* Unpins a page previously pinned via pin_guest_page, marking it as dirty. */
 static void unpin_guest_page(struct kvm *kvm, gpa_t gpa, hpa_t hpa)
 {
-	kvm_release_pfn_dirty(hpa >> PAGE_SHIFT);
+	kvm_release_page_dirty(pfn_to_page(hpa >> PAGE_SHIFT));
 	/* mark the page always as dirty for migration */
 	mark_page_dirty(kvm, gpa_to_gfn(gpa));
 }
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
From: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:29 -0700
Message-ID: <20240726235234.228822-81-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 80/84] KVM: Make kvm_follow_pfn.refcounted_page a required field

Now that the legacy gfn_to_pfn() APIs are gone, and all callers of
hva_to_pfn() pass in a refcounted_page pointer, make it a required field
to ensure all future usage in KVM plays nice.
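A minimal sketch of what the requirement means for code inside virt/kvm
(illustrative only; kvm_follow_pfn() and struct kvm_follow_pfn are internal
helpers, and the wrapper below is invented): every lookup must now supply a
refcounted_page out-pointer, and the caller owns a reference whenever it
comes back non-NULL:

/* Illustrative internal-style helper; assumes the kvm_follow_pfn internals. */
static kvm_pfn_t demo_lookup(struct kvm *kvm, gfn_t gfn)
{
	struct page *refcounted_page = NULL;
	struct kvm_follow_pfn kfp = {
		.slot = gfn_to_memslot(kvm, gfn),
		.gfn = gfn,
		.flags = FOLL_WRITE,
		/* Mandatory: hva_to_pfn() now WARNs and fails without it. */
		.refcounted_page = &refcounted_page,
	};
	kvm_pfn_t pfn = kvm_follow_pfn(&kfp);

	/* The caller holds a reference iff a refcounted page was returned. */
	if (refcounted_page)
		kvm_release_page_unused(refcounted_page);

	return pfn;
}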
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 virt/kvm/kvm_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5dcf3561b829..030a08d4b21d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2844,8 +2844,7 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 		pfn = page_to_pfn(page);
 	}
 
-	if (kfp->refcounted_page)
-		*kfp->refcounted_page = page;
+	*kfp->refcounted_page = page;
 
 	return pfn;
 }
@@ -3001,6 +3000,9 @@ kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp)
 
 	might_sleep();
 
+	if (WARN_ON_ONCE(!kfp->refcounted_page))
+		return KVM_PFN_ERR_FAULT;
+
 	if (hva_to_pfn_fast(kfp, &pfn))
 		return pfn;
 
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
From: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:30 -0700
Message-ID: <20240726235234.228822-82-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 81/84] KVM: x86/mmu: Don't mark "struct page" accessed when zapping SPTEs

Don't mark pages/folios as accessed in the primary MMU when zapping SPTEs,
as doing so relies on kvm_pfn_to_refcounted_page(), and generally speaking
is unnecessary and wasteful.  KVM participates in page aging via
mmu_notifiers, so there's no need to push "accessed" updates to the
primary MMU.

And if KVM zaps a SPTE in response to an mmu_notifier, marking it accessed
_after_ the primary MMU has decided to zap the page is likely to go
unnoticed, i.e. odds are good that, if the page is being zapped for
reclaim, the page will be swapped out regardless of whether or not KVM
marks the page accessed.

Dropping x86's use of kvm_set_pfn_accessed() also paves the way for
removing kvm_pfn_to_refcounted_page() and all its users.
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/x86/kvm/mmu/mmu.c     | 17 -----------------
 arch/x86/kvm/mmu/tdp_mmu.c |  3 ---
 2 files changed, 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2a0cfa225c8d..5979eeb916cd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -546,10 +546,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
  */
 static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 {
-	kvm_pfn_t pfn;
 	u64 old_spte = *sptep;
 	int level = sptep_to_sp(sptep)->role.level;
-	struct page *page;
 
 	if (!is_shadow_present_pte(old_spte) ||
 	    !spte_has_volatile_bits(old_spte))
@@ -561,21 +559,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 		return old_spte;
 
 	kvm_update_page_stats(kvm, level, -1);
-
-	pfn = spte_to_pfn(old_spte);
-
-	/*
-	 * KVM doesn't hold a reference to any pages mapped into the guest, and
-	 * instead uses the mmu_notifier to ensure that KVM unmaps any pages
-	 * before they are reclaimed.  Sanity check that, if the pfn is backed
-	 * by a refcounted page, the refcount is elevated.
-	 */
-	page = kvm_pfn_to_refcounted_page(pfn);
-	WARN_ON_ONCE(page && !page_count(page));
-
-	if (is_accessed_spte(old_spte))
-		kvm_set_pfn_accessed(pfn);
-
 	return old_spte;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d1de5f28c445..dc153cf92a40 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -861,9 +861,6 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 
 		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 
-		if (is_accessed_spte(iter.old_spte))
-			kvm_set_pfn_accessed(spte_to_pfn(iter.old_spte));
-
 		/*
 		 * Zappings SPTEs in invalid roots doesn't require a TLB flush,
 		 * see kvm_tdp_mmu_zap_invalidated_roots() for details.
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
From: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:31 -0700
Message-ID: <20240726235234.228822-83-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 82/84] KVM: arm64: Don't mark "struct page" accessed when making SPTE young

Don't mark pages/folios as accessed in the primary MMU when making a SPTE
young in KVM's secondary MMU, as doing so relies on
kvm_pfn_to_refcounted_page(), and generally speaking is unnecessary and
wasteful.  KVM participates in page aging via mmu_notifiers, so there's no
need to push "accessed" updates to the primary MMU.

Dropping use of kvm_set_pfn_accessed() also paves the way for removing
kvm_pfn_to_refcounted_page() and all its users.

Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +---
 arch/arm64/kvm/hyp/pgtable.c         | 7 ++-----
 arch/arm64/kvm/mmu.c                 | 6 +-----
 3 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 19278dfe7978..676d80723c38 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -632,10 +632,8 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  *
  * If there is a valid, leaf page-table entry used to translate @addr, then
  * set the access flag in that entry.
- *
- * Return: The old page-table entry prior to setting the flag, 0 on failure.
  */
-kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
 
 /**
  * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 9e2bbee77491..6679e02a02c4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1287,19 +1287,16 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 					NULL, NULL, 0);
 }
 
-kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
 {
-	kvm_pte_t pte = 0;
 	int ret;
 
 	ret = stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
-				       &pte, NULL,
+				       NULL, NULL,
 				       KVM_PGTABLE_WALK_HANDLE_FAULT |
 				       KVM_PGTABLE_WALK_SHARED);
 	if (!ret)
 		dsb(ishst);
-
-	return pte;
 }
 
 struct stage2_age_data {
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 756fc856ab44..8fd8ea5b5795 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1699,18 +1699,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 /* Resolve the access fault by making the page young again. */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
-	kvm_pte_t pte;
 	struct kvm_s2_mmu *mmu;
 
 	trace_kvm_access_fault(fault_ipa);
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	pte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
+	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
 	read_unlock(&vcpu->kvm->mmu_lock);
-
-	if (kvm_pte_valid(pte))
-		kvm_set_pfn_accessed(kvm_pte_to_pfn(pte));
 }
 
 /**
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
From: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:32 -0700
Message-ID: <20240726235234.228822-84-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
Subject: [PATCH v12 83/84] KVM: Drop APIs that manipulate "struct page" via pfns

Remove all kvm_{release,set}_pfn_*() APIs now that all users are gone.

No functional change intended.
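Purely as an illustration of the migration this removal forces on any
remaining (e.g. out-of-tree) callers, and not code taken from the patch:
the pfn-based release helpers disappear, so a caller must hold on to the
struct page it actually acquired and release that instead.

/* Hypothetical caller-side replacement for the removed pfn helpers. */
static void demo_put_guest_page(struct page *page, bool dirty)
{
	/* Before: kvm_release_pfn_dirty(pfn) / kvm_release_pfn_clean(pfn). */
	if (dirty)
		kvm_release_page_dirty(page);
	else
		kvm_release_page_clean(page);
}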
Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h |  5 ----
 virt/kvm/kvm_main.c      | 55 ----------------------------------------
 2 files changed, 60 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 34a1cadb1b80..87d61f16a449 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1260,11 +1260,6 @@ static inline kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 				 write ? FOLL_WRITE : 0, writable, refcounted_page);
 }
 
-void kvm_release_pfn_clean(kvm_pfn_t pfn);
-void kvm_release_pfn_dirty(kvm_pfn_t pfn);
-void kvm_set_pfn_dirty(kvm_pfn_t pfn);
-void kvm_set_pfn_accessed(kvm_pfn_t pfn);
-
 int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 			int len);
 int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 030a08d4b21d..8b85e1130a63 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3200,61 +3200,6 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
-void kvm_release_pfn_clean(kvm_pfn_t pfn)
-{
-	struct page *page;
-
-	if (is_error_noslot_pfn(pfn))
-		return;
-
-	page = kvm_pfn_to_refcounted_page(pfn);
-	if (!page)
-		return;
-
-	kvm_release_page_clean(page);
-}
-EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
-
-void kvm_release_pfn_dirty(kvm_pfn_t pfn)
-{
-	struct page *page;
-
-	if (is_error_noslot_pfn(pfn))
-		return;
-
-	page = kvm_pfn_to_refcounted_page(pfn);
-	if (!page)
-		return;
-
-	kvm_release_page_dirty(page);
-}
-EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
-
-/*
- * Note, checking for an error/noslot pfn is the caller's responsibility when
- * directly marking a page dirty/accessed.  Unlike the "release" helpers, the
- * "set" helpers are not to be used when the pfn might point at garbage.
- */
-void kvm_set_pfn_dirty(kvm_pfn_t pfn)
-{
-	if (WARN_ON(is_error_noslot_pfn(pfn)))
-		return;
-
-	if (pfn_valid(pfn))
-		kvm_set_page_dirty(pfn_to_page(pfn));
-}
-EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
-
-void kvm_set_pfn_accessed(kvm_pfn_t pfn)
-{
-	if (WARN_ON(is_error_noslot_pfn(pfn)))
-		return;
-
-	if (pfn_valid(pfn))
-		kvm_set_page_accessed(pfn_to_page(pfn));
-}
-EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);
-
 static int next_segment(unsigned long len, int offset)
 {
 	if (len > PAGE_SIZE - offset)
-- 
2.46.0.rc1.232.g9752f9e123-goog

From nobody Mon Sep 16 19:16:59 2024
Reply-To: Sean Christopherson
Date: Fri, 26 Jul 2024 16:52:33 -0700
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>
References: <20240726235234.228822-1-seanjc@google.com>
Message-ID: <20240726235234.228822-85-seanjc@google.com>
Subject: [PATCH v12 84/84] KVM: Don't grab reference on VM_MIXEDMAP pfns that have a "struct page"
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, David Matlack, David Stevens

Now that KVM no longer relies on an ugly heuristic to find its struct page
references, i.e. now that KVM can't get false positives on VM_MIXEDMAP pfns,
remove KVM's hack to elevate the refcount for pfns that happen to have a
valid struct page.

In addition to removing a long-standing wart in KVM, this allows KVM to map
non-refcounted struct page memory into the guest, e.g. for exposing GPU TTM
buffers to KVM guests.
Signed-off-by: Sean Christopherson
Tested-by: Alex Bennée
---
 include/linux/kvm_host.h |  3 --
 virt/kvm/kvm_main.c      | 75 ++--------------------------------------
 2 files changed, 2 insertions(+), 76 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 87d61f16a449..d4513ffaf2e1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1702,9 +1702,6 @@ void kvm_arch_sync_events(struct kvm *kvm);
 
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
 
-struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn);
-bool kvm_is_zone_device_page(struct page *page);
-
 struct kvm_irq_ack_notifier {
 	struct hlist_node link;
 	unsigned gsi;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8b85e1130a63..e279140f2425 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -160,52 +160,6 @@ __weak void kvm_arch_guest_memory_reclaimed(struct kvm *kvm)
 {
 }
 
-bool kvm_is_zone_device_page(struct page *page)
-{
-	/*
-	 * The metadata used by is_zone_device_page() to determine whether or
-	 * not a page is ZONE_DEVICE is guaranteed to be valid if and only if
-	 * the device has been pinned, e.g. by get_user_pages().  WARN if the
-	 * page_count() is zero to help detect bad usage of this helper.
-	 */
-	if (WARN_ON_ONCE(!page_count(page)))
-		return false;
-
-	return is_zone_device_page(page);
-}
-
-/*
- * Returns a 'struct page' if the pfn is "valid" and backed by a refcounted
- * page, NULL otherwise.  Note, the list of refcounted PG_reserved page types
- * is likely incomplete, it has been compiled purely through people wanting to
- * back guest with a certain type of memory and encountering issues.
- */
-struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn)
-{
-	struct page *page;
-
-	if (!pfn_valid(pfn))
-		return NULL;
-
-	page = pfn_to_page(pfn);
-	if (!PageReserved(page))
-		return page;
-
-	/* The ZERO_PAGE(s) is marked PG_reserved, but is refcounted. */
-	if (is_zero_pfn(pfn))
-		return page;
-
-	/*
-	 * ZONE_DEVICE pages currently set PG_reserved, but from a refcounting
-	 * perspective they are "normal" pages, albeit with slightly different
-	 * usage rules.
-	 */
-	if (kvm_is_zone_device_page(page))
-		return page;
-
-	return NULL;
-}
-
 /*
  * Switches to specified vcpu, until a matching vcpu_put()
  */
@@ -2814,35 +2768,10 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 	if (kfp->map_writable)
 		*kfp->map_writable = writable;
 
-	/*
-	 * FIXME: Remove this once KVM no longer blindly calls put_page() on
-	 * every pfn that points at a struct page.
-	 *
-	 * Get a reference for follow_pte() pfns if they happen to point at a
-	 * struct page, as KVM will ultimately call kvm_release_pfn_clean() on
-	 * the returned pfn, i.e. KVM expects to have a reference.
-	 *
-	 * Certain IO or PFNMAP mappings can be backed with valid struct pages,
-	 * but be allocated without refcounting, e.g. tail pages of
-	 * non-compound higher order allocations.  Grabbing and putting a
-	 * reference to such pages would cause KVM to prematurely free a page
-	 * it doesn't own (KVM gets and puts the one and only reference).
-	 * Don't allow those pages until the FIXME is resolved.
-	 *
-	 * Don't grab a reference for pins, callers that pin pages are required
-	 * to check refcounted_page, i.e. must not blindly release the pfn.
-	 */
-	if (pte) {
+	if (pte)
 		pfn = pte_pfn(*pte);
-
-		if (!kfp->pin) {
-			page = kvm_pfn_to_refcounted_page(pfn);
-			if (page && !get_page_unless_zero(page))
-				return KVM_PFN_ERR_FAULT;
-		}
-	} else {
+	else
 		pfn = page_to_pfn(page);
-	}
 
 	*kfp->refcounted_page = page;
 
-- 
2.46.0.rc1.232.g9752f9e123-goog
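For readers skimming the series, the net effect on callers can be summarized
with a short sketch. This is illustrative only and is not part of the patch:
it assumes the kvm_faultin_pfn() wrapper whose declaration appears in the
kvm_host.h context above (gfn, write flag, writable and refcounted_page
out-parameters), and example_fault_handler() is a hypothetical caller rather
than a function in KVM.

#include <linux/kvm_host.h>

/*
 * Hypothetical caller (not in the series): with the pfn-based release
 * helpers gone, the decision to put a reference is driven entirely by the
 * refcounted_page returned from kvm_faultin_pfn(), never by guessing
 * whether a raw pfn happens to be backed by a refcounted struct page.
 */
static int example_fault_handler(struct kvm_vcpu *vcpu, gfn_t gfn, bool write)
{
	struct page *refcounted_page = NULL;
	bool writable = false;
	kvm_pfn_t pfn;

	pfn = kvm_faultin_pfn(vcpu, gfn, write, &writable, &refcounted_page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	/* ... install the pfn in the guest page tables ... */

	/*
	 * Non-refcounted pfns (e.g. PFNMAP or non-refcounted VM_MIXEDMAP
	 * memory such as GPU TTM buffers) come back with a NULL
	 * refcounted_page and must not be released.
	 */
	if (refcounted_page) {
		if (write)
			kvm_release_page_dirty(refcounted_page);
		else
			kvm_release_page_clean(refcounted_page);
	}

	return 0;
}

The exact release helper a given caller uses in the series may differ; the
point of the sketch is that releasing is keyed off refcounted_page, which is
what makes it safe to map non-refcounted struct page memory into the guest.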