From nobody Sat Feb 7 08:44:14 2026
From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH v10 1/8] KVM: Assert that a page's refcount is elevated when marking accessed/dirty
Date: Wed, 21 Feb 2024 16:25:19 +0900
Message-ID: <20240221072528.2702048-2-stevensd@google.com>
In-Reply-To: <20240221072528.2702048-1-stevensd@google.com>
References: <20240221072528.2702048-1-stevensd@google.com>

From: Sean Christopherson

Assert that a page's refcount is elevated, i.e. that _something_ holds a
reference to the page, when KVM marks a page as accessed and/or dirty.
KVM typically doesn't hold a reference to pages that are mapped into the
guest, e.g. to allow page migration, compaction, swap, etc., and instead
relies on mmu_notifiers to react to changes in the primary MMU.

Incorrect handling of mmu_notifier events (or similar mechanisms) can
result in KVM keeping a mapping beyond the lifetime of the backing page,
i.e. can (and often does) result in use-after-free.  Yelling if KVM marks
a freed page as accessed/dirty doesn't prevent badness, as KVM usually
only does A/D updates when unmapping memory from the guest, i.e. the
assertion fires well after an underlying bug has occurred.  But yelling
does help detect, triage, and debug use-after-free bugs.

Note, the assertion must use page_count(), NOT page_ref_count()!  For
hugepages, the returned struct page may be a tail page and thus not have
its own refcount.

Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 10bfc88a69f7..c5e4bf7c48f9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3204,6 +3204,19 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
 static bool kvm_is_ad_tracked_page(struct page *page)
 {
+	/*
+	 * Assert that KVM isn't attempting to mark a freed page as Accessed or
+	 * Dirty, i.e. that KVM's MMU doesn't have a use-after-free bug.  KVM
+	 * (typically) doesn't pin pages that are mapped in KVM's MMU, and
+	 * instead relies on mmu_notifiers to know when a mapping needs to be
+	 * zapped/invalidated.  Unmapping from KVM's MMU must happen _before_
+	 * KVM returns from its mmu_notifier, i.e. the page should have an
+	 * elevated refcount at this point even though KVM doesn't hold a
+	 * reference of its own.
+	 */
+	if (WARN_ON_ONCE(!page_count(page)))
+		return false;
+
 	/*
	 * Per page-flags.h, pages tagged PG_reserved "should in general not be
	 * touched (e.g. set dirty) except by its owner".
-- 
2.44.0.rc0.258.g7320e95886-goog
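The page_count() vs. page_ref_count() distinction called out above is what
makes the assertion hugepage-safe. A minimal sketch of the difference,
assuming the page handed to kvm_is_ad_tracked_page() is a tail page of a
compound allocation (illustrative only, not part of the patch):

	/*
	 * page_ref_count() reads the given page's own _refcount field; a
	 * tail page's _refcount is zero by construction, so asserting on it
	 * would fire on every hugepage.  page_count() first resolves the
	 * page to its head (folio), whose refcount is the one that actually
	 * pins the allocation.
	 */
	static bool demo_refcount_is_elevated(struct page *page)
	{
		return page_count(page) > 0;	/* head refcount, tail-safe */
	}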
From nobody Sat Feb 7 08:44:14 2026
From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v10 2/8] KVM: Relax BUG_ON argument validation
Date: Wed, 21 Feb 2024 16:25:20 +0900
Message-ID: <20240221072528.2702048-3-stevensd@google.com>
In-Reply-To: <20240221072528.2702048-1-stevensd@google.com>
References: <20240221072528.2702048-1-stevensd@google.com>

From: David Stevens

hva_to_pfn() includes a check that KVM isn't trying to do an async page
fault in a situation where it can't sleep.  Downgrade this check from a
BUG_ON() to a WARN_ON_ONCE(), since DoS'ing the guest (at worst) is
better than bringing down the host.

Suggested-by: Sean Christopherson
Signed-off-by: David Stevens
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c5e4bf7c48f9..6f37d56fb2fc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2979,7 +2979,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
 	int npages, r;
 
 	/* we can do it either atomically or asynchronously, not both */
-	BUG_ON(atomic && async);
+	WARN_ON_ONCE(atomic && async);
 
 	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
 		return pfn;
-- 
2.44.0.rc0.258.g7320e95886-goog
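Unlike BUG_ON(), WARN_ON_ONCE() evaluates to its condition, so a caller can
both log the violation and degrade gracefully. The patch above only warns
and falls through; a hypothetical defensive variant (not what the patch
does) could bail out instead:

	/* Hypothetical: refuse the lookup instead of only warning. */
	if (WARN_ON_ONCE(atomic && async))
		return KVM_PFN_ERR_FAULT;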
From nobody Sat Feb 7 08:44:14 2026
From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v10 3/8] KVM: mmu: Introduce kvm_follow_pfn()
Date: Wed, 21 Feb 2024 16:25:21 +0900
Message-ID: <20240221072528.2702048-4-stevensd@google.com>
In-Reply-To: <20240221072528.2702048-1-stevensd@google.com>
References: <20240221072528.2702048-1-stevensd@google.com>

From: David Stevens

Introduce kvm_follow_pfn(), which will replace __gfn_to_pfn_memslot().
This initial implementation is just a refactor of the existing API that
passes the arguments in a single structure.
The arguments are further refactored as follows:

- The write_fault and interruptible boolean flags, and the input half of
  the async parameter, are replaced by setting FOLL_WRITE,
  FOLL_INTERRUPTIBLE, and FOLL_NOWAIT, respectively, in a new flags
  argument.
- The output half of the async parameter is now a return value
  (KVM_PFN_ERR_NEEDS_IO).
- The writable in/out parameter is split into a separate try_map_writable
  input parameter and a writable output parameter.
- All other parameters are unchanged.

Upcoming changes will add the ability to get a pfn without needing to
take a ref to the underlying page.

Signed-off-by: David Stevens
Reviewed-by: Maxim Levitsky
---
 include/linux/kvm_host.h |  18 ++++
 virt/kvm/kvm_main.c      | 191 +++++++++++++++++++++------------
 virt/kvm/kvm_mm.h        |   3 +-
 virt/kvm/pfncache.c      |  10 +-
 4 files changed, 131 insertions(+), 91 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7e7fd25b09b3..290db5133c36 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -97,6 +97,7 @@
 #define KVM_PFN_ERR_HWPOISON	(KVM_PFN_ERR_MASK + 1)
 #define KVM_PFN_ERR_RO_FAULT	(KVM_PFN_ERR_MASK + 2)
 #define KVM_PFN_ERR_SIGPENDING	(KVM_PFN_ERR_MASK + 3)
+#define KVM_PFN_ERR_NEEDS_IO	(KVM_PFN_ERR_MASK + 4)
 
 /*
  * error pfns indicate that the gfn is in slot but faild to
@@ -1209,6 +1210,23 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+struct kvm_follow_pfn {
+	const struct kvm_memory_slot *slot;
+	gfn_t gfn;
+	/* FOLL_* flags modifying lookup behavior. */
+	unsigned int flags;
+	/* Whether this function can sleep. */
+	bool atomic;
+	/* Try to create a writable mapping even for a read fault. */
+	bool try_map_writable;
+
+	/* Outputs of kvm_follow_pfn */
+	hva_t hva;
+	bool writable;
+};
+
+kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp);
+
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6f37d56fb2fc..575756c9c5b0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2791,8 +2791,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)
  * true indicates success, otherwise false is returned.  It's also the
  * only part that runs if we can in atomic context.
  */
-static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
-			    bool *writable, kvm_pfn_t *pfn)
+static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
 	struct page *page[1];
 
@@ -2801,14 +2800,12 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 	 * or the caller allows to map a writable pfn for a read fault
	 * request.
	 */
-	if (!(write_fault || writable))
+	if (!((kfp->flags & FOLL_WRITE) || kfp->try_map_writable))
 		return false;
 
-	if (get_user_page_fast_only(addr, FOLL_WRITE, page)) {
+	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) {
 		*pfn = page_to_pfn(page[0]);
-
-		if (writable)
-			*writable = true;
+		kfp->writable = true;
 		return true;
 	}
 
@@ -2819,8 +2816,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * The slow path to get the pfn of the specified host virtual address,
  * 1 indicates success, -errno is returned if error is detected.
  */
-static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
-			   bool interruptible, bool *writable, kvm_pfn_t *pfn)
+static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
 	/*
	 * When a VCPU accesses a page that is not mapped into the secondary
@@ -2833,32 +2829,24 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
	 * Note that get_user_page_fast_only() and FOLL_WRITE for now
	 * implicitly honor NUMA hinting faults and don't need this flag.
	 */
-	unsigned int flags = FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT;
+	unsigned int flags = FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT | kfp->flags;
 	struct page *page;
 	int npages;
 
 	might_sleep();
 
-	if (writable)
-		*writable = write_fault;
-
-	if (write_fault)
-		flags |= FOLL_WRITE;
-	if (async)
-		flags |= FOLL_NOWAIT;
-	if (interruptible)
-		flags |= FOLL_INTERRUPTIBLE;
-
-	npages = get_user_pages_unlocked(addr, 1, &page, flags);
+	npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
 	if (npages != 1)
 		return npages;
 
-	/* map read fault as writable if possible */
-	if (unlikely(!write_fault) && writable) {
+	if (kfp->flags & FOLL_WRITE) {
+		kfp->writable = true;
+	} else if (kfp->try_map_writable) {
 		struct page *wpage;
 
-		if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) {
-			*writable = true;
+		/* map read fault as writable if possible */
+		if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
+			kfp->writable = true;
 			put_page(page);
 			page = wpage;
 		}
@@ -2889,23 +2877,23 @@ static int kvm_try_get_pfn(kvm_pfn_t pfn)
 }
 
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
-			       unsigned long addr, bool write_fault,
-			       bool *writable, kvm_pfn_t *p_pfn)
+			       struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn)
 {
 	kvm_pfn_t pfn;
 	pte_t *ptep;
 	pte_t pte;
 	spinlock_t *ptl;
+	bool write_fault = kfp->flags & FOLL_WRITE;
 	int r;
 
-	r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+	r = follow_pte(vma->vm_mm, kfp->hva, &ptep, &ptl);
 	if (r) {
 		/*
		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
		 * not call the fault handler, so do it here.
		 */
 		bool unlocked = false;
-		r = fixup_user_fault(current->mm, addr,
+		r = fixup_user_fault(current->mm, kfp->hva,
 				     (write_fault ? FAULT_FLAG_WRITE : 0),
 				     &unlocked);
 		if (unlocked)
@@ -2913,7 +2901,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		if (r)
 			return r;
 
-		r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+		r = follow_pte(vma->vm_mm, kfp->hva, &ptep, &ptl);
 		if (r)
 			return r;
 	}
@@ -2925,8 +2913,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		goto out;
 	}
 
-	if (writable)
-		*writable = pte_write(pte);
+	kfp->writable = pte_write(pte);
 	pfn = pte_pfn(pte);
 
 	/*
@@ -2957,38 +2944,28 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 }
 
 /*
- * Pin guest page in memory and return its pfn.
- * @addr: host virtual address which maps memory to the guest
- * @atomic: whether this function can sleep
- * @interruptible: whether the process can be interrupted by non-fatal signals
- * @async: whether this function need to wait IO complete if the
- *         host page is not in the memory
- * @write_fault: whether we should get a writable host page
- * @writable: whether it allows to map a writable host page for !@write_fault
- *
- * The function will map a writable host page for these two cases:
- * 1): @write_fault = true
- * 2): @write_fault = false && @writable, @writable will tell the caller
- *     whether the mapping is writable.
+ * Convert a hva to a pfn.
+ * @kfp: args struct for the conversion
  */
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
-		     bool *async, bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn;
 	int npages, r;
 
-	/* we can do it either atomically or asynchronously, not both */
-	WARN_ON_ONCE(atomic && async);
+	/*
+	 * FOLL_NOWAIT is used for async page faults, which don't make sense
+	 * in an atomic context where the caller can't do async resolution.
+	 */
+	WARN_ON_ONCE(kfp->atomic && (kfp->flags & FOLL_NOWAIT));
 
-	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
+	if (hva_to_pfn_fast(kfp, &pfn))
 		return pfn;
 
-	if (atomic)
+	if (kfp->atomic)
 		return KVM_PFN_ERR_FAULT;
 
-	npages = hva_to_pfn_slow(addr, async, write_fault, interruptible,
-				 writable, &pfn);
+	npages = hva_to_pfn_slow(kfp, &pfn);
 	if (npages == 1)
 		return pfn;
 	if (npages == -EINTR)
@@ -2996,83 +2973,123 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
 
 	mmap_read_lock(current->mm);
 	if (npages == -EHWPOISON ||
-	    (!async && check_user_page_hwpoison(addr))) {
+	    (!(kfp->flags & FOLL_NOWAIT) && check_user_page_hwpoison(kfp->hva))) {
 		pfn = KVM_PFN_ERR_HWPOISON;
 		goto exit;
 	}
 
 retry:
-	vma = vma_lookup(current->mm, addr);
+	vma = vma_lookup(current->mm, kfp->hva);
 
 	if (vma == NULL)
 		pfn = KVM_PFN_ERR_FAULT;
 	else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
-		r = hva_to_pfn_remapped(vma, addr, write_fault, writable, &pfn);
+		r = hva_to_pfn_remapped(vma, kfp, &pfn);
 		if (r == -EAGAIN)
 			goto retry;
 		if (r < 0)
 			pfn = KVM_PFN_ERR_FAULT;
 	} else {
-		if (async && vma_is_valid(vma, write_fault))
-			*async = true;
-		pfn = KVM_PFN_ERR_FAULT;
+		if ((kfp->flags & FOLL_NOWAIT) &&
+		    vma_is_valid(vma, kfp->flags & FOLL_WRITE))
+			pfn = KVM_PFN_ERR_NEEDS_IO;
+		else
+			pfn = KVM_PFN_ERR_FAULT;
 	}
 exit:
 	mmap_read_unlock(current->mm);
 	return pfn;
 }
 
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool interruptible, bool *async,
-			       bool write_fault, bool *writable, hva_t *hva)
+kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp)
 {
-	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
+	kfp->writable = false;
+	kfp->hva = __gfn_to_hva_many(kfp->slot, kfp->gfn, NULL,
+				     kfp->flags & FOLL_WRITE);
 
-	if (hva)
-		*hva = addr;
-
-	if (addr == KVM_HVA_ERR_RO_BAD) {
-		if (writable)
-			*writable = false;
+	if (kfp->hva == KVM_HVA_ERR_RO_BAD)
 		return KVM_PFN_ERR_RO_FAULT;
-	}
 
-	if (kvm_is_error_hva(addr)) {
-		if (writable)
-			*writable = false;
+	if (kvm_is_error_hva(kfp->hva))
 		return KVM_PFN_NOSLOT;
-	}
 
-	/* Do not map writable pfn in the readonly memslot. */
-	if (writable && memslot_is_readonly(slot)) {
-		*writable = false;
-		writable = NULL;
-	}
+	if (memslot_is_readonly(kfp->slot))
+		kfp->try_map_writable = false;
+
+	return hva_to_pfn(kfp);
+}
+EXPORT_SYMBOL_GPL(kvm_follow_pfn);
+
+kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
+			       bool atomic, bool interruptible, bool *async,
+			       bool write_fault, bool *writable, hva_t *hva)
+{
+	kvm_pfn_t pfn;
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.flags = 0,
+		.atomic = atomic,
+		.try_map_writable = !!writable,
+	};
+
+	if (write_fault)
+		kfp.flags |= FOLL_WRITE;
+	if (async)
+		kfp.flags |= FOLL_NOWAIT;
+	if (interruptible)
+		kfp.flags |= FOLL_INTERRUPTIBLE;
 
-	return hva_to_pfn(addr, atomic, interruptible, async, write_fault,
-			  writable);
+	pfn = kvm_follow_pfn(&kfp);
+	if (pfn == KVM_PFN_ERR_NEEDS_IO) {
+		*async = true;
+		pfn = KVM_PFN_ERR_FAULT;
+	}
+	if (hva)
+		*hva = kfp.hva;
+	if (writable)
+		*writable = kfp.writable;
+	return pfn;
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
-	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false,
-				    NULL, write_fault, writable, NULL);
+	kvm_pfn_t pfn;
+	struct kvm_follow_pfn kfp = {
+		.slot = gfn_to_memslot(kvm, gfn),
+		.gfn = gfn,
+		.flags = write_fault ? FOLL_WRITE : 0,
+		.try_map_writable = !!writable,
+	};
+	pfn = kvm_follow_pfn(&kfp);
+	if (writable)
+		*writable = kfp.writable;
+	return pfn;
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true,
-				    NULL, NULL);
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+	};
+	return kvm_follow_pfn(&kfp);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true,
-				    NULL, NULL);
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+		.atomic = true,
+	};
+	return kvm_follow_pfn(&kfp);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
 
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index ecefc7ec51af..9ba61fbb727c 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -20,8 +20,7 @@
 #define KVM_MMU_UNLOCK(kvm)	spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
-		     bool *async, bool write_fault, bool *writable);
+kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll);
 
 #ifdef CONFIG_HAVE_KVM_PFNCACHE
 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 2d6aba677830..1fb21c2ced5d 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -144,6 +144,12 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
 	void *new_khva = NULL;
 	unsigned long mmu_seq;
+	struct kvm_follow_pfn kfp = {
+		.slot = gpc->memslot,
+		.gfn = gpa_to_gfn(gpc->gpa),
+		.flags = FOLL_WRITE,
+		.hva = gpc->uhva,
+	};
 
 	lockdep_assert_held(&gpc->refresh_lock);
 
@@ -182,8 +188,8 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 		cond_resched();
 	}
 
-	/* We always request a writeable mapping */
-	new_pfn = hva_to_pfn(gpc->uhva, false, false,
-			     NULL, true, NULL);
+	/* We always request a writable mapping */
+	new_pfn = hva_to_pfn(&kfp);
 	if (is_error_noslot_pfn(new_pfn))
 		goto out_error;
 
-- 
2.44.0.rc0.258.g7320e95886-goog
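To see how the struct-based API replaces the old parameter list, here is a
hypothetical caller (illustrative only, using the definitions introduced by
the patch above) that resolves a read fault while opportunistically asking
for a writable mapping, mirroring the old write_fault=false /
writable != NULL convention:

	static kvm_pfn_t demo_read_fault(const struct kvm_memory_slot *slot,
					 gfn_t gfn, bool *writable)
	{
		kvm_pfn_t pfn;
		struct kvm_follow_pfn kfp = {
			.slot = slot,
			.gfn = gfn,
			.flags = 0,			/* read access, may sleep */
			.try_map_writable = true,	/* upgrade if cheap */
		};

		pfn = kvm_follow_pfn(&kfp);
		if (writable)
			*writable = kfp.writable;	/* out-param is now kfp.writable */
		return pfn;
	}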
From nobody Sat Feb 7 08:44:14 2026
From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v10 4/8] KVM: mmu: Improve handling of non-refcounted pfns
Date: Wed, 21 Feb 2024 16:25:22 +0900
Message-ID: <20240221072528.2702048-5-stevensd@google.com>
In-Reply-To: <20240221072528.2702048-1-stevensd@google.com>
References: <20240221072528.2702048-1-stevensd@google.com>

From: David Stevens

KVM's handling of non-refcounted pfns has two problems:

- pfns without struct pages can be accessed without the protection of a
  mmu notifier. This is unsafe because KVM cannot monitor or control the
  lifespan of such pfns, so it may continue to access the pfns after they
  are freed.
- struct pages without refcounting (e.g. tail pages of non-compound
  higher order pages) cannot be used at all, as gfn_to_pfn() does not
  provide enough information for callers to be able to avoid underflowing
  the refcount.

This patch extends the kvm_follow_pfn() API to properly handle these
cases:

- First, it adds FOLL_GET to the list of supported flags, to indicate
  whether or not the caller actually wants to take a refcount.
- Second, it adds a guarded_by_mmu_notifier parameter that is used to
  avoid returning non-refcounted pages when the caller cannot safely use
  them.
- Third, it adds an is_refcounted_page output parameter so that callers
  can tell whether or not a pfn has a struct page that needs to be passed
  to put_page().

Since callers need to be updated on a case-by-case basis to pay attention
to is_refcounted_page, the new behavior of returning non-refcounted pages
is opt-in via the allow_non_refcounted_struct_page parameter. Once all
callers have been updated, this parameter should be removed.

The fact that non-refcounted pfns can no longer be accessed without mmu
notifier protection by default is a breaking change. This patch provides
a module parameter that system admins can use to re-enable the previous
unsafe behavior when userspace is trusted not to migrate/free/etc
non-refcounted pfns that are mapped into the guest. There is no timeline
for updating everything in KVM to use mmu notifiers to alleviate the need
for this module parameter.
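As an illustration (a sketch, not part of this patch), a caller that has
been taught to respect refcounted_page would opt in and release only real
references, roughly:

	struct kvm_follow_pfn kfp = {
		.slot = slot,
		.gfn = gfn,
		.flags = FOLL_GET | FOLL_WRITE,
		.allow_non_refcounted_struct_page = true,
	};
	kvm_pfn_t pfn = kvm_follow_pfn(&kfp);

	if (!is_error_noslot_pfn(pfn)) {
		/* ... use the pfn ... */

		/* Only a refcounted page holds a reference to drop. */
		if (kfp.refcounted_page)
			put_page(kfp.refcounted_page);
	}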
Signed-off-by: David Stevens
---
 include/linux/kvm_host.h |  24 +++++++++
 virt/kvm/kvm_main.c      | 108 ++++++++++++++++++++++++++-------------
 virt/kvm/pfncache.c      |   3 +-
 3 files changed, 98 insertions(+), 37 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 290db5133c36..88279649c00d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1219,10 +1219,34 @@ struct kvm_follow_pfn {
 	bool atomic;
 	/* Try to create a writable mapping even for a read fault. */
 	bool try_map_writable;
+	/*
+	 * Usage of the returned pfn will be guarded by a mmu notifier. If
+	 * FOLL_GET is not set, this must be true.
+	 */
+	bool guarded_by_mmu_notifier;
+	/*
+	 * When false, do not return pfns for non-refcounted struct pages.
+	 *
+	 * TODO: This allows callers to use kvm_release_pfn on the pfns
+	 * returned by gfn_to_pfn without worrying about corrupting the
+	 * refcount of non-refcounted pages. Once all callers respect
+	 * refcounted_page, this flag should be removed.
+	 */
+	bool allow_non_refcounted_struct_page;
 
 	/* Outputs of kvm_follow_pfn */
 	hva_t hva;
 	bool writable;
+	/*
+	 * Non-NULL if the returned pfn is for a page with a valid refcount,
+	 * NULL if the returned pfn has no struct page or if the struct page is
+	 * not being refcounted (e.g. tail pages of non-compound higher order
+	 * allocations from IO/PFNMAP mappings).
+	 *
+	 * NOTE: This will still be set if FOLL_GET is not specified, but the
+	 * returned page will not have an elevated refcount.
+	 */
+	struct page *refcounted_page;
 };
 
 kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 575756c9c5b0..6c10dc546c8d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -96,6 +96,13 @@ unsigned int halt_poll_ns_shrink;
 module_param(halt_poll_ns_shrink, uint, 0644);
 EXPORT_SYMBOL_GPL(halt_poll_ns_shrink);
 
+/*
+ * Allow non-refcounted struct pages and non-struct page memory to
+ * be mapped without MMU notifier protection.
+ */
+static bool allow_unsafe_mappings;
+module_param(allow_unsafe_mappings, bool, 0444);
+
 /*
  * Ordering of locks:
  *
@@ -2786,6 +2793,24 @@ static inline int check_user_page_hwpoison(unsigned long addr)
 	return rc == -EHWPOISON;
 }
 
+static kvm_pfn_t kvm_follow_refcounted_pfn(struct kvm_follow_pfn *kfp,
+					   struct page *page)
+{
+	kvm_pfn_t pfn = page_to_pfn(page);
+
+	/*
+	 * FIXME: Ideally, KVM wouldn't pass FOLL_GET to gup() when the caller
+	 * doesn't want to grab a reference, but gup() doesn't support getting
+	 * just the pfn, i.e. FOLL_GET is effectively mandatory.  If that ever
+	 * changes, drop this and simply don't pass FOLL_GET to gup().
+	 */
+	if (!(kfp->flags & FOLL_GET))
+		put_page(page);
+
+	kfp->refcounted_page = page;
+	return pfn;
+}
+
 /*
  * The fast path to get the writable pfn which will be stored in @pfn,
  * true indicates success, otherwise false is returned.  It's also the
@@ -2804,7 +2829,7 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 		return false;
 
 	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) {
-		*pfn = page_to_pfn(page[0]);
+		*pfn = kvm_follow_refcounted_pfn(kfp, page[0]);
 		kfp->writable = true;
 		return true;
 	}
@@ -2851,7 +2876,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 			page = wpage;
 		}
 	}
-	*pfn = page_to_pfn(page);
+	*pfn = kvm_follow_refcounted_pfn(kfp, page);
 	return npages;
 }
 
@@ -2866,16 +2891,6 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
 	return true;
 }
 
-static int kvm_try_get_pfn(kvm_pfn_t pfn)
-{
-	struct page *page = kvm_pfn_to_refcounted_page(pfn);
-
-	if (!page)
-		return 1;
-
-	return get_page_unless_zero(page);
-}
-
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 			       struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn)
 {
@@ -2884,6 +2899,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	pte_t pte;
 	spinlock_t *ptl;
 	bool write_fault = kfp->flags & FOLL_WRITE;
+	struct page *page;
 	int r;
 
 	r = follow_pte(vma->vm_mm, kfp->hva, &ptep, &ptl);
@@ -2908,37 +2924,44 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 
 	pte = ptep_get(ptep);
 
+	kfp->writable = pte_write(pte);
+	pfn = pte_pfn(pte);
+
+	page = kvm_pfn_to_refcounted_page(pfn);
+
 	if (write_fault && !pte_write(pte)) {
 		pfn = KVM_PFN_ERR_RO_FAULT;
 		goto out;
 	}
 
-	kfp->writable = pte_write(pte);
-	pfn = pte_pfn(pte);
+	if (!page)
+		goto out;
 
 	/*
-	 * Get a reference here because callers of *hva_to_pfn* and
-	 * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the
-	 * returned pfn.  This is only needed if the VMA has VM_MIXEDMAP
-	 * set, but the kvm_try_get_pfn/kvm_release_pfn_clean pair will
-	 * simply do nothing for reserved pfns.
-	 *
-	 * Whoever called remap_pfn_range is also going to call e.g.
-	 * unmap_mapping_range before the underlying pages are freed,
-	 * causing a call to our MMU notifier.
-	 *
-	 * Certain IO or PFNMAP mappings can be backed with valid
-	 * struct pages, but be allocated without refcounting e.g.,
-	 * tail pages of non-compound higher order allocations, which
-	 * would then underflow the refcount when the caller does the
-	 * required put_page. Don't allow those pages here.
+	 * IO or PFNMAP mappings can be backed with valid struct pages but be
+	 * allocated without refcounting. We need to detect that to make sure we
+	 * only pass refcounted pages to kvm_follow_refcounted_pfn.
 	 */
-	if (!kvm_try_get_pfn(pfn))
-		r = -EFAULT;
+	if (get_page_unless_zero(page))
+		WARN_ON_ONCE(kvm_follow_refcounted_pfn(kfp, page) != pfn);
 
 out:
 	pte_unmap_unlock(ptep, ptl);
-	*p_pfn = pfn;
+
+	/*
+	 * TODO: Remove the first branch once all callers have been
+	 * taught to play nice with non-refcounted struct pages.
+	 */
+	if (page && !kfp->refcounted_page &&
+	    !kfp->allow_non_refcounted_struct_page) {
+		r = -EFAULT;
+	} else if (!kfp->refcounted_page &&
+		   !kfp->guarded_by_mmu_notifier &&
+		   !allow_unsafe_mappings) {
+		r = -EFAULT;
+	} else {
+		*p_pfn = pfn;
+	}
 
 	return r;
 }
@@ -3004,6 +3027,11 @@ kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp)
 kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp)
 {
 	kfp->writable = false;
+	kfp->refcounted_page = NULL;
+
+	if (WARN_ON_ONCE(!(kfp->flags & FOLL_GET) && !kfp->guarded_by_mmu_notifier))
+		return KVM_PFN_ERR_FAULT;
+
 	kfp->hva = __gfn_to_hva_many(kfp->slot, kfp->gfn, NULL,
 				     kfp->flags & FOLL_WRITE);
 
@@ -3028,9 +3056,10 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
 	struct kvm_follow_pfn kfp = {
 		.slot = slot,
 		.gfn = gfn,
-		.flags = 0,
+		.flags = FOLL_GET,
 		.atomic = atomic,
 		.try_map_writable = !!writable,
+		.allow_non_refcounted_struct_page = false,
 	};
 
 	if (write_fault)
@@ -3060,8 +3089,9 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 	struct kvm_follow_pfn kfp = {
 		.slot = gfn_to_memslot(kvm, gfn),
 		.gfn = gfn,
-		.flags = write_fault ? FOLL_WRITE : 0,
+		.flags = FOLL_GET | (write_fault ? FOLL_WRITE : 0),
 		.try_map_writable = !!writable,
+		.allow_non_refcounted_struct_page = false,
 	};
 	pfn = kvm_follow_pfn(&kfp);
 	if (writable)
@@ -3075,7 +3105,8 @@ kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 	struct kvm_follow_pfn kfp = {
 		.slot = slot,
 		.gfn = gfn,
-		.flags = FOLL_WRITE,
+		.flags = FOLL_GET | FOLL_WRITE,
+		.allow_non_refcounted_struct_page = false,
 	};
 	return kvm_follow_pfn(&kfp);
 }
@@ -3086,8 +3117,13 @@ kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
 	struct kvm_follow_pfn kfp = {
 		.slot = slot,
 		.gfn = gfn,
-		.flags = FOLL_WRITE,
+		.flags = FOLL_GET | FOLL_WRITE,
 		.atomic = true,
+		/*
+		 * Setting atomic means kvm_follow_pfn() will never make it
+		 * to hva_to_pfn_remapped, so this is vacuously true.
+		 */
+		.allow_non_refcounted_struct_page = true,
 	};
 	return kvm_follow_pfn(&kfp);
 }
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 1fb21c2ced5d..6e82062ea203 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -147,8 +147,9 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	struct kvm_follow_pfn kfp = {
 		.slot = gpc->memslot,
 		.gfn = gpa_to_gfn(gpc->gpa),
-		.flags = FOLL_WRITE,
+		.flags = FOLL_GET | FOLL_WRITE,
 		.hva = gpc->uhva,
+		.allow_non_refcounted_struct_page = false,
 	};
 
 	lockdep_assert_held(&gpc->refresh_lock);
-- 
2.44.0.rc0.258.g7320e95886-goog
From nobody Sat Feb 7 08:44:14 2026
From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v10 5/8] KVM: Migrate kvm_vcpu_map() to kvm_follow_pfn()
Date: Wed, 21 Feb 2024 16:25:23 +0900
Message-ID: <20240221072528.2702048-6-stevensd@google.com>
In-Reply-To: <20240221072528.2702048-1-stevensd@google.com>
References: <20240221072528.2702048-1-stevensd@google.com>

From: David Stevens

Migrate kvm_vcpu_map() to kvm_follow_pfn(). Track is_refcounted_page so
that kvm_vcpu_unmap() knows whether or not it needs to release the page.
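For reference, the calling pattern this bookkeeping serves is unchanged;
only the release step becomes conditional. A sketch, assuming a writable
map of a single guest page:

	struct kvm_host_map map;

	if (kvm_vcpu_map(vcpu, gfn, &map))
		return -EFAULT;

	memset(map.hva, 0, PAGE_SIZE);	/* access guest memory via map.hva */

	/* 'true' marks the gfn dirty; the backing page is released only
	 * if map.is_refcounted_page was set by kvm_follow_pfn(). */
	kvm_vcpu_unmap(vcpu, &map, true);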
Signed-off-by: David Stevens
---
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      | 24 ++++++++++++++----------
 2 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 88279649c00d..f72c79f159a2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -295,6 +295,7 @@ struct kvm_host_map {
 	void *hva;
 	kvm_pfn_t pfn;
 	kvm_pfn_t gfn;
+	bool is_refcounted_page;
 };
 
 /*
@@ -1265,7 +1266,6 @@ void kvm_release_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_accessed(kvm_pfn_t pfn);
 
-void kvm_release_pfn(kvm_pfn_t pfn, bool dirty);
 int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 			int len);
 int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6c10dc546c8d..e617fe5cac2e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3188,24 +3188,22 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
-void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
-{
-	if (dirty)
-		kvm_release_pfn_dirty(pfn);
-	else
-		kvm_release_pfn_clean(pfn);
-}
-
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
 	kvm_pfn_t pfn;
 	void *hva = NULL;
 	struct page *page = KVM_UNMAPPED_PAGE;
+	struct kvm_follow_pfn kfp = {
+		.slot = gfn_to_memslot(vcpu->kvm, gfn),
+		.gfn = gfn,
+		.flags = FOLL_GET | FOLL_WRITE,
+		.allow_non_refcounted_struct_page = true,
+	};
 
 	if (!map)
 		return -EINVAL;
 
-	pfn = gfn_to_pfn(vcpu->kvm, gfn);
+	pfn = kvm_follow_pfn(&kfp);
 	if (is_error_noslot_pfn(pfn))
 		return -EINVAL;
 
@@ -3225,6 +3223,7 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 	map->hva = hva;
 	map->pfn = pfn;
 	map->gfn = gfn;
+	map->is_refcounted_page = !!kfp.refcounted_page;
 
 	return 0;
 }
@@ -3248,7 +3247,12 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 	if (dirty)
 		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
 
-	kvm_release_pfn(map->pfn, dirty);
+	if (map->is_refcounted_page) {
+		if (dirty)
+			kvm_release_page_dirty(map->page);
+		else
+			kvm_release_page_clean(map->page);
+	}
 
 	map->hva = NULL;
 	map->page = NULL;
-- 
2.44.0.rc0.258.g7320e95886-goog
From nobody Sat Feb 7 08:44:14 2026
From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, David Stevens
Subject: [PATCH v10 6/8] KVM: x86: Migrate to kvm_follow_pfn()
Date: Wed, 21 Feb 2024 16:25:25 +0900
Message-ID: <20240221072528.2702048-8-stevensd@google.com>
In-Reply-To: <20240221072528.2702048-1-stevensd@google.com>
References: <20240221072528.2702048-1-stevensd@google.com>

From: David Stevens

Migrate the functions which need to be able to map non-refcounted
struct pages to kvm_follow_pfn(): __kvm_faultin_pfn() and
reexecute_instruction(). The former requires replacing the async in/out
parameter with the FOLL_NOWAIT flag and the KVM_PFN_ERR_NEEDS_IO return
value; actually handling non-refcounted pages is complicated, so it is
left to a followup. The latter is a straightforward refactor.

APIC-related callers do not need to migrate because KVM controls the
memslot, so it will always be regular memory. Prefetch-related callers
do not need to be migrated because atomic gfn_to_pfn() calls can never
make it to hva_to_pfn_remapped().
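In outline, the conversion turns the two __gfn_to_pfn_memslot() calls
into a nowait attempt plus a blocking retry. A condensed sketch of that
control flow (demo_faultin() is illustrative, not the patched code
itself; the flag and field names are the ones this series defines):

/* Illustrative only: first try without sleeping, then retry with I/O. */
static kvm_pfn_t demo_faultin(struct kvm_follow_pfn *kfp)
{
	kvm_pfn_t pfn;

	kfp->flags |= FOLL_NOWAIT;
	pfn = kvm_follow_pfn(kfp);

	/* Success, or an error that waiting for I/O cannot fix. */
	if (!is_error_noslot_pfn(pfn) || pfn != KVM_PFN_ERR_NEEDS_IO)
		return pfn;

	/* I/O is required: allow sleeping, bail on fatal signals. */
	kfp->flags &= ~FOLL_NOWAIT;
	kfp->flags |= FOLL_INTERRUPTIBLE;
	return kvm_follow_pfn(kfp);
}

In the real fault path the retry only happens when an async #PF cannot
be set up, as the diff below shows.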
Signed-off-by: David Stevens
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/mmu/mmu.c | 43 ++++++++++++++++++++++++++++++++----------
 arch/x86/kvm/x86.c     | 11 +++++++++--
 virt/kvm/kvm_main.c    | 11 ++++-------
 3 files changed, 46 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2d6cdeab1f8a..bbeb0f6783d7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4331,7 +4331,14 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
-	bool async;
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = fault->gfn,
+		.flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
+		.try_map_writable = true,
+		.guarded_by_mmu_notifier = true,
+		.allow_non_refcounted_struct_page = false,
+	};
 
 	/*
 	 * Retry the page fault if the gfn hit a memslot that is being deleted
@@ -4368,12 +4375,20 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (fault->is_private)
 		return kvm_faultin_pfn_private(vcpu, fault);
 
-	async = false;
-	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
-					  fault->write, &fault->map_writable,
-					  &fault->hva);
-	if (!async)
-		return RET_PF_CONTINUE; /* *pfn has correct page already */
+	kfp.flags |= FOLL_NOWAIT;
+	fault->pfn = kvm_follow_pfn(&kfp);
+
+	if (!is_error_noslot_pfn(fault->pfn))
+		goto success;
+
+	/*
+	 * If kvm_follow_pfn() failed because I/O is needed to fault in the
+	 * page, then either set up an asynchronous #PF to do the I/O, or if
+	 * doing an async #PF isn't possible, retry kvm_follow_pfn() with
+	 * I/O allowed. All other failures are fatal, i.e. retrying won't help.
+	 */
+	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
+		return RET_PF_CONTINUE;
 
 	if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) {
 		trace_kvm_try_async_get_page(fault->addr, fault->gfn);
@@ -4391,9 +4406,17 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	 * to wait for IO.  Note, gup always bails if it is unable to quickly
 	 * get a page and a fatal signal, i.e. SIGKILL, is pending.
 	 */
-	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, true, NULL,
-					  fault->write, &fault->map_writable,
-					  &fault->hva);
+	kfp.flags |= FOLL_INTERRUPTIBLE;
+	kfp.flags &= ~FOLL_NOWAIT;
+	fault->pfn = kvm_follow_pfn(&kfp);
+
+	if (!is_error_noslot_pfn(fault->pfn))
+		goto success;
+
+	return RET_PF_CONTINUE;
+success:
+	fault->hva = kfp.hva;
+	fault->map_writable = kfp.writable;
 	return RET_PF_CONTINUE;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 363b1c080205..f4a20e9bc7a6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8747,6 +8747,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 {
 	gpa_t gpa = cr2_or_gpa;
 	kvm_pfn_t pfn;
+	struct kvm_follow_pfn kfp;
 
 	if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF))
 		return false;
@@ -8776,7 +8777,13 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	 * retry instruction -> write #PF -> emulation fail -> retry
 	 * instruction -> ...
 	 */
-	pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
+	kfp = (struct kvm_follow_pfn) {
+		.slot = gfn_to_memslot(vcpu->kvm, gpa_to_gfn(gpa)),
+		.gfn = gpa_to_gfn(gpa),
+		.flags = FOLL_GET | FOLL_WRITE,
+		.allow_non_refcounted_struct_page = true,
+	};
+	pfn = kvm_follow_pfn(&kfp);
 
 	/*
 	 * If the instruction failed on the error pfn, it can not be fixed,
@@ -8785,7 +8792,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	if (is_error_noslot_pfn(pfn))
 		return false;
 
-	kvm_release_pfn_clean(pfn);
+	kvm_release_page_clean(kfp.refcounted_page);
 
 	/* The instructions are well-emulated on direct mmu. */
 	if (vcpu->arch.mmu->root_role.direct) {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e617fe5cac2e..5d66d841e775 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3297,6 +3297,9 @@ void kvm_release_page_clean(struct page *page)
 {
 	WARN_ON(is_error_page(page));
 
+	if (!page)
+		return;
+
 	kvm_set_page_accessed(page);
 	put_page(page);
 }
@@ -3304,16 +3307,10 @@ EXPORT_SYMBOL_GPL(kvm_release_page_clean);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
-	struct page *page;
-
 	if (is_error_noslot_pfn(pfn))
 		return;
 
-	page = kvm_pfn_to_refcounted_page(pfn);
-	if (!page)
-		return;
-
-	kvm_release_page_clean(page);
+	kvm_release_page_clean(kvm_pfn_to_refcounted_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
-- 
2.44.0.rc0.258.g7320e95886-goog
From nobody Sat Feb 7 08:44:14 2026
From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, David Stevens
Subject: [PATCH v10 7/8] KVM: x86/mmu: Track if sptes refer to refcounted pages
Date: Wed, 21 Feb 2024 16:25:27 +0900
Message-ID: <20240221072528.2702048-10-stevensd@google.com>
In-Reply-To: <20240221072528.2702048-1-stevensd@google.com>
References: <20240221072528.2702048-1-stevensd@google.com>

From: David Stevens

Use one of the unused bits in EPT SPTEs to track whether or not an SPTE
refers to a struct page that has a valid refcount, in preparation for
adding support for mapping such pages into guests. The new bit is used
to avoid triggering a page_count() == 0 warning and to avoid touching
A/D bits of unknown usage.

Non-EPT SPTEs don't have any free bits to use, so this tracking is not
possible when TDP is disabled or on 32-bit x86.
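Concretely, the new bit gates every struct-page A/D update. A minimal
sketch of the intended semantics (demo_mark_dirty() is illustrative and
not code from this patch, though BIT_ULL(59) is the bit the patch
actually claims in spte.h):

/* Bit 59 of a TDP SPTE: the backing page has a valid refcount. */
#define DEMO_SPTE_REFCOUNTED	BIT_ULL(59)

/* Illustrative only: skip struct-page A/D updates for unmarked SPTEs. */
static void demo_mark_dirty(u64 spte)
{
	if (spte & DEMO_SPTE_REFCOUNTED)
		kvm_set_page_dirty(pfn_to_page(spte_to_pfn(spte)));
	/* else: non-refcounted page; its struct page is not KVM's to touch */
}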
Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c         | 43 +++++++++++++++++++---------------
 arch/x86/kvm/mmu/paging_tmpl.h |  5 ++--
 arch/x86/kvm/mmu/spte.c        |  4 +++-
 arch/x86/kvm/mmu/spte.h        | 22 ++++++++++++++++-
 arch/x86/kvm/mmu/tdp_mmu.c     | 21 ++++++++++-------
 include/linux/kvm_host.h       |  3 +++
 virt/kvm/kvm_main.c            |  6 +++--
 7 files changed, 70 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bbeb0f6783d7..7c059b23ae16 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -541,12 +541,14 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 
 	if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) {
 		flush = true;
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
+		if (is_refcounted_page_spte(old_spte))
+			kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
 	}
 
 	if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
 		flush = true;
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+		if (is_refcounted_page_spte(old_spte))
+			kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
 	}
 
 	return flush;
@@ -578,20 +580,23 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 
 	pfn = spte_to_pfn(old_spte);
 
-	/*
-	 * KVM doesn't hold a reference to any pages mapped into the guest, and
-	 * instead uses the mmu_notifier to ensure that KVM unmaps any pages
-	 * before they are reclaimed.  Sanity check that, if the pfn is backed
-	 * by a refcounted page, the refcount is elevated.
-	 */
-	page = kvm_pfn_to_refcounted_page(pfn);
-	WARN_ON_ONCE(page && !page_count(page));
+	if (is_refcounted_page_spte(old_spte)) {
+		/*
+		 * KVM doesn't hold a reference to any pages mapped into the
+		 * guest, and instead uses the mmu_notifier to ensure that KVM
+		 * unmaps any pages before they are reclaimed.  Sanity check
+		 * that, if the pfn is backed by a refcounted page, the
+		 * refcount is elevated.
+		 */
+		page = kvm_pfn_to_refcounted_page(pfn);
+		WARN_ON_ONCE(!page || !page_count(page));
 
-	if (is_accessed_spte(old_spte))
-		kvm_set_pfn_accessed(pfn);
+		if (is_accessed_spte(old_spte))
+			kvm_set_page_accessed(pfn_to_page(pfn));
 
-	if (is_dirty_spte(old_spte))
-		kvm_set_pfn_dirty(pfn);
+		if (is_dirty_spte(old_spte))
+			kvm_set_page_dirty(pfn_to_page(pfn));
+	}
 
 	return old_spte;
 }
@@ -627,8 +632,8 @@ static bool mmu_spte_age(u64 *sptep)
 	 * Capture the dirty status of the page, so that it doesn't get
 	 * lost when the SPTE is marked for access tracking.
 	 */
-	if (is_writable_pte(spte))
-		kvm_set_pfn_dirty(spte_to_pfn(spte));
+	if (is_writable_pte(spte) && is_refcounted_page_spte(spte))
+		kvm_set_page_dirty(pfn_to_page(spte_to_pfn(spte)));
 
 	spte = mark_spte_for_access_track(spte);
 	mmu_spte_update_no_track(sptep, spte);
@@ -1267,8 +1272,8 @@ static bool spte_wrprot_for_clear_dirty(u64 *sptep)
 {
 	bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
 					       (unsigned long *)sptep);
-	if (was_writable && !spte_ad_enabled(*sptep))
-		kvm_set_pfn_dirty(spte_to_pfn(*sptep));
+	if (was_writable && !spte_ad_enabled(*sptep) && is_refcounted_page_spte(*sptep))
+		kvm_set_page_dirty(pfn_to_page(spte_to_pfn(*sptep)));
 
 	return was_writable;
 }
@@ -2946,7 +2951,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
-			   true, host_writable, &spte);
+			   true, host_writable, true, &spte);
 
 	if (*sptep == spte) {
 		ret = RET_PF_SPURIOUS;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 4d4e98fe4f35..c965f77ac4d5 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -902,7 +902,7 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
  */
 static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
 {
-	bool host_writable;
+	bool host_writable, is_refcounted;
 	gpa_t first_pte_gpa;
 	u64 *sptep, spte;
 	struct kvm_memory_slot *slot;
@@ -959,10 +959,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
 	sptep = &sp->spt[i];
 	spte = *sptep;
 	host_writable = spte & shadow_host_writable_mask;
+	is_refcounted = is_refcounted_page_spte(spte);
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	make_spte(vcpu, sp, slot, pte_access, gfn,
 		  spte_to_pfn(spte), spte, true, false,
-		  host_writable, &spte);
+		  host_writable, is_refcounted, &spte);
 
 	return mmu_spte_update(sptep, spte);
 }
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4a599130e9c9..efba85df6518 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -138,7 +138,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
 	       u64 old_spte, bool prefetch, bool can_unsync,
-	       bool host_writable, u64 *new_spte)
+	       bool host_writable, bool is_refcounted, u64 *new_spte)
 {
 	int level = sp->role.level;
 	u64 spte = SPTE_MMU_PRESENT_MASK;
@@ -188,6 +188,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 
 	if (level > PG_LEVEL_4K)
 		spte |= PT_PAGE_SIZE_MASK;
+	if (spte_has_refcount_bit() && is_refcounted)
+		spte |= SPTE_MMU_PAGE_REFCOUNTED;
 
 	if (shadow_memtype_mask)
 		spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index a129951c9a88..4101cc9ef52f 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -96,6 +96,11 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
 /* Defined only to keep the above static asserts readable. */
 #undef SHADOW_ACC_TRACK_SAVED_MASK
 
+/*
+ * Indicates that the SPTE refers to a page with a valid refcount.
+ */
+#define SPTE_MMU_PAGE_REFCOUNTED	BIT_ULL(59)
+
 /*
  * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of
  * the memslots generation and is derived as follows:
@@ -345,6 +350,21 @@ static inline bool is_dirty_spte(u64 spte)
 	return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK;
 }
 
+/*
+ * Extra bits are only available for TDP SPTEs, since bits 62:52 are reserved
+ * for PAE paging, including NPT PAE.  When a tracking bit isn't available, we
+ * will reject mapping non-refcounted struct pages.
+ */
+static inline bool spte_has_refcount_bit(void)
+{
+	return tdp_enabled && IS_ENABLED(CONFIG_X86_64);
+}
+
+static inline bool is_refcounted_page_spte(u64 spte)
+{
+	return !spte_has_refcount_bit() || (spte & SPTE_MMU_PAGE_REFCOUNTED);
+}
+
 static inline u64 get_rsvd_bits(struct rsvd_bits_validate *rsvd_check, u64 pte,
 				int level)
 {
@@ -475,7 +495,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
 	       u64 old_spte, bool prefetch, bool can_unsync,
-	       bool host_writable, u64 *new_spte);
+	       bool host_writable, bool is_refcounted, u64 *new_spte);
 u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
 			      union kvm_mmu_page_role role, int index);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6ae19b4ee5b1..ee497fb78d90 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -414,6 +414,7 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	bool was_leaf = was_present && is_last_spte(old_spte, level);
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+	bool is_refcounted = is_refcounted_page_spte(old_spte);
 
 	WARN_ON_ONCE(level > PT64_ROOT_MAX_LEVEL);
 	WARN_ON_ONCE(level < PG_LEVEL_4K);
@@ -478,9 +479,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (is_leaf != was_leaf)
 		kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
 
-	if (was_leaf && is_dirty_spte(old_spte) &&
+	if (was_leaf && is_dirty_spte(old_spte) && is_refcounted &&
 	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+		kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
 
 	/*
 	 * Recursively handle child PTs if the change removed a subtree from
@@ -492,9 +493,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
 		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
 
-	if (was_leaf && is_accessed_spte(old_spte) &&
+	if (was_leaf && is_accessed_spte(old_spte) && is_refcounted &&
 	    (!is_present || !is_accessed_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
+		kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
 }
 
 /*
@@ -956,8 +957,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
 	else
 		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
-				   fault->pfn, iter->old_spte, fault->prefetch, true,
-				   fault->map_writable, &new_spte);
+				   fault->pfn, iter->old_spte, fault->prefetch, true,
+				   fault->map_writable, true, &new_spte);
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
@@ -1178,8 +1179,9 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 		 * Capture the dirty status of the page, so that it doesn't get
 		 * lost when the SPTE is marked for access tracking.
 		 */
-		if (is_writable_pte(iter->old_spte))
-			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
+		if (is_writable_pte(iter->old_spte) &&
+		    is_refcounted_page_spte(iter->old_spte))
+			kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter->old_spte)));
 
 		new_spte = mark_spte_for_access_track(iter->old_spte);
 		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
@@ -1602,7 +1604,8 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
 					       iter.old_spte,
 					       iter.old_spte & ~dbit);
-		kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
+		if (is_refcounted_page_spte(iter.old_spte))
+			kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter.old_spte)));
 	}
 
 	rcu_read_unlock();
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f72c79f159a2..cff5df6b0c52 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1211,6 +1211,9 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+void kvm_set_page_accessed(struct page *page);
+void kvm_set_page_dirty(struct page *page);
+
 struct kvm_follow_pfn {
 	const struct kvm_memory_slot *slot;
 	gfn_t gfn;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5d66d841e775..e53a14adf149 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3281,17 +3281,19 @@ static bool kvm_is_ad_tracked_page(struct page *page)
 	return !PageReserved(page);
 }
 
-static void kvm_set_page_dirty(struct page *page)
+void kvm_set_page_dirty(struct page *page)
 {
 	if (kvm_is_ad_tracked_page(page))
 		SetPageDirty(page);
 }
+EXPORT_SYMBOL_GPL(kvm_set_page_dirty);
 
-static void kvm_set_page_accessed(struct page *page)
+void kvm_set_page_accessed(struct page *page)
 {
 	if (kvm_is_ad_tracked_page(page))
 		mark_page_accessed(page);
 }
+EXPORT_SYMBOL_GPL(kvm_set_page_accessed);
 
 void kvm_release_page_clean(struct page *page)
 {
-- 
2.44.0.rc0.258.g7320e95886-goog

From nobody Sat Feb 7 08:44:14 2026
From: David Stevens
To: Sean Christopherson
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, David Stevens, Dmitry Osipenko
Subject: [PATCH v10 8/8] KVM: x86/mmu: Handle non-refcounted pages
Date: Wed, 21 Feb 2024 16:25:28 +0900
Message-ID: <20240221072528.2702048-11-stevensd@google.com>
In-Reply-To: <20240221072528.2702048-1-stevensd@google.com>
References: <20240221072528.2702048-1-stevensd@google.com>

From: David Stevens

Handle non-refcounted pages in kvm_faultin_pfn(). This allows the host
to map memory into the guest that is backed by non-refcounted struct
pages - for example, the tail pages of higher-order non-compound pages
allocated by the amdgpu driver via ttm_pool_alloc_page().
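The resulting page-fault tail, in outline (demo_page_fault_tail() is an
illustrative sketch of the pattern the diff applies to direct_page_fault()
and friends, not code from this patch):

/*
 * Illustrative only: fault->accessed_page holds no reference of its own
 * (the mapping is guarded by the mmu_notifier) and may be NULL, which
 * kvm_set_page_accessed() now tolerates, so the unlock path is uniform
 * for refcounted and non-refcounted pages alike.
 */
static int demo_page_fault_tail(struct kvm_vcpu *vcpu,
				struct kvm_page_fault *fault, int r)
{
	kvm_set_page_accessed(fault->accessed_page);
	write_unlock(&vcpu->kvm->mmu_lock);
	return r;
}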
Tested-by: Dmitry Osipenko # virgl+venus+virtio-intel+i915
Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c          | 24 +++++++++++++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  2 ++
 arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++-
 include/linux/kvm_host.h        |  6 ++++--
 virt/kvm/guest_memfd.c          |  8 ++++----
 virt/kvm/kvm_main.c             | 10 ++++++++--
 7 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7c059b23ae16..73a9f6ee683f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2924,6 +2924,11 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	bool host_writable = !fault || fault->map_writable;
 	bool prefetch = !fault || fault->prefetch;
 	bool write_fault = fault && fault->write;
+	/*
+	 * Prefetching uses gfn_to_page_many_atomic, which never gets
+	 * non-refcounted pages.
+	 */
+	bool is_refcounted = !fault || !!fault->accessed_page;
 
 	if (unlikely(is_noslot_pfn(pfn))) {
 		vcpu->stat.pf_mmio_spte_created++;
@@ -2951,7 +2956,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
-			   true, host_writable, true, &spte);
+			   true, host_writable, is_refcounted, &spte);
 
 	if (*sptep == spte) {
 		ret = RET_PF_SPURIOUS;
@@ -4319,8 +4324,8 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 		return -EFAULT;
 	}
 
-	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
-			     &max_order);
+	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn,
			     &fault->pfn, &fault->accessed_page, &max_order);
 	if (r) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return r;
@@ -4330,6 +4335,9 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 					 fault->max_level);
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
 
+	/* kvm_gmem_get_pfn takes a refcount, but accessed_page doesn't need it. */
+	put_page(fault->accessed_page);
+
 	return RET_PF_CONTINUE;
 }
 
@@ -4339,10 +4347,10 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	struct kvm_follow_pfn kfp = {
 		.slot = slot,
 		.gfn = fault->gfn,
-		.flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
+		.flags = fault->write ? FOLL_WRITE : 0,
 		.try_map_writable = true,
 		.guarded_by_mmu_notifier = true,
-		.allow_non_refcounted_struct_page = false,
+		.allow_non_refcounted_struct_page = spte_has_refcount_bit(),
 	};
 
 	/*
@@ -4359,6 +4367,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		fault->slot = NULL;
 		fault->pfn = KVM_PFN_NOSLOT;
 		fault->map_writable = false;
+		fault->accessed_page = NULL;
 		return RET_PF_CONTINUE;
 	}
 	/*
@@ -4422,6 +4431,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 success:
 	fault->hva = kfp.hva;
 	fault->map_writable = kfp.writable;
+	fault->accessed_page = kfp.refcounted_page;
 	return RET_PF_CONTINUE;
 }
 
@@ -4510,8 +4520,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	r = direct_map(vcpu, fault);
 
 out_unlock:
+	kvm_set_page_accessed(fault->accessed_page);
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 
@@ -4586,8 +4596,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	r = kvm_tdp_mmu_map(vcpu, fault);
 
 out_unlock:
+	kvm_set_page_accessed(fault->accessed_page);
 	read_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 #endif
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 0669a8a668ca..0b05183600af 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -240,6 +240,8 @@ struct kvm_page_fault {
 	kvm_pfn_t pfn;
 	hva_t hva;
 	bool map_writable;
+	/* Does NOT have an elevated refcount */
+	struct page *accessed_page;
 
 	/*
 	 * Indicates the guest is trying to write a gfn that contains one or
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index c965f77ac4d5..b39dce802394 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -847,8 +847,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	r = FNAME(fetch)(vcpu, fault, &walker);
 
 out_unlock:
+	kvm_set_page_accessed(fault->accessed_page);
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ee497fb78d90..0524be7c0796 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -958,7 +958,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	else
 		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
 				   fault->pfn, iter->old_spte, fault->prefetch, true,
-				   fault->map_writable, true, &new_spte);
+				   fault->map_writable, !!fault->accessed_page,
+				   &new_spte);
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cff5df6b0c52..0aae27771fea 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2421,11 +2421,13 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 
 #ifdef CONFIG_KVM_PRIVATE_MEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
-		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
+		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+		     int *max_order);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
-				   kvm_pfn_t *pfn, int *max_order)
+				   kvm_pfn_t *pfn, struct page **page,
+				   int *max_order)
 {
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 0f4e0cf4f158..dabcca2ecc37 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -483,12 +483,12 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 }
 
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
-		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+		     int *max_order)
 {
 	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
 	struct kvm_gmem *gmem;
 	struct folio *folio;
-	struct page *page;
 	struct file *file;
 	int r;
 
@@ -514,9 +514,9 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		goto out_unlock;
 	}
 
-	page = folio_file_page(folio, index);
+	*page = folio_file_page(folio, index);
 
-	*pfn = page_to_pfn(page);
+	*pfn = page_to_pfn(*page);
 	if (max_order)
 		*max_order = 0;
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e53a14adf149..4db7248fb678 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3288,11 +3288,17 @@ void kvm_set_page_dirty(struct page *page)
 }
 EXPORT_SYMBOL_GPL(kvm_set_page_dirty);
 
-void kvm_set_page_accessed(struct page *page)
+static void __kvm_set_page_accessed(struct page *page)
 {
 	if (kvm_is_ad_tracked_page(page))
 		mark_page_accessed(page);
 }
+
+void kvm_set_page_accessed(struct page *page)
+{
+	if (page)
+		__kvm_set_page_accessed(page);
+}
 EXPORT_SYMBOL_GPL(kvm_set_page_accessed);
 
 void kvm_release_page_clean(struct page *page)
@@ -3302,7 +3308,7 @@ void kvm_release_page_clean(struct page *page)
 	if (!page)
 		return;
 
-	kvm_set_page_accessed(page);
+	__kvm_set_page_accessed(page);
 	put_page(page);
 }
 EXPORT_SYMBOL_GPL(kvm_release_page_clean);
-- 
2.44.0.rc0.258.g7320e95886-goog