From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:18 -0700
Message-ID: <20230921203331.3746712-2-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 01/13] KVM: Assert that mmu_invalidate_in_progress *never* goes negative
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Move the assertion on the in-progress
invalidation count from the primary MMU's notifier path to KVM's common
notification path, i.e. assert that the count doesn't go negative even
when the invalidation is coming from KVM itself.

Opportunistically convert the assertion to a KVM_BUG_ON(), i.e. kill only
the affected VM, not the entire kernel.  A corrupted count is fatal to the
VM, e.g. the non-zero (negative) count will cause mmu_invalidate_retry()
to block any and all attempts to install new mappings.  But it's far from
guaranteed that an end() without a start() is fatal or even problematic to
anything other than the target VM, e.g. the underlying bug could simply be
a duplicate call to end().  And it's much more likely that a missed
invalidation, i.e. a potential use-after-free, would manifest as no
notification whatsoever, not an end() without a start().

Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a83dfef1316e..30708e460568 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -870,6 +870,7 @@ void kvm_mmu_invalidate_end(struct kvm *kvm)
          * in conjunction with the smp_rmb in mmu_invalidate_retry().
          */
         kvm->mmu_invalidate_in_progress--;
+        KVM_BUG_ON(kvm->mmu_invalidate_in_progress < 0, kvm);
 
         /*
          * Assert that at least one range must be added between start() and
@@ -906,8 +907,6 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
          */
         if (wake)
                 rcuwait_wake_up(&kvm->mn_memslots_update_rcuwait);
-
-        BUG_ON(kvm->mmu_invalidate_in_progress < 0);
 }
 
 static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
--
2.42.0.515.g380fc7ccd1-goog
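
[Editorial note] A standalone illustration of the invariant patch 01 enforces may help when
skimming the series: the in-progress invalidation count must never go negative, and a
violation should take down only the offending VM, not the host.  The sketch below is a
userspace toy model, not KVM code; the struct and helpers are illustrative stand-ins for
the real bookkeeping and KVM_BUG_ON() machinery.

/*
 * Toy model: an unbalanced end() marks only the affected "VM" as dead
 * instead of aborting the whole program (analogous to BUG_ON -> KVM_BUG_ON).
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_vm {
        int mmu_invalidate_in_progress;
        bool vm_bugged;         /* once set, this VM is dead; the host keeps running */
};

static void toy_invalidate_begin(struct toy_vm *vm)
{
        vm->mmu_invalidate_in_progress++;
}

static void toy_invalidate_end(struct toy_vm *vm)
{
        vm->mmu_invalidate_in_progress--;

        /* Analogous to KVM_BUG_ON(): warn and bug only this VM. */
        if (vm->mmu_invalidate_in_progress < 0) {
                fprintf(stderr, "unbalanced end(), bugging the VM\n");
                vm->vm_bugged = true;
        }
}

int main(void)
{
        struct toy_vm vm = { 0 };

        toy_invalidate_begin(&vm);
        toy_invalidate_end(&vm);
        toy_invalidate_end(&vm);        /* duplicate end() trips the assertion */

        printf("vm_bugged = %d\n", vm.vm_bugged);
        return 0;
}
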
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:19 -0700
Message-ID: <20230921203331.3746712-3-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 02/13] KVM: Actually truncate the inode when doing PUNCH_HOLE for guest_memfd
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Restore the call to truncate_inode_pages_range() in guest_memfd's handling
of PUNCH_HOLE that was unintentionally removed in a rebase gone bad.
Reported-by: Michael Roth
Fixes: 1d46f95498c5 ("KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory")
Signed-off-by: Sean Christopherson
---
 virt/kvm/guest_mem.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index a819367434e9..3c9e83a596fe 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -140,10 +140,13 @@ static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
          */
         filemap_invalidate_lock(inode->i_mapping);
 
-        list_for_each_entry(gmem, gmem_list, entry) {
+        list_for_each_entry(gmem, gmem_list, entry)
                 kvm_gmem_invalidate_begin(gmem, start, end);
+
+        truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
+
+        list_for_each_entry(gmem, gmem_list, entry)
                 kvm_gmem_invalidate_end(gmem, start, end);
-        }
 
         filemap_invalidate_unlock(inode->i_mapping);
 
--
2.42.0.515.g380fc7ccd1-goog
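
[Editorial note] PUNCH_HOLE is the userspace-visible mechanism for discarding guest_memfd
backing pages, which is why the missing truncate in patch 02 mattered.  A minimal sketch of
the userspace side is below; creating the guest_memfd descriptor is elided, so "gmem_fd" is
assumed to be a valid file descriptor, and the snippet is illustrative rather than a
complete test.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>

int punch_hole(int gmem_fd, off_t offset, off_t len)
{
        /* PUNCH_HOLE must be paired with KEEP_SIZE; the file length is unchanged. */
        if (fallocate(gmem_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      offset, len)) {
                perror("fallocate(PUNCH_HOLE)");
                return -1;
        }
        return 0;
}
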
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:20 -0700
Message-ID: <20230921203331.3746712-4-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 03/13] KVM: WARN if *any* MMU invalidation sequence doesn't add a range
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Tweak the assertion in kvm_mmu_invalidate_end() to unconditionally require
a range to be added between start() and end().  Asserting if and only if
kvm->mmu_invalidate_in_progress is non-zero makes the assertion all but
useless as it would fire only when there are multiple invalidations in
flight, which is not common and would also get a false negative if one or
more sequences, but not all, added a range.

Reported-by: Binbin Wu
Fixes: 145725d1542a ("KVM: Use gfn instead of hva for mmu_notifier_retry")
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 30708e460568..54480655bcce 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -873,11 +873,10 @@ void kvm_mmu_invalidate_end(struct kvm *kvm)
         KVM_BUG_ON(kvm->mmu_invalidate_in_progress < 0, kvm);
 
         /*
-         * Assert that at least one range must be added between start() and
-         * end().  Not adding a range isn't fatal, but it is a KVM bug.
+         * Assert that at least one range was added between start() and end().
+         * Not adding a range isn't fatal, but it is a KVM bug.
          */
-        WARN_ON_ONCE(kvm->mmu_invalidate_in_progress &&
-                     kvm->mmu_invalidate_range_start == INVALID_GPA);
+        WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA);
 }
 
 static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
--
2.42.0.515.g380fc7ccd1-goog
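
[Editorial note] Patches 01 and 03 together pin down the begin()/range_add()/end() contract.
The toy model below (plain userspace C, a single non-overlapping sequence, names chosen only
to mirror the KVM helpers) shows the shape of the rule the WARN now enforces unconditionally:
every invalidation sequence must record at least one range.  It is a sketch of the contract,
not of KVM's actual data structures.

#include <assert.h>
#include <stdint.h>

#define INVALID_GPA (~(uint64_t)0)

struct demo_vm {
        int in_progress;
        uint64_t range_start;
};

static void invalidate_begin(struct demo_vm *vm)
{
        /* A new top-level sequence starts with no range recorded. */
        if (!vm->in_progress++)
                vm->range_start = INVALID_GPA;
}

static void invalidate_range_add(struct demo_vm *vm, uint64_t start)
{
        if (vm->range_start == INVALID_GPA || start < vm->range_start)
                vm->range_start = start;
}

static void invalidate_end(struct demo_vm *vm)
{
        vm->in_progress--;
        /* Unconditional, as in patch 03: every sequence must add a range. */
        assert(vm->range_start != INVALID_GPA);
}

int main(void)
{
        struct demo_vm vm = { .range_start = INVALID_GPA };

        invalidate_begin(&vm);
        invalidate_range_add(&vm, 0x1000);
        invalidate_end(&vm);
        return 0;
}
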
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:21 -0700
Message-ID: <20230921203331.3746712-5-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 04/13] KVM: WARN if there are dangling MMU invalidations at VM destruction
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Add an assertion that there are no in-progress MMU invalidations when a
VM is being destroyed, with the exception of the scenario where KVM
unregisters its MMU notifier between an .invalidate_range_start() call and
the corresponding .invalidate_range_end().

KVM can't detect unpaired calls from the mmu_notifier due to the above
exception waiver, but the assertion can detect KVM bugs, e.g. such as the
bug that *almost* escaped initial guest_memfd development.

Link: https://lore.kernel.org/all/e397d30c-c6af-e68f-d18e-b4e3739c5389@linux.intel.com
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 54480655bcce..277afeedd670 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1381,9 +1381,16 @@ static void kvm_destroy_vm(struct kvm *kvm)
          * No threads can be waiting in kvm_swap_active_memslots() as the
          * last reference on KVM has been dropped, but freeing
          * memslots would deadlock without this manual intervention.
+         *
+         * If the count isn't unbalanced, i.e. KVM did NOT unregister between
+         * a start() and end(), then there shouldn't be any in-progress
+         * invalidations.
          */
         WARN_ON(rcuwait_active(&kvm->mn_memslots_update_rcuwait));
-        kvm->mn_active_invalidate_count = 0;
+        if (kvm->mn_active_invalidate_count)
+                kvm->mn_active_invalidate_count = 0;
+        else
+                WARN_ON(kvm->mmu_invalidate_in_progress);
 #else
         kvm_flush_shadow_all(kvm);
 #endif
--
2.42.0.515.g380fc7ccd1-goog
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:22 -0700
Message-ID: <20230921203331.3746712-6-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 05/13] KVM: Fix MMU invalidation bookkeeping in guest_memfd
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Acquire mmu_lock and do invalidate_{begin,end}() if and only if there is
at least one memslot that overlaps the to-be-invalidated range.  This
fixes a bug where KVM would leave a dangling in-progress invalidation as
the begin() call was unconditional, but the end() was not (only performed
if there was overlap).
Reported-by: Binbin Wu
Fixes: 1d46f95498c5 ("KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory")
Signed-off-by: Sean Christopherson
---
 virt/kvm/guest_mem.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index 3c9e83a596fe..68528e9cddd7 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -88,14 +88,10 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
 static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
                                       pgoff_t end)
 {
+        bool flush = false, found_memslot = false;
         struct kvm_memory_slot *slot;
         struct kvm *kvm = gmem->kvm;
         unsigned long index;
-        bool flush = false;
-
-        KVM_MMU_LOCK(kvm);
-
-        kvm_mmu_invalidate_begin(kvm);
 
         xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
                 pgoff_t pgoff = slot->gmem.pgoff;
@@ -107,13 +103,21 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
                         .may_block = true,
                 };
 
+                if (!found_memslot) {
+                        found_memslot = true;
+
+                        KVM_MMU_LOCK(kvm);
+                        kvm_mmu_invalidate_begin(kvm);
+                }
+
                 flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range);
         }
 
         if (flush)
                 kvm_flush_remote_tlbs(kvm);
 
-        KVM_MMU_UNLOCK(kvm);
+        if (found_memslot)
+                KVM_MMU_UNLOCK(kvm);
 }
 
 static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
@@ -121,10 +125,11 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
 {
         struct kvm *kvm = gmem->kvm;
 
-        KVM_MMU_LOCK(kvm);
-        if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT))
+        if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT)) {
+                KVM_MMU_LOCK(kvm);
                 kvm_mmu_invalidate_end(kvm);
-        KVM_MMU_UNLOCK(kvm);
+                KVM_MMU_UNLOCK(kvm);
+        }
 }
 
 static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
--
2.42.0.515.g380fc7ccd1-goog
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:23 -0700
Message-ID: <20230921203331.3746712-7-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 06/13] KVM: Disallow hugepages for incompatible gmem bindings, but let 'em succeed
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Remove the restriction that a guest_memfd instance that supports hugepages
can *only* be bound by memslots that are 100% compatible with hugepage
mappings, and instead force KVM to use an order-0 mapping if the binding
isn't compatible with hugepages.

The intent of the draconian binding restriction was purely to simplify the
guest_memfd implementation, e.g. to avoid repeating the existing logic in
KVM x86 for precisely tracking which GFNs support hugepages.  But checking
that the binding's offset and size is compatible is just as easy to do
when KVM wants to create a mapping.

And on the other hand, completely rejecting bindings that are incompatible
with hugepages makes it practically impossible for userspace to use a
single guest_memfd instance for all guest memory, e.g. on x86 it would be
impossible to skip the legacy VGA hole while still allowing hugepage
mappings for the rest of guest memory.
Suggested-by: Michael Roth
Link: https://lore.kernel.org/all/20230918163647.m6bjgwusc7ww5tyu@amd.com
Signed-off-by: Sean Christopherson
---
 virt/kvm/guest_mem.c | 54 ++++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index 68528e9cddd7..4f3a313f5532 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -434,20 +434,6 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
         return err;
 }
 
-static bool kvm_gmem_is_valid_size(loff_t size, u64 flags)
-{
-        if (size < 0 || !PAGE_ALIGNED(size))
-                return false;
-
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-        if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
-            !IS_ALIGNED(size, HPAGE_PMD_SIZE))
-                return false;
-#endif
-
-        return true;
-}
-
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 {
         loff_t size = args->size;
@@ -460,9 +446,15 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
         if (flags & ~valid_flags)
                 return -EINVAL;
 
-        if (!kvm_gmem_is_valid_size(size, flags))
+        if (size < 0 || !PAGE_ALIGNED(size))
                 return -EINVAL;
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+        if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
+            !IS_ALIGNED(size, HPAGE_PMD_SIZE))
+                return -EINVAL;
+#endif
+
         return __kvm_gmem_create(kvm, size, flags, kvm_gmem_mnt);
 }
 
@@ -470,7 +462,7 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
                   unsigned int fd, loff_t offset)
 {
         loff_t size = slot->npages << PAGE_SHIFT;
-        unsigned long start, end, flags;
+        unsigned long start, end;
         struct kvm_gmem *gmem;
         struct inode *inode;
         struct file *file;
@@ -489,16 +481,9 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
                 goto err;
 
         inode = file_inode(file);
-        flags = (unsigned long)inode->i_private;
 
-        /*
-         * For simplicity, require the offset into the file and the size of the
-         * memslot to be aligned to the largest possible page size used to back
-         * the file (same as the size of the file itself).
-         */
-        if (!kvm_gmem_is_valid_size(offset, flags) ||
-            !kvm_gmem_is_valid_size(size, flags))
-                goto err;
+        if (offset < 0 || !PAGE_ALIGNED(offset))
+                return -EINVAL;
 
         if (offset + size > i_size_read(inode))
                 goto err;
@@ -599,8 +584,23 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
         page = folio_file_page(folio, index);
 
         *pfn = page_to_pfn(page);
-        if (max_order)
-                *max_order = compound_order(compound_head(page));
+        if (!max_order)
+                goto success;
+
+        *max_order = compound_order(compound_head(page));
+        if (!*max_order)
+                goto success;
+
+        /*
+         * For simplicity, allow mapping a hugepage if and only if the entire
+         * binding is compatible, i.e. don't bother supporting mapping interior
+         * sub-ranges with hugepages (unless userspace comes up with a *really*
+         * strong use case for needing hugepages within unaligned bindings).
+         */
+        if (!IS_ALIGNED(slot->gmem.pgoff, 1ull << *max_order) ||
+            !IS_ALIGNED(slot->npages, 1ull << *max_order))
+                *max_order = 0;
+success:
         r = 0;
 
 out_unlock:
--
2.42.0.515.g380fc7ccd1-goog
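
[Editorial note] The alignment rule patch 06 moves into kvm_gmem_get_pfn() can be worked
through numerically: a hugepage of order N may be used only if both the binding's starting
page offset and its size (in pages) are aligned to 1 << N.  The self-contained example below
re-implements the check in userspace; IS_ALIGNED here is a local re-definition for the
example, not the kernel macro.

#include <stdint.h>
#include <stdio.h>

#define IS_ALIGNED(x, a)        (((x) & ((a) - 1)) == 0)

static int max_mapping_order(uint64_t pgoff, uint64_t npages, int max_order)
{
        /* Fall back to order-0 (4KiB) if the binding is misaligned. */
        if (!IS_ALIGNED(pgoff, 1ull << max_order) ||
            !IS_ALIGNED(npages, 1ull << max_order))
                return 0;
        return max_order;
}

int main(void)
{
        /* 2MiB pages are order 9 with a 4KiB base page (512 * 4KiB = 2MiB). */
        printf("%d\n", max_mapping_order(512, 1024, 9));  /* 9: 2MiB mappings allowed */
        printf("%d\n", max_mapping_order(1, 1024, 9));    /* 0: offset is misaligned */
        return 0;
}
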
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:24 -0700
Message-ID: <20230921203331.3746712-8-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 07/13] KVM: x86/mmu: Track PRIVATE impact on hugepage mappings for all memslots
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Track the effects of private attributes on potential hugepage mappings if
the VM supports private memory, i.e. even if the target memslot can only
ever be mapped shared.  If userspace configures a chunk of memory as
private, KVM must not allow that memory to be mapped shared regardless of
whether or not the *current* memslot can be mapped private.  E.g. if the
guest accesses a private range using a shared memslot, then KVM must exit
to userspace.

Fixes: 5bb0b4e162d1 ("KVM: x86: Disallow hugepages when memory attributes are mixed")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 269d4dc47c98..148931cf9dba 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7314,10 +7314,12 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
         lockdep_assert_held(&kvm->slots_lock);
 
         /*
-         * KVM x86 currently only supports KVM_MEMORY_ATTRIBUTE_PRIVATE, skip
-         * the slot if the slot will never consume the PRIVATE attribute.
+         * Calculate which ranges can be mapped with hugepages even if the slot
+         * can't map memory PRIVATE.  KVM mustn't create a SHARED hugepage over
+         * a range that has PRIVATE GFNs, and conversely converting a range to
+         * SHARED may now allow hugepages.
          */
-        if (!kvm_slot_can_be_private(slot))
+        if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
                 return false;
 
         /*
@@ -7372,7 +7374,7 @@ void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
 {
         int level;
 
-        if (!kvm_slot_can_be_private(slot))
+        if (!kvm_arch_has_private_mem(kvm))
                 return;
 
         for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
--
2.42.0.515.g380fc7ccd1-goog
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:25 -0700
Message-ID: <20230921203331.3746712-9-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 08/13] KVM: x86/mmu: Zap shared-only memslots when private attribute changes
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Zap all relevant memslots, including shared-only memslots, if the private
memory attribute is being changed.  If userspace converts a range to
private, KVM must zap shared SPTEs to prevent the guest from accessing
the memory as shared.  If userspace converts a range to shared, zapping
SPTEs for shared-only memslots isn't strictly necessary, but doing so
ensures that KVM will install a hugepage mapping if possible, e.g. if a
2MiB range that was mixed is converted to be 100% shared.

Fixes: dcde045383f3 ("KVM: x86/mmu: Handle page fault for private memory")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 148931cf9dba..aa67d9d6fcf8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7259,10 +7259,17 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
                                         struct kvm_gfn_range *range)
 {
         /*
-         * KVM x86 currently only supports KVM_MEMORY_ATTRIBUTE_PRIVATE, skip
-         * the slot if the slot will never consume the PRIVATE attribute.
+         * Zap SPTEs even if the slot can't be mapped PRIVATE.  KVM x86 only
+         * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM
+         * can simply ignore such slots.  But if userspace is making memory
+         * PRIVATE, then KVM must prevent the guest from accessing the memory
+         * as shared.  And if userspace is making memory SHARED and this point
+         * is reached, then at least one page within the range was previously
+         * PRIVATE, i.e. the slot's possible hugepage ranges are changing.
+         * Zapping SPTEs in this case ensures KVM will reassess whether or not
+         * a hugepage can be used for affected ranges.
          */
-        if (!kvm_slot_can_be_private(range->slot))
+        if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
                 return false;
 
         return kvm_mmu_unmap_gfn_range(kvm, range);
--
2.42.0.515.g380fc7ccd1-goog
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:26 -0700
Message-ID: <20230921203331.3746712-10-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 09/13] KVM: Always add relevant ranges to invalidation set when changing attributes
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

When setting memory attributes, add all affected memslot ranges to the set
of invalidation ranges before calling into arch code.  Even if the change
in attributes doesn't strictly require zapping, it's not at all obvious
that letting arch code establish new mappings while the attributes are in
flux is safe and/or desirable.

Unconditionally adding ranges allows KVM to keep its sanity check that at
least one range is added between begin() and end(), e.g. to guard against
a missed add() call, without needing complex code to condition the
begin()/end() on arch behavior.

Fixes: 9a327182447a ("KVM: Introduce per-page memory attributes")
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 277afeedd670..96fc609459e3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2529,6 +2529,25 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm,
         KVM_MMU_UNLOCK(kvm);
 }
 
+static bool kvm_pre_set_memory_attributes(struct kvm *kvm,
+                                          struct kvm_gfn_range *range)
+{
+        /*
+         * Unconditionally add the range to the invalidation set, regardless of
+         * whether or not the arch callback actually needs to zap SPTEs.  E.g.
+         * if KVM supports RWX attributes in the future and the attributes are
+         * going from R=>RW, zapping isn't strictly necessary.  Unconditionally
+         * adding the range allows KVM to require that MMU invalidations add at
+         * least one range between begin() and end(), e.g. allows KVM to detect
+         * bugs where the add() is missed.  Relaxing the rule *might* be safe,
+         * but it's not obvious that allowing new mappings while the attributes
+         * are in flux is desirable or worth the complexity.
+         */
+        kvm_mmu_invalidate_range_add(kvm, range->start, range->end);
+
+        return kvm_arch_pre_set_memory_attributes(kvm, range);
+}
+
 /* Set @attributes for the gfn range [@start, @end). */
 static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
                                      unsigned long attributes)
@@ -2536,7 +2555,7 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
         struct kvm_mmu_notifier_range pre_set_range = {
                 .start = start,
                 .end = end,
-                .handler = kvm_arch_pre_set_memory_attributes,
+                .handler = kvm_pre_set_memory_attributes,
                 .on_lock = kvm_mmu_invalidate_begin,
                 .flush_on_ret = true,
                 .may_block = true,
--
2.42.0.515.g380fc7ccd1-goog
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:27 -0700
Message-ID: <20230921203331.3746712-11-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 10/13] KVM: x86/mmu: Drop repeated add() of to-be-invalidated range
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Use kvm_unmap_gfn_range() instead of kvm_mmu_unmap_gfn_range() when
handling memory attribute ranges now that common KVM adds the target range
to the invalidation set, i.e. calls kvm_mmu_invalidate_range_add() before
invoking the arch callback.

Fixes: dcde045383f3 ("KVM: x86/mmu: Handle page fault for private memory")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aa67d9d6fcf8..bcb812a7f563 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7272,7 +7272,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
         if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
                 return false;
 
-        return kvm_mmu_unmap_gfn_range(kvm, range);
+        return kvm_unmap_gfn_range(kvm, range);
 }
 
 static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
--
2.42.0.515.g380fc7ccd1-goog
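
[Editorial note] Patches 07-10 all harden the path that runs when userspace flips a range
between shared and private.  For orientation, a sketch of the userspace side of that flow is
below.  The ioctl name and KVM_MEMORY_ATTRIBUTE_PRIVATE appear in this series; the struct
layout (address/size/attributes/flags) is assumed from the UAPI the series builds on, and
"vm_fd" is assumed to be an already-created KVM VM file descriptor, so treat this strictly
as an illustrative sketch.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int set_private(int vm_fd, __u64 gpa, __u64 size)
{
        struct kvm_memory_attributes attr;

        memset(&attr, 0, sizeof(attr));
        attr.address = gpa;
        attr.size = size;
        attr.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE;

        /* KVM zaps any existing (shared) mappings for [gpa, gpa + size). */
        if (ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr)) {
                perror("KVM_SET_MEMORY_ATTRIBUTES");
                return -1;
        }
        return 0;
}
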
From nobody Fri Feb 13 17:31:38 2026
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:28 -0700
Message-ID: <20230921203331.3746712-12-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Subject: [PATCH 11/13] KVM: selftests: Refactor private mem conversions to prep for punch_hole test
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Refactor the private memory conversions test to prepare for adding a test
to verify PUNCH_HOLE functionality *without* actually doing a proper
conversion, i.e. without calling KVM_SET_MEMORY_ATTRIBUTES.

Make setting attributes optional, rename the guest code to be more
descriptive, and extract the ranges to a global variable (iterating over
multiple ranges is less interesting for PUNCH_HOLE, but with a common
array it's trivially easy to do so).

Fixes: 90535ca08f76 ("KVM: selftests: Add x86-only selftest for private memory conversions")
Signed-off-by: Sean Christopherson
---
 .../kvm/x86_64/private_mem_conversions_test.c | 51 ++++++++++---------
 1 file changed, 27 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index 50541246d6fd..b80cf7342d0d 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -83,13 +83,14 @@ static void guest_sync_private(uint64_t gpa, uint64_t size, uint8_t pattern)
 }
 
 /* Arbitrary values, KVM doesn't care about the attribute flags. */
*/ -#define MAP_GPA_SHARED BIT(0) -#define MAP_GPA_DO_FALLOCATE BIT(1) +#define MAP_GPA_SET_ATTRIBUTES BIT(0) +#define MAP_GPA_SHARED BIT(1) +#define MAP_GPA_DO_FALLOCATE BIT(2) =20 static void guest_map_mem(uint64_t gpa, uint64_t size, bool map_shared, bool do_fallocate) { - uint64_t flags =3D 0; + uint64_t flags =3D MAP_GPA_SET_ATTRIBUTES; =20 if (map_shared) flags |=3D MAP_GPA_SHARED; @@ -108,19 +109,19 @@ static void guest_map_private(uint64_t gpa, uint64_t = size, bool do_fallocate) guest_map_mem(gpa, size, false, do_fallocate); } =20 -static void guest_run_test(uint64_t base_gpa, bool do_fallocate) +struct { + uint64_t offset; + uint64_t size; +} static const test_ranges[] =3D { + GUEST_STAGE(0, PAGE_SIZE), + GUEST_STAGE(0, SZ_2M), + GUEST_STAGE(PAGE_SIZE, PAGE_SIZE), + GUEST_STAGE(PAGE_SIZE, SZ_2M), + GUEST_STAGE(SZ_2M, PAGE_SIZE), +}; + +static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fall= ocate) { - struct { - uint64_t offset; - uint64_t size; - uint8_t pattern; - } stages[] =3D { - GUEST_STAGE(0, PAGE_SIZE), - GUEST_STAGE(0, SZ_2M), - GUEST_STAGE(PAGE_SIZE, PAGE_SIZE), - GUEST_STAGE(PAGE_SIZE, SZ_2M), - GUEST_STAGE(SZ_2M, PAGE_SIZE), - }; const uint8_t init_p =3D 0xcc; uint64_t j; int i; @@ -130,9 +131,9 @@ static void guest_run_test(uint64_t base_gpa, bool do_f= allocate) guest_sync_shared(base_gpa, PER_CPU_DATA_SIZE, (uint8_t)~init_p, init_p); memcmp_g(base_gpa, init_p, PER_CPU_DATA_SIZE); =20 - for (i =3D 0; i < ARRAY_SIZE(stages); i++) { - uint64_t gpa =3D base_gpa + stages[i].offset; - uint64_t size =3D stages[i].size; + for (i =3D 0; i < ARRAY_SIZE(test_ranges); i++) { + uint64_t gpa =3D base_gpa + test_ranges[i].offset; + uint64_t size =3D test_ranges[i].size; uint8_t p1 =3D 0x11; uint8_t p2 =3D 0x22; uint8_t p3 =3D 0x33; @@ -214,11 +215,11 @@ static void guest_run_test(uint64_t base_gpa, bool do= _fallocate) static void guest_code(uint64_t base_gpa) { /* - * Run everything twice, with and without doing fallocate() on the - * guest_memfd backing when converting between shared and private. + * Run the conversion test twice, with and without doing fallocate() on + * the guest_memfd backing when converting between shared and private. */ - guest_run_test(base_gpa, false); - guest_run_test(base_gpa, true); + guest_test_explicit_conversion(base_gpa, false); + guest_test_explicit_conversion(base_gpa, true); GUEST_DONE(); } =20 @@ -227,6 +228,7 @@ static void handle_exit_hypercall(struct kvm_vcpu *vcpu) struct kvm_run *run =3D vcpu->run; uint64_t gpa =3D run->hypercall.args[0]; uint64_t size =3D run->hypercall.args[1] * PAGE_SIZE; + bool set_attributes =3D run->hypercall.args[2] & MAP_GPA_SET_ATTRIBUTES; bool map_shared =3D run->hypercall.args[2] & MAP_GPA_SHARED; bool do_fallocate =3D run->hypercall.args[2] & MAP_GPA_DO_FALLOCATE; struct kvm_vm *vm =3D vcpu->vm; @@ -238,8 +240,9 @@ static void handle_exit_hypercall(struct kvm_vcpu *vcpu) if (do_fallocate) vm_guest_mem_fallocate(vm, gpa, size, map_shared); =20 - vm_set_memory_attributes(vm, gpa, size, - map_shared ? 0 : KVM_MEMORY_ATTRIBUTE_PRIVATE); + if (set_attributes) + vm_set_memory_attributes(vm, gpa, size, + map_shared ? 
+                                         map_shared ? 0 : KVM_MEMORY_ATTRIBUTE_PRIVATE);
         run->hypercall.ret = 0;
 }
 
-- 
2.42.0.515.g380fc7ccd1-goog
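To make the effect of the flag split concrete, here is a short guest-side
sketch. The helper name is hypothetical (it is not part of the patch), but
the flag handling mirrors the handle_exit_hypercall() hunk above.

/*
 * Hypothetical helper: with MAP_GPA_SET_ATTRIBUTES split out, the guest can
 * request fallocate() only, i.e. the host skips vm_set_memory_attributes()
 * entirely.  The PUNCH_HOLE test in the next patch relies on exactly this.
 */
static void guest_fallocate_only(uint64_t gpa, uint64_t size, bool map_shared)
{
        uint64_t flags = MAP_GPA_DO_FALLOCATE;

        if (map_shared)
                flags |= MAP_GPA_SHARED;

        kvm_hypercall_map_gpa_range(gpa, size, flags);
}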
Date: Thu, 21 Sep 2023 13:33:29 -0700
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Message-ID: <20230921203331.3746712-13-seanjc@google.com>
Subject: [PATCH 12/13] KVM: selftests: Add a "pure" PUNCH_HOLE on guest_memfd testcase
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Add a PUNCH_HOLE testcase to the private memory conversions test that
verifies PUNCH_HOLE actually frees memory.  Directly verifying that KVM
frees memory is impractical, if it's even possible, so instead indirectly
verify memory is freed by asserting that the guest reads zeroes after a
PUNCH_HOLE.  E.g. if KVM zaps SPTEs but doesn't actually punch a hole in
the inode, the subsequent read will still see the previous value.  And
obviously punching a hole shouldn't cause explosions.

Signed-off-by: Sean Christopherson
---
 .../kvm/x86_64/private_mem_conversions_test.c | 61 +++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index b80cf7342d0d..c04e7d61a585 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -212,6 +212,60 @@ static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fallocate)
         }
 }
 
+static void guest_punch_hole(uint64_t gpa, uint64_t size)
+{
+        /* "Mapping" memory shared via fallocate() is done via PUNCH_HOLE. */
+        uint64_t flags = MAP_GPA_SHARED | MAP_GPA_DO_FALLOCATE;
+
+        kvm_hypercall_map_gpa_range(gpa, size, flags);
+}
+
+/*
+ * Test that PUNCH_HOLE actually frees memory by punching holes without doing a
+ * proper conversion.  Freeing (PUNCH_HOLE) should zap SPTEs, and reallocating
+ * (subsequent fault) should zero memory.
+ */
+static void guest_test_punch_hole(uint64_t base_gpa, bool precise)
+{
+        const uint8_t init_p = 0xcc;
+        int i;
+
+        /*
+         * Convert the entire range to private, this testcase is all about
+         * punching holes in guest_memfd, i.e. shared mappings aren't needed.
+         */
+        guest_map_private(base_gpa, PER_CPU_DATA_SIZE, false);
+
+        for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
+                uint64_t gpa = base_gpa + test_ranges[i].offset;
+                uint64_t size = test_ranges[i].size;
+
+                /*
+                 * Free all memory before each iteration, even for the !precise
+                 * case where the memory will be faulted back in.  Freeing and
+                 * reallocating should obviously work, and freeing all memory
+                 * minimizes the probability of cross-testcase influence.
+                 */
+                guest_punch_hole(base_gpa, PER_CPU_DATA_SIZE);
+
+                /* Fault-in and initialize memory, and verify the pattern. */
+                if (precise) {
+                        memset((void *)gpa, init_p, size);
+                        memcmp_g(gpa, init_p, size);
+                } else {
+                        memset((void *)base_gpa, init_p, PER_CPU_DATA_SIZE);
+                        memcmp_g(base_gpa, init_p, PER_CPU_DATA_SIZE);
+                }
+
+                /*
+                 * Punch a hole at the target range and verify that reads from
+                 * the guest succeed and return zeroes.
+                 */
+                guest_punch_hole(gpa, size);
+                memcmp_g(gpa, 0, size);
+        }
+}
+
 static void guest_code(uint64_t base_gpa)
 {
         /*
@@ -220,6 +274,13 @@ static void guest_code(uint64_t base_gpa)
          */
         guest_test_explicit_conversion(base_gpa, false);
         guest_test_explicit_conversion(base_gpa, true);
+
+        /*
+         * Run the PUNCH_HOLE test twice too, once with the entire guest_memfd
+         * faulted in, once with only the target range faulted in.
+         */
+        guest_test_punch_hole(base_gpa, false);
+        guest_test_punch_hole(base_gpa, true);
         GUEST_DONE();
 }
 
-- 
2.42.0.515.g380fc7ccd1-goog
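For context, a minimal host-side sketch of the operation the new testcase
exercises, under the assumption that punching a hole in guest_memfd boils
down to a standard fallocate(2) call; the selftest itself goes through
vm_guest_mem_fallocate(), and the fd/offset plumbing below is hypothetical.

#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>

/* Punch a hole in a guest_memfd range; subsequent accesses read back zeroes. */
static void punch_hole(int guest_memfd, off_t offset, off_t len)
{
        /* PUNCH_HOLE must be paired with KEEP_SIZE. */
        int ret = fallocate(guest_memfd,
                            FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                            offset, len);

        assert(!ret);
}

The guest-side memcmp_g(gpa, 0, size) check is the indirect proof that the
backing pages were truly freed: a zap-only bug would leave the old contents
in place, whereas a freed-and-refaulted page must read back as zeroes.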
Date: Thu, 21 Sep 2023 13:33:30 -0700
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Message-ID: <20230921203331.3746712-14-seanjc@google.com>
Subject: [PATCH 13/13] KVM: Rename guest_mem.c to guest_memfd.c
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Use guest_memfd.c for the KVM_CREATE_GUEST_MEMFD implementation to make it
more obvious that the file holds more than generic "guest memory" APIs,
and to provide a stronger conceptual connection with memfd.c.

Signed-off-by: Sean Christopherson
---
 virt/kvm/Makefile.kvm                   | 2 +-
 virt/kvm/{guest_mem.c => guest_memfd.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename virt/kvm/{guest_mem.c => guest_memfd.c} (100%)

diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index a5a61bbe7f4c..724c89af78af 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -12,4 +12,4 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
-kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_mem.o
+kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_memfd.o
diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_memfd.c
similarity index 100%
rename from virt/kvm/guest_mem.c
rename to virt/kvm/guest_memfd.c
-- 
2.42.0.515.g380fc7ccd1-goog