From nobody Sat Oct 4 03:17:40 2025
Date: Wed, 20 Aug 2025 16:22:41 +0000
In-Reply-To: <20250820162242.2624752-1-rananta@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250820162242.2624752-1-rananta@google.com>
Message-ID: <20250820162242.2624752-2-rananta@google.com>
Subject: [PATCH v2 1/2] KVM: arm64: Split kvm_pgtable_stage2_destroy()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier
Cc: Raghavendra Rao Anata, Mingwei Zhang, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Split kvm_pgtable_stage2_destroy() into two parts:

- kvm_pgtable_stage2_destroy_range(), which performs the page-table walk
  and frees the entries over a range of addresses.
- kvm_pgtable_stage2_destroy_pgd(), which frees the PGD.

This refactoring enables subsequent patches to free large page-tables in
chunks, calling cond_resched() between each chunk, to yield the CPU as
necessary. Existing callers of kvm_pgtable_stage2_destroy() that probably
cannot take advantage of this (such as nVHE) will continue to function
as is.
Signed-off-by: Raghavendra Rao Ananta
Suggested-by: Oliver Upton
---
 arch/arm64/include/asm/kvm_pgtable.h | 30 ++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_pkvm.h    |  4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 25 +++++++++++++++++++----
 arch/arm64/kvm/mmu.c                 | 12 +++++++++--
 arch/arm64/kvm/pkvm.c                | 11 ++++++++--
 5 files changed, 73 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 2888b5d03757..1246216616b5 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -355,6 +355,11 @@ static inline kvm_pte_t *kvm_dereference_pteref(struct kvm_pgtable_walker *walke
 	return pteref;
 }
 
+static inline kvm_pte_t *kvm_dereference_pteref_raw(kvm_pteref_t pteref)
+{
+	return pteref;
+}
+
 static inline int kvm_pgtable_walk_begin(struct kvm_pgtable_walker *walker)
 {
 	/*
@@ -384,6 +389,11 @@ static inline kvm_pte_t *kvm_dereference_pteref(struct kvm_pgtable_walker *walke
 	return rcu_dereference_check(pteref, !(walker->flags & KVM_PGTABLE_WALK_SHARED));
 }
 
+static inline kvm_pte_t *kvm_dereference_pteref_raw(kvm_pteref_t pteref)
+{
+	return rcu_dereference_raw(pteref);
+}
+
 static inline int kvm_pgtable_walk_begin(struct kvm_pgtable_walker *walker)
 {
 	if (walker->flags & KVM_PGTABLE_WALK_SHARED)
@@ -551,6 +561,26 @@ static inline int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2
  */
 void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
 
+/**
+ * kvm_pgtable_stage2_destroy_range() - Destroy the unlinked range of addresses.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
+ * @addr:	Intermediate physical address at which to place the mapping.
+ * @size:	Size of the mapping.
+ *
+ * The page-table is assumed to be unreachable by any hardware walkers prior
+ * to freeing and therefore no TLB invalidation is performed.
+ */
+void kvm_pgtable_stage2_destroy_range(struct kvm_pgtable *pgt,
+				      u64 addr, u64 size);
+
+/**
+ * kvm_pgtable_stage2_destroy_pgd() - Destroy the PGD of guest stage-2 page-table.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
+ *
+ * It is assumed that the rest of the page-table is freed before this operation.
+ */
+void kvm_pgtable_stage2_destroy_pgd(struct kvm_pgtable *pgt);
+
 /**
  * kvm_pgtable_stage2_free_unlinked() - Free an unlinked stage-2 paging structure.
  * @mm_ops:	Memory management callbacks.
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index ea58282f59bb..35f9d9478004 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -179,7 +179,9 @@ struct pkvm_mapping {
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 			     struct kvm_pgtable_mm_ops *mm_ops);
-void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
+void pkvm_pgtable_stage2_destroy_range(struct kvm_pgtable *pgt,
+				       u64 addr, u64 size);
+void pkvm_pgtable_stage2_destroy_pgd(struct kvm_pgtable *pgt);
 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 			    enum kvm_pgtable_prot prot, void *mc,
 			    enum kvm_pgtable_walk_flags flags);
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index c351b4abd5db..c36f282a175d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1551,21 +1551,38 @@ static int stage2_free_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	return 0;
 }
 
-void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+void kvm_pgtable_stage2_destroy_range(struct kvm_pgtable *pgt,
+				      u64 addr, u64 size)
 {
-	size_t pgd_sz;
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_free_walker,
 		.flags	= KVM_PGTABLE_WALK_LEAF |
 			  KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
-	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
+	WARN_ON(kvm_pgtable_walk(pgt, addr, size, &walker));
+}
+
+void kvm_pgtable_stage2_destroy_pgd(struct kvm_pgtable *pgt)
+{
+	size_t pgd_sz;
+
 	pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level) * PAGE_SIZE;
-	pgt->mm_ops->free_pages_exact(kvm_dereference_pteref(&walker, pgt->pgd), pgd_sz);
+
+	/*
+	 * Since the pgtable is unlinked at this point, and not shared with
+	 * other walkers, safely dereference pgd with kvm_dereference_pteref_raw()
+	 */
+	pgt->mm_ops->free_pages_exact(kvm_dereference_pteref_raw(pgt->pgd), pgd_sz);
 	pgt->pgd = NULL;
 }
 
+void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+{
+	kvm_pgtable_stage2_destroy_range(pgt, 0, BIT(pgt->ia_bits));
+	kvm_pgtable_stage2_destroy_pgd(pgt);
+}
+
 void kvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level)
 {
 	kvm_pteref_t ptep = (kvm_pteref_t)pgtable;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1c78864767c5..e41fc7bcee24 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -903,6 +903,14 @@ static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long type)
 	return 0;
 }
 
+static void kvm_stage2_destroy(struct kvm_pgtable *pgt)
+{
+	unsigned int ia_bits = VTCR_EL2_IPA(pgt->mmu->vtcr);
+
+	KVM_PGT_FN(kvm_pgtable_stage2_destroy_range)(pgt, 0, BIT(ia_bits));
+	KVM_PGT_FN(kvm_pgtable_stage2_destroy_pgd)(pgt);
+}
+
 /**
  * kvm_init_stage2_mmu - Initialise a S2 MMU structure
  * @kvm:	The pointer to the KVM structure
@@ -979,7 +987,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	return 0;
 
 out_destroy_pgtable:
-	KVM_PGT_FN(kvm_pgtable_stage2_destroy)(pgt);
+	kvm_stage2_destroy(pgt);
 out_free_pgtable:
 	kfree(pgt);
 	return err;
@@ -1076,7 +1084,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 	write_unlock(&kvm->mmu_lock);
 
 	if (pgt) {
-		KVM_PGT_FN(kvm_pgtable_stage2_destroy)(pgt);
+		kvm_stage2_destroy(pgt);
 		kfree(pgt);
 	}
 }
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index fcd70bfe44fb..61827cf6fea4 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -316,9 +316,16 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 	return 0;
 }
 
-void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+void pkvm_pgtable_stage2_destroy_range(struct kvm_pgtable *pgt,
+				       u64 addr, u64 size)
 {
-	__pkvm_pgtable_stage2_unmap(pgt, 0, ~(0ULL));
+	__pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
+}
+
+void pkvm_pgtable_stage2_destroy_pgd(struct kvm_pgtable *pgt)
+{
+	/* Expected to be called after all pKVM mappings have been released. */
+	WARN_ON_ONCE(!RB_EMPTY_ROOT(&pgt->pkvm_mappings.rb_root));
 }
 
 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
-- 
2.51.0.rc2.233.g662b1ed5c5-goog

From nobody Sat Oct 4 03:17:40 2025
Date: Wed, 20 Aug 2025 16:22:42 +0000
In-Reply-To: <20250820162242.2624752-1-rananta@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250820162242.2624752-1-rananta@google.com>
Message-ID: <20250820162242.2624752-3-rananta@google.com>
Subject: [PATCH v2 2/2] KVM: arm64: Reschedule as needed when destroying the stage-2 page-tables
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier
Cc: Raghavendra Rao Anata, Mingwei Zhang, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

When a large VM, specifically one that holds a significant number of PTEs,
gets abruptly destroyed, the following warning is seen during the
page-table walk:

 sched: CPU 0 need_resched set for > 100018840 ns (100 ticks) without schedule
 CPU: 0 UID: 0 PID: 9617 Comm: kvm_page_table_ Tainted: G O 6.16.0-smp-DEV #3 NONE
 Tainted: [O]=OOT_MODULE
 Call trace:
  show_stack+0x20/0x38 (C)
  dump_stack_lvl+0x3c/0xb8
  dump_stack+0x18/0x30
  resched_latency_warn+0x7c/0x88
  sched_tick+0x1c4/0x268
  update_process_times+0xa8/0xd8
  tick_nohz_handler+0xc8/0x168
  __hrtimer_run_queues+0x11c/0x338
  hrtimer_interrupt+0x104/0x308
  arch_timer_handler_phys+0x40/0x58
  handle_percpu_devid_irq+0x8c/0x1b0
  generic_handle_domain_irq+0x48/0x78
  gic_handle_irq+0x1b8/0x408
  call_on_irq_stack+0x24/0x30
  do_interrupt_handler+0x54/0x78
  el1_interrupt+0x44/0x88
  el1h_64_irq_handler+0x18/0x28
  el1h_64_irq+0x84/0x88
  stage2_free_walker+0x30/0xa0 (P)
  __kvm_pgtable_walk+0x11c/0x258
  __kvm_pgtable_walk+0x180/0x258
  __kvm_pgtable_walk+0x180/0x258
  __kvm_pgtable_walk+0x180/0x258
  kvm_pgtable_walk+0xc4/0x140
  kvm_pgtable_stage2_destroy+0x5c/0xf0
  kvm_free_stage2_pgd+0x6c/0xe8
  kvm_uninit_stage2_mmu+0x24/0x48
  kvm_arch_flush_shadow_all+0x80/0xa0
  kvm_mmu_notifier_release+0x38/0x78
  __mmu_notifier_release+0x15c/0x250
  exit_mmap+0x68/0x400
  __mmput+0x38/0x1c8
  mmput+0x30/0x68
  exit_mm+0xd4/0x198
  do_exit+0x1a4/0xb00
  do_group_exit+0x8c/0x120
  get_signal+0x6d4/0x778
  do_signal+0x90/0x718
  do_notify_resume+0x70/0x170
  el0_svc+0x74/0xd8
  el0t_64_sync_handler+0x60/0xc8
  el0t_64_sync+0x1b0/0x1b8

The warning is seen mainly on host kernels that are configured not to
force-preempt, such as CONFIG_PREEMPT_NONE=y. To avoid this, instead of
walking the entire page-table in one go, split the walk into smaller
ranges and call cond_resched() between each range. Since the path is
executed during VM destruction, after the page-table structure is
unlinked from the KVM MMU, relying on cond_resched_rwlock_write() isn't
necessary.
Signed-off-by: Raghavendra Rao Ananta
Suggested-by: Oliver Upton
---
 arch/arm64/kvm/mmu.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e41fc7bcee24..0d6d42a86126 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -903,11 +903,35 @@ static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long type)
 	return 0;
 }
 
+/*
+ * Assume that @pgt is valid and unlinked from the KVM MMU to free the
+ * page-table without taking the kvm_mmu_lock and without performing any
+ * TLB invalidations.
+ *
+ * Also, the range of addresses can be large enough to cause need_resched
+ * warnings, for instance on CONFIG_PREEMPT_NONE kernels. Hence, invoke
+ * cond_resched() periodically to prevent hogging the CPU for a long time
+ * and to let something else run, if required.
+ */
+static void stage2_destroy_range(struct kvm_pgtable *pgt, phys_addr_t addr,
+				 phys_addr_t end)
+{
+	u64 next;
+
+	do {
+		next = stage2_range_addr_end(addr, end);
+		KVM_PGT_FN(kvm_pgtable_stage2_destroy_range)(pgt, addr,
+							     next - addr);
+		if (next != end)
+			cond_resched();
+	} while (addr = next, addr != end);
+}
+
 static void kvm_stage2_destroy(struct kvm_pgtable *pgt)
 {
 	unsigned int ia_bits = VTCR_EL2_IPA(pgt->mmu->vtcr);
 
-	KVM_PGT_FN(kvm_pgtable_stage2_destroy_range)(pgt, 0, BIT(ia_bits));
+	stage2_destroy_range(pgt, 0, BIT(ia_bits));
 	KVM_PGT_FN(kvm_pgtable_stage2_destroy_pgd)(pgt);
 }
 
-- 
2.51.0.rc2.233.g662b1ed5c5-goog