From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Reima Ishii
Date: Fri, 28 Jul 2023 17:51:57 -0700
Subject: [PATCH v2 2/5] KVM: x86/mmu: Harden new PGD against roots without shadow pages
Message-ID: <20230729005200.1057358-3-seanjc@google.com>
In-Reply-To: <20230729005200.1057358-1-seanjc@google.com>
References: <20230729005200.1057358-1-seanjc@google.com>
List-ID: <linux-kernel.vger.kernel.org>

Harden kvm_mmu_new_pgd() against NULL pointer dereference bugs by sanity
checking that the target root has an associated shadow page prior to
dereferencing said shadow page.  The code in question is guaranteed to
only see roots with shadow pages: fast_pgd_switch() explicitly frees the
current root if it doesn't have a shadow page, i.e. if it is a PAE root,
and that in turn prevents such roots from being cached.  But that
guarantee is all very subtle.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1eadfcde30be..dd8cc46551b2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4560,9 +4560,19 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
 				  union kvm_mmu_page_role role)
 {
-	return (role.direct || pgd == root->pgd) &&
-	       VALID_PAGE(root->hpa) &&
-	       role.word == root_to_sp(root->hpa)->role.word;
+	struct kvm_mmu_page *sp;
+
+	if (!VALID_PAGE(root->hpa))
+		return false;
+
+	if (!role.direct && pgd != root->pgd)
+		return false;
+
+	sp = root_to_sp(root->hpa);
+	if (WARN_ON_ONCE(!sp))
+		return false;
+
+	return role.word == sp->role.word;
 }
 
 /*
@@ -4682,9 +4692,12 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
 	 * If this is a direct root page, it doesn't have a write flooding
 	 * count. Otherwise, clear the write flooding count.
 	 */
-	if (!new_role.direct)
-		__clear_sp_write_flooding_count(
-				root_to_sp(vcpu->arch.mmu->root.hpa));
+	if (!new_role.direct) {
+		struct kvm_mmu_page *sp = root_to_sp(vcpu->arch.mmu->root.hpa);
+
+		if (!WARN_ON_ONCE(!sp))
+			__clear_sp_write_flooding_count(sp);
+	}
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
 
-- 
2.41.0.487.g6d72f3e995-goog