From nobody Mon Apr 27 09:13:37 2026
Reply-To: Sean Christopherson
Date: Tue, 14 Jun 2022 23:33:21 +0000
In-Reply-To: <20220614233328.3896033-1-seanjc@google.com>
Message-Id: <20220614233328.3896033-2-seanjc@google.com>
References: <20220614233328.3896033-1-seanjc@google.com>
Subject: [PATCH v2 1/8] KVM: x86/mmu: Drop unused CMPXCHG macro from paging_tmpl.h
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lai Jiangshan

From: Lai Jiangshan

Drop the CMPXCHG macro from paging_tmpl.h; it's no longer used now that
KVM uses a common uaccess helper to do 8-byte CMPXCHG.
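For background on the conversion, here is a minimal user-space sketch of
the pattern the common helper provides: a single 8-byte compare-and-exchange
that sets the Accessed/Dirty bits in a guest PTE. The helper name, bit
positions, and use of C11 atomics are illustrative stand-ins; the kernel
helper operates on user memory with fault handling, which this sketch
deliberately omits.

/*
 * Sketch only: mirrors the shape of the A/D-bit update, not the kernel's
 * __try_cmpxchg_user() implementation. Bit 5 = Accessed, bit 6 = Dirty,
 * matching the x86 PTE layout.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PT_ACCESSED_MASK (1ULL << 5)
#define PT_DIRTY_MASK    (1ULL << 6)

/* Returns 0 on success, -1 if the PTE changed underneath us. */
static int set_accessed_dirty(_Atomic uint64_t *ptep, uint64_t orig_pte)
{
	uint64_t new_pte = orig_pte | PT_ACCESSED_MASK | PT_DIRTY_MASK;

	/* One 8-byte CMPXCHG regardless of guest paging mode. */
	if (!atomic_compare_exchange_strong(ptep, &orig_pte, new_pte))
		return -1;
	return 0;
}

int main(void)
{
	_Atomic uint64_t pte = 0x1000 | 0x1;	/* present, PA 0x1000 */

	if (set_accessed_dirty(&pte, pte))
		return 1;
	printf("pte = %#llx\n", (unsigned long long)pte);
	return 0;
}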
Fixes: f122dfe44768 ("KVM: x86: Use __try_cmpxchg_user() to update guest PTE A/D bits")
Signed-off-by: Lai Jiangshan
[sean: drop only CMPXCHG, update changelog accordingly]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fe35d8fd3276..f595c4b8657f 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -34,7 +34,6 @@
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
 	#ifdef CONFIG_X86_64
 	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
-	#define CMPXCHG "cmpxchgq"
 	#else
 	#define PT_MAX_FULL_LEVELS 2
 	#endif
@@ -51,7 +50,6 @@
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
-	#define CMPXCHG "cmpxchgl"
 #elif PTTYPE == PTTYPE_EPT
 	#define pt_element_t u64
 	#define guest_walker guest_walkerEPT
@@ -64,9 +62,6 @@
 	#define PT_GUEST_DIRTY_SHIFT 9
 	#define PT_GUEST_ACCESSED_SHIFT 8
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.ad_disabled)
-	#ifdef CONFIG_X86_64
-	#define CMPXCHG "cmpxchgq"
-	#endif
 	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
 #else
 	#error Invalid PTTYPE value
@@ -1100,7 +1095,6 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 #undef PT_MAX_FULL_LEVELS
 #undef gpte_to_gfn
 #undef gpte_to_gfn_lvl
-#undef CMPXCHG
 #undef PT_GUEST_ACCESSED_MASK
 #undef PT_GUEST_DIRTY_MASK
 #undef PT_GUEST_DIRTY_SHIFT
-- 
2.36.1.476.g0c4daa206d-goog
From nobody Mon Apr 27 09:13:37 2026
Reply-To: Sean Christopherson
Date: Tue, 14 Jun 2022 23:33:22 +0000
In-Reply-To: <20220614233328.3896033-1-seanjc@google.com>
Message-Id: <20220614233328.3896033-3-seanjc@google.com>
References: <20220614233328.3896033-1-seanjc@google.com>
Subject: [PATCH v2 2/8] KVM: VMX: Refactor 32-bit PSE PT creation to avoid using MMU macro
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lai Jiangshan

Compute the number of PTEs to be filled for the 32-bit PSE page tables
using the page size and the size of each entry. While using the MMU's
PT32_ENT_PER_PAGE macro is arguably better in isolation, removing VMX's
usage will allow a future namespacing cleanup to move the guest page
table macros into paging_tmpl.h, out of the reach of code that isn't
directly related to shadow paging.

No functional change intended.
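For reference, a standalone sketch of the arithmetic this relies on
(PAGE_SIZE and PT32_ENT_PER_PAGE restated locally; an illustration, not
the kernel code):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096
#define PT32_PT_BITS 10
#define PT32_ENT_PER_PAGE (1 << PT32_PT_BITS)

int main(void)
{
	uint32_t tmp = 0;
	int nr_ptes = PAGE_SIZE / sizeof(tmp);

	/* 4096 / 4 = 1024, identical to the macro being avoided. */
	assert(nr_ptes == PT32_ENT_PER_PAGE);

	/* Entry i of the identity map covers the 4 MiB region at i << 22. */
	printf("entries per table: %d, entry 1 base: %#x\n",
	       nr_ptes, 1u << 22);
	return 0;
}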
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5e14e4c40007..b774f8c1b952 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3704,7 +3704,7 @@ static int init_rmode_identity_map(struct kvm *kvm)
 	}

 	/* Set up identity-mapping pagetable for EPT in real mode */
-	for (i = 0; i < PT32_ENT_PER_PAGE; i++) {
+	for (i = 0; i < (PAGE_SIZE / sizeof(tmp)); i++) {
 		tmp = (i << 22) + (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
 			_PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
 		if (__copy_to_user(uaddr + i * sizeof(tmp), &tmp, sizeof(tmp))) {
-- 
2.36.1.476.g0c4daa206d-goog
From nobody Mon Apr 27 09:13:37 2026
Reply-To: Sean Christopherson
Date: Tue, 14 Jun 2022 23:33:23 +0000
In-Reply-To: <20220614233328.3896033-1-seanjc@google.com>
Message-Id: <20220614233328.3896033-4-seanjc@google.com>
References: <20220614233328.3896033-1-seanjc@google.com>
Subject: [PATCH v2 3/8] KVM: x86/mmu: Bury 32-bit PSE paging helpers in paging_tmpl.h
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lai Jiangshan

Move a handful of one-off macros and helpers for 32-bit PSE paging into
paging_tmpl.h and hide them behind "PTTYPE == 32". Under no circumstance
should anything but 32-bit shadow paging care about PSE paging.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h             |  5 -----
 arch/x86/kvm/mmu/mmu.c         |  7 -------
 arch/x86/kvm/mmu/paging_tmpl.h | 18 +++++++++++++++++-
 3 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index f8192864b496..d1021e34ac15 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -34,11 +34,6 @@
 #define PT_DIR_PAT_SHIFT 12
 #define PT_DIR_PAT_MASK (1ULL << PT_DIR_PAT_SHIFT)

-#define PT32_DIR_PSE36_SIZE 4
-#define PT32_DIR_PSE36_SHIFT 13
-#define PT32_DIR_PSE36_MASK \
-	(((1ULL << PT32_DIR_PSE36_SIZE) - 1) << PT32_DIR_PSE36_SHIFT)
-
 #define PT64_ROOT_5LEVEL 5
 #define PT64_ROOT_4LEVEL 4
 #define PT32_ROOT_LEVEL 2
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f168693695bd..73497da1a99b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -324,13 +324,6 @@ static int is_cpuid_PSE36(void)
 	return 1;
 }

-static gfn_t pse36_gfn_delta(u32 gpte)
-{
-	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;
-
-	return (gpte & PT32_DIR_PSE36_MASK) << shift;
-}
-
 #ifdef CONFIG_X86_64
 static void __set_spte(u64 *sptep, u64 spte)
 {
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f595c4b8657f..55fd35b1b227 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -50,6 +50,11 @@
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
+
+	#define PT32_DIR_PSE36_SIZE 4
+	#define PT32_DIR_PSE36_SHIFT 13
+	#define PT32_DIR_PSE36_MASK \
+		(((1ULL << PT32_DIR_PSE36_SIZE) - 1) << PT32_DIR_PSE36_SHIFT)
 #elif PTTYPE == PTTYPE_EPT
 	#define pt_element_t u64
 	#define guest_walker guest_walkerEPT
@@ -92,6 +97,15 @@ struct guest_walker {
 	struct x86_exception fault;
 };

+#if PTTYPE == 32
+static inline gfn_t pse36_gfn_delta(u32 gpte)
+{
+	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;
+
+	return (gpte & PT32_DIR_PSE36_MASK) << shift;
+}
+#endif
+
 static gfn_t gpte_to_gfn_lvl(pt_element_t gpte, int lvl)
 {
 	return (gpte & PT_LVL_ADDR_MASK(lvl)) >> PAGE_SHIFT;
@@ -416,8 +430,10 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	gfn = gpte_to_gfn_lvl(pte, walker->level);
 	gfn += (addr & PT_LVL_OFFSET_MASK(walker->level)) >> PAGE_SHIFT;

-	if (PTTYPE == 32 && walker->level > PG_LEVEL_4K && is_cpuid_PSE36())
+#if PTTYPE == 32
+	if (walker->level > PG_LEVEL_4K && is_cpuid_PSE36())
 		gfn += pse36_gfn_delta(pte);
+#endif

 	real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn), access, &walker->fault);
 	if (real_gpa == UNMAPPED_GVA)
-- 
2.36.1.476.g0c4daa206d-goog
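For readers who want to sanity-check the PSE-36 math that just moved:
KVM's mask takes guest PA bits 35:32 from PTE bits 16:13, and shifting
the masked bits left by 32 - 13 - 12 = 7 lands them at their position in
the gfn (PA >> 12). A standalone version with the constants restated
locally (illustrative, not kernel code):

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PT32_DIR_PSE36_SIZE 4
#define PT32_DIR_PSE36_SHIFT 13
#define PT32_DIR_PSE36_MASK \
	(((1ULL << PT32_DIR_PSE36_SIZE) - 1) << PT32_DIR_PSE36_SHIFT)

typedef uint64_t gfn_t;

static gfn_t pse36_gfn_delta(uint32_t gpte)
{
	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;

	return (gpte & PT32_DIR_PSE36_MASK) << shift;
}

int main(void)
{
	/* PTE bit 13 set -> guest PA bit 32 -> gfn bit 20. */
	assert(pse36_gfn_delta(1u << 13) == (1ULL << 20));
	return 0;
}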
From nobody Mon Apr 27 09:13:37 2026
Reply-To: Sean Christopherson
Date: Tue, 14 Jun 2022 23:33:24 +0000
In-Reply-To: <20220614233328.3896033-1-seanjc@google.com>
Message-Id: <20220614233328.3896033-5-seanjc@google.com>
References: <20220614233328.3896033-1-seanjc@google.com>
Subject: [PATCH v2 4/8] KVM: x86/mmu: Dedup macros for computing various page table masks
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lai Jiangshan

Provide common helper macros to generate various masks, shifts, etc...
for 32-bit vs. 64-bit page tables.
Only the inputs differ; the actual calculations are identical.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h              |  4 ++--
 arch/x86/kvm/mmu/mmu.c          | 14 +++++---------
 arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++++++++
 arch/x86/kvm/mmu/paging.h       |  9 +++++----
 arch/x86/kvm/mmu/spte.h         |  7 +++----
 5 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index d1021e34ac15..6efe6bd7fb6e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -7,9 +7,9 @@
 #include "cpuid.h"

 #define PT64_PT_BITS 9
-#define PT64_ENT_PER_PAGE (1 << PT64_PT_BITS)
+#define PT64_ENT_PER_PAGE __PT_ENT_PER_PAGE(PT64_PT_BITS)
 #define PT32_PT_BITS 10
-#define PT32_ENT_PER_PAGE (1 << PT32_PT_BITS)
+#define PT32_ENT_PER_PAGE __PT_ENT_PER_PAGE(PT32_PT_BITS)

 #define PT_WRITABLE_SHIFT 1
 #define PT_USER_SHIFT 2
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 73497da1a99b..b3edff05a53a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -113,21 +113,17 @@ module_param(dbg, bool, 0644);

 #define PT32_LEVEL_BITS 10

-#define PT32_LEVEL_SHIFT(level) \
-	(PAGE_SHIFT + (level - 1) * PT32_LEVEL_BITS)
+#define PT32_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT32_LEVEL_BITS)

 #define PT32_LVL_OFFSET_MASK(level) \
-	(PT32_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
-				* PT32_LEVEL_BITS))) - 1))
-
-#define PT32_INDEX(address, level)\
-	(((address) >> PT32_LEVEL_SHIFT(level)) & ((1 << PT32_LEVEL_BITS) - 1))
+	__PT_LVL_OFFSET_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)

+#define PT32_INDEX(address, level) __PT_INDEX(address, level, PT32_LEVEL_BITS)

 #define PT32_BASE_ADDR_MASK PAGE_MASK
+
 #define PT32_LVL_ADDR_MASK(level) \
-	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
-			* PT32_LEVEL_BITS))) - 1))
+	__PT_LVL_ADDR_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)

 #include <trace/events/kvm.h>

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index bd2a26897b97..5e1e3c8f8aaa 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -20,6 +20,20 @@ extern bool dbg;
 #define MMU_WARN_ON(x) do { } while (0)
 #endif

+/* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
+#define __PT_LEVEL_SHIFT(level, bits_per_level)	\
+	(PAGE_SHIFT + ((level) - 1) * (bits_per_level))
+#define __PT_INDEX(address, level, bits_per_level) \
+	(((address) >> __PT_LEVEL_SHIFT(level, bits_per_level)) & ((1 << (bits_per_level)) - 1))
+
+#define __PT_LVL_ADDR_MASK(base_addr_mask, level, bits_per_level) \
+	((base_addr_mask) & ~((1ULL << (PAGE_SHIFT + (((level) - 1) * (bits_per_level)))) - 1))
+
+#define __PT_LVL_OFFSET_MASK(base_addr_mask, level, bits_per_level) \
+	((base_addr_mask) & ((1ULL << (PAGE_SHIFT + (((level) - 1) * (bits_per_level)))) - 1))
+
+#define __PT_ENT_PER_PAGE(bits_per_level) (1 << (bits_per_level))
+
 /*
  * Unlike regular MMU roots, PAE "roots", a.k.a. PDPTEs/PDPTRs, have a PRESENT
  * bit, and thus are guaranteed to be non-zero when valid.  And, when a guest
diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
index de8ab323bb70..23f3f64b8092 100644
--- a/arch/x86/kvm/mmu/paging.h
+++ b/arch/x86/kvm/mmu/paging.h
@@ -4,11 +4,12 @@
 #define __KVM_X86_PAGING_H

 #define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
+
 #define PT64_LVL_ADDR_MASK(level) \
-	(GUEST_PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
-						* PT64_LEVEL_BITS))) - 1))
+	__PT_LVL_ADDR_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
+
 #define PT64_LVL_OFFSET_MASK(level) \
-	(GUEST_PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
-						* PT64_LEVEL_BITS))) - 1))
+	__PT_LVL_OFFSET_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
+
 #endif /* __KVM_X86_PAGING_H */

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 0127bb6e3c7d..d5a8183b7232 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -55,11 +55,10 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);

 #define PT64_LEVEL_BITS 9

-#define PT64_LEVEL_SHIFT(level) \
-	(PAGE_SHIFT + (level - 1) * PT64_LEVEL_BITS)
+#define PT64_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT64_LEVEL_BITS)
+
+#define PT64_INDEX(address, level) __PT_INDEX(address, level, PT64_LEVEL_BITS)

-#define PT64_INDEX(address, level)\
-	(((address) >> PT64_LEVEL_SHIFT(level)) & ((1 << PT64_LEVEL_BITS) - 1))
 #define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)

 /*
-- 
2.36.1.476.g0c4daa206d-goog
From nobody Mon Apr 27 09:13:37 2026
Reply-To: Sean Christopherson
Date: Tue, 14 Jun 2022 23:33:25 +0000
In-Reply-To: <20220614233328.3896033-1-seanjc@google.com>
Message-Id: <20220614233328.3896033-6-seanjc@google.com>
References: <20220614233328.3896033-1-seanjc@google.com>
Subject: [PATCH v2 5/8] KVM: x86/mmu: Use separate namespaces for guest PTEs and shadow PTEs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lai Jiangshan

Separate the macros for KVM's shadow PTEs (SPTE) from guest 64-bit PTEs
(PT64). SPTE and PT64 are _mostly_ the same, but the few differences are
quite critical, e.g. *_BASE_ADDR_MASK must differentiate between host
and guest physical address spaces, and SPTE_PERM_MASK (was
PT64_PERM_MASK) is very much specific to SPTEs.

Opportunistically (and temporarily) move most guest macros into paging.h
to clearly associate them with shadow paging, and to ensure that they're
not used as of this commit. A future patch will eliminate them entirely.

Sadly, PT32_LEVEL_BITS is left behind in mmu_internal.h because it's
needed for the quadrant calculation in kvm_mmu_get_page(). The quadrant
calculation is hot enough (when using shadow paging with 32-bit guests)
that adding a per-context helper is undesirable, and burying the
computation in paging_tmpl.h with a forward declaration isn't exactly an
improvement.
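To make the quadrant discussion concrete, here is a standalone
illustration of the computation (constants restated locally; not kernel
code). With 4-byte gPTEs, a guest page table spans 2^level times the
address range of an 8-byte-entry shadow page, so each guest table needs
several shadow pages and the "quadrant" selects among them.

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define SPTE_LEVEL_BITS 9	/* 512 8-byte entries per shadow page */
#define PT32_LEVEL_BITS 10	/* 1024 4-byte entries per guest page */

static unsigned int quadrant(uint64_t gaddr, int level)
{
	unsigned int q = gaddr >> (PAGE_SHIFT + SPTE_LEVEL_BITS * level);

	return q & ((1 << ((PT32_LEVEL_BITS - SPTE_LEVEL_BITS) * level)) - 1);
}

int main(void)
{
	/* Level 1: one bit; gaddr bit 21 picks the half of the guest PT. */
	assert(quadrant(1ULL << 21, 1) == 1);
	/* Level 2: two bits; gaddr bits 31:30 pick one of four quarters. */
	assert(quadrant(3ULL << 30, 2) == 3);
	return 0;
}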
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h              |  5 ----
 arch/x86/kvm/mmu/mmu.c          | 44 +++++++++++----------------------
 arch/x86/kvm/mmu/mmu_internal.h |  3 +++
 arch/x86/kvm/mmu/paging.h       | 17 +++++++++++++
 arch/x86/kvm/mmu/paging_tmpl.h  |  4 +--
 arch/x86/kvm/mmu/spte.c         |  2 +-
 arch/x86/kvm/mmu/spte.h         | 27 +++++++++-----------
 arch/x86/kvm/mmu/tdp_iter.c     |  6 ++---
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++---
 9 files changed, 56 insertions(+), 58 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6efe6bd7fb6e..a99acec925eb 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -6,11 +6,6 @@
 #include "kvm_cache_regs.h"
 #include "cpuid.h"

-#define PT64_PT_BITS 9
-#define PT64_ENT_PER_PAGE __PT_ENT_PER_PAGE(PT64_PT_BITS)
-#define PT32_PT_BITS 10
-#define PT32_ENT_PER_PAGE __PT_ENT_PER_PAGE(PT32_PT_BITS)
-
 #define PT_WRITABLE_SHIFT 1
 #define PT_USER_SHIFT 2

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b3edff05a53a..81f2e58dc85b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -111,20 +111,6 @@ module_param(dbg, bool, 0644);

 #define PTE_PREFETCH_NUM		8

-#define PT32_LEVEL_BITS 10
-
-#define PT32_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT32_LEVEL_BITS)
-
-#define PT32_LVL_OFFSET_MASK(level) \
-	__PT_LVL_OFFSET_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
-
-#define PT32_INDEX(address, level) __PT_INDEX(address, level, PT32_LEVEL_BITS)
-
-#define PT32_BASE_ADDR_MASK PAGE_MASK
-
-#define PT32_LVL_ADDR_MASK(level) \
-	__PT_LVL_ADDR_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
-
 #include <trace/events/kvm.h>

 /* make pte_list_desc fit well in cache lines */
@@ -704,7 +690,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 	if (!sp->role.direct)
 		return sp->gfns[index];

-	return sp->gfn + (index << ((sp->role.level - 1) * PT64_LEVEL_BITS));
+	return sp->gfn + (index << ((sp->role.level - 1) * SPTE_LEVEL_BITS));
 }

 static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn)
@@ -1776,7 +1762,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 			continue;
 		}

-		child = to_shadow_page(ent & PT64_BASE_ADDR_MASK);
+		child = to_shadow_page(ent & SPTE_BASE_ADDR_MASK);

 		if (child->unsync_children) {
 			if (mmu_pages_add(pvec, child, i))
@@ -2027,8 +2013,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	role.direct = direct;
 	role.access = access;
 	if (role.has_4_byte_gpte) {
-		quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
-		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
+		quadrant = gaddr >> (PAGE_SHIFT + (SPTE_LEVEL_BITS * level));
+		quadrant &= (1 << ((PT32_LEVEL_BITS - SPTE_LEVEL_BITS) * level)) - 1;
 		role.quadrant = quadrant;
 	}
 	if (level <= vcpu->arch.mmu->cpu_role.base.level)
@@ -2132,7 +2118,7 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,

 		iterator->shadow_addr
 			= vcpu->arch.mmu->pae_root[(addr >> 30) & 3];
-		iterator->shadow_addr &= PT64_BASE_ADDR_MASK;
+		iterator->shadow_addr &= SPTE_BASE_ADDR_MASK;
 		--iterator->level;
 		if (!iterator->shadow_addr)
 			iterator->level = 0;
@@ -2151,7 +2137,7 @@ static bool shadow_walk_okay(struct kvm_shadow_walk_iterator *iterator)
 	if (iterator->level < PG_LEVEL_4K)
 		return false;

-	iterator->index = SHADOW_PT_INDEX(iterator->addr, iterator->level);
+	iterator->index = SPTE_INDEX(iterator->addr, iterator->level);
 	iterator->sptep	= ((u64 *)__va(iterator->shadow_addr)) + iterator->index;
 	return true;
 }
@@ -2164,7 +2150,7 @@ static void __shadow_walk_next(struct kvm_shadow_walk_iterator *iterator,
 		return;
 	}

-	iterator->shadow_addr = spte & PT64_BASE_ADDR_MASK;
+	iterator->shadow_addr = spte & SPTE_BASE_ADDR_MASK;
 	--iterator->level;
 }

@@ -2203,7 +2189,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
		 * so we should update the spte at this point to get
		 * a new sp with the correct access.
		 */
-		child = to_shadow_page(*sptep & PT64_BASE_ADDR_MASK);
+		child = to_shadow_page(*sptep & SPTE_BASE_ADDR_MASK);
 		if (child->role.access == direct_access)
 			return;

@@ -2224,7 +2210,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 		if (is_last_spte(pte, sp->role.level)) {
 			drop_spte(kvm, spte);
 		} else {
-			child = to_shadow_page(pte & PT64_BASE_ADDR_MASK);
+			child = to_shadow_page(pte & SPTE_BASE_ADDR_MASK);
 			drop_parent_pte(child, spte);

 			/*
@@ -2250,7 +2236,7 @@ static int kvm_mmu_page_unlink_children(struct kvm *kvm,
 	int zapped = 0;
 	unsigned i;

-	for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
+	for (i = 0; i < SPTE_ENT_PER_PAGE; ++i)
 		zapped += mmu_page_zap_pte(kvm, sp, sp->spt + i, invalid_list);

 	return zapped;
@@ -2663,7 +2649,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 			struct kvm_mmu_page *child;
 			u64 pte = *sptep;

-			child = to_shadow_page(pte & PT64_BASE_ADDR_MASK);
+			child = to_shadow_page(pte & SPTE_BASE_ADDR_MASK);
 			drop_parent_pte(child, sptep);
 			flush = true;
 		} else if (pfn != spte_to_pfn(*sptep)) {
@@ -3252,7 +3238,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 	if (!VALID_PAGE(*root_hpa))
 		return;

-	sp = to_shadow_page(*root_hpa & PT64_BASE_ADDR_MASK);
+	sp = to_shadow_page(*root_hpa & SPTE_BASE_ADDR_MASK);
 	if (WARN_ON(!sp))
 		return;

@@ -3724,7 +3710,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		hpa_t root = vcpu->arch.mmu->pae_root[i];

 		if (IS_VALID_PAE_ROOT(root)) {
-			root &= PT64_BASE_ADDR_MASK;
+			root &= SPTE_BASE_ADDR_MASK;
 			sp = to_shadow_page(root);
 			mmu_sync_children(vcpu, sp, true);
 		}
@@ -5186,11 +5172,11 @@ static bool need_remote_flush(u64 old, u64 new)
 		return false;
 	if (!is_shadow_present_pte(new))
 		return true;
-	if ((old ^ new) & PT64_BASE_ADDR_MASK)
+	if ((old ^ new) & SPTE_BASE_ADDR_MASK)
 		return true;
 	old ^= shadow_nx_mask;
 	new ^= shadow_nx_mask;
-	return (old & ~new & PT64_PERM_MASK) != 0;
+	return (old & ~new & SPTE_PERM_MASK) != 0;
 }

 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 5e1e3c8f8aaa..cb9d4d358335 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -20,6 +20,9 @@ extern bool dbg;
 #define MMU_WARN_ON(x) do { } while (0)
 #endif

+/* The number of bits for 32-bit PTEs is needed to compute the quadrant. */
+#define PT32_LEVEL_BITS 10
+
 /* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
 #define __PT_LEVEL_SHIFT(level, bits_per_level)	\
	(PAGE_SHIFT + ((level) - 1) * (bits_per_level))
diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
index 23f3f64b8092..6a63727cc7e8 100644
--- a/arch/x86/kvm/mmu/paging.h
+++ b/arch/x86/kvm/mmu/paging.h
@@ -5,11 +5,28 @@

 #define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))

+#define PT64_LEVEL_BITS 9
+
+#define PT64_INDEX(address, level) __PT_INDEX(address, level, PT64_LEVEL_BITS)
+
 #define PT64_LVL_ADDR_MASK(level) \
	__PT_LVL_ADDR_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)

 #define PT64_LVL_OFFSET_MASK(level) \
	__PT_LVL_OFFSET_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)

+
+#define PT32_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT32_LEVEL_BITS)
+
+#define PT32_LVL_OFFSET_MASK(level) \
+	__PT_LVL_OFFSET_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
+
+#define PT32_INDEX(address, level) __PT_INDEX(address, level, PT32_LEVEL_BITS)
+
+#define PT32_BASE_ADDR_MASK PAGE_MASK
+
+#define PT32_LVL_ADDR_MASK(level) \
+	__PT_LVL_ADDR_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
+
 #endif /* __KVM_X86_PAGING_H */

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 55fd35b1b227..d68cc7a5ef81 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -899,7 +899,7 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
 	WARN_ON(sp->role.level != PG_LEVEL_4K);

 	if (PTTYPE == 32)
-		offset = sp->role.quadrant << PT64_LEVEL_BITS;
+		offset = sp->role.quadrant << SPTE_LEVEL_BITS;

 	return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);
 }
@@ -1034,7 +1034,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)

 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);

-	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
 		u64 *sptep, spte;
 		struct kvm_memory_slot *slot;
 		unsigned pte_access;
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cda1851ec155..242e4828d7df 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -301,7 +301,7 @@ u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn)
 {
 	u64 new_spte;

-	new_spte = old_spte & ~PT64_BASE_ADDR_MASK;
+	new_spte = old_spte & ~SPTE_BASE_ADDR_MASK;
 	new_spte |= (u64)new_pfn << PAGE_SHIFT;

 	new_spte &= ~PT_WRITABLE_MASK;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index d5a8183b7232..121c5eaaec77 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -36,12 +36,12 @@ extern bool __read_mostly enable_mmio_caching;
 static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);

 #ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
-#define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
+#define SPTE_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
 #else
-#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
+#define SPTE_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
 #endif

-#define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
+#define SPTE_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
			| shadow_x_mask | shadow_nx_mask | shadow_me_mask)

 #define ACC_EXEC_MASK	1
@@ -50,16 +50,13 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
 #define ACC_ALL		(ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)

 /* The mask for the R/X bits in EPT PTEs */
-#define PT64_EPT_READABLE_MASK 0x1ull
-#define PT64_EPT_EXECUTABLE_MASK 0x4ull
+#define SPTE_EPT_READABLE_MASK 0x1ull
+#define SPTE_EPT_EXECUTABLE_MASK 0x4ull

-#define PT64_LEVEL_BITS 9
-
-#define PT64_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT64_LEVEL_BITS)
-
-#define PT64_INDEX(address, level) __PT_INDEX(address, level, PT64_LEVEL_BITS)
-
-#define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
+#define SPTE_LEVEL_BITS 9
+#define SPTE_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, SPTE_LEVEL_BITS)
+#define SPTE_INDEX(address, level) __PT_INDEX(address, level, SPTE_LEVEL_BITS)
+#define SPTE_ENT_PER_PAGE __PT_ENT_PER_PAGE(SPTE_LEVEL_BITS)

 /*
  * The mask/shift to use for saving the original R/X bits when marking the PTE
@@ -68,8 +65,8 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
  * restored only when a write is attempted to the page.  This mask obviously
  * must not overlap the A/D type mask.
  */
-#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (PT64_EPT_READABLE_MASK | \
-					  PT64_EPT_EXECUTABLE_MASK)
+#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (SPTE_EPT_READABLE_MASK | \
+					  SPTE_EPT_EXECUTABLE_MASK)
 #define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 54
 #define SHADOW_ACC_TRACK_SAVED_MASK	(SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
					 SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
@@ -281,7 +278,7 @@ static inline bool is_executable_pte(u64 spte)

 static inline kvm_pfn_t spte_to_pfn(u64 pte)
 {
-	return (pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
+	return (pte & SPTE_BASE_ADDR_MASK) >> PAGE_SHIFT;
 }

 static inline bool is_accessed_spte(u64 spte)
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index ee4802d7b36c..9c65a64a56d9 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -11,7 +11,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
 {
 	iter->sptep = iter->pt_path[iter->level - 1] +
-		SHADOW_PT_INDEX(iter->gfn << PAGE_SHIFT, iter->level);
+		SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level);
 	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
 }

@@ -116,8 +116,8 @@ static bool try_step_side(struct tdp_iter *iter)
	 * Check if the iterator is already at the end of the current page
	 * table.
	 */
-	if (SHADOW_PT_INDEX(iter->gfn << PAGE_SHIFT, iter->level) ==
-	    (PT64_ENT_PER_PAGE - 1))
+	if (SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level) ==
+	    (SPTE_ENT_PER_PAGE - 1))
 		return false;

 	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7b9265d67131..26cb9fed2f18 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -425,7 +425,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)

 	tdp_mmu_unlink_sp(kvm, sp, shared);

-	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
 		tdp_ptep_t sptep = pt + i;
 		gfn_t gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
 		u64 old_spte;
@@ -1487,7 +1487,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
	 * No need for atomics when writing to sp->spt since the page table has
	 * not been linked in yet and thus is not reachable from any other CPU.
	 */
-	for (i = 0; i < PT64_ENT_PER_PAGE; i++)
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++)
 		sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i);

	/*
@@ -1507,7 +1507,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
	 * are overwriting from the page stats. But we have to manually update
	 * the page stats with the new present child pages.
	 */
-	kvm_update_page_stats(kvm, level - 1, PT64_ENT_PER_PAGE);
+	kvm_update_page_stats(kvm, level - 1, SPTE_ENT_PER_PAGE);

 out:
 	trace_kvm_mmu_split_huge_page(iter->gfn, huge_spte, level, ret);
-- 
2.36.1.476.g0c4daa206d-goog
From nobody Mon Apr 27 09:13:37 2026
Reply-To: Sean Christopherson
Date: Tue, 14 Jun 2022 23:33:26 +0000
In-Reply-To: <20220614233328.3896033-1-seanjc@google.com>
Message-Id: <20220614233328.3896033-7-seanjc@google.com>
References: <20220614233328.3896033-1-seanjc@google.com>
Subject: [PATCH v2 6/8] KVM: x86/mmu: Use common macros to compute 32/64-bit paging masks
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lai Jiangshan

Dedup the code for generating (most of) the per-type PT_* masks in
paging_tmpl.h. The relevant macros only vary based on the number of bits
per level, and that smidge of info is already provided in a common form
as PT_LEVEL_BITS.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging.h      | 23 -----------------------
 arch/x86/kvm/mmu/paging_tmpl.h | 25 +++++++++++--------------
 2 files changed, 11 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
index 6a63727cc7e8..9de4976b2d46 100644
--- a/arch/x86/kvm/mmu/paging.h
+++ b/arch/x86/kvm/mmu/paging.h
@@ -5,28 +5,5 @@

 #define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))

-#define PT64_LEVEL_BITS 9
-
-#define PT64_INDEX(address, level) __PT_INDEX(address, level, PT64_LEVEL_BITS)
-
-#define PT64_LVL_ADDR_MASK(level) \
-	__PT_LVL_ADDR_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
-
-#define PT64_LVL_OFFSET_MASK(level) \
-	__PT_LVL_OFFSET_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
-
-
-#define PT32_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT32_LEVEL_BITS)
-
-#define PT32_LVL_OFFSET_MASK(level) \
-	__PT_LVL_OFFSET_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
-
-#define PT32_INDEX(address, level) __PT_INDEX(address, level, PT32_LEVEL_BITS)
-
-#define PT32_BASE_ADDR_MASK PAGE_MASK
-
-#define PT32_LVL_ADDR_MASK(level) \
-	__PT_LVL_ADDR_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
-
 #endif /* __KVM_X86_PAGING_H */

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index d68cc7a5ef81..4fcde3a18f5f 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -16,8 +16,9 @@

 /*
- * We need the mmu code to access both 32-bit and 64-bit guest ptes,
- * so the code in this file is compiled twice, once per pte size.
+ * The MMU needs to be able to access/walk 32-bit and 64-bit guest page tables,
+ * as well as guest EPT tables, so the code in this file is compiled thrice,
+ * once per guest PTE type.  The per-type defines are #undef'd at the end.
  */

 #if PTTYPE == 64
 	#define pt_element_t u64
@@ -25,10 +26,7 @@
 	#define guest_walker guest_walker64
 	#define FNAME(name) paging##64_##name
 	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
-	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
-	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
-	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
-	#define PT_LEVEL_BITS PT64_LEVEL_BITS
+	#define PT_LEVEL_BITS 9
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
@@ -41,10 +39,7 @@
 	#define pt_element_t u32
 	#define guest_walker guest_walker32
 	#define FNAME(name) paging##32_##name
-	#define PT_BASE_ADDR_MASK PT32_BASE_ADDR_MASK
-	#define PT_LVL_ADDR_MASK(lvl) PT32_LVL_ADDR_MASK(lvl)
-	#define PT_LVL_OFFSET_MASK(lvl) PT32_LVL_OFFSET_MASK(lvl)
-	#define PT_INDEX(addr, level) PT32_INDEX(addr, level)
+	#define PT_BASE_ADDR_MASK PAGE_MASK
 	#define PT_LEVEL_BITS PT32_LEVEL_BITS
 	#define PT_MAX_FULL_LEVELS 2
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
@@ -60,10 +55,7 @@
 	#define guest_walker guest_walkerEPT
 	#define FNAME(name) ept_##name
 	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
-	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
-	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
-	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
-	#define PT_LEVEL_BITS PT64_LEVEL_BITS
+	#define PT_LEVEL_BITS 9
 	#define PT_GUEST_DIRTY_SHIFT 9
 	#define PT_GUEST_ACCESSED_SHIFT 8
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.ad_disabled)
@@ -72,6 +64,11 @@
 	#error Invalid PTTYPE value
 #endif

+/* Common logic, but per-type values.  These also need to be undefined. */
+#define PT_LVL_ADDR_MASK(lvl)	__PT_LVL_ADDR_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
+#define PT_LVL_OFFSET_MASK(lvl)	__PT_LVL_OFFSET_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
+#define PT_INDEX(addr, lvl)	__PT_INDEX(addr, lvl, PT_LEVEL_BITS)
+
 #define PT_GUEST_DIRTY_MASK    (1 << PT_GUEST_DIRTY_SHIFT)
 #define PT_GUEST_ACCESSED_MASK (1 << PT_GUEST_ACCESSED_SHIFT)
-- 
2.36.1.476.g0c4daa206d-goog
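To back the dedup with numbers, a standalone check that the common
builders fed the 32-bit inputs (PAGE_MASK base, 10 bits per level)
reproduce the old hand-written PT32 masks. Macro bodies are copied from
mmu_internal.h; PAGE_SHIFT and PAGE_MASK are restated for a user-space
build, so this is a sketch, not kernel code.

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK (~((1ULL << PAGE_SHIFT) - 1))

#define __PT_LVL_ADDR_MASK(base_addr_mask, level, bits_per_level) \
	((base_addr_mask) & ~((1ULL << (PAGE_SHIFT + (((level) - 1) * (bits_per_level)))) - 1))
#define __PT_LVL_OFFSET_MASK(base_addr_mask, level, bits_per_level) \
	((base_addr_mask) & ((1ULL << (PAGE_SHIFT + (((level) - 1) * (bits_per_level)))) - 1))

int main(void)
{
	/* 32-bit paging, level 2 (4 MiB PSE pages): the split is at bit 22. */
	assert(__PT_LVL_ADDR_MASK(PAGE_MASK, 2, 10) == ~((1ULL << 22) - 1));
	assert(__PT_LVL_OFFSET_MASK(PAGE_MASK, 2, 10) ==
	       (((1ULL << 22) - 1) & ~0xfffULL));
	return 0;
}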
From nobody Mon Apr 27 09:13:37 2026
Reply-To: Sean Christopherson
Date: Tue, 14 Jun 2022 23:33:27 +0000
In-Reply-To: <20220614233328.3896033-1-seanjc@google.com>
Message-Id: <20220614233328.3896033-8-seanjc@google.com>
References: <20220614233328.3896033-1-seanjc@google.com>
Subject: [PATCH v2 7/8] KVM: x86/mmu: Truncate paging32's PT_BASE_ADDR_MASK to 32 bits
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lai Jiangshan

Truncate paging32's PT_BASE_ADDR_MASK to a pt_element_t, i.e. to 32 bits.
Ignoring PSE huge pages, the mask is only used in conjunction with gPTEs,
which are 32 bits, and so the address is limited to bits 31:12. PSE huge
pages encoded PA bits 39:32 in PTE bits 20:13, i.e. need custom logic to
handle their funky encoding regardless of PT_BASE_ADDR_MASK.

Note, PT_LVL_OFFSET_MASK is somewhat confusing in that it computes the
offset of the _gfn_, not of the gpa, i.e. not having bits 63:32 set in
PT_BASE_ADDR_MASK is again correct.

No functional change intended.
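The truncation is easy to see in a standalone sketch (types restated
locally; this mirrors the patch rather than the kernel's definitions):

#include <assert.h>
#include <stdint.h>

typedef uint32_t pt_element_t;	/* 4-byte guest PTE for paging32 */

#define PAGE_MASK (~0xfffUL)	/* bits 63:12 on an LP64 build */

int main(void)
{
	/* Untruncated, the mask reaches far beyond a 32-bit gPTE... */
	assert(PAGE_MASK == 0xfffffffffffff000UL);
	/* ...cast to pt_element_t it keeps exactly bits 31:12. */
	assert((pt_element_t)PAGE_MASK == 0xfffff000u);
	return 0;
}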
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 4fcde3a18f5f..3ed7ba4730b4 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -39,7 +39,7 @@
 	#define pt_element_t u32
 	#define guest_walker guest_walker32
 	#define FNAME(name) paging##32_##name
-	#define PT_BASE_ADDR_MASK PAGE_MASK
+	#define PT_BASE_ADDR_MASK ((pt_element_t)PAGE_MASK)
 	#define PT_LEVEL_BITS PT32_LEVEL_BITS
 	#define PT_MAX_FULL_LEVELS 2
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
-- 
2.36.1.476.g0c4daa206d-goog
From nobody Mon Apr 27 09:13:37 2026
Reply-To: Sean Christopherson
Date: Tue, 14 Jun 2022 23:33:28 +0000
In-Reply-To: <20220614233328.3896033-1-seanjc@google.com>
Message-Id: <20220614233328.3896033-9-seanjc@google.com>
References: <20220614233328.3896033-1-seanjc@google.com>
Subject: [PATCH v2 8/8] KVM: x86/mmu: Use common logic for computing the 32/64-bit base PA mask
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Lai Jiangshan

Use common logic for computing PT_BASE_ADDR_MASK for 32-bit, 64-bit, and
EPT paging. Both PAGE_MASK and the new common logic are supersets of
what is actually needed for 32-bit paging. PAGE_MASK sets bits 63:12 and
the former GUEST_PT64_BASE_ADDR_MASK sets bits 51:12, so regardless of
which value is used, the result will always be bits 31:12.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 2 --
 arch/x86/kvm/mmu/paging.h      | 9 ---------
 arch/x86/kvm/mmu/paging_tmpl.h | 4 +---
 3 files changed, 1 insertion(+), 14 deletions(-)
 delete mode 100644 arch/x86/kvm/mmu/paging.h

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 81f2e58dc85b..a67aac727dc2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -53,8 +53,6 @@
 #include <asm/kvm_page_track.h>
 #include "trace.h"

-#include "paging.h"
-
 extern bool itlb_multihit_kvm_mitigation;

 int __read_mostly nx_huge_pages = -1;
diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
deleted file mode 100644
index 9de4976b2d46..000000000000
--- a/arch/x86/kvm/mmu/paging.h
+++ /dev/null
@@ -1,9 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/* Shadow paging constants/helpers that don't need to be #undef'd. */
-#ifndef __KVM_X86_PAGING_H
-#define __KVM_X86_PAGING_H
-
-#define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
-
-#endif /* __KVM_X86_PAGING_H */
-
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 3ed7ba4730b4..6c29aef4092b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -25,7 +25,6 @@
 	#define pt_element_t u64
 	#define guest_walker guest_walker64
 	#define FNAME(name) paging##64_##name
-	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
 	#define PT_LEVEL_BITS 9
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
@@ -39,7 +38,6 @@
 	#define pt_element_t u32
 	#define guest_walker guest_walker32
 	#define FNAME(name) paging##32_##name
-	#define PT_BASE_ADDR_MASK ((pt_element_t)PAGE_MASK)
 	#define PT_LEVEL_BITS PT32_LEVEL_BITS
 	#define PT_MAX_FULL_LEVELS 2
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
@@ -54,7 +52,6 @@
 	#define pt_element_t u64
 	#define guest_walker guest_walkerEPT
 	#define FNAME(name) ept_##name
-	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
 	#define PT_LEVEL_BITS 9
 	#define PT_GUEST_DIRTY_SHIFT 9
 	#define PT_GUEST_ACCESSED_SHIFT 8
@@ -65,6 +62,7 @@
 #endif

 /* Common logic, but per-type values.  These also need to be undefined. */
+#define PT_BASE_ADDR_MASK ((pt_element_t)(((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1)))
 #define PT_LVL_ADDR_MASK(lvl)	__PT_LVL_ADDR_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
 #define PT_LVL_OFFSET_MASK(lvl)	__PT_LVL_OFFSET_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
 #define PT_INDEX(addr, lvl)	__PT_INDEX(addr, lvl, PT_LEVEL_BITS)
-- 
2.36.1.476.g0c4daa206d-goog
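As a closing sanity check for the series, a standalone sketch showing
that the single common definition truncates correctly for each
pt_element_t. The type-parameterized macro below is an editor's
illustration; the kernel macro picks up pt_element_t implicitly per
PTTYPE.

#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define PT_BASE_ADDR_MASK(type) \
	((type)(((1ULL << 52) - 1) & ~(uint64_t)(PAGE_SIZE - 1)))

int main(void)
{
	/* 32-bit gPTEs: bits 31:12, same result as (u32)PAGE_MASK. */
	assert(PT_BASE_ADDR_MASK(uint32_t) == 0xfffff000u);
	/* 64-bit/EPT gPTEs: bits 51:12, the old GUEST_PT64_BASE_ADDR_MASK. */
	assert(PT_BASE_ADDR_MASK(uint64_t) == 0x000ffffffffff000ULL);
	return 0;
}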