From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexander Gordeev, Gerald Schaefer,
	Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle,
	Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin, Kefeng Wang
Subject: [PATCH rfc v2 03/10] x86: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:49 +0800
Message-ID: <20230821123056.2109942-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
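Note for reviewers: try_vma_locked_page_fault() is introduced earlier in
this series, so its body is not visible in this patch. Below is a rough
sketch of the contract this conversion relies on, reconstructed from the
open-coded x86 logic removed further down; the vmf->regs field in
particular is an assumption (this patch only initializes real_address,
fault_code and flags), so treat it as illustrative rather than the exact
definition:

	vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf)
	{
		vm_fault_t fault = VM_FAULT_NONE;
		struct vm_area_struct *vma;

		/* Only user faults may take the per-VMA lock path. */
		if (!(vmf->flags & FAULT_FLAG_USER))
			return fault;

		/* Find and read-lock the VMA under RCU, no mmap_lock. */
		vma = lock_vma_under_rcu(current->mm, vmf->real_address);
		if (!vma)
			return fault;

		/* Per-arch permission check, e.g. x86's access_error(). */
		if (arch_vma_access_error(vma, vmf)) {
			vma_end_read(vma);
			return fault;
		}

		fault = handle_mm_fault(vma, vmf->real_address,
					vmf->flags | FAULT_FLAG_VMA_LOCK,
					vmf->regs);
		if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
			vma_end_read(vma);

		if (fault & VM_FAULT_RETRY)
			count_vm_vma_lock_event(VMA_LOCK_RETRY);
		else
			count_vm_vma_lock_event(VMA_LOCK_SUCCESS);

		return fault;
	}

A VM_FAULT_NONE return means the per-VMA lock path could not be taken at
all and the caller must fall back to the classic mmap_lock path, which
is why the x86 code below jumps to the retry: label in that case.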
 arch/x86/mm/fault.c | 55 +++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index ab778eac1952..3edc9edc0b28 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1227,6 +1227,13 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 }
 NOKPROBE_SYMBOL(do_kern_addr_fault);
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->fault_code, vma);
+}
+#endif
+
 /*
  * Handle faults in the user portion of the address space.  Nothing in here
  * should check X86_PF_USER without a specific justification: for almost
@@ -1241,13 +1248,13 @@ void do_user_addr_fault(struct pt_regs *regs,
 			unsigned long address)
 {
 	struct vm_area_struct *vma;
-	struct task_struct *tsk;
-	struct mm_struct *mm;
+	struct mm_struct *mm = current->mm;
 	vm_fault_t fault;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
-
-	tsk = current;
-	mm = tsk->mm;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.fault_code = error_code,
+		.flags = FAULT_FLAG_DEFAULT
+	};
 
 	if (unlikely((error_code & (X86_PF_USER | X86_PF_INSTR)) == X86_PF_INSTR)) {
 		/*
@@ -1311,7 +1318,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 */
 	if (user_mode(regs)) {
 		local_irq_enable();
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	} else {
 		if (regs->flags & X86_EFLAGS_IF)
 			local_irq_enable();
@@ -1326,11 +1333,11 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * maybe_mkwrite() can create a proper shadow stack PTE.
 	 */
 	if (error_code & X86_PF_SHSTK)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	if (error_code & X86_PF_WRITE)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	if (error_code & X86_PF_INSTR)
-		flags |= FAULT_FLAG_INSTRUCTION;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 
 #ifdef CONFIG_X86_64
 	/*
@@ -1350,26 +1357,11 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}
 #endif
 
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-
-	if (unlikely(access_error(error_code, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -1379,7 +1371,6 @@ void do_user_addr_fault(struct pt_regs *regs,
 				      ARCH_DEFAULT_PKEY);
 		return;
 	}
-lock_mmap:
 
 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
@@ -1410,7 +1401,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * userland). The return to userland is identified whenever
 	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
 	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	if (fault_signal_pending(fault, regs)) {
 		/*
@@ -1434,7 +1425,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * that we made any progress. Handle this case first.
 	 */
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}
 
-- 
2.27.0