From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
    Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Alexander Gordeev, Gerald Schaefer,
    Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle,
    Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, "H. Peter Anvin", Kefeng Wang
Subject: [PATCH rfc v2 06/10] riscv: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:52 +0800
Message-ID: <20230821123056.2109942-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/riscv/mm/fault.c | 58 ++++++++++++++++++-------------------------
 1 file changed, 24 insertions(+), 34 deletions(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 6115d7514972..b46129b636f2 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -215,6 +215,13 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
 	return false;
 }
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->fault_code, vma);
+}
+#endif
+
 /*
  * This routine handles page faults. It determines the address and the
  * problem, and then passes it off to one of the appropriate routines.
@@ -223,17 +230,16 @@ void handle_page_fault(struct pt_regs *regs)
 {
 	struct task_struct *tsk;
 	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	unsigned long addr, cause;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
+	struct mm_struct *mm = current->mm;
+	unsigned long addr = regs->badaddr;
+	unsigned long cause = regs->cause;
 	int code = SEGV_MAPERR;
 	vm_fault_t fault;
-
-	cause = regs->cause;
-	addr = regs->badaddr;
-
-	tsk = current;
-	mm = tsk->mm;
+	struct vm_fault vmf = {
+		.real_address = addr,
+		.fault_code = cause,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, cause))
 		return;
@@ -268,7 +274,7 @@ void handle_page_fault(struct pt_regs *regs)
 	}
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 
 	if (!user_mode(regs) && addr < TASK_SIZE && unlikely(!(regs->status & SR_SUM))) {
 		if (fixup_exception(regs))
@@ -280,37 +286,21 @@ void handle_page_fault(struct pt_regs *regs)
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
 	if (cause == EXC_STORE_PAGE_FAULT)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	else if (cause == EXC_INST_PAGE_FAULT)
-		flags |= FAULT_FLAG_INSTRUCTION;
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, addr);
-	if (!vma)
-		goto lock_mmap;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 
-	if (unlikely(access_error(cause, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
 			no_context(regs, addr);
 		return;
 	}
-lock_mmap:
 
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
@@ -337,7 +327,7 @@ void handle_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags, regs);
+	fault = handle_mm_fault(vma, addr, vmf.flags, regs);
 
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -355,7 +345,7 @@ void handle_page_fault(struct pt_regs *regs)
 		return;
 
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 
 		/*
 		 * No need to mmap_read_unlock(mm) as we would
-- 
2.27.0
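
[Note for reviewers: try_vma_locked_page_fault() itself is introduced
earlier in this series and does not appear in this diff. Below is a
sketch of roughly what the generic helper would look like, reconstructed
from the open-coded per-VMA-lock sequence this patch deletes. The exact
body, the use of current->mm, and the NULL pt_regs passed to
handle_mm_fault() are assumptions of the sketch, not necessarily the
series' actual implementation.]

#ifdef CONFIG_PER_VMA_LOCK
/*
 * Sketch only, reconstructed from the open-coded path removed in this
 * patch. VM_FAULT_NONE is assumed to be defined by this series to mean
 * "per-VMA attempt not made, fall back to the mmap_lock slow path".
 */
vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf)
{
	vm_fault_t fault = VM_FAULT_NONE;
	struct vm_area_struct *vma;

	/* The lockless VMA path is only tried for user-mode faults. */
	if (!(vmf->flags & FAULT_FLAG_USER))
		return fault;

	/* Look up and read-lock the VMA without taking mmap_lock. */
	vma = lock_vma_under_rcu(current->mm, vmf->real_address);
	if (!vma)
		return fault;

	/* Arch hook, e.g. the riscv access_error() wrapper added above. */
	if (arch_vma_access_error(vma, vmf)) {
		vma_end_read(vma);
		return fault;
	}

	/*
	 * Passing NULL pt_regs is an assumption of this sketch; the real
	 * helper may carry regs in struct vm_fault for perf accounting.
	 */
	fault = handle_mm_fault(vma, vmf->real_address,
				vmf->flags | FAULT_FLAG_VMA_LOCK, NULL);
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);

	if (fault & VM_FAULT_RETRY)
		count_vm_vma_lock_event(VMA_LOCK_RETRY);
	else
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);

	return fault;
}
#endif /* CONFIG_PER_VMA_LOCK */

Under this contract the caller above reads straight through: a
VM_FAULT_NONE return sends it to the retry: label, where
lock_mm_and_find_vma() redoes the fault under mmap_lock; anything else
without VM_FAULT_RETRY set jumps to done. How the series distinguishes
VM_FAULT_NONE from a plain 0 returned by a successfully handled fault
is not visible in this patch, and the sketch does not attempt it.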