From: Kefeng Wang
To: Andrew Morton
CC: Russell King, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
    Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Alexander Gordeev, Gerald Schaefer,
    Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle,
    Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, H. Peter Anvin, Kefeng Wang
Subject: [PATCH rfc v2 10/10] loongarch: mm: try VMA lock-based page fault handling first
Date: Mon, 21 Aug 2023 20:30:56 +0800
Message-ID: <20230821123056.2109942-11-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
List-ID: linux-kernel@vger.kernel.org

Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.
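
For reviewers skimming the series, a minimal sketch (not part of the patch)
of the control flow this change wires into __do_page_fault().
try_vma_locked_page_fault() is the generic helper introduced earlier in this
series; the return-value handling shown is taken from the fault.c hunks
below, and handle_under_mmap_lock() is a hypothetical stand-in for the
existing lock_mm_and_find_vma() + handle_mm_fault() slow path.

/*
 * Sketch only, assuming the conventions visible in the hunks below:
 * VM_FAULT_NONE means the VMA-locked attempt was not possible, and
 * VM_FAULT_RETRY means it must be redone under mmap_lock.
 */
static void page_fault_flow_sketch(struct vm_fault *vmf, struct pt_regs *regs,
				   unsigned long write, unsigned long address)
{
	vm_fault_t fault;

	/* 1. Try to resolve the fault under the per-VMA lock alone. */
	fault = try_vma_locked_page_fault(vmf);
	if (fault == VM_FAULT_NONE) {
		/* VMA lock path not usable: fall back to the mmap_lock path. */
		handle_under_mmap_lock(vmf, regs, write, address);
		return;
	}
	if (!(fault & VM_FAULT_RETRY))
		return;	/* completed (or definitively failed) under the VMA lock */

	/* 2. VM_FAULT_RETRY: fall back, unless a fatal signal ended the fault. */
	if (fault_signal_pending(fault, regs)) {
		if (!user_mode(regs))
			no_context(regs, write, address);
		return;
	}

	/* 3. Retry with mmap_lock held, exactly as before this series. */
	handle_under_mmap_lock(vmf, regs, write, address);
}
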
Signed-off-by: Kefeng Wang
---
 arch/loongarch/Kconfig    |  1 +
 arch/loongarch/mm/fault.c | 37 +++++++++++++++++++++++++++++++------
 2 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 2b27b18a63af..6b821f621920 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -56,6 +56,7 @@ config LOONGARCH
 	select ARCH_SUPPORTS_LTO_CLANG
 	select ARCH_SUPPORTS_LTO_CLANG_THIN
 	select ARCH_SUPPORTS_NUMA_BALANCING
+	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_QUEUED_RWLOCKS
diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index 2a45e9f3a485..f7ac3a14bb06 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -142,6 +142,13 @@ static inline bool access_error(unsigned int flags, struct pt_regs *regs,
 	return false;
 }
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->flags, vmf->regs, vmf->real_address, vma);
+}
+#endif
+
 /*
  * This routine handles page faults. It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -151,11 +158,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 			unsigned long write, unsigned long address)
 {
 	int si_code = SEGV_MAPERR;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	struct vm_area_struct *vma = NULL;
 	vm_fault_t fault;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.regs = regs,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, current->thread.trap_nr))
 		return;
@@ -184,11 +195,24 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		goto bad_area_nosemaphore;
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	if (write)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			no_context(regs, write, address);
+		return;
+	}
+
 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
 	if (unlikely(!vma))
@@ -196,7 +220,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	si_code = SEGV_ACCERR;
 
-	if (access_error(flags, regs, vma))
+	if (access_error(vmf.flags, regs, address, vma))
 		goto bad_area;
 
 	/*
@@ -204,7 +228,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
@@ -217,7 +241,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		return;
 
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 
 		/*
 		 * No need to mmap_read_unlock(mm) as we would
@@ -229,6 +253,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	mmap_read_unlock(mm);
 
+done:
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			do_out_of_memory(regs, write, address);
-- 
2.27.0