From: Kefeng Wang
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
    Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Alexander Gordeev, Gerald Schaefer,
    Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle,
    Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, H. Peter Anvin, Kefeng Wang
Subject: [PATCH rfc v2 07/10] ARM: mm: try VMA lock-based page fault handling first
Date: Mon, 21 Aug 2023 20:30:53 +0800
Message-ID: <20230821123056.2109942-8-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.
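The intended control flow is summarised by the condensed sketch below. This
is illustrative only, not the exact code from the diff: the helper name
arm_fault_flow() is made up for the example, try_vma_locked_page_fault() and
the vm_flags member of struct vm_fault come from earlier patches in this
series, and the real error/signal paths (bad_area, no_context, signal
delivery) are reduced to plain returns.

    #include <linux/mm.h>
    #include <linux/sched/signal.h>

    /*
     * Condensed sketch of the fault path this patch aims for.
     * VM_FAULT_BADMAP/VM_FAULT_BADACCESS are the local codes defined in
     * arch/arm/mm/fault.c; try_vma_locked_page_fault() and vmf->vm_flags
     * are introduced earlier in this series.
     */
    static vm_fault_t arm_fault_flow(struct mm_struct *mm, unsigned long addr,
                                     struct vm_fault *vmf, struct pt_regs *regs)
    {
            struct vm_area_struct *vma;
            vm_fault_t fault;

            /* Fast path: try to resolve the fault under the per-VMA lock. */
            fault = try_vma_locked_page_fault(vmf);
            if (fault == VM_FAULT_NONE)
                    goto retry;             /* no VMA-lock handling possible */
            if (!(fault & VM_FAULT_RETRY))
                    return fault;           /* handled entirely under the VMA lock */
            if (fault_signal_pending(fault, regs))
                    return fault;           /* signal pending, bail out early */

    retry:
            /* Slow path: the existing mmap_lock-based handling. */
            vma = lock_mm_and_find_vma(mm, addr, regs);
            if (unlikely(!vma))
                    return VM_FAULT_BADMAP; /* real code goes to bad_area */

            if (!(vma->vm_flags & vmf->vm_flags))
                    fault = VM_FAULT_BADACCESS;
            else
                    fault = handle_mm_fault(vma, addr & PAGE_MASK, vmf->flags, regs);

            if (!(fault & VM_FAULT_ERROR) && (fault & VM_FAULT_RETRY)) {
                    /*
                     * handle_mm_fault() dropped mmap_lock before returning
                     * VM_FAULT_RETRY; the real code also checks for a pending
                     * fatal signal here before retrying.
                     */
                    vmf->flags |= FAULT_FLAG_TRIED;
                    goto retry;
            }

            mmap_read_unlock(mm);
            return fault;
    }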
Signed-off-by: Kefeng Wang
---
 arch/arm/Kconfig    |  1 +
 arch/arm/mm/fault.c | 35 +++++++++++++++++++++++++----------
 2 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 1a6a6eb48a15..8b6d4507ccee 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -34,6 +34,7 @@ config ARM
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
+	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_MEMTEST
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index fef62e4a9edd..d53bb028899a 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -242,8 +242,11 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	struct vm_area_struct *vma;
 	int sig, code;
 	vm_fault_t fault;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
-	unsigned long vm_flags = VM_ACCESS_FLAGS;
+	struct vm_fault vmf = {
+		.real_address = addr,
+		.flags = FAULT_FLAG_DEFAULT,
+		.vm_flags = VM_ACCESS_FLAGS,
+	};
 
 	if (kprobe_page_fault(regs, fsr))
 		return 0;
@@ -261,15 +264,15 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 		goto no_context;
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 
 	if (is_write_fault(fsr)) {
-		flags |= FAULT_FLAG_WRITE;
-		vm_flags = VM_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
+		vmf.vm_flags = VM_WRITE;
 	}
 
 	if (fsr & FSR_LNX_PF) {
-		vm_flags = VM_EXEC;
+		vmf.vm_flags = VM_EXEC;
 
 		if (is_permission_fault(fsr) && !user_mode(regs))
 			die_kernel_fault("execution of memory",
@@ -278,6 +281,18 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
+		return 0;
+	}
+
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
 	if (unlikely(!vma)) {
@@ -289,10 +304,10 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	 * ok, we have a good vm_area for this memory access, check the
 	 * permissions on the VMA allow for the fault which occurred.
 	 */
-	if (!(vma->vm_flags & vm_flags))
+	if (!(vma->vm_flags & vmf.vm_flags))
 		fault = VM_FAULT_BADACCESS;
 	else
-		fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
+		fault = handle_mm_fault(vma, addr & PAGE_MASK, vmf.flags, regs);
 
 	/* If we need to retry but a fatal signal is pending, handle the
 	 * signal first. We do not need to release the mmap_lock because
@@ -310,13 +325,13 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
 	if (!(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_RETRY) {
-			flags |= FAULT_FLAG_TRIED;
+			vmf.flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
 	}
 
 	mmap_read_unlock(mm);
-
+done:
 	/*
 	 * Handle the "normal" case first - VM_FAULT_MAJOR
 	 */
-- 
2.27.0