From: "Matthew Wilcox (Oracle)"
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Linus Torvalds, "Liam R. Howlett"
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH] mm: Always downgrade mmap_lock if requested
Date: Thu, 29 Jun 2023 20:14:14 +0100
Message-Id: <20230629191414.1215929-1-willy@infradead.org>

Now that stack growth must always hold the mmap_lock for write, we can
always downgrade the mmap_lock to read and safely unmap pages from the
page table, even if we're next to a stack.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/mmap.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 9b5188b65800..82efaca58ca2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2550,19 +2550,8 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	mm->locked_vm -= locked_vm;
 	mm->map_count -= count;
-	/*
-	 * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
-	 * VM_GROWSUP VMA. Such VMAs can change their size under
-	 * down_read(mmap_lock) and collide with the VMA we are about to unmap.
-	 */
-	if (downgrade) {
-		if (next && (next->vm_flags & VM_GROWSDOWN))
-			downgrade = false;
-		else if (prev && (prev->vm_flags & VM_GROWSUP))
-			downgrade = false;
-		else
-			mmap_write_downgrade(mm);
-	}
+	if (downgrade)
+		mmap_write_downgrade(mm);
 
 	/*
 	 * We can free page tables without write-locking mmap_lock because VMAs
-- 
2.39.2
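
For readers skimming the archive: below is a minimal sketch, in
ordinary kernel style, of the two-phase locking pattern the commit
message relies on. It is not the actual do_vmi_align_munmap() code;
detach_vmas() and free_pgtables_and_pages() are hypothetical stand-ins
for the maple-tree detach step and the page-table teardown step, while
mmap_write_lock()/mmap_write_downgrade()/mmap_read_unlock() are the
real mmap_lock API.

/*
 * Sketch only.  Phase 1 (unlinking the VMAs so no lookup can find
 * them) needs the mmap_lock held for write.  Phase 2 (freeing page
 * tables and pages) only needs it for read, because the VMAs are
 * already detached.  Before this patch, the downgrade had to be
 * skipped when prev/next had VM_GROWSUP/VM_GROWSDOWN set, since a
 * stack expansion could then run under the read lock and grow into
 * the range being freed.  Stack growth now always takes the write
 * lock, so the downgrade is unconditionally safe.
 */
static void sketch_munmap(struct mm_struct *mm, unsigned long start,
			  unsigned long end, bool downgrade)
{
	mmap_write_lock(mm);

	detach_vmas(mm, start, end);		/* hypothetical: phase 1 */

	if (downgrade)
		mmap_write_downgrade(mm);	/* write lock becomes read lock */

	free_pgtables_and_pages(mm, start, end); /* hypothetical: phase 2 */

	if (downgrade)
		mmap_read_unlock(mm);
	else
		mmap_write_unlock(mm);
}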