From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:10 -0800
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
 brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
 hughd@google.com, lokeshgidra@google.com, minchan@google.com,
 jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
 pasha.tatashin@soleen.com, klarasmodin@gmail.com,
 richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com, "Liam R. Howlett"
Subject: [PATCH v8 01/16] mm: introduce vma_start_read_locked{_nested} helpers
Message-ID: <20250109023025.2242447-2-surenb@google.com>
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>

Introduce helper functions which can be used to read-lock a VMA when
holding mmap_lock for read. Replace direct accesses to vma->vm_lock
with these new helpers.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Davidlohr Bueso
Reviewed-by: Shakeel Butt
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h | 24 ++++++++++++++++++++++++
 mm/userfaultfd.c   | 22 +++++-----------------
 2 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 57b9e4dc4724..b040376ee81f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -735,6 +735,30 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	return true;
 }
 
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read_nested(&vma->vm_lock->lock, subclass);
+}
+
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked(struct vm_area_struct *vma)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read(&vma->vm_lock->lock);
+}
+
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 11b7eb3c8a28..a03c6f1ceb9e 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -84,16 +84,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
-	if (!IS_ERR(vma)) {
-		/*
-		 * We cannot use vma_start_read() as it may fail due to
-		 * false locked (see comment in vma_start_read()). We
-		 * can avoid that by directly locking vm_lock under
-		 * mmap_lock, which guarantees that nobody can lock the
-		 * vma for write (vma_start_write()) under us.
-		 */
-		down_read(&vma->vm_lock->lock);
-	}
+	if (!IS_ERR(vma))
+		vma_start_read_locked(vma);
 
 	mmap_read_unlock(mm);
 	return vma;
@@ -1490,14 +1482,10 @@ static int uffd_move_lock(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		/*
-		 * See comment in uffd_lock_vma() as to why not using
-		 * vma_start_read() here.
-		 */
-		down_read(&(*dst_vmap)->vm_lock->lock);
+		vma_start_read_locked(*dst_vmap);
 		if (*dst_vmap != *src_vmap)
-			down_read_nested(&(*src_vmap)->vm_lock->lock,
-					 SINGLE_DEPTH_NESTING);
+			vma_start_read_locked_nested(*src_vmap,
+						     SINGLE_DEPTH_NESTING);
 	}
 	mmap_read_unlock(mm);
 	return err;
-- 
2.47.1.613.gc27f4b7a9f-goog
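
[ For orientation, a minimal usage sketch of the new helpers — not part
of the patch, and the surrounding function is hypothetical. The pattern
is: take the mmap read lock, read-lock the VMA (which cannot fail while
mmap_lock is held for read), then drop the mmap lock:

	static struct vm_area_struct *lock_vma_for_read(struct mm_struct *mm,
							unsigned long address)
	{
		struct vm_area_struct *vma;

		mmap_read_lock(mm);
		vma = find_vma(mm, address);
		if (vma)
			vma_start_read_locked(vma);
		mmap_read_unlock(mm);

		return vma;	/* caller releases it with vma_end_read() */
	}
]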
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:11 -0800
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
 brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
 hughd@google.com, lokeshgidra@google.com, minchan@google.com,
 jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
 pasha.tatashin@soleen.com, klarasmodin@gmail.com,
 richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com, "Liam R. Howlett"
Subject: [PATCH v8 02/16] mm: move per-vma lock into vm_area_struct
Message-ID: <20250109023025.2242447-3-surenb@google.com>
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>

Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of the performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].

Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefits, there are no reasons
for this split. Merging the vm_lock back into vm_area_struct also
allows vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this
patchset.

Move vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned as well. With
kernel compiled using defconfig, this causes VMA memory consumption to
grow from 160 (vm_area_struct) + 40 (vm_lock) bytes to 256 bytes:

    slabinfo before:
     ...            : ...
     vma_lock       ...  40 102 1 : ...
     vm_area_struct ... 160  51 2 : ...

    slabinfo after moving vm_lock:
     ...            : ...
     vm_area_struct ... 256  32 2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64
pages, which is 5.5MB per 100000 VMAs. Note that the size of this
structure is dependent on the kernel configuration and typically the
original size is higher than 160 bytes. Therefore these calculations
are close to the worst case scenario. A more realistic vm_area_struct
usage before this change is:

     ...            : ...
     vma_lock       ...  40 102 1 : ...
     vm_area_struct ... 176  46 2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 54 to 64
pages, which is 3.9MB per 100000 VMAs. This memory consumption growth
can be addressed later by optimizing the vm_lock.

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h               | 28 ++++++++++--------
 include/linux/mm_types.h         |  6 ++--
 kernel/fork.c                    | 49 ++++----------------------------
 tools/testing/vma/vma_internal.h | 33 +++++----------------
 4 files changed, 32 insertions(+), 84 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b040376ee81f..920e5ddd77cc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -697,6 +697,12 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_lock_init(struct vm_area_struct *vma)
+{
+	init_rwsem(&vma->vm_lock.lock);
+	vma->vm_lock_seq = UINT_MAX;
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -714,7 +720,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
+	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
 		return false;
 
 	/*
@@ -729,7 +735,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
 	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock->lock);
+		up_read(&vma->vm_lock.lock);
 		return false;
 	}
 	return true;
@@ -744,7 +750,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock->lock, subclass);
+	down_read_nested(&vma->vm_lock.lock, subclass);
 }
 
 /*
@@ -756,13 +762,13 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
 static inline void vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock->lock);
+	down_read(&vma->vm_lock.lock);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock->lock);
+	up_read(&vma->vm_lock.lock);
 	rcu_read_unlock();
 }
 
@@ -791,7 +797,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock->lock);
+	down_write(&vma->vm_lock.lock);
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -799,7 +805,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock->lock);
+	up_write(&vma->vm_lock.lock);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -811,7 +817,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock->lock))
+	if (!rwsem_is_locked(&vma->vm_lock.lock))
 		vma_assert_write_locked(vma);
 }
 
@@ -844,6 +850,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 
 #else /* CONFIG_PER_VMA_LOCK */
 
+static inline void vma_lock_init(struct vm_area_struct *vma) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -878,10 +885,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
 
-/*
- * WARNING: vma_init does not initialize vma->vm_lock.
- * Use vm_area_alloc()/vm_area_free() if vma needs locking.
- */
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
 	memset(vma, 0, sizeof(*vma));
@@ -890,6 +893,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
 	vma_numab_state_init(vma);
+	vma_lock_init(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 70dce20cbfd1..0ca63dee1902 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -738,8 +738,6 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	/* Unstable RCU readers are allowed to read this. */
-	struct vma_lock *vm_lock;
 #endif
 
 	/*
@@ -792,6 +790,10 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Unstable RCU readers are allowed to read this. */
+	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+#endif
 } __randomize_layout;
 
 #ifdef CONFIG_NUMA
diff --git a/kernel/fork.c b/kernel/fork.c
index ded49f18cd95..40a8e615499f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
-#ifdef CONFIG_PER_VMA_LOCK
-
-/* SLAB cache for vm_area_struct.lock */
-static struct kmem_cache *vma_lock_cachep;
-
-static bool vma_lock_alloc(struct vm_area_struct *vma)
-{
-	vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
-}
-
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	kmem_cache_free(vma_lock_cachep, vma->vm_lock);
-}
-
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
-static inline void vma_lock_free(struct vm_area_struct *vma) {}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
@@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		kmem_cache_free(vm_area_cachep, vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -496,10 +463,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * will be reinitialized.
 	 */
 	data_race(memcpy(new, orig, sizeof(*new)));
-	if (!vma_lock_alloc(new)) {
-		kmem_cache_free(vm_area_cachep, new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
@@ -511,7 +475,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 {
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
-	vma_lock_free(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
@@ -522,7 +485,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 					  vm_rcu);
 
 	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -3188,11 +3151,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-
-	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
-#ifdef CONFIG_PER_VMA_LOCK
-	vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
-#endif
+	vm_area_cachep = KMEM_CACHE(vm_area_struct,
+			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 2404347fa2c7..96aeb28c81f9 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -274,10 +274,10 @@ struct vm_area_struct {
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 *  - mmap_lock (in write mode)
-	 *  - vm_lock->lock (in write mode)
+	 *  - vm_lock.lock (in write mode)
 	 * Can be read reliably while holding one of:
 	 *  - mmap_lock (in read or write mode)
-	 *  - vm_lock->lock (in read or write mode)
+	 *  - vm_lock.lock (in read or write mode)
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -286,7 +286,7 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	struct vma_lock *vm_lock;
+	struct vma_lock vm_lock;
 #endif
 
 	/*
@@ -463,17 +463,10 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 	return mas_find(&vmi->mas, ULONG_MAX);
 }
 
-static inline bool vma_lock_alloc(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma)
 {
-	vma->vm_lock = calloc(1, sizeof(struct vma_lock));
-
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
+	init_rwsem(&vma->vm_lock.lock);
 	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *);
@@ -496,6 +489,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
+	vma_lock_init(vma);
 }
 
 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -506,10 +500,6 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		free(vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -522,10 +512,7 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		return NULL;
 
 	memcpy(new, orig, sizeof(*new));
-	if (!vma_lock_alloc(new)) {
-		free(new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 
 	return new;
@@ -695,14 +682,8 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	free(vma->vm_lock);
-}
-
 static inline void __vm_area_free(struct vm_area_struct *vma)
 {
-	vma_lock_free(vma);
 	free(vma);
 }
 
-- 
2.47.1.613.gc27f4b7a9f-goog
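
[ Spelling out the page arithmetic behind the numbers above — a sanity
check on the quoted slabinfo figures, assuming 4KB pages; not part of
the patch:

	before: 1000 VMAs  / 51 objs per 2-page slab  -> 20 slabs = 40 pages
	        1000 locks / 102 objs per 1-page slab -> 10 slabs = 10 pages
	                                          total = 50 pages
	after:  1000 VMAs  / 32 objs per 2-page slab  -> 32 slabs = 64 pages

	growth: 14 pages (56KB) per 1000 VMAs, i.e. ~5.5MB per 100000 VMAs
]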
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:12 -0800
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
 brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
 hughd@google.com, lokeshgidra@google.com, minchan@google.com,
 jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
 pasha.tatashin@soleen.com, klarasmodin@gmail.com,
 richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com, "Liam R. Howlett"
Subject: [PATCH v8 03/16] mm: mark vma as detached until it's added into vma tree
Message-ID: <20250109023025.2242447-4-surenb@google.com>
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>

The current implementation does not set the detached flag when a VMA
is first allocated. This does not represent the real state of the VMA,
which is detached until it is added into the mm's VMA tree. Fix this
by marking new VMAs as detached and resetting the detached flag only
after the VMA is added into a tree.

Introduce vma_mark_attached() to make the API more readable and to
simplify possible future cleanup when vma->vm_mm might be used to
indicate a detached vma, at which point vma_mark_attached() will need
an additional mm parameter.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Shakeel Butt
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h               | 27 ++++++++++++++++++++-------
 kernel/fork.c                    |  4 ++++
 mm/memory.c                      |  2 +-
 mm/vma.c                         |  6 +++---
 mm/vma.h                         |  2 ++
 tools/testing/vma/vma_internal.h | 17 ++++++++++++-----
 6 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 920e5ddd77cc..a9d8dd5745f7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,12 +821,21 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 		vma_assert_write_locked(vma);
 }
 
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
 	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
+}
+
+static inline bool is_vma_detached(struct vm_area_struct *vma)
+{
+	return vma->detached;
 }
 
 static inline void release_fault_lock(struct vm_fault *vmf)
@@ -857,8 +866,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 		{ mmap_assert_write_locked(vma->vm_mm); }
-static inline void vma_mark_detached(struct vm_area_struct *vma,
-				     bool detached) {}
+static inline void vma_mark_attached(struct vm_area_struct *vma) {}
+static inline void vma_mark_detached(struct vm_area_struct *vma) {}
 
 static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		unsigned long address)
@@ -891,7 +900,10 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
+#endif
 	vma_numab_state_init(vma);
 	vma_lock_init(vma);
 }
@@ -1086,6 +1098,7 @@ static inline int vma_iter_bulk_store(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
 
+	vma_mark_attached(vma);
 	return 0;
 }
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 40a8e615499f..f2f9e7b427ad 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -465,6 +465,10 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	data_race(memcpy(new, orig, sizeof(*new)));
 	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
+#endif
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
 
diff --git a/mm/memory.c b/mm/memory.c
index 1342d451b1bd..105b99064ce5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6391,7 +6391,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		goto inval;
 
 	/* Check if the VMA got isolated after we found it */
-	if (vma->detached) {
+	if (is_vma_detached(vma)) {
 		vma_end_read(vma);
 		count_vm_vma_lock_event(VMA_LOCK_MISS);
 		/* The area was replaced with another one */
diff --git a/mm/vma.c b/mm/vma.c
index af1d549b179c..d603494e69d7 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -327,7 +327,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
 
 	if (vp->remove) {
 again:
-		vma_mark_detached(vp->remove, true);
+		vma_mark_detached(vp->remove);
 		if (vp->file) {
 			uprobe_munmap(vp->remove, vp->remove->vm_start,
 				      vp->remove->vm_end);
@@ -1221,7 +1221,7 @@ static void reattach_vmas(struct ma_state *mas_detach)
 
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
-		vma_mark_detached(vma, false);
+		vma_mark_attached(vma);
 
 	__mt_destroy(mas_detach->tree);
 }
@@ -1296,7 +1296,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		if (error)
 			goto munmap_gather_failed;
 
-		vma_mark_detached(next, true);
+		vma_mark_detached(next);
 		nrpages = vma_pages(next);
 
 		vms->nr_pages += nrpages;
diff --git a/mm/vma.h b/mm/vma.h
index a2e8710b8c47..2a2668de8d2c 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -157,6 +157,7 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
 
+	vma_mark_attached(vma);
 	return 0;
 }
 
@@ -389,6 +390,7 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
 
 	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
 	mas_store_prealloc(&vmi->mas, vma);
+	vma_mark_attached(vma);
 }
 
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 96aeb28c81f9..47c8b03ffbbd 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -469,13 +469,17 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
 	vma->vm_lock_seq = UINT_MAX;
 }
 
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *);
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
 	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
 }
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
@@ -488,7 +492,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
 	vma_lock_init(vma);
 }
 
@@ -514,6 +519,8 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	memcpy(new, orig, sizeof(*new));
 	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
 
 	return new;
 }
-- 
2.47.1.613.gc27f4b7a9f-goog
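
[ An illustrative VMA lifecycle under the new attach/detach discipline —
a sketch only; the actual call sites vary:

	vma = vm_area_alloc(mm);	/* allocated detached */
	...
	vma_iter_store(&vmi, vma);	/* inserted into the VMA tree,
					 * which marks it attached */
	...
	vma_start_write(vma);		/* detaching requires the vma
					 * to be write-locked */
	vma_mark_detached(vma);
	vm_area_free(vma);
]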
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:13 -0800
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
 brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
 hughd@google.com, lokeshgidra@google.com, minchan@google.com,
 jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
 pasha.tatashin@soleen.com, klarasmodin@gmail.com,
 richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com
Subject: [PATCH v8 04/16] mm: introduce vma_iter_store_attached() to use with attached vmas
Message-ID: <20250109023025.2242447-5-surenb@google.com>
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>

vma_iter_store() functions can be used both when adding a new vma and
when updating an existing one. However, for existing vmas we do not
need to mark them attached, as they are already marked that way.
Introduce vma_iter_store_attached() to be used with already attached
vmas.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 12 ++++++++++++
 mm/vma.c           |  8 ++++----
 mm/vma.h           | 11 +++++++++--
 3 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a9d8dd5745f7..e0d403c1ff63 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,6 +821,16 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 		vma_assert_write_locked(vma);
 }
 
+static inline void vma_assert_attached(struct vm_area_struct *vma)
+{
+	VM_BUG_ON_VMA(vma->detached, vma);
+}
+
+static inline void vma_assert_detached(struct vm_area_struct *vma)
+{
+	VM_BUG_ON_VMA(!vma->detached, vma);
+}
+
 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
 	vma->detached = false;
@@ -876,6 +886,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 		{ mmap_assert_write_locked(vma->vm_mm); }
+static inline void vma_assert_attached(struct vm_area_struct *vma) {}
+static inline void vma_assert_detached(struct vm_area_struct *vma) {}
 static inline void vma_mark_attached(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma) {}
 
diff --git a/mm/vma.c b/mm/vma.c
index d603494e69d7..b9cf552e120c 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -660,14 +660,14 @@ static int commit_merge(struct vma_merge_struct *vmg,
 	vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff);
 
 	if (expanded)
-		vma_iter_store(vmg->vmi, vmg->vma);
+		vma_iter_store_attached(vmg->vmi, vmg->vma);
 
 	if (adj_start) {
 		adjust->vm_start += adj_start;
 		adjust->vm_pgoff += PHYS_PFN(adj_start);
 		if (adj_start < 0) {
 			WARN_ON(expanded);
-			vma_iter_store(vmg->vmi, adjust);
+			vma_iter_store_attached(vmg->vmi, adjust);
 		}
 	}
 
@@ -2845,7 +2845,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 				anon_vma_interval_tree_pre_update_vma(vma);
 				vma->vm_end = address;
 				/* Overwrite old entry in mtree. */
-				vma_iter_store(&vmi, vma);
+				vma_iter_store_attached(&vmi, vma);
 				anon_vma_interval_tree_post_update_vma(vma);
 
 				perf_event_mmap(vma);
@@ -2925,7 +2925,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 				vma->vm_start = address;
 				vma->vm_pgoff -= grow;
 				/* Overwrite old entry in mtree. */
-				vma_iter_store(&vmi, vma);
+				vma_iter_store_attached(&vmi, vma);
 				anon_vma_interval_tree_post_update_vma(vma);
 
 				perf_event_mmap(vma);
diff --git a/mm/vma.h b/mm/vma.h
index 2a2668de8d2c..63dd38d5230c 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -365,9 +365,10 @@ static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
 }
 
 /* Store a VMA with preallocated memory */
-static inline void vma_iter_store(struct vma_iterator *vmi,
-				  struct vm_area_struct *vma)
+static inline void vma_iter_store_attached(struct vma_iterator *vmi,
+					   struct vm_area_struct *vma)
 {
+	vma_assert_attached(vma);
 
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
 	if (MAS_WARN_ON(&vmi->mas, vmi->mas.status != ma_start &&
@@ -390,7 +391,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
 
 	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
 	mas_store_prealloc(&vmi->mas, vma);
+}
+
+static inline void vma_iter_store(struct vma_iterator *vmi,
+				  struct vm_area_struct *vma)
+{
 	vma_mark_attached(vma);
+	vma_iter_store_attached(vmi, vma);
 }
 
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
-- 
2.47.1.613.gc27f4b7a9f-goog
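
[ The resulting rule of thumb, sketched — call sites here are assumed:
use vma_iter_store() when inserting a VMA that is not yet in the tree,
and vma_iter_store_attached() when updating the range of one that
already is:

	/* new VMA: insert into the tree and mark it attached */
	vma_iter_store(&vmi, new_vma);

	/* existing VMA whose range changed: update in place;
	 * the helper asserts that the VMA is already attached */
	vma_iter_store_attached(&vmi, vma);
]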
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:14 -0800
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
 brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
 hughd@google.com, lokeshgidra@google.com, minchan@google.com,
 jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
 pasha.tatashin@soleen.com, klarasmodin@gmail.com,
 richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com
Subject: [PATCH v8 05/16] mm: mark vmas detached upon exit
Message-ID: <20250109023025.2242447-6-surenb@google.com>
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>

When exit_mmap() removes vmas belonging to an exiting task, it does not
mark them as detached since they can't be reached by other tasks and
they will be freed shortly. Once we introduce vma reuse, all vmas will
have to be in a detached state before they are freed, to ensure that a
vma is in a consistent state when it is reused. Add the missing
vma_mark_detached() before freeing the vma.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 mm/vma.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index b9cf552e120c..93ff42ac2002 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -413,10 +413,12 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
 	if (vma->vm_file)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
-	if (unreachable)
+	if (unreachable) {
+		vma_mark_detached(vma);
 		__vm_area_free(vma);
-	else
+	} else {
 		vm_area_free(vma);
+	}
 }
 
-- 
2.47.1.613.gc27f4b7a9f-goog
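
[ The invariant this establishes — every VMA is detached by the time it
is freed — could be expressed as a hypothetical debug check; this is
not in the patch, and is_vma_detached() comes from an earlier patch in
this series:

	static void check_vma_before_free(struct vm_area_struct *vma)
	{
		/* hypothetical: freed VMAs must already be detached */
		VM_BUG_ON_VMA(!is_vma_detached(vma), vma);
	}
]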
BGdsRC+xqfIhB7DChsbX08RkKoY+0NiefuXiTg/K7Ys10hyMbYz2YLJgBbI/Sfh0HedB dKl1wKI05axuVXokjXBpU2QtXKurgZD9t/kaSCGGggz/XmZbnfBUGdH18OE4xYsKWKA5 /zlN8vlf99+zfuMI/vz1i0z8nCNi9Qms5XunyYzmjj96bxJd7YGc9R3pMqIQQ/RHXaKF X7og== X-Forwarded-Encrypted: i=1; AJvYcCU5sUu+0q3QVJBdBqTG7676W5K7mmm2RQoUhRfQDLY9JQu1GdHS014YjY2D0UWlVGQ9yrQzZgMLgUs8vVI=@vger.kernel.org X-Gm-Message-State: AOJu0YxvZ+56uNWUe7xxClqyWHz9J5hF/ozCsAv2hRtHyg03KnHlbh2V S406+RZ1y5oymkU9W/HG9iJ7pms8g/Rh8TD7wSqVBBQkAtZSgcrkESsTP7h/7zor0gatp+EgE9M 6Ig== X-Google-Smtp-Source: AGHT+IEzXiHq8FNMI7G+fgWP1KpPrFXIPUQsCePye9PY3Gn0PCmDCaiyOC9LF6jJI5g/L9SJ03+t7trrpGQ= X-Received: from pgcz11.prod.google.com ([2002:a63:7e0b:0:b0:7fd:56a7:26a8]) (user=surenb job=prod-delivery.src-stubby-dispatcher) by 2002:a17:902:ec85:b0:215:6fcd:6cd1 with SMTP id d9443c01a7336-21a83f43ae6mr56359725ad.7.1736389841928; Wed, 08 Jan 2025 18:30:41 -0800 (PST) Date: Wed, 8 Jan 2025 18:30:15 -0800 In-Reply-To: <20250109023025.2242447-1-surenb@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250109023025.2242447-1-surenb@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20250109023025.2242447-7-surenb@google.com> Subject: [PATCH v8 06/16] types: move struct rcuwait into types.h From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com, "Liam R. Howlett" Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Move rcuwait struct definition into types.h so that rcuwait can be used without including rcuwait.h which includes other headers. Without this change mm_types.h can't use rcuwait due to a the following circular dependency: mm_types.h -> rcuwait.h -> signal.h -> mm_types.h Suggested-by: Matthew Wilcox Signed-off-by: Suren Baghdasaryan Acked-by: Davidlohr Bueso Acked-by: Liam R. Howlett --- include/linux/rcuwait.h | 13 +------------ include/linux/types.h | 12 ++++++++++++ 2 files changed, 13 insertions(+), 12 deletions(-) diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h index 27343424225c..9ad134a04b41 100644 --- a/include/linux/rcuwait.h +++ b/include/linux/rcuwait.h @@ -4,18 +4,7 @@ =20 #include #include - -/* - * rcuwait provides a way of blocking and waking up a single - * task in an rcu-safe manner. - * - * The only time @task is non-nil is when a user is blocked (or - * checking if it needs to) on a condition, and reset as soon as we - * know that the condition has succeeded and are awoken. 
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:16 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-8-surenb@google.com>
Subject: [PATCH v8 07/16] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com

With the upcoming replacement of vm_lock with vm_refcnt, we need to handle
the possibility of vma_start_read_locked/vma_start_read_locked_nested
failing due to refcount overflow. Prepare for this by changing these APIs
to return a boolean and adjusting their users.

Signed-off-by: Suren Baghdasaryan
Acked-by: Vlastimil Babka
Cc: Lokesh Gidra
---
 include/linux/mm.h |  6 ++++--
 mm/userfaultfd.c   | 18 +++++++++++++-----
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e0d403c1ff63..6e6edfd4f3d9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -747,10 +747,11 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
  * not be used in such cases because it might fail due to mm_lock_seq overflow.
  * This functionality is used to obtain vma read lock and drop the mmap read lock.
  */
-static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
 	down_read_nested(&vma->vm_lock.lock, subclass);
+	return true;
 }
 
 /*
@@ -759,10 +760,11 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
  * not be used in such cases because it might fail due to mm_lock_seq overflow.
  * This functionality is used to obtain vma read lock and drop the mmap read lock.
  */
-static inline void vma_start_read_locked(struct vm_area_struct *vma)
+static inline bool vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
 	down_read(&vma->vm_lock.lock);
+	return true;
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index a03c6f1ceb9e..eb2ca37b32ee 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -85,7 +85,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
 	if (!IS_ERR(vma))
-		vma_start_read_locked(vma);
+		if (!vma_start_read_locked(vma))
+			vma = ERR_PTR(-EAGAIN);
 
 	mmap_read_unlock(mm);
 	return vma;
@@ -1482,10 +1483,17 @@ static int uffd_move_lock(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		vma_start_read_locked(*dst_vmap);
-		if (*dst_vmap != *src_vmap)
-			vma_start_read_locked_nested(*src_vmap,
-						SINGLE_DEPTH_NESTING);
+		if (vma_start_read_locked(*dst_vmap)) {
+			if (*dst_vmap != *src_vmap) {
+				if (!vma_start_read_locked_nested(*src_vmap,
+							SINGLE_DEPTH_NESTING)) {
+					vma_end_read(*dst_vmap);
+					err = -EAGAIN;
+				}
+			}
+		} else {
+			err = -EAGAIN;
+		}
 	}
 	mmap_read_unlock(mm);
 	return err;
-- 
2.47.1.613.gc27f4b7a9f-goog
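[A minimal caller-side sketch of the new contract, mirroring the
uffd_move_lock() change above: every user must now undo earlier locks and
report a retryable error when a later lock attempt fails. The function name
is hypothetical; kernel context is assumed:]

/* Hypothetical caller, not from the patch: lock two vmas read-side. */
static int lock_two_vmas(struct vm_area_struct *a, struct vm_area_struct *b)
{
	if (!vma_start_read_locked(a))
		return -EAGAIN;	/* refcount overflow: caller retries */

	if (b != a &&
	    !vma_start_read_locked_nested(b, SINGLE_DEPTH_NESTING)) {
		vma_end_read(a);	/* undo the first lock before bailing */
		return -EAGAIN;
	}
	return 0;
}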
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:17 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-9-surenb@google.com>
Subject: [PATCH v8 08/16] mm: move mmap_init_lock() out of the header file
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com

mmap_init_lock() is used only from mm_init() in fork.c, therefore it does
not have to reside in the header file. This move lets us avoid including
additional headers in mmap_lock.h later, when mmap_init_lock() needs to
initialize the rcuwait object.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mmap_lock.h | 6 ------
 kernel/fork.c             | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 45a21faa3ff6..4706c6769902 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -122,12 +122,6 @@ static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int
 
 #endif /* CONFIG_PER_VMA_LOCK */
 
-static inline void mmap_init_lock(struct mm_struct *mm)
-{
-	init_rwsem(&mm->mmap_lock);
-	mm_lock_seqcount_init(mm);
-}
-
 static inline void mmap_write_lock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_start_locking(mm, true);
diff --git a/kernel/fork.c b/kernel/fork.c
index f2f9e7b427ad..d4c75428ccaf 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1219,6 +1219,12 @@ static void mm_init_uprobes_state(struct mm_struct *mm)
 #endif
 }
 
+static inline void mmap_init_lock(struct mm_struct *mm)
+{
+	init_rwsem(&mm->mmap_lock);
+	mm_lock_seqcount_init(mm);
+}
+
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	struct user_namespace *user_ns)
 {
-- 
2.47.1.613.gc27f4b7a9f-goog
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:18 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-10-surenb@google.com>
Subject: [PATCH v8 09/16] mm: uninline the main body of vma_start_write()
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com

vma_start_write() is used in many places and will grow in size very soon.
It is not used in performance-critical paths, so uninlining it should
limit future code-size growth. No functional changes.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 12 +++---------
 mm/memory.c        | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6e6edfd4f3d9..bc8067de41c5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -787,6 +787,8 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+
 /*
  * Begin writing to a VMA.
  * Exclude concurrent readers under the per-VMA lock until the currently
@@ -799,15 +801,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock.lock);
-	/*
-	 * We should use WRITE_ONCE() here because we can have concurrent reads
-	 * from the early lockless pessimistic check in vma_start_read().
-	 * We don't really care about the correctness of that early check, but
-	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
-	 */
-	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+	__vma_start_write(vma, mm_lock_seq);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index 105b99064ce5..26569a44fb5c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6370,6 +6370,20 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+{
+	down_write(&vma->vm_lock.lock);
+	/*
+	 * We should use WRITE_ONCE() here because we can have concurrent reads
+	 * from the early lockless pessimistic check in vma_start_read().
+	 * We don't really care about the correctness of that early check, but
+	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
+	 */
+	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
+	up_write(&vma->vm_lock.lock);
+}
+EXPORT_SYMBOL_GPL(__vma_start_write);
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the
-- 
2.47.1.613.gc27f4b7a9f-goog
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:19 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-11-surenb@google.com>
Subject: [PATCH v8 10/16] refcount: introduce __refcount_{add|inc}_not_zero_limited
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com

Introduce refcount functions which increment a refcount but fail if the
result would exceed a top limit. The limit itself is inclusive:
incrementing up to the limit succeeds, going past it fails. Setting the
limit to INT_MAX indicates no limit.

Signed-off-by: Suren Baghdasaryan
Acked-by: Vlastimil Babka
---
 include/linux/refcount.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 35f039ecb272..4934247848cf 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -137,13 +137,19 @@ static inline unsigned int refcount_read(const refcount_t *r)
 }
 
 static inline __must_check __signed_wrap
-bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+bool __refcount_add_not_zero_limited(int i, refcount_t *r, int *oldp,
+				     int limit)
 {
 	int old = refcount_read(r);
 
 	do {
 		if (!old)
 			break;
+		if (i > limit - old) {
+			if (oldp)
+				*oldp = old;
+			return false;
+		}
 	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
 
 	if (oldp)
@@ -155,6 +161,12 @@ bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
 	return old;
 }
 
+static inline __must_check __signed_wrap
+bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+{
+	return __refcount_add_not_zero_limited(i, r, oldp, INT_MAX);
+}
+
 /**
  * refcount_add_not_zero - add a value to a refcount unless it is 0
  * @i: the value to add to the refcount
@@ -213,6 +225,12 @@ static inline void refcount_add(int i, refcount_t *r)
 	__refcount_add(i, r, NULL);
 }
 
+static inline __must_check bool __refcount_inc_not_zero_limited(refcount_t *r,
+								int *oldp, int limit)
+{
+	return __refcount_add_not_zero_limited(1, r, oldp, limit);
+}
+
 static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp)
 {
 	return __refcount_add_not_zero(1, r, oldp);
-- 
2.47.1.613.gc27f4b7a9f-goog
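[A standalone userspace model of the inclusive-limit semantics, using C11
atomics and simplified to return only success/failure rather than the old
value. This is a sketch for illustration, not the kernel implementation:]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool add_not_zero_limited(int i, _Atomic int *refs, int limit)
{
	int old = atomic_load(refs);

	do {
		if (!old)
			return false;	/* refcount already dropped to 0 */
		if (i > limit - old)
			return false;	/* old + i would exceed the limit */
	} while (!atomic_compare_exchange_weak(refs, &old, old + i));

	return true;
}

int main(void)
{
	_Atomic int refs = 9;

	printf("%d\n", add_not_zero_limited(1, &refs, 10)); /* 1: reaches limit */
	printf("%d\n", add_not_zero_limited(1, &refs, 10)); /* 0: would exceed */
	return 0;
}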
From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:20 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-12-surenb@google.com>
Subject: [PATCH v8 11/16] mm: replace vm_lock and detached flag with a reference count
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com

rw_semaphore is a sizable structure of 40 bytes and consumes considerable
space in each vm_area_struct. However, vma_lock has two important
properties which let us replace rw_semaphore with a simpler structure:

1. Readers never wait. They try to take the vma_lock and fall back to
   mmap_lock if that fails.
2. Only one writer at a time will ever try to write-lock a vma_lock,
   because writers first take mmap_lock in write mode.

Because of these requirements, full rw_semaphore functionality is not
needed and we can replace rw_semaphore and the vma->detached flag with a
refcount (vm_refcnt).

When a vma is in the detached state, vm_refcnt is 0 and only a call to
vma_mark_attached() can take it out of this state. Note that, unlike
before, we now enforce that both vma_mark_attached() and
vma_mark_detached() are done only after the vma has been write-locked.
vma_mark_attached() changes vm_refcnt to 1 to indicate that the vma has
been attached to the vma tree.

When a reader takes the read lock, it increments vm_refcnt, unless the
top usable bit of vm_refcnt (0x40000000) is set, indicating the presence
of a writer. When a writer takes the write lock, it sets the top usable
bit to indicate its presence. If there are readers, the writer will wait
using the newly introduced mm->vma_writer_wait. Since all writers take
mmap_lock in write mode first, there can be only one writer at a time.
The last reader to release the lock will signal the writer to wake up.
The refcount might overflow if there are many competing readers, in which
case read-locking will fail. Readers are expected to handle such failures.

In summary:
1. all readers increment the vm_refcnt;
2. the writer sets the top usable (writer) bit of vm_refcnt;
3. readers cannot increment the vm_refcnt if the writer bit is set;
4. in the presence of readers, the writer must wait for the vm_refcnt to
   drop to 1 (ignoring the writer bit), indicating an attached vma with no
   readers;
5. vm_refcnt overflow is handled by the readers.

Suggested-by: Peter Zijlstra
Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h               | 98 ++++++++++++++++++++++----------
 include/linux/mm_types.h         | 22 ++++---
 kernel/fork.c                    | 13 ++---
 mm/init-mm.c                     |  1 +
 mm/memory.c                      | 77 +++++++++++++++++++++----
 tools/testing/vma/linux/atomic.h |  5 ++
 tools/testing/vma/vma_internal.h | 66 +++++++++++----------
 7 files changed, 193 insertions(+), 89 deletions(-)
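[An illustrative userspace model of rules 1 and 3 above, using C11 atomics;
not part of the patch. A reader increments vm_refcnt only while the count
is nonzero (attached), below the overflow limit, and free of the writer
bit. VMA_LOCK_OFFSET and VMA_REF_LIMIT mirror the kernel constants:]

#include <stdatomic.h>
#include <stdbool.h>

#define VMA_LOCK_OFFSET	0x40000000u
#define VMA_REF_LIMIT	(VMA_LOCK_OFFSET - 1)

static bool reader_try_lock(_Atomic unsigned int *vm_refcnt)
{
	unsigned int old = atomic_load(vm_refcnt);

	do {
		if (old == 0)
			return false;	/* detached vma */
		if (old + 1 > VMA_REF_LIMIT)
			return false;	/* writer bit set, or near overflow */
	} while (!atomic_compare_exchange_weak(vm_refcnt, &old, old + 1));

	/* The kernel still re-checks vm_lock_seq here; see vma_start_read(). */
	return true;
}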
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8067de41c5..ec7c064792ff 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -32,6 +32,7 @@
 #include <linux/kasan.h>
 #include <linux/memremap.h>
 #include <linux/slab.h>
+#include <linux/rcuwait.h>
 
 struct mempolicy;
 struct anon_vma;
@@ -697,12 +698,41 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
-static inline void vma_lock_init(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt)
 {
-	init_rwsem(&vma->vm_lock.lock);
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	static struct lock_class_key lockdep_key;
+
+	lockdep_init_map(&vma->vmlock_dep_map, "vm_lock", &lockdep_key, 0);
+#endif
+	if (reset_refcnt)
+		refcount_set(&vma->vm_refcnt, 0);
 	vma->vm_lock_seq = UINT_MAX;
 }
 
+static inline bool is_vma_writer_only(int refcnt)
+{
+	/*
+	 * With a writer and no readers, refcnt is VMA_LOCK_OFFSET if the vma
+	 * is detached and (VMA_LOCK_OFFSET + 1) if it is attached. Waiting on
+	 * a detached vma happens only in vma_mark_detached() and is a rare
+	 * case, therefore most of the time there will be no unnecessary wakeup.
+	 */
+	return refcnt & VMA_LOCK_OFFSET && refcnt <= VMA_LOCK_OFFSET + 1;
+}
+
+static inline void vma_refcount_put(struct vm_area_struct *vma)
+{
+	int oldcnt;
+
+	if (!__refcount_dec_and_test(&vma->vm_refcnt, &oldcnt)) {
+		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+
+		if (is_vma_writer_only(oldcnt - 1))
+			rcuwait_wake_up(&vma->vm_mm->vma_writer_wait);
+	}
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -710,6 +740,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
  */
 static inline bool vma_start_read(struct vm_area_struct *vma)
 {
+	int oldcnt;
+
 	/*
 	 * Check before locking. A race might cause false locked result.
 	 * We can use READ_ONCE() for the mm_lock_seq here, and don't need
@@ -720,13 +752,19 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
+	/*
+	 * If VMA_LOCK_OFFSET is set, __refcount_inc_not_zero_limited() will fail
+	 * because VMA_REF_LIMIT is less than VMA_LOCK_OFFSET.
+	 */
+	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
+						      VMA_REF_LIMIT)))
 		return false;
 
+	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
 	/*
-	 * Overflow might produce false locked result.
+	 * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
 	 * False unlocked result is impossible because we modify and check
-	 * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
+	 * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
 	 * modification invalidates all existing locks.
 	 *
 	 * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
@@ -735,9 +773,10 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
 	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock.lock);
+		vma_refcount_put(vma);
 		return false;
 	}
+
 	return true;
 }
 
@@ -749,8 +788,14 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
  */
 static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
+	int oldcnt;
+
 	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock.lock, subclass);
+	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
+						      VMA_REF_LIMIT)))
+		return false;
+
+	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
 	return true;
 }
 
@@ -762,15 +807,13 @@ static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int
  */
 static inline bool vma_start_read_locked(struct vm_area_struct *vma)
 {
-	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock.lock);
-	return true;
+	return vma_start_read_locked_nested(vma, 0);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock.lock);
+	vma_refcount_put(vma);
 	rcu_read_unlock();
 }
 
@@ -813,36 +856,33 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock.lock))
+	if (refcount_read(&vma->vm_refcnt) <= 1)
 		vma_assert_write_locked(vma);
 }
 
+/*
+ * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these
+ * assertions should be made either under mmap_write_lock or when the object
+ * has been isolated under mmap_write_lock, ensuring no competing writers.
+ */
 static inline void vma_assert_attached(struct vm_area_struct *vma)
 {
-	VM_BUG_ON_VMA(vma->detached, vma);
+	VM_BUG_ON_VMA(!refcount_read(&vma->vm_refcnt), vma);
 }
 
 static inline void vma_assert_detached(struct vm_area_struct *vma)
 {
-	VM_BUG_ON_VMA(!vma->detached, vma);
+	VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt), vma);
 }
 
 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
-	vma->detached = false;
-}
-
-static inline void vma_mark_detached(struct vm_area_struct *vma)
-{
-	/* When detaching vma should be write-locked */
 	vma_assert_write_locked(vma);
-	vma->detached = true;
+	vma_assert_detached(vma);
+	refcount_set(&vma->vm_refcnt, 1);
 }
 
-static inline bool is_vma_detached(struct vm_area_struct *vma)
-{
-	return vma->detached;
-}
+void vma_mark_detached(struct vm_area_struct *vma);
 
 static inline void release_fault_lock(struct vm_fault *vmf)
 {
@@ -865,7 +905,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 
 #else /* CONFIG_PER_VMA_LOCK */
 
-static inline void vma_lock_init(struct vm_area_struct *vma) {}
+static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -908,12 +948,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-#ifdef CONFIG_PER_VMA_LOCK
-	/* vma is not locked, can't use vma_mark_detached() */
-	vma->detached = true;
-#endif
 	vma_numab_state_init(vma);
-	vma_lock_init(vma);
+	vma_lock_init(vma, false);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0ca63dee1902..2d83d79d1899 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -19,6 +19,7 @@
 #include <linux/workqueue.h>
 #include <linux/seqlock.h>
 #include <linux/percpu_counter.h>
+#include <linux/types.h>
 
 #include <asm/mmu.h>
 
@@ -637,9 +638,8 @@ static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
 }
 #endif
 
-struct vma_lock {
-	struct rw_semaphore lock;
-};
+#define VMA_LOCK_OFFSET	0x40000000
+#define VMA_REF_LIMIT	(VMA_LOCK_OFFSET - 1)
 
 struct vma_numab_state {
 	/*
@@ -717,19 +717,13 @@ struct vm_area_struct {
 	};
 
 #ifdef CONFIG_PER_VMA_LOCK
-	/*
-	 * Flag to indicate areas detached from the mm->mm_mt tree.
-	 * Unstable RCU readers are allowed to read this.
-	 */
-	bool detached;
-
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 * - mmap_lock (in write mode)
-	 * - vm_lock->lock (in write mode)
+	 * - vm_refcnt bit at VMA_LOCK_OFFSET is set
 	 * Can be read reliably while holding one of:
 	 * - mmap_lock (in read or write mode)
-	 * - vm_lock->lock (in read or write mode)
+	 * - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -792,7 +786,10 @@ struct vm_area_struct {
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 #ifdef CONFIG_PER_VMA_LOCK
 	/* Unstable RCU readers are allowed to read this. */
-	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+	refcount_t vm_refcnt ____cacheline_aligned_in_smp;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map vmlock_dep_map;
+#endif
 #endif
 } __randomize_layout;
 
@@ -927,6 +924,7 @@ struct mm_struct {
 						 * by mmlist_lock
 						 */
 #ifdef CONFIG_PER_VMA_LOCK
+		struct rcuwait vma_writer_wait;
 		/*
 		 * This field has lock-like semantics, meaning it is sometimes
 		 * accessed with ACQUIRE/RELEASE semantics.
diff --git a/kernel/fork.c b/kernel/fork.c
index d4c75428ccaf..9d9275783cf8 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -463,12 +463,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * will be reinitialized.
 	 */
 	data_race(memcpy(new, orig, sizeof(*new)));
-	vma_lock_init(new);
+	vma_lock_init(new, true);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
-#ifdef CONFIG_PER_VMA_LOCK
-	/* vma is not locked, can't use vma_mark_detached() */
-	new->detached = true;
-#endif
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
 
@@ -477,6 +473,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 
 void __vm_area_free(struct vm_area_struct *vma)
 {
+	/* The vma should be detached while being destroyed. */
+	vma_assert_detached(vma);
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
 	kmem_cache_free(vm_area_cachep, vma);
@@ -488,8 +486,6 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 	struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
 						  vm_rcu);
 
-	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -1223,6 +1219,9 @@ static inline void mmap_init_lock(struct mm_struct *mm)
 {
 	init_rwsem(&mm->mmap_lock);
 	mm_lock_seqcount_init(mm);
+#ifdef CONFIG_PER_VMA_LOCK
+	rcuwait_init(&mm->vma_writer_wait);
+#endif
 }
 
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 6af3ad675930..4600e7605cab 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -40,6 +40,7 @@ struct mm_struct init_mm = {
 	.arg_lock	=  __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
 	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
 #ifdef CONFIG_PER_VMA_LOCK
+	.vma_writer_wait = __RCUWAIT_INITIALIZER(init_mm.vma_writer_wait),
 	.mm_lock_seq	= SEQCNT_ZERO(init_mm.mm_lock_seq),
 #endif
 	.user_ns	= &init_user_ns,
diff --git a/mm/memory.c b/mm/memory.c
index 26569a44fb5c..fe1b47c34052 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6370,9 +6370,41 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+static inline bool __vma_enter_locked(struct vm_area_struct *vma, unsigned int tgt_refcnt)
+{
+	/*
+	 * If vma is detached then only vma_mark_attached() can raise the
+	 * vm_refcnt. mmap_write_lock prevents racing with vma_mark_attached().
+	 */
+	if (!refcount_add_not_zero(VMA_LOCK_OFFSET, &vma->vm_refcnt))
+		return false;
+
+	rwsem_acquire(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
+	rcuwait_wait_event(&vma->vm_mm->vma_writer_wait,
+		   refcount_read(&vma->vm_refcnt) == tgt_refcnt,
+		   TASK_UNINTERRUPTIBLE);
+	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
+
+	return true;
+}
+
+static inline void __vma_exit_locked(struct vm_area_struct *vma, bool *detached)
+{
+	*detached = refcount_sub_and_test(VMA_LOCK_OFFSET, &vma->vm_refcnt);
+	rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+}
+
 void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
 {
-	down_write(&vma->vm_lock.lock);
+	bool locked;
+
+	/*
+	 * __vma_enter_locked() returns false immediately if the vma is not
+	 * attached, otherwise it waits until refcnt is (VMA_LOCK_OFFSET + 1)
+	 * indicating that vma is attached with no readers.
+	 */
+	locked = __vma_enter_locked(vma, VMA_LOCK_OFFSET + 1);
+
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -6380,10 +6412,43 @@ void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+
+	if (locked) {
+		bool detached;
+
+		__vma_exit_locked(vma, &detached);
+		VM_BUG_ON_VMA(detached, vma); /* vma should remain attached */
+	}
 }
 EXPORT_SYMBOL_GPL(__vma_start_write);
 
+void vma_mark_detached(struct vm_area_struct *vma)
+{
+	vma_assert_write_locked(vma);
+	vma_assert_attached(vma);
+
+	/*
+	 * We are the only writer, so no need to use vma_refcount_put().
+	 * The condition below is unlikely because the vma has been already
+	 * write-locked and readers can increment vm_refcnt only temporarily
+	 * before they check vm_lock_seq, realize the vma is locked and drop
+	 * back the vm_refcnt. That is a narrow window for observing a raised
+	 * vm_refcnt.
+	 */
+	if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) {
+		/*
+		 * Wait until refcnt is VMA_LOCK_OFFSET => detached with no
+		 * readers.
+		 */
+		if (__vma_enter_locked(vma, VMA_LOCK_OFFSET)) {
+			bool detached;
+
+			__vma_exit_locked(vma, &detached);
+			VM_BUG_ON_VMA(!detached, vma);
+		}
+	}
+}
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the
@@ -6396,7 +6461,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	struct vm_area_struct *vma;
 
 	rcu_read_lock();
-retry:
 	vma = mas_walk(&mas);
 	if (!vma)
 		goto inval;
@@ -6404,13 +6468,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;
 
-	/* Check if the VMA got isolated after we found it */
-	if (is_vma_detached(vma)) {
-		vma_end_read(vma);
-		count_vm_vma_lock_event(VMA_LOCK_MISS);
-		/* The area was replaced with another one */
-		goto retry;
-	}
 	/*
 	 * At this point, we have a stable reference to a VMA: The VMA is
 	 * locked and we know it hasn't already been isolated.
diff --git a/tools/testing/vma/linux/atomic.h b/tools/testing/vma/linux/atomic.h
index 3e1b6adc027b..788c597c4fde 100644
--- a/tools/testing/vma/linux/atomic.h
+++ b/tools/testing/vma/linux/atomic.h
@@ -9,4 +9,9 @@
 #define atomic_set(x, y) uatomic_set(x, y)
 #define U8_MAX UCHAR_MAX
 
+#ifndef atomic_cmpxchg_relaxed
+#define atomic_cmpxchg_relaxed uatomic_cmpxchg
+#define atomic_cmpxchg_release uatomic_cmpxchg
+#endif /* atomic_cmpxchg_relaxed */
+
 #endif /* _LINUX_ATOMIC_H */
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 47c8b03ffbbd..2ce032943861 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -25,7 +25,7 @@
 #include <linux/maple_tree.h>
 #include <linux/mm.h>
 #include <linux/rbtree.h>
-#include <linux/rwsem.h>
+#include <linux/refcount.h>
 
 extern unsigned long stack_guard_gap;
 #ifdef CONFIG_MMU
@@ -134,10 +134,6 @@ typedef __bitwise unsigned int vm_fault_t;
  */
 #define pr_warn_once pr_err
 
-typedef struct refcount_struct {
-	atomic_t refs;
-} refcount_t;
-
 struct kref {
 	refcount_t refcount;
 };
@@ -232,15 +228,12 @@ struct mm_struct {
 	unsigned long flags; /* Must use atomic bitops to access */
 };
 
-struct vma_lock {
-	struct rw_semaphore lock;
-};
-
-
 struct file {
 	struct address_space	*f_mapping;
 };
 
+#define VMA_LOCK_OFFSET	0x40000000
+
 struct vm_area_struct {
 	/* The first cache line has the info for VMA tree walking. */
 
@@ -268,16 +261,13 @@ struct vm_area_struct {
 	};
 
 #ifdef CONFIG_PER_VMA_LOCK
-	/* Flag to indicate areas detached from the mm->mm_mt tree */
-	bool detached;
-
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 * - mmap_lock (in write mode)
-	 * - vm_lock.lock (in write mode)
+	 * - vm_refcnt bit at VMA_LOCK_OFFSET is set
 	 * Can be read reliably while holding one of:
 	 * - mmap_lock (in read or write mode)
-	 * - vm_lock.lock (in read or write mode)
+	 * - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -286,7 +276,6 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	struct vma_lock vm_lock;
#endif
 
 	/*
@@ -339,6 +328,10 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Unstable RCU readers are allowed to read this. */
+	refcount_t vm_refcnt;
+#endif
 } __randomize_layout;
 
 struct vm_fault {};
@@ -463,23 +456,41 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 	return mas_find(&vmi->mas, ULONG_MAX);
 }
 
-static inline void vma_lock_init(struct vm_area_struct *vma)
+/*
+ * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these
+ * assertions should be made either under mmap_write_lock or when the object
+ * has been isolated under mmap_write_lock, ensuring no competing writers.
+ */
+static inline void vma_assert_attached(struct vm_area_struct *vma)
 {
-	init_rwsem(&vma->vm_lock.lock);
-	vma->vm_lock_seq = UINT_MAX;
+	VM_BUG_ON_VMA(!refcount_read(&vma->vm_refcnt), vma);
 }
 
-static inline void vma_mark_attached(struct vm_area_struct *vma)
+static inline void vma_assert_detached(struct vm_area_struct *vma)
 {
-	vma->detached = false;
+	VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt), vma);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *);
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma_assert_write_locked(vma);
+	vma_assert_detached(vma);
+	refcount_set(&vma->vm_refcnt, 1);
+}
+
 static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
-	/* When detaching vma should be write-locked */
 	vma_assert_write_locked(vma);
-	vma->detached = true;
+	vma_assert_attached(vma);
+
+	/* We are the only writer, so no need to use vma_refcount_put(). */
+	if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) {
+		/*
+		 * Reader must have temporarily raised vm_refcnt but it will
+		 * drop it without using the vma since vma is write-locked.
+		 */
+	}
 }
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
 
@@ -492,9 +503,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	/* vma is not locked, can't use vma_mark_detached() */
-	vma->detached = true;
-	vma_lock_init(vma);
+	vma->vm_lock_seq = UINT_MAX;
 }
 
 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -517,10 +526,9 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		return NULL;
 
 	memcpy(new, orig, sizeof(*new));
-	vma_lock_init(new);
+	refcount_set(&new->vm_refcnt, 0);
+	new->vm_lock_seq = UINT_MAX;
 	INIT_LIST_HEAD(&new->anon_vma_chain);
-	/* vma is not locked, can't use vma_mark_detached() */
-	new->detached = true;
 
 	return new;
 }
-- 
2.47.1.613.gc27f4b7a9f-goog
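[To make the writer side of the handshake concrete, a standalone userspace
model of __vma_enter_locked() in C11 atomics; the rcuwait sleep is modeled
as a spin loop for brevity, and VMA_LOCK_OFFSET mirrors the kernel
constant. For write-locking an attached vma the target is
VMA_LOCK_OFFSET + 1, i.e. attached with no readers:]

#include <stdatomic.h>
#include <stdbool.h>

#define VMA_LOCK_OFFSET 0x40000000u

static bool writer_enter(_Atomic unsigned int *vm_refcnt, unsigned int tgt)
{
	unsigned int old = atomic_load(vm_refcnt);

	/* Publish the writer bit, but only if the vma is attached (refcnt != 0). */
	do {
		if (old == 0)
			return false;	/* detached: nothing to wait for */
	} while (!atomic_compare_exchange_weak(vm_refcnt, &old,
					       old + VMA_LOCK_OFFSET));

	/* Kernel code sleeps on mm->vma_writer_wait here instead of spinning;
	 * the last reader to drop its reference performs the wakeup. */
	while (atomic_load(vm_refcnt) != tgt)
		;
	return true;
}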
--
2.47.1.613.gc27f4b7a9f-goog

From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:21 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-13-surenb@google.com>
Subject: [PATCH v8 12/16] mm/debug: print vm_refcnt state when dumping the vma
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

vm_refcnt encodes a number of useful states:
- whether vma is attached or detached
- the number of current vma readers
- presence of a vma writer

Let's include it in the vma dump.

Signed-off-by: Suren Baghdasaryan
---
 mm/debug.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/debug.c b/mm/debug.c
index 8d2acf432385..325d7bf22038 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -178,6 +178,17 @@ EXPORT_SYMBOL(dump_page);

 void dump_vma(const struct vm_area_struct *vma)
 {
+#ifdef CONFIG_PER_VMA_LOCK
+	pr_emerg("vma %px start %px end %px mm %px\n"
+		"prot %lx anon_vma %px vm_ops %px\n"
+		"pgoff %lx file %px private_data %px\n"
+		"flags: %#lx(%pGv) refcnt %x\n",
+		vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_mm,
+		(unsigned long)pgprot_val(vma->vm_page_prot),
+		vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
+		vma->vm_file, vma->vm_private_data,
+		vma->vm_flags, &vma->vm_flags, refcount_read(&vma->vm_refcnt));
+#else
 	pr_emerg("vma %px start %px end %px mm %px\n"
 		"prot %lx anon_vma %px vm_ops %px\n"
 		"pgoff %lx file %px private_data %px\n"
@@ -187,6 +198,7 @@ void dump_vma(const struct vm_area_struct *vma)
 		vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
 		vma->vm_file, vma->vm_private_data,
 		vma->vm_flags, &vma->vm_flags);
+#endif
 }
 EXPORT_SYMBOL(dump_vma);
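
A quick way to interpret the refcnt value printed above, assuming the encoding
used by this series (1 is attached and idle, each reader adds 1, and the
VMA_LOCK_OFFSET bit marks a pending writer); the helper below is hypothetical,
not part of the patch:

/* Illustrative decoder for a vm_refcnt snapshot from dump_vma(). */
#include <stdio.h>

#define VMA_LOCK_OFFSET 0x40000000u

static void describe_refcnt(unsigned int refcnt)
{
	if (refcnt == 0)
		puts("detached");
	else if (refcnt & VMA_LOCK_OFFSET)
		printf("writer present, %u reader(s)\n",
		       (refcnt & ~VMA_LOCK_OFFSET) - 1);
	else
		printf("attached, %u reader(s)\n", refcnt - 1);
}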
--
2.47.1.613.gc27f4b7a9f-goog

From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:22 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-14-surenb@google.com>
Subject: [PATCH v8 13/16] mm: remove extra vma_numab_state_init() call
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

vma_init() already memsets the whole vm_area_struct to 0, so there is
no need for an additional vma_numab_state_init() call.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ec7c064792ff..aca65cc0a26e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -948,7 +948,6 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_numab_state_init(vma);
 	vma_lock_init(vma, false);
 }

--
2.47.1.613.gc27f4b7a9f-goog

From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:23 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-15-surenb@google.com>
Subject: [PATCH v8 14/16] mm: prepare lock_vma_under_rcu() for vma reuse possibility
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Once we make vma cache SLAB_TYPESAFE_BY_RCU, it will be possible for
a vma to be reused and attached to another mm after lock_vma_under_rcu()
locks the vma. lock_vma_under_rcu() should ensure that vma_start_read()
is using the original mm and, after locking the vma, it should ensure
that vma->vm_mm has not changed from under us.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 10 ++++++----
 mm/memory.c        |  7 ++++---
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index aca65cc0a26e..1d6b1563b956 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -737,8 +737,10 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
  * using mmap_lock. The function should never yield false unlocked result.
+ * False locked result is possible if mm_lock_seq overflows or if vma gets
+ * reused and attached to a different mm before we lock it.
  */
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	int oldcnt;

@@ -749,7 +751,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * we don't rely on for anything - the mm_lock_seq read against which we
 	 * need ordering is below.
 	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
 		return false;

 	/*
@@ -772,7 +774,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * after it has been unlocked.
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
-	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
 		vma_refcount_put(vma);
 		return false;
 	}
@@ -906,7 +908,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 #else /* CONFIG_PER_VMA_LOCK */

 static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
-static inline bool vma_start_read(struct vm_area_struct *vma) { return false; }
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma) { return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
diff --git a/mm/memory.c b/mm/memory.c
index fe1b47c34052..a8e7e794178e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6465,7 +6465,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma)
 		goto inval;

-	if (!vma_start_read(vma))
+	if (!vma_start_read(mm, vma))
 		goto inval;

 	/*
@@ -6475,8 +6475,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	 * fields are accessible for RCU readers.
 	 */

-	/* Check since vm_start/vm_end might change before we lock the VMA */
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+	/* Check if the vma we locked is the right one. */
+	if (unlikely(vma->vm_mm != mm ||
+		     address < vma->vm_start || address >= vma->vm_end))
 		goto inval_end_read;

 	rcu_read_unlock();
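
The hunk above is an instance of the lock-then-revalidate pattern: take the
reference first, then recheck identity, because a SLAB_TYPESAFE_BY_RCU object
can be recycled between lookup and lock. A compilable sketch of the pattern
follows; rcu_lookup_vma(), vma_read_trylock() and vma_read_unlock() are
hypothetical stand-ins, not kernel APIs:

/* Validate-after-lock pattern, modeled with stand-in types. */
#include <stddef.h>
#include <stdbool.h>

struct mm;
struct vma {
	struct mm *vm_mm;
	unsigned long vm_start, vm_end;
};

extern struct vma *rcu_lookup_vma(struct mm *mm, unsigned long addr);
extern bool vma_read_trylock(struct mm *mm, struct vma *vma);
extern void vma_read_unlock(struct vma *vma);

static struct vma *lock_vma(struct mm *mm, unsigned long addr)
{
	struct vma *vma = rcu_lookup_vma(mm, addr);

	if (!vma || !vma_read_trylock(mm, vma))
		return NULL;
	/*
	 * The object may have been freed and reused for another mm between
	 * lookup and trylock, so recheck its identity under the lock.
	 */
	if (vma->vm_mm != mm || addr < vma->vm_start || addr >= vma->vm_end) {
		vma_read_unlock(vma);
		return NULL;
	}
	return vma;
}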
--
2.47.1.613.gc27f4b7a9f-goog

From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:24 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-16-surenb@google.com>
Subject: [PATCH v8 15/16] mm: make vma cache SLAB_TYPESAFE_BY_RCU
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

To enable SLAB_TYPESAFE_BY_RCU for vma cache we need to ensure that
object reuse before RCU grace period is over will be detected by
lock_vma_under_rcu().

Current checks are sufficient as long as vma is detached before it is
freed. The only place this is not currently happening is in exit_mmap().
Add the missing vma_mark_detached() in exit_mmap().

Another issue which might trick lock_vma_under_rcu() during vma reuse
is vm_area_dup(), which copies the entire content of the vma into a new
one, overriding new vma's vm_refcnt and temporarily making it appear as
attached. This might trick a racing lock_vma_under_rcu() into operating
on a reused vma if it found the vma before it got reused. To prevent
this situation, we should ensure that vm_refcnt stays at detached state
(0) when it is copied and advances to attached state only after it is
added into the vma tree. Introduce vma_copy() which preserves new vma's
vm_refcnt and use it in vm_area_dup(). Since all vmas are in detached
state with no current readers when they are freed, lock_vma_under_rcu()
will not be able to take vm_refcnt after vma got detached even if vma
is reused.

Finally, make vm_area_cachep SLAB_TYPESAFE_BY_RCU. This will facilitate
vm_area_struct reuse and will minimize the number of call_rcu() calls.
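
The commit message relies on the standard SLAB_TYPESAFE_BY_RCU reader
contract: memory stays type-stable across a grace period, but the object may
be recycled, so a reader must pin it and then revalidate its identity. A
generic sketch with illustrative stand-ins (lookup, obj_get_ref, obj_put_ref
are hypothetical names):

/* Generic SLAB_TYPESAFE_BY_RCU reader pattern. */
#include <stdbool.h>

struct obj { unsigned long key; };

extern void rcu_read_lock(void);
extern void rcu_read_unlock(void);
extern struct obj *lookup(unsigned long key);
extern bool obj_get_ref(struct obj *obj);	/* fails if being freed */
extern void obj_put_ref(struct obj *obj);

static struct obj *typesafe_lookup(unsigned long key)
{
	struct obj *obj;

	rcu_read_lock();
	obj = lookup(key);
	if (obj) {
		if (!obj_get_ref(obj)) {
			obj = NULL;		/* object is being freed */
		} else if (obj->key != key) {
			obj_put_ref(obj);	/* we pinned a reused object */
			obj = NULL;
		}
	}
	rcu_read_unlock();
	return obj;
}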
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h               |  2 -
 include/linux/mm_types.h         | 10 +++--
 include/linux/slab.h             |  6 ---
 kernel/fork.c                    | 72 ++++++++++++++++++++------------
 mm/mmap.c                        |  3 +-
 mm/vma.c                         | 11 ++---
 mm/vma.h                         |  2 +-
 tools/testing/vma/vma_internal.h |  7 +---
 8 files changed, 59 insertions(+), 54 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1d6b1563b956..a674558e4c05 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code,
 struct vm_area_struct *vm_area_alloc(struct mm_struct *);
 struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
 void vm_area_free(struct vm_area_struct *);
-/* Use only if VMA has no other users */
-void __vm_area_free(struct vm_area_struct *vma);

 #ifndef CONFIG_MMU
 extern struct rb_root nommu_region_tree;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2d83d79d1899..93bfcd0c1fde 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -582,6 +582,12 @@ static inline void *folio_get_private(struct folio *folio)

 typedef unsigned long vm_flags_t;

+/*
+ * freeptr_t represents a SLUB freelist pointer, which might be encoded
+ * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
+ */
+typedef struct { unsigned long v; } freeptr_t;
+
 /*
  * A region containing a mapping of a non-memory backed file under NOMMU
  * conditions. These are held in a global tree and are pinned by the VMAs that
@@ -695,9 +701,7 @@ struct vm_area_struct {
 			unsigned long vm_start;
 			unsigned long vm_end;
 		};
-#ifdef CONFIG_PER_VMA_LOCK
-		struct rcu_head vm_rcu;	/* Used for deferred freeing. */
-#endif
+		freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */
 	};

 	/*
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 10a971c2bde3..681b685b6c4e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -234,12 +234,6 @@ enum _slab_flag_bits {
 #define SLAB_NO_OBJ_EXT __SLAB_FLAG_UNUSED
 #endif

-/*
- * freeptr_t represents a SLUB freelist pointer, which might be encoded
- * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
- */
-typedef struct { unsigned long v; } freeptr_t;
-
 /*
  * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
  *
diff --git a/kernel/fork.c b/kernel/fork.c
index 9d9275783cf8..770b973a099c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -449,6 +449,41 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 	return vma;
 }

+static void vma_copy(const struct vm_area_struct *src, struct vm_area_struct *dest)
+{
+	dest->vm_mm = src->vm_mm;
+	dest->vm_ops = src->vm_ops;
+	dest->vm_start = src->vm_start;
+	dest->vm_end = src->vm_end;
+	dest->anon_vma = src->anon_vma;
+	dest->vm_pgoff = src->vm_pgoff;
+	dest->vm_file = src->vm_file;
+	dest->vm_private_data = src->vm_private_data;
+	vm_flags_init(dest, src->vm_flags);
+	memcpy(&dest->vm_page_prot, &src->vm_page_prot,
+	       sizeof(dest->vm_page_prot));
+	/*
+	 * src->shared.rb may be modified concurrently, but the clone
+	 * will be reinitialized.
+	 */
+	data_race(memcpy(&dest->shared, &src->shared, sizeof(dest->shared)));
+	memcpy(&dest->vm_userfaultfd_ctx, &src->vm_userfaultfd_ctx,
+	       sizeof(dest->vm_userfaultfd_ctx));
+#ifdef CONFIG_ANON_VMA_NAME
+	dest->anon_name = src->anon_name;
+#endif
+#ifdef CONFIG_SWAP
+	memcpy(&dest->swap_readahead_info, &src->swap_readahead_info,
+	       sizeof(dest->swap_readahead_info));
+#endif
+#ifndef CONFIG_MMU
+	dest->vm_region = src->vm_region;
+#endif
+#ifdef CONFIG_NUMA
+	dest->vm_policy = src->vm_policy;
+#endif
+}
+
 struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 {
 	struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
@@ -458,11 +493,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)

 	ASSERT_EXCLUSIVE_WRITER(orig->vm_flags);
 	ASSERT_EXCLUSIVE_WRITER(orig->vm_file);
-	/*
-	 * orig->shared.rb may be modified concurrently, but the clone
-	 * will be reinitialized.
-	 */
-	data_race(memcpy(new, orig, sizeof(*new)));
+	vma_copy(orig, new);
 	vma_lock_init(new, true);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 	vma_numab_state_init(new);
@@ -471,7 +502,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	return new;
 }

-void __vm_area_free(struct vm_area_struct *vma)
+void vm_area_free(struct vm_area_struct *vma)
 {
 	/* The vma should be detached while being destroyed. */
 	vma_assert_detached(vma);
@@ -480,25 +511,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 	kmem_cache_free(vm_area_cachep, vma);
 }

-#ifdef CONFIG_PER_VMA_LOCK
-static void vm_area_free_rcu_cb(struct rcu_head *head)
-{
-	struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
-						  vm_rcu);
-
-	__vm_area_free(vma);
-}
-#endif
-
-void vm_area_free(struct vm_area_struct *vma)
-{
-#ifdef CONFIG_PER_VMA_LOCK
-	call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb);
-#else
-	__vm_area_free(vma);
-#endif
-}
-
 static void account_kernel_stack(struct task_struct *tsk, int account)
 {
 	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
@@ -3144,6 +3156,11 @@ void __init mm_cache_init(void)

 void __init proc_caches_init(void)
 {
+	struct kmem_cache_args args = {
+		.use_freeptr_offset = true,
+		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
+	};
+
 	sighand_cachep = kmem_cache_create("sighand_cache",
 			sizeof(struct sighand_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
@@ -3160,8 +3177,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-	vm_area_cachep = KMEM_CACHE(vm_area_struct,
-			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+	vm_area_cachep = kmem_cache_create("vm_area_struct",
+			sizeof(struct vm_area_struct), &args,
+			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
 			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
diff --git a/mm/mmap.c b/mm/mmap.c
index cda01071c7b1..7aa36216ecc0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1305,7 +1305,8 @@ void exit_mmap(struct mm_struct *mm)
 	do {
 		if (vma->vm_flags & VM_ACCOUNT)
 			nr_accounted += vma_pages(vma);
-		remove_vma(vma, /* unreachable = */ true);
+		vma_mark_detached(vma);
+		remove_vma(vma);
 		count++;
 		cond_resched();
 		vma = vma_next(&vmi);
diff --git a/mm/vma.c b/mm/vma.c
index 93ff42ac2002..0a5158d611e3 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -406,19 +406,14 @@ static bool can_vma_merge_right(struct vma_merge_struct *vmg,
 /*
  * Close a vm structure and free it.
  */
-void remove_vma(struct vm_area_struct *vma, bool unreachable)
+void remove_vma(struct vm_area_struct *vma)
 {
 	might_sleep();
 	vma_close(vma);
 	if (vma->vm_file)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
-	if (unreachable) {
-		vma_mark_detached(vma);
-		__vm_area_free(vma);
-	} else {
-		vm_area_free(vma);
-	}
+	vm_area_free(vma);
 }

 /*
@@ -1201,7 +1196,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	/* Remove and clean up vmas */
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
-		remove_vma(vma, /* unreachable = */ false);
+		remove_vma(vma);

 	vm_unacct_memory(vms->nr_accounted);
 	validate_mm(mm);
diff --git a/mm/vma.h b/mm/vma.h
index 63dd38d5230c..f51005b95b39 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -170,7 +170,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 		  unsigned long start, size_t len, struct list_head *uf,
 		  bool unlock);

-void remove_vma(struct vm_area_struct *vma, bool unreachable);
+void remove_vma(struct vm_area_struct *vma);

 void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
 		  struct vm_area_struct *prev, struct vm_area_struct *next);
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 2ce032943861..49a85ce0d45a 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -697,14 +697,9 @@ static inline void mpol_put(struct mempolicy *)
 {
 }

-static inline void __vm_area_free(struct vm_area_struct *vma)
-{
-	free(vma);
-}
-
 static inline void vm_area_free(struct vm_area_struct *vma)
 {
-	__vm_area_free(vma);
+	free(vma);
 }

 static inline void lru_add_drain(void)
--
2.47.1.613.gc27f4b7a9f-goog

From nobody Wed Dec 17 10:27:24 2025
Date: Wed, 8 Jan 2025 18:30:25 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-17-surenb@google.com>
Subject: [PATCH v8 16/16] docs/mm: document latest changes to vm_lock
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Change the documentation to reflect that vm_lock is integrated into
vma and replaced with vm_refcnt. Document newly introduced
vma_start_read_locked{_nested} functions.
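
The write-side protocol the updated document describes can be modelled in
userspace C11. The sketch below is a rough illustration under the assumption
that an attached, reader-free VMA has vm_refcnt == 1; it busy-waits where the
kernel sleeps on a waitqueue, and all names are illustrative:

/* Userspace model of the writer-side protocol; not the kernel code. */
#include <stdatomic.h>

#define VMA_LOCK_OFFSET 0x40000000u

struct model_vma {
	atomic_uint vm_refcnt;		/* 1 == attached, no readers */
	unsigned int vm_lock_seq;
};

static void model_start_write(struct model_vma *vma, unsigned int mm_seq)
{
	/* Make the writer visible to new vma_start_read() attempts. */
	atomic_fetch_add(&vma->vm_refcnt, VMA_LOCK_OFFSET);

	/* Wait for existing readers to drop their references. */
	while (atomic_load(&vma->vm_refcnt) != VMA_LOCK_OFFSET + 1)
		;	/* the kernel sleeps here instead of spinning */

	/* Publish the sequence number that marks the VMA write-locked. */
	vma->vm_lock_seq = mm_seq;

	/* Readers are excluded by the sequence number from now on. */
	atomic_fetch_sub(&vma->vm_refcnt, VMA_LOCK_OFFSET);
}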
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Liam R. Howlett
---
 Documentation/mm/process_addrs.rst | 44 ++++++++++++++++++------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index 81417fa2ed20..f573de936b5d 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -716,9 +716,14 @@ calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
 critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
 before releasing the RCU lock via :c:func:`!rcu_read_unlock`.

-VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for
-their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it
-via :c:func:`!vma_end_read`.
+In cases when the user already holds mmap read lock, :c:func:`!vma_start_read_locked`
+and :c:func:`!vma_start_read_locked_nested` can be used. These functions do not
+fail due to lock contention but the caller should still check their return values
+in case they fail for other reasons.
+
+VMA read locks increment :c:member:`!vma.vm_refcnt` reference counter for their
+duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it via
+:c:func:`!vma_end_read`.

 VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a
 VMA is about to be modified, unlike :c:func:`!vma_start_read` the lock is always
@@ -726,9 +731,9 @@ acquired. An mmap write lock **must** be held for the duration of the VMA write
 lock, releasing or downgrading the mmap write lock also releases the VMA write
 lock so there is no :c:func:`!vma_end_write` function.

-Note that a semaphore write lock is not held across a VMA lock. Rather, a
-sequence number is used for serialisation, and the write semaphore is only
-acquired at the point of write lock to update this.
+Note that when write-locking a VMA lock, the :c:member:`!vma.vm_refcnt` is temporarily
+modified so that readers can detect the presence of a writer. The reference counter is
+restored once the vma sequence number used for serialisation is updated.

 This ensures the semantics we require - VMA write locks provide exclusive write
 access to the VMA.
@@ -738,7 +743,7 @@ Implementation details

 The VMA lock mechanism is designed to be a lightweight means of avoiding the use
 of the heavily contended mmap lock. It is implemented using a combination of a
-read/write semaphore and sequence numbers belonging to the containing
+reference counter and sequence numbers belonging to the containing
 :c:struct:`!struct mm_struct` and the VMA.

 Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
@@ -779,28 +784,31 @@ release of any VMA locks on its release makes sense, as you would never want to
 keep VMAs locked across entirely separate write operations. It also maintains
 correct lock ordering.

-Each time a VMA read lock is acquired, we acquire a read lock on the
-:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that
-the sequence count of the VMA does not match that of the mm.
+Each time a VMA read lock is acquired, we increment :c:member:`!vma.vm_refcnt`
+reference counter and check that the sequence count of the VMA does not match
+that of the mm.

-If it does, the read lock fails. If it does not, we hold the lock, excluding
-writers, but permitting other readers, who will also obtain this lock under RCU.
+If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped.
+If it does not, we keep the reference counter raised, excluding writers, but
+permitting other readers, who can also obtain this lock under RCU.

 Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
 are also RCU safe, so the whole read lock operation is guaranteed to function
 correctly.

-On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock`
-read/write semaphore, before setting the VMA's sequence number under this lock,
-also simultaneously holding the mmap write lock.
+On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be
+modified by readers and wait for all readers to drop their reference count.
+Once there are no readers, the VMA's sequence number is set to match that of
+the mm. During this entire operation the mmap write lock is held.

 This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep
 until these are finished and mutual exclusion is achieved.

-After setting the VMA's sequence number, the lock is released, avoiding
-complexity with a long-term held write lock.
+After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt`
+indicating a writer is cleared. From this point on, the VMA's sequence number
+will indicate the VMA's write-locked state until the mmap write lock is dropped
+or downgraded.

-This clever combination of a read/write semaphore and sequence count allows for
+This clever combination of a reference counter and sequence count allows for
 fast RCU-based per-VMA lock acquisition (especially on page fault, though
 utilised elsewhere) with minimal complexity around lock ordering.

--
2.47.1.613.gc27f4b7a9f-goog