Date: Wed, 8 Jan 2025 18:30:18 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-10-surenb@google.com>
Subject: [PATCH v8 09/16] mm: uninline the main body of vma_start_write()
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com,
    richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com

vma_start_write() is used in many places and will grow in size very soon.
It is not used in performance-critical paths, so uninlining it should limit
future code size growth.

No functional changes.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/mm.h | 12 +++---------
 mm/memory.c        | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6e6edfd4f3d9..bc8067de41c5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -787,6 +787,8 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+
 /*
  * Begin writing to a VMA.
  * Exclude concurrent readers under the per-VMA lock until the currently
@@ -799,15 +801,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock.lock);
-	/*
-	 * We should use WRITE_ONCE() here because we can have concurrent reads
-	 * from the early lockless pessimistic check in vma_start_read().
-	 * We don't really care about the correctness of that early check, but
-	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
-	 */
-	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+	__vma_start_write(vma, mm_lock_seq);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index 105b99064ce5..26569a44fb5c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6370,6 +6370,20 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+{
+	down_write(&vma->vm_lock.lock);
+	/*
+	 * We should use WRITE_ONCE() here because we can have concurrent reads
+	 * from the early lockless pessimistic check in vma_start_read().
+	 * We don't really care about the correctness of that early check, but
+	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
+	 */
+	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
+	up_write(&vma->vm_lock.lock);
+}
+EXPORT_SYMBOL_GPL(__vma_start_write);
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the
-- 
2.47.1.613.gc27f4b7a9f-goog