From nobody Fri Feb 13 22:35:18 2026
Date: Tue, 21 May 2024 12:57:18 +0000
In-Reply-To: <20240521-mm-hotplug-sync-v1-0-6d53706c1ba8@google.com>
References: <20240521-mm-hotplug-sync-v1-0-6d53706c1ba8@google.com>
Message-ID: <20240521-mm-hotplug-sync-v1-1-6d53706c1ba8@google.com>
Subject: [PATCH 1/2] mm,memory_hotplug: Remove un-taken lock
From: Brendan Jackman
To: David Hildenbrand, Oscar Salvador, Andrew Morton, Mike Rapoport
Cc: Michal Hocko, Anshuman Khandual, Vlastimil Babka, Pavel Tatashin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Brendan Jackman

It seems that [1] was acked, and a v2 was written[2] which improved
upon it, but got bogged down in discussion of other topics, so the
improvements were not included. Then [1] got merged as commit
27cacaad16c5 ("mm,memory_hotplug: drop unneeded locking") and we ended
up with locks that get taken for read but never for write.

So, let's remove the read locking.

Compared to Oscar's original v2[2], I have added a READ_ONCE in
page_outside_zone_boundaries; this is a substitute for the compiler
barrier that was implied by read_seqretry(). I believe this is
necessary to insure against UB, although the value being read here is
only used for a printk so the stakes seem very low (and this is all
debug code anyway). I believe a compiler barrier is also needed in
zone_spans_pfn, but I'll address that in a separate patch.

That read_seqretry() also implied a CPU-level memory barrier, which I
don't think needs replacing: page_outside_zone_boundaries() is used in
the alloc and free paths, but you can't allocate or free pages from
the span that is in the middle of being added/removed by hotplug.

In other words, page_outside_zone_boundaries() doesn't require a
strictly up-to-date view of spanned_pages, but I think it does require
a value that was once/will eventually be correct, hence READ_ONCE.

[1] https://lore.kernel.org/all/20210531093958.15021-1-osalvador@suse.de/T/#u
[2] https://lore.kernel.org/linux-mm/20210602091457.17772-3-osalvador@suse.de/#t

Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Anshuman Khandual
Cc: Vlastimil Babka
Cc: Pavel Tatashin
Co-developed-by: Oscar Salvador
Signed-off-by: Oscar Salvador
Signed-off-by: Brendan Jackman
---
 include/linux/memory_hotplug.h | 35 -----------------------------------
 include/linux/mmzone.h         | 23 +++++------------------
 mm/mm_init.c                   |  1 -
 mm/page_alloc.c                | 10 +++-------
 4 files changed, 8 insertions(+), 61 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 7a9ff464608d..f9577e67e5ee 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -141,31 +141,7 @@ bool mhp_supports_memmap_on_memory(void);
 
 /*
  * Zone resizing functions
- *
- * Note: any attempt to resize a zone should has pgdat_resize_lock()
- * zone_span_writelock() both held. This ensure the size of a zone
- * can't be changed while pgdat_resize_lock() held.
  */
-static inline unsigned zone_span_seqbegin(struct zone *zone)
-{
-        return read_seqbegin(&zone->span_seqlock);
-}
-static inline int zone_span_seqretry(struct zone *zone, unsigned iv)
-{
-        return read_seqretry(&zone->span_seqlock, iv);
-}
-static inline void zone_span_writelock(struct zone *zone)
-{
-        write_seqlock(&zone->span_seqlock);
-}
-static inline void zone_span_writeunlock(struct zone *zone)
-{
-        write_sequnlock(&zone->span_seqlock);
-}
-static inline void zone_seqlock_init(struct zone *zone)
-{
-        seqlock_init(&zone->span_seqlock);
-}
 extern void adjust_present_page_count(struct page *page,
                                       struct memory_group *group,
                                       long nr_pages);
@@ -251,17 +227,6 @@ static inline void pgdat_kswapd_lock_init(pg_data_t *pgdat)
         ___page;                                                \
 })
 
-static inline unsigned zone_span_seqbegin(struct zone *zone)
-{
-        return 0;
-}
-static inline int zone_span_seqretry(struct zone *zone, unsigned iv)
-{
-        return 0;
-}
-static inline void zone_span_writelock(struct zone *zone) {}
-static inline void zone_span_writeunlock(struct zone *zone) {}
-static inline void zone_seqlock_init(struct zone *zone) {}
 
 static inline int try_online_node(int nid)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8f9c9590a42c..194ef7fed9d6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -14,7 +14,6 @@
 #include <linux/threads.h>
 #include <linux/numa.h>
 #include <linux/init.h>
-#include <linux/seqlock.h>
 #include <linux/nodemask.h>
 #include <linux/pageblock-flags.h>
 #include <linux/page-flags-layout.h>
@@ -896,18 +895,11 @@ struct zone {
          *
          * Locking rules:
          *
-         * zone_start_pfn and spanned_pages are protected by span_seqlock.
-         * It is a seqlock because it has to be read outside of zone->lock,
-         * and it is done in the main allocator path.  But, it is written
-         * quite infrequently.
-         *
-         * The span_seq lock is declared along with zone->lock because it is
-         * frequently read in proximity to zone->lock.  It's good to
-         * give them a chance of being in the same cacheline.
-         *
-         * Write access to present_pages at runtime should be protected by
-         * mem_hotplug_begin/done(). Any reader who can't tolerant drift of
-         * present_pages should use get_online_mems() to get a stable value.
+         * Besides system initialization functions, memory-hotplug is the only
+         * user that can change zone's {spanned,present} pages at runtime, and
+         * it does so by holding the mem_hotplug_lock lock. Any readers who
+         * can't tolerate drift values should use {get,put}_online_mems to get
+         * a stable value.
          */
         atomic_long_t           managed_pages;
         unsigned long           spanned_pages;
@@ -930,11 +922,6 @@ struct zone {
         unsigned long           nr_isolate_pageblock;
 #endif
 
-#ifdef CONFIG_MEMORY_HOTPLUG
-        /* see spanned/present_pages for more description */
-        seqlock_t               span_seqlock;
-#endif
-
         int initialized;
 
         /* Write-intensive fields used from the page allocator */
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f72b852bd5b8..c725618aeb58 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1383,7 +1383,6 @@ static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx,
         zone->name = zone_names[idx];
         zone->zone_pgdat = NODE_DATA(nid);
         spin_lock_init(&zone->lock);
-        zone_seqlock_init(zone);
         zone_pcp_init(zone);
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2e22ce5675ca..5116a2b9ea6e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -426,16 +426,12 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
 static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 {
         int ret;
-        unsigned seq;
         unsigned long pfn = page_to_pfn(page);
         unsigned long sp, start_pfn;
 
-        do {
-                seq = zone_span_seqbegin(zone);
-                start_pfn = zone->zone_start_pfn;
-                sp = zone->spanned_pages;
-                ret = !zone_spans_pfn(zone, pfn);
-        } while (zone_span_seqretry(zone, seq));
+        start_pfn = zone->zone_start_pfn;
+        sp = READ_ONCE(zone->spanned_pages);
+        ret = !zone_spans_pfn(zone, pfn);
 
         if (ret)
                 pr_err("page 0x%lx outside node %d zone %s [ 0x%lx - 0x%lx ]\n",
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog

From nobody Fri Feb 13 22:35:18 2026
Date: Tue, 21 May 2024 12:57:19 +0000
In-Reply-To: <20240521-mm-hotplug-sync-v1-0-6d53706c1ba8@google.com>
References: <20240521-mm-hotplug-sync-v1-0-6d53706c1ba8@google.com>
Message-ID: <20240521-mm-hotplug-sync-v1-2-6d53706c1ba8@google.com>
Subject: [PATCH 2/2] mm,memory_hotplug: {READ,WRITE}_ONCE unsynchronized zone data
From: Brendan Jackman
To: David Hildenbrand, Oscar Salvador, Andrew Morton, Mike Rapoport
Cc: Michal Hocko, Anshuman Khandual, Vlastimil Babka, Pavel Tatashin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Brendan Jackman

These fields are written by memory hotplug under mem_hotplug_lock but
read without any lock. It seems like reader code is robust against the
value being stale or "from the future", but we also need to account
for:

1. Load/store tearing (according to Linus[1], this really happens,
   even when everything is aligned as you would hope).

2. Invented loads[2]: the compiler can spill and re-read these fields
   and assume that they have not changed.

Note we don't need READ_ONCE in paths that have the mem_hotplug_lock
for write, but we still need WRITE_ONCE to prevent store-tearing.
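As a minimal sketch of hazard 2 (illustration only, not code from the
diffs below; do_something() is a hypothetical consumer of the value):

        static void sketch(struct zone *zone)
        {
                /*
                 * A plain read lets the compiler re-load
                 * zone->present_pages at each use ("invented load"),
                 * so the check and the use can observe different
                 * values under concurrent hotplug; either access may
                 * also be torn.
                 */
                unsigned long pages = zone->present_pages;

                if (pages)
                        do_something(pages);

                /*
                 * READ_ONCE() forces a single, untorn load: every use
                 * of "pages" now sees one value the field really
                 * held, even if it is stale.
                 */
                pages = READ_ONCE(zone->present_pages);
                if (pages)
                        do_something(pages);
        }

That single untorn load is all the readers here need, since they
already tolerate stale values; WRITE_ONCE bounds the update side to
one untorn store in the same way.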
[1] https://lore.kernel.org/all/CAHk-=wj2t+GK+DGQ7Xy6U7zMf72e7Jkxn4_-kGyfH3WFEoH+YQ@mail.gmail.com/T/#u
    As discovered via the original big-bad article[2]
[2] https://lwn.net/Articles/793253/

Signed-off-by: Brendan Jackman
---
 include/linux/mmzone.h | 14 ++++++++++----
 mm/compaction.c        |  2 +-
 mm/memory_hotplug.c    | 20 ++++++++++++--------
 mm/mm_init.c           |  2 +-
 mm/page_alloc.c        |  2 +-
 mm/show_mem.c          |  8 ++++----
 mm/vmstat.c            |  4 ++--
 7 files changed, 31 insertions(+), 21 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 194ef7fed9d6..bdb3be76d10c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1018,11 +1018,13 @@ static inline unsigned long zone_cma_pages(struct zone *zone)
 #endif
 }
 
+/* This is unstable unless you hold mem_hotplug_lock. */
 static inline unsigned long zone_end_pfn(const struct zone *zone)
 {
-        return zone->zone_start_pfn + zone->spanned_pages;
+        return zone->zone_start_pfn + READ_ONCE(zone->spanned_pages);
 }
 
+/* This is unstable unless you hold mem_hotplug_lock. */
 static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
 {
         return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
@@ -1033,9 +1035,10 @@ static inline bool zone_is_initialized(struct zone *zone)
         return zone->initialized;
 }
 
+/* This is unstable unless you hold mem_hotplug_lock. */
 static inline bool zone_is_empty(struct zone *zone)
 {
-        return zone->spanned_pages == 0;
+        return READ_ONCE(zone->spanned_pages) == 0;
 }
 
 #ifndef BUILD_VDSO32_64
@@ -1485,10 +1488,13 @@ static inline bool managed_zone(struct zone *zone)
         return zone_managed_pages(zone);
 }
 
-/* Returns true if a zone has memory */
+/*
+ * Returns true if a zone has memory.
+ * This is unstable unless you hold mem_hotplug_lock.
+ */
 static inline bool populated_zone(struct zone *zone)
 {
-        return zone->present_pages;
+        return READ_ONCE(zone->present_pages);
 }
 
 #ifdef CONFIG_NUMA
diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..b8066d1fdcf5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2239,7 +2239,7 @@ static unsigned int fragmentation_score_zone_weighted(struct zone *zone)
 {
         unsigned long score;
 
-        score = zone->present_pages * fragmentation_score_zone(zone);
+        score = READ_ONCE(zone->present_pages) * fragmentation_score_zone(zone);
         return div64_ul(score, zone->zone_pgdat->node_present_pages + 1);
 }
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 431b1f6753c0..71b5e3d314a2 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -463,6 +463,8 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
         int nid = zone_to_nid(zone);
 
         if (zone->zone_start_pfn == start_pfn) {
+                unsigned long old_end_pfn = zone_end_pfn(zone);
+
                 /*
                  * If the section is smallest section in the zone, it need
                  * shrink zone->zone_start_pfn and zone->zone_spanned_pages.
@@ -470,13 +472,13 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
                  * for shrinking zone.
                  */
                 pfn = find_smallest_section_pfn(nid, zone, end_pfn,
-                                                zone_end_pfn(zone));
+                                                old_end_pfn);
                 if (pfn) {
-                        zone->spanned_pages = zone_end_pfn(zone) - pfn;
+                        WRITE_ONCE(zone->spanned_pages, old_end_pfn - pfn);
                         zone->zone_start_pfn = pfn;
                 } else {
                         zone->zone_start_pfn = 0;
-                        zone->spanned_pages = 0;
+                        WRITE_ONCE(zone->spanned_pages, 0);
                 }
         } else if (zone_end_pfn(zone) == end_pfn) {
                 /*
@@ -488,10 +490,11 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
                 pfn = find_biggest_section_pfn(nid, zone, zone->zone_start_pfn,
                                                start_pfn);
                 if (pfn)
-                        zone->spanned_pages = pfn - zone->zone_start_pfn + 1;
+                        WRITE_ONCE(zone->spanned_pages,
+                                   pfn - zone->zone_start_pfn + 1);
                 else {
                         zone->zone_start_pfn = 0;
-                        zone->spanned_pages = 0;
+                        WRITE_ONCE(zone->spanned_pages, 0);
                 }
         }
 }
@@ -710,7 +713,8 @@ static void __meminit resize_zone_range(struct zone *zone, unsigned long start_p
         if (zone_is_empty(zone) || start_pfn < zone->zone_start_pfn)
                 zone->zone_start_pfn = start_pfn;
 
-        zone->spanned_pages = max(start_pfn + nr_pages, old_end_pfn) - zone->zone_start_pfn;
+        WRITE_ONCE(zone->spanned_pages,
+                   max(start_pfn + nr_pages, old_end_pfn) - zone->zone_start_pfn);
 }
 
 static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned long start_pfn,
@@ -795,7 +799,7 @@ static void auto_movable_stats_account_zone(struct auto_movable_stats *stats,
                                             struct zone *zone)
 {
         if (zone_idx(zone) == ZONE_MOVABLE) {
-                stats->movable_pages += zone->present_pages;
+                stats->movable_pages += READ_ONCE(zone->present_pages);
         } else {
                 stats->kernel_early_pages += zone->present_early_pages;
 #ifdef CONFIG_CMA
@@ -1077,7 +1081,7 @@ void adjust_present_page_count(struct page *page, struct memory_group *group,
          */
         if (early_section(__pfn_to_section(page_to_pfn(page))))
                 zone->present_early_pages += nr_pages;
-        zone->present_pages += nr_pages;
+        WRITE_ONCE(zone->present_pages, zone->present_pages + nr_pages);
         zone->zone_pgdat->node_present_pages += nr_pages;
 
         if (group && movable)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index c725618aeb58..ec66f2eadb95 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1540,7 +1540,7 @@ void __ref free_area_init_core_hotplug(struct pglist_data *pgdat)
         for (z = 0; z < MAX_NR_ZONES; z++) {
                 struct zone *zone = pgdat->node_zones + z;
 
-                zone->present_pages = 0;
+                WRITE_ONCE(zone->present_pages, 0);
                 zone_init_internals(zone, z, nid, 0);
         }
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5116a2b9ea6e..1eb9000ec7d7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5728,7 +5728,7 @@ __meminit void zone_pcp_init(struct zone *zone)
 
         if (populated_zone(zone))
                 pr_debug("  %s zone: %lu pages, LIFO batch:%u\n", zone->name,
-                         zone->present_pages, zone_batchsize(zone));
+                         READ_ONCE(zone->present_pages), zone_batchsize(zone));
 }
 
 void adjust_managed_page_count(struct page *page, long count)
diff --git a/mm/show_mem.c b/mm/show_mem.c
index bdb439551eef..667680a6107b 100644
--- a/mm/show_mem.c
+++ b/mm/show_mem.c
@@ -337,7 +337,7 @@ static void show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_z
                         K(zone_page_state(zone, NR_ZONE_INACTIVE_FILE)),
                         K(zone_page_state(zone, NR_ZONE_UNEVICTABLE)),
                         K(zone_page_state(zone, NR_ZONE_WRITE_PENDING)),
-                        K(zone->present_pages),
+                        K(READ_ONCE(zone->present_pages)),
                         K(zone_managed_pages(zone)),
                         K(zone_page_state(zone, NR_MLOCK)),
                         K(zone_page_state(zone, NR_BOUNCE)),
@@ -407,11 +407,11 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
 
         for_each_populated_zone(zone) {
 
-                total += zone->present_pages;
-                reserved += zone->present_pages - zone_managed_pages(zone);
+                total += READ_ONCE(zone->present_pages);
+                reserved += READ_ONCE(zone->present_pages) - zone_managed_pages(zone);
 
                 if (is_highmem(zone))
-                        highmem += zone->present_pages;
+                        highmem += READ_ONCE(zone->present_pages);
         }
 
         printk("%lu pages RAM\n", total);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8507c497218b..5a9c4b5768e5 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1708,8 +1708,8 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
                min_wmark_pages(zone),
                low_wmark_pages(zone),
                high_wmark_pages(zone),
-               zone->spanned_pages,
-               zone->present_pages,
+               READ_ONCE(zone->spanned_pages),
+               READ_ONCE(zone->present_pages),
                zone_managed_pages(zone),
                zone_cma_pages(zone));
 
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog