From nobody Fri Dec 19 19:18:00 2025
Date: Wed, 8 Jan 2025 00:48:21 -0700
Mime-Version: 1.0
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20250108074822.722696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v2] mm/hugetlb_vmemmap: fix memory loads ordering
From: Yu Zhao
To: Andrew Morton
Cc: David Hildenbrand, Mateusz Guzik, "Matthew Wilcox (Oracle)",
    Muchun Song, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Yu Zhao, Will Deacon
Content-Type: text/plain; charset="utf-8"

Using x86_64 as an example, for a 32KB struct page[] area describing a
2MB hugeTLB, HVO reduces the area to 4KB by the following steps:

1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped
   by PTE 0, and at the same time change the permission from r/w to
   r/o;
3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB
   to 4KB.

However, the following race can happen due to improperly ordered
memory loads:

CPU 1 (HVO)                        CPU 2 (speculative PFN walker)

page_ref_freeze()
synchronize_rcu()
                                   rcu_read_lock()
                                   page_is_fake_head() is false
vmemmap_remap_pte()
XXX: struct page[] becomes r/o

page_ref_unfreeze()
                                   page_ref_count() is not zero

                                   atomic_add_unless(&page->_refcount)
                                   XXX: try to modify r/o struct page[]

Specifically, page_is_fake_head() must be ordered after
page_ref_count() on CPU 2 so that it can only return true in this
race, avoiding the later attempt to modify the r/o struct page[].

This patch adds the missing memory barrier and performs the
page_is_fake_head() and page_ref_count() checks in the proper order.

Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
Reported-by: Will Deacon
Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
Signed-off-by: Yu Zhao
Acked-by: Will Deacon
Reviewed-by: David Hildenbrand
Reviewed-by: Muchun Song
---
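For reference, the required pairing can be sketched as a userspace
analogue, with C11 atomics standing in for the kernel primitives
(atomic_read_acquire() on the walker side, and the release store that
page_ref_unfreeze() provides on the HVO side). The names refcount,
remapped, hvo_remap() and count_writable() are illustrative only, not
kernel interfaces:

/* Illustrative userspace analogue of the ordering fix; not kernel code. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int refcount = 1;	/* stands in for page->_refcount */
static atomic_bool remapped;	/* stands in for "struct page[] is r/o" */

/* CPU 1 (HVO): freeze, remap, then unfreeze with release semantics. */
static void hvo_remap(void)
{
	int expected = 1;

	/* page_ref_freeze(): refcount 1 -> 0; bail out if the page is busy */
	if (!atomic_compare_exchange_strong(&refcount, &expected, 0))
		return;

	/* vmemmap_remap_pte(): the struct page[] area becomes r/o */
	atomic_store_explicit(&remapped, true, memory_order_relaxed);

	/*
	 * page_ref_unfreeze(): the release store pairs with the acquire
	 * load in count_writable(), so a walker that observes the
	 * unfrozen refcount also observes remapped == true.
	 */
	atomic_store_explicit(&refcount, 1, memory_order_release);
}

/* CPU 2 (speculative PFN walker): the shape of page_count_writable(). */
static bool count_writable(int u)
{
	/* The refcount check comes first, with acquire semantics. */
	if (atomic_load_explicit(&refcount, memory_order_acquire) == u)
		return false;

	/* Ordered after the refcount load: cannot miss a completed remap. */
	return !atomic_load_explicit(&remapped, memory_order_relaxed);
}

With the acquire load in place, a walker that sees the unfrozen
refcount is guaranteed to also see the remap, so it backs off instead
of writing to the r/o area; without it, the two loads could be
observed in the opposite order, which is exactly the race above.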
 include/linux/page-flags.h | 37 +++++++++++++++++++++++++++++++++++++
 include/linux/page_ref.h   |  2 +-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 691506bdf2c5..16fa8f0cea02 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -225,11 +225,48 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 	}
 	return page;
 }
+
+static __always_inline bool page_count_writable(const struct page *page, int u)
+{
+	if (!static_branch_unlikely(&hugetlb_optimize_vmemmap_key))
+		return true;
+
+	/*
+	 * The refcount check is ordered before the fake-head check to prevent
+	 * the following race:
+	 * CPU 1 (HVO)			CPU 2 (speculative PFN walker)
+	 *
+	 * page_ref_freeze()
+	 * synchronize_rcu()
+	 *				rcu_read_lock()
+	 *				page_is_fake_head() is false
+	 * vmemmap_remap_pte()
+	 * XXX: struct page[] becomes r/o
+	 *
+	 * page_ref_unfreeze()
+	 *				page_ref_count() is not zero
+	 *
+	 *				atomic_add_unless(&page->_refcount)
+	 *				XXX: try to modify r/o struct page[]
+	 *
+	 * The refcount check also prevents modification attempts to other (r/o)
+	 * tail pages that are not fake heads.
+	 */
+	if (atomic_read_acquire(&page->_refcount) == u)
+		return false;
+
+	return page_fixed_fake_head(page) == page;
+}
 #else
 static inline const struct page *page_fixed_fake_head(const struct page *page)
 {
 	return page;
 }
+
+static inline bool page_count_writable(const struct page *page, int u)
+{
+	return true;
+}
 #endif
 
 static __always_inline int page_is_fake_head(const struct page *page)
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 8c236c651d1d..544150d1d5fd 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -234,7 +234,7 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 
 	rcu_read_lock();
 	/* avoid writing to the vmemmap area being remapped */
-	if (!page_is_fake_head(page) && page_ref_count(page) != u)
+	if (page_count_writable(page, u))
 		ret = atomic_add_unless(&page->_refcount, nr, u);
 	rcu_read_unlock();
 
-- 
2.47.1.613.gc27f4b7a9f-goog