From: Jianhui Zhou
To: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jonas Zhou, Jianhui Zhou, syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com, stable@vger.kernel.org
Subject: [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
Date: Fri, 6 Mar 2026 22:03:32 +0800
Message-ID: <20260306140332.171078-1-jianhuizzzzz@gmail.com>

In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units (as calculated by
vma_hugecache_offset()).

This mismatch means that different addresses within the same huge page
can produce different hash values, leading to the use of different
mutexes for the same huge page. This can cause races between faulting
threads, which can corrupt the reservation map and trigger the BUG_ON
in resv_map_release().

Fix this by replacing linear_page_index() with vma_hugecache_offset()
and applying huge_page_mask() to align the address properly. To make
vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
include/linux/hugetlb.h as a static inline function.
Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Cc: stable@vger.kernel.org
Signed-off-by: Jianhui Zhou
---
 include/linux/hugetlb.h | 17 +++++++++++++++++
 mm/hugetlb.c            | 11 -----------
 mm/userfaultfd.c        |  5 ++++-
 3 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 65910437be1c..3f994f3e839c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -796,6 +796,17 @@ static inline unsigned huge_page_shift(struct hstate *h)
 	return h->order + PAGE_SHIFT;
 }
 
+/*
+ * Convert the address within this vma to the page offset within
+ * the mapping, huge page units here.
+ */
+static inline pgoff_t vma_hugecache_offset(struct hstate *h,
+			struct vm_area_struct *vma, unsigned long address)
+{
+	return ((address - vma->vm_start) >> huge_page_shift(h)) +
+			(vma->vm_pgoff >> huge_page_order(h));
+}
+
 static inline bool order_is_gigantic(unsigned int order)
 {
 	return order > MAX_PAGE_ORDER;
@@ -1197,6 +1208,12 @@ static inline unsigned int huge_page_shift(struct hstate *h)
 	return PAGE_SHIFT;
 }
 
+static inline pgoff_t vma_hugecache_offset(struct hstate *h,
+			struct vm_area_struct *vma, unsigned long address)
+{
+	return linear_page_index(vma, address);
+}
+
 static inline bool hstate_is_gigantic(struct hstate *h)
 {
 	return false;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0beb6e22bc26..b87ed652c748 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1006,17 +1006,6 @@ static long region_count(struct resv_map *resv, long f, long t)
 	return chg;
 }
 
-/*
- * Convert the address within this vma to the page offset within
- * the mapping, huge page units here.
- */
-static pgoff_t vma_hugecache_offset(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long address)
-{
-	return ((address - vma->vm_start) >> huge_page_shift(h)) +
-			(vma->vm_pgoff >> huge_page_order(h));
-}
-
 /**
  * vma_kernel_pagesize - Page size granularity for this VMA.
  * @vma: The user mapping.
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 927086bb4a3c..8efebc47a410 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -507,6 +507,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	pgoff_t idx;
 	u32 hash;
 	struct address_space *mapping;
+	struct hstate *h;
 
 	/*
 	 * There is no default zero huge page for all huge page sizes as
@@ -564,6 +565,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		goto out_unlock;
 	}
 
+	h = hstate_vma(dst_vma);
+
 	while (src_addr < src_start + len) {
 		VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
 
@@ -573,7 +576,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		 * in the case of shared pmds. fault mutex prevents
 		 * races with other faulting threads.
 		 */
-		idx = linear_page_index(dst_vma, dst_addr);
+		idx = vma_hugecache_offset(h, dst_vma, dst_addr & huge_page_mask(h));
 		mapping = dst_vma->vm_file->f_mapping;
 		hash = hugetlb_fault_mutex_hash(mapping, idx);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
-- 
2.43.0