From nobody Thu Apr  2 19:13:59 2026
Date: Wed, 11 Feb 2026 16:37:14 -0800
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
X-Mailer: git-send-email 2.53.0.310.g728cabbaf7-goog
Message-ID: <67a62716952c806c2a512e98bcac1f5224ada324.1770854662.git.ackerleytng@google.com>
Subject: [RFC PATCH v1 3/7] mm: hugetlb: Move mpol interpretation out of dequeue_hugetlb_folio_vma()
From: Ackerley Tng
To: akpm@linux-foundation.org, dan.j.williams@intel.com, david@kernel.org,
 fvdl@google.com, hannes@cmpxchg.org, jgg@nvidia.com, jiaqiyan@google.com,
 jthoughton@google.com, kalyazin@amazon.com, mhocko@kernel.org,
 michael.roth@amd.com, muchun.song@linux.dev, osalvador@suse.de,
 pasha.tatashin@soleen.com, pbonzini@redhat.com, peterx@redhat.com,
 pratyush@kernel.org, rick.p.edgecombe@intel.com, rientjes@google.com,
 roman.gushchin@linux.dev, seanjc@google.com, shakeel.butt@linux.dev,
 shivankg@amd.com, vannapurve@google.com, yan.y.zhao@intel.com
Cc: ackerleytng@google.com, cgroups@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Content-Type: text/plain; charset="utf-8"

Move memory policy interpretation out of dequeue_hugetlb_folio_vma()
and into alloc_hugetlb_folio(), to separate the reading and
interpretation of memory policy from the actual allocation.

Also rename dequeue_hugetlb_folio_vma() to
dequeue_hugetlb_folio_with_mpol() to drop the association with a vma
and to align with alloc_buddy_hugetlb_folio_with_mpol().

This will later allow memory policy to be interpreted entirely outside
of the process of allocating a HugeTLB folio. That opens the door for
other callers of the HugeTLB folio allocation functions, such as
guest_memfd, where memory may not always be mapped and hence may not
have an associated vma.

No functional change intended.

Signed-off-by: Ackerley Tng
Reviewed-by: James Houghton
---
 mm/hugetlb.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index aaa23d995b65c..74b5136fdeb54 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1378,18 +1378,11 @@ static unsigned long available_huge_pages(struct hstate *h)
 	return h->free_huge_pages - h->resv_huge_pages;
 }
 
-static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
-					       struct vm_area_struct *vma,
-					       unsigned long address)
+static struct folio *dequeue_hugetlb_folio_with_mpol(struct hstate *h,
+		struct mempolicy *mpol, int nid, nodemask_t *nodemask)
 {
 	struct folio *folio = NULL;
-	struct mempolicy *mpol;
-	gfp_t gfp_mask;
-	nodemask_t *nodemask;
-	int nid;
-
-	gfp_mask = htlb_alloc_mask(h);
-	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+	gfp_t gfp_mask = htlb_alloc_mask(h);
 
 	if (mpol_is_preferred_many(mpol)) {
 		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
@@ -1403,7 +1396,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
 							nid, nodemask);
 
-	mpol_cond_put(mpol);
 	return folio;
 }
 
@@ -2889,6 +2881,9 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	int ret, idx;
 	struct hugetlb_cgroup *h_cg = NULL;
 	gfp_t gfp = htlb_alloc_mask(h);
+	struct mempolicy *mpol;
+	nodemask_t *nodemask;
+	int nid;
 
 	idx = hstate_index(h);
 
@@ -2949,6 +2944,9 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 
 	spin_lock_irq(&hugetlb_lock);
 
+	/* Takes reference on mpol. */
+	nid = huge_node(vma, addr, gfp, &mpol, &nodemask);
+
 	/*
 	 * gbl_chg == 0 indicates a reservation exists for the allocation - so
 	 * try dequeuing a page. If there are available_huge_pages(), try using
@@ -2956,25 +2954,23 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	 */
 	folio = NULL;
 	if (!gbl_chg || available_huge_pages(h))
-		folio = dequeue_hugetlb_folio_vma(h, vma, addr);
+		folio = dequeue_hugetlb_folio_with_mpol(h, mpol, nid, nodemask);
 
 	if (!folio) {
-		struct mempolicy *mpol;
-		nodemask_t *nodemask;
-		int nid;
-
 		spin_unlock_irq(&hugetlb_lock);
-		nid = huge_node(vma, addr, gfp, &mpol, &nodemask);
 		folio = alloc_buddy_hugetlb_folio_with_mpol(h, mpol, nid, nodemask);
-		mpol_cond_put(mpol);
-		if (!folio)
+		if (!folio) {
+			mpol_cond_put(mpol);
 			goto out_uncharge_cgroup;
+		}
 		spin_lock_irq(&hugetlb_lock);
 		list_add(&folio->lru, &h->hugepage_activelist);
 		folio_ref_unfreeze(folio, 1);
 		/* Fall through */
 	}
 
+	mpol_cond_put(mpol);
+
 	/*
 	 * Either dequeued or buddy-allocated folio needs to add special
 	 * mark to the folio when it consumes a global reservation.
-- 
2.53.0.310.g728cabbaf7-goog