From nobody Sat Feb 7 12:29:46 2026
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Rik van Riel, Breno Leitao, Andrew Morton, peterx@redhat.com,
	Muchun Song, Oscar Salvador, Roman Gushchin, Naoya Horiguchi,
	Ackerley Tng, linux-stable
Subject: [PATCH 1/7] mm/hugetlb: Fix avoid_reserve to allow taking folio from subpool
Date: Sun, 1 Dec 2024 16:22:34 -0500
Message-ID: <20241201212240.533824-2-peterx@redhat.com>
In-Reply-To: <20241201212240.533824-1-peterx@redhat.com>
References: <20241201212240.533824-1-peterx@redhat.com>

Since commit 04f2cbe35699 ("hugetlb: guarantee that COW faults for a
process that called mmap(MAP_PRIVATE) on hugetlbfs will succeed"),
avoid_reserve was introduced for a special case of CoW on hugetlb private
mappings, used only when the owner VMA is trying to allocate yet another
hugetlb folio that is not reserved within the private vma reserved map.

Later on, commit d85f69b0b533 ("mm/hugetlb: alloc_huge_page handle areas
hole punched by fallocate") made alloc_huge_page() never consume any
global reservation as long as avoid_reserve=true.  That is not correct:
even though it forces the allocation to skip the global reservation, the
allocation still takes one reservation from the subpool (if a subpool
exists).  Since subpool reserved pages are drawn from the global
reservation, one global reservation is consumed anyway, so the global
reservation count can go wrong.
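The drift can be observed as the HugePages_Rsvd counter in /proc/meminfo
growing between runs of the reproducer below.  A minimal sketch of such a
check (an illustration only, not part of the fix or of the report):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[256];
		FILE *f = fopen("/proc/meminfo", "r");

		if (!f) {
			perror("fopen(/proc/meminfo)");
			return 1;
		}
		/* Print the global hugetlb reservation and free counts */
		while (fgets(line, sizeof(line), f)) {
			if (!strncmp(line, "HugePages_Rsvd:", 15) ||
			    !strncmp(line, "HugePages_Free:", 15))
				fputs(line, stdout);
		}
		fclose(f);
		return 0;
	}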
I wrote the reproducer below to trigger this special path.  Every run of
the program increments the global reservation count by one, until it
reaches the number of free pages:

  #define _GNU_SOURCE    /* See feature_test_macros(7) */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <fcntl.h>
  #include <sys/types.h>
  #include <sys/mman.h>

  #define MSIZE  (2UL << 20)

  int main(int argc, char *argv[])
  {
      const char *path;
      int *buf;
      int fd, ret;
      pid_t child;

      if (argc < 2) {
          printf("usage: %s <file>\n", argv[0]);
          return -1;
      }

      path = argv[1];

      fd = open(path, O_RDWR | O_CREAT, 0666);
      if (fd < 0) {
          perror("open failed");
          return -1;
      }

      ret = fallocate(fd, 0, 0, MSIZE);
      if (ret != 0) {
          perror("fallocate");
          return -1;
      }

      buf = mmap(NULL, MSIZE, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
      if (buf == MAP_FAILED) {
          perror("mmap() failed");
          return -1;
      }

      /* Allocate a page */
      *buf = 1;

      child = fork();
      if (child == 0) {
          /* child doesn't need to do anything */
          exit(0);
      }

      /* Trigger CoW from the owner */
      *buf = 2;

      munmap(buf, MSIZE);
      close(fd);
      unlink(path);
      return 0;
  }

The issue can only be reproduced with a separate hugetlbfs mount that has
reserved pages in its subpool, for example:

  # sysctl vm.nr_hugepages=128
  # mkdir ./hugetlb-pool
  # mount -t hugetlbfs -o min_size=8M,pagesize=2M none ./hugetlb-pool

Then run the reproducer on the mountpoint:

  # ./reproducer ./hugetlb-pool/test

Fix it by taking the reservation from the subpool if available.  In
general, avoid_reserve is IMHO about "avoid the vma resv map", not the
subpool's reservation.

I copied stable, however I have no intention of backporting this if it is
not a clean cherry-pick, because a private hugetlb mapping with a fork()
on top of it is too rare to hit.

Cc: linux-stable
Fixes: d85f69b0b533 ("mm/hugetlb: alloc_huge_page handle areas hole punched by fallocate")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Ackerley Tng
Tested-by: Ackerley Tng
---
 mm/hugetlb.c | 22 +++-------------------
 1 file changed, 3 insertions(+), 19 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cec4b121193f..9ce69fd22a01 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1394,8 +1394,7 @@ static unsigned long available_huge_pages(struct hstate *h)
 
 static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 				struct vm_area_struct *vma,
-				unsigned long address, int avoid_reserve,
-				long chg)
+				unsigned long address, long chg)
 {
 	struct folio *folio = NULL;
 	struct mempolicy *mpol;
@@ -1411,10 +1410,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	if (!vma_has_reserves(vma, chg) && !available_huge_pages(h))
 		goto err;
 
-	/* If reserves cannot be used, ensure enough pages are in the pool */
-	if (avoid_reserve && !available_huge_pages(h))
-		goto err;
-
 	gfp_mask = htlb_alloc_mask(h);
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
 
@@ -1430,7 +1425,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, nid, nodemask);
 
-	if (folio && !avoid_reserve && vma_has_reserves(vma, chg)) {
+	if (folio && vma_has_reserves(vma, chg)) {
 		folio_set_hugetlb_restore_reserve(folio);
 		h->resv_huge_pages--;
 	}
@@ -3007,17 +3002,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 		gbl_chg = hugepage_subpool_get_pages(spool, 1);
 		if (gbl_chg < 0)
 			goto out_end_reservation;
-
-		/*
-		 * Even though there was no reservation in the region/reserve
-		 * map, there could be reservations associated with the
-		 * subpool that can be used.  This would be indicated if the
-		 * return value of hugepage_subpool_get_pages() is zero.
-		 * However, if avoid_reserve is specified we still avoid even
-		 * the subpool reservations.
-		 */
-		if (avoid_reserve)
-			gbl_chg = 1;
 	}
 
 	/* If this allocation is not consuming a reservation, charge it now.
@@ -3040,7 +3024,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	 * from the global free pool (global change).  gbl_chg == 0 indicates
 	 * a reservation exists for the allocation.
 	 */
-	folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
+	folio = dequeue_hugetlb_folio_vma(h, vma, addr, gbl_chg);
 	if (!folio) {
 		spin_unlock_irq(&hugetlb_lock);
 		folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
-- 
2.47.0

From nobody Sat Feb 7 12:29:46 2026
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Rik van Riel, Breno Leitao, Andrew Morton, peterx@redhat.com,
	Muchun Song, Oscar Salvador, Roman Gushchin, Naoya Horiguchi,
	Ackerley Tng
Subject: [PATCH 2/7] mm/hugetlb: Stop using avoid_reserve flag in fork()
Date: Sun, 1 Dec 2024 16:22:35 -0500
Message-ID: <20241201212240.533824-3-peterx@redhat.com>
In-Reply-To: <20241201212240.533824-1-peterx@redhat.com>
References: <20241201212240.533824-1-peterx@redhat.com>

When fork() stumbles on a dma-pinned hugetlb private page, CoW must happen
during fork() to guarantee dma coherency.  In this specific path, hugetlb
pages need to be allocated for the child process.

Stop using the avoid_reserve=1 flag here: it is not required, because
dst_vma (which is destined to be a MAP_PRIVATE hugetlb vma) will have no
private vma resv map, which already guarantees it cannot consume a vma
reservation later.

No functional change is intended.  That said, the change is still
worthwhile, because it reduces avoid_reserve to its one remaining user,
which is also the reason the flag was introduced in commit 04f2cbe35699
("hugetlb: guarantee that COW faults for a process that called
mmap(MAP_PRIVATE) on hugetlbfs will succeed").  I don't see any other
place that should ever set it.

A further patch will clean up the resv accounting based on this.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9ce69fd22a01..8d4b4197d11b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5317,7 +5317,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				spin_unlock(src_ptl);
 				spin_unlock(dst_ptl);
 				/* Do not use reserve as it's private owned */
-				new_folio = alloc_hugetlb_folio(dst_vma, addr, 1);
+				new_folio = alloc_hugetlb_folio(dst_vma, addr, 0);
 				if (IS_ERR(new_folio)) {
 					folio_put(pte_folio);
 					ret = PTR_ERR(new_folio);
-- 
2.47.0

From nobody Sat Feb 7 12:29:46 2026
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Rik van Riel, Breno Leitao, Andrew Morton, peterx@redhat.com,
	Muchun Song, Oscar Salvador, Roman Gushchin, Naoya Horiguchi,
	Ackerley Tng
Subject: [PATCH 3/7] mm/hugetlb: Rename avoid_reserve to cow_from_owner
Date: Sun, 1 Dec 2024 16:22:36 -0500
Message-ID: <20241201212240.533824-4-peterx@redhat.com>
In-Reply-To: <20241201212240.533824-1-peterx@redhat.com>
References: <20241201212240.533824-1-peterx@redhat.com>

The old name "avoid_reserve" is too generic and can easily be misused by
new call sites that want to allocate a hugetlb folio.  It is confusing on
two counts: (1) whether one can opt in to avoid the global reservation,
and (2) whether it should take more than one count.

In reality, the flag is only used in one extremely hacky path, in an
extremely hacky way, in the hugetlb CoW path only, and it is always passed
as 1, meaning "skip the global reservation".  Rename the flag to prevent
future abuse, and make it a boolean to reflect that it is not a counter.
To make it even harder to abuse, add a comment above the function to
explain it.
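For quick reference, this is the resulting prototype and the single call
site that may pass true, both condensed from the diff below (a summary
only, no new code):

	struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
					  unsigned long addr, bool cow_from_owner);

	/* hugetlb_wp(): the only caller that can set cow_from_owner */
	new_folio = alloc_hugetlb_folio(vma, vmf->address, cow_from_owner);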
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 fs/hugetlbfs/inode.c    |  2 +-
 include/linux/hugetlb.h |  4 ++--
 mm/hugetlb.c            | 33 ++++++++++++++++++++-------------
 3 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a5ea006f403e..665c736bdb30 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -819,7 +819,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 		 * folios in these areas, we need to consume the reserves
 		 * to keep reservation accounting consistent.
 		 */
-		folio = alloc_hugetlb_folio(&pseudo_vma, addr, 0);
+		folio = alloc_hugetlb_folio(&pseudo_vma, addr, false);
 		if (IS_ERR(folio)) {
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			error = PTR_ERR(folio);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ae4fe8615bb6..6189d0383c7f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -682,7 +682,7 @@ struct huge_bootmem_page {
 
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
-				unsigned long addr, int avoid_reserve);
+				unsigned long addr, bool cow_from_owner);
 struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
 				nodemask_t *nmask, gfp_t gfp_mask,
 				bool allow_alloc_fallback);
@@ -1061,7 +1061,7 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
 
 static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 					unsigned long addr,
-					int avoid_reserve)
+					bool cow_from_owner)
 {
 	return NULL;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8d4b4197d11b..dfd479a857b6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2956,8 +2956,15 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	return ret;
 }
 
+/*
+ * NOTE! "cow_from_owner" represents a very hacky usage only used in CoW
+ * faults of hugetlb private mappings on top of a non-page-cache folio (in
+ * which case even if there's a private vma resv map it won't cover such
+ * allocation).  New call sites should (probably) never set it to true!!
+ * When it's set, the allocation will bypass all vma level reservations.
+ */
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
-				unsigned long addr, int avoid_reserve)
+				unsigned long addr, bool cow_from_owner)
 {
 	struct hugepage_subpool *spool = subpool_vma(vma);
 	struct hstate *h = hstate_vma(vma);
@@ -2998,7 +3005,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	 * Allocations for MAP_NORESERVE mappings also need to be
 	 * checked against any subpool limit.
 	 */
-	if (map_chg || avoid_reserve) {
+	if (map_chg || cow_from_owner) {
 		gbl_chg = hugepage_subpool_get_pages(spool, 1);
 		if (gbl_chg < 0)
 			goto out_end_reservation;
@@ -3006,7 +3013,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 
 	/* If this allocation is not consuming a reservation, charge it now.
 	 */
-	deferred_reserve = map_chg || avoid_reserve;
+	deferred_reserve = map_chg || cow_from_owner;
 	if (deferred_reserve) {
 		ret = hugetlb_cgroup_charge_cgroup_rsvd(
 			idx, pages_per_huge_page(h), &h_cg);
@@ -3031,7 +3038,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	if (!folio)
 		goto out_uncharge_cgroup;
 	spin_lock_irq(&hugetlb_lock);
-	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
+	if (!cow_from_owner && vma_has_reserves(vma, gbl_chg)) {
 		folio_set_hugetlb_restore_reserve(folio);
 		h->resv_huge_pages--;
 	}
@@ -3090,7 +3097,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h),
 					    h_cg);
 out_subpool_put:
-	if (map_chg || avoid_reserve)
+	if (map_chg || cow_from_owner)
 		hugepage_subpool_put_pages(spool, 1);
 out_end_reservation:
 	vma_end_reservation(h, vma, addr);
@@ -5317,7 +5324,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				spin_unlock(src_ptl);
 				spin_unlock(dst_ptl);
 				/* Do not use reserve as it's private owned */
-				new_folio = alloc_hugetlb_folio(dst_vma, addr, 0);
+				new_folio = alloc_hugetlb_folio(dst_vma, addr, false);
 				if (IS_ERR(new_folio)) {
 					folio_put(pte_folio);
 					ret = PTR_ERR(new_folio);
@@ -5771,7 +5778,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 	struct hstate *h = hstate_vma(vma);
 	struct folio *old_folio;
 	struct folio *new_folio;
-	int outside_reserve = 0;
+	bool cow_from_owner = 0;
 	vm_fault_t ret = 0;
 	struct mmu_notifier_range range;
 
@@ -5840,7 +5847,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 	 */
 	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER) &&
 	    old_folio != pagecache_folio)
-		outside_reserve = 1;
+		cow_from_owner = true;
 
 	folio_get(old_folio);
 
@@ -5849,7 +5856,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 	 * be acquired again before returning to the caller, as expected.
 	 */
 	spin_unlock(vmf->ptl);
-	new_folio = alloc_hugetlb_folio(vma, vmf->address, outside_reserve);
+	new_folio = alloc_hugetlb_folio(vma, vmf->address, cow_from_owner);
 
 	if (IS_ERR(new_folio)) {
 		/*
@@ -5859,7 +5866,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 		 * reliability, unmap the page from child processes.  The child
 		 * may get SIGKILLed if it later faults.
 		 */
-		if (outside_reserve) {
+		if (cow_from_owner) {
 			struct address_space *mapping = vma->vm_file->f_mapping;
 			pgoff_t idx;
 			u32 hash;
@@ -6110,7 +6117,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 			goto out;
 		}
 
-		folio = alloc_hugetlb_folio(vma, vmf->address, 0);
+		folio = alloc_hugetlb_folio(vma, vmf->address, false);
 		if (IS_ERR(folio)) {
 			/*
 			 * Returning error will result in faulting task being
@@ -6578,7 +6585,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			goto out;
 		}
 
-		folio = alloc_hugetlb_folio(dst_vma, dst_addr, 0);
+		folio = alloc_hugetlb_folio(dst_vma, dst_addr, false);
 		if (IS_ERR(folio)) {
 			ret = -ENOMEM;
 			goto out;
@@ -6620,7 +6627,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			goto out;
 		}
 
-		folio = alloc_hugetlb_folio(dst_vma, dst_addr, 0);
+		folio = alloc_hugetlb_folio(dst_vma, dst_addr, false);
 		if (IS_ERR(folio)) {
 			folio_put(*foliop);
 			ret = -ENOMEM;
-- 
2.47.0

From nobody Sat Feb 7 12:29:46 2026
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Rik van Riel, Breno Leitao, Andrew Morton, peterx@redhat.com,
	Muchun Song, Oscar Salvador, Roman Gushchin, Naoya Horiguchi,
	Ackerley Tng
Subject: [PATCH 4/7] mm/hugetlb: Clean up map/global resv accounting when allocate
Date: Sun, 1 Dec 2024 16:22:37 -0500
Message-ID: <20241201212240.533824-5-peterx@redhat.com>
In-Reply-To: <20241201212240.533824-1-peterx@redhat.com>
References: <20241201212240.533824-1-peterx@redhat.com>

alloc_hugetlb_folio() is not an easy function to read, especially on the
reservation accounting for either the VMA or globally (mostly, the
subpool).

The first complexity lies in the special private CoW path, i.e. the
cow_from_owner=true case.

The second complexity is the confusing updates of gbl_chg after it is set
once, which look like they can change at any time on the fly.

Logically, cow_from_owner is only about the vma reservation.  We can
decouple the flag and consolidate it into the map charge flag very early,
so that we don't need to keep checking the CoW special flag every time.

This patch does that by making map_chg a tri-state flag.
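For reference while reading the rest of this message, the tri-state that
the diff below introduces looks like this (copied out of the patch, with
the comments shortened):

	typedef enum {
		MAP_CHG_REUSE = 0,	/* a per-vma resv count can be reused */
		MAP_CHG_NEEDED = 1,	/* an extra resv count is needed */
		MAP_CHG_ENFORCED = 2,	/* cow_from_owner: bypass the vma resv map */
	} map_chg_state;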
Needing a tri-state is unfortunate; it is because vma_needs_reservation()
currently has an internal side effect, in that it must be followed by
either an end() or a commit().

We keep one semantic the same as before: "if (map_chg)" means we need a
separate per-vma resv count.  That keeps most of the old code untouched
with the new enum.

After this patch, the variables are decided in the following steps,
hopefully slightly easier to follow:

  - First, decide map_chg.  This takes cow_from_owner into account, once
    and for all.  It is about whether we can take a resv count from the
    vma, no matter whether it is shared, private, etc.

  - Then, decide gbl_chg.  The only difference from map_chg is the
    subpool.

Now each flag is updated once and for all, instead of flipping later,
which can be very hard to follow.

With cow_from_owner merged into map_chg, quite a few such checks can be
removed all over the place.  A side benefit is that one more confusing
flag, deferred_reserve, can go away.

Clean up the comments a bit too.  E.g., MAP_NORESERVE may not need to be
checked against the subpool limit, AFAIU, if it is on a shared mapping and
the page cache folio has the inode's resv map available (in which case
map_chg would have been set to zero, hence the code should be correct,
just not the comment).

There is one trivial detail in this patch that needs attention, which is
the check right after vma_commit_reservation():

  if (map_chg > map_commit)

It changes to:

  if (unlikely(map_chg == MAP_CHG_NEEDED && retval == 0))

It should behave the same as before, because previously the only way to
make "map_chg > map_commit" true was map_chg=1 && map_commit=0, which is
exactly what the rewritten line checks.  Meanwhile, either commit() or
end() needs to be skipped for ENFORCED, to keep the old behavior.

Even though a lot looks changed, no functional change is expected.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 116 +++++++++++++++++++++++++++++++++++----------------
 1 file changed, 80 insertions(+), 36 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dfd479a857b6..14cfe0bb01e4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2956,6 +2956,25 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	return ret;
 }
 
+typedef enum {
+	/*
+	 * For either 0/1: we checked the per-vma resv map, and one resv
+	 * count either can be reused (0), or an extra needed (1).
+	 */
+	MAP_CHG_REUSE = 0,
+	MAP_CHG_NEEDED = 1,
+	/*
+	 * Cannot use per-vma resv count can be used, hence a new resv
+	 * count is enforced.
+	 *
+	 * NOTE: This is mostly identical to MAP_CHG_NEEDED, except
+	 * that currently vma_needs_reservation() has an unwanted side
+	 * effect to either use end() or commit() to complete the
+	 * transaction.  Hence it needs to differenciate from NEEDED.
+	 */
+	MAP_CHG_ENFORCED = 2,
+} map_chg_state;
+
 /*
"cow_from_owner" represents a very hacky usage only used in CoW * faults of hugetlb private mappings on top of a non-page-cache folio (in @@ -2969,12 +2988,11 @@ struct folio *alloc_hugetlb_folio(struct vm_area_st= ruct *vma, struct hugepage_subpool *spool =3D subpool_vma(vma); struct hstate *h =3D hstate_vma(vma); struct folio *folio; - long map_chg, map_commit, nr_pages =3D pages_per_huge_page(h); - long gbl_chg; + long retval, gbl_chg, nr_pages =3D pages_per_huge_page(h); + map_chg_state map_chg; int memcg_charge_ret, ret, idx; struct hugetlb_cgroup *h_cg =3D NULL; struct mem_cgroup *memcg; - bool deferred_reserve; gfp_t gfp =3D htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL; =20 memcg =3D get_mem_cgroup_from_current(); @@ -2985,36 +3003,56 @@ struct folio *alloc_hugetlb_folio(struct vm_area_st= ruct *vma, } =20 idx =3D hstate_index(h); - /* - * Examine the region/reserve map to determine if the process - * has a reservation for the page to be allocated. A return - * code of zero indicates a reservation exists (no change). - */ - map_chg =3D gbl_chg =3D vma_needs_reservation(h, vma, addr); - if (map_chg < 0) { - if (!memcg_charge_ret) - mem_cgroup_cancel_charge(memcg, nr_pages); - mem_cgroup_put(memcg); - return ERR_PTR(-ENOMEM); + + /* Whether we need a separate per-vma reservation? */ + if (cow_from_owner) { + /* + * Special case! Since it's a CoW on top of a reserved + * page, the private resv map doesn't count. So it cannot + * consume the per-vma resv map even if it's reserved. + */ + map_chg =3D MAP_CHG_ENFORCED; + } else { + /* + * Examine the region/reserve map to determine if the process + * has a reservation for the page to be allocated. A return + * code of zero indicates a reservation exists (no change). + */ + retval =3D vma_needs_reservation(h, vma, addr); + if (retval < 0) { + if (!memcg_charge_ret) + mem_cgroup_cancel_charge(memcg, nr_pages); + mem_cgroup_put(memcg); + return ERR_PTR(-ENOMEM); + } + map_chg =3D retval ? MAP_CHG_NEEDED : MAP_CHG_REUSE; } =20 /* + * Whether we need a separate global reservation? + * * Processes that did not create the mapping will have no * reserves as indicated by the region/reserve map. Check * that the allocation will not exceed the subpool limit. - * Allocations for MAP_NORESERVE mappings also need to be - * checked against any subpool limit. + * Or if it can get one from the pool reservation directly. */ - if (map_chg || cow_from_owner) { + if (map_chg) { gbl_chg =3D hugepage_subpool_get_pages(spool, 1); if (gbl_chg < 0) goto out_end_reservation; + } else { + /* + * If we have the vma reservation ready, no need for extra + * global reservation. + */ + gbl_chg =3D 0; } =20 - /* If this allocation is not consuming a reservation, charge it now. + /* + * If this allocation is not consuming a per-vma reservation, + * charge the hugetlb cgroup now. */ - deferred_reserve =3D map_chg || cow_from_owner; - if (deferred_reserve) { + if (map_chg) { ret =3D hugetlb_cgroup_charge_cgroup_rsvd( idx, pages_per_huge_page(h), &h_cg); if (ret) @@ -3038,7 +3076,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_stru= ct *vma, if (!folio) goto out_uncharge_cgroup; spin_lock_irq(&hugetlb_lock); - if (!cow_from_owner && vma_has_reserves(vma, gbl_chg)) { + if (vma_has_reserves(vma, gbl_chg)) { folio_set_hugetlb_restore_reserve(folio); h->resv_huge_pages--; } @@ -3051,7 +3089,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_stru= ct *vma, /* If allocation is not consuming a reservation, also store the * hugetlb_cgroup pointer on the page. 
*/ - if (deferred_reserve) { + if (map_chg) { hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h), h_cg, folio); } @@ -3060,26 +3098,31 @@ struct folio *alloc_hugetlb_folio(struct vm_area_st= ruct *vma, =20 hugetlb_set_folio_subpool(folio, spool); =20 - map_commit =3D vma_commit_reservation(h, vma, addr); - if (unlikely(map_chg > map_commit)) { + if (map_chg !=3D MAP_CHG_ENFORCED) { + /* commit() is only needed if the map_chg is not enforced */ + retval =3D vma_commit_reservation(h, vma, addr); /* + * Check for possible race conditions. When it happens.. * The page was added to the reservation map between * vma_needs_reservation and vma_commit_reservation. * This indicates a race with hugetlb_reserve_pages. * Adjust for the subpool count incremented above AND - * in hugetlb_reserve_pages for the same page. Also, + * in hugetlb_reserve_pages for the same page. Also, * the reservation count added in hugetlb_reserve_pages * no longer applies. */ - long rsv_adjust; + if (unlikely(map_chg =3D=3D MAP_CHG_NEEDED && retval =3D=3D 0)) { + long rsv_adjust; =20 - rsv_adjust =3D hugepage_subpool_put_pages(spool, 1); - hugetlb_acct_memory(h, -rsv_adjust); - if (deferred_reserve) { - spin_lock_irq(&hugetlb_lock); - hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h), - pages_per_huge_page(h), folio); - spin_unlock_irq(&hugetlb_lock); + rsv_adjust =3D hugepage_subpool_put_pages(spool, 1); + hugetlb_acct_memory(h, -rsv_adjust); + if (map_chg) { + spin_lock_irq(&hugetlb_lock); + hugetlb_cgroup_uncharge_folio_rsvd( + hstate_index(h), pages_per_huge_page(h), + folio); + spin_unlock_irq(&hugetlb_lock); + } } } =20 @@ -3093,14 +3136,15 @@ struct folio *alloc_hugetlb_folio(struct vm_area_st= ruct *vma, out_uncharge_cgroup: hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg); out_uncharge_cgroup_reservation: - if (deferred_reserve) + if (map_chg) hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h), h_cg); out_subpool_put: - if (map_chg || cow_from_owner) + if (map_chg) hugepage_subpool_put_pages(spool, 1); out_end_reservation: - vma_end_reservation(h, vma, addr); + if (map_chg !=3D MAP_CHG_ENFORCED) + vma_end_reservation(h, vma, addr); if (!memcg_charge_ret) mem_cgroup_cancel_charge(memcg, nr_pages); mem_cgroup_put(memcg); --=20 2.47.0 From nobody Sat Feb 7 12:29:46 2026 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 085E21D8E1D for ; Sun, 1 Dec 2024 21:23:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733088183; cv=none; b=jRMVdIK+LZa+6pK2TgKXYLd3VWayMiS8MmKyN6VMImbkexFbOZKg1lm4eTv6EUBJ8kk2OFExipSAQ8xLKQQi7PWYtK4EqrxFmRcee91LSlXpAi4soH57q0xZA+KihnREqLKVDzw2nHSmRfrO3jatnF5937X8iowWYhTO0g+W3Xs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733088183; c=relaxed/simple; bh=tNMEMm+0UKui0cbRmREuuGJGP3p7pkxPiHc+LfNvtNM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=LYDf9R1t63EAopAlDPKdLn0x8hYJFndlRA1s+bR5d+KdmhV2I7+Y84mnGqIhYj9Paj6LLZvsw7uDIuDbitvP+IY8MlNKg+ycTml99Ro7R6gqVvitV8AOmUY2CM2sGFvd6/NJqwM7QIi5maoN2FmQOT1n7nJHGTCahbqmQESstG4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; 
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Rik van Riel, Breno Leitao, Andrew Morton, peterx@redhat.com,
	Muchun Song, Oscar Salvador, Roman Gushchin, Naoya Horiguchi,
	Ackerley Tng
Subject: [PATCH 5/7] mm/hugetlb: Simplify vma_has_reserves()
Date: Sun, 1 Dec 2024 16:22:38 -0500
Message-ID: <20241201212240.533824-6-peterx@redhat.com>
In-Reply-To: <20241201212240.533824-1-peterx@redhat.com>
References: <20241201212240.533824-1-peterx@redhat.com>

vma_has_reserves() is a helper that "tries" to know whether the vma should
consume one reservation when allocating the hugetlb folio.

However, it is not clear why we need such complexity, because the
information is already represented by the "chg" variable.  From the
alloc_hugetlb_folio() context, "chg" (or, in that function's context,
"gbl_chg") is defined as:

  - If gbl_chg=1, the allocation cannot reuse an existing reservation
  - If gbl_chg=0, the allocation should reuse an existing reservation

Firstly, map_chg is defined as follows, to cover all cases of hugetlb
reservation scenarios (mostly via vma_needs_reservation(), with
cow_from_owner being the outlier):

  CONDITION                                           HAS RESERVATION?
  =========                                           ================
  - SHARED: always check against per-inode resv_map
    (ignore NONRESERVE)
    - If resv exists                                  ==> YES [1]
    - If not                                          ==> NO  [2]
  - PRIVATE: complicated...
    - Request came from a CoW from owner resv map     ==> NO  [3]
      (when cow_from_owner==true)
    - If it does not own a resv_map at all..          ==> NO  [4]
      (examples: VM_NORESERVE, private fork())
    - If it owns a resv_map, but resv doesn't exist   ==> NO  [5]
    - If it owns a resv_map, and resv exists          ==> YES [6]

Further on, gbl_chg also takes the subpool setup into account, so it is a
decision based on all of the context.

If we look at vma_has_reserves(), it mostly re-checks what has already
been covered by the map_chg accounting (I marked each return value with
the case above):

  static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
  {
          if (vma->vm_flags & VM_NORESERVE) {
                  if (vma->vm_flags & VM_MAYSHARE && chg == 0)
                          return true;    ==> [1]
                  else
                          return false;   ==> [2] or [4]
          }

          if (vma->vm_flags & VM_MAYSHARE) {
                  if (chg)
                          return false;   ==> [2]
                  else
                          return true;    ==> [1]
          }

          if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
                  if (chg)
                          return false;   ==> [5]
                  else
                          return true;    ==> [6]
          }

          return false;                   ==> [4]
  }

It does not check [3], but case [3] is already covered by the "chg" /
"gbl_chg" / "map_chg" calculations.

In short, vma_has_reserves() provides nothing more than "return !chg", so
simplify all of it.

There are a lot of comments describing truncation races; IIUC there should
be no race as long as map_chg is done properly.
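After the simplification (the full diff follows), the helper collapses to
the "!chg" check described above; shown here only as a summary of what the
patch keeps:

	/* Returns true if the VMA has associated reserve pages */
	static bool vma_has_reserves(long chg)
	{
		/*
		 * Now "chg" has all the conditions considered for whether we
		 * should use an existing reservation.
		 */
		return chg == 0;
	}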
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 67 ++++++----------------------------------------------
 1 file changed, 7 insertions(+), 60 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 14cfe0bb01e4..b7e16b3c4e67 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1247,66 +1247,13 @@ void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
 }
 
 /* Returns true if the VMA has associated reserve pages */
-static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
+static bool vma_has_reserves(long chg)
 {
-	if (vma->vm_flags & VM_NORESERVE) {
-		/*
-		 * This address is already reserved by other process(chg == 0),
-		 * so, we should decrement reserved count. Without decrementing,
-		 * reserve count remains after releasing inode, because this
-		 * allocated page will go into page cache and is regarded as
-		 * coming from reserved pool in releasing step.  Currently, we
-		 * don't have any other solution to deal with this situation
-		 * properly, so add work-around here.
-		 */
-		if (vma->vm_flags & VM_MAYSHARE && chg == 0)
-			return true;
-		else
-			return false;
-	}
-
-	/* Shared mappings always use reserves */
-	if (vma->vm_flags & VM_MAYSHARE) {
-		/*
-		 * We know VM_NORESERVE is not set.  Therefore, there SHOULD
-		 * be a region map for all pages.  The only situation where
-		 * there is no region map is if a hole was punched via
-		 * fallocate.  In this case, there really are no reserves to
-		 * use.  This situation is indicated if chg != 0.
-		 */
-		if (chg)
-			return false;
-		else
-			return true;
-	}
-
 	/*
-	 * Only the process that called mmap() has reserves for
-	 * private mappings.
+	 * Now "chg" has all the conditions considered for whether we
+	 * should use an existing reservation.
 	 */
-	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
-		/*
-		 * Like the shared case above, a hole punch or truncate
-		 * could have been performed on the private mapping.
-		 * Examine the value of chg to determine if reserves
-		 * actually exist or were previously consumed.
-		 * Very Subtle - The value of chg comes from a previous
-		 * call to vma_needs_reserves(). The reserve map for
-		 * private mappings has different (opposite) semantics
-		 * than that of shared mappings.  vma_needs_reserves()
-		 * has already taken this difference in semantics into
-		 * account.  Therefore, the meaning of chg is the same
-		 * as in the shared case above.  Code could easily be
-		 * combined, but keeping it separate draws attention to
-		 * subtle differences.
-		 */
-		if (chg)
-			return false;
-		else
-			return true;
-	}
-
-	return false;
+	return chg == 0;
 }
 
 static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio)
@@ -1407,7 +1354,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	 * have no page reserves. This check ensures that reservations are
 	 * not "stolen". The child may still get SIGKILLed
 	 */
-	if (!vma_has_reserves(vma, chg) && !available_huge_pages(h))
+	if (!vma_has_reserves(chg) && !available_huge_pages(h))
 		goto err;
 
 	gfp_mask = htlb_alloc_mask(h);
@@ -1425,7 +1372,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, nid, nodemask);
 
-	if (folio && vma_has_reserves(vma, chg)) {
+	if (folio && vma_has_reserves(chg)) {
 		folio_set_hugetlb_restore_reserve(folio);
 		h->resv_huge_pages--;
 	}
@@ -3076,7 +3023,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	if (!folio)
 		goto out_uncharge_cgroup;
 	spin_lock_irq(&hugetlb_lock);
-	if (vma_has_reserves(vma, gbl_chg)) {
+	if (vma_has_reserves(gbl_chg)) {
 		folio_set_hugetlb_restore_reserve(folio);
 		h->resv_huge_pages--;
 	}
-- 
2.47.0

From nobody Sat Feb 7 12:29:46 2026
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Rik van Riel, Breno Leitao, Andrew Morton, peterx@redhat.com,
 Muchun Song, Oscar Salvador, Roman Gushchin, Naoya Horiguchi,
 Ackerley Tng
Subject: [PATCH 6/7] mm/hugetlb: Drop vma_has_reserves()
Date: Sun, 1 Dec 2024 16:22:39 -0500
Message-ID: <20241201212240.533824-7-peterx@redhat.com>
X-Mailer: git-send-email 2.47.0
In-Reply-To: <20241201212240.533824-1-peterx@redhat.com>
References: <20241201212240.533824-1-peterx@redhat.com>

After the previous cleanup, vma_has_reserves() is mostly an empty helper:
the only thing it still encodes is that "use an existing reserve" is the
inverted form of "needs a global reserve count", which is still true.  To
avoid the confusion of having two inverted ways to ask the same question,
use gbl_chg everywhere and drop the function.

While at it, rename "chg" to "gbl_chg" in dequeue_hugetlb_folio_vma(), so
readers can see that the "chg" here is the global reserve count, not the
vma resv count.
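To make the convention concrete, here is a minimal, illustrative user-space
sketch (not kernel code; both helper names below are invented for this
example) of how callers read gbl_chg once vma_has_reserves() is gone: zero
means an existing global reservation covers the request, non-zero means a
free, unreserved huge page has to be found.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative helpers only; these names do not exist in the kernel.
 * gbl_chg is the value computed earlier by the reservation bookkeeping:
 * 0  -> the request is already covered by a global reservation
 * >0 -> the request needs that many extra (unreserved) huge pages
 */
static bool consumes_reservation(long gbl_chg)
{
	/* What vma_has_reserves(gbl_chg) used to answer */
	return gbl_chg == 0;
}

static bool needs_unreserved_page(long gbl_chg)
{
	/* The check the patch now writes directly as "if (gbl_chg)" */
	return gbl_chg != 0;
}

int main(void)
{
	for (long gbl_chg = 0; gbl_chg <= 1; gbl_chg++)
		printf("gbl_chg=%ld: consumes reservation=%d, needs free page=%d\n",
		       gbl_chg, consumes_reservation(gbl_chg),
		       needs_unreserved_page(gbl_chg));
	return 0;
}

The two helpers are exact inverses, which is why keeping only one of the
two spellings (the raw gbl_chg check) is enough.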
Signed-off-by: Peter Xu
---
 mm/hugetlb.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b7e16b3c4e67..10251ef3289a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1246,16 +1246,6 @@ void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
 	hugetlb_dup_vma_private(vma);
 }
 
-/* Returns true if the VMA has associated reserve pages */
-static bool vma_has_reserves(long chg)
-{
-	/*
-	 * Now "chg" has all the conditions considered for whether we
-	 * should use an existing reservation.
-	 */
-	return chg == 0;
-}
-
 static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
 	int nid = folio_nid(folio);
@@ -1341,7 +1331,7 @@ static unsigned long available_huge_pages(struct hstate *h)
 
 static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 				struct vm_area_struct *vma,
-				unsigned long address, long chg)
+				unsigned long address, long gbl_chg)
 {
 	struct folio *folio = NULL;
 	struct mempolicy *mpol;
@@ -1350,11 +1340,10 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	int nid;
 
 	/*
-	 * A child process with MAP_PRIVATE mappings created by their parent
-	 * have no page reserves. This check ensures that reservations are
-	 * not "stolen". The child may still get SIGKILLed
+	 * gbl_chg==1 means the allocation requires a new page that was not
+	 * reserved before. Making sure there's at least one free page.
 	 */
-	if (!vma_has_reserves(chg) && !available_huge_pages(h))
+	if (gbl_chg && !available_huge_pages(h))
 		goto err;
 
 	gfp_mask = htlb_alloc_mask(h);
@@ -1372,7 +1361,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
 					       nid, nodemask);
 
-	if (folio && vma_has_reserves(chg)) {
+	if (folio && !gbl_chg) {
 		folio_set_hugetlb_restore_reserve(folio);
 		h->resv_huge_pages--;
 	}
@@ -3023,7 +3012,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 		if (!folio)
 			goto out_uncharge_cgroup;
 		spin_lock_irq(&hugetlb_lock);
-		if (vma_has_reserves(gbl_chg)) {
+		if (!gbl_chg) {
 			folio_set_hugetlb_restore_reserve(folio);
 			h->resv_huge_pages--;
 		}
-- 
2.47.0

From nobody Sat Feb 7 12:29:46 2026
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Rik van Riel, Breno Leitao, Andrew Morton, peterx@redhat.com,
 Muchun Song, Oscar Salvador, Roman Gushchin, Naoya Horiguchi,
 Ackerley Tng
Subject: [PATCH 7/7] mm/hugetlb: Unify restore reserve accounting for new allocations
Date: Sun, 1 Dec 2024 16:22:40 -0500
Message-ID: <20241201212240.533824-8-peterx@redhat.com>
X-Mailer: git-send-email 2.47.0
In-Reply-To: <20241201212240.533824-1-peterx@redhat.com>
References: <20241201212240.533824-1-peterx@redhat.com>

Hugetlb pages need the same restore-reserve accounting whether they were
dequeued from the hstate free lists or newly allocated from the buddy
allocator.  Merge the two paths, and add a small comment to make the
shared step slightly clearer.

Signed-off-by: Peter Xu
---
 mm/hugetlb.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 10251ef3289a..64e690fe52bf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1361,11 +1361,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
 					       nid, nodemask);
 
-	if (folio && !gbl_chg) {
-		folio_set_hugetlb_restore_reserve(folio);
-		h->resv_huge_pages--;
-	}
-
 	mpol_cond_put(mpol);
 	return folio;
 
@@ -3012,15 +3007,20 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 		if (!folio)
 			goto out_uncharge_cgroup;
 		spin_lock_irq(&hugetlb_lock);
-		if (!gbl_chg) {
-			folio_set_hugetlb_restore_reserve(folio);
-			h->resv_huge_pages--;
-		}
 		list_add(&folio->lru, &h->hugepage_activelist);
 		folio_ref_unfreeze(folio, 1);
 		/* Fall through */
 	}
 
+	/*
+	 * Either dequeued or buddy-allocated folio needs to add special
+	 * mark to the folio when it consumes a global reservation.
+	 */
+	if (!gbl_chg) {
+		folio_set_hugetlb_restore_reserve(folio);
+		h->resv_huge_pages--;
+	}
+
 	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
 	/* If allocation is not consuming a reservation, also store the
 	 * hugetlb_cgroup pointer on the page.
-- 
2.47.0
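To round off the series, below is a small, self-contained user-space sketch
(heavily simplified; the structures and helpers are stand-ins, not the
kernel API) of the control flow this last patch converges on: whether the
folio comes from the free list or from a fresh buddy allocation, the
restore-reserve accounting happens exactly once, keyed only on gbl_chg.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures (illustration only). */
struct hstate { long free_huge_pages; long resv_huge_pages; };
struct folio { bool restore_reserve; };

/* Pretend "dequeue from the hstate free list"; may fail when empty. */
static struct folio *dequeue_folio(struct hstate *h)
{
	if (!h->free_huge_pages)
		return NULL;
	h->free_huge_pages--;
	return calloc(1, sizeof(struct folio));
}

/* Pretend "allocate a fresh huge page from the buddy allocator". */
static struct folio *alloc_buddy_folio(void)
{
	return calloc(1, sizeof(struct folio));
}

/*
 * Mirror of the merged logic: try the free list first, fall back to a
 * fresh allocation, then do the restore-reserve accounting exactly once,
 * regardless of which path produced the folio.
 */
static struct folio *alloc_hugetlb_folio_sketch(struct hstate *h, long gbl_chg)
{
	struct folio *folio = dequeue_folio(h);

	if (!folio)
		folio = alloc_buddy_folio();
	if (!folio)
		return NULL;

	/* Only when the allocation consumed a global reservation. */
	if (!gbl_chg) {
		folio->restore_reserve = true;
		h->resv_huge_pages--;
	}
	return folio;
}

int main(void)
{
	struct hstate h = { .free_huge_pages = 1, .resv_huge_pages = 1 };
	struct folio *a = alloc_hugetlb_folio_sketch(&h, 0); /* reserved */
	struct folio *b = alloc_hugetlb_folio_sketch(&h, 1); /* unreserved */

	printf("a restore_reserve=%d, b restore_reserve=%d, resv left=%ld\n",
	       a->restore_reserve, b->restore_reserve, h.resv_huge_pages);
	free(a);
	free(b);
	return 0;
}

In the sketch, folio "a" consumes the one reservation and is marked for
restore-reserve, while "b" comes from the fallback path and is not; the
accounting step itself no longer cares which path was taken.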