From: "Vishal Moola (Oracle)"
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hughd@google.com, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Vishal Moola (Oracle)"
Subject: [PATCH 1/4] filemap: find_lock_entries() now updates start offset
Date: Tue, 11 Oct 2022 14:56:31 -0700
Message-Id: <20221011215634.478330-2-vishal.moola@gmail.com>
In-Reply-To: <20221011215634.478330-1-vishal.moola@gmail.com>
References: <20221011215634.478330-1-vishal.moola@gmail.com>

Initially, find_lock_entries() was passed the start offset as a value.
That left the calculation of the next offset to the callers, which led
to complexity in the callers trying to keep track of the index.

Now find_lock_entries() takes a pointer to the start offset and updates
the value to point directly after the last entry found. If no entry is
found, the offset is not changed. This gets rid of multiple hacky
calculations that kept track of the start offset.

Signed-off-by: Vishal Moola (Oracle)
---
 mm/filemap.c  | 17 ++++++++++++++---
 mm/internal.h |  2 +-
 mm/shmem.c    |  8 ++------
 mm/truncate.c | 12 ++++--------
 4 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 08341616ae7a..e95500b07ee9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2084,17 +2084,19 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
  * locked or folios under writeback.
  *
  * Return: The number of entries which were found.
+ * Also updates @start to be positioned after the last found entry
  */
-unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
 {
-	XA_STATE(xas, &mapping->i_pages, start);
+	XA_STATE(xas, &mapping->i_pages, *start);
+	unsigned long nr;
 	struct folio *folio;
 
 	rcu_read_lock();
 	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
 		if (!xa_is_value(folio)) {
-			if (folio->index < start)
+			if (folio->index < *start)
 				goto put;
 			if (folio->index + folio_nr_pages(folio) - 1 > end)
 				goto put;
@@ -2116,7 +2118,16 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 		folio_put(folio);
 	}
 	rcu_read_unlock();
+	nr = folio_batch_count(fbatch);
+
+	if (nr) {
+		folio = fbatch->folios[nr - 1];
+		nr = folio_nr_pages(folio);
 
+		if (folio_test_hugetlb(folio))
+			nr = 1;
+		*start = folio->index + nr;
+	}
 	return folio_batch_count(fbatch);
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index 6b7ef495b56d..c504ac7267e0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -106,7 +106,7 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 	force_page_cache_ra(&ractl, nr_to_read);
 }
 
-unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
diff --git a/mm/shmem.c b/mm/shmem.c
index 86214d48dd09..ab4f6dfcf6bb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -922,21 +922,18 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (index < end && find_lock_entries(mapping, index, end - 1,
+	while (index < end && find_lock_entries(mapping, &index, end - 1,
 			&fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			folio = fbatch.folios[i];
 
-			index = indices[i];
-
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
 				nr_swaps_freed += !shmem_free_swap(mapping,
-								index, folio);
+							folio->index, folio);
 				continue;
 			}
-			index += folio_nr_pages(folio) - 1;
 
 			if (!unfalloc || !folio_test_uptodate(folio))
 				truncate_inode_folio(mapping, folio);
@@ -945,7 +942,6 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
 		cond_resched();
-		index++;
 	}
 
 	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
diff --git a/mm/truncate.c b/mm/truncate.c
index c0be77e5c008..b0bd63b2359f 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -361,9 +361,8 @@ void truncate_inode_pages_range(struct address_space *mapping,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (index < end && find_lock_entries(mapping, index, end - 1,
+	while (index < end && find_lock_entries(mapping, &index, end - 1,
 			&fbatch, indices)) {
-		index = indices[folio_batch_count(&fbatch) - 1] + 1;
 		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
 		for (i = 0; i < folio_batch_count(&fbatch); i++)
 			truncate_cleanup_folio(fbatch.folios[i]);
@@ -510,20 +509,18 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 	int i;
 
 	folio_batch_init(&fbatch);
-	while (find_lock_entries(mapping, index, end, &fbatch, indices)) {
+	while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing folio->index */
-			index = indices[i];
 
 			if (xa_is_value(folio)) {
 				count += invalidate_exceptional_entry(mapping,
-								      index,
-								      folio);
+								      folio->index,
+								      folio);
 				continue;
 			}
-			index += folio_nr_pages(folio) - 1;
 
 			ret = mapping_evict_folio(mapping, folio);
 			folio_unlock(folio);
@@ -542,7 +539,6 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
 		cond_resched();
-		index++;
 	}
 	return count;
 }
-- 
2.36.1
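
[Editorial sketch, not part of the patch: a minimal before/after view of
a caller loop, condensed from the mm/truncate.c hunk above with the loop
bodies elided, to make the new calling convention concrete.]

    /* Before: the caller derived the next start index by hand. */
    while (index < end && find_lock_entries(mapping, index, end - 1,
                    &fbatch, indices)) {
            index = indices[folio_batch_count(&fbatch) - 1] + 1;
            /* ... process the batch ... */
    }

    /* After: find_lock_entries() advances index past the last entry
     * found, or leaves it untouched when nothing was found. */
    while (index < end && find_lock_entries(mapping, &index, end - 1,
                    &fbatch, indices)) {
            /* ... process the batch; no manual index bookkeeping ... */
    }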
From: "Vishal Moola (Oracle)"
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hughd@google.com, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Vishal Moola (Oracle)"
Subject: [PATCH 2/4] filemap: find_get_entries() now updates start offset
Date: Tue, 11 Oct 2022 14:56:32 -0700
Message-Id: <20221011215634.478330-3-vishal.moola@gmail.com>
In-Reply-To: <20221011215634.478330-1-vishal.moola@gmail.com>
References: <20221011215634.478330-1-vishal.moola@gmail.com>

Initially, find_get_entries() was passed the start offset as a value.
That left the calculation of the next offset to the callers, which led
to complexity in the callers trying to keep track of the index.

Now find_get_entries() takes a pointer to the start offset and updates
the value to point directly after the last entry found. If no entry is
found, the offset is not changed. This gets rid of multiple hacky
calculations that kept track of the start offset.

Signed-off-by: Vishal Moola (Oracle)
---
 mm/filemap.c  | 15 +++++++++++++--
 mm/internal.h |  2 +-
 mm/shmem.c    | 11 ++++-------
 mm/truncate.c | 23 +++++++++--------------
 4 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index e95500b07ee9..1b8022c18dc7 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2047,11 +2047,13 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  * shmem/tmpfs, are included in the returned array.
  *
  * Return: The number of entries which were found.
+ * Also updates @start to be positioned after the last found entry
  */
-unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
 {
-	XA_STATE(xas, &mapping->i_pages, start);
+	XA_STATE(xas, &mapping->i_pages, *start);
+	unsigned long nr;
 	struct folio *folio;
 
 	rcu_read_lock();
@@ -2061,7 +2063,16 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 			break;
 	}
 	rcu_read_unlock();
+	nr = folio_batch_count(fbatch);
+
+	if (nr) {
+		folio = fbatch->folios[nr - 1];
+		nr = folio_nr_pages(folio);
 
+		if (folio_test_hugetlb(folio))
+			nr = 1;
+		*start = folio->index + nr;
+	}
 	return folio_batch_count(fbatch);
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index c504ac7267e0..68afdbe7106e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -108,7 +108,7 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
-unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 void filemap_free_folio(struct address_space *mapping, struct folio *folio);
 int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
diff --git a/mm/shmem.c b/mm/shmem.c
index ab4f6dfcf6bb..8240e066edfc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -973,7 +973,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end) {
 		cond_resched();
 
-		if (!find_get_entries(mapping, index, end - 1, &fbatch,
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
 				indices)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
@@ -985,13 +985,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			folio = fbatch.folios[i];
 
-			index = indices[i];
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				if (shmem_free_swap(mapping, index, folio)) {
+				if (shmem_free_swap(mapping, folio->index, folio)) {
 					/* Swap was replaced by page: retry */
-					index--;
+					index = folio->index;
 					break;
 				}
 				nr_swaps_freed++;
@@ -1004,19 +1003,17 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				if (folio_mapping(folio) != mapping) {
 					/* Page was replaced by swap: retry */
 					folio_unlock(folio);
-					index--;
+					index = folio->index;
 					break;
 				}
 				VM_BUG_ON_FOLIO(folio_test_writeback(folio),
 						folio);
 				truncate_inode_folio(mapping, folio);
 			}
-			index = folio->index + folio_nr_pages(folio) - 1;
 			folio_unlock(folio);
 		}
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
-		index++;
 	}
 
 	spin_lock_irq(&info->lock);
diff --git a/mm/truncate.c b/mm/truncate.c
index b0bd63b2359f..846ddbdb27a4 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -400,7 +400,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	index = start;
 	while (index < end) {
 		cond_resched();
-		if (!find_get_entries(mapping, index, end - 1, &fbatch,
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
 				indices)) {
 			/* If all gone from start onwards, we're done */
 			if (index == start)
@@ -414,21 +414,18 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing page->index */
-			index = indices[i];
-
 			if (xa_is_value(folio))
 				continue;
 
 			folio_lock(folio);
-			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, folio->index),
+					folio);
 			folio_wait_writeback(folio);
 			truncate_inode_folio(mapping, folio);
 			folio_unlock(folio);
-			index = folio_index(folio) + folio_nr_pages(folio) - 1;
 		}
 		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
 		folio_batch_release(&fbatch);
-		index++;
 	}
 }
 EXPORT_SYMBOL(truncate_inode_pages_range);
@@ -637,16 +634,14 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (find_get_entries(mapping, index, end, &fbatch, indices)) {
+	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing folio->index */
-			index = indices[i];
-
 			if (xa_is_value(folio)) {
 				if (!invalidate_exceptional_entry2(mapping,
-						index, folio))
+						folio->index, folio))
 					ret = -EBUSY;
 				continue;
 			}
@@ -656,13 +651,14 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				 * If folio is mapped, before taking its lock,
 				 * zap the rest of the file in one hit.
 				 */
-				unmap_mapping_pages(mapping, index,
-						(1 + end - index), false);
+				unmap_mapping_pages(mapping, folio->index,
+						(1 + end - folio->index), false);
 				did_range_unmap = 1;
 			}
 
 			folio_lock(folio);
-			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, folio->index),
+					folio);
 			if (folio->mapping != mapping) {
 				folio_unlock(folio);
 				continue;
@@ -685,7 +681,6 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
 		cond_resched();
-		index++;
 	}
 	/*
 	 * For DAX we invalidate page tables after invalidating page cache.  We
-- 
2.36.1
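
[Editorial sketch: both helpers now share the same post-batch update,
restated here outside the diff with explanatory comments. The rationale
given for the hugetlb special case is an editorial assumption: hugetlb
folios occupy a single page-cache index in this era of the tree, so the
next index is one past folio->index.]

    /* After filling fbatch, advance *start past the last entry found. */
    nr = folio_batch_count(fbatch);
    if (nr) {
            folio = fbatch->folios[nr - 1];
            nr = folio_nr_pages(folio);
            if (folio_test_hugetlb(folio))
                    nr = 1; /* assumed: one xarray index per hugetlb folio */
            *start = folio->index + nr;
    }

If no entry was found, *start is left untouched, which is what lets
callers use the return value alone as their loop condition.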
From: "Vishal Moola (Oracle)"
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hughd@google.com, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Vishal Moola (Oracle)"
Subject: [PATCH 3/4] truncate: Remove indices argument from truncate_folio_batch_exceptionals()
Date: Tue, 11 Oct 2022 14:56:33 -0700
Message-Id: <20221011215634.478330-4-vishal.moola@gmail.com>
In-Reply-To: <20221011215634.478330-1-vishal.moola@gmail.com>
References: <20221011215634.478330-1-vishal.moola@gmail.com>

The indices array is unnecessary. Folios keep track of their xarray
indices in the folio->index field, which can simply be accessed as
needed.

This change is in preparation for the removal of the indices arguments
of find_get_entries() and find_lock_entries().

Signed-off-by: Vishal Moola (Oracle)
---
 mm/truncate.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 846ddbdb27a4..4e63d885498a 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -58,7 +58,7 @@ static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
  * exceptional entries similar to what folio_batch_remove_exceptionals() does.
  */
 static void truncate_folio_batch_exceptionals(struct address_space *mapping,
-			struct folio_batch *fbatch, pgoff_t *indices)
+			struct folio_batch *fbatch)
 {
 	int i, j;
 	bool dax;
@@ -82,7 +82,6 @@ static void truncate_folio_batch_exceptionals(struct address_space *mapping,
 
 	for (i = j; i < folio_batch_count(fbatch); i++) {
 		struct folio *folio = fbatch->folios[i];
-		pgoff_t index = indices[i];
 
 		if (!xa_is_value(folio)) {
 			fbatch->folios[j++] = folio;
@@ -90,11 +89,11 @@ static void truncate_folio_batch_exceptionals(struct address_space *mapping,
 		}
 
 		if (unlikely(dax)) {
-			dax_delete_mapping_entry(mapping, index);
+			dax_delete_mapping_entry(mapping, folio->index);
 			continue;
 		}
 
-		__clear_shadow_entry(mapping, index, folio);
+		__clear_shadow_entry(mapping, folio->index, folio);
 	}
 
 	if (!dax) {
@@ -363,7 +362,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	index = start;
 	while (index < end && find_lock_entries(mapping, &index, end - 1,
 			&fbatch, indices)) {
-		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
+		truncate_folio_batch_exceptionals(mapping, &fbatch);
 		for (i = 0; i < folio_batch_count(&fbatch); i++)
 			truncate_cleanup_folio(fbatch.folios[i]);
 		delete_from_page_cache_batch(mapping, &fbatch);
@@ -424,7 +423,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			truncate_inode_folio(mapping, folio);
 			folio_unlock(folio);
 		}
-		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
+		truncate_folio_batch_exceptionals(mapping, &fbatch);
 		folio_batch_release(&fbatch);
 	}
 }
-- 
2.36.1
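
[Editorial sketch, condensed from the hunks above: the net effect inside
the helper is a one-for-one substitution, not a behavioral rework.]

    /* Before: a parallel array supplied each entry's index. */
    pgoff_t index = indices[i];
    dax_delete_mapping_entry(mapping, index);

    /* After: the index is read from the batch entry itself. */
    dax_delete_mapping_entry(mapping, folio->index);

Note the assumption this encodes: every batch slot, including the
xa_is_value() entries this helper exists to process, must yield a usable
index through folio->index.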
From: "Vishal Moola (Oracle)"
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hughd@google.com, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Vishal Moola (Oracle)"
Subject: [PATCH 4/4] filemap: Remove indices argument from find_lock_entries() and find_get_entries()
Date: Tue, 11 Oct 2022 14:56:34 -0700
Message-Id: <20221011215634.478330-5-vishal.moola@gmail.com>
In-Reply-To: <20221011215634.478330-1-vishal.moola@gmail.com>
References: <20221011215634.478330-1-vishal.moola@gmail.com>

The indices array is unnecessary. Folios keep track of their xarray
indices in the folio->index field, which can simply be accessed as
needed.

This change removes the indices argument from find_lock_entries() and
find_get_entries(). All of the callers are able to remove their indices
arrays as well.

Signed-off-by: Vishal Moola (Oracle)
---
 mm/filemap.c  |  8 ++------
 mm/internal.h |  4 ++--
 mm/shmem.c    |  6 ++----
 mm/truncate.c | 12 ++++--------
 4 files changed, 10 insertions(+), 20 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1b8022c18dc7..1f6be113a214 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2034,7 +2034,6 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  * @start:	The starting page cache index
  * @end:	The final page index (inclusive).
  * @fbatch:	Where the resulting entries are placed.
- * @indices:	The cache indices corresponding to the entries in @entries
  *
  * find_get_entries() will search for and return a batch of entries in
  * the mapping.  The entries are placed in @fbatch.  find_get_entries()
@@ -2050,7 +2049,7 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  * Also updates @start to be positioned after the last found entry
  */
 unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
+		pgoff_t end, struct folio_batch *fbatch)
 {
 	XA_STATE(xas, &mapping->i_pages, *start);
 	unsigned long nr;
@@ -2058,7 +2057,6 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 
 	rcu_read_lock();
 	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
-		indices[fbatch->nr] = xas.xa_index;
 		if (!folio_batch_add(fbatch, folio))
 			break;
 	}
@@ -2082,7 +2080,6 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
  * @start:	The starting page cache index.
  * @end:	The final page index (inclusive).
  * @fbatch:	Where the resulting entries are placed.
- * @indices:	The cache indices of the entries in @fbatch.
  *
  * find_lock_entries() will return a batch of entries from @mapping.
  * Swap, shadow and DAX entries are included.  Folios are returned
@@ -2098,7 +2095,7 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
  * Also updates @start to be positioned after the last found entry
  */
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
+		pgoff_t end, struct folio_batch *fbatch)
 {
 	XA_STATE(xas, &mapping->i_pages, *start);
 	unsigned long nr;
@@ -2119,7 +2116,6 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
 					folio);
 		}
-		indices[fbatch->nr] = xas.xa_index;
 		if (!folio_batch_add(fbatch, folio))
 			break;
 		continue;
diff --git a/mm/internal.h b/mm/internal.h
index 68afdbe7106e..db8d5dfa6d68 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -107,9 +107,9 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 }
 
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
+		pgoff_t end, struct folio_batch *fbatch);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
+		pgoff_t end, struct folio_batch *fbatch);
 void filemap_free_folio(struct address_space *mapping, struct folio *folio);
 int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
diff --git a/mm/shmem.c b/mm/shmem.c
index 8240e066edfc..ad6b5adf04ac 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -907,7 +907,6 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	pgoff_t start = (lstart + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	pgoff_t end = (lend + 1) >> PAGE_SHIFT;
 	struct folio_batch fbatch;
-	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio *folio;
 	bool same_folio;
 	long nr_swaps_freed = 0;
@@ -923,7 +922,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end && find_lock_entries(mapping, &index, end - 1,
-			&fbatch, indices)) {
+			&fbatch)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			folio = fbatch.folios[i];
 
@@ -973,8 +972,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end) {
 		cond_resched();
 
-		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
-				indices)) {
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
 				break;
diff --git a/mm/truncate.c b/mm/truncate.c
index 4e63d885498a..9db247a88483 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -332,7 +332,6 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	pgoff_t		start;		/* inclusive */
 	pgoff_t		end;		/* exclusive */
 	struct folio_batch fbatch;
-	pgoff_t		indices[PAGEVEC_SIZE];
 	pgoff_t		index;
 	int		i;
 	struct folio	*folio;
@@ -361,7 +360,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end && find_lock_entries(mapping, &index, end - 1,
-			&fbatch, indices)) {
+			&fbatch)) {
 		truncate_folio_batch_exceptionals(mapping, &fbatch);
 		for (i = 0; i < folio_batch_count(&fbatch); i++)
 			truncate_cleanup_folio(fbatch.folios[i]);
@@ -399,8 +398,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	index = start;
 	while (index < end) {
 		cond_resched();
-		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
-				indices)) {
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch)) {
 			/* If all gone from start onwards, we're done */
 			if (index == start)
 				break;
@@ -497,7 +495,6 @@ EXPORT_SYMBOL(truncate_inode_pages_final);
 unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec)
 {
-	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio_batch fbatch;
 	pgoff_t index = start;
 	unsigned long ret;
@@ -505,7 +502,7 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 	int i;
 
 	folio_batch_init(&fbatch);
-	while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
+	while (find_lock_entries(mapping, &index, end, &fbatch)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
@@ -620,7 +617,6 @@ static int folio_launder(struct address_space *mapping, struct folio *folio)
 int invalidate_inode_pages2_range(struct address_space *mapping,
 		pgoff_t start, pgoff_t end)
 {
-	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio_batch fbatch;
 	pgoff_t index;
 	int i;
@@ -633,7 +629,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
+	while (find_get_entries(mapping, &index, end, &fbatch)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
-- 
2.36.1
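
[Editorial summary: after the full series, the two helpers have the
prototypes below (as declared in mm/internal.h above), and a typical
caller loop reduces to the shape sketched here. The sketch is
illustrative, assembled from the callers in this series rather than
copied from any single one of them.]

    unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
                    pgoff_t end, struct folio_batch *fbatch);
    unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
                    pgoff_t end, struct folio_batch *fbatch);

    /* Illustrative caller: visit every entry in [start, end]. */
    pgoff_t index = start;
    struct folio_batch fbatch;
    int i;

    folio_batch_init(&fbatch);
    while (find_get_entries(mapping, &index, end, &fbatch)) {
            for (i = 0; i < folio_batch_count(&fbatch); i++) {
                    struct folio *folio = fbatch.folios[i];

                    if (xa_is_value(folio))
                            continue;       /* swap/shadow/DAX entry */
                    /* a real folio; its index is folio->index */
            }
            folio_batch_remove_exceptionals(&fbatch);
            folio_batch_release(&fbatch);
            cond_resched();
    }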