From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-fsdevel@vger.kernel.org
Cc: linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-btrfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
	linux-nilfs@vger.kernel.org, linux-mm@kvack.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 08/23] ceph: Convert ceph_writepages_start() to use filemap_get_folios_tag()
Date: Mon, 12 Sep 2022 11:22:09 -0700
Message-Id: <20220912182224.514561-9-vishal.moola@gmail.com>
In-Reply-To: <20220912182224.514561-1-vishal.moola@gmail.com>
References: <20220912182224.514561-1-vishal.moola@gmail.com>

Convert the function to use folios throughout. This is in preparation
for the removal of find_get_pages_range_tag().

This change does NOT support large folios. That should not be an issue
for now, since ceph only uses single-page (order-0) folios, and the
broader ceph conversion to folios is left for later patches. Also do
some minor renaming for consistency.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/ceph/addr.c | 138 +++++++++++++++++++++++++------------------------
 1 file changed, 70 insertions(+), 68 deletions(-)
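[Reviewer note, not part of the commit message: below is a minimal,
illustrative sketch of the tagged-lookup pattern this patch converts to.
The helper name writeback_loop_sketch() and its empty loop body are
placeholders; filemap_get_folios_tag(), the folio_batch_*() helpers and
the folio_*() calls are the interfaces used in the diff that follows.]

#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/sched.h>

/* Hypothetical helper, for illustration only. */
static void writeback_loop_sketch(struct address_space *mapping,
				  pgoff_t index, pgoff_t end)
{
	struct folio_batch fbatch;
	unsigned int i, nr;

	folio_batch_init(&fbatch);
	/*
	 * filemap_get_folios_tag() fills the batch with dirty-tagged
	 * folios and advances index past the last one returned, so
	 * the loop terminates once the range is exhausted.
	 */
	while (index <= end &&
	       (nr = filemap_get_folios_tag(mapping, &index, end,
					    PAGECACHE_TAG_DIRTY, &fbatch))) {
		for (i = 0; i < nr; i++) {
			struct folio *folio = fbatch.folios[i];

			folio_lock(folio);
			/* per-folio writeback work would go here */
			folio_unlock(folio);
		}
		folio_batch_release(&fbatch);
		cond_resched();
	}
}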
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index dcf701b05cc1..33dbe55b08be 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -792,7 +792,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 	struct ceph_vino vino = ceph_vino(inode);
 	pgoff_t index, start_index, end = -1;
 	struct ceph_snap_context *snapc = NULL, *last_snapc = NULL, *pgsnapc;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	int rc = 0;
 	unsigned int wsize = i_blocksize(inode);
 	struct ceph_osd_request *req = NULL;
@@ -821,7 +821,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 	if (fsc->mount_options->wsize < wsize)
 		wsize = fsc->mount_options->wsize;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
 	start_index = wbc->range_cyclic ? mapping->writeback_index : 0;
 	index = start_index;
@@ -869,9 +869,9 @@ static int ceph_writepages_start(struct address_space *mapping,
 
 	while (!done && index <= end) {
 		int num_ops = 0, op_idx;
-		unsigned i, pvec_pages, max_pages, locked_pages = 0;
+		unsigned i, nr_folios, max_pages, locked_pages = 0;
 		struct page **pages = NULL, **data_pages;
-		struct page *page;
+		struct folio *folio;
 		pgoff_t strip_unit_end = 0;
 		u64 offset = 0, len = 0;
 		bool from_pool = false;
@@ -879,28 +879,28 @@ static int ceph_writepages_start(struct address_space *mapping,
 		max_pages = wsize >> PAGE_SHIFT;
 
 get_more_pages:
-		pvec_pages = pagevec_lookup_range_tag(&pvec, mapping, &index,
-						end, PAGECACHE_TAG_DIRTY);
-		dout("pagevec_lookup_range_tag got %d\n", pvec_pages);
-		if (!pvec_pages && !locked_pages)
+		nr_folios = filemap_get_folios_tag(mapping, &index,
+				end, PAGECACHE_TAG_DIRTY, &fbatch);
+		dout("filemap_get_folios_tag got %d\n", nr_folios);
+		if (!nr_folios && !locked_pages)
 			break;
-		for (i = 0; i < pvec_pages && locked_pages < max_pages; i++) {
-			page = pvec.pages[i];
-			dout("? %p idx %lu\n", page, page->index);
+		for (i = 0; i < nr_folios && locked_pages < max_pages; i++) {
+			folio = fbatch.folios[i];
+			dout("? %p idx %lu\n", folio, folio->index);
 			if (locked_pages == 0)
-				lock_page(page);  /* first page */
-			else if (!trylock_page(page))
+				folio_lock(folio); /* first folio */
+			else if (!folio_trylock(folio))
 				break;
 
 			/* only dirty pages, or our accounting breaks */
-			if (unlikely(!PageDirty(page)) ||
-			    unlikely(page->mapping != mapping)) {
-				dout("!dirty or !mapping %p\n", page);
-				unlock_page(page);
+			if (unlikely(!folio_test_dirty(folio)) ||
+			    unlikely(folio->mapping != mapping)) {
+				dout("!dirty or !mapping %p\n", folio);
+				folio_unlock(folio);
 				continue;
 			}
 			/* only if matching snap context */
-			pgsnapc = page_snap_context(page);
+			pgsnapc = page_snap_context(&folio->page);
 			if (pgsnapc != snapc) {
 				dout("page snapc %p %lld != oldest %p %lld\n",
 				     pgsnapc, pgsnapc->seq, snapc, snapc->seq);
@@ -908,11 +908,10 @@ static int ceph_writepages_start(struct address_space *mapping,
 				    !ceph_wbc.head_snapc &&
 				    wbc->sync_mode != WB_SYNC_NONE)
 					should_loop = true;
-				unlock_page(page);
+				folio_unlock(folio);
 				continue;
 			}
-			if (page_offset(page) >= ceph_wbc.i_size) {
-				struct folio *folio = page_folio(page);
+			if (folio_pos(folio) >= ceph_wbc.i_size) {
 
 				dout("folio at %lu beyond eof %llu\n",
 				     folio->index, ceph_wbc.i_size);
@@ -924,25 +923,26 @@ static int ceph_writepages_start(struct address_space *mapping,
 				folio_unlock(folio);
 				continue;
 			}
-			if (strip_unit_end && (page->index > strip_unit_end)) {
-				dout("end of strip unit %p\n", page);
-				unlock_page(page);
+			if (strip_unit_end && (folio->index > strip_unit_end)) {
+				dout("end of strip unit %p\n", folio);
+				folio_unlock(folio);
 				break;
 			}
-			if (PageWriteback(page) || PageFsCache(page)) {
+			if (folio_test_writeback(folio) ||
+			    folio_test_fscache(folio)) {
 				if (wbc->sync_mode == WB_SYNC_NONE) {
-					dout("%p under writeback\n", page);
-					unlock_page(page);
+					dout("%p under writeback\n", folio);
+					folio_unlock(folio);
 					continue;
 				}
-				dout("waiting on writeback %p\n", page);
-				wait_on_page_writeback(page);
-				wait_on_page_fscache(page);
+				dout("waiting on writeback %p\n", folio);
+				folio_wait_writeback(folio);
+				folio_wait_fscache(folio);
 			}
 
-			if (!clear_page_dirty_for_io(page)) {
-				dout("%p !clear_page_dirty_for_io\n", page);
-				unlock_page(page);
+			if (!folio_clear_dirty_for_io(folio)) {
+				dout("%p !clear_page_dirty_for_io\n", folio);
+				folio_unlock(folio);
 				continue;
 			}
 
@@ -958,7 +958,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 				u32 xlen;
 
 				/* prepare async write request */
-				offset = (u64)page_offset(page);
+				offset = (u64)folio_pos(folio);
 				ceph_calc_file_object_mapping(&ci->i_layout,
 							      offset, wsize,
 							      &objnum, &objoff,
@@ -966,7 +966,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 				len = xlen;
 
 				num_ops = 1;
-				strip_unit_end = page->index +
+				strip_unit_end = folio->index +
 					((len - 1) >> PAGE_SHIFT);
 
 				BUG_ON(pages);
@@ -981,54 +981,53 @@ static int ceph_writepages_start(struct address_space *mapping,
 				}
 
 				len = 0;
-			} else if (page->index !=
+			} else if (folio->index !=
 				   (offset + len) >> PAGE_SHIFT) {
 				if (num_ops >= (from_pool ? CEPH_OSD_SLAB_OPS :
 							    CEPH_OSD_MAX_OPS)) {
-					redirty_page_for_writepage(wbc, page);
-					unlock_page(page);
+					folio_redirty_for_writepage(wbc, folio);
+					folio_unlock(folio);
 					break;
 				}
 
 				num_ops++;
-				offset = (u64)page_offset(page);
+				offset = (u64)folio_pos(folio);
 				len = 0;
 			}
 
-			/* note position of first page in pvec */
+			/* note position of first page in fbatch */
 			dout("%p will write page %p idx %lu\n",
-			     inode, page, page->index);
+			     inode, folio, folio->index);
 
 			if (atomic_long_inc_return(&fsc->writeback_count) >
 			    CONGESTION_ON_THRESH(
				    fsc->mount_options->congestion_kb))
 				fsc->write_congested = true;
 
-			pages[locked_pages++] = page;
-			pvec.pages[i] = NULL;
+			pages[locked_pages++] = &folio->page;
+			fbatch.folios[i] = NULL;
 
-			len += thp_size(page);
+			len += folio_size(folio);
 		}
 
 		/* did we get anything? */
 		if (!locked_pages)
-			goto release_pvec_pages;
+			goto release_folio_batches;
 		if (i) {
 			unsigned j, n = 0;
-			/* shift unused page to beginning of pvec */
-			for (j = 0; j < pvec_pages; j++) {
-				if (!pvec.pages[j])
+			/* shift unused folio to the beginning of fbatch */
+			for (j = 0; j < nr_folios; j++) {
+				if (!fbatch.folios[j])
 					continue;
 				if (n < j)
-					pvec.pages[n] = pvec.pages[j];
+					fbatch.folios[n] = fbatch.folios[j];
 				n++;
 			}
-			pvec.nr = n;
-
-			if (pvec_pages && i == pvec_pages &&
+			fbatch.nr = n;
+			if (nr_folios && i == nr_folios &&
 			    locked_pages < max_pages) {
-				dout("reached end pvec, trying for more\n");
-				pagevec_release(&pvec);
+				dout("reached end of fbatch, trying for more\n");
+				folio_batch_release(&fbatch);
 				goto get_more_pages;
 			}
 		}
@@ -1056,7 +1055,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 			BUG_ON(IS_ERR(req));
 		}
 		BUG_ON(len < page_offset(pages[locked_pages - 1]) +
-			     thp_size(page) - offset);
+			     folio_size(folio) - offset);
 
 		req->r_callback = writepages_finish;
 		req->r_inode = inode;
@@ -1098,7 +1097,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 			set_page_writeback(pages[i]);
 			if (caching)
 				ceph_set_page_fscache(pages[i]);
-			len += thp_size(page);
+			len += folio_size(folio);
 		}
 		ceph_fscache_write_to_cache(inode, offset, len, caching);
 
@@ -1108,7 +1107,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 			/* writepages_finish() clears writeback pages
 			 * according to the data length, so make sure
 			 * data length covers all locked pages */
-			u64 min_len = len + 1 - thp_size(page);
+			u64 min_len = len + 1 - folio_size(folio);
 			len = get_writepages_data_length(inode, pages[i - 1],
 							 offset);
 			len = max(len, min_len);
@@ -1164,10 +1163,10 @@ static int ceph_writepages_start(struct address_space *mapping,
 		if (wbc->nr_to_write <= 0 && wbc->sync_mode == WB_SYNC_NONE)
 			done = true;
 
-release_pvec_pages:
-		dout("pagevec_release on %d pages (%p)\n", (int)pvec.nr,
-		     pvec.nr ? pvec.pages[0] : NULL);
-		pagevec_release(&pvec);
+release_folio_batches:
+		dout("folio_batch_release on %d batches (%p)", (int) fbatch.nr,
+		     fbatch.nr ? fbatch.folios[0] : NULL);
+		folio_batch_release(&fbatch);
 	}
 
 	if (should_loop && !done) {
@@ -1180,19 +1179,22 @@ static int ceph_writepages_start(struct address_space *mapping,
 	if (wbc->sync_mode != WB_SYNC_NONE &&
 	    start_index == 0 && /* all dirty pages were checked */
 	    !ceph_wbc.head_snapc) {
-		struct page *page;
+		struct folio *folio;
 		unsigned i, nr;
 		index = 0;
 		while ((index <= end) &&
-		       (nr = pagevec_lookup_tag(&pvec, mapping, &index,
-				PAGECACHE_TAG_WRITEBACK))) {
+		       (nr = filemap_get_folios_tag(mapping, &index,
+				       (pgoff_t)-1,
+				       PAGECACHE_TAG_WRITEBACK,
+				       &fbatch))) {
 			for (i = 0; i < nr; i++) {
-				page = pvec.pages[i];
-				if (page_snap_context(page) != snapc)
+				folio = fbatch.folios[i];
+				if (page_snap_context(&folio->page) !=
+						snapc)
 					continue;
-				wait_on_page_writeback(page);
+				folio_wait_writeback(folio);
 			}
-			pagevec_release(&pvec);
+			folio_batch_release(&fbatch);
 			cond_resched();
 		}
 	}
-- 
2.36.1