From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Matthew Wilcox, Johannes Weiner, Omar Sandoval, Chris Mason,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Meta kernel team, linux-fsdevel@vger.kernel.org
Subject: [PATCH 2/2] mm: optimize invalidation of shadow entries
Date: Wed, 11 Sep 2024 10:38:01 -0700
Message-ID: <20240911173801.4025422-3-shakeel.butt@linux.dev>
In-Reply-To: <20240911173801.4025422-1-shakeel.butt@linux.dev>
References: <20240911173801.4025422-1-shakeel.butt@linux.dev>

The kernel invalidates the page cache in batches of PAGEVEC_SIZE. For
each batch, it traverses the page cache tree and collects the entries
(folio and shadow entries) in struct folio_batch. For the shadow
entries present in the folio_batch, it then has to traverse the page
cache tree once more per individual entry to remove it. This patch
optimizes that by removing all the shadow entries of a batch in a
single tree traversal.
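Conceptually, the change trades one full tree descent per shadow entry
for a single walk of the xarray range covered by the batch. A rough
before/after sketch (illustrative only; locking and the
workingset_update_node hookup are elided here, see the actual patch
below for those details):

	/* Before: a full tree descent per shadow entry. */
	for (i = 0; i < folio_batch_count(fbatch); i++)
		__clear_shadow_entry(mapping, indices[i], fbatch->folios[i]);

	/* After: one walk over [start, max] clears them all. */
	XA_STATE(xas, &mapping->i_pages, start);
	struct folio *folio;

	xas_for_each(&xas, folio, max)
		if (xa_is_value(folio))	/* shadow entries are value entries */
			xas_store(&xas, NULL);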
To evaluate the change, we created a 200GiB file on a FUSE filesystem
inside a memcg. We created the shadow entries by triggering reclaim
through memory.reclaim in that specific memcg, and then measured a
simple fadvise(DONTNEED) operation:

 # time xfs_io -c 'fadvise -d 0 ${file_size}' file

              time (sec)
 Without      5.12 +- 0.061
 With-patch   4.19 +- 0.086 (18.16% decrease)

Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/truncate.c | 46 ++++++++++++++++++----------------------------
 1 file changed, 18 insertions(+), 28 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index c7c19c816c2e..793c0d17d7b4 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -23,42 +23,28 @@
 #include <linux/rmap.h>
 #include "internal.h"
 
-/*
- * Regular page slots are stabilized by the page lock even without the tree
- * itself locked.  These unlocked entries need verification under the tree
- * lock.
- */
-static inline void __clear_shadow_entry(struct address_space *mapping,
-					pgoff_t index, void *entry)
-{
-	XA_STATE(xas, &mapping->i_pages, index);
-
-	xas_set_update(&xas, workingset_update_node);
-	if (xas_load(&xas) != entry)
-		return;
-	xas_store(&xas, NULL);
-}
-
 static void clear_shadow_entries(struct address_space *mapping,
-				 struct folio_batch *fbatch, pgoff_t *indices)
+				 unsigned long start, unsigned long max)
 {
-	int i;
+	XA_STATE(xas, &mapping->i_pages, start);
+	struct folio *folio;
 
 	/* Handled by shmem itself, or for DAX we do nothing. */
 	if (shmem_mapping(mapping) || dax_mapping(mapping))
 		return;
 
-	spin_lock(&mapping->host->i_lock);
-	xa_lock_irq(&mapping->i_pages);
+	xas_set_update(&xas, workingset_update_node);
 
-	for (i = 0; i < folio_batch_count(fbatch); i++) {
-		struct folio *folio = fbatch->folios[i];
+	spin_lock(&mapping->host->i_lock);
+	xas_lock_irq(&xas);
 
+	/* Clear all shadow entries from start to max */
+	xas_for_each(&xas, folio, max) {
 		if (xa_is_value(folio))
-			__clear_shadow_entry(mapping, indices[i], folio);
+			xas_store(&xas, NULL);
 	}
 
-	xa_unlock_irq(&mapping->i_pages);
+	xas_unlock_irq(&xas);
 	if (mapping_shrinkable(mapping))
 		inode_add_lru(mapping->host);
 	spin_unlock(&mapping->host->i_lock);
@@ -478,7 +464,9 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
 
 	folio_batch_init(&fbatch);
 	while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
-		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+		int nr = folio_batch_count(&fbatch);
+
+		for (i = 0; i < nr; i++) {
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing folio->index */
@@ -505,7 +493,7 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
 		}
 
 		if (xa_has_values)
-			clear_shadow_entries(mapping, &fbatch, indices);
+			clear_shadow_entries(mapping, indices[0], indices[nr-1]);
 
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
@@ -609,7 +597,9 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 	folio_batch_init(&fbatch);
 	index = start;
 	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
-		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+		int nr = folio_batch_count(&fbatch);
+
+		for (i = 0; i < nr; i++) {
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing folio->index */
@@ -655,7 +645,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		}
 
 		if (xa_has_values)
-			clear_shadow_entries(mapping, &fbatch, indices);
+			clear_shadow_entries(mapping, indices[0], indices[nr-1]);
 
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
-- 
2.43.5