From: Jens Axboe <axboe@kernel.dk>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: hannes@cmpxchg.org, clm@meta.com, linux-kernel@vger.kernel.org,
	willy@infradead.org, kirill@shutemov.name, linux-btrfs@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
	Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 12/16] ext4: add RWF_UNCACHED write support
Date: Mon, 11 Nov 2024 16:37:39 -0700
Message-ID: <20241111234842.2024180-13-axboe@kernel.dk>
In-Reply-To: <20241111234842.2024180-1-axboe@kernel.dk>
References: <20241111234842.2024180-1-axboe@kernel.dk>

IOCB_UNCACHED IO needs to prune writeback regions on IO completion, and
hence needs the worker punt that ext4 also does for unwritten extents.
Add an io_end flag to manage that.

If foliop is set to foliop_uncached in ext4_write_begin(), then set
FGP_UNCACHED so that __filemap_get_folio() will mark newly created
folios as uncached. That in turn will make writeback completion drop
these ranges from the page cache.

Now that ext4 supports both uncached reads and writes, add the fop_flag
FOP_UNCACHED to enable it.
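For context, a rough userspace sketch of what this enables, assuming
RWF_UNCACHED ends up exposed through pwritev2() like the other RWF_*
flags in this series. The fallback flag value and the "testfile" name
in the snippet are assumptions for illustration, not taken from the
uapi header:

#define _GNU_SOURCE		/* for pwritev2() */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>

/*
 * RWF_UNCACHED is introduced by this series; installed uapi headers may
 * not have it yet. The fallback value below is an assumption for this
 * sketch only - check the series' include/uapi/linux/fs.h change.
 */
#ifndef RWF_UNCACHED
#define RWF_UNCACHED	0x00000080
#endif

int main(void)
{
	struct iovec iov;
	char buf[4096];
	ssize_t ret;
	int fd;

	fd = open("testfile", O_CREAT | O_TRUNC | O_WRONLY, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(buf, 0xaa, sizeof(buf));
	iov.iov_base = buf;
	iov.iov_len = sizeof(buf);

	/*
	 * Buffered write that asks the kernel to drop the written range
	 * from the page cache once writeback completes, rather than
	 * leaving it resident like a normal buffered write would.
	 */
	ret = pwritev2(fd, &iov, 1, 0, RWF_UNCACHED);
	if (ret < 0)
		perror("pwritev2(RWF_UNCACHED)");

	close(fd);
	return ret == sizeof(buf) ? 0 : 1;
}

With this patch applied, such a write reaches ext4_write_begin() or
ext4_da_write_begin() with *foliop set to foliop_uncached, the folio is
grabbed with FGP_UNCACHED, and the range is dropped from the page cache
once the EXT4_IO_UNCACHED io_end completes writeback.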
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/ext4/ext4.h    |  1 +
 fs/ext4/file.c    |  2 +-
 fs/ext4/inline.c  |  7 ++++++-
 fs/ext4/inode.c   | 18 ++++++++++++++++--
 fs/ext4/page-io.c | 28 ++++++++++++++++------------
 5 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 44b0d418143c..60dc9ffae076 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -279,6 +279,7 @@ struct ext4_system_blocks {
  * Flags for ext4_io_end->flags
  */
 #define	EXT4_IO_END_UNWRITTEN	0x0001
+#define	EXT4_IO_UNCACHED	0x0002
 
 struct ext4_io_end_vec {
 	struct list_head list;		/* list of io_end_vec */
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index f14aed14b9cf..0ef39d738598 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -944,7 +944,7 @@ const struct file_operations ext4_file_operations = {
 	.splice_write	= iter_file_splice_write,
 	.fallocate	= ext4_fallocate,
 	.fop_flags	= FOP_MMAP_SYNC | FOP_BUFFER_RASYNC |
-			  FOP_DIO_PARALLEL_WRITE,
+			  FOP_DIO_PARALLEL_WRITE | FOP_UNCACHED,
 };
 
 const struct inode_operations ext4_file_inode_operations = {
diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 3536ca7e4fcc..4089d0744164 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -667,6 +667,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
 	handle_t *handle;
 	struct folio *folio;
 	struct ext4_iloc iloc;
+	fgf_t fgp_flags;
 
 	if (pos + len > ext4_get_max_inline_size(inode))
 		goto convert;
@@ -702,7 +703,11 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
 	if (ret)
 		goto out;
 
-	folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN | FGP_NOFS,
+	fgp_flags = FGP_WRITEBEGIN | FGP_NOFS;
+	if (*foliop == foliop_uncached)
+		fgp_flags |= FGP_UNCACHED;
+
+	folio = __filemap_get_folio(mapping, 0, fgp_flags,
 					mapping_gfp_mask(mapping));
 	if (IS_ERR(folio)) {
 		ret = PTR_ERR(folio);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 54bdd4884fe6..afae3ab64c9e 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1138,6 +1138,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
 	int ret, needed_blocks;
 	handle_t *handle;
 	int retries = 0;
+	fgf_t fgp_flags;
 	struct folio *folio;
 	pgoff_t index;
 	unsigned from, to;
@@ -1164,6 +1165,15 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
 		return 0;
 	}
 
+	/*
+	 * Set FGP_WRITEBEGIN, and FGP_UNCACHED if foliop contains
+	 * foliop_uncached. That's how generic_perform_write() informs us
+	 * that this is an uncached write.
+	 */
+	fgp_flags = FGP_WRITEBEGIN;
+	if (*foliop == foliop_uncached)
+		fgp_flags |= FGP_UNCACHED;
+
 	/*
 	 * __filemap_get_folio() can take a long time if the
 	 * system is thrashing due to memory pressure, or if the folio
@@ -1172,7 +1182,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
 	 * the folio (if needed) without using GFP_NOFS.
 	 */
 retry_grab:
-	folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+	folio = __filemap_get_folio(mapping, index, fgp_flags,
 					mapping_gfp_mask(mapping));
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);
@@ -2903,6 +2913,7 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
 	struct folio *folio;
 	pgoff_t index;
 	struct inode *inode = mapping->host;
+	fgf_t fgp_flags;
 
 	if (unlikely(ext4_forced_shutdown(inode->i_sb)))
 		return -EIO;
@@ -2926,8 +2937,11 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
 		return 0;
 	}
 
+	fgp_flags = FGP_WRITEBEGIN;
+	if (*foliop == foliop_uncached)
+		fgp_flags |= FGP_UNCACHED;
 retry:
-	folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+	folio = __filemap_get_folio(mapping, index, fgp_flags,
 					mapping_gfp_mask(mapping));
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index ad5543866d21..10447c3c4ff1 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -226,8 +226,6 @@ static void ext4_add_complete_io(ext4_io_end_t *io_end)
 	unsigned long flags;
 
 	/* Only reserved conversions from writeback should enter here */
-	WARN_ON(!(io_end->flag & EXT4_IO_END_UNWRITTEN));
-	WARN_ON(!io_end->handle && sbi->s_journal);
 	spin_lock_irqsave(&ei->i_completed_io_lock, flags);
 	wq = sbi->rsv_conversion_wq;
 	if (list_empty(&ei->i_rsv_conversion_list))
@@ -252,7 +250,7 @@ static int ext4_do_flush_completed_IO(struct inode *inode,
 
 	while (!list_empty(&unwritten)) {
 		io_end = list_entry(unwritten.next, ext4_io_end_t, list);
-		BUG_ON(!(io_end->flag & EXT4_IO_END_UNWRITTEN));
+		BUG_ON(!(io_end->flag & (EXT4_IO_END_UNWRITTEN|EXT4_IO_UNCACHED)));
 		list_del_init(&io_end->list);
 
 		err = ext4_end_io_end(io_end);
@@ -287,14 +285,15 @@ ext4_io_end_t *ext4_init_io_end(struct inode *inode, gfp_t flags)
 
 void ext4_put_io_end_defer(ext4_io_end_t *io_end)
 {
-	if (refcount_dec_and_test(&io_end->count)) {
-		if (!(io_end->flag & EXT4_IO_END_UNWRITTEN) ||
-		    list_empty(&io_end->list_vec)) {
-			ext4_release_io_end(io_end);
-			return;
-		}
-		ext4_add_complete_io(io_end);
+	if (!refcount_dec_and_test(&io_end->count))
+		return;
+	if ((!(io_end->flag & EXT4_IO_END_UNWRITTEN) ||
+	     list_empty(&io_end->list_vec)) &&
+	    !(io_end->flag & EXT4_IO_UNCACHED)) {
+		ext4_release_io_end(io_end);
+		return;
 	}
+	ext4_add_complete_io(io_end);
 }
 
 int ext4_put_io_end(ext4_io_end_t *io_end)
@@ -348,7 +347,7 @@ static void ext4_end_bio(struct bio *bio)
 				  blk_status_to_errno(bio->bi_status));
 	}
 
-	if (io_end->flag & EXT4_IO_END_UNWRITTEN) {
+	if (io_end->flag & (EXT4_IO_END_UNWRITTEN|EXT4_IO_UNCACHED)) {
 		/*
 		 * Link bio into list hanging from io_end. We have to do it
 		 * atomically as bio completions can be racing against each
@@ -417,8 +416,13 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
 submit_and_retry:
 		ext4_io_submit(io);
 	}
-	if (io->io_bio == NULL)
+	if (io->io_bio == NULL) {
 		io_submit_init_bio(io, bh);
+		if (folio_test_uncached(folio)) {
+			ext4_io_end_t *io_end = io->io_bio->bi_private;
+			io_end->flag |= EXT4_IO_UNCACHED;
+		}
+	}
 	if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, bh_offset(bh)))
 		goto submit_and_retry;
 	wbc_account_cgroup_owner(io->io_wbc, &folio->page, bh->b_size);
-- 
2.45.2