From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Masahiro Yamada
Subject: [PATCH v4 01/33] kheaders: Ignore silly-rename files
Date: Fri, 8 Nov 2024 17:32:02 +0000
Message-ID: <20241108173236.1382366-2-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Tell tar to ignore silly-rename files (".__afs*" and ".nfs*") when
building the header archive.  These occur when a file that is open is
unlinked locally, but hasn't yet been closed.  Such files are visible to
the user via the getdents() syscall, and so programs may want to do
things with them.

During the kernel build, such files may be made during the processing of
header files and the cleanup may get deferred by fput(), which may result
in tar seeing these files when it reads the directory, but they may have
disappeared by the time it tries to open them, causing tar to fail with
an error.  Further, we don't want to include them in the tarball if they
still exist.

With CONFIG_HEADERS_INSTALL=y, something like the following may be seen:

   find: './kernel/.tmp_cpio_dir/include/dt-bindings/reset/.__afs2080': No such file or directory
   tar: ./include/linux/greybus/.__afs3C95: File removed before we read it

The find warning doesn't seem to cause a problem.

Fix this by telling tar, when called from gen_kheaders.sh, to exclude
such files.  This only affects afs and nfs; cifs uses the Windows Hidden
attribute to prevent the file from being seen.

Signed-off-by: David Howells
cc: Masahiro Yamada
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-kernel@vger.kernel.org
---
 kernel/gen_kheaders.sh | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
index 383fd43ac612..7e1340da5aca 100755
--- a/kernel/gen_kheaders.sh
+++ b/kernel/gen_kheaders.sh
@@ -89,6 +89,7 @@ find $cpio_dir -type f -print0 |
 
 # Create archive and try to normalize metadata for reproducibility.
 tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
+    --exclude=".__afs*" --exclude=".nfs*" \
     --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
     -I $XZ -cf $tarfile -C $cpio_dir/ .
     > /dev/null
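
[An illustrative aside, not part of the patch: GNU tar's exclusion
patterns are unanchored by default, so a pattern without a slash matches
a name component at any depth in the tree, which is exactly what's
wanted for silly-rename files scattered through the header directories.
A hypothetical sanity check:

    mkdir -p demo/include/linux
    touch demo/include/linux/kernel.h \
          demo/include/linux/.__afs2080 \
          demo/include/linux/.nfs000001
    tar --exclude=".__afs*" --exclude=".nfs*" -cf demo.tar -C demo .
    tar -tf demo.tar    # lists ./include/linux/kernel.h only; the
                        # silly-rename files are skipped
]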
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Christian Brauner
Subject: [PATCH v4 02/33] netfs: Remove call to folio_index()
Date: Fri, 8 Nov 2024 17:32:03 +0000
Message-ID: <20241108173236.1382366-3-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

From: "Matthew Wilcox (Oracle)"

Calling folio_index() is pointless overhead; directly dereferencing
folio->index is fine.

Signed-off-by: Matthew Wilcox (Oracle)
Link: https://lore.kernel.org/r/20241005182307.3190401-2-willy@infradead.org
Signed-off-by: Christian Brauner
---
 include/trace/events/netfs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 69975c9c6823..bf511bca896e 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -450,7 +450,7 @@ TRACE_EVENT(netfs_folio,
 		    struct address_space *__m = READ_ONCE(folio->mapping);
 		    __entry->ino = __m ? __m->host->i_ino : 0;
 		    __entry->why = why;
-		    __entry->index = folio_index(folio);
+		    __entry->index = folio->index;
 		    __entry->nr = folio_nr_pages(folio);
 			   ),
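
[For context, an illustrative reconstruction rather than part of the
patch: folio_index() at the time of this series was roughly the wrapper
below (paraphrased from memory of include/linux/pagemap.h, not
verbatim), so for callers such as this tracepoint, which never see
swapcache folios, the swapcache test is pure overhead and the direct
field read is equivalent:

	static inline pgoff_t folio_index(struct folio *folio)
	{
		/* Swapcache folios store a swap index instead. */
		if (unlikely(folio_test_swapcache(folio)))
			return swap_cache_index(folio);
		return folio->index;
	}
]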
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Christian Brauner
Subject: [PATCH v4 03/33] netfs: Fix a few minor bugs in netfs_page_mkwrite()
Date: Fri, 8 Nov 2024 17:32:04 +0000
Message-ID: <20241108173236.1382366-4-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

From: "Matthew Wilcox (Oracle)"

We can't return with VM_FAULT_SIGBUS | VM_FAULT_LOCKED; the core code
will not unlock the folio in this instance.  Introduce a new "unlock"
error exit to handle this case.  Use it to handle the "folio is
truncated" check, and change the "writeback interrupted by a fatal
signal" case to do a NOPAGE exit instead of letting the core code
install the folio currently under writeback before killing the process.

Signed-off-by: Matthew Wilcox (Oracle)
Link: https://lore.kernel.org/r/20241005182307.3190401-3-willy@infradead.org
Signed-off-by: Christian Brauner
---
 fs/netfs/buffered_write.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index b3910dfcb56d..ff2814da88b1 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -491,7 +491,9 @@ EXPORT_SYMBOL(netfs_file_write_iter);
 
 /*
  * Notification that a previously read-only page is about to become writable.
- * Note that the caller indicates a single page of a multipage folio.
+ * The caller indicates the precise page that needs to be written to, but
+ * we only track group on a per-folio basis, so we block more often than
+ * we might otherwise.
  */
 vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group)
 {
@@ -501,7 +503,7 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 	struct address_space *mapping = file->f_mapping;
 	struct inode *inode = file_inode(file);
 	struct netfs_inode *ictx = netfs_inode(inode);
-	vm_fault_t ret = VM_FAULT_RETRY;
+	vm_fault_t ret = VM_FAULT_NOPAGE;
 	int err;
 
 	_enter("%lx", folio->index);
@@ -510,21 +512,15 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 
 	if (folio_lock_killable(folio) < 0)
 		goto out;
-	if (folio->mapping != mapping) {
-		folio_unlock(folio);
-		ret = VM_FAULT_NOPAGE;
-		goto out;
-	}
-
-	if (folio_wait_writeback_killable(folio)) {
-		ret = VM_FAULT_LOCKED;
-		goto out;
-	}
+	if (folio->mapping != mapping)
+		goto unlock;
+	if (folio_wait_writeback_killable(folio) < 0)
+		goto unlock;
 
 	/* Can we see a streaming write here? */
 	if (WARN_ON(!folio_test_uptodate(folio))) {
-		ret = VM_FAULT_SIGBUS | VM_FAULT_LOCKED;
-		goto out;
+		ret = VM_FAULT_SIGBUS;
+		goto unlock;
 	}
 
 	group = netfs_folio_group(folio);
@@ -559,5 +555,8 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 out:
 	sb_end_pagefault(inode->i_sb);
 	return ret;
+unlock:
+	folio_unlock(folio);
+	goto out;
 }
 EXPORT_SYMBOL(netfs_page_mkwrite);
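
[To make the fault-return contract explicit — a minimal illustrative
sketch with hypothetical names, not code from this series:
VM_FAULT_LOCKED may only accompany a successful return, where the core
fault code takes over the still-locked folio and unlocks it later;
every error path must unlock the folio itself before returning, which
is what the new "unlock" exit guarantees above.

	static vm_fault_t demo_page_mkwrite(struct vm_fault *vmf)
	{
		struct folio *folio = page_folio(vmf->page);

		if (folio_lock_killable(folio) < 0)
			return VM_FAULT_NOPAGE;	/* never locked it */

		if (folio->mapping != vmf->vma->vm_file->f_mapping) {
			folio_unlock(folio);	/* error: unlock ourselves */
			return VM_FAULT_NOPAGE;	/* truncated; refault */
		}

		folio_mark_dirty(folio);
		return VM_FAULT_LOCKED;		/* success: hand back locked */
	}
]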
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Christian Brauner
Subject: [PATCH v4 04/33] netfs: Remove unnecessary references to pages
Date: Fri, 8 Nov 2024 17:32:05 +0000
Message-ID: <20241108173236.1382366-5-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

From: "Matthew Wilcox (Oracle)"

These places should all use folios instead of pages.

Signed-off-by: Matthew Wilcox (Oracle)
Link: https://lore.kernel.org/r/20241005182307.3190401-4-willy@infradead.org
Signed-off-by: Christian Brauner
---
 fs/netfs/buffered_read.c  |  8 ++++----
 fs/netfs/buffered_write.c | 14 +++++++-------
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index af46a598f4d7..7ac34550c403 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -627,7 +627,7 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len,
 	if (unlikely(always_fill)) {
 		if (pos - offset + len <= i_size)
 			return false; /* Page entirely before EOF */
-		zero_user_segment(&folio->page, 0, plen);
+		folio_zero_segment(folio, 0, plen);
 		folio_mark_uptodate(folio);
 		return true;
 	}
@@ -646,7 +646,7 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len,
 
 	return false;
 zero_out:
-	zero_user_segments(&folio->page, 0, offset, offset + len, plen);
+	folio_zero_segments(folio, 0, offset, offset + len, plen);
 	return true;
 }
 
@@ -713,7 +713,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
 	if (folio_test_uptodate(folio))
 		goto have_folio;
 
-	/* If the page is beyond the EOF, we want to clear it - unless it's
+	/* If the folio is beyond the EOF, we want to clear it - unless it's
 	 * within the cache granule containing the EOF, in which case we need
 	 * to preload the granule.
 	 */
@@ -773,7 +773,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
 EXPORT_SYMBOL(netfs_write_begin);
 
 /*
- * Preload the data into a page we're proposing to write into.
+ * Preload the data into a folio we're proposing to write into.
 */
 int netfs_prefetch_for_write(struct file *file, struct folio *folio,
 			     size_t offset, size_t len)
diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index ff2814da88b1..b4826360a411 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -83,13 +83,13 @@ static void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
  * netfs_perform_write - Copy data into the pagecache.
  * @iocb: The operation parameters
  * @iter: The source buffer
- * @netfs_group: Grouping for dirty pages (eg. ceph snaps).
+ * @netfs_group: Grouping for dirty folios (eg. ceph snaps).
  *
- * Copy data into pagecache pages attached to the inode specified by @iocb.
+ * Copy data into pagecache folios attached to the inode specified by @iocb.
  * The caller must hold appropriate inode locks.
 *
- * Dirty pages are tagged with a netfs_folio struct if they're not up to date
- * to indicate the range modified.  Dirty pages may also be tagged with a
+ * Dirty folios are tagged with a netfs_folio struct if they're not up to date
+ * to indicate the range modified.  Dirty folios may also be tagged with a
 * netfs-specific grouping such that data from an old group gets flushed before
 * a new one is started.
 */
@@ -223,11 +223,11 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 	 * we try to read it.
 	 */
 	if (fpos >= ctx->zero_point) {
-		zero_user_segment(&folio->page, 0, offset);
+		folio_zero_segment(folio, 0, offset);
 		copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
 		if (unlikely(copied == 0))
 			goto copy_failed;
-		zero_user_segment(&folio->page, offset + copied, flen);
+		folio_zero_segment(folio, offset + copied, flen);
 		__netfs_set_group(folio, netfs_group);
 		folio_mark_uptodate(folio);
 		trace_netfs_folio(folio, netfs_modify_and_clear);
@@ -407,7 +407,7 @@ EXPORT_SYMBOL(netfs_perform_write);
 * netfs_buffered_write_iter_locked - write data to a file
 * @iocb: IO state structure (file, offset, etc.)
 * @from: iov_iter with data to write
- * @netfs_group: Grouping for dirty pages (eg. ceph snaps).
+ * @netfs_group: Grouping for dirty folios (eg. ceph snaps).
 *
 * This function does all the work needed for actually writing data to a
 * file. It does all basic checks, removes SUID from the file, updates
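
[The conversions above are mechanical; the pattern, shown here as an
illustrative fragment (helper names are from linux/highmem.h), is that
the page-based zeroing helpers forced callers to reach through
&folio->page, whereas the folio helpers take the folio directly and
cope with multi-page folios:

	/* Before: zero_user_segment(&folio->page, from, to);	*/
	/* After:  folio_zero_segment(folio, from, to);		*/
	static void demo_zero_tail(struct folio *folio, size_t from)
	{
		/* Zero from 'from' to the end of the whole folio. */
		folio_zero_segment(folio, from, folio_size(folio));
	}
]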
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 05/33] netfs: Use folio_queue allocation and free functions
Date: Fri, 8 Nov 2024 17:32:06 +0000
Message-ID: <20241108173236.1382366-6-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Provide and use folio_queue allocation and free functions to combine the
allocation, initialisation and stat (un)accounting steps that are
repeated in several places.

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/buffered_read.c | 12 +++---------
 fs/netfs/misc.c          | 38 ++++++++++++++++++++++++++++++++++----
 include/linux/netfs.h    |  5 +++++
 3 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 7ac34550c403..b5a7beb9d01b 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -131,11 +131,9 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq)
 	struct folio_queue *tail = rreq->buffer_tail, *new;
 	size_t added;
 
-	new = kmalloc(sizeof(*new), GFP_NOFS);
+	new = netfs_folioq_alloc(GFP_NOFS);
 	if (!new)
 		return -ENOMEM;
-	netfs_stat(&netfs_n_folioq);
-	folioq_init(new);
 	new->prev = tail;
 	tail->next = new;
 	rreq->buffer_tail = new;
@@ -359,11 +357,9 @@ static int netfs_prime_buffer(struct netfs_io_request *rreq)
 	struct folio_batch put_batch;
 	size_t added;
 
-	folioq = kmalloc(sizeof(*folioq), GFP_KERNEL);
+	folioq = netfs_folioq_alloc(GFP_KERNEL);
 	if (!folioq)
 		return -ENOMEM;
-	netfs_stat(&netfs_n_folioq);
-	folioq_init(folioq);
 	rreq->buffer = folioq;
 	rreq->buffer_tail = folioq;
 	rreq->submitted = rreq->start;
@@ -436,12 +432,10 @@ static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct fo
 {
 	struct folio_queue *folioq;
 
-	folioq = kmalloc(sizeof(*folioq), GFP_KERNEL);
+	folioq = netfs_folioq_alloc(GFP_KERNEL);
 	if (!folioq)
 		return -ENOMEM;
 
-	netfs_stat(&netfs_n_folioq);
-	folioq_init(folioq);
 	folioq_append(folioq, folio);
 	BUG_ON(folioq_folio(folioq, 0) != folio);
 	BUG_ON(folioq_folio_order(folioq, 0) != folio_order(folio));
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 78fe5796b2b2..6cd7e1ee7a14 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,6 +8,38 @@
 #include
 #include "internal.h"
 
+/**
+ * netfs_folioq_alloc - Allocate a folio_queue struct
+ * @gfp: Allocation constraints
+ *
+ * Allocate, initialise and account the folio_queue struct.
+ */
+struct folio_queue *netfs_folioq_alloc(gfp_t gfp)
+{
+	struct folio_queue *fq;
+
+	fq = kmalloc(sizeof(*fq), gfp);
+	if (fq) {
+		netfs_stat(&netfs_n_folioq);
+		folioq_init(fq);
+	}
+	return fq;
+}
+EXPORT_SYMBOL(netfs_folioq_alloc);
+
+/**
+ * netfs_folioq_free - Free a folio_queue struct
+ * @folioq: The object to free
+ *
+ * Free and unaccount the folio_queue struct.
+ */
+void netfs_folioq_free(struct folio_queue *folioq)
+{
+	netfs_stat_d(&netfs_n_folioq);
+	kfree(folioq);
+}
+EXPORT_SYMBOL(netfs_folioq_free);
+
 /*
  * Make sure there's space in the rolling queue.
  */
@@ -87,8 +119,7 @@ struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq)
 
 	if (next)
 		next->prev = NULL;
-	netfs_stat_d(&netfs_n_folioq);
-	kfree(head);
+	netfs_folioq_free(head);
 	wreq->buffer = next;
 	return next;
 }
@@ -111,8 +142,7 @@ void netfs_clear_buffer(struct netfs_io_request *rreq)
 			folio_put(folio);
 		}
 	}
-	netfs_stat_d(&netfs_n_folioq);
-	kfree(p);
+	netfs_folioq_free(p);
 }
 
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 5eaceef41e6c..b2fa569e875d 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -21,6 +21,7 @@
 
 enum netfs_sreq_ref_trace;
 typedef struct mempool_s mempool_t;
+struct folio_queue;
 
 /**
  * folio_start_private_2 - Start an fscache write on a folio.  [DEPRECATED]
@@ -454,6 +455,10 @@ void netfs_end_io_write(struct inode *inode);
 int netfs_start_io_direct(struct inode *inode);
 void netfs_end_io_direct(struct inode *inode);
 
+/* Miscellaneous APIs. */
+struct folio_queue *netfs_folioq_alloc(gfp_t gfp);
+void netfs_folioq_free(struct folio_queue *folioq);
+
 /**
  * netfs_inode - Get the netfs inode context from the inode
  * @inode: The inode to query
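
[A minimal usage sketch of the two new helpers — illustrative only,
with a hypothetical caller; signatures are as introduced in this patch,
before the next patch extends them with tracing arguments:

	static int demo_extend_buffer(struct folio_queue **tailp)
	{
		/* kmalloc + netfs_stat() + folioq_init() in one call */
		struct folio_queue *fq = netfs_folioq_alloc(GFP_KERNEL);

		if (!fq)
			return -ENOMEM;
		fq->prev = *tailp;
		(*tailp)->next = fq;
		*tailp = fq;
		return 0;
	}

	/* ...and netfs_folioq_free(fq) later unaccounts and kfrees it. */
]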
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 06/33] netfs: Add a tracepoint to log the lifespan of folio_queue structs
Date: Fri, 8 Nov 2024 17:32:07 +0000
Message-ID: <20241108173236.1382366-7-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Add a tracepoint to log the lifespan of folio_queue structs.  To make
the traces easier to follow, folio_queues are tagged with the debug ID
of whatever they're related to (typically a netfs_io_request) and with
a debug ID of their own.

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/buffered_read.c     | 10 ++++++---
 fs/netfs/internal.h          |  3 ++-
 fs/netfs/misc.c              | 31 +++++++++++++++++----------
 fs/netfs/read_collect.c      |  8 +++++--
 fs/netfs/write_issue.c       |  2 +-
 fs/smb/client/smb2ops.c      |  2 +-
 include/linux/folio_queue.h  | 12 ++++++++---
 include/linux/netfs.h        |  6 ++++--
 include/trace/events/netfs.h | 41 ++++++++++++++++++++++++++++++++++--
 lib/kunit_iov_iter.c         |  4 ++--
 10 files changed, 91 insertions(+), 28 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index b5a7beb9d01b..df94538fde96 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -131,7 +131,8 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq)
 	struct folio_queue *tail = rreq->buffer_tail, *new;
 	size_t added;
 
-	new = netfs_folioq_alloc(GFP_NOFS);
+	new = netfs_folioq_alloc(rreq->debug_id, GFP_NOFS,
+				 netfs_trace_folioq_alloc_read_prep);
 	if (!new)
 		return -ENOMEM;
 	new->prev = tail;
@@ -357,9 +358,11 @@ static int netfs_prime_buffer(struct netfs_io_request *rreq)
 	struct folio_batch put_batch;
 	size_t added;
 
-	folioq = netfs_folioq_alloc(GFP_KERNEL);
+	folioq = netfs_folioq_alloc(rreq->debug_id, GFP_KERNEL,
+				    netfs_trace_folioq_alloc_read_prime);
 	if (!folioq)
 		return -ENOMEM;
+
 	rreq->buffer = folioq;
 	rreq->buffer_tail = folioq;
 	rreq->submitted = rreq->start;
@@ -432,7 +435,8 @@ static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct fo
 {
 	struct folio_queue *folioq;
 
-	folioq = netfs_folioq_alloc(GFP_KERNEL);
+	folioq = netfs_folioq_alloc(rreq->debug_id, GFP_KERNEL,
+				    netfs_trace_folioq_alloc_read_sing);
 	if (!folioq)
 		return -ENOMEM;
 
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index c562aec3b483..01b013f558f7 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -58,7 +58,8 @@ static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}
 /*
  * misc.c
  */
-struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq);
+struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq,
+					    enum netfs_folioq_trace trace);
 int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio,
 			      bool needs_put);
 struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq);
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 6cd7e1ee7a14..afe032551de5 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -10,18 +10,25 @@
 
 /**
  * netfs_folioq_alloc - Allocate a folio_queue struct
+ * @rreq_id: Associated debugging ID for tracing purposes
  * @gfp: Allocation constraints
+ * @trace: Trace tag to indicate the purpose of the allocation
  *
- * Allocate, initialise and account the folio_queue struct.
+ * Allocate, initialise and account the folio_queue struct and log a traceline
+ * to mark the allocation.
 */
-struct folio_queue *netfs_folioq_alloc(gfp_t gfp)
+struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
+				       unsigned int /*enum netfs_folioq_trace*/ trace)
 {
+	static atomic_t debug_ids;
 	struct folio_queue *fq;
 
 	fq = kmalloc(sizeof(*fq), gfp);
 	if (fq) {
 		netfs_stat(&netfs_n_folioq);
-		folioq_init(fq);
+		folioq_init(fq, rreq_id);
+		fq->debug_id = atomic_inc_return(&debug_ids);
+		trace_netfs_folioq(fq, trace);
 	}
 	return fq;
 }
@@ -30,11 +37,14 @@ EXPORT_SYMBOL(netfs_folioq_alloc);
 /**
  * netfs_folioq_free - Free a folio_queue struct
  * @folioq: The object to free
+ * @trace: Trace tag to indicate which free
  *
 * Free and unaccount the folio_queue struct.
 */
-void netfs_folioq_free(struct folio_queue *folioq)
+void netfs_folioq_free(struct folio_queue *folioq,
+		       unsigned int /*enum netfs_trace_folioq*/ trace)
 {
+	trace_netfs_folioq(folioq, trace);
 	netfs_stat_d(&netfs_n_folioq);
 	kfree(folioq);
 }
@@ -43,7 +53,8 @@ EXPORT_SYMBOL(netfs_folioq_free);
 /*
 * Make sure there's space in the rolling queue.
 */
-struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq)
+struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq,
+					    enum netfs_folioq_trace trace)
 {
 	struct folio_queue *tail = rreq->buffer_tail, *prev;
 	unsigned int prev_nr_slots = 0;
@@ -59,11 +70,9 @@ struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq)
 		prev_nr_slots = folioq_nr_slots(tail);
 	}
 
-	tail = kmalloc(sizeof(*tail), GFP_NOFS);
+	tail = netfs_folioq_alloc(rreq->debug_id, GFP_NOFS, trace);
 	if (!tail)
 		return ERR_PTR(-ENOMEM);
-	netfs_stat(&netfs_n_folioq);
-	folioq_init(tail);
 	tail->prev = prev;
 	if (prev)
 		/* [!] NOTE: After we set prev->next, the consumer is entirely
@@ -98,7 +107,7 @@ int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio
 	struct folio_queue *tail;
 	unsigned int slot, order = folio_order(folio);
 
-	tail = netfs_buffer_make_space(rreq);
+	tail = netfs_buffer_make_space(rreq, netfs_trace_folioq_alloc_append_folio);
 	if (IS_ERR(tail))
 		return PTR_ERR(tail);
 
@@ -119,7 +128,7 @@ struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq)
 
 	if (next)
 		next->prev = NULL;
-	netfs_folioq_free(head);
+	netfs_folioq_free(head, netfs_trace_folioq_delete);
 	wreq->buffer = next;
 	return next;
 }
@@ -142,7 +151,7 @@ void netfs_clear_buffer(struct netfs_io_request *rreq)
 			folio_put(folio);
 		}
 	}
-	netfs_folioq_free(p);
+	netfs_folioq_free(p, netfs_trace_folioq_clear);
 }
 
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 3cbb289535a8..214f06bba2c7 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -101,6 +101,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
 			 subreq->transferred, subreq->len))
 		subreq->transferred = subreq->len;
 
+	trace_netfs_folioq(folioq, netfs_trace_folioq_read_progress);
 next_folio:
 	fsize = PAGE_SIZE << subreq->curr_folio_order;
 	fpos = round_down(subreq->start + subreq->consumed, fsize);
@@ -117,9 +118,11 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
 	if (folioq) {
 		struct folio *folio = folioq_folio(folioq, slot);
 
-		pr_err("folioq: orders=%02x%02x%02x%02x\n",
+		pr_err("folioq: fq=%x orders=%02x%02x%02x%02x %px\n",
+		       folioq->debug_id,
 		       folioq->orders[0], folioq->orders[1],
-		       folioq->orders[2], folioq->orders[3]);
+		       folioq->orders[2], folioq->orders[3],
+		       folioq);
 		if (folio)
 			pr_err("folio: %llx-%llx ix=%llx o=%u qo=%u\n",
 			       fpos, fend - 1, folio_pos(folio), folio_order(folio),
@@ -220,6 +223,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
 			slot = 0;
 			folioq = folioq->next;
 			subreq->curr_folioq = folioq;
+			trace_netfs_folioq(folioq, netfs_trace_folioq_read_progress);
 		}
 		subreq->curr_folioq_slot = slot;
 		if (folioq && folioq_folio(folioq, slot))
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index bf6d507578e5..9b6c0dda9751 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -161,7 +161,7 @@ static void netfs_prepare_write(struct netfs_io_request *wreq,
 	 */
 	if (iov_iter_is_folioq(wreq_iter) &&
 	    wreq_iter->folioq_slot >= folioq_nr_slots(wreq_iter->folioq)) {
-		netfs_buffer_make_space(wreq);
+		netfs_buffer_make_space(wreq, netfs_trace_folioq_prep_write);
 	}
 
 	subreq = netfs_alloc_subrequest(wreq);
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index 24a2aa04a108..3fc0a9723374 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -4415,7 +4415,7 @@ static struct folio_queue *cifs_alloc_folioq_buffer(ssize_t size)
 		p = kmalloc(sizeof(*p), GFP_NOFS);
 		if (!p)
 			goto nomem;
-		folioq_init(p);
+		folioq_init(p, 0);
 		if (tail) {
 			tail->next = p;
 			p->prev = tail;
diff --git a/include/linux/folio_queue.h b/include/linux/folio_queue.h
index 3abe614ef5f0..4d3f8074c137 100644
--- a/include/linux/folio_queue.h
+++ b/include/linux/folio_queue.h
@@ -37,16 +37,20 @@ struct folio_queue {
 #if PAGEVEC_SIZE > BITS_PER_LONG
 #error marks is not big enough
 #endif
+	unsigned int rreq_id;
+	unsigned int debug_id;
 };
 
 /**
  * folioq_init - Initialise a folio queue segment
  * @folioq: The segment to initialise
+ * @rreq_id: The request identifier to use in tracelines.
  *
- * Initialise a folio queue segment.  Note that the folio pointers are
- * left uninitialised.
+ * Initialise a folio queue segment and set an identifier to be used in traces.
+ *
+ * Note that the folio pointers are left uninitialised.
 */
-static inline void folioq_init(struct folio_queue *folioq)
+static inline void folioq_init(struct folio_queue *folioq, unsigned int rreq_id)
 {
 	folio_batch_init(&folioq->vec);
 	folioq->next = NULL;
@@ -54,6 +58,8 @@ static inline void folioq_init(struct folio_queue *folioq)
 	folioq->marks = 0;
 	folioq->marks2 = 0;
 	folioq->marks3 = 0;
+	folioq->rreq_id = rreq_id;
+	folioq->debug_id = 0;
 }
 
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index b2fa569e875d..a30863e205de 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -456,8 +456,10 @@ int netfs_start_io_direct(struct inode *inode);
 void netfs_end_io_direct(struct inode *inode);
 
 /* Miscellaneous APIs. */
-struct folio_queue *netfs_folioq_alloc(gfp_t gfp);
-void netfs_folioq_free(struct folio_queue *folioq);
+struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
+				       unsigned int /*enum netfs_folioq_trace*/ trace);
+void netfs_folioq_free(struct folio_queue *folioq,
+		       unsigned int /*enum netfs_trace_folioq*/ trace);
 
 /**
  * netfs_inode - Get the netfs inode context from the inode
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index bf511bca896e..c48dcbf74081 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -191,6 +191,16 @@
 	EM(netfs_trace_donate_to_next,		"to-next")	\
 	E_(netfs_trace_donate_to_deferred_next,	"defer-next")
 
+#define netfs_folioq_traces \
+	EM(netfs_trace_folioq_alloc_append_folio, "alloc-apf")	\
+	EM(netfs_trace_folioq_alloc_read_prep,	"alloc-r-prep")	\
+	EM(netfs_trace_folioq_alloc_read_prime,	"alloc-r-prime") \
+	EM(netfs_trace_folioq_alloc_read_sing,	"alloc-r-sing")	\
+	EM(netfs_trace_folioq_clear,		"clear")	\
+	EM(netfs_trace_folioq_delete,		"delete")	\
+	EM(netfs_trace_folioq_prep_write,	"prep-wr")	\
+	E_(netfs_trace_folioq_read_progress,	"r-progress")
+
 #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
 #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
 
@@ -209,6 +219,7 @@ enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte);
 enum netfs_folio_trace { netfs_folio_traces } __mode(byte);
 enum netfs_collect_contig_trace { netfs_collect_contig_traces } __mode(byte);
 enum netfs_donate_trace { netfs_donate_traces } __mode(byte);
+enum netfs_folioq_trace { netfs_folioq_traces } __mode(byte);
 
 #endif
 
@@ -232,6 +243,7 @@
 netfs_sreq_ref_traces;
 netfs_folio_traces;
 netfs_collect_contig_traces;
 netfs_donate_traces;
+netfs_folioq_traces;
 
 /*
  * Now redefine the EM() and E_() macros to map the enums to the strings that
@@ -317,6 +329,7 @@ TRACE_EVENT(netfs_sreq,
 		    __field(unsigned short,		flags		)
 		    __field(enum netfs_io_source,	source		)
 		    __field(enum netfs_sreq_trace,	what		)
+		    __field(u8,				slot		)
 		    __field(size_t,			len		)
 		    __field(size_t,			transferred	)
 		    __field(loff_t,			start		)
@@ -332,15 +345,16 @@ TRACE_EVENT(netfs_sreq,
 		    __entry->len	= sreq->len;
 		    __entry->transferred = sreq->transferred;
 		    __entry->start	= sreq->start;
+		    __entry->slot	= sreq->curr_folioq_slot;
 			   ),
 
-	    TP_printk("R=%08x[%x] %s %s f=%02x s=%llx %zx/%zx e=%d",
+	    TP_printk("R=%08x[%x] %s %s f=%02x s=%llx %zx/%zx s=%u e=%d",
 		      __entry->rreq, __entry->index,
 		      __print_symbolic(__entry->source, netfs_sreq_sources),
 		      __print_symbolic(__entry->what, netfs_sreq_traces),
 		      __entry->flags,
 		      __entry->start, __entry->transferred, __entry->len,
-		      __entry->error)
+		      __entry->slot, __entry->error)
	    );
 
 TRACE_EVENT(netfs_failure,
@@ -745,6 +759,29 @@ TRACE_EVENT(netfs_donate,
		      __entry->amount)
	    );
 
+TRACE_EVENT(netfs_folioq,
+	    TP_PROTO(const struct folio_queue *fq,
+		     enum netfs_folioq_trace trace),
+
+	    TP_ARGS(fq, trace),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,		rreq)
+		    __field(unsigned int,		id)
+		    __field(enum netfs_folioq_trace,	trace)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->rreq	= fq ? fq->rreq_id : 0;
+		    __entry->id		= fq ? fq->debug_id : 0;
+		    __entry->trace	= trace;
+			   ),
+
+	    TP_printk("R=%08x fq=%x %s",
+		      __entry->rreq, __entry->id,
+		      __print_symbolic(__entry->trace, netfs_folioq_traces))
+	    );
+
 #undef EM
 #undef E_
 #endif /* _TRACE_NETFS_H */
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 13e15687675a..10a560feb66e 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -392,7 +392,7 @@ static void __init iov_kunit_load_folioq(struct kunit *test,
 		if (folioq_full(p)) {
 			p->next = kzalloc(sizeof(struct folio_queue), GFP_KERNEL);
 			KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p->next);
-			folioq_init(p->next);
+			folioq_init(p->next, 0);
 			p->next->prev = p;
 			p = p->next;
 		}
@@ -409,7 +409,7 @@ static struct folio_queue *iov_kunit_create_folioq(struct kunit *test)
 	folioq = kzalloc(sizeof(struct folio_queue), GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folioq);
 	kunit_add_action_or_reset(test, iov_kunit_destroy_folioq, folioq);
-	folioq_init(folioq);
+	folioq_init(folioq, 0);
 	return folioq;
 }
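
[For illustration only — made-up IDs, not a captured trace: with the
new event enabled via the usual tracefs knob, each line follows the
"R=%08x fq=%x %s" format above, so the lifespan of a segment fq=2
belonging to request R=0000002c might appear as:

    echo 1 > /sys/kernel/tracing/events/netfs/netfs_folioq/enable

    R=0000002c fq=2 alloc-r-prep
    R=0000002c fq=2 r-progress
    R=0000002c fq=2 delete
]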
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 07/33] netfs: Abstract out a rolling folio buffer implementation
Date: Fri, 8 Nov 2024 17:32:08 +0000
Message-ID: <20241108173236.1382366-8-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

A rolling buffer is a series of folios held in a list of folio_queues.
New folios and folio_queue structs may be inserted at the head
simultaneously with spent ones being removed from the tail, without the
need for locking.

The rolling buffer includes an iov_iter and has to manage this carefully
as the list of folio_queues is extended, so that an oops isn't incurred
because the iterator was pointing to the end of a folio_queue segment
that got appended to and then removed.

We need to use the mechanism twice, once for read and once for write,
and, in future patches, we will use a second rolling buffer to handle
bounce buffering for content encryption.
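
[An illustrative aside with hypothetical demo_ names, not the API added
by this patch: the lock-free property described above follows the usual
single-producer/single-consumer list discipline, where a new segment is
fully initialised before being published by the store to head->next.

	struct demo_rolling_buffer {
		struct folio_queue *head;	/* producer appends here */
		struct folio_queue *tail;	/* consumer frees spent segments */
		struct iov_iter iter;		/* spans the unconsumed folios */
	};

	static int demo_append_segment(struct demo_rolling_buffer *rb)
	{
		struct folio_queue *fq = kmalloc(sizeof(*fq), GFP_NOFS);

		if (!fq)
			return -ENOMEM;
		folioq_init(fq, 0);
		fq->prev = rb->head;
		/* Publish only once fq is fully initialised; from here on
		 * the consumer may walk onto it and free the segments
		 * behind it, so they must not be touched again.
		 */
		smp_store_release(&rb->head->next, fq);
		rb->head = fq;
		return 0;
	}
]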
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/Makefile | 1 + fs/netfs/buffered_read.c | 119 ++++------------- fs/netfs/direct_read.c | 14 +- fs/netfs/direct_write.c | 10 +- fs/netfs/internal.h | 4 - fs/netfs/misc.c | 147 --------------------- fs/netfs/objects.c | 2 +- fs/netfs/read_pgpriv2.c | 32 ++--- fs/netfs/read_retry.c | 2 +- fs/netfs/rolling_buffer.c | 225 +++++++++++++++++++++++++++++++++ fs/netfs/write_collect.c | 19 +-- fs/netfs/write_issue.c | 26 ++-- include/linux/netfs.h | 10 +- include/linux/rolling_buffer.h | 61 +++++++++ include/trace/events/netfs.h | 2 + 15 files changed, 375 insertions(+), 299 deletions(-) create mode 100644 fs/netfs/rolling_buffer.c create mode 100644 include/linux/rolling_buffer.h diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile index d08b0bfb6756..7492c4aa331e 100644 --- a/fs/netfs/Makefile +++ b/fs/netfs/Makefile @@ -13,6 +13,7 @@ netfs-y :=3D \ read_collect.o \ read_pgpriv2.o \ read_retry.o \ + rolling_buffer.o \ write_collect.o \ write_issue.o =20 diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index df94538fde96..4cacb46e0cf7 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -63,37 +63,6 @@ static int netfs_begin_cache_read(struct netfs_io_reques= t *rreq, struct netfs_in return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cooki= e(ctx)); } =20 -/* - * Decant the list of folios to read into a rolling buffer. - */ -static size_t netfs_load_buffer_from_ra(struct netfs_io_request *rreq, - struct folio_queue *folioq, - struct folio_batch *put_batch) -{ - unsigned int order, nr; - size_t size =3D 0; - - nr =3D __readahead_batch(rreq->ractl, (struct page **)folioq->vec.folios, - ARRAY_SIZE(folioq->vec.folios)); - folioq->vec.nr =3D nr; - for (int i =3D 0; i < nr; i++) { - struct folio *folio =3D folioq_folio(folioq, i); - - trace_netfs_folio(folio, netfs_folio_trace_read); - order =3D folio_order(folio); - folioq->orders[i] =3D order; - size +=3D PAGE_SIZE << order; - - if (!folio_batch_add(put_batch, folio)) - folio_batch_release(put_batch); - } - - for (int i =3D nr; i < folioq_nr_slots(folioq); i++) - folioq_clear(folioq, i); - - return size; -} - /* * netfs_prepare_read_iterator - Prepare the subreq iterator for I/O * @subreq: The subrequest to be set up @@ -128,18 +97,12 @@ static ssize_t netfs_prepare_read_iterator(struct netf= s_io_subrequest *subreq) =20 folio_batch_init(&put_batch); while (rreq->submitted < subreq->start + rsize) { - struct folio_queue *tail =3D rreq->buffer_tail, *new; - size_t added; - - new =3D netfs_folioq_alloc(rreq->debug_id, GFP_NOFS, - netfs_trace_folioq_alloc_read_prep); - if (!new) - return -ENOMEM; - new->prev =3D tail; - tail->next =3D new; - rreq->buffer_tail =3D new; - added =3D netfs_load_buffer_from_ra(rreq, new, &put_batch); - rreq->iter.count +=3D added; + ssize_t added; + + added =3D rolling_buffer_load_from_ra(&rreq->buffer, rreq->ractl, + &put_batch); + if (added < 0) + return added; rreq->submitted +=3D added; } folio_batch_release(&put_batch); @@ -147,7 +110,7 @@ static ssize_t netfs_prepare_read_iterator(struct netfs= _io_subrequest *subreq) =20 subreq->len =3D rsize; if (unlikely(rreq->io_streams[0].sreq_max_segs)) { - size_t limit =3D netfs_limit_iter(&rreq->iter, 0, rsize, + size_t limit =3D netfs_limit_iter(&rreq->buffer.iter, 0, rsize, rreq->io_streams[0].sreq_max_segs); =20 if (limit < rsize) { @@ -156,20 +119,16 @@ static ssize_t netfs_prepare_read_iterator(struct net= 
fs_io_subrequest *subreq) } } =20 - subreq->io_iter =3D rreq->iter; + subreq->io_iter =3D rreq->buffer.iter; =20 if (iov_iter_is_folioq(&subreq->io_iter)) { - if (subreq->io_iter.folioq_slot >=3D folioq_nr_slots(subreq->io_iter.fol= ioq)) { - subreq->io_iter.folioq =3D subreq->io_iter.folioq->next; - subreq->io_iter.folioq_slot =3D 0; - } subreq->curr_folioq =3D (struct folio_queue *)subreq->io_iter.folioq; subreq->curr_folioq_slot =3D subreq->io_iter.folioq_slot; subreq->curr_folio_order =3D subreq->curr_folioq->orders[subreq->curr_fo= lioq_slot]; } =20 iov_iter_truncate(&subreq->io_iter, subreq->len); - iov_iter_advance(&rreq->iter, subreq->len); + rolling_buffer_advance(&rreq->buffer, subreq->len); return subreq->len; } =20 @@ -348,34 +307,6 @@ static int netfs_wait_for_read(struct netfs_io_request= *rreq) return ret; } =20 -/* - * Set up the initial folioq of buffer folios in the rolling buffer and se= t the - * iterator to refer to it. - */ -static int netfs_prime_buffer(struct netfs_io_request *rreq) -{ - struct folio_queue *folioq; - struct folio_batch put_batch; - size_t added; - - folioq =3D netfs_folioq_alloc(rreq->debug_id, GFP_KERNEL, - netfs_trace_folioq_alloc_read_prime); - if (!folioq) - return -ENOMEM; - - rreq->buffer =3D folioq; - rreq->buffer_tail =3D folioq; - rreq->submitted =3D rreq->start; - iov_iter_folio_queue(&rreq->iter, ITER_DEST, folioq, 0, 0, 0); - - folio_batch_init(&put_batch); - added =3D netfs_load_buffer_from_ra(rreq, folioq, &put_batch); - folio_batch_release(&put_batch); - rreq->iter.count +=3D added; - rreq->submitted +=3D added; - return 0; -} - /** * netfs_readahead - Helper to manage a read request * @ractl: The description of the readahead request @@ -415,7 +346,8 @@ void netfs_readahead(struct readahead_control *ractl) netfs_rreq_expand(rreq, ractl); =20 rreq->ractl =3D ractl; - if (netfs_prime_buffer(rreq) < 0) + rreq->submitted =3D rreq->start; + if (rolling_buffer_init(&rreq->buffer, rreq->debug_id, ITER_DEST) < 0) goto cleanup_free; netfs_read_to_pagecache(rreq); =20 @@ -431,22 +363,18 @@ EXPORT_SYMBOL(netfs_readahead); /* * Create a rolling buffer with a single occupying folio. 
*/ -static int netfs_create_singular_buffer(struct netfs_io_request *rreq, str= uct folio *folio) +static int netfs_create_singular_buffer(struct netfs_io_request *rreq, str= uct folio *folio, + unsigned int rollbuf_flags) { - struct folio_queue *folioq; + ssize_t added; =20 - folioq =3D netfs_folioq_alloc(rreq->debug_id, GFP_KERNEL, - netfs_trace_folioq_alloc_read_sing); - if (!folioq) + if (rolling_buffer_init(&rreq->buffer, rreq->debug_id, ITER_DEST) < 0) return -ENOMEM; =20 - folioq_append(folioq, folio); - BUG_ON(folioq_folio(folioq, 0) !=3D folio); - BUG_ON(folioq_folio_order(folioq, 0) !=3D folio_order(folio)); - rreq->buffer =3D folioq; - rreq->buffer_tail =3D folioq; - rreq->submitted =3D rreq->start + rreq->len; - iov_iter_folio_queue(&rreq->iter, ITER_DEST, folioq, 0, 0, rreq->len); + added =3D rolling_buffer_append(&rreq->buffer, folio, rollbuf_flags); + if (added < 0) + return added; + rreq->submitted =3D rreq->start + added; rreq->ractl =3D (struct readahead_control *)1UL; return 0; } @@ -514,7 +442,7 @@ static int netfs_read_gaps(struct file *file, struct fo= lio *folio) } if (to < flen) bvec_set_folio(&bvec[i++], folio, flen - to, to); - iov_iter_bvec(&rreq->iter, ITER_DEST, bvec, i, rreq->len); + iov_iter_bvec(&rreq->buffer.iter, ITER_DEST, bvec, i, rreq->len); rreq->submitted =3D rreq->start + flen; =20 netfs_read_to_pagecache(rreq); @@ -582,7 +510,7 @@ int netfs_read_folio(struct file *file, struct folio *f= olio) trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage); =20 /* Set up the output buffer */ - ret =3D netfs_create_singular_buffer(rreq, folio); + ret =3D netfs_create_singular_buffer(rreq, folio, 0); if (ret < 0) goto discard; =20 @@ -739,7 +667,7 @@ int netfs_write_begin(struct netfs_inode *ctx, trace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin); =20 /* Set up the output buffer */ - ret =3D netfs_create_singular_buffer(rreq, folio); + ret =3D netfs_create_singular_buffer(rreq, folio, 0); if (ret < 0) goto error_put; =20 @@ -804,11 +732,10 @@ int netfs_prefetch_for_write(struct file *file, struc= t folio *folio, trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write); =20 /* Set up the output buffer */ - ret =3D netfs_create_singular_buffer(rreq, folio); + ret =3D netfs_create_singular_buffer(rreq, folio, NETFS_ROLLBUF_PAGECACHE= _MARK); if (ret < 0) goto error_put; =20 - folioq_mark2(rreq->buffer, 0); netfs_read_to_pagecache(rreq); ret =3D netfs_wait_for_read(rreq); netfs_put_request(rreq, false, netfs_rreq_trace_put_return); diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c index b1a66a6e6bc2..a3f23adbae0f 100644 --- a/fs/netfs/direct_read.c +++ b/fs/netfs/direct_read.c @@ -25,7 +25,7 @@ static void netfs_prepare_dio_read_iterator(struct netfs_= io_subrequest *subreq) subreq->len =3D rsize; =20 if (unlikely(rreq->io_streams[0].sreq_max_segs)) { - size_t limit =3D netfs_limit_iter(&rreq->iter, 0, rsize, + size_t limit =3D netfs_limit_iter(&rreq->buffer.iter, 0, rsize, rreq->io_streams[0].sreq_max_segs); =20 if (limit < rsize) { @@ -36,9 +36,9 @@ static void netfs_prepare_dio_read_iterator(struct netfs_= io_subrequest *subreq) =20 trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); =20 - subreq->io_iter =3D rreq->iter; + subreq->io_iter =3D rreq->buffer.iter; iov_iter_truncate(&subreq->io_iter, subreq->len); - iov_iter_advance(&rreq->iter, subreq->len); + iov_iter_advance(&rreq->buffer.iter, subreq->len); } =20 /* @@ -199,15 +199,15 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kioc= b *iocb, 
struct iov_iter *i * the request. */ if (user_backed_iter(iter)) { - ret =3D netfs_extract_user_iter(iter, rreq->len, &rreq->iter, 0); + ret =3D netfs_extract_user_iter(iter, rreq->len, &rreq->buffer.iter, 0); if (ret < 0) goto out; - rreq->direct_bv =3D (struct bio_vec *)rreq->iter.bvec; + rreq->direct_bv =3D (struct bio_vec *)rreq->buffer.iter.bvec; rreq->direct_bv_count =3D ret; rreq->direct_bv_unpin =3D iov_iter_extract_will_pin(iter); - rreq->len =3D iov_iter_count(&rreq->iter); + rreq->len =3D iov_iter_count(&rreq->buffer.iter); } else { - rreq->iter =3D *iter; + rreq->buffer.iter =3D *iter; rreq->len =3D orig_count; rreq->direct_bv_unpin =3D false; iov_iter_advance(iter, orig_count); diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c index 88f2adfab75e..0722fb9919a3 100644 --- a/fs/netfs/direct_write.c +++ b/fs/netfs/direct_write.c @@ -68,19 +68,19 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb= *iocb, struct iov_iter * * request. */ if (async || user_backed_iter(iter)) { - n =3D netfs_extract_user_iter(iter, len, &wreq->iter, 0); + n =3D netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0); if (n < 0) { ret =3D n; goto out; } - wreq->direct_bv =3D (struct bio_vec *)wreq->iter.bvec; + wreq->direct_bv =3D (struct bio_vec *)wreq->buffer.iter.bvec; wreq->direct_bv_count =3D n; wreq->direct_bv_unpin =3D iov_iter_extract_will_pin(iter); } else { - wreq->iter =3D *iter; + wreq->buffer.iter =3D *iter; } =20 - wreq->io_iter =3D wreq->iter; + wreq->buffer.iter =3D wreq->buffer.iter; } =20 __set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags); @@ -92,7 +92,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *= iocb, struct iov_iter * __set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags); if (async) wreq->iocb =3D iocb; - wreq->len =3D iov_iter_count(&wreq->io_iter); + wreq->len =3D iov_iter_count(&wreq->buffer.iter); wreq->cleanup =3D netfs_cleanup_dio_write; ret =3D netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), wreq->len); if (ret < 0) { diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 01b013f558f7..ccd9058acb61 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -60,10 +60,6 @@ static inline void netfs_proc_del_rreq(struct netfs_io_r= equest *rreq) {} */ struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq, enum netfs_folioq_trace trace); -int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio = *folio, - bool needs_put); -struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq= ); -void netfs_clear_buffer(struct netfs_io_request *rreq); void netfs_reset_iter(struct netfs_io_subrequest *subreq); =20 /* diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c index afe032551de5..4249715f4171 100644 --- a/fs/netfs/misc.c +++ b/fs/netfs/misc.c @@ -8,153 +8,6 @@ #include #include "internal.h" =20 -/** - * netfs_folioq_alloc - Allocate a folio_queue struct - * @rreq_id: Associated debugging ID for tracing purposes - * @gfp: Allocation constraints - * @trace: Trace tag to indicate the purpose of the allocation - * - * Allocate, initialise and account the folio_queue struct and log a trace= line - * to mark the allocation. 
- */ -struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp, - unsigned int /*enum netfs_folioq_trace*/ trace) -{ - static atomic_t debug_ids; - struct folio_queue *fq; - - fq =3D kmalloc(sizeof(*fq), gfp); - if (fq) { - netfs_stat(&netfs_n_folioq); - folioq_init(fq, rreq_id); - fq->debug_id =3D atomic_inc_return(&debug_ids); - trace_netfs_folioq(fq, trace); - } - return fq; -} -EXPORT_SYMBOL(netfs_folioq_alloc); - -/** - * netfs_folioq_free - Free a folio_queue struct - * @folioq: The object to free - * @trace: Trace tag to indicate which free - * - * Free and unaccount the folio_queue struct. - */ -void netfs_folioq_free(struct folio_queue *folioq, - unsigned int /*enum netfs_trace_folioq*/ trace) -{ - trace_netfs_folioq(folioq, trace); - netfs_stat_d(&netfs_n_folioq); - kfree(folioq); -} -EXPORT_SYMBOL(netfs_folioq_free); - -/* - * Make sure there's space in the rolling queue. - */ -struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq, - enum netfs_folioq_trace trace) -{ - struct folio_queue *tail =3D rreq->buffer_tail, *prev; - unsigned int prev_nr_slots =3D 0; - - if (WARN_ON_ONCE(!rreq->buffer && tail) || - WARN_ON_ONCE(rreq->buffer && !tail)) - return ERR_PTR(-EIO); - - prev =3D tail; - if (prev) { - if (!folioq_full(tail)) - return tail; - prev_nr_slots =3D folioq_nr_slots(tail); - } - - tail =3D netfs_folioq_alloc(rreq->debug_id, GFP_NOFS, trace); - if (!tail) - return ERR_PTR(-ENOMEM); - tail->prev =3D prev; - if (prev) - /* [!] NOTE: After we set prev->next, the consumer is entirely - * at liberty to delete prev. - */ - WRITE_ONCE(prev->next, tail); - - rreq->buffer_tail =3D tail; - if (!rreq->buffer) { - rreq->buffer =3D tail; - iov_iter_folio_queue(&rreq->io_iter, ITER_SOURCE, tail, 0, 0, 0); - } else { - /* Make sure we don't leave the master iterator pointing to a - * block that might get immediately consumed. - */ - if (rreq->io_iter.folioq =3D=3D prev && - rreq->io_iter.folioq_slot =3D=3D prev_nr_slots) { - rreq->io_iter.folioq =3D tail; - rreq->io_iter.folioq_slot =3D 0; - } - } - rreq->buffer_tail_slot =3D 0; - return tail; -} - -/* - * Append a folio to the rolling queue. - */ -int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio = *folio, - bool needs_put) -{ - struct folio_queue *tail; - unsigned int slot, order =3D folio_order(folio); - - tail =3D netfs_buffer_make_space(rreq, netfs_trace_folioq_alloc_append_fo= lio); - if (IS_ERR(tail)) - return PTR_ERR(tail); - - rreq->io_iter.count +=3D PAGE_SIZE << order; - - slot =3D folioq_append(tail, folio); - /* Store the counter after setting the slot. */ - smp_store_release(&rreq->buffer_tail_slot, slot); - return 0; -} - -/* - * Delete the head of a rolling queue. - */ -struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq) -{ - struct folio_queue *head =3D wreq->buffer, *next =3D head->next; - - if (next) - next->prev =3D NULL; - netfs_folioq_free(head, netfs_trace_folioq_delete); - wreq->buffer =3D next; - return next; -} - -/* - * Clear out a rolling queue. 
- */ -void netfs_clear_buffer(struct netfs_io_request *rreq) -{ - struct folio_queue *p; - - while ((p =3D rreq->buffer)) { - rreq->buffer =3D p->next; - for (int slot =3D 0; slot < folioq_count(p); slot++) { - struct folio *folio =3D folioq_folio(p, slot); - if (!folio) - continue; - if (folioq_is_marked(p, slot)) { - trace_netfs_folio(folio, netfs_folio_trace_put); - folio_put(folio); - } - } - netfs_folioq_free(p, netfs_trace_folioq_clear); - } -} - /* * Reset the subrequest iterator to refer just to the region remaining to = be * read. The iterator may or may not have been advanced by socket ops or diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index 31e388ec6e48..5cdddaf1f978 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -143,7 +143,7 @@ static void netfs_free_request(struct work_struct *work) } kvfree(rreq->direct_bv); } - netfs_clear_buffer(rreq); + rolling_buffer_clear(&rreq->buffer); =20 if (atomic_dec_and_test(&ictx->io_count)) wake_up_var(&ictx->io_count); diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c index ba5af89d37fa..d84dccc44cab 100644 --- a/fs/netfs/read_pgpriv2.c +++ b/fs/netfs/read_pgpriv2.c @@ -34,8 +34,9 @@ void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_sub= request *subreq, * [DEPRECATED] Cancel PG_private_2 on all marked folios in the event of an * unrecoverable error. */ -static void netfs_pgpriv2_cancel(struct folio_queue *folioq) +static void netfs_pgpriv2_cancel(struct rolling_buffer *buffer) { + struct folio_queue *folioq =3D buffer->tail; struct folio *folio; int slot; =20 @@ -94,7 +95,7 @@ static int netfs_pgpriv2_copy_folio(struct netfs_io_reque= st *wreq, struct folio trace_netfs_folio(folio, netfs_folio_trace_store_copy); =20 /* Attach the folio to the rolling buffer. */ - if (netfs_buffer_append_folio(wreq, folio, false) < 0) + if (rolling_buffer_append(&wreq->buffer, folio, 0) < 0) return -ENOMEM; =20 cache->submit_extendable_to =3D fsize; @@ -109,7 +110,7 @@ static int netfs_pgpriv2_copy_folio(struct netfs_io_req= uest *wreq, struct folio do { ssize_t part; =20 - wreq->io_iter.iov_offset =3D cache->submit_off; + wreq->buffer.iter.iov_offset =3D cache->submit_off; =20 atomic64_set(&wreq->issued_to, fpos + cache->submit_off); cache->submit_extendable_to =3D fsize - cache->submit_off; @@ -122,8 +123,8 @@ static int netfs_pgpriv2_copy_folio(struct netfs_io_req= uest *wreq, struct folio cache->submit_len -=3D part; } while (cache->submit_len > 0); =20 - wreq->io_iter.iov_offset =3D 0; - iov_iter_advance(&wreq->io_iter, fsize); + wreq->buffer.iter.iov_offset =3D 0; + rolling_buffer_advance(&wreq->buffer, fsize); atomic64_set(&wreq->issued_to, fpos + fsize); =20 if (flen < fsize) @@ -151,7 +152,7 @@ void netfs_pgpriv2_write_to_the_cache(struct netfs_io_r= equest *rreq) goto couldnt_start; =20 /* Need the first folio to be able to set up the op. 
*/ - for (folioq =3D rreq->buffer; folioq; folioq =3D folioq->next) { + for (folioq =3D rreq->buffer.tail; folioq; folioq =3D folioq->next) { if (folioq->marks3) { slot =3D __ffs(folioq->marks3); break; @@ -194,7 +195,7 @@ void netfs_pgpriv2_write_to_the_cache(struct netfs_io_r= equest *rreq) netfs_put_request(wreq, false, netfs_rreq_trace_put_return); _leave(" =3D %d", error); couldnt_start: - netfs_pgpriv2_cancel(rreq->buffer); + netfs_pgpriv2_cancel(&rreq->buffer); } =20 /* @@ -203,13 +204,13 @@ void netfs_pgpriv2_write_to_the_cache(struct netfs_io= _request *rreq) */ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq) { - struct folio_queue *folioq =3D wreq->buffer; + struct folio_queue *folioq =3D wreq->buffer.tail; unsigned long long collected_to =3D wreq->collected_to; - unsigned int slot =3D wreq->buffer_head_slot; + unsigned int slot =3D wreq->buffer.first_tail_slot; bool made_progress =3D false; =20 if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D netfs_delete_buffer_head(wreq); + folioq =3D rolling_buffer_delete_spent(&wreq->buffer); slot =3D 0; } =20 @@ -248,9 +249,9 @@ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io= _request *wreq) folioq_clear(folioq, slot); slot++; if (slot >=3D folioq_nr_slots(folioq)) { - if (READ_ONCE(wreq->buffer_tail) =3D=3D folioq) - break; - folioq =3D netfs_delete_buffer_head(wreq); + folioq =3D rolling_buffer_delete_spent(&wreq->buffer); + if (!folioq) + goto done; slot =3D 0; } =20 @@ -258,7 +259,8 @@ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io= _request *wreq) break; } =20 - wreq->buffer =3D folioq; - wreq->buffer_head_slot =3D slot; + wreq->buffer.tail =3D folioq; +done: + wreq->buffer.first_tail_slot =3D slot; return made_progress; } diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c index 0350592ea804..0fe7677b4022 100644 --- a/fs/netfs/read_retry.c +++ b/fs/netfs/read_retry.c @@ -243,7 +243,7 @@ void netfs_unlock_abandoned_read_pages(struct netfs_io_= request *rreq) { struct folio_queue *p; =20 - for (p =3D rreq->buffer; p; p =3D p->next) { + for (p =3D rreq->buffer.tail; p; p =3D p->next) { for (int slot =3D 0; slot < folioq_count(p); slot++) { struct folio *folio =3D folioq_folio(p, slot); =20 diff --git a/fs/netfs/rolling_buffer.c b/fs/netfs/rolling_buffer.c new file mode 100644 index 000000000000..539ecc3b32be --- /dev/null +++ b/fs/netfs/rolling_buffer.c @@ -0,0 +1,225 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Rolling buffer helpers + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include +#include +#include "internal.h" + +static atomic_t debug_ids; + +/** + * netfs_folioq_alloc - Allocate a folio_queue struct + * @rreq_id: Associated debugging ID for tracing purposes + * @gfp: Allocation constraints + * @trace: Trace tag to indicate the purpose of the allocation + * + * Allocate, initialise and account the folio_queue struct and log a trace= line + * to mark the allocation. 
+ */ +struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp, + unsigned int /*enum netfs_folioq_trace*/ trace) +{ + struct folio_queue *fq; + + fq =3D kmalloc(sizeof(*fq), gfp); + if (fq) { + netfs_stat(&netfs_n_folioq); + folioq_init(fq, rreq_id); + fq->debug_id =3D atomic_inc_return(&debug_ids); + trace_netfs_folioq(fq, trace); + } + return fq; +} +EXPORT_SYMBOL(netfs_folioq_alloc); + +/** + * netfs_folioq_free - Free a folio_queue struct + * @folioq: The object to free + * @trace: Trace tag to indicate which free + * + * Free and unaccount the folio_queue struct. + */ +void netfs_folioq_free(struct folio_queue *folioq, + unsigned int /*enum netfs_trace_folioq*/ trace) +{ + trace_netfs_folioq(folioq, trace); + netfs_stat_d(&netfs_n_folioq); + kfree(folioq); +} +EXPORT_SYMBOL(netfs_folioq_free); + +/* + * Initialise a rolling buffer. We allocate an empty folio queue struct t= o so + * that the pointers can be independently driven by the producer and the + * consumer. + */ +int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id, + unsigned int direction) +{ + struct folio_queue *fq; + + fq =3D netfs_folioq_alloc(rreq_id, GFP_NOFS, netfs_trace_folioq_rollbuf_i= nit); + if (!fq) + return -ENOMEM; + + roll->head =3D fq; + roll->tail =3D fq; + iov_iter_folio_queue(&roll->iter, direction, fq, 0, 0, 0); + return 0; +} + +/* + * Add another folio_queue to a rolling buffer if there's no space left. + */ +int rolling_buffer_make_space(struct rolling_buffer *roll) +{ + struct folio_queue *fq, *head =3D roll->head; + + if (!folioq_full(head)) + return 0; + + fq =3D netfs_folioq_alloc(head->rreq_id, GFP_NOFS, netfs_trace_folioq_mak= e_space); + if (!fq) + return -ENOMEM; + fq->prev =3D head; + + roll->head =3D fq; + if (folioq_full(head)) { + /* Make sure we don't leave the master iterator pointing to a + * block that might get immediately consumed. + */ + if (roll->iter.folioq =3D=3D head && + roll->iter.folioq_slot =3D=3D folioq_nr_slots(head)) { + roll->iter.folioq =3D fq; + roll->iter.folioq_slot =3D 0; + } + } + + /* Make sure the initialisation is stored before the next pointer. + * + * [!] NOTE: After we set head->next, the consumer is at liberty to + * immediately delete the old head. + */ + smp_store_release(&head->next, fq); + return 0; +} + +/* + * Decant the list of folios to read into a rolling buffer. + */ +ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll, + struct readahead_control *ractl, + struct folio_batch *put_batch) +{ + struct folio_queue *fq; + struct page **vec; + int nr, ix, to; + ssize_t size =3D 0; + + if (rolling_buffer_make_space(roll) < 0) + return -ENOMEM; + + fq =3D roll->head; + vec =3D (struct page **)fq->vec.folios; + nr =3D __readahead_batch(ractl, vec + folio_batch_count(&fq->vec), + folio_batch_space(&fq->vec)); + ix =3D fq->vec.nr; + to =3D ix + nr; + fq->vec.nr =3D to; + for (; ix < to; ix++) { + struct folio *folio =3D folioq_folio(fq, ix); + unsigned int order =3D folio_order(folio); + + fq->orders[ix] =3D order; + size +=3D PAGE_SIZE << order; + trace_netfs_folio(folio, netfs_folio_trace_read); + if (!folio_batch_add(put_batch, folio)) + folio_batch_release(put_batch); + } + WRITE_ONCE(roll->iter.count, roll->iter.count + size); + + /* Store the counter after setting the slot. */ + smp_store_release(&roll->next_head_slot, to); + + for (; ix < folioq_nr_slots(fq); ix++) + folioq_clear(fq, ix); + + return size; +} + +/* + * Append a folio to the rolling buffer. 
+ */ +ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *f= olio, + unsigned int flags) +{ + ssize_t size =3D folio_size(folio); + int slot; + + if (rolling_buffer_make_space(roll) < 0) + return -ENOMEM; + + slot =3D folioq_append(roll->head, folio); + if (flags & ROLLBUF_MARK_1) + folioq_mark(roll->head, slot); + if (flags & ROLLBUF_MARK_2) + folioq_mark2(roll->head, slot); + + WRITE_ONCE(roll->iter.count, roll->iter.count + size); + + /* Store the counter after setting the slot. */ + smp_store_release(&roll->next_head_slot, slot); + return size; +} + +/* + * Delete a spent buffer from a rolling queue and return the next in line.= We + * don't return the last buffer to keep the pointers independent, but retu= rn + * NULL instead. + */ +struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *rol= l) +{ + struct folio_queue *spent =3D roll->tail, *next =3D READ_ONCE(spent->next= ); + + if (!next) + return NULL; + next->prev =3D NULL; + netfs_folioq_free(spent, netfs_trace_folioq_delete); + roll->tail =3D next; + return next; +} + +/* + * Clear out a rolling queue. Folios that have mark 1 set are put. + */ +void rolling_buffer_clear(struct rolling_buffer *roll) +{ + struct folio_batch fbatch; + struct folio_queue *p; + + folio_batch_init(&fbatch); + + while ((p =3D roll->tail)) { + roll->tail =3D p->next; + for (int slot =3D 0; slot < folioq_count(p); slot++) { + struct folio *folio =3D folioq_folio(p, slot); + if (!folio) + continue; + if (folioq_is_marked(p, slot)) { + trace_netfs_folio(folio, netfs_folio_trace_put); + if (!folio_batch_add(&fbatch, folio)) + folio_batch_release(&fbatch); + } + } + + netfs_folioq_free(p, netfs_trace_folioq_clear); + } + + folio_batch_release(&fbatch); +} diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 1d438be2e1b4..f3fab41ca3e5 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -83,9 +83,9 @@ int netfs_folio_written_back(struct folio *folio) static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, unsigned int *notes) { - struct folio_queue *folioq =3D wreq->buffer; + struct folio_queue *folioq =3D wreq->buffer.tail; unsigned long long collected_to =3D wreq->collected_to; - unsigned int slot =3D wreq->buffer_head_slot; + unsigned int slot =3D wreq->buffer.first_tail_slot; =20 if (wreq->origin =3D=3D NETFS_PGPRIV2_COPY_TO_CACHE) { if (netfs_pgpriv2_unlock_copied_folios(wreq)) @@ -94,7 +94,9 @@ static void netfs_writeback_unlock_folios(struct netfs_io= _request *wreq, } =20 if (slot >=3D folioq_nr_slots(folioq)) { - folioq =3D netfs_delete_buffer_head(wreq); + folioq =3D rolling_buffer_delete_spent(&wreq->buffer); + if (!folioq) + return; slot =3D 0; } =20 @@ -134,9 +136,9 @@ static void netfs_writeback_unlock_folios(struct netfs_= io_request *wreq, folioq_clear(folioq, slot); slot++; if (slot >=3D folioq_nr_slots(folioq)) { - if (READ_ONCE(wreq->buffer_tail) =3D=3D folioq) - break; - folioq =3D netfs_delete_buffer_head(wreq); + folioq =3D rolling_buffer_delete_spent(&wreq->buffer); + if (!folioq) + goto done; slot =3D 0; } =20 @@ -144,8 +146,9 @@ static void netfs_writeback_unlock_folios(struct netfs_= io_request *wreq, break; } =20 - wreq->buffer =3D folioq; - wreq->buffer_head_slot =3D slot; + wreq->buffer.tail =3D folioq; +done: + wreq->buffer.first_tail_slot =3D slot; } =20 /* diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 9b6c0dda9751..993cc6def38e 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -107,6 +107,8 @@ struct 
netfs_io_request *netfs_create_write_req(struct = address_space *mapping, ictx =3D netfs_inode(wreq->inode); if (is_buffered && netfs_is_cache_enabled(ictx)) fscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ict= x)); + if (rolling_buffer_init(&wreq->buffer, wreq->debug_id, ITER_SOURCE) < 0) + goto nomem; =20 wreq->cleaned_to =3D wreq->start; =20 @@ -129,6 +131,10 @@ struct netfs_io_request *netfs_create_write_req(struct= address_space *mapping, } =20 return wreq; +nomem: + wreq->error =3D -ENOMEM; + netfs_put_request(wreq, false, netfs_rreq_trace_put_failed); + return ERR_PTR(-ENOMEM); } =20 /** @@ -153,16 +159,15 @@ static void netfs_prepare_write(struct netfs_io_reque= st *wreq, loff_t start) { struct netfs_io_subrequest *subreq; - struct iov_iter *wreq_iter =3D &wreq->io_iter; + struct iov_iter *wreq_iter =3D &wreq->buffer.iter; =20 /* Make sure we don't point the iterator at a used-up folio_queue * struct being used as a placeholder to prevent the queue from * collapsing. In such a case, extend the queue. */ if (iov_iter_is_folioq(wreq_iter) && - wreq_iter->folioq_slot >=3D folioq_nr_slots(wreq_iter->folioq)) { - netfs_buffer_make_space(wreq, netfs_trace_folioq_prep_write); - } + wreq_iter->folioq_slot >=3D folioq_nr_slots(wreq_iter->folioq)) + rolling_buffer_make_space(&wreq->buffer); =20 subreq =3D netfs_alloc_subrequest(wreq); subreq->source =3D stream->source; @@ -325,6 +330,9 @@ static int netfs_write_folio(struct netfs_io_request *w= req, =20 _enter(""); =20 + if (rolling_buffer_make_space(&wreq->buffer) < 0) + return -ENOMEM; + /* netfs_perform_write() may shift i_size around the page or from out * of the page to beyond it, but cannot move i_size into or through the * page since we have it locked. @@ -429,7 +437,7 @@ static int netfs_write_folio(struct netfs_io_request *w= req, } =20 /* Attach the folio to the rolling buffer. */ - netfs_buffer_append_folio(wreq, folio, false); + rolling_buffer_append(&wreq->buffer, folio, 0); =20 /* Move the submission point forward to allow for write-streaming data * not starting at the front of the page. We don't do write-streaming @@ -476,7 +484,7 @@ static int netfs_write_folio(struct netfs_io_request *w= req, =20 /* Advance the iterator(s). 
*/ if (stream->submit_off > iter_off) { - iov_iter_advance(&wreq->io_iter, stream->submit_off - iter_off); + rolling_buffer_advance(&wreq->buffer, stream->submit_off - iter_off); iter_off =3D stream->submit_off; } =20 @@ -494,7 +502,7 @@ static int netfs_write_folio(struct netfs_io_request *w= req, } =20 if (fsize > iter_off) - iov_iter_advance(&wreq->io_iter, fsize - iter_off); + rolling_buffer_advance(&wreq->buffer, fsize - iter_off); atomic64_set(&wreq->issued_to, fpos + fsize); =20 if (!debug) @@ -633,7 +641,7 @@ int netfs_advance_writethrough(struct netfs_io_request = *wreq, struct writeback_c struct folio **writethrough_cache) { _enter("R=3D%x ic=3D%zu ws=3D%u cp=3D%zu tp=3D%u", - wreq->debug_id, wreq->iter.count, wreq->wsize, copied, to_page_end= ); + wreq->debug_id, wreq->buffer.iter.count, wreq->wsize, copied, to_p= age_end); =20 if (!*writethrough_cache) { if (folio_test_dirty(folio)) @@ -708,7 +716,7 @@ int netfs_unbuffered_write(struct netfs_io_request *wre= q, bool may_wait, size_t part =3D netfs_advance_write(wreq, upload, start, len, false); start +=3D part; len -=3D part; - iov_iter_advance(&wreq->io_iter, part); + rolling_buffer_advance(&wreq->buffer, part); if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) { trace_netfs_rreq(wreq, netfs_rreq_trace_wait_pause); wait_on_bit(&wreq->flags, NETFS_RREQ_PAUSE, TASK_UNINTERRUPTIBLE); diff --git a/include/linux/netfs.h b/include/linux/netfs.h index a30863e205de..0d4ed1229024 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -18,6 +18,7 @@ #include #include #include +#include =20 enum netfs_sreq_ref_trace; typedef struct mempool_s mempool_t; @@ -238,10 +239,9 @@ struct netfs_io_request { struct netfs_io_stream io_streams[2]; /* Streams of parallel I/O operatio= ns */ #define NR_IO_STREAMS 2 //wreq->nr_io_streams struct netfs_group *group; /* Writeback group being written back */ - struct folio_queue *buffer; /* Head of I/O buffer */ - struct folio_queue *buffer_tail; /* Tail of I/O buffer */ - struct iov_iter iter; /* Unencrypted-side iterator */ - struct iov_iter io_iter; /* I/O (Encrypted-side) iterator */ + struct rolling_buffer buffer; /* Unencrypted buffer */ +#define NETFS_ROLLBUF_PUT_MARK ROLLBUF_MARK_1 +#define NETFS_ROLLBUF_PAGECACHE_MARK ROLLBUF_MARK_2 void *netfs_priv; /* Private data for the netfs */ void *netfs_priv2; /* Private data for the netfs */ struct bio_vec *direct_bv; /* DIO buffer list (when handling iovec-iter)= */ @@ -259,8 +259,6 @@ struct netfs_io_request { long error; /* 0 or error that occurred */ enum netfs_io_origin origin; /* Origin of the request */ bool direct_bv_unpin; /* T if direct_bv[] must be unpinned */ - u8 buffer_head_slot; /* First slot in ->buffer */ - u8 buffer_tail_slot; /* Next slot in ->buffer_tail */ unsigned long long i_size; /* Size of the file */ unsigned long long start; /* Start position */ atomic64_t issued_to; /* Write issuer folio cursor */ diff --git a/include/linux/rolling_buffer.h b/include/linux/rolling_buffer.h new file mode 100644 index 000000000000..ac15b1ffdd83 --- /dev/null +++ b/include/linux/rolling_buffer.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* Rolling buffer of folios + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#ifndef _ROLLING_BUFFER_H +#define _ROLLING_BUFFER_H + +#include +#include + +/* + * Rolling buffer. 
Whilst the buffer is live and in use, folios and folio + * queue segments can be added to one end by one thread and removed from t= he + * other end by another thread. The buffer isn't allowed to be empty; it = must + * always have at least one folio_queue in it so that neither side has to + * modify both queue pointers. + * + * The iterator in the buffer is extended as buffers are inserted. It can= be + * snapshotted to use a segment of the buffer. + */ +struct rolling_buffer { + struct folio_queue *head; /* Producer's insertion point */ + struct folio_queue *tail; /* Consumer's removal point */ + struct iov_iter iter; /* Iterator tracking what's left in the buffer */ + u8 next_head_slot; /* Next slot in ->head */ + u8 first_tail_slot; /* First slot in ->tail */ +}; + +/* + * Snapshot of a rolling buffer. + */ +struct rolling_buffer_snapshot { + struct folio_queue *curr_folioq; /* Queue segment in which current folio = resides */ + unsigned char curr_slot; /* Folio currently being read */ + unsigned char curr_order; /* Order of folio */ +}; + +/* Marks to store per-folio in the internal folio_queue structs. */ +#define ROLLBUF_MARK_1 BIT(0) +#define ROLLBUF_MARK_2 BIT(1) + +int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id, + unsigned int direction); +int rolling_buffer_make_space(struct rolling_buffer *roll); +ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll, + struct readahead_control *ractl, + struct folio_batch *put_batch); +ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *f= olio, + unsigned int flags); +struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *rol= l); +void rolling_buffer_clear(struct rolling_buffer *roll); + +static inline void rolling_buffer_advance(struct rolling_buffer *roll, siz= e_t amount) +{ + iov_iter_advance(&roll->iter, amount); +} + +#endif /* _ROLLING_BUFFER_H */ diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index c48dcbf74081..a0f5b13aab86 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -198,7 +198,9 @@ EM(netfs_trace_folioq_alloc_read_sing, "alloc-r-sing") \ EM(netfs_trace_folioq_clear, "clear") \ EM(netfs_trace_folioq_delete, "delete") \ + EM(netfs_trace_folioq_make_space, "make-space") \ EM(netfs_trace_folioq_prep_write, "prep-wr") \ + EM(netfs_trace_folioq_rollbuf_init, "roll-init") \ E_(netfs_trace_folioq_read_progress, "r-progress") =20 #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY From nobody Sat Nov 23 23:18:39 2024 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 477971AA1DB for ; Fri, 8 Nov 2024 17:33:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731087234; cv=none; b=jWQT3USCHjx1z1DORd80l2GFhCJPxn4PQjAbzYqo9Nd+2+XdWEJ6q9TtRv5aTOOQnbliCaWtVJ0fUq52r5Ua/qem06CCyNsMRrKdTTqblwH9B9tJtWddNH7VYMGsQ1WvZT/lIlM8mv05GSHNRWO2zhMHqIZ4s6A7JoSCTRFmJog= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731087234; c=relaxed/simple; bh=9SCEKbNtvAFB8fJZcHIRQXCI/OprF81chlgQt47QcC0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [PATCH v4 08/33] netfs: Make netfs_advance_write() return size_t
Date: Fri, 8 Nov 2024 17:32:09 +0000
Message-ID: <20241108173236.1382366-9-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

netfs_advance_write() calculates the amount of data it's attaching to a stream using size_t, but then returns this as an int. Switch the return value to size_t for consistency.
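To illustrate (a hypothetical snippet, not part of the patch): with an int return, an amount above INT_MAX would be truncated and could appear negative to the caller, whereas the callers already accumulate the result in size_t variables:

	size_t part;

	part = netfs_advance_write(wreq, upload, start, len, false);
	start += part;	/* no narrowing conversion on the way back */
	len -= part;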
Signed-off-by: David Howells cc: Jeff Layton cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/internal.h | 6 +++--- fs/netfs/write_issue.c | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index ccd9058acb61..6aa2a8d49b37 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -178,9 +178,9 @@ void netfs_reissue_write(struct netfs_io_stream *stream, struct iov_iter *source); void netfs_issue_write(struct netfs_io_request *wreq, struct netfs_io_stream *stream); -int netfs_advance_write(struct netfs_io_request *wreq, - struct netfs_io_stream *stream, - loff_t start, size_t len, bool to_eof); +size_t netfs_advance_write(struct netfs_io_request *wreq, + struct netfs_io_stream *stream, + loff_t start, size_t len, bool to_eof); struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size= _t len); int netfs_advance_writethrough(struct netfs_io_request *wreq, struct write= back_control *wbc, struct folio *folio, size_t copied, bool to_page_end, diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c index 993cc6def38e..c186221b45c0 100644 --- a/fs/netfs/write_issue.c +++ b/fs/netfs/write_issue.c @@ -271,9 +271,9 @@ void netfs_issue_write(struct netfs_io_request *wreq, * we can avoid overrunning the credits obtained (cifs) and try to paralle= lise * content-crypto preparation with network writes. */ -int netfs_advance_write(struct netfs_io_request *wreq, - struct netfs_io_stream *stream, - loff_t start, size_t len, bool to_eof) +size_t netfs_advance_write(struct netfs_io_request *wreq, + struct netfs_io_stream *stream, + loff_t start, size_t len, bool to_eof) { struct netfs_io_subrequest *subreq =3D stream->construct; size_t part;
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 09/33] netfs: Split retry code out of fs/netfs/write_collect.c
Date: Fri, 8 Nov 2024 17:32:10 +0000
Message-ID: <20241108173236.1382366-10-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Split the write-retry code out of fs/netfs/write_collect.c into its own file, fs/netfs/write_retry.c, as it will become more elaborate when content encryption is introduced.
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/netfs/Makefile | 3 +- fs/netfs/internal.h | 5 + fs/netfs/write_collect.c | 215 ------------------------------------ fs/netfs/write_retry.c | 227 +++++++++++++++++++++++++++++++++++++++ 4 files changed, 234 insertions(+), 216 deletions(-) create mode 100644 fs/netfs/write_retry.c diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile index 7492c4aa331e..cbb30bdeacc4 100644 --- a/fs/netfs/Makefile +++ b/fs/netfs/Makefile @@ -15,7 +15,8 @@ netfs-y :=3D \ read_retry.o \ rolling_buffer.o \ write_collect.o \ - write_issue.o + write_issue.o \ + write_retry.o =20 netfs-$(CONFIG_NETFS_STATS) +=3D stats.o =20 diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index 6aa2a8d49b37..73887525e939 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -189,6 +189,11 @@ int netfs_end_writethrough(struct netfs_io_request *wr= eq, struct writeback_contr struct folio *writethrough_cache); int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, s= ize_t len); =20 +/* + * write_retry.c + */ +void netfs_retry_writes(struct netfs_io_request *wreq); + /* * Miscellaneous functions. */ diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index f3fab41ca3e5..85e8e94da90a 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -151,221 +151,6 @@ static void netfs_writeback_unlock_folios(struct netf= s_io_request *wreq, wreq->buffer.first_tail_slot =3D slot; } =20 -/* - * Perform retries on the streams that need it. - */ -static void netfs_retry_write_stream(struct netfs_io_request *wreq, - struct netfs_io_stream *stream) -{ - struct list_head *next; - - _enter("R=3D%x[%x:]", wreq->debug_id, stream->stream_nr); - - if (list_empty(&stream->subrequests)) - return; - - if (stream->source =3D=3D NETFS_UPLOAD_TO_SERVER && - wreq->netfs_ops->retry_request) - wreq->netfs_ops->retry_request(wreq, stream); - - if (unlikely(stream->failed)) - return; - - /* If there's no renegotiation to do, just resend each failed subreq. */ - if (!stream->prepare_write) { - struct netfs_io_subrequest *subreq; - - list_for_each_entry(subreq, &stream->subrequests, rreq_link) { - if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) - break; - if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { - struct iov_iter source =3D subreq->io_iter; - - iov_iter_revert(&source, subreq->len - source.count); - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); - netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq, &source); - } - } - return; - } - - next =3D stream->subrequests.next; - - do { - struct netfs_io_subrequest *subreq =3D NULL, *from, *to, *tmp; - struct iov_iter source; - unsigned long long start, len; - size_t part; - bool boundary =3D false; - - /* Go through the stream and find the next span of contiguous - * data that we then rejig (cifs, for example, needs the wsize - * renegotiating) and reissue. 
- */ - from =3D list_entry(next, struct netfs_io_subrequest, rreq_link); - to =3D from; - start =3D from->start + from->transferred; - len =3D from->len - from->transferred; - - if (test_bit(NETFS_SREQ_FAILED, &from->flags) || - !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) - return; - - list_for_each_continue(next, &stream->subrequests) { - subreq =3D list_entry(next, struct netfs_io_subrequest, rreq_link); - if (subreq->start + subreq->transferred !=3D start + len || - test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) || - !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) - break; - to =3D subreq; - len +=3D to->len; - } - - /* Determine the set of buffers we're going to use. Each - * subreq gets a subset of a single overall contiguous buffer. - */ - netfs_reset_iter(from); - source =3D from->io_iter; - source.count =3D len; - - /* Work through the sublist. */ - subreq =3D from; - list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) { - if (!len) - break; - /* Renegotiate max_len (wsize) */ - trace_netfs_sreq(subreq, netfs_sreq_trace_retry); - __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); - stream->prepare_write(subreq); - - part =3D min(len, stream->sreq_max_len); - subreq->len =3D part; - subreq->start =3D start; - subreq->transferred =3D 0; - len -=3D part; - start +=3D part; - if (len && subreq =3D=3D to && - __test_and_clear_bit(NETFS_SREQ_BOUNDARY, &to->flags)) - boundary =3D true; - - netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - netfs_reissue_write(stream, subreq, &source); - if (subreq =3D=3D to) - break; - } - - /* If we managed to use fewer subreqs, we can discard the - * excess; if we used the same number, then we're done. - */ - if (!len) { - if (subreq =3D=3D to) - continue; - list_for_each_entry_safe_from(subreq, tmp, - &stream->subrequests, rreq_link) { - trace_netfs_sreq(subreq, netfs_sreq_trace_discard); - list_del(&subreq->rreq_link); - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done); - if (subreq =3D=3D to) - break; - } - continue; - } - - /* We ran out of subrequests, so we need to allocate some more - * and insert them after. 
- */ - do { - subreq =3D netfs_alloc_subrequest(wreq); - subreq->source =3D to->source; - subreq->start =3D start; - subreq->debug_index =3D atomic_inc_return(&wreq->subreq_counter); - subreq->stream_nr =3D to->stream_nr; - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); - - trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index, - refcount_read(&subreq->ref), - netfs_sreq_trace_new); - netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); - - list_add(&subreq->rreq_link, &to->rreq_link); - to =3D list_next_entry(to, rreq_link); - trace_netfs_sreq(subreq, netfs_sreq_trace_retry); - - stream->sreq_max_len =3D len; - stream->sreq_max_segs =3D INT_MAX; - switch (stream->source) { - case NETFS_UPLOAD_TO_SERVER: - netfs_stat(&netfs_n_wh_upload); - stream->sreq_max_len =3D umin(len, wreq->wsize); - break; - case NETFS_WRITE_TO_CACHE: - netfs_stat(&netfs_n_wh_write); - break; - default: - WARN_ON_ONCE(1); - } - - stream->prepare_write(subreq); - - part =3D umin(len, stream->sreq_max_len); - subreq->len =3D subreq->transferred + part; - len -=3D part; - start +=3D part; - if (!len && boundary) { - __set_bit(NETFS_SREQ_BOUNDARY, &to->flags); - boundary =3D false; - } - - netfs_reissue_write(stream, subreq, &source); - if (!len) - break; - - } while (len); - - } while (!list_is_head(next, &stream->subrequests)); -} - -/* - * Perform retries on the streams that need it. If we're doing content - * encryption and the server copy changed due to a third-party write, we m= ay - * need to do an RMW cycle and also rewrite the data to the cache. - */ -static void netfs_retry_writes(struct netfs_io_request *wreq) -{ - struct netfs_io_subrequest *subreq; - struct netfs_io_stream *stream; - int s; - - /* Wait for all outstanding I/O to quiesce before performing retries as - * we may need to renegotiate the I/O sizes. - */ - for (s =3D 0; s < NR_IO_STREAMS; s++) { - stream =3D &wreq->io_streams[s]; - if (!stream->active) - continue; - - list_for_each_entry(subreq, &stream->subrequests, rreq_link) { - wait_on_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS, - TASK_UNINTERRUPTIBLE); - } - } - - // TODO: Enc: Fetch changed partial pages - // TODO: Enc: Reencrypt content if needed. - // TODO: Enc: Wind back transferred point. - // TODO: Enc: Mark cache pages for retry. - - for (s =3D 0; s < NR_IO_STREAMS; s++) { - stream =3D &wreq->io_streams[s]; - if (stream->need_retry) { - stream->need_retry =3D false; - netfs_retry_write_stream(wreq, stream); - } - } -} - /* * Collect and assess the results of various write subrequests. We may ne= ed to * retry some of the results - or even do an RMW cycle for content crypto. diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c new file mode 100644 index 000000000000..2222c3a6b9d1 --- /dev/null +++ b/fs/netfs/write_retry.c @@ -0,0 +1,227 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Network filesystem write retrying. + * + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include +#include +#include +#include +#include "internal.h" + +/* + * Perform retries on the streams that need it. 
+ */ +static void netfs_retry_write_stream(struct netfs_io_request *wreq, + struct netfs_io_stream *stream) +{ + struct list_head *next; + + _enter("R=3D%x[%x:]", wreq->debug_id, stream->stream_nr); + + if (list_empty(&stream->subrequests)) + return; + + if (stream->source =3D=3D NETFS_UPLOAD_TO_SERVER && + wreq->netfs_ops->retry_request) + wreq->netfs_ops->retry_request(wreq, stream); + + if (unlikely(stream->failed)) + return; + + /* If there's no renegotiation to do, just resend each failed subreq. */ + if (!stream->prepare_write) { + struct netfs_io_subrequest *subreq; + + list_for_each_entry(subreq, &stream->subrequests, rreq_link) { + if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) + break; + if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { + struct iov_iter source =3D subreq->io_iter; + + iov_iter_revert(&source, subreq->len - source.count); + __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); + netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); + netfs_reissue_write(stream, subreq, &source); + } + } + return; + } + + next =3D stream->subrequests.next; + + do { + struct netfs_io_subrequest *subreq =3D NULL, *from, *to, *tmp; + struct iov_iter source; + unsigned long long start, len; + size_t part; + bool boundary =3D false; + + /* Go through the stream and find the next span of contiguous + * data that we then rejig (cifs, for example, needs the wsize + * renegotiating) and reissue. + */ + from =3D list_entry(next, struct netfs_io_subrequest, rreq_link); + to =3D from; + start =3D from->start + from->transferred; + len =3D from->len - from->transferred; + + if (test_bit(NETFS_SREQ_FAILED, &from->flags) || + !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) + return; + + list_for_each_continue(next, &stream->subrequests) { + subreq =3D list_entry(next, struct netfs_io_subrequest, rreq_link); + if (subreq->start + subreq->transferred !=3D start + len || + test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) || + !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) + break; + to =3D subreq; + len +=3D to->len; + } + + /* Determine the set of buffers we're going to use. Each + * subreq gets a subset of a single overall contiguous buffer. + */ + netfs_reset_iter(from); + source =3D from->io_iter; + source.count =3D len; + + /* Work through the sublist. */ + subreq =3D from; + list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) { + if (!len) + break; + /* Renegotiate max_len (wsize) */ + trace_netfs_sreq(subreq, netfs_sreq_trace_retry); + __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); + __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); + stream->prepare_write(subreq); + + part =3D min(len, stream->sreq_max_len); + subreq->len =3D part; + subreq->start =3D start; + subreq->transferred =3D 0; + len -=3D part; + start +=3D part; + if (len && subreq =3D=3D to && + __test_and_clear_bit(NETFS_SREQ_BOUNDARY, &to->flags)) + boundary =3D true; + + netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); + netfs_reissue_write(stream, subreq, &source); + if (subreq =3D=3D to) + break; + } + + /* If we managed to use fewer subreqs, we can discard the + * excess; if we used the same number, then we're done. 
+		 */
+		if (!len) {
+			if (subreq == to)
+				continue;
+			list_for_each_entry_safe_from(subreq, tmp,
+						      &stream->subrequests, rreq_link) {
+				trace_netfs_sreq(subreq, netfs_sreq_trace_discard);
+				list_del(&subreq->rreq_link);
+				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
+				if (subreq == to)
+					break;
+			}
+			continue;
+		}
+
+		/* We ran out of subrequests, so we need to allocate some more
+		 * and insert them after.
+		 */
+		do {
+			subreq = netfs_alloc_subrequest(wreq);
+			subreq->source		= to->source;
+			subreq->start		= start;
+			subreq->debug_index	= atomic_inc_return(&wreq->subreq_counter);
+			subreq->stream_nr	= to->stream_nr;
+			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
+
+			trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index,
+					     refcount_read(&subreq->ref),
+					     netfs_sreq_trace_new);
+			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+
+			list_add(&subreq->rreq_link, &to->rreq_link);
+			to = list_next_entry(to, rreq_link);
+			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+
+			stream->sreq_max_len	= len;
+			stream->sreq_max_segs	= INT_MAX;
+			switch (stream->source) {
+			case NETFS_UPLOAD_TO_SERVER:
+				netfs_stat(&netfs_n_wh_upload);
+				stream->sreq_max_len = umin(len, wreq->wsize);
+				break;
+			case NETFS_WRITE_TO_CACHE:
+				netfs_stat(&netfs_n_wh_write);
+				break;
+			default:
+				WARN_ON_ONCE(1);
+			}
+
+			stream->prepare_write(subreq);
+
+			part = umin(len, stream->sreq_max_len);
+			subreq->len = subreq->transferred + part;
+			len -= part;
+			start += part;
+			if (!len && boundary) {
+				__set_bit(NETFS_SREQ_BOUNDARY, &to->flags);
+				boundary = false;
+			}
+
+			netfs_reissue_write(stream, subreq, &source);
+			if (!len)
+				break;
+
+		} while (len);
+
+	} while (!list_is_head(next, &stream->subrequests));
+}
+
+/*
+ * Perform retries on the streams that need it.  If we're doing content
+ * encryption and the server copy changed due to a third-party write, we may
+ * need to do an RMW cycle and also rewrite the data to the cache.
+ */
+void netfs_retry_writes(struct netfs_io_request *wreq)
+{
+	struct netfs_io_subrequest *subreq;
+	struct netfs_io_stream *stream;
+	int s;
+
+	/* Wait for all outstanding I/O to quiesce before performing retries as
+	 * we may need to renegotiate the I/O sizes.
+	 */
+	for (s = 0; s < NR_IO_STREAMS; s++) {
+		stream = &wreq->io_streams[s];
+		if (!stream->active)
+			continue;
+
+		list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+			wait_on_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS,
+				    TASK_UNINTERRUPTIBLE);
+		}
+	}
+
+	// TODO: Enc: Fetch changed partial pages
+	// TODO: Enc: Reencrypt content if needed.
+	// TODO: Enc: Wind back transferred point.
+	// TODO: Enc: Mark cache pages for retry.
+
+	for (s = 0; s < NR_IO_STREAMS; s++) {
+		stream = &wreq->io_streams[s];
+		if (stream->need_retry) {
+			stream->need_retry = false;
+			netfs_retry_write_stream(wreq, stream);
+		}
+	}
+}

From nobody Sat Nov 23 23:18:39 2024
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells <dhowells@redhat.com>, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 10/33] netfs: Drop the error arg from netfs_read_subreq_terminated()
Date: Fri, 8 Nov 2024 17:32:11 +0000
Message-ID: <20241108173236.1382366-11-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Drop the error argument from netfs_read_subreq_terminated() in favour of
passing the value in subreq->error.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
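For illustration, a minimal before/after sketch of the change in calling
convention as it lands in a filesystem's read completion path (the
example_do_io() helper is hypothetical, standing in for whatever transport
call the filesystem actually makes):

	/* Before this patch: the outcome is passed as an argument. */
	static void example_issue_read_old(struct netfs_io_subrequest *subreq)
	{
		int err = example_do_io(subreq);	/* hypothetical transport call */

		netfs_read_subreq_terminated(subreq, err, false);
	}

	/* After this patch: the outcome is stashed in the subrequest first. */
	static void example_issue_read_new(struct netfs_io_subrequest *subreq)
	{
		int err = example_do_io(subreq);	/* hypothetical transport call */

		subreq->error = err;
		netfs_read_subreq_terminated(subreq, false);
	}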
---
 fs/9p/vfs_addr.c         |  3 ++-
 fs/afs/file.c            | 15 ++++++++-----
 fs/ceph/addr.c           | 13 +++++++----
 fs/netfs/buffered_read.c | 16 +++++++-------
 fs/netfs/objects.c       | 15 ++++++++++++-
 fs/netfs/read_collect.c  | 47 +++++++++++++++++++++++++---------------
 fs/nfs/fscache.c         |  6 +++--
 fs/nfs/fscache.h         |  3 ++-
 fs/smb/client/cifssmb.c  | 10 +--------
 fs/smb/client/file.c     |  3 ++-
 fs/smb/client/smb2pdu.c  | 10 +--------
 include/linux/netfs.h    |  7 +++---
 12 files changed, 86 insertions(+), 62 deletions(-)

diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 819c75233235..58a6bd284d88 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -83,7 +83,8 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
 	if (!err)
 		subreq->transferred += total;
 
-	netfs_read_subreq_terminated(subreq, err, false);
+	subreq->error = err;
+	netfs_read_subreq_terminated(subreq, false);
 }
 
 /**
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 6762eff97517..56248a078bca 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -246,7 +246,8 @@ static void afs_fetch_data_notify(struct afs_operation *op)
 		subreq->rreq->i_size = req->file_size;
 		if (req->pos + req->actual_len >= req->file_size)
 			__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
-		netfs_read_subreq_terminated(subreq, error, false);
+		subreq->error = error;
+		netfs_read_subreq_terminated(subreq, false);
 		req->subreq = NULL;
 	} else if (req->done) {
 		req->done(req);
@@ -301,8 +302,10 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req)
 
 	op = afs_alloc_operation(req->key, vnode->volume);
 	if (IS_ERR(op)) {
-		if (req->subreq)
-			netfs_read_subreq_terminated(req->subreq, PTR_ERR(op), false);
+		if (req->subreq) {
+			req->subreq->error = PTR_ERR(op);
+			netfs_read_subreq_terminated(req->subreq, false);
+		}
 		return PTR_ERR(op);
 	}
 
@@ -320,8 +323,10 @@ static void afs_read_worker(struct work_struct *work)
 	struct afs_read *fsreq;
 
 	fsreq = afs_alloc_read(GFP_NOFS);
-	if (!fsreq)
-		return netfs_read_subreq_terminated(subreq, -ENOMEM, false);
+	if (!fsreq) {
+		subreq->error = -ENOMEM;
+		return netfs_read_subreq_terminated(subreq, false);
+	}
 
 	fsreq->subreq = subreq;
 	fsreq->pos = subreq->start + subreq->transferred;
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index c2a9e2cc03de..459249ba6319 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -253,8 +253,9 @@ static void finish_netfs_read(struct ceph_osd_request *req)
 		subreq->transferred = err;
 		err = 0;
 	}
+	subreq->error = err;
 	trace_netfs_sreq(subreq, netfs_sreq_trace_io_progress);
-	netfs_read_subreq_terminated(subreq, err, false);
+	netfs_read_subreq_terminated(subreq, false);
 	iput(req->r_inode);
 	ceph_dec_osd_stopping_blocker(fsc->mdsc);
 }
@@ -314,7 +315,9 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
 
 	ceph_mdsc_put_request(req);
 out:
-	netfs_read_subreq_terminated(subreq, err, false);
+	subreq->error = err;
+	trace_netfs_sreq(subreq, netfs_sreq_trace_io_progress);
+	netfs_read_subreq_terminated(subreq, false);
 	return true;
 }
 
@@ -426,8 +429,10 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
 		ceph_osdc_start_request(req->r_osdc, req);
 out:
 	ceph_osdc_put_request(req);
-	if (err)
-		netfs_read_subreq_terminated(subreq, err, false);
+	if (err) {
+		subreq->error = err;
+		netfs_read_subreq_terminated(subreq, false);
+	}
 	doutc(cl, "%llx.%llx result %d\n", ceph_vinop(inode), err);
 }
 
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 4cacb46e0cf7..82c3b9957958 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -148,14 +148,13 @@ static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error)
 {
 	struct netfs_io_subrequest *subreq = priv;
 
-	if (transferred_or_error < 0) {
-		netfs_read_subreq_terminated(subreq, transferred_or_error, was_async);
-		return;
-	}
-
-	if (transferred_or_error > 0)
+	if (transferred_or_error > 0) {
 		subreq->transferred += transferred_or_error;
-	netfs_read_subreq_terminated(subreq, 0, was_async);
+		subreq->error = 0;
+	} else {
+		subreq->error = transferred_or_error;
+	}
+	netfs_read_subreq_terminated(subreq, was_async);
 }
 
 /*
@@ -261,7 +260,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 			netfs_stat(&netfs_n_rh_zero);
 			slice = netfs_prepare_read_iterator(subreq);
 			__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
-			netfs_read_subreq_terminated(subreq, 0, false);
+			subreq->error = 0;
+			netfs_read_subreq_terminated(subreq, false);
 			goto done;
 		}
 
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 5cdddaf1f978..f10fd56efa17 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -191,7 +191,20 @@ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq)
 	}
 
 	memset(subreq, 0, kmem_cache_size(cache));
-	INIT_WORK(&subreq->work, NULL);
+
+	switch (rreq->origin) {
+	case NETFS_READAHEAD:
+	case NETFS_READPAGE:
+	case NETFS_READ_GAPS:
+	case NETFS_READ_FOR_WRITE:
+	case NETFS_DIO_READ:
+		INIT_WORK(&subreq->work, netfs_read_subreq_termination_worker);
+		break;
+	default:
+		INIT_WORK(&subreq->work, NULL);
+		break;
+	}
+
 	INIT_LIST_HEAD(&subreq->rreq_link);
 	refcount_set(&subreq->ref, 2);
 	subreq->rreq = rreq;
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 214f06bba2c7..00358894fac4 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -450,28 +450,26 @@ EXPORT_SYMBOL(netfs_read_subreq_progress);
 /**
  * netfs_read_subreq_terminated - Note the termination of an I/O operation.
  * @subreq: The I/O request that has terminated.
- * @error: Error code indicating type of completion.
- * @was_async: The termination was asynchronous
+ * @was_async: True if we're in an asynchronous context.
  *
  * This tells the read helper that a contributory I/O operation has terminated,
  * one way or another, and that it should integrate the results.
  *
- * The caller indicates the outcome of the operation through @error, supplying
- * 0 to indicate a successful or retryable transfer (if NETFS_SREQ_NEED_RETRY
- * is set) or a negative error code.  The helper will look after reissuing I/O
- * operations as appropriate and writing downloaded data to the cache.
+ * The caller indicates the outcome of the operation through @subreq->error,
+ * supplying 0 to indicate a successful or retryable transfer (if
+ * NETFS_SREQ_NEED_RETRY is set) or a negative error code.  The helper will
+ * look after reissuing I/O operations as appropriate and writing downloaded
+ * data to the cache.
  *
  * Before calling, the filesystem should update subreq->transferred to track
  * the amount of data copied into the output buffer.
- *
- * If @was_async is true, the caller might be running in softirq or interrupt
- * context and we can't sleep.
  */
-void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
-				  int error, bool was_async)
+void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, bool was_async)
 {
 	struct netfs_io_request *rreq = subreq->rreq;
 
+	might_sleep();
+
 	switch (subreq->source) {
 	case NETFS_READ_FROM_CACHE:
 		netfs_stat(&netfs_n_rh_read_done);
@@ -489,7 +487,7 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
 	 * If the read completed validly short, then we can clear the
 	 * tail before going on to unlock the folios.
 	 */
-	if (error == 0 && subreq->transferred < subreq->len &&
+	if (subreq->error == 0 && subreq->transferred < subreq->len &&
 	    (test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags) ||
 	     test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags))) {
 		netfs_clear_unread(subreq);
@@ -509,7 +507,7 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
 	/* Deal with retry requests, short reads and errors.  If we retry
 	 * but don't make progress, we abandon the attempt.
 	 */
-	if (!error && subreq->transferred < subreq->len) {
+	if (!subreq->error && subreq->transferred < subreq->len) {
 		if (test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags)) {
 			trace_netfs_sreq(subreq, netfs_sreq_trace_hit_eof);
 		} else {
@@ -523,16 +521,15 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
 				set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags);
 			} else {
 				__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
-				error = -ENODATA;
+				subreq->error = -ENODATA;
 			}
 		}
 	}
 
-	subreq->error = error;
 	trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
 
-	if (unlikely(error < 0)) {
-		trace_netfs_failure(rreq, subreq, error, netfs_fail_read);
+	if (unlikely(subreq->error < 0)) {
+		trace_netfs_failure(rreq, subreq, subreq->error, netfs_fail_read);
 		if (subreq->source == NETFS_READ_FROM_CACHE) {
 			netfs_stat(&netfs_n_rh_read_failed);
 		} else {
@@ -548,3 +545,19 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
 	netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated);
 }
 EXPORT_SYMBOL(netfs_read_subreq_terminated);
+
+/**
+ * netfs_read_subreq_termination_worker - Workqueue helper for read termination
+ * @work: The subreq->work in the I/O request that has been terminated.
+ *
+ * Helper function to jump to netfs_read_subreq_terminated() from the
+ * subrequest work item.
+ */
+void netfs_read_subreq_termination_worker(struct work_struct *work)
+{
+	struct netfs_io_subrequest *subreq =
+		container_of(work, struct netfs_io_subrequest, work);
+
+	netfs_read_subreq_terminated(subreq, false);
+}
+EXPORT_SYMBOL(netfs_read_subreq_termination_worker);
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index 810269ee0a50..e585a7bcfe4d 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -307,8 +307,10 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
 			     &nfs_async_read_completion_ops);
 
 	netfs = nfs_netfs_alloc(sreq);
-	if (!netfs)
-		return netfs_read_subreq_terminated(sreq, -ENOMEM, false);
+	if (!netfs) {
+		sreq->error = -ENOMEM;
+		return netfs_read_subreq_terminated(sreq, false);
+	}
 
 	pgio.pg_netfs = netfs; /* used in completion */
 
diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h
index 772d485e96d3..1d86f7cc7195 100644
--- a/fs/nfs/fscache.h
+++ b/fs/nfs/fscache.h
@@ -74,7 +74,8 @@ static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs)
 	 */
 	netfs->sreq->transferred = min_t(s64, netfs->sreq->len,
 					 atomic64_read(&netfs->transferred));
-	netfs_read_subreq_terminated(netfs->sreq, netfs->error, false);
+	netfs->sreq->error = netfs->error;
+	netfs_read_subreq_terminated(netfs->sreq, false);
 	kfree(netfs);
 }
 static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi)
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index c6f15dbe860a..bdf1933cb0e2 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -1261,14 +1261,6 @@ CIFS_open(const unsigned int xid, struct cifs_open_parms *oparms, int *oplock,
 	return rc;
 }
 
-static void cifs_readv_worker(struct work_struct *work)
-{
-	struct cifs_io_subrequest *rdata =
-		container_of(work, struct cifs_io_subrequest, subreq.work);
-
-	netfs_read_subreq_terminated(&rdata->subreq, rdata->result, false);
-}
-
 static void
 cifs_readv_callback(struct mid_q_entry *mid)
 {
@@ -1334,8 +1326,8 @@ cifs_readv_callback(struct mid_q_entry *mid)
 	}
 
 	rdata->credits.value = 0;
+	rdata->subreq.error = rdata->result;
 	rdata->subreq.transferred += rdata->got_bytes;
-	INIT_WORK(&rdata->subreq.work, cifs_readv_worker);
 	queue_work(cifsiod_wq, &rdata->subreq.work);
 	release_mid(mid);
 	add_credits(server, &credits, 0);
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index a58a3333ecc3..10dd440f8178 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -227,7 +227,8 @@ static void cifs_issue_read(struct netfs_io_subrequest *subreq)
 	return;
 
 failed:
-	netfs_read_subreq_terminated(subreq, rc, false);
+	subreq->error = rc;
+	netfs_read_subreq_terminated(subreq, false);
 }
 
 /*
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index 6584b5cddc28..0166eb42ce94 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -4513,14 +4513,6 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
 	return rc;
 }
 
-static void smb2_readv_worker(struct work_struct *work)
-{
-	struct cifs_io_subrequest *rdata =
-		container_of(work, struct cifs_io_subrequest, subreq.work);
-
-	netfs_read_subreq_terminated(&rdata->subreq, rdata->result, false);
-}
-
 static void
 smb2_readv_callback(struct mid_q_entry *mid)
 {
@@ -4633,9 +4625,9 @@ smb2_readv_callback(struct mid_q_entry *mid)
 			      server->credits, server->in_flight, 0,
 			      cifs_trace_rw_credits_read_response_clear);
 	rdata->credits.value = 0;
+	rdata->subreq.error = rdata->result;
 	rdata->subreq.transferred += rdata->got_bytes;
 	trace_netfs_sreq(&rdata->subreq,
 			 netfs_sreq_trace_io_progress);
-	INIT_WORK(&rdata->subreq.work, smb2_readv_worker);
 	queue_work(cifsiod_wq, &rdata->subreq.work);
 	release_mid(mid);
 	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0,
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 0d4ed1229024..a3aa36c1869f 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -428,10 +428,9 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp);
 vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group);
 
 /* (Sub)request management API. */
-void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq,
-				bool was_async);
-void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
-				  int error, bool was_async);
+void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq, bool was_async);
+void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, bool was_async);
+void netfs_read_subreq_termination_worker(struct work_struct *work);
 void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
 			  enum netfs_sreq_ref_trace what);
 void netfs_put_subrequest(struct netfs_io_subrequest *subreq,

From nobody Sat Nov 23 23:18:39 2024
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells <dhowells@redhat.com>, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Linus Torvalds
Subject: [PATCH v4 11/33] netfs: Drop the was_async arg from netfs_read_subreq_terminated()
Date: Fri, 8 Nov 2024 17:32:12 +0000
Message-ID: <20241108173236.1382366-12-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Drop the was_async argument from netfs_read_subreq_terminated().  Almost
every caller is in process context and passes false.  Some filesystems
delegate the call to a workqueue to avoid doing the work in their network
message queue parsing thread.

The only exception is netfs_cache_read_terminated(), which handles
completion in the cache - usually a callback from the backing filesystem
in softirq context, though it can be from process context if an error
occurred.  In this case, delegate to a workqueue.
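As a sketch of the delegation pattern (modelled on the
netfs_cache_read_terminated() change below; the function name here is
illustrative): a completion callback that may run in softirq context
records the outcome in the subrequest and punts the actual termination to
the subrequest's work item, which netfs_alloc_subrequest() initialised to
netfs_read_subreq_termination_worker() for read requests:

	static void example_cache_end_io(void *priv, ssize_t transferred_or_error,
					 bool was_async)
	{
		struct netfs_io_subrequest *subreq = priv;

		if (transferred_or_error > 0) {
			subreq->transferred += transferred_or_error;
			subreq->error = 0;
		} else {
			subreq->error = transferred_or_error;
		}
		/* Termination may now sleep, so run it from a workqueue. */
		schedule_work(&subreq->work);
	}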
Suggested-by: Linus Torvalds
Link: https://lore.kernel.org/r/CAHk-=wiVC5Cgyz6QKXFu6fTaA6h4CjexDR-OV9kL6Vo5x9v8=A@mail.gmail.com/
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/9p/vfs_addr.c         |  2 +-
 fs/afs/file.c            |  6 ++---
 fs/afs/fsclient.c        |  2 +-
 fs/afs/yfsclient.c       |  2 +-
 fs/ceph/addr.c           |  6 ++---
 fs/netfs/buffered_read.c |  6 ++---
 fs/netfs/direct_read.c   |  2 +-
 fs/netfs/internal.h      |  2 +-
 fs/netfs/objects.c       |  2 +-
 fs/netfs/read_collect.c  | 53 +++++++++------------------------------
 fs/netfs/read_retry.c    |  2 +-
 fs/nfs/fscache.c         |  2 +-
 fs/nfs/fscache.h         |  2 +-
 fs/smb/client/file.c     |  2 +-
 include/linux/netfs.h    |  4 +--
 15 files changed, 33 insertions(+), 62 deletions(-)

diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 58a6bd284d88..e4144e1a10a9 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -84,7 +84,7 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
 		subreq->transferred += total;
 
 	subreq->error = err;
-	netfs_read_subreq_terminated(subreq, false);
+	netfs_read_subreq_terminated(subreq);
 }
 
 /**
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 56248a078bca..f717168da4ab 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -247,7 +247,7 @@ static void afs_fetch_data_notify(struct afs_operation *op)
 		if (req->pos + req->actual_len >= req->file_size)
 			__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
 		subreq->error = error;
-		netfs_read_subreq_terminated(subreq, false);
+		netfs_read_subreq_terminated(subreq);
 		req->subreq = NULL;
 	} else if (req->done) {
 		req->done(req);
@@ -304,7 +304,7 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req)
 	if (IS_ERR(op)) {
 		if (req->subreq) {
 			req->subreq->error = PTR_ERR(op);
-			netfs_read_subreq_terminated(req->subreq, false);
+			netfs_read_subreq_terminated(req->subreq);
 		}
 		return PTR_ERR(op);
 	}
@@ -325,7 +325,7 @@ static void afs_read_worker(struct work_struct *work)
 	fsreq = afs_alloc_read(GFP_NOFS);
 	if (!fsreq) {
 		subreq->error = -ENOMEM;
-		return netfs_read_subreq_terminated(subreq, false);
+		return netfs_read_subreq_terminated(subreq);
 	}
 
 	fsreq->subreq = subreq;
diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
index 098fa034a1cc..784f7daab112 100644
--- a/fs/afs/fsclient.c
+++ b/fs/afs/fsclient.c
@@ -352,7 +352,7 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
 		ret = afs_extract_data(call, true);
 		if (req->subreq) {
 			req->subreq->transferred += count_before - call->iov_len;
-			netfs_read_subreq_progress(req->subreq, false);
+			netfs_read_subreq_progress(req->subreq);
 		}
 		if (ret < 0)
 			return ret;
diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
index 024227aba4cd..368cf277d801 100644
--- a/fs/afs/yfsclient.c
+++ b/fs/afs/yfsclient.c
@@ -398,7 +398,7 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
 		ret = afs_extract_data(call, true);
 		if (req->subreq) {
 			req->subreq->transferred += count_before - call->iov_len;
-			netfs_read_subreq_progress(req->subreq, false);
+			netfs_read_subreq_progress(req->subreq);
 		}
 		if (ret < 0)
 			return ret;
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 459249ba6319..d008e7334db7 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -255,7 +255,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
 	}
 	subreq->error = err;
 	trace_netfs_sreq(subreq, netfs_sreq_trace_io_progress);
-	netfs_read_subreq_terminated(subreq, false);
+	netfs_read_subreq_terminated(subreq);
 	iput(req->r_inode);
 	ceph_dec_osd_stopping_blocker(fsc->mdsc);
 }
@@ -317,7 +317,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
 out:
 	subreq->error = err;
 	trace_netfs_sreq(subreq, netfs_sreq_trace_io_progress);
-	netfs_read_subreq_terminated(subreq, false);
+	netfs_read_subreq_terminated(subreq);
 	return true;
 }
 
@@ -431,7 +431,7 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
 	ceph_osdc_put_request(req);
 	if (err) {
 		subreq->error = err;
-		netfs_read_subreq_terminated(subreq, false);
+		netfs_read_subreq_terminated(subreq);
 	}
 	doutc(cl, "%llx.%llx result %d\n", ceph_vinop(inode), err);
 }
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 82c3b9957958..6fd4f3bef3b4 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -154,7 +154,7 @@ static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error)
 	} else {
 		subreq->error = transferred_or_error;
 	}
-	netfs_read_subreq_terminated(subreq, was_async);
+	schedule_work(&subreq->work);
 }
 
 /*
@@ -261,7 +261,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 			slice = netfs_prepare_read_iterator(subreq);
 			__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
 			subreq->error = 0;
-			netfs_read_subreq_terminated(subreq, false);
+			netfs_read_subreq_terminated(subreq);
 			goto done;
 		}
 
@@ -283,7 +283,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 	} while (size > 0);
 
 	if (atomic_dec_and_test(&rreq->nr_outstanding))
-		netfs_rreq_terminated(rreq, false);
+		netfs_rreq_terminated(rreq);
 
 	/* Defer error return as we may need to wait for outstanding I/O. */
 	cmpxchg(&rreq->error, 0, ret);
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index a3f23adbae0f..54027fd14904 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -100,7 +100,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
 	} while (size > 0);
 
 	if (atomic_dec_and_test(&rreq->nr_outstanding))
-		netfs_rreq_terminated(rreq, false);
+		netfs_rreq_terminated(rreq);
 	return ret;
 }
 
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 73887525e939..ba32ca61063c 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -85,7 +85,7 @@ static inline void netfs_see_request(struct netfs_io_request *rreq,
  * read_collect.c
  */
 void netfs_read_termination_worker(struct work_struct *work);
-void netfs_rreq_terminated(struct netfs_io_request *rreq, bool was_async);
+void netfs_rreq_terminated(struct netfs_io_request *rreq);
 
 /*
  * read_pgpriv2.c
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index f10fd56efa17..8c98b70eb3a4 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -56,7 +56,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	    origin == NETFS_READ_GAPS ||
 	    origin == NETFS_READ_FOR_WRITE ||
 	    origin == NETFS_DIO_READ)
-		INIT_WORK(&rreq->work, netfs_read_termination_worker);
+		INIT_WORK(&rreq->work, NULL);
 	else
 		INIT_WORK(&rreq->work, netfs_write_collection_worker);
 
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 00358894fac4..146abb2e399a 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -85,7 +85,7 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
  * Unlock any folios that are now completely read.  Returns true if the
  * subrequest is removed from the list.
  */
-static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was_async)
+static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 {
 	struct netfs_io_subrequest *prev, *next;
 	struct netfs_io_request *rreq = subreq->rreq;
@@ -228,8 +228,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
 		subreq->curr_folioq_slot = slot;
 		if (folioq && folioq_folio(folioq, slot))
 			subreq->curr_folio_order = folioq->orders[slot];
-		if (!was_async)
-			cond_resched();
+		cond_resched();
 		goto next_folio;
 	}
 
@@ -365,7 +364,7 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
  * Note that we're in normal kernel thread context at this point, possibly
  * running on a workqueue.
  */
-static void netfs_rreq_assess(struct netfs_io_request *rreq)
+void netfs_rreq_terminated(struct netfs_io_request *rreq)
 {
 	trace_netfs_rreq(rreq, netfs_rreq_trace_assess);
 
@@ -392,56 +391,29 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq)
 		netfs_pgpriv2_write_to_the_cache(rreq);
 }
 
-void netfs_read_termination_worker(struct work_struct *work)
-{
-	struct netfs_io_request *rreq =
-		container_of(work, struct netfs_io_request, work);
-	netfs_see_request(rreq, netfs_rreq_trace_see_work);
-	netfs_rreq_assess(rreq);
-	netfs_put_request(rreq, false, netfs_rreq_trace_put_work_complete);
-}
-
-/*
- * Handle the completion of all outstanding I/O operations on a read request.
- * We inherit a ref from the caller.
- */
-void netfs_rreq_terminated(struct netfs_io_request *rreq, bool was_async)
-{
-	if (!was_async)
-		return netfs_rreq_assess(rreq);
-	if (!work_pending(&rreq->work)) {
-		netfs_get_request(rreq, netfs_rreq_trace_get_work);
-		if (!queue_work(system_unbound_wq, &rreq->work))
-			netfs_put_request(rreq, was_async, netfs_rreq_trace_put_work_nq);
-	}
-}
-
 /**
  * netfs_read_subreq_progress - Note progress of a read operation.
  * @subreq: The read request that has terminated.
- * @was_async: True if we're in an asynchronous context.
  *
  * This tells the read side of netfs lib that a contributory I/O operation has
  * made some progress and that it may be possible to unlock some folios.
  *
  * Before calling, the filesystem should update subreq->transferred to track
  * the amount of data copied into the output buffer.
- *
- * If @was_async is true, the caller might be running in softirq or interrupt
- * context and we can't sleep.
  */
-void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq,
-				bool was_async)
+void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq)
 {
 	struct netfs_io_request *rreq = subreq->rreq;
 
+	might_sleep();
+
 	trace_netfs_sreq(subreq, netfs_sreq_trace_progress);
 
 	if (subreq->transferred > subreq->consumed &&
 	    (rreq->origin == NETFS_READAHEAD ||
 	     rreq->origin == NETFS_READPAGE ||
 	     rreq->origin == NETFS_READ_FOR_WRITE)) {
-		netfs_consume_read_data(subreq, was_async);
+		netfs_consume_read_data(subreq);
 		__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
 	}
 }
@@ -450,7 +422,6 @@ EXPORT_SYMBOL(netfs_read_subreq_progress);
 /**
  * netfs_read_subreq_terminated - Note the termination of an I/O operation.
  * @subreq: The I/O request that has terminated.
- * @was_async: True if we're in an asynchronous context.
  *
  * This tells the read helper that a contributory I/O operation has terminated,
  * one way or another, and that it should integrate the results.
@@ -464,7 +435,7 @@ EXPORT_SYMBOL(netfs_read_subreq_progress);
  * Before calling, the filesystem should update subreq->transferred to track
  * the amount of data copied into the output buffer.
  */
-void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, bool was_async)
+void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq)
 {
 	struct netfs_io_request *rreq = subreq->rreq;
 
@@ -498,7 +469,7 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, bool was_a
 	    (rreq->origin == NETFS_READAHEAD ||
 	     rreq->origin == NETFS_READPAGE ||
 	     rreq->origin == NETFS_READ_FOR_WRITE)) {
-		netfs_consume_read_data(subreq, was_async);
+		netfs_consume_read_data(subreq);
 		__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
 	}
 	rreq->transferred += subreq->transferred;
@@ -540,9 +511,9 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, bool was_a
 	}
 
 	if (atomic_dec_and_test(&rreq->nr_outstanding))
-		netfs_rreq_terminated(rreq, was_async);
+		netfs_rreq_terminated(rreq);
 
-	netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated);
+	netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_terminated);
 }
 EXPORT_SYMBOL(netfs_read_subreq_terminated);
 
@@ -558,6 +529,6 @@ void netfs_read_subreq_termination_worker(struct work_struct *work)
 	struct netfs_io_subrequest *subreq =
 		container_of(work, struct netfs_io_subrequest, work);
 
-	netfs_read_subreq_terminated(subreq, false);
+	netfs_read_subreq_terminated(subreq);
 }
 EXPORT_SYMBOL(netfs_read_subreq_termination_worker);
diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
index 0fe7677b4022..d1986cec3db7 100644
--- a/fs/netfs/read_retry.c
+++ b/fs/netfs/read_retry.c
@@ -232,7 +232,7 @@ void netfs_retry_reads(struct netfs_io_request *rreq)
 	netfs_retry_read_subrequests(rreq);
 
 	if (atomic_dec_and_test(&rreq->nr_outstanding))
-		netfs_rreq_terminated(rreq, false);
+		netfs_rreq_terminated(rreq);
 }
 
 /*
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index e585a7bcfe4d..2f3c4f773d73 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -309,7 +309,7 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
 	netfs = nfs_netfs_alloc(sreq);
 	if (!netfs) {
 		sreq->error = -ENOMEM;
-		return netfs_read_subreq_terminated(sreq, false);
+		return netfs_read_subreq_terminated(sreq);
 	}
 
 	pgio.pg_netfs = netfs; /* used in completion */
diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h
index 1d86f7cc7195..9d86868f4998 100644
--- a/fs/nfs/fscache.h
+++ b/fs/nfs/fscache.h
@@ -75,7 +75,7 @@ static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs)
 	netfs->sreq->transferred = min_t(s64, netfs->sreq->len,
 					 atomic64_read(&netfs->transferred));
 	netfs->sreq->error = netfs->error;
-	netfs_read_subreq_terminated(netfs->sreq, false);
+	netfs_read_subreq_terminated(netfs->sreq);
 	kfree(netfs);
 }
 static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi)
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 10dd440f8178..27a1757a278e 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -228,7 +228,7 @@ static void cifs_issue_read(struct netfs_io_subrequest *subreq)
 
 failed:
 	subreq->error = rc;
-	netfs_read_subreq_terminated(subreq, false);
+	netfs_read_subreq_terminated(subreq);
 }
 
 /*
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index a3aa36c1869f..738c9c8763f0 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -428,8 +428,8 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp);
 vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group);
 
 /* (Sub)request management API. */
-void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq, bool was_async);
-void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, bool was_async);
+void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq);
+void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq);
 void netfs_read_subreq_termination_worker(struct work_struct *work);
 void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
 			  enum netfs_sreq_ref_trace what);

From nobody Sat Nov 23 23:18:39 2024
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells <dhowells@redhat.com>, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [PATCH v4 12/33] netfs: Don't use bh spinlock
Date: Fri, 8 Nov 2024 17:32:13 +0000
Message-ID: <20241108173236.1382366-13-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

All the accessing of the subrequest lists is now done in process context,
possibly in a workqueue, but not now in a BH context, so we don't need the
lock against BH interference when taking the netfs_io_request::lock
spinlock.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
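The rule being applied, as a sketch (the example_* function names are
hypothetical; rreq->lock and rreq->subrequests are the real fields): a
spinlock that can also be taken from softirq context must suppress local
softirqs while held, whereas a lock that is only ever taken in process
context can use the plain, cheaper primitives:

	static void example_walk_bh_safe(struct netfs_io_request *rreq)
	{
		spin_lock_bh(&rreq->lock);	/* blocks local softirqs too */
		/* ... walk or modify rreq->subrequests ... */
		spin_unlock_bh(&rreq->lock);
	}

	static void example_walk_process_only(struct netfs_io_request *rreq)
	{
		spin_lock(&rreq->lock);		/* sufficient once no BH users remain */
		/* ... walk or modify rreq->subrequests ... */
		spin_unlock(&rreq->lock);
	}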
---
 fs/netfs/buffered_read.c |  4 ++--
 fs/netfs/direct_read.c   |  4 ++--
 fs/netfs/read_collect.c  | 20 ++++++++++----------
 fs/netfs/read_retry.c    |  8 ++++----
 fs/netfs/write_collect.c |  4 ++--
 fs/netfs/write_issue.c   |  4 ++--
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 6fd4f3bef3b4..4a48b79b8807 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -200,12 +200,12 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 		subreq->len	= size;
 
 		atomic_inc(&rreq->nr_outstanding);
-		spin_lock_bh(&rreq->lock);
+		spin_lock(&rreq->lock);
 		list_add_tail(&subreq->rreq_link, &rreq->subrequests);
 		subreq->prev_donated = rreq->prev_donated;
 		rreq->prev_donated = 0;
 		trace_netfs_sreq(subreq, netfs_sreq_trace_added);
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 
 		source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size);
 		subreq->source = source;
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 54027fd14904..1a20cc3979c7 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -68,12 +68,12 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
 		subreq->len	= size;
 
 		atomic_inc(&rreq->nr_outstanding);
-		spin_lock_bh(&rreq->lock);
+		spin_lock(&rreq->lock);
 		list_add_tail(&subreq->rreq_link, &rreq->subrequests);
 		subreq->prev_donated = rreq->prev_donated;
 		rreq->prev_donated = 0;
 		trace_netfs_sreq(subreq, netfs_sreq_trace_added);
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 
 		netfs_stat(&netfs_n_rh_download);
 		if (rreq->netfs_ops->prepare_read) {
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 146abb2e399a..53ef7e0f3e9c 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -142,7 +142,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 	prev_donated = READ_ONCE(subreq->prev_donated);
 	next_donated = READ_ONCE(subreq->next_donated);
 	if (prev_donated || next_donated) {
-		spin_lock_bh(&rreq->lock);
+		spin_lock(&rreq->lock);
 		prev_donated = subreq->prev_donated;
 		next_donated = subreq->next_donated;
 		subreq->start -= prev_donated;
@@ -155,7 +155,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 			next_donated = subreq->next_donated = 0;
 		}
 		trace_netfs_sreq(subreq, netfs_sreq_trace_add_donations);
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 	}
 
 	avail = subreq->transferred;
@@ -184,18 +184,18 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 	} else if (fpos < start) {
 		excess = fend - subreq->start;
 
-		spin_lock_bh(&rreq->lock);
+		spin_lock(&rreq->lock);
 		/* If we complete first on a folio split with the
 		 * preceding subreq, donate to that subreq - otherwise
 		 * we get the responsibility.
 		 */
 		if (subreq->prev_donated != prev_donated) {
-			spin_unlock_bh(&rreq->lock);
+			spin_unlock(&rreq->lock);
 			goto donation_changed;
 		}
 
 		if (list_is_first(&subreq->rreq_link, &rreq->subrequests)) {
-			spin_unlock_bh(&rreq->lock);
+			spin_unlock(&rreq->lock);
 			pr_err("Can't donate prior to front\n");
 			goto bad;
 		}
@@ -211,7 +211,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 
 		if (subreq->consumed >= subreq->len)
 			goto remove_subreq_locked;
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 	} else {
 		pr_err("fpos > start\n");
 		goto bad;
@@ -239,11 +239,11 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 	/* Donate the remaining downloaded data to one of the neighbouring
 	 * subrequests.  Note that we may race with them doing the same thing.
 	 */
-	spin_lock_bh(&rreq->lock);
+	spin_lock(&rreq->lock);
 
 	if (subreq->prev_donated != prev_donated ||
 	    subreq->next_donated != next_donated) {
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 		cond_resched();
 		goto donation_changed;
 	}
@@ -293,11 +293,11 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 	goto remove_subreq_locked;
 
 remove_subreq:
-	spin_lock_bh(&rreq->lock);
+	spin_lock(&rreq->lock);
 remove_subreq_locked:
 	subreq->consumed = subreq->len;
 	list_del(&subreq->rreq_link);
-	spin_unlock_bh(&rreq->lock);
+	spin_unlock(&rreq->lock);
 	netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_consumed);
 	return true;
 
diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
index d1986cec3db7..264f3cb6a7dc 100644
--- a/fs/netfs/read_retry.c
+++ b/fs/netfs/read_retry.c
@@ -139,12 +139,12 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 			__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
 			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
 
-			spin_lock_bh(&rreq->lock);
+			spin_lock(&rreq->lock);
 			list_add_tail(&subreq->rreq_link, &rreq->subrequests);
 			subreq->prev_donated += rreq->prev_donated;
 			rreq->prev_donated = 0;
 			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
-			spin_unlock_bh(&rreq->lock);
+			spin_unlock(&rreq->lock);
 
 			BUG_ON(!len);
 
@@ -215,9 +215,9 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
 		__clear_bit(NETFS_SREQ_RETRYING, &subreq->flags);
 	}
-	spin_lock_bh(&rreq->lock);
+	spin_lock(&rreq->lock);
 	list_splice_tail_init(&queue, &rreq->subrequests);
-	spin_unlock_bh(&rreq->lock);
+	spin_unlock(&rreq->lock);
 }
 
 /*
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index 85e8e94da90a..d291b31dd074 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -238,14 +238,14 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
 
 	cancel:
 		/* Remove if completely consumed. */
-		spin_lock_bh(&wreq->lock);
+		spin_lock(&wreq->lock);
 
 		remove = front;
 		list_del_init(&front->rreq_link);
 		front = list_first_entry_or_null(&stream->subrequests,
 						 struct netfs_io_subrequest, rreq_link);
 		stream->front = front;
-		spin_unlock_bh(&wreq->lock);
+		spin_unlock(&wreq->lock);
 		netfs_put_subrequest(remove, false,
 				     notes & SAW_FAILURE ?
 				     netfs_sreq_trace_put_cancel :
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index c186221b45c0..10b5300b9448 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -203,7 +203,7 @@ static void netfs_prepare_write(struct netfs_io_request *wreq,
 	 * the list.  The collector only goes nextwards and uses the lock to
 	 * remove entries off of the front.
 	 */
-	spin_lock_bh(&wreq->lock);
+	spin_lock(&wreq->lock);
 	list_add_tail(&subreq->rreq_link, &stream->subrequests);
 	if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
 		stream->front = subreq;
@@ -214,7 +214,7 @@ static void netfs_prepare_write(struct netfs_io_request *wreq,
 		}
 	}
 
-	spin_unlock_bh(&wreq->lock);
+	spin_unlock(&wreq->lock);
 
 	stream->construct = subreq;
 }

From nobody Sat Nov 23 23:18:39 2024
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells <dhowells@redhat.com>, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 13/33] afs: Don't use mutex for I/O operation lock
Date: Fri, 8 Nov 2024 17:32:14 +0000
Message-ID: <20241108173236.1382366-14-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Don't use the standard mutex for the I/O operation lock, but rather
implement our own, as the standard mutex must be released in the same
thread that locked it.  This is a problem when it comes to doing async
FetchData, where the lock will be dropped from the workqueue that
processed the incoming data and not from the issuing thread.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
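A sketch of the usage pattern this enables (the example_* functions are
hypothetical, and afs_lock_for_io()/afs_unlock_for_io() are static helpers
within fs/afs/fs_operation.c, shown here only to illustrate the pairing):
unlike mutex_lock()/mutex_unlock(), the lock may be released by a
different task than the one that took it, with the release handing
ownership directly to the next queued waiter:

	static void example_start_fetch(struct afs_vnode *vnode)
	{
		afs_lock_for_io(vnode);		/* taken in the issuing thread */
		/* ... queue the async FetchData operation ... */
	}

	static void example_fetch_done(struct afs_vnode *vnode)
	{
		/* May run from the workqueue that parsed the incoming data,
		 * i.e. not the task that called afs_lock_for_io().
		 */
		afs_unlock_for_io(vnode);
	}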
+ */ +static int afs_lock_for_io_interruptible(struct afs_vnode *vnode) +{ + struct afs_io_locker myself =3D { .task =3D current, }; + int ret =3D 0; + + spin_lock(&vnode->lock); + + if (!test_and_set_bit(AFS_VNODE_IO_LOCK, &vnode->flags)) { + spin_unlock(&vnode->lock); + return 0; + } + + list_add_tail(&myself.link, &vnode->io_lock_waiters); + spin_unlock(&vnode->lock); + + for (;;) { + set_current_state(TASK_INTERRUPTIBLE); + if (smp_load_acquire(&myself.have_lock) || + signal_pending(current)) + break; + schedule(); + } + __set_current_state(TASK_RUNNING); + + /* If we got a signal, try to transfer the lock onto the next + * waiter. + */ + if (unlikely(signal_pending(current))) { + spin_lock(&vnode->lock); + if (myself.have_lock) { + spin_unlock(&vnode->lock); + afs_unlock_for_io(vnode); + } else { + list_del(&myself.link); + spin_unlock(&vnode->lock); + } + ret =3D -ERESTARTSYS; + } + return ret; +} + /* * Lock the vnode(s) being operated upon. */ @@ -60,7 +159,7 @@ static bool afs_get_io_locks(struct afs_operation *op) _enter(""); =20 if (op->flags & AFS_OPERATION_UNINTR) { - mutex_lock(&vnode->io_lock); + afs_lock_for_io(vnode); op->flags |=3D AFS_OPERATION_LOCK_0; _leave(" =3D t [1]"); return true; @@ -72,7 +171,7 @@ static bool afs_get_io_locks(struct afs_operation *op) if (vnode2 > vnode) swap(vnode, vnode2); =20 - if (mutex_lock_interruptible(&vnode->io_lock) < 0) { + if (afs_lock_for_io_interruptible(vnode) < 0) { afs_op_set_error(op, -ERESTARTSYS); op->flags |=3D AFS_OPERATION_STOP; _leave(" =3D f [I 0]"); @@ -81,10 +180,10 @@ static bool afs_get_io_locks(struct afs_operation *op) op->flags |=3D AFS_OPERATION_LOCK_0; =20 if (vnode2) { - if (mutex_lock_interruptible_nested(&vnode2->io_lock, 1) < 0) { + if (afs_lock_for_io_interruptible(vnode2) < 0) { afs_op_set_error(op, -ERESTARTSYS); op->flags |=3D AFS_OPERATION_STOP; - mutex_unlock(&vnode->io_lock); + afs_unlock_for_io(vnode); op->flags &=3D ~AFS_OPERATION_LOCK_0; _leave(" =3D f [I 1]"); return false; @@ -104,9 +203,9 @@ static void afs_drop_io_locks(struct afs_operation *op) _enter(""); =20 if (op->flags & AFS_OPERATION_LOCK_1) - mutex_unlock(&vnode2->io_lock); + afs_unlock_for_io(vnode2); if (op->flags & AFS_OPERATION_LOCK_0) - mutex_unlock(&vnode->io_lock); + afs_unlock_for_io(vnode); } =20 static void afs_prepare_vnode(struct afs_operation *op, struct afs_vnode_p= aram *vp, diff --git a/fs/afs/internal.h b/fs/afs/internal.h index c9d620175e80..07b8f7083e73 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -702,13 +702,14 @@ struct afs_vnode { struct afs_file_status status; /* AFS status info for this file */ afs_dataversion_t invalid_before; /* Child dentries are invalid before th= is */ struct afs_permits __rcu *permit_cache; /* cache of permits so far obtain= ed */ - struct mutex io_lock; /* Lock for serialising I/O on this mutex */ + struct list_head io_lock_waiters; /* Threads waiting for the I/O lock */ struct rw_semaphore validate_lock; /* lock for validating this vnode */ struct rw_semaphore rmdir_lock; /* Lock for rmdir vs sillyrename */ struct key *silly_key; /* Silly rename key */ spinlock_t wb_lock; /* lock for wb_keys */ spinlock_t lock; /* waitqueue/flags lock */ unsigned long flags; +#define AFS_VNODE_IO_LOCK 0 /* Set if the I/O serialisation lock is held = */ #define AFS_VNODE_UNSET 1 /* set if vnode attributes not yet set */ #define AFS_VNODE_DIR_VALID 2 /* Set if dir contents are valid */ #define AFS_VNODE_ZAP_DATA 3 /* set if vnode's data should be invalidated= */ diff --git a/fs/afs/super.c 
b/fs/afs/super.c index f3ba1c3e72f5..7631302c1984 100644 --- a/fs/afs/super.c +++ b/fs/afs/super.c @@ -663,7 +663,7 @@ static void afs_i_init_once(void *_vnode) =20 memset(vnode, 0, sizeof(*vnode)); inode_init_once(&vnode->netfs.inode); - mutex_init(&vnode->io_lock); + INIT_LIST_HEAD(&vnode->io_lock_waiters); init_rwsem(&vnode->validate_lock); spin_lock_init(&vnode->wb_lock); spin_lock_init(&vnode->lock); From nobody Sat Nov 23 23:18:39 2024 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1250D21EBA7 for ; Fri, 8 Nov 2024 17:34:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731087274; cv=none; b=S87pNIQwP/CubJJVm9nAqInSOHagjBsUcACefvLwmoQrKX3Vds4nDMD1jSFfh8qHWq0sOm6Zx21fYpsr/axOBFrz/+onwmSKJygigCU38DXx/WwrvUEliRq+hcKRjKfE1pcTMnUH5IcWyxK70382IMl7gzSUbBKvB0LaafQlbd4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731087274; c=relaxed/simple; bh=YsDw0J2VEtDAGVGAJbCMxDd9Vr5CQKQkkWhzL5v6Zr0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lYzCvs+lV0u5Wky4R+FVJQbAmn27/YS63bugG9ZThFyXv3EDDp+7ylNa6h9Ug1qdNHumMRSdf6vjYPRPIDTRoKzHw83Qyp5ZNtu9psitq7+1g/zBh5BEl2N2xfSC12tXYcehSXCduABHXB5Zx+gZoIvhhNJtxgCMgRaFet69aoQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=PnfzPwpk; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="PnfzPwpk" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1731087272; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=E8xdMX/uFcK2gZEOhPcswqOMkVx14v1DliFqQKEHbv4=; b=PnfzPwpk7U9OHiugcys1559rR3O1RUfAkzDSfKi6F2Ig+DdPdSbESe+RPMIMDtxrHmQcSS s+LCFcvDoWJ+64O5zuTryq0MqyKb+gvPMd3J/QeKFFeq3D7qWrCnHw3OKMFK0QQPMg+OM3 ikWn7Gdm2IgvA5N4LXPOfjoZt+kteeA= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-404-48C0RI4NNNuNGyazsDs61w-1; Fri, 08 Nov 2024 12:34:30 -0500 X-MC-Unique: 48C0RI4NNNuNGyazsDs61w-1 X-Mimecast-MFC-AGG-ID: 48C0RI4NNNuNGyazsDs61w Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 954BF1955F37; Fri, 8 Nov 2024 17:34:27 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown 
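The scheme above queues an afs_io_locker on the vnode and has the unlocker
wake the first waiter explicitly, which is what permits unlocking from a
different thread.  As a rough userspace analogue of the constraint being
worked around (a sketch only, not the patch's code): a pthread_mutex_t,
like a kernel mutex, must be unlocked by the thread that owns it, so a
thread-transferable lock has to be composed from a guard mutex, a flag and
a condition variable instead.

#include <pthread.h>
#include <stdbool.h>

struct xfer_lock {
	pthread_mutex_t guard;	/* protects held/waiters, never held across I/O */
	pthread_cond_t  wakeup;
	bool            held;
	unsigned int    waiters;
};

static void xfer_lock_acquire(struct xfer_lock *l)
{
	pthread_mutex_lock(&l->guard);
	l->waiters++;
	while (l->held)
		pthread_cond_wait(&l->wakeup, &l->guard);
	l->waiters--;
	l->held = true;		/* ownership is just this flag, not a thread ID */
	pthread_mutex_unlock(&l->guard);
}

/* Safe to call from any thread, e.g. a worker that completed the I/O. */
static void xfer_lock_release(struct xfer_lock *l)
{
	pthread_mutex_lock(&l->guard);
	l->held = false;
	if (l->waiters)
		pthread_cond_signal(&l->wakeup);
	pthread_mutex_unlock(&l->guard);
}

The patch is stronger than this sketch: afs_unlock_for_io() hands
ownership directly to the first queued waiter via have_lock, so the lock
never goes free while waiters are queued, and the smp_store_release()/
smp_load_acquire() pair orders the hand-off against the waiter's task
state.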
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 14/33] afs: Fix EEXIST error returned from afs_rmdir() to be ENOTEMPTY
Date: Fri, 8 Nov 2024 17:32:15 +0000
Message-ID: <20241108173236.1382366-15-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

AFS servers pass back a code indicating EEXIST, rather than ENOTEMPTY,
when they're asked to remove a directory that is not empty, because not
all the systems that an AFS server can run on have the latter error
available, and AFS predates the general addition of that error.

Fix afs_rmdir() to translate EEXIST to ENOTEMPTY.

Fixes: 260a980317da ("[AFS]: Add "directory write" support.")
Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/dir.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index ada363af5aab..50edd1cae28a 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -1472,7 +1472,12 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
 		op->file[1].vnode = vnode;
 	}

-	return afs_do_sync_operation(op);
+	ret = afs_do_sync_operation(op);
+
+	/* Not all systems that can host afs servers have ENOTEMPTY. */
+	if (ret == -EEXIST)
+		ret = -ENOTEMPTY;
+	return ret;

 error:
 	return afs_put_operation(op);
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 15/33] afs: Fix directory format encoding struct
Date: Fri, 8 Nov 2024 17:32:16 +0000
Message-ID: <20241108173236.1382366-16-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

The AFS directory format structure, union afs_xdr_dir_block::meta, has too
many alloc counter slots declared and so pushes the hash table along and
over the data.  This doesn't cause a problem at the moment because I'm
currently ignoring the hash table and only using the correct number of
alloc_ctrs in the code anyway.  In future, however, I should start using
the hash table to try and speed up afs_lookup().

Fix this by using the correct constant to declare the counter array.

Fixes: 4ea219a839bf ("afs: Split the directory content defs into a header")
Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/xdr_fs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/afs/xdr_fs.h b/fs/afs/xdr_fs.h
index 8ca868164507..cc5f143d21a3 100644
--- a/fs/afs/xdr_fs.h
+++ b/fs/afs/xdr_fs.h
@@ -88,7 +88,7 @@ union afs_xdr_dir_block {

 	struct {
 		struct afs_xdr_dir_hdr	hdr;
-		u8			alloc_ctrs[AFS_DIR_MAX_BLOCKS];
+		u8			alloc_ctrs[AFS_DIR_BLOCKS_WITH_CTR];
 		__be16			hashtable[AFS_DIR_HASHTBL_SIZE];
 	} meta;
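A compile-time check could pin this layout down; the following is a
hypothetical guard (my sketch, not part of the patch) that the old
AFS_DIR_MAX_BLOCKS sizing would have tripped:

#include <stddef.h>

/* Hypothetical layout guard: the hash table must begin directly after
 * one 1-byte allocation counter per counted block.  With
 * alloc_ctrs[AFS_DIR_MAX_BLOCKS] this offset was too large and the
 * hash table overlaid the data. */
static_assert(offsetof(union afs_xdr_dir_block, meta.hashtable) ==
	      sizeof(struct afs_xdr_dir_hdr) + AFS_DIR_BLOCKS_WITH_CTR,
	      "alloc_ctrs must not displace the directory hash table");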
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 16/33] netfs: Remove some extraneous directory invalidations
Date: Fri, 8 Nov 2024 17:32:17 +0000
Message-ID: <20241108173236.1382366-17-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

In the directory editing code, we shouldn't re-invalidate the directory
if it has already been invalidated.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/dir_edit.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index fe223fb78111..13fb236a3f50 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -247,7 +247,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 		 */
 		index = b / AFS_DIR_BLOCKS_PER_PAGE;
 		if (nr_blocks >= AFS_DIR_MAX_BLOCKS)
-			goto error;
+			goto error_too_many_blocks;
 		if (index >= folio_nr_pages(folio0)) {
 			folio = afs_dir_get_folio(vnode, index);
 			if (!folio)
@@ -260,7 +260,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,

 		/* Abandon the edit if we got a callback break. */
 		if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
-			goto invalidated;
+			goto already_invalidated;

 		_debug("block %u: %2u %3u %u",
 		       b,
@@ -348,9 +348,8 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	_leave("");
 	return;

-invalidated:
+already_invalidated:
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_create_inval, 0, 0, 0, 0, name->name);
-	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	kunmap_local(block);
 	if (folio != folio0) {
 		folio_unlock(folio);
@@ -358,9 +357,10 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	}
 	goto out_unmap;

+error_too_many_blocks:
+	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 error:
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_create_error, 0, 0, 0, 0, name->name);
-	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	goto out_unmap;
 }

@@ -421,7 +421,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,

 		/* Abandon the edit if we got a callback break. */
 		if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
-			goto invalidated;
+			goto already_invalidated;

 		if (b > AFS_DIR_BLOCKS_WITH_CTR ||
 		    meta->meta.alloc_ctrs[b] <= AFS_DIR_SLOTS_PER_BLOCK - 1 - need_slots) {
@@ -475,10 +475,9 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 	_leave("");
 	return;

-invalidated:
+already_invalidated:
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_delete_inval,
 			   0, 0, 0, 0, name->name);
-	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	kunmap_local(block);
 	if (folio != folio0) {
 		folio_unlock(folio);
@@ -489,7 +488,6 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 error:
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_delete_error,
 			   0, 0, 0, 0, name->name);
-	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	goto out_unmap;
 }

@@ -530,7 +528,7 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_d

 		/* Abandon the edit if we got a callback break. */
 		if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
-			goto invalidated;
+			goto already_invalidated;

 		slot = afs_dir_scan_block(block, &dotdot_name, b);
 		if (slot >= 0)
@@ -564,18 +562,16 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_d
 	_leave("");
 	return;

-invalidated:
+already_invalidated:
 	kunmap_local(block);
 	folio_unlock(folio);
 	folio_put(folio);
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_update_inval,
 			   0, 0, 0, 0, "..");
-	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	goto out;

 error:
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_update_error,
 			   0, 0, 0, 0, "..");
-	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	goto out;
 }
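The pattern being preserved is that AFS_VNODE_DIR_VALID is cleared with a
test-and-clear at the point of first invalidation, so the extra clear_bit()
calls on the already-invalidated paths were redundant.  A minimal userspace
analogue of that one-shot behaviour (illustrative sketch, hypothetical
names, not the kernel API):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool dir_valid = true;

/* Only the caller that flips valid -> invalid does the bookkeeping;
 * later callers see "already invalid" and do nothing. */
static void invalidate_dir(const char *why)
{
	if (atomic_exchange(&dir_valid, false))
		printf("invalidated: %s\n", why);	/* stats/trace hook */
}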
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 17/33] cachefiles: Add some subrequest tracepoints
Date: Fri, 8 Nov 2024 17:32:18 +0000
Message-ID: <20241108173236.1382366-18-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Add some tracepoints into the cachefiles write paths.

Signed-off-by: David Howells
cc: netfs@lists.linux.dev
---
 fs/cachefiles/io.c           | 4 ++++
 include/trace/events/netfs.h | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 6a821a959b59..92058ae43488 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"

 struct cachefiles_kiocb {
@@ -366,6 +367,7 @@ static int cachefiles_write(struct netfs_cache_resources *cres,
 	if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE)) {
 		if (term_func)
 			term_func(term_func_priv, -ENOBUFS, false);
+		trace_netfs_sreq(term_func_priv, netfs_sreq_trace_cache_nowrite);
 		return -ENOBUFS;
 	}

@@ -695,6 +697,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
 		iov_iter_truncate(&subreq->io_iter, len);
 	}

+	trace_netfs_sreq(subreq, netfs_sreq_trace_cache_prepare);
 	cachefiles_begin_secure(cache, &saved_cred);
 	ret = __cachefiles_prepare_write(object, cachefiles_cres_file(cres),
 					 &start, &len, len, true);
@@ -704,6 +707,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
 		return;
 	}

+	trace_netfs_sreq(subreq, netfs_sreq_trace_cache_write);
 	cachefiles_write(&subreq->rreq->cache_resources,
 			 subreq->start, &subreq->io_iter,
 			 netfs_write_subrequest_terminated, subreq);
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index a0f5b13aab86..7c3c866ae183 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -74,6 +74,9 @@
 #define netfs_sreq_traces \
 	EM(netfs_sreq_trace_add_donations,	"+DON ")	\
 	EM(netfs_sreq_trace_added,		"ADD  ")	\
+	EM(netfs_sreq_trace_cache_nowrite,	"CA-NW")	\
+	EM(netfs_sreq_trace_cache_prepare,	"CA-PR")	\
+	EM(netfs_sreq_trace_cache_write,	"CA-WR")	\
 	EM(netfs_sreq_trace_clear,		"CLEAR")	\
 	EM(netfs_sreq_trace_discard,		"DSCRD")	\
 	EM(netfs_sreq_trace_donate_to_prev,	"DON-P")	\
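The EM()/E_() pairs seen here (and in the afs trace header later in the
series) implement a single-definition string table: the same macro list is
expanded once into enum constants and once into the name table consumed by
__print_symbolic().  A cut-down, compilable illustration of the idiom
(illustrative names, not the kernel header itself):

#include <stdio.h>

/* One list, expanded twice with different EM()/E_() definitions. */
#define sreq_traces \
	EM(trace_cache_nowrite, "CA-NW") \
	EM(trace_cache_prepare, "CA-PR") \
	E_(trace_cache_write,   "CA-WR")

#define EM(a, b) a,
#define E_(a, b) a
enum sreq_trace { sreq_traces };
#undef EM
#undef E_

#define EM(a, b) [a] = b,
#define E_(a, b) [a] = b
static const char *sreq_trace_names[] = { sreq_traces };

int main(void)
{
	printf("%s\n", sreq_trace_names[trace_cache_prepare]); /* CA-PR */
	return 0;
}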
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 18/33] cachefiles: Add auxiliary data trace
Date: Fri, 8 Nov 2024 17:32:19 +0000
Message-ID: <20241108173236.1382366-19-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Add a display of the first 8 bytes of the downloaded auxiliary data and of
the on-disk stored auxiliary data, as these are used in coherency
management.  In the case of afs, this data holds the data version number.

Signed-off-by: David Howells
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/cachefiles/xattr.c             |  9 ++++++++-
 include/trace/events/cachefiles.h | 13 ++++++++++---
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
index 7c6f260a3be5..52383b1d0ba6 100644
--- a/fs/cachefiles/xattr.c
+++ b/fs/cachefiles/xattr.c
@@ -77,6 +77,7 @@ int cachefiles_set_object_xattr(struct cachefiles_object *object)
 			trace_cachefiles_vfs_error(object, file_inode(file), ret,
 						   cachefiles_trace_setxattr_error);
 		trace_cachefiles_coherency(object, file_inode(file)->i_ino,
+					   be64_to_cpup((__be64 *)buf->data),
 					   buf->content,
 					   cachefiles_coherency_set_fail);
 		if (ret != -ENOMEM)
@@ -85,6 +86,7 @@ int cachefiles_set_object_xattr(struct cachefiles_object *object)
 					    "Failed to set xattr with error %d", ret);
 	} else {
 		trace_cachefiles_coherency(object, file_inode(file)->i_ino,
+					   be64_to_cpup((__be64 *)buf->data),
 					   buf->content,
 					   cachefiles_coherency_set_ok);
 	}
@@ -126,7 +128,10 @@ int cachefiles_check_auxdata(struct cachefiles_object *object, struct file *file
 				   object,
 				   "Failed to read aux with error %zd", xlen);
 		why = cachefiles_coherency_check_xattr;
-	} else if (buf->type != CACHEFILES_COOKIE_TYPE_DATA) {
+		goto out;
+	}
+
+	if (buf->type != CACHEFILES_COOKIE_TYPE_DATA) {
 		why = cachefiles_coherency_check_type;
 	} else if (memcmp(buf->data, p, len) != 0) {
 		why = cachefiles_coherency_check_aux;
@@ -141,7 +146,9 @@ int cachefiles_check_auxdata(struct cachefiles_object *object, struct file *file
 		ret = 0;
 	}

+out:
 	trace_cachefiles_coherency(object, file_inode(file)->i_ino,
+				   be64_to_cpup((__be64 *)buf->data),
 				   buf->content, why);
 	kfree(buf);
 	return ret;
diff --git a/include/trace/events/cachefiles.h b/include/trace/events/cachefiles.h
index 7d931db02b93..775a72e6adc6 100644
--- a/include/trace/events/cachefiles.h
+++ b/include/trace/events/cachefiles.h
@@ -380,10 +380,11 @@ TRACE_EVENT(cachefiles_rename,
 TRACE_EVENT(cachefiles_coherency,
 	    TP_PROTO(struct cachefiles_object *obj,
 		     ino_t ino,
+		     u64 disk_aux,
 		     enum cachefiles_content content,
 		     enum cachefiles_coherency_trace why),

-	    TP_ARGS(obj, ino, content, why),
+	    TP_ARGS(obj, ino, disk_aux, content, why),

 	    /* Note that obj may be NULL */
 	    TP_STRUCT__entry(
@@ -391,6 +392,8 @@ TRACE_EVENT(cachefiles_coherency,
 		    __field(enum cachefiles_coherency_trace,	why	)
 		    __field(enum cachefiles_content,		content	)
 		    __field(u64,				ino	)
+		    __field(u64,				aux	)
+		    __field(u64,				disk_aux)
 			     ),

 	    TP_fast_assign(
@@ -398,13 +401,17 @@ TRACE_EVENT(cachefiles_coherency,
 		    __entry->why	= why;
 		    __entry->content	= content;
 		    __entry->ino	= ino;
+		    __entry->aux	= be64_to_cpup((__be64 *)obj->cookie->inline_aux);
+		    __entry->disk_aux	= disk_aux;
 			   ),

-	    TP_printk("o=%08x %s B=%llx c=%u",
+	    TP_printk("o=%08x %s B=%llx c=%u aux=%llx dsk=%llx",
 		      __entry->obj,
 		      __print_symbolic(__entry->why, cachefiles_coherency_traces),
 		      __entry->ino,
-		      __entry->content)
+		      __entry->content,
+		      __entry->aux,
+		      __entry->disk_aux)
 	    );

 TRACE_EVENT(cachefiles_vol_coherency,
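The trace shows the first 8 bytes of the auxiliary blob decoded with
be64_to_cpup(), which for afs yields the data version.  A standalone sketch
of the same decoding (assuming, as the commit message says, a big-endian
64-bit value at the start of the blob):

#include <stdint.h>

/* Equivalent of the kernel's be64_to_cpup() on the first 8 bytes of the
 * aux buffer, independent of host endianness. */
static uint64_t aux_data_version(const unsigned char aux[8])
{
	uint64_t v = 0;

	for (int i = 0; i < 8; i++)
		v = (v << 8) | aux[i];
	return v;
}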
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 19/33] afs: Add more tracepoints to do with tracking validity
Date: Fri, 8 Nov 2024 17:32:20 +0000
Message-ID: <20241108173236.1382366-20-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Add wrappers to set and clear the callback promise and to mark a directory
as invalidated, and add tracepoints to track these events:

 (1) afs_cb_promise: Log when a callback promise is set on a vnode.

 (2) afs_vnode_invalid: Log when the server's callback promise for a vnode
     is no longer valid and we need to refetch the vnode metadata.

 (3) afs_dir_invalid: Log when the contents of a directory are marked
     invalid, requiring a refetch from the server and invalidation of the
     cache.

and two tracepoints to record data version number management:

 (4) afs_set_dv: Log when the DV is recorded on a vnode.

 (5) afs_dv_mismatch: Log when the DV recorded on a vnode plus the
     expected delta for the operation does not match the DV we got back
     from the server (see the sketch below).
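Point (5) is the invariant the new tracepoint instruments; schematically
(hypothetical helper, not in the patch):

#include <stdbool.h>
#include <stdint.h>

/* An operation expected to bump the data version by "dv_delta" must land
 * exactly on dv_before + dv_delta; anything else means a third party
 * changed the file and cached data must be revalidated. */
static bool dv_as_expected(uint64_t dv_before, int dv_delta, uint64_t dv_server)
{
	return dv_before + (uint64_t)dv_delta == dv_server;
}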
Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- fs/afs/callback.c | 4 +- fs/afs/dir.c | 14 ++- fs/afs/dir_edit.c | 16 ++-- fs/afs/inode.c | 23 +++-- fs/afs/internal.h | 32 +++++++ fs/afs/rotate.c | 4 +- fs/afs/validation.c | 31 ++++--- include/trace/events/afs.h | 169 +++++++++++++++++++++++++++++++++++-- 8 files changed, 248 insertions(+), 45 deletions(-) diff --git a/fs/afs/callback.c b/fs/afs/callback.c index 99b2c8172021..69e1dd55b160 100644 --- a/fs/afs/callback.c +++ b/fs/afs/callback.c @@ -41,7 +41,7 @@ static void afs_volume_init_callback(struct afs_volume *v= olume) =20 list_for_each_entry(vnode, &volume->open_mmaps, cb_mmap_link) { if (vnode->cb_v_check !=3D atomic_read(&volume->cb_v_break)) { - atomic64_set(&vnode->cb_expires_at, AFS_NO_CB_PROMISE); + afs_clear_cb_promise(vnode, afs_cb_promise_clear_vol_init_cb); queue_work(system_unbound_wq, &vnode->cb_work); } } @@ -79,7 +79,7 @@ void __afs_break_callback(struct afs_vnode *vnode, enum a= fs_cb_break_reason reas _enter(""); =20 clear_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags); - if (atomic64_xchg(&vnode->cb_expires_at, AFS_NO_CB_PROMISE) !=3D AFS_NO_C= B_PROMISE) { + if (afs_clear_cb_promise(vnode, afs_cb_promise_clear_cb_break)) { vnode->cb_break++; vnode->cb_v_check =3D atomic_read(&vnode->volume->cb_v_break); afs_clear_permits(vnode); diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 50edd1cae28a..f36a28a8f27b 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -324,8 +324,7 @@ static struct afs_read *afs_read_dir(struct afs_vnode *= dvnode, struct key *key) =20 folio =3D filemap_get_folio(mapping, i); if (IS_ERR(folio)) { - if (test_and_clear_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) - afs_stat_v(dvnode, n_inval); + afs_invalidate_dir(dvnode, afs_dir_invalid_reclaimed_folio); folio =3D __filemap_get_folio(mapping, i, FGP_LOCK | FGP_CREAT, mapping->gfp_mask); @@ -1388,8 +1387,8 @@ static void afs_dir_remove_subdir(struct dentry *dent= ry) =20 clear_nlink(&vnode->netfs.inode); set_bit(AFS_VNODE_DELETED, &vnode->flags); - atomic64_set(&vnode->cb_expires_at, AFS_NO_CB_PROMISE); - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_clear_cb_promise(vnode, afs_cb_promise_clear_rmdir); + afs_invalidate_dir(vnode, afs_dir_invalid_subdir_removed); } } =20 @@ -1851,6 +1850,7 @@ static void afs_rename_success(struct afs_operation *= op) write_seqlock(&vnode->cb_lock); =20 new_dv =3D vnode->status.data_version + 1; + trace_afs_set_dv(vnode, new_dv); vnode->status.data_version =3D new_dv; inode_set_iversion_raw(&vnode->netfs.inode, new_dv); =20 @@ -2063,8 +2063,7 @@ static bool afs_dir_release_folio(struct folio *folio= , gfp_t gfp_flags) folio_detach_private(folio); =20 /* The directory will need reloading. */ - if (test_and_clear_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) - afs_stat_v(dvnode, n_relpg); + afs_invalidate_dir(dvnode, afs_dir_invalid_release_folio); return true; } =20 @@ -2081,8 +2080,7 @@ static void afs_dir_invalidate_folio(struct folio *fo= lio, size_t offset, BUG_ON(!folio_test_locked(folio)); =20 /* The directory will need reloading. 
*/ - if (test_and_clear_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) - afs_stat_v(dvnode, n_inval); + afs_invalidate_dir(dvnode, afs_dir_invalid_inval_folio); =20 /* we clean up only if the entire folio is being invalidated */ if (offset =3D=3D 0 && length =3D=3D folio_size(folio)) diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c index 13fb236a3f50..5d092c8c0157 100644 --- a/fs/afs/dir_edit.c +++ b/fs/afs/dir_edit.c @@ -116,7 +116,7 @@ static struct folio *afs_dir_get_folio(struct afs_vnode= *vnode, pgoff_t index) FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mapping->gfp_mask); if (IS_ERR(folio)) { - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_invalidate_dir(vnode, afs_dir_invalid_edit_get_block); return NULL; } if (!folio_test_private(folio)) @@ -220,7 +220,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, i_size =3D i_size_read(&vnode->netfs.inode); if (i_size > AFS_DIR_BLOCK_SIZE * AFS_DIR_MAX_BLOCKS || (i_size & (AFS_DIR_BLOCK_SIZE - 1))) { - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_invalidate_dir(vnode, afs_dir_invalid_edit_add_bad_size); return; } =20 @@ -299,7 +299,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, * succeeded. Download the directory again. */ trace_afs_edit_dir(vnode, why, afs_edit_dir_create_nospc, 0, 0, 0, 0, nam= e->name); - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_invalidate_dir(vnode, afs_dir_invalid_edit_add_no_slots); goto out_unmap; =20 new_directory: @@ -358,7 +358,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, goto out_unmap; =20 error_too_many_blocks: - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_invalidate_dir(vnode, afs_dir_invalid_edit_add_too_many_blocks); error: trace_afs_edit_dir(vnode, why, afs_edit_dir_create_error, 0, 0, 0, 0, nam= e->name); goto out_unmap; @@ -388,7 +388,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, if (i_size < AFS_DIR_BLOCK_SIZE || i_size > AFS_DIR_BLOCK_SIZE * AFS_DIR_MAX_BLOCKS || (i_size & (AFS_DIR_BLOCK_SIZE - 1))) { - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_invalidate_dir(vnode, afs_dir_invalid_edit_rem_bad_size); return; } nr_blocks =3D i_size / AFS_DIR_BLOCK_SIZE; @@ -440,7 +440,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, /* Didn't find the dirent to clobber. Download the directory again. */ trace_afs_edit_dir(vnode, why, afs_edit_dir_delete_noent, 0, 0, 0, 0, name->name); - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_invalidate_dir(vnode, afs_dir_invalid_edit_rem_wrong_name); goto out_unmap; =20 found_dirent: @@ -510,7 +510,7 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode= , struct afs_vnode *new_d =20 i_size =3D i_size_read(&vnode->netfs.inode); if (i_size < AFS_DIR_BLOCK_SIZE) { - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_invalidate_dir(vnode, afs_dir_invalid_edit_upd_bad_size); return; } nr_blocks =3D i_size / AFS_DIR_BLOCK_SIZE; @@ -542,7 +542,7 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode= , struct afs_vnode *new_d /* Didn't find the dirent to clobber. Download the directory again. 
*/ trace_afs_edit_dir(vnode, why, afs_edit_dir_update_nodd, 0, 0, 0, 0, ".."); - clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + afs_invalidate_dir(vnode, afs_dir_invalid_edit_upd_no_dd); goto out; =20 found_dirent: diff --git a/fs/afs/inode.c b/fs/afs/inode.c index a95e77670b49..495ecef91679 100644 --- a/fs/afs/inode.c +++ b/fs/afs/inode.c @@ -140,15 +140,17 @@ static int afs_inode_init_from_status(struct afs_oper= ation *op, afs_set_netfs_context(vnode); =20 vnode->invalid_before =3D status->data_version; + trace_afs_set_dv(vnode, status->data_version); inode_set_iversion_raw(&vnode->netfs.inode, status->data_version); =20 if (!vp->scb.have_cb) { /* it's a symlink we just created (the fileserver * didn't give us a callback) */ - atomic64_set(&vnode->cb_expires_at, AFS_NO_CB_PROMISE); + afs_clear_cb_promise(vnode, afs_cb_promise_set_new_symlink); } else { vnode->cb_server =3D op->server; - atomic64_set(&vnode->cb_expires_at, vp->scb.callback.expires_at); + afs_set_cb_promise(vnode, vp->scb.callback.expires_at, + afs_cb_promise_set_new_inode); } =20 write_sequnlock(&vnode->cb_lock); @@ -207,12 +209,17 @@ static void afs_apply_status(struct afs_operation *op, if (vp->update_ctime) inode_set_ctime_to_ts(inode, op->ctime); =20 - if (vnode->status.data_version !=3D status->data_version) + if (vnode->status.data_version !=3D status->data_version) { + trace_afs_set_dv(vnode, status->data_version); data_changed =3D true; + } =20 vnode->status =3D *status; =20 if (vp->dv_before + vp->dv_delta !=3D status->data_version) { + trace_afs_dv_mismatch(vnode, vp->dv_before, vp->dv_delta, + status->data_version); + if (vnode->cb_ro_snapshot =3D=3D atomic_read(&vnode->volume->cb_ro_snaps= hot) && atomic64_read(&vnode->cb_expires_at) !=3D AFS_NO_CB_PROMISE) pr_warn("kAFS: vnode modified {%llx:%llu} %llx->%llx %s (op=3D%x)\n", @@ -223,12 +230,10 @@ static void afs_apply_status(struct afs_operation *op, op->debug_id); =20 vnode->invalid_before =3D status->data_version; - if (vnode->status.type =3D=3D AFS_FTYPE_DIR) { - if (test_and_clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags)) - afs_stat_v(vnode, n_inval); - } else { + if (vnode->status.type =3D=3D AFS_FTYPE_DIR) + afs_invalidate_dir(vnode, afs_dir_invalid_dv_mismatch); + else set_bit(AFS_VNODE_ZAP_DATA, &vnode->flags); - } change_size =3D true; data_changed =3D true; unexpected_jump =3D true; @@ -273,7 +278,7 @@ static void afs_apply_callback(struct afs_operation *op, if (!afs_cb_is_broken(vp->cb_break_before, vnode)) { if (op->volume->type =3D=3D AFSVL_RWVOL) vnode->cb_server =3D op->server; - atomic64_set(&vnode->cb_expires_at, cb->expires_at); + afs_set_cb_promise(vnode, cb->expires_at, afs_cb_promise_set_apply_cb); } } =20 diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 07b8f7083e73..20d2f723948d 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -1713,6 +1713,38 @@ static inline int afs_bad(struct afs_vnode *vnode, e= num afs_file_error where) return -EIO; } =20 +/* + * Set the callback promise on a vnode. + */ +static inline void afs_set_cb_promise(struct afs_vnode *vnode, time64_t ex= pires_at, + enum afs_cb_promise_trace trace) +{ + atomic64_set(&vnode->cb_expires_at, expires_at); + trace_afs_cb_promise(vnode, trace); +} + +/* + * Clear the callback promise on a vnode, returning true if it was promise= d. 
+ */
+static inline bool afs_clear_cb_promise(struct afs_vnode *vnode,
+					enum afs_cb_promise_trace trace)
+{
+	trace_afs_cb_promise(vnode, trace);
+	return atomic64_xchg(&vnode->cb_expires_at, AFS_NO_CB_PROMISE) != AFS_NO_CB_PROMISE;
+}
+
+/*
+ * Mark a directory as being invalid.
+ */
+static inline void afs_invalidate_dir(struct afs_vnode *dvnode,
+				      enum afs_dir_invalid_trace trace)
+{
+	if (test_and_clear_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) {
+		trace_afs_dir_invalid(dvnode, trace);
+		afs_stat_v(dvnode, n_inval);
+	}
+}
+
 /*****************************************************************************/
 /*
  * debug tracing
diff --git a/fs/afs/rotate.c b/fs/afs/rotate.c
index d612983d6f38..a1c24f589d9e 100644
--- a/fs/afs/rotate.c
+++ b/fs/afs/rotate.c
@@ -99,7 +99,7 @@ static bool afs_start_fs_iteration(struct afs_operation *op,
 		write_seqlock(&vnode->cb_lock);
 		ASSERTCMP(cb_server, ==, vnode->cb_server);
 		vnode->cb_server = NULL;
-		if (atomic64_xchg(&vnode->cb_expires_at, AFS_NO_CB_PROMISE) != AFS_NO_CB_PROMISE)
+		if (afs_clear_cb_promise(vnode, afs_cb_promise_clear_rotate_server))
 			vnode->cb_break++;
 		write_sequnlock(&vnode->cb_lock);
 	}
@@ -583,7 +583,7 @@ bool afs_select_fileserver(struct afs_operation *op)
 	if (vnode->cb_server != server) {
 		vnode->cb_server = server;
 		vnode->cb_v_check = atomic_read(&vnode->volume->cb_v_break);
-		atomic64_set(&vnode->cb_expires_at, AFS_NO_CB_PROMISE);
+		afs_clear_cb_promise(vnode, afs_cb_promise_clear_server_change);
 	}
 
 retry_server:
diff --git a/fs/afs/validation.c b/fs/afs/validation.c
index bef8af12ebe2..0ba8336c9025 100644
--- a/fs/afs/validation.c
+++ b/fs/afs/validation.c
@@ -120,22 +120,31 @@ bool afs_check_validity(const struct afs_vnode *vnode)
 {
 	const struct afs_volume *volume = vnode->volume;
+	enum afs_vnode_invalid_trace trace = afs_vnode_valid_trace;
+	time64_t cb_expires_at = atomic64_read(&vnode->cb_expires_at);
 	time64_t deadline = ktime_get_real_seconds() + 10;
 
 	if (test_bit(AFS_VNODE_DELETED, &vnode->flags))
 		return true;
 
-	if (atomic_read(&volume->cb_v_check) != atomic_read(&volume->cb_v_break) ||
-	    atomic64_read(&vnode->cb_expires_at) <= deadline ||
-	    volume->cb_expires_at <= deadline ||
-	    vnode->cb_ro_snapshot != atomic_read(&volume->cb_ro_snapshot) ||
-	    vnode->cb_scrub != atomic_read(&volume->cb_scrub) ||
-	    test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags)) {
-		_debug("inval");
-		return false;
-	}
-
-	return true;
+	if (atomic_read(&volume->cb_v_check) != atomic_read(&volume->cb_v_break))
+		trace = afs_vnode_invalid_trace_cb_v_break;
+	else if (cb_expires_at == AFS_NO_CB_PROMISE)
+		trace = afs_vnode_invalid_trace_no_cb_promise;
+	else if (cb_expires_at <= deadline)
+		trace = afs_vnode_invalid_trace_expired;
+	else if (volume->cb_expires_at <= deadline)
+		trace = afs_vnode_invalid_trace_vol_expired;
+	else if (vnode->cb_ro_snapshot != atomic_read(&volume->cb_ro_snapshot))
+		trace = afs_vnode_invalid_trace_cb_ro_snapshot;
+	else if (vnode->cb_scrub != atomic_read(&volume->cb_scrub))
+		trace = afs_vnode_invalid_trace_cb_scrub;
+	else if (test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags))
+		trace = afs_vnode_invalid_trace_zap_data;
+	else
+		return true;
+	trace_afs_vnode_invalid(vnode, trace);
+	return false;
 }
 
 /*
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index a0aed1a428a1..7cb5583efb91 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -323,6 +323,43 @@ enum yfs_cm_operation {
 	EM(yfs_CB_TellMeAboutYourself,		"YFSCB.TellMeAboutYourself") \
 	E_(yfs_CB_CallBack,			"YFSCB.CallBack")
 
+#define afs_cb_promise_traces \
+	EM(afs_cb_promise_clear_cb_break,	"CLEAR cb-break") \
+	EM(afs_cb_promise_clear_rmdir,		"CLEAR rmdir") \
+	EM(afs_cb_promise_clear_rotate_server,	"CLEAR rot-srv") \
+	EM(afs_cb_promise_clear_server_change,	"CLEAR srv-chg") \
+	EM(afs_cb_promise_clear_vol_init_cb,	"CLEAR vol-init-cb") \
+	EM(afs_cb_promise_set_apply_cb,		"SET apply-cb") \
+	EM(afs_cb_promise_set_new_inode,	"SET new-inode") \
+	E_(afs_cb_promise_set_new_symlink,	"SET new-symlink")
+
+#define afs_vnode_invalid_traces \
+	EM(afs_vnode_invalid_trace_cb_ro_snapshot, "cb-ro-snapshot") \
+	EM(afs_vnode_invalid_trace_cb_scrub,	"cb-scrub") \
+	EM(afs_vnode_invalid_trace_cb_v_break,	"cb-v-break") \
+	EM(afs_vnode_invalid_trace_expired,	"expired") \
+	EM(afs_vnode_invalid_trace_no_cb_promise, "no-cb-promise") \
+	EM(afs_vnode_invalid_trace_vol_expired,	"vol-expired") \
+	EM(afs_vnode_invalid_trace_zap_data,	"zap-data") \
+	E_(afs_vnode_valid_trace,		"valid")
+
+#define afs_dir_invalid_traces \
+	EM(afs_dir_invalid_edit_add_bad_size,	"edit-add-bad-size") \
+	EM(afs_dir_invalid_edit_add_no_slots,	"edit-add-no-slots") \
+	EM(afs_dir_invalid_edit_add_too_many_blocks, "edit-add-too-many-blocks") \
+	EM(afs_dir_invalid_edit_get_block,	"edit-get-block") \
+	EM(afs_dir_invalid_edit_rem_bad_size,	"edit-rem-bad-size") \
+	EM(afs_dir_invalid_edit_rem_wrong_name,	"edit-rem-wrong_name") \
+	EM(afs_dir_invalid_edit_upd_bad_size,	"edit-upd-bad-size") \
+	EM(afs_dir_invalid_edit_upd_no_dd,	"edit-upd-no-dotdot") \
+	EM(afs_dir_invalid_dv_mismatch,		"dv-mismatch") \
+	EM(afs_dir_invalid_inval_folio,		"inv-folio") \
+	EM(afs_dir_invalid_iter_stale,		"iter-stale") \
+	EM(afs_dir_invalid_reclaimed_folio,	"reclaimed-folio") \
+	EM(afs_dir_invalid_release_folio,	"rel-folio") \
+	EM(afs_dir_invalid_remote,		"remote") \
+	E_(afs_dir_invalid_subdir_removed,	"subdir-removed")
+
 #define afs_edit_dir_ops \
 	EM(afs_edit_dir_create,			"create") \
 	EM(afs_edit_dir_create_error,		"c_fail") \
@@ -487,7 +524,9 @@ enum yfs_cm_operation {
 enum afs_alist_trace		{ afs_alist_traces } __mode(byte);
 enum afs_call_trace		{ afs_call_traces } __mode(byte);
 enum afs_cb_break_reason	{ afs_cb_break_reasons } __mode(byte);
+enum afs_cb_promise_trace	{ afs_cb_promise_traces } __mode(byte);
 enum afs_cell_trace		{ afs_cell_traces } __mode(byte);
+enum afs_dir_invalid_trace	{ afs_dir_invalid_traces } __mode(byte);
 enum afs_edit_dir_op		{ afs_edit_dir_ops } __mode(byte);
 enum afs_edit_dir_reason	{ afs_edit_dir_reasons } __mode(byte);
 enum afs_eproto_cause		{ afs_eproto_causes } __mode(byte);
@@ -498,6 +537,7 @@ enum afs_flock_operation	{ afs_flock_operations } __mode(byte);
 enum afs_io_error		{ afs_io_errors } __mode(byte);
 enum afs_rotate_trace		{ afs_rotate_traces } __mode(byte);
 enum afs_server_trace		{ afs_server_traces } __mode(byte);
+enum afs_vnode_invalid_trace	{ afs_vnode_invalid_traces } __mode(byte);
 enum afs_volume_trace		{ afs_volume_traces } __mode(byte);
 
 #endif /* end __AFS_GENERATE_TRACE_ENUMS_ONCE_ONLY */
@@ -513,8 +553,10 @@ enum afs_volume_trace	{ afs_volume_traces } __mode(byte);
 afs_alist_traces;
 afs_call_traces;
 afs_cb_break_reasons;
+afs_cb_promise_traces;
 afs_cell_traces;
 afs_cm_operations;
+afs_dir_invalid_traces;
 afs_edit_dir_ops;
 afs_edit_dir_reasons;
 afs_eproto_causes;
@@ -526,6 +568,7 @@ afs_fs_operations;
 afs_io_errors;
 afs_rotate_traces;
 afs_server_traces;
+afs_vnode_invalid_traces;
 afs_vl_operations;
 yfs_cm_operations;
 
@@ -670,7 +713,7 @@ TRACE_EVENT(afs_make_fs_call,
 		    } ),
 
-	    TP_printk("c=%08x %06llx:%06llx:%06x %s",
+	    TP_printk("c=%08x V=%llx i=%llx:%x %s",
 		      __entry->call,
 		      __entry->fid.vid,
 		      __entry->fid.vnode,
@@ -704,7 +747,7 @@ TRACE_EVENT(afs_make_fs_calli,
 		    } ),
 
-	    TP_printk("c=%08x %06llx:%06llx:%06x %s i=%u",
+	    TP_printk("c=%08x V=%llx i=%llx:%x %s i=%u",
 		      __entry->call,
 		      __entry->fid.vid,
 		      __entry->fid.vnode,
@@ -741,7 +784,7 @@ TRACE_EVENT(afs_make_fs_call1,
 		    __entry->name[__len] = 0;
 		   ),
 
-	    TP_printk("c=%08x %06llx:%06llx:%06x %s \"%s\"",
+	    TP_printk("c=%08x V=%llx i=%llx:%x %s \"%s\"",
 		      __entry->call,
 		      __entry->fid.vid,
 		      __entry->fid.vnode,
@@ -782,7 +825,7 @@ TRACE_EVENT(afs_make_fs_call2,
 		    __entry->name2[__len2] = 0;
 		   ),
 
-	    TP_printk("c=%08x %06llx:%06llx:%06x %s \"%s\" \"%s\"",
+	    TP_printk("c=%08x V=%llx i=%llx:%x %s \"%s\" \"%s\"",
 		      __entry->call,
 		      __entry->fid.vid,
 		      __entry->fid.vnode,
@@ -1002,7 +1045,7 @@ TRACE_EVENT(afs_edit_dir,
 		    __entry->name[__len] = 0;
 		   ),
 
-	    TP_printk("d=%x:%x %s %s %u[%u] f=%x:%x \"%s\"",
+	    TP_printk("di=%x:%x %s %s %u[%u] fi=%x:%x \"%s\"",
 		      __entry->vnode, __entry->unique,
 		      __print_symbolic(__entry->why, afs_edit_dir_reasons),
 		      __print_symbolic(__entry->op, afs_edit_dir_ops),
@@ -1011,6 +1054,122 @@ TRACE_EVENT(afs_edit_dir,
 		      __entry->name)
 	    );
 
+TRACE_EVENT(afs_dir_invalid,
+	    TP_PROTO(const struct afs_vnode *dvnode, enum afs_dir_invalid_trace trace),
+
+	    TP_ARGS(dvnode, trace),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,		vnode)
+		    __field(unsigned int,		unique)
+		    __field(enum afs_dir_invalid_trace,	trace)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->vnode	= dvnode->fid.vnode;
+		    __entry->unique	= dvnode->fid.unique;
+		    __entry->trace	= trace;
+			   ),
+
+	    TP_printk("di=%x:%x %s",
+		      __entry->vnode, __entry->unique,
+		      __print_symbolic(__entry->trace, afs_dir_invalid_traces))
+	    );
+
+TRACE_EVENT(afs_cb_promise,
+	    TP_PROTO(const struct afs_vnode *vnode, enum afs_cb_promise_trace trace),
+
+	    TP_ARGS(vnode, trace),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,		vnode)
+		    __field(unsigned int,		unique)
+		    __field(enum afs_cb_promise_trace,	trace)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->vnode	= vnode->fid.vnode;
+		    __entry->unique	= vnode->fid.unique;
+		    __entry->trace	= trace;
+			   ),
+
+	    TP_printk("di=%x:%x %s",
+		      __entry->vnode, __entry->unique,
+		      __print_symbolic(__entry->trace, afs_cb_promise_traces))
+	    );
+
+TRACE_EVENT(afs_vnode_invalid,
+	    TP_PROTO(const struct afs_vnode *vnode, enum afs_vnode_invalid_trace trace),
+
+	    TP_ARGS(vnode, trace),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,			vnode)
+		    __field(unsigned int,			unique)
+		    __field(enum afs_vnode_invalid_trace,	trace)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->vnode	= vnode->fid.vnode;
+		    __entry->unique	= vnode->fid.unique;
+		    __entry->trace	= trace;
+			   ),
+
+	    TP_printk("di=%x:%x %s",
+		      __entry->vnode, __entry->unique,
+		      __print_symbolic(__entry->trace, afs_vnode_invalid_traces))
+	    );
+
+TRACE_EVENT(afs_set_dv,
+	    TP_PROTO(const struct afs_vnode *dvnode, u64 new_dv),
+
+	    TP_ARGS(dvnode, new_dv),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,	vnode)
+		    __field(unsigned int,	unique)
+		    __field(u64,		old_dv)
+		    __field(u64,		new_dv)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->vnode	= dvnode->fid.vnode;
+		    __entry->unique	= dvnode->fid.unique;
+		    __entry->old_dv	= dvnode->status.data_version;
+		    __entry->new_dv	= new_dv;
+			   ),
+
+	    TP_printk("di=%x:%x dv=%llx -> dv=%llx",
+		      __entry->vnode, __entry->unique,
+		      __entry->old_dv, __entry->new_dv)
+	    );
+
+TRACE_EVENT(afs_dv_mismatch,
+	    TP_PROTO(const struct afs_vnode *dvnode, u64 before_dv, int delta, u64 new_dv),
+
+	    TP_ARGS(dvnode, before_dv, delta, new_dv),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,	vnode)
+		    __field(unsigned int,	unique)
+		    __field(int,		delta)
+		    __field(u64,		before_dv)
+		    __field(u64,		new_dv)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->vnode	= dvnode->fid.vnode;
+		    __entry->unique	= dvnode->fid.unique;
+		    __entry->delta	= delta;
+		    __entry->before_dv	= before_dv;
+		    __entry->new_dv	= new_dv;
+			   ),
+
+	    TP_printk("di=%x:%x xdv=%llx+%d dv=%llx",
+		      __entry->vnode, __entry->unique,
+		      __entry->before_dv, __entry->delta, __entry->new_dv)
+	    );
+
 TRACE_EVENT(afs_protocol_error,
 	    TP_PROTO(struct afs_call *call, enum afs_eproto_cause cause),
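[Editorial aside — not part of the patch.] The validation.c hunk above replaces a single compound condition with an if/else-if ladder so that the first failing check can be named in exactly one tracepoint emission. The same shape, reduced to a self-contained user-space C sketch (check_validity and the reason names here are invented, purely illustrative):

	#include <stdbool.h>
	#include <stdio.h>

	enum inval_reason { INVAL_NONE, INVAL_VERSION_SKEW, INVAL_EXPIRED, INVAL_ZAPPED };

	static const char *const reason_name[] = {
		[INVAL_NONE]         = "valid",
		[INVAL_VERSION_SKEW] = "version-skew",
		[INVAL_EXPIRED]      = "expired",
		[INVAL_ZAPPED]       = "zapped",
	};

	/* Return true if still valid; otherwise report exactly one reason,
	 * naming the first check that failed - the pattern afs_check_validity()
	 * adopts above.
	 */
	static bool check_validity(long expires_at, long now,
				   int v_seen, int v_cur, bool zap)
	{
		enum inval_reason why;

		if (v_seen != v_cur)
			why = INVAL_VERSION_SKEW;
		else if (expires_at <= now)
			why = INVAL_EXPIRED;
		else if (zap)
			why = INVAL_ZAPPED;
		else
			return true;
		printf("invalid: %s\n", reason_name[why]); /* single emit point */
		return false;
	}

	int main(void)
	{
		return check_validity(100, 200, 1, 1, false) ? 0 : 1;
	}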
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
Subject: [PATCH v4 20/33] netfs: Add functions to build/clean a buffer in a folio_queue
Date: Fri, 8 Nov 2024 17:32:21 +0000
Message-ID: <20241108173236.1382366-21-dhowells@redhat.com>

Add two netfslib functions to build up or clean up a buffer in a
folio_queue.

The first, netfs_alloc_folioq_buffer(), adds folios to a buffer, extending
it up to at least the given size.  If it can, it will add multipage folios.
The folios optionally have the mapping set and will have their indices set
according to the distance from the front of the folio queue.

The second, netfs_free_folioq_buffer(), frees up a folio queue and puts any
folios in the queue that have the first mark set.

The netfs_folio tracepoint is also altered to cope with folios that have a
NULL mapping; the folios being added/put will have trace lines emitted and
will be accounted in the stats.
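[Editorial illustration — not part of the original message; demo_* names are invented.] The intended call pattern for the two helpers might look roughly like this, assuming a caller that wants an anonymous 1MiB buffer with no address_space attached:

	/* Build an anonymous folio_queue buffer of at least 1MiB, then free it. */
	static int demo_build_buffer(void)
	{
		struct folio_queue *buffer = NULL;
		size_t cur_size = 0;	/* running size, updated by the allocator */
		int ret;

		ret = netfs_alloc_folioq_buffer(NULL,	/* no mapping to set */
						&buffer, &cur_size,
						SZ_1M, GFP_KERNEL);
		if (ret < 0) {
			/* A partial chain may exist; the free helper is NULL-safe. */
			netfs_free_folioq_buffer(buffer);
			return ret;
		}

		/* ... use the buffer, e.g. through an ITER_FOLIOQ iov_iter ... */

		netfs_free_folioq_buffer(buffer);	/* puts the marked folios */
		return 0;
	}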
Signed-off-by: David Howells
cc: Jeff Layton
cc: Marc Dionne
cc: netfs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/misc.c              | 95 ++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h        |  6 +++
 include/trace/events/netfs.h |  6 +--
 3 files changed, 103 insertions(+), 4 deletions(-)

diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 4249715f4171..01a6ba0e2f82 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,6 +8,101 @@
 #include
 #include "internal.h"
 
+/**
+ * netfs_alloc_folioq_buffer - Allocate buffer space into a folio queue
+ * @mapping: Address space to set on the folio (or NULL).
+ * @_buffer: Pointer to the folio queue to add to (may point to a NULL; updated).
+ * @_cur_size: Current size of the buffer (updated).
+ * @size: Target size of the buffer.
+ * @gfp: The allocation constraints.
+ */
+int netfs_alloc_folioq_buffer(struct address_space *mapping,
+			      struct folio_queue **_buffer,
+			      size_t *_cur_size, ssize_t size, gfp_t gfp)
+{
+	struct folio_queue *tail = *_buffer, *p;
+
+	size = round_up(size, PAGE_SIZE);
+	if (*_cur_size >= size)
+		return 0;
+
+	if (tail)
+		while (tail->next)
+			tail = tail->next;
+
+	do {
+		struct folio *folio;
+		int order = 0, slot;
+
+		if (!tail || folioq_full(tail)) {
+			p = netfs_folioq_alloc(0, GFP_NOFS, netfs_trace_folioq_alloc_buffer);
+			if (!p)
+				return -ENOMEM;
+			if (tail) {
+				tail->next = p;
+				p->prev = tail;
+			} else {
+				*_buffer = p;
+			}
+			tail = p;
+		}
+
+		if (size - *_cur_size > PAGE_SIZE)
+			order = umin(ilog2(size - *_cur_size) - PAGE_SHIFT,
+				     MAX_PAGECACHE_ORDER);
+
+		folio = folio_alloc(gfp, order);
+		if (!folio && order > 0)
+			folio = folio_alloc(gfp, 0);
+		if (!folio)
+			return -ENOMEM;
+
+		folio->mapping = mapping;
+		folio->index = *_cur_size / PAGE_SIZE;
+		trace_netfs_folio(folio, netfs_folio_trace_alloc_buffer);
+		slot = folioq_append_mark(tail, folio);
+		*_cur_size += folioq_folio_size(tail, slot);
+	} while (*_cur_size < size);
+
+	return 0;
+}
+EXPORT_SYMBOL(netfs_alloc_folioq_buffer);
+
+/**
+ * netfs_free_folioq_buffer - Free a folio queue.
+ * @fq: The start of the folio queue to free
+ *
+ * Free up a chain of folio_queues and, if marked, the marked folios they point
+ * to.
+ */
+void netfs_free_folioq_buffer(struct folio_queue *fq)
+{
+	struct folio_queue *next;
+	struct folio_batch fbatch;
+
+	folio_batch_init(&fbatch);
+
+	for (; fq; fq = next) {
+		for (int slot = 0; slot < folioq_count(fq); slot++) {
+			struct folio *folio = folioq_folio(fq, slot);
+
+			if (!folio ||
+			    !folioq_is_marked(fq, slot))
+				continue;
+
+			trace_netfs_folio(folio, netfs_folio_trace_put);
+			if (folio_batch_add(&fbatch, folio))
+				folio_batch_release(&fbatch);
+		}
+
+		netfs_stat_d(&netfs_n_folioq);
+		next = fq->next;
+		kfree(fq);
+	}
+
+	folio_batch_release(&fbatch);
+}
+EXPORT_SYMBOL(netfs_free_folioq_buffer);
+
 /*
  * Reset the subrequest iterator to refer just to the region remaining to be
  * read.  The iterator may or may not have been advanced by socket ops or
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 738c9c8763f0..921cfcfc62f1 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -458,6 +458,12 @@ struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
 void netfs_folioq_free(struct folio_queue *folioq,
 		       unsigned int /*enum netfs_trace_folioq*/ trace);
 
+/* Buffer wrangling helpers API. */
+int netfs_alloc_folioq_buffer(struct address_space *mapping,
+			      struct folio_queue **_buffer,
+			      size_t *_cur_size, ssize_t size, gfp_t gfp);
+void netfs_free_folioq_buffer(struct folio_queue *fq);
+
 /**
  * netfs_inode - Get the netfs inode context from the inode
  * @inode: The inode to query
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 7c3c866ae183..167c89bc62e0 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -155,6 +155,7 @@
 	EM(netfs_streaming_filled_page,		"mod-streamw-f") \
 	EM(netfs_streaming_cont_filled_page,	"mod-streamw-f+") \
 	EM(netfs_folio_trace_abandon,		"abandon") \
+	EM(netfs_folio_trace_alloc_buffer,	"alloc-buf") \
 	EM(netfs_folio_trace_cancel_copy,	"cancel-copy") \
 	EM(netfs_folio_trace_cancel_store,	"cancel-store") \
 	EM(netfs_folio_trace_clear,		"clear") \
@@ -195,10 +196,7 @@
 	E_(netfs_trace_donate_to_deferred_next,	"defer-next")
 
 #define netfs_folioq_traces \
-	EM(netfs_trace_folioq_alloc_append_folio, "alloc-apf") \
-	EM(netfs_trace_folioq_alloc_read_prep,	"alloc-r-prep") \
-	EM(netfs_trace_folioq_alloc_read_prime,	"alloc-r-prime") \
-	EM(netfs_trace_folioq_alloc_read_sing,	"alloc-r-sing") \
+	EM(netfs_trace_folioq_alloc_buffer,	"alloc-buf") \
 	EM(netfs_trace_folioq_clear,		"clear") \
 	EM(netfs_trace_folioq_delete,		"delete") \
 	EM(netfs_trace_folioq_make_space,	"make-space") \
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
Subject: [PATCH v4 21/33] netfs: Add support for caching single monolithic objects such as AFS dirs
Date: Fri, 8 Nov 2024 17:32:22 +0000
Message-ID: <20241108173236.1382366-22-dhowells@redhat.com>

Add support for caching the content of a file that contains a single
monolithic object that must be read/written with a single I/O operation,
such as an AFS directory.
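[Editorial illustration — not part of the original message; demo_read_object is an invented name.] A filesystem holding a prebuilt folio_queue buffer — for instance one made with netfs_alloc_folioq_buffer() from the previous patch — might drive the read side roughly like this:

	/* Read an entire monolithic object into a prebuilt folio_queue. */
	static ssize_t demo_read_object(struct inode *inode, struct file *file,
					struct folio_queue *fq, size_t size)
	{
		struct iov_iter iter;

		/* Destination iterator spanning the whole buffer. */
		iov_iter_folio_queue(&iter, ITER_DEST, fq, 0, 0, size);

		/* One shot: drawn from the cache if possible, else the server. */
		return netfs_read_single(inode, file, &iter);
	}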
Signed-off-by: David Howells
cc: Jeff Layton
cc: Marc Dionne
cc: netfs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/Makefile            |   1 +
 fs/netfs/buffered_read.c     |  11 +-
 fs/netfs/internal.h          |   2 +
 fs/netfs/main.c              |   2 +
 fs/netfs/objects.c           |   2 +
 fs/netfs/read_collect.c      |  45 +++++++-
 fs/netfs/read_single.c       | 202 ++++++++++++++++++++++++++++++++++
 fs/netfs/stats.c             |   4 +-
 fs/netfs/write_collect.c     |   6 +-
 fs/netfs/write_issue.c       | 203 ++++++++++++++++++++++++++++++++++-
 include/linux/netfs.h        |  10 ++
 include/trace/events/netfs.h |   4 +
 12 files changed, 478 insertions(+), 14 deletions(-)
 create mode 100644 fs/netfs/read_single.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index cbb30bdeacc4..b43188d64bd8 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -13,6 +13,7 @@ netfs-y := \
 	read_collect.o \
 	read_pgpriv2.o \
 	read_retry.o \
+	read_single.o \
 	rolling_buffer.o \
 	write_collect.o \
 	write_issue.o \
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 4a48b79b8807..61287f6f6706 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -137,14 +137,17 @@ static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_request *rr
 						     loff_t i_size)
 {
 	struct netfs_cache_resources *cres = &rreq->cache_resources;
+	enum netfs_io_source source;
 
 	if (!cres->ops)
 		return NETFS_DOWNLOAD_FROM_SERVER;
-	return cres->ops->prepare_read(subreq, i_size);
+	source = cres->ops->prepare_read(subreq, i_size);
+	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
+	return source;
+
 }
 
-static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error,
-					bool was_async)
+void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool was_async)
 {
 	struct netfs_io_subrequest *subreq = priv;
 
@@ -213,6 +216,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 		unsigned long long zp = umin(ictx->zero_point, rreq->i_size);
 		size_t len = subreq->len;
 
+		if (unlikely(rreq->origin == NETFS_READ_SINGLE))
+			zp = rreq->i_size;
 		if (subreq->start >= zp) {
 			subreq->source = source = NETFS_FILL_WITH_ZEROES;
 			goto fill_with_zeroes;
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index ba32ca61063c..e236f752af88 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -23,6 +23,7 @@
 /*
  * buffered_read.c
  */
+void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool was_async);
 int netfs_prefetch_for_write(struct file *file, struct folio *folio,
 			     size_t offset, size_t len);
 
@@ -110,6 +111,7 @@ void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq);
 extern atomic_t netfs_n_rh_dio_read;
 extern atomic_t netfs_n_rh_readahead;
 extern atomic_t netfs_n_rh_read_folio;
+extern atomic_t netfs_n_rh_read_single;
 extern atomic_t netfs_n_rh_rreq;
 extern atomic_t netfs_n_rh_sreq;
 extern atomic_t netfs_n_rh_download;
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 6c7be1377ee0..8c1922c0cb42 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -37,9 +37,11 @@ static const char *netfs_origins[nr__netfs_io_origin] = {
 	[NETFS_READAHEAD]		= "RA",
 	[NETFS_READPAGE]		= "RP",
 	[NETFS_READ_GAPS]		= "RG",
+	[NETFS_READ_SINGLE]		= "R1",
 	[NETFS_READ_FOR_WRITE]		= "RW",
 	[NETFS_DIO_READ]		= "DR",
 	[NETFS_WRITEBACK]		= "WB",
+	[NETFS_WRITEBACK_SINGLE]	= "W1",
 	[NETFS_WRITETHROUGH]		= "WT",
 	[NETFS_UNBUFFERED_WRITE]	= "UW",
 	[NETFS_DIO_WRITE]		= "DW",
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 8c98b70eb3a4..dde4a679d9e2 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -54,6 +54,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	if (origin == NETFS_READAHEAD ||
 	    origin == NETFS_READPAGE ||
 	    origin == NETFS_READ_GAPS ||
+	    origin == NETFS_READ_SINGLE ||
 	    origin == NETFS_READ_FOR_WRITE ||
 	    origin == NETFS_DIO_READ)
 		INIT_WORK(&rreq->work, NULL);
@@ -196,6 +197,7 @@ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq
 	case NETFS_READAHEAD:
 	case NETFS_READPAGE:
 	case NETFS_READ_GAPS:
+	case NETFS_READ_SINGLE:
 	case NETFS_READ_FOR_WRITE:
 	case NETFS_DIO_READ:
 		INIT_WORK(&subreq->work, netfs_read_subreq_termination_worker);
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 53ef7e0f3e9c..9124c8c36f9d 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -358,6 +358,39 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
 		inode_dio_end(rreq->inode);
 }
 
+/*
+ * Do processing after reading a monolithic single object.
+ */
+static void netfs_rreq_assess_single(struct netfs_io_request *rreq)
+{
+	struct netfs_io_subrequest *subreq;
+	struct netfs_io_stream *stream = &rreq->io_streams[0];
+
+	subreq = list_first_entry_or_null(&stream->subrequests,
+					  struct netfs_io_subrequest, rreq_link);
+	if (subreq) {
+		if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
+			rreq->error = subreq->error;
+		else
+			rreq->transferred = subreq->transferred;
+
+		if (!rreq->error && subreq->source == NETFS_DOWNLOAD_FROM_SERVER &&
+		    fscache_resources_valid(&rreq->cache_resources)) {
+			trace_netfs_rreq(rreq, netfs_rreq_trace_dirty);
+			netfs_single_mark_inode_dirty(rreq->inode);
+		}
+	}
+
+	if (rreq->iocb) {
+		rreq->iocb->ki_pos += rreq->transferred;
+		if (rreq->iocb->ki_complete)
+			rreq->iocb->ki_complete(
+				rreq->iocb, rreq->error ? rreq->error : rreq->transferred);
+	}
+	if (rreq->netfs_ops->done)
+		rreq->netfs_ops->done(rreq);
+}
+
 /*
  * Assess the state of a read request and decide what to do next.
  *
@@ -375,9 +408,17 @@ void netfs_rreq_terminated(struct netfs_io_request *rreq)
 		return;
 	}
 
-	if (rreq->origin == NETFS_DIO_READ ||
-	    rreq->origin == NETFS_READ_GAPS)
+	switch (rreq->origin) {
+	case NETFS_DIO_READ:
+	case NETFS_READ_GAPS:
 		netfs_rreq_assess_dio(rreq);
+		break;
+	case NETFS_READ_SINGLE:
+		netfs_rreq_assess_single(rreq);
+		break;
+	default:
+		break;
+	}
 	task_io_account_read(rreq->transferred);
 
 	trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
new file mode 100644
index 000000000000..2a66c5fde071
--- /dev/null
+++ b/fs/netfs/read_single.c
@@ -0,0 +1,202 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Single, monolithic object support (e.g. AFS directory).
+ *
+ * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "internal.h"
+
+/**
+ * netfs_single_mark_inode_dirty - Mark a single, monolithic object inode dirty
+ * @inode: The inode to mark
+ *
+ * Mark an inode that contains a single, monolithic object as dirty so that its
+ * writepages op will get called.  If set, the SINGLE_NO_UPLOAD flag indicates
+ * that the object will only be written to the cache and not uploaded (e.g. AFS
+ * directory contents).
+ */
+void netfs_single_mark_inode_dirty(struct inode *inode)
+{
+	struct netfs_inode *ictx = netfs_inode(inode);
+	bool cache_only = test_bit(NETFS_ICTX_SINGLE_NO_UPLOAD, &ictx->flags);
+	bool caching = fscache_cookie_enabled(netfs_i_cookie(netfs_inode(inode)));
+
+	if (cache_only && !caching)
+		return;
+
+	mark_inode_dirty(inode);
+
+	if (caching && !(inode->i_state & I_PINNING_NETFS_WB)) {
+		bool need_use = false;
+
+		spin_lock(&inode->i_lock);
+		if (!(inode->i_state & I_PINNING_NETFS_WB)) {
+			inode->i_state |= I_PINNING_NETFS_WB;
+			need_use = true;
+		}
+		spin_unlock(&inode->i_lock);
+
+		if (need_use)
+			fscache_use_cookie(netfs_i_cookie(ictx), true);
+	}
+
+}
+EXPORT_SYMBOL(netfs_single_mark_inode_dirty);
+
+static int netfs_single_begin_cache_read(struct netfs_io_request *rreq, struct netfs_inode *ctx)
+{
+	return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx));
+}
+
+static void netfs_single_cache_prepare_read(struct netfs_io_request *rreq,
+					    struct netfs_io_subrequest *subreq)
+{
+	struct netfs_cache_resources *cres = &rreq->cache_resources;
+
+	if (!cres->ops) {
+		subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
+		return;
+	}
+	subreq->source = cres->ops->prepare_read(subreq, rreq->i_size);
+	trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
+
+}
+
+static void netfs_single_read_cache(struct netfs_io_request *rreq,
+				    struct netfs_io_subrequest *subreq)
+{
+	struct netfs_cache_resources *cres = &rreq->cache_resources;
+
+	netfs_stat(&netfs_n_rh_read);
+	cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_FAIL,
+			netfs_cache_read_terminated, subreq);
+}
+
+/*
+ * Perform a read to a buffer from the cache or the server.  Only a single
+ * subreq is permitted as the object must be fetched in a single transaction.
+ */
+static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
+{
+	struct netfs_io_subrequest *subreq;
+	int ret = 0;
+
+	atomic_set(&rreq->nr_outstanding, 1);
+
+	subreq = netfs_alloc_subrequest(rreq);
+	if (!subreq) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	subreq->source	= NETFS_DOWNLOAD_FROM_SERVER;
+	subreq->start	= 0;
+	subreq->len	= rreq->len;
+	subreq->io_iter	= rreq->buffer.iter;
+
+	atomic_inc(&rreq->nr_outstanding);
+
+	spin_lock_bh(&rreq->lock);
+	list_add_tail(&subreq->rreq_link, &rreq->subrequests);
+	trace_netfs_sreq(subreq, netfs_sreq_trace_added);
+	spin_unlock_bh(&rreq->lock);
+
+	netfs_single_cache_prepare_read(rreq, subreq);
+	switch (subreq->source) {
+	case NETFS_DOWNLOAD_FROM_SERVER:
+		netfs_stat(&netfs_n_rh_download);
+		if (rreq->netfs_ops->prepare_read) {
+			ret = rreq->netfs_ops->prepare_read(subreq);
+			if (ret < 0)
+				goto cancel;
+		}
+
+		rreq->netfs_ops->issue_read(subreq);
+		rreq->submitted += subreq->len;
+		break;
+	case NETFS_READ_FROM_CACHE:
+		trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+		netfs_single_read_cache(rreq, subreq);
+		rreq->submitted += subreq->len;
+		ret = 0;
+		break;
+	default:
+		pr_warn("Unexpected single-read source %u\n", subreq->source);
+		WARN_ON_ONCE(true);
+		ret = -EIO;
+		break;
+	}
+
+out:
+	if (atomic_dec_and_test(&rreq->nr_outstanding))
+		netfs_rreq_terminated(rreq);
+	return ret;
+cancel:
+	atomic_dec(&rreq->nr_outstanding);
+	netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
+	goto out;
+}
+
+/**
+ * netfs_read_single - Synchronously read a single blob of pages.
+ * @inode: The inode to read from.
+ * @file: The file we're using to read or NULL.
+ * @iter: The buffer we're reading into.
+ *
+ * Fulfil a read request for a single monolithic object by drawing data from
+ * the cache if possible, or the netfs if not.  The buffer may be larger than
+ * the file content; the unused space beyond the EOF will be zero-filled.  The
+ * content will be read with a single I/O request (though this may be retried).
+ *
+ * The calling netfs must initialise a netfs context contiguous to the vfs
+ * inode before calling this.
+ *
+ * This is usable whether or not caching is enabled.  If caching is enabled,
+ * the data will be stored as a single object into the cache.
+ */
+ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_iter *iter)
+{
+	struct netfs_io_request *rreq;
+	struct netfs_inode *ictx = netfs_inode(inode);
+	ssize_t ret;
+
+	rreq = netfs_alloc_request(inode->i_mapping, file, 0, iov_iter_count(iter),
+				   NETFS_READ_SINGLE);
+	if (IS_ERR(rreq))
+		return PTR_ERR(rreq);
+
+	ret = netfs_single_begin_cache_read(rreq, ictx);
+	if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
+		goto cleanup_free;
+
+	netfs_stat(&netfs_n_rh_read_single);
+	trace_netfs_read(rreq, 0, rreq->len, netfs_read_trace_read_single);
+
+	rreq->buffer.iter = *iter;
+	netfs_single_dispatch_read(rreq);
+
+	trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip);
+	wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS,
+		    TASK_UNINTERRUPTIBLE);
+
+	ret = rreq->error;
+	if (ret == 0)
+		ret = rreq->transferred;
+	netfs_put_request(rreq, true, netfs_rreq_trace_put_return);
+	return ret;
+
+cleanup_free:
+	netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
+	return ret;
+}
+EXPORT_SYMBOL(netfs_read_single);
diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index 8e63516b40f6..f1af344266cc 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -12,6 +12,7 @@
 atomic_t netfs_n_rh_dio_read;
 atomic_t netfs_n_rh_readahead;
 atomic_t netfs_n_rh_read_folio;
+atomic_t netfs_n_rh_read_single;
 atomic_t netfs_n_rh_rreq;
 atomic_t netfs_n_rh_sreq;
 atomic_t netfs_n_rh_download;
@@ -46,10 +47,11 @@ atomic_t netfs_n_folioq;
 
 int netfs_stats_show(struct seq_file *m, void *v)
 {
-	seq_printf(m, "Reads  : DR=%u RA=%u RF=%u WB=%u WBZ=%u\n",
+	seq_printf(m, "Reads  : DR=%u RA=%u RF=%u RS=%u WB=%u WBZ=%u\n",
 		   atomic_read(&netfs_n_rh_dio_read),
 		   atomic_read(&netfs_n_rh_readahead),
 		   atomic_read(&netfs_n_rh_read_folio),
+		   atomic_read(&netfs_n_rh_read_single),
 		   atomic_read(&netfs_n_rh_write_begin),
 		   atomic_read(&netfs_n_rh_write_zskip));
 	seq_printf(m, "Writes : BW=%u WT=%u DW=%u WP=%u 2C=%u\n",
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index d291b31dd074..3d8b87c8e6a6 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -17,7 +17,7 @@
 #define HIT_PENDING		0x01	/* A front op was still pending */
 #define NEED_REASSESS		0x02	/* Need to loop round and reassess */
 #define MADE_PROGRESS		0x04	/* Made progress cleaning up a stream or the folio set */
-#define BUFFERED		0x08	/* The pagecache needs cleaning up */
+#define NEED_UNLOCK		0x08	/* The pagecache needs unlocking */
 #define NEED_RETRY		0x10	/* A front op requests retrying */
 #define SAW_FAILURE		0x20	/* One stream or hit a permanent failure */
 
@@ -179,7 +179,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
 	if (wreq->origin == NETFS_WRITEBACK ||
 	    wreq->origin == NETFS_WRITETHROUGH ||
 	    wreq->origin == NETFS_PGPRIV2_COPY_TO_CACHE)
-		notes = BUFFERED;
+		notes = NEED_UNLOCK;
 	else
 		notes = 0;
 
@@ -276,7 +276,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
 	trace_netfs_collect_state(wreq, wreq->collected_to, notes);
 
 	/* Unlock any folios that we have now finished with. */
-	if (notes & BUFFERED) {
+	if (notes & NEED_UNLOCK) {
 		if (wreq->cleaned_to < wreq->collected_to)
 			netfs_writeback_unlock_folios(wreq, &notes);
 	} else {
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 10b5300b9448..cd2b349243b3 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -94,9 +94,10 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
 {
 	struct netfs_io_request *wreq;
 	struct netfs_inode *ictx;
-	bool is_buffered = (origin == NETFS_WRITEBACK ||
-			    origin == NETFS_WRITETHROUGH ||
-			    origin == NETFS_PGPRIV2_COPY_TO_CACHE);
+	bool is_cacheable = (origin == NETFS_WRITEBACK ||
+			     origin == NETFS_WRITEBACK_SINGLE ||
+			     origin == NETFS_WRITETHROUGH ||
+			     origin == NETFS_PGPRIV2_COPY_TO_CACHE);
 
 	wreq = netfs_alloc_request(mapping, file, start, 0, origin);
 	if (IS_ERR(wreq))
@@ -105,7 +106,7 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
 	_enter("R=%x", wreq->debug_id);
 
 	ictx = netfs_inode(wreq->inode);
-	if (is_buffered && netfs_is_cache_enabled(ictx))
+	if (is_cacheable && netfs_is_cache_enabled(ictx))
 		fscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ictx));
 	if (rolling_buffer_init(&wreq->buffer, wreq->debug_id, ITER_SOURCE) < 0)
 		goto nomem;
@@ -450,7 +451,8 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 		stream = &wreq->io_streams[s];
 		stream->submit_off = foff;
 		stream->submit_len = flen;
-		if ((stream->source == NETFS_WRITE_TO_CACHE && streamw) ||
+		if (!stream->avail ||
+		    (stream->source == NETFS_WRITE_TO_CACHE && streamw) ||
 		    (stream->source == NETFS_UPLOAD_TO_SERVER &&
 		     fgroup == NETFS_FOLIO_COPY_TO_CACHE)) {
 			stream->submit_off = UINT_MAX;
@@ -729,3 +731,194 @@ int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t
 	_leave(" = %d", error);
 	return error;
 }
+
+/*
+ * Write some of a pending folio data back to the server and/or the cache.
+ */
+static int netfs_write_folio_single(struct netfs_io_request *wreq,
+				    struct folio *folio)
+{
+	struct netfs_io_stream *upload = &wreq->io_streams[0];
+	struct netfs_io_stream *cache  = &wreq->io_streams[1];
+	struct netfs_io_stream *stream;
+	size_t iter_off = 0;
+	size_t fsize = folio_size(folio), flen;
+	loff_t fpos = folio_pos(folio);
+	bool to_eof = false;
+	bool no_debug = false;
+
+	_enter("");
+
+	flen = folio_size(folio);
+	if (flen > wreq->i_size - fpos) {
+		flen = wreq->i_size - fpos;
+		folio_zero_segment(folio, flen, fsize);
+		to_eof = true;
+	} else if (flen == wreq->i_size - fpos) {
+		to_eof = true;
+	}
+
+	_debug("folio %zx/%zx", flen, fsize);
+
+	if (!upload->avail && !cache->avail) {
+		trace_netfs_folio(folio, netfs_folio_trace_cancel_store);
+		return 0;
+	}
+
+	if (!upload->construct)
+		trace_netfs_folio(folio, netfs_folio_trace_store);
+	else
+		trace_netfs_folio(folio, netfs_folio_trace_store_plus);
+
+	/* Attach the folio to the rolling buffer. */
+	folio_get(folio);
+	rolling_buffer_append(&wreq->buffer, folio, NETFS_ROLLBUF_PUT_MARK);
+
+	/* Move the submission point forward to allow for write-streaming data
+	 * not starting at the front of the page.  We don't do write-streaming
+	 * with the cache as the cache requires DIO alignment.
+	 *
+	 * Also skip uploading for data that's been read and just needs copying
+	 * to the cache.
+	 */
+	for (int s = 0; s < NR_IO_STREAMS; s++) {
+		stream = &wreq->io_streams[s];
+		stream->submit_off = 0;
+		stream->submit_len = flen;
+		if (!stream->avail) {
+			stream->submit_off = UINT_MAX;
+			stream->submit_len = 0;
+		}
+	}
+
+	/* Attach the folio to one or more subrequests.  For a big folio, we
+	 * could end up with thousands of subrequests if the wsize is small -
+	 * but we might need to wait during the creation of subrequests for
+	 * network resources (eg. SMB credits).
+	 */
+	for (;;) {
+		ssize_t part;
+		size_t lowest_off = ULONG_MAX;
+		int choose_s = -1;
+
+		/* Always add to the lowest-submitted stream first. */
+		for (int s = 0; s < NR_IO_STREAMS; s++) {
+			stream = &wreq->io_streams[s];
+			if (stream->submit_len > 0 &&
+			    stream->submit_off < lowest_off) {
+				lowest_off = stream->submit_off;
+				choose_s = s;
+			}
+		}
+
+		if (choose_s < 0)
+			break;
+		stream = &wreq->io_streams[choose_s];
+
+		/* Advance the iterator(s). */
+		if (stream->submit_off > iter_off) {
+			rolling_buffer_advance(&wreq->buffer, stream->submit_off - iter_off);
+			iter_off = stream->submit_off;
+		}
+
+		atomic64_set(&wreq->issued_to, fpos + stream->submit_off);
+		stream->submit_extendable_to = fsize - stream->submit_off;
+		part = netfs_advance_write(wreq, stream, fpos + stream->submit_off,
+					   stream->submit_len, to_eof);
+		stream->submit_off += part;
+		if (part > stream->submit_len)
+			stream->submit_len = 0;
+		else
+			stream->submit_len -= part;
+		if (part > 0)
+			no_debug = true;
+	}
+
+	wreq->buffer.iter.iov_offset = 0;
+	if (fsize > iter_off)
+		rolling_buffer_advance(&wreq->buffer, fsize - iter_off);
+	atomic64_set(&wreq->issued_to, fpos + fsize);
+
+	if (!no_debug)
+		kdebug("R=%x: No submit", wreq->debug_id);
+	_leave(" = 0");
+	return 0;
+}
+
+/**
+ * netfs_writeback_single - Write back a monolithic payload
+ * @mapping: The mapping to write from
+ * @wbc: Hints from the VM
+ * @iter: Data to write, must be ITER_FOLIOQ.
+ *
+ * Write a monolithic, non-pagecache object back to the server and/or
+ * the cache.
+ */
+int netfs_writeback_single(struct address_space *mapping,
+			   struct writeback_control *wbc,
+			   struct iov_iter *iter)
+{
+	struct netfs_io_request *wreq;
+	struct netfs_inode *ictx = netfs_inode(mapping->host);
+	struct folio_queue *fq;
+	size_t size = iov_iter_count(iter);
+	int ret;
+
+	if (WARN_ON_ONCE(!iov_iter_is_folioq(iter)))
+		return -EIO;
+
+	if (!mutex_trylock(&ictx->wb_lock)) {
+		if (wbc->sync_mode == WB_SYNC_NONE) {
+			netfs_stat(&netfs_n_wb_lock_skip);
+			return 0;
+		}
+		netfs_stat(&netfs_n_wb_lock_wait);
+		mutex_lock(&ictx->wb_lock);
+	}
+
+	wreq = netfs_create_write_req(mapping, NULL, 0, NETFS_WRITEBACK_SINGLE);
+	if (IS_ERR(wreq)) {
+		ret = PTR_ERR(wreq);
+		goto couldnt_start;
+	}
+
+	trace_netfs_write(wreq, netfs_write_trace_writeback);
+	netfs_stat(&netfs_n_wh_writepages);
+
+	if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
+		wreq->netfs_ops->begin_writeback(wreq);
+
+	for (fq = (struct folio_queue *)iter->folioq; fq; fq = fq->next) {
+		for (int slot = 0; slot < folioq_count(fq); slot++) {
+			struct folio *folio = folioq_folio(fq, slot);
+			size_t part = umin(folioq_folio_size(fq, slot), size);
+
+			_debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to));
+
+			ret = netfs_write_folio_single(wreq, folio);
+			if (ret < 0)
+				goto stop;
+			size -= part;
+			if (size <= 0)
+				goto stop;
+		}
+	}
+
+stop:
+	for (int s = 0; s < NR_IO_STREAMS; s++)
+		netfs_issue_write(wreq, &wreq->io_streams[s]);
+	smp_wmb(); /* Write lists before ALL_QUEUED. */
+	set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+
+	mutex_unlock(&ictx->wb_lock);
+
+	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
+	_leave(" = %d", ret);
+	return ret;
+
+couldnt_start:
+	mutex_unlock(&ictx->wb_lock);
+	_leave(" = %d", ret);
+	return ret;
+}
+EXPORT_SYMBOL(netfs_writeback_single);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 921cfcfc62f1..5e21c6939c88 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -73,6 +73,7 @@ struct netfs_inode {
 #define NETFS_ICTX_UNBUFFERED	1		/* I/O should not use the pagecache */
 #define NETFS_ICTX_WRITETHROUGH	2		/* Write-through caching */
 #define NETFS_ICTX_MODIFIED_ATTR 3		/* Indicate change in mtime/ctime */
+#define NETFS_ICTX_SINGLE_NO_UPLOAD 4		/* Monolithic payload, cache but no upload */
 };
 
 /*
@@ -210,9 +211,11 @@ enum netfs_io_origin {
 	NETFS_READAHEAD,		/* This read was triggered by readahead */
 	NETFS_READPAGE,			/* This read is a synchronous read */
 	NETFS_READ_GAPS,		/* This read is a synchronous read to fill gaps */
+	NETFS_READ_SINGLE,		/* This read should be treated as a single object */
 	NETFS_READ_FOR_WRITE,		/* This read is to prepare a write */
 	NETFS_DIO_READ,			/* This is a direct I/O read */
 	NETFS_WRITEBACK,		/* This write was triggered by writepages */
+	NETFS_WRITEBACK_SINGLE,		/* This monolithic write was triggered by writepages */
 	NETFS_WRITETHROUGH,		/* This write was made by netfs_perform_write() */
 	NETFS_UNBUFFERED_WRITE,		/* This is an unbuffered write */
 	NETFS_DIO_WRITE,		/* This is a direct I/O write */
@@ -409,6 +412,13 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
 					   struct netfs_group *netfs_group);
 ssize_t netfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from);
 
+/* Single, monolithic object read/write API. */
+void netfs_single_mark_inode_dirty(struct inode *inode);
+ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_iter *iter);
+int netfs_writeback_single(struct address_space *mapping,
+			   struct writeback_control *wbc,
+			   struct iov_iter *iter);
+
 /* Address operations API */
 struct readahead_control;
 void netfs_readahead(struct readahead_control *);
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 167c89bc62e0..e8075c29ecf5 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -21,6 +21,7 @@
 	EM(netfs_read_trace_readahead,		"READAHEAD") \
 	EM(netfs_read_trace_readpage,		"READPAGE ") \
 	EM(netfs_read_trace_read_gaps,		"READ-GAPS") \
+	EM(netfs_read_trace_read_single,	"READ-SNGL") \
 	EM(netfs_read_trace_prefetch_for_write,	"PREFETCHW") \
 	E_(netfs_read_trace_write_begin,	"WRITEBEGN")
 
@@ -35,9 +36,11 @@
 	EM(NETFS_READAHEAD,			"RA") \
 	EM(NETFS_READPAGE,			"RP") \
 	EM(NETFS_READ_GAPS,			"RG") \
+	EM(NETFS_READ_SINGLE,			"R1") \
 	EM(NETFS_READ_FOR_WRITE,		"RW") \
 	EM(NETFS_DIO_READ,			"DR") \
 	EM(NETFS_WRITEBACK,			"WB") \
+	EM(NETFS_WRITEBACK_SINGLE,		"W1") \
 	EM(NETFS_WRITETHROUGH,			"WT") \
 	EM(NETFS_UNBUFFERED_WRITE,		"UW") \
 	EM(NETFS_DIO_WRITE,			"DW") \
@@ -47,6 +50,7 @@
 	EM(netfs_rreq_trace_assess,		"ASSESS ") \
 	EM(netfs_rreq_trace_copy,		"COPY   ") \
 	EM(netfs_rreq_trace_collect,		"COLLECT") \
+	EM(netfs_rreq_trace_dirty,		"DIRTY  ") \
 	EM(netfs_rreq_trace_done,		"DONE   ") \
 	EM(netfs_rreq_trace_free,		"FREE   ") \
 	EM(netfs_rreq_trace_redirty,		"REDIRTY") \
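[Editorial illustration — not part of the patch; struct demo_inode, its content field and DEMO_I() are invented.] A filesystem's ->writepages for a monolithic object held in a private folio_queue might wrap the new netfs_writeback_single() roughly like this, anticipating the AFS directory conversion later in this series:

	struct demo_inode {
		struct netfs_inode	netfs;		/* must be first */
		struct folio_queue	*content;	/* the monolithic payload */
	};
	#define DEMO_I(i) container_of(netfs_inode(i), struct demo_inode, netfs)

	/* Sketch of a ->writepages op for a single-object inode. */
	static int demo_single_writepages(struct address_space *mapping,
					  struct writeback_control *wbc)
	{
		struct demo_inode *di = DEMO_I(mapping->host);
		struct iov_iter iter;

		/* Source iterator over the whole cached object. */
		iov_iter_folio_queue(&iter, ITER_SOURCE, di->content, 0, 0,
				     i_size_read(mapping->host));

		/* Push it to the cache and/or server in one operation. */
		return netfs_writeback_single(mapping, wbc, &iter);
	}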
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
Subject: [PATCH v4 22/33] afs: Make afs_init_request() get a key if not given a file
Date: Fri, 8 Nov 2024 17:32:23 +0000
Message-ID: <20241108173236.1382366-23-dhowells@redhat.com>

In a future patch, AFS directory caching will go through netfslib and this
will involve, at times, running on behalf of ->lookup(), which doesn't
provide us with a file from which we can get an authentication key.

If a file isn't provided, make afs_init_request() get a key from the
process's keyrings instead when setting up a read.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/file.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index f717168da4ab..a9d98d18407c 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -372,10 +372,26 @@ static int afs_symlink_read_folio(struct file *file, struct folio *folio)
 
 static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
 {
+	struct afs_vnode *vnode = AFS_FS_I(rreq->inode);
+
 	if (file)
 		rreq->netfs_priv = key_get(afs_file_key(file));
 	rreq->rsize = 256 * 1024;
 	rreq->wsize = 256 * 1024 * 1024;
+
+	switch (rreq->origin) {
+	case NETFS_READ_SINGLE:
+		if (!file) {
+			struct key *key = afs_request_key(vnode->volume->cell);
+
+			if (IS_ERR(key))
+				return PTR_ERR(key);
+			rreq->netfs_priv = key;
+		}
+		break;
+	default:
+		break;
+	}
 	return 0;
 }
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
Subject: [PATCH v4 23/33] afs: Use netfslib for directories
Date: Fri, 8 Nov 2024 17:32:24 +0000
Message-ID: <20241108173236.1382366-24-dhowells@redhat.com>

In the AFS ecosystem, directories are just a special type of file that is
downloaded and parsed locally.  Download is done by the same mechanism as
ordinary files and the data can be cached.

There is one important semantic restriction on directories over files: the
client must download the entire directory in one go because, for example,
the server could fabricate the contents of the blob on the fly with each
download and give a different image each time.

So that we can cache the directory download, switch AFS directory support
over to using the netfslib single-object API, thereby allowing directory
content to be stored in the local cache.

To make this work, the following changes are made:

 (1) A directory's contents are now stored in a folio_queue chain attached
     to the afs_vnode (inode) struct rather than its associated pagecache,
     though multipage folios are still used to hold the data.  The folio
     queue is discarded when the directory inode is evicted.

     This also helps with the phasing out of ITER_XARRAY.

 (2) Various directory operations are made to use and unuse the cache
     cookie.

 (3) The content checking, content dumping and content iteration are now
     performed with a standard iov_iter iterator over the contents of the
     folio queue (see the sketch after this list).

 (4) Iteration and modification must be done with the vnode's validate_lock
     held.  In conjunction with (1), this means that the iteration can be
     done without the need to lock pages or take extra refs on them, unlike
     when accessing ->i_pages.

 (5) Convert to using netfs_read_single() to read data.

 (6) Provide a ->writepages() to call netfs_writeback_single() to save the
     data to the cache according to the VM's scheduling whilst holding the
     validate_lock read-locked as (4).

 (7) Change local directory image editing functions:

     (a) Provide a function to get a specific block by number from the
	 folio_queue as we can no longer use the i_pages xarray to locate
	 folios by index.  This uses a cursor to remember the current
	 position as we need to iterate through the directory contents.
	 The block is kmapped before being returned.

     (b) Make the function in (a) extend the directory by an extra folio
	 if we run out of space.

     (c) Raise the check of the block free space counter, for those blocks
	 that have one, higher in the function to eliminate a call to get a
	 block.

     (d) Remove the page unlocking and putting done during the editing
	 loops.  This is no longer necessary as the folio_queue holds the
	 references and the pages are no longer in the pagecache.

     (e) Mark the inode dirty and pin the cache usage till writeback at the
	 end of a successful edit.

 (8) Don't set the large_folios flag on the inode as we do the allocation
     ourselves rather than the VM doing it automatically.

 (9) Mark the inode as being a single object that isn't uploaded to the
     server.

(10) Enable caching on directories.

(11) Only set the upload key for writeback for regular files.

Notes:

 (*) We keep the ->release_folio(), ->invalidate_folio() and
     ->migrate_folio() ops as we set the mapping pointer on the folio.
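[Editorial illustration — not part of the patch; demo_* names are invented.] The iteration model of (3) in miniature, mirroring the afs_dir_dump()/afs_dir_check() pattern visible in the diff below — walk the folio_queue with a kernel-buffer iov_iter and a step callback that sees each kmapped segment:

	/* Called for each contiguous kmapped span; iter_base points at 'len'
	 * bytes starting at offset 'progress' into the buffer.  Return the
	 * number of bytes left unprocessed (0 = consumed everything).
	 */
	static size_t demo_step(void *iter_base, size_t progress, size_t len,
				void *priv, void *priv2)
	{
		pr_info("[%05zx] %zu bytes\n", progress, len);
		return 0;
	}

	static void demo_walk_object(struct folio_queue *fq, size_t size)
	{
		struct iov_iter iter;

		iov_iter_folio_queue(&iter, ITER_SOURCE, fq, 0, 0, size);
		iterate_folioq(&iter, iov_iter_count(&iter), NULL, NULL,
			       demo_step);
	}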
Signed-off-by: David Howells
cc: Marc Dionne
cc: Jeff Layton
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/afs/dir.c               | 742 +++++++++++++++++++------------------
 fs/afs/dir_edit.c          | 183 ++++-----
 fs/afs/file.c              |   8 +
 fs/afs/inode.c             |  21 +-
 fs/afs/internal.h          |  16 +
 fs/afs/super.c             |   2 +
 fs/afs/write.c             |   4 +-
 include/trace/events/afs.h |   6 +-
 8 files changed, 512 insertions(+), 470 deletions(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index f36a28a8f27b..86d3955a78cd 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "internal.h"
 #include "afs_fs.h"
@@ -42,15 +43,6 @@ static int afs_symlink(struct mnt_idmap *idmap, struct inode *dir,
 static int afs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
 		      struct dentry *old_dentry, struct inode *new_dir,
 		      struct dentry *new_dentry, unsigned int flags);
-static bool afs_dir_release_folio(struct folio *folio, gfp_t gfp_flags);
-static void afs_dir_invalidate_folio(struct folio *folio, size_t offset,
-				     size_t length);
-
-static bool afs_dir_dirty_folio(struct address_space *mapping,
-				struct folio *folio)
-{
-	BUG(); /* This should never happen. */
-}
 
 const struct file_operations afs_dir_file_operations = {
 	.open		= afs_dir_open,
@@ -75,10 +67,7 @@ const struct inode_operations afs_dir_inode_operations = {
 };
 
 const struct address_space_operations afs_dir_aops = {
-	.dirty_folio	= afs_dir_dirty_folio,
-	.release_folio	= afs_dir_release_folio,
-	.invalidate_folio = afs_dir_invalidate_folio,
-	.migrate_folio	= filemap_migrate_folio,
+	.writepages	= afs_single_writepages,
 };
 
 const struct dentry_operations afs_fs_dentry_operations = {
@@ -105,146 +94,120 @@ struct afs_lookup_cookie {
 	struct afs_fid	fids[50];
 };
 
+static void afs_dir_unuse_cookie(struct afs_vnode *dvnode, int ret)
+{
+	if (ret == 0) {
+		struct afs_vnode_cache_aux aux;
+		loff_t i_size = i_size_read(&dvnode->netfs.inode);
+
+		afs_set_cache_aux(dvnode, &aux);
+		fscache_unuse_cookie(afs_vnode_cache(dvnode), &aux, &i_size);
+	} else {
+		fscache_unuse_cookie(afs_vnode_cache(dvnode), NULL, NULL);
+	}
+}
+
 /*
- * Drop the refs that we're holding on the folios we were reading into.  We've
- * got refs on the first nr_pages pages.
+ * Iterate through a kmapped directory segment, dumping a summary of
+ * the contents.
  */
-static void afs_dir_read_cleanup(struct afs_read *req)
+static size_t afs_dir_dump_step(void *iter_base, size_t progress, size_t len,
+				void *priv, void *priv2)
 {
-	struct address_space *mapping = req->vnode->netfs.inode.i_mapping;
-	struct folio *folio;
-	pgoff_t last = req->nr_pages - 1;
+	do {
+		union afs_xdr_dir_block *block = iter_base;
 
-	XA_STATE(xas, &mapping->i_pages, 0);
+		pr_warn("[%05zx] %32phN\n", progress, block);
+		iter_base += AFS_DIR_BLOCK_SIZE;
+		progress += AFS_DIR_BLOCK_SIZE;
+		len -= AFS_DIR_BLOCK_SIZE;
+	} while (len > 0);
 
-	if (unlikely(!req->nr_pages))
-		return;
+	return len;
+}
 
-	rcu_read_lock();
-	xas_for_each(&xas, folio, last) {
-		if (xas_retry(&xas, folio))
-			continue;
-		BUG_ON(xa_is_value(folio));
-		ASSERTCMP(folio->mapping, ==, mapping);
+/*
+ * Dump the contents of a directory.
+ */
+static void afs_dir_dump(struct afs_vnode *dvnode)
+{
+	struct iov_iter iter;
+	unsigned long long i_size = i_size_read(&dvnode->netfs.inode);
 
-		folio_put(folio);
-	}
+	pr_warn("DIR %llx:%llx is=%llx\n",
+		dvnode->fid.vid, dvnode->fid.vnode, i_size);
 
-	rcu_read_unlock();
+	iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+	iterate_folioq(&iter, iov_iter_count(&iter), NULL, NULL,
+		       afs_dir_dump_step);
 }
 
 /*
  * check that a directory folio is valid
  */
-static bool afs_dir_check_folio(struct afs_vnode *dvnode, struct folio *folio,
-				loff_t i_size)
+static bool afs_dir_check_block(struct afs_vnode *dvnode, size_t progress,
+				union afs_xdr_dir_block *block)
 {
-	union afs_xdr_dir_block *block;
-	size_t offset, size;
-	loff_t pos;
+	if (block->hdr.magic != AFS_DIR_MAGIC) {
+		pr_warn("%s(%lx): [%zx] bad magic %04x\n",
+			__func__, dvnode->netfs.inode.i_ino,
+			progress, ntohs(block->hdr.magic));
+		trace_afs_dir_check_failed(dvnode, progress);
+		trace_afs_file_error(dvnode, -EIO, afs_file_error_dir_bad_magic);
+		return false;
+	}
 
-	/* Determine how many magic numbers there should be in this folio, but
-	 * we must take care because the directory may change size under us.
+	/* Make sure each block is NUL terminated so we can reasonably
+	 * use string functions on it.  The filenames in the folio
+	 * *should* be NUL-terminated anyway.
 	 */
-	pos = folio_pos(folio);
-	if (i_size <= pos)
-		goto checked;
-
-	size = min_t(loff_t, folio_size(folio), i_size - pos);
-	for (offset = 0; offset < size; offset += sizeof(*block)) {
-		block = kmap_local_folio(folio, offset);
-		if (block->hdr.magic != AFS_DIR_MAGIC) {
-			printk("kAFS: %s(%lx): [%llx] bad magic %zx/%zx is %04hx\n",
-			       __func__, dvnode->netfs.inode.i_ino,
-			       pos, offset, size, ntohs(block->hdr.magic));
-			trace_afs_dir_check_failed(dvnode, pos + offset, i_size);
-			kunmap_local(block);
-			trace_afs_file_error(dvnode, -EIO, afs_file_error_dir_bad_magic);
-			goto error;
-		}
-
-		/* Make sure each block is NUL terminated so we can reasonably
-		 * use string functions on it.  The filenames in the folio
-		 * *should* be NUL-terminated anyway.
-		 */
-		((u8 *)block)[AFS_DIR_BLOCK_SIZE - 1] = 0;
-
-		kunmap_local(block);
-	}
-checked:
+	((u8 *)block)[AFS_DIR_BLOCK_SIZE - 1] = 0;
 	afs_stat_v(dvnode, n_read_dir);
 	return true;
-
-error:
-	return false;
 }
 
 /*
- * Dump the contents of a directory.
+ * Iterate through a kmapped directory segment, checking the content.
  */
-static void afs_dir_dump(struct afs_vnode *dvnode, struct afs_read *req)
+static size_t afs_dir_check_step(void *iter_base, size_t progress, size_t len,
+				 void *priv, void *priv2)
 {
-	union afs_xdr_dir_block *block;
-	struct address_space *mapping = dvnode->netfs.inode.i_mapping;
-	struct folio *folio;
-	pgoff_t last = req->nr_pages - 1;
-	size_t offset, size;
-
-	XA_STATE(xas, &mapping->i_pages, 0);
-
-	pr_warn("DIR %llx:%llx f=%llx l=%llx al=%llx\n",
-		dvnode->fid.vid, dvnode->fid.vnode,
-		req->file_size, req->len, req->actual_len);
-	pr_warn("DIR %llx %x %zx %zx\n",
-		req->pos, req->nr_pages,
-		req->iter->iov_offset, iov_iter_count(req->iter));
-
-	xas_for_each(&xas, folio, last) {
-		if (xas_retry(&xas, folio))
-			continue;
+	struct afs_vnode *dvnode = priv;
 
-		BUG_ON(folio->mapping != mapping);
+	if (WARN_ON_ONCE(progress % AFS_DIR_BLOCK_SIZE ||
+			 len % AFS_DIR_BLOCK_SIZE))
+		return len;
 
-		size = min_t(loff_t, folio_size(folio), req->actual_len - folio_pos(folio));
-		for (offset = 0; offset < size; offset += sizeof(*block)) {
-			block = kmap_local_folio(folio, offset);
-			pr_warn("[%02lx] %32phN\n", folio->index + offset, block);
-			kunmap_local(block);
-		}
-	}
+	do {
+		if (!afs_dir_check_block(dvnode, progress, iter_base))
+			break;
+		iter_base += AFS_DIR_BLOCK_SIZE;
+		len -= AFS_DIR_BLOCK_SIZE;
+	} while (len > 0);
+
+	return len;
 }
 
 /*
- * Check all the blocks in a directory.  All the folios are held pinned.
+ * Check all the blocks in a directory.
*/ -static int afs_dir_check(struct afs_vnode *dvnode, struct afs_read *req) +static int afs_dir_check(struct afs_vnode *dvnode) { - struct address_space *mapping =3D dvnode->netfs.inode.i_mapping; - struct folio *folio; - pgoff_t last =3D req->nr_pages - 1; - int ret =3D 0; + struct iov_iter iter; + unsigned long long i_size =3D i_size_read(&dvnode->netfs.inode); + size_t checked =3D 0; =20 - XA_STATE(xas, &mapping->i_pages, 0); - - if (unlikely(!req->nr_pages)) + if (unlikely(!i_size)) return 0; =20 - rcu_read_lock(); - xas_for_each(&xas, folio, last) { - if (xas_retry(&xas, folio)) - continue; - - BUG_ON(folio->mapping !=3D mapping); - - if (!afs_dir_check_folio(dvnode, folio, req->actual_len)) { - afs_dir_dump(dvnode, req); - ret =3D -EIO; - break; - } + iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size); + checked =3D iterate_folioq(&iter, iov_iter_count(&iter), dvnode, NULL, + afs_dir_check_step); + if (checked !=3D i_size) { + afs_dir_dump(dvnode); + return -EIO; } - - rcu_read_unlock(); - return ret; + return 0; } =20 /* @@ -264,133 +227,136 @@ static int afs_dir_open(struct inode *inode, struct= file *file) } =20 /* - * Read the directory into the pagecache in one go, scrubbing the previous - * contents. The list of folios is returned, pinning them so that they do= n't - * get reclaimed during the iteration. + * Read a file in a single download. */ -static struct afs_read *afs_read_dir(struct afs_vnode *dvnode, struct key = *key) - __acquires(&dvnode->validate_lock) +static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *f= ile) { - struct address_space *mapping =3D dvnode->netfs.inode.i_mapping; - struct afs_read *req; + struct iov_iter iter; + ssize_t ret; loff_t i_size; - int nr_pages, i; - int ret; - loff_t remote_size =3D 0; - - _enter(""); - - req =3D kzalloc(sizeof(*req), GFP_KERNEL); - if (!req) - return ERR_PTR(-ENOMEM); + bool is_dir =3D (S_ISDIR(dvnode->netfs.inode.i_mode) && + !test_bit(AFS_VNODE_MOUNTPOINT, &dvnode->flags)); =20 - refcount_set(&req->usage, 1); - req->vnode =3D dvnode; - req->key =3D key_get(key); - req->cleanup =3D afs_dir_read_cleanup; - -expand: i_size =3D i_size_read(&dvnode->netfs.inode); - if (i_size < remote_size) - i_size =3D remote_size; - if (i_size < 2048) { - ret =3D afs_bad(dvnode, afs_file_error_dir_small); - goto error; - } - if (i_size > 2048 * 1024) { - trace_afs_file_error(dvnode, -EFBIG, afs_file_error_dir_big); - ret =3D -EFBIG; - goto error; + if (is_dir) { + if (i_size < AFS_DIR_BLOCK_SIZE) + return afs_bad(dvnode, afs_file_error_dir_small); + if (i_size > AFS_DIR_BLOCK_SIZE * 1024) { + trace_afs_file_error(dvnode, -EFBIG, afs_file_error_dir_big); + return -EFBIG; + } + } else { + if (i_size > AFSPATHMAX) { + trace_afs_file_error(dvnode, -EFBIG, afs_file_error_dir_big); + return -EFBIG; + } } =20 - _enter("%llu", i_size); + /* Expand the storage. TODO: Shrink the storage too. 
*/ + if (dvnode->directory_size < i_size) { + size_t cur_size =3D dvnode->directory_size; =20 - nr_pages =3D (i_size + PAGE_SIZE - 1) / PAGE_SIZE; + ret =3D netfs_alloc_folioq_buffer(NULL, + &dvnode->directory, &cur_size, i_size, + mapping_gfp_mask(dvnode->netfs.inode.i_mapping)); + dvnode->directory_size =3D cur_size; + if (ret < 0) + return ret; + } =20 - req->actual_len =3D i_size; /* May change */ - req->len =3D nr_pages * PAGE_SIZE; /* We can ask for more than there is */ - req->data_version =3D dvnode->status.data_version; /* May change */ - iov_iter_xarray(&req->def_iter, ITER_DEST, &dvnode->netfs.inode.i_mapping= ->i_pages, - 0, i_size); - req->iter =3D &req->def_iter; + iov_iter_folio_queue(&iter, ITER_DEST, dvnode->directory, 0, 0, dvnode->d= irectory_size); =20 - /* Fill in any gaps that we might find where the memory reclaimer has - * been at work and pin all the folios. If there are any gaps, we will - * need to reread the entire directory contents. + /* AFS requires us to perform the read of a directory synchronously as + * a single unit to avoid issues with the directory contents being + * changed between reads. */ - i =3D req->nr_pages; - while (i < nr_pages) { - struct folio *folio; - - folio =3D filemap_get_folio(mapping, i); - if (IS_ERR(folio)) { - afs_invalidate_dir(dvnode, afs_dir_invalid_reclaimed_folio); - folio =3D __filemap_get_folio(mapping, - i, FGP_LOCK | FGP_CREAT, - mapping->gfp_mask); - if (IS_ERR(folio)) { - ret =3D PTR_ERR(folio); - goto error; - } - folio_attach_private(folio, (void *)1); - folio_unlock(folio); + ret =3D netfs_read_single(&dvnode->netfs.inode, file, &iter); + if (ret >=3D 0) { + i_size =3D i_size_read(&dvnode->netfs.inode); + if (i_size > ret) { + /* The content has grown, so we need to expand the + * buffer. + */ + ret =3D -ESTALE; + } else if (is_dir) { + int ret2 =3D afs_dir_check(dvnode); + + if (ret2 < 0) + ret =3D ret2; + } else if (i_size < folioq_folio_size(dvnode->directory, 0)) { + /* NUL-terminate a symlink. */ + char *symlink =3D kmap_local_folio(folioq_folio(dvnode->directory, 0), = 0); + + symlink[i_size] =3D 0; + kunmap_local(symlink); } - - req->nr_pages +=3D folio_nr_pages(folio); - i +=3D folio_nr_pages(folio); } =20 - /* If we're going to reload, we need to lock all the pages to prevent - * races. - */ + return ret; +} + +ssize_t afs_read_single(struct afs_vnode *dvnode, struct file *file) +{ + ssize_t ret; + + fscache_use_cookie(afs_vnode_cache(dvnode), false); + ret =3D afs_do_read_single(dvnode, file); + fscache_unuse_cookie(afs_vnode_cache(dvnode), NULL, NULL); + return ret; +} + +/* + * Read the directory into a folio_queue buffer in one go, scrubbing the + * previous contents. We return -ESTALE if the caller needs to call us ag= ain. + */ +static ssize_t afs_read_dir(struct afs_vnode *dvnode, struct file *file) + __acquires(&dvnode->validate_lock) +{ + ssize_t ret; + loff_t i_size; + + i_size =3D i_size_read(&dvnode->netfs.inode); + ret =3D -ERESTARTSYS; if (down_read_killable(&dvnode->validate_lock) < 0) goto error; =20 - if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) - goto success; + /* We only need to reread the data if it became invalid - or if we + * haven't read it yet. 
+ */ + if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) && + test_bit(AFS_VNODE_DIR_READ, &dvnode->flags)) + goto valid; =20 up_read(&dvnode->validate_lock); if (down_write_killable(&dvnode->validate_lock) < 0) goto error; =20 - if (!test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) { - trace_afs_reload_dir(dvnode); - ret =3D afs_fetch_data(dvnode, req); - if (ret < 0) - goto error_unlock; - - task_io_account_read(PAGE_SIZE * req->nr_pages); + if (!test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) + afs_invalidate_cache(dvnode, 0); =20 - if (req->len < req->file_size) { - /* The content has grown, so we need to expand the - * buffer. - */ - up_write(&dvnode->validate_lock); - remote_size =3D req->file_size; - goto expand; - } - - /* Validate the data we just read. */ - ret =3D afs_dir_check(dvnode, req); + if (!test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) || + !test_bit(AFS_VNODE_DIR_READ, &dvnode->flags)) { + trace_afs_reload_dir(dvnode); + ret =3D afs_read_single(dvnode, file); if (ret < 0) goto error_unlock; =20 // TODO: Trim excess pages =20 set_bit(AFS_VNODE_DIR_VALID, &dvnode->flags); + set_bit(AFS_VNODE_DIR_READ, &dvnode->flags); } =20 downgrade_write(&dvnode->validate_lock); -success: - return req; +valid: + return i_size; =20 error_unlock: up_write(&dvnode->validate_lock); error: - afs_put_read(req); - _leave(" =3D %d", ret); - return ERR_PTR(ret); + _leave(" =3D %zd", ret); + return ret; } =20 /* @@ -398,79 +364,69 @@ static struct afs_read *afs_read_dir(struct afs_vnode= *dvnode, struct key *key) */ static int afs_dir_iterate_block(struct afs_vnode *dvnode, struct dir_context *ctx, - union afs_xdr_dir_block *block, - unsigned blkoff) + union afs_xdr_dir_block *block) { union afs_xdr_dirent *dire; - unsigned offset, next, curr, nr_slots; + unsigned int blknum, base, hdr, pos, next, nr_slots; size_t nlen; int tmp; =20 - _enter("%llx,%x", ctx->pos, blkoff); + blknum =3D ctx->pos / AFS_DIR_BLOCK_SIZE; + base =3D blknum * AFS_DIR_SLOTS_PER_BLOCK; + hdr =3D (blknum =3D=3D 0 ? AFS_DIR_RESV_BLOCKS0 : AFS_DIR_RESV_BLOCKS); + pos =3D DIV_ROUND_UP(ctx->pos, AFS_DIR_DIRENT_SIZE) - base; =20 - curr =3D (ctx->pos - blkoff) / sizeof(union afs_xdr_dirent); + _enter("%llx,%x", ctx->pos, blknum); =20 /* walk through the block, an entry at a time */ - for (offset =3D (blkoff =3D=3D 0 ? 
AFS_DIR_RESV_BLOCKS0 : AFS_DIR_RESV_BL= OCKS); - offset < AFS_DIR_SLOTS_PER_BLOCK; - offset =3D next - ) { + for (unsigned int slot =3D hdr; slot < AFS_DIR_SLOTS_PER_BLOCK; slot =3D = next) { /* skip entries marked unused in the bitmap */ - if (!(block->hdr.bitmap[offset / 8] & - (1 << (offset % 8)))) { - _debug("ENT[%zu.%u]: unused", - blkoff / sizeof(union afs_xdr_dir_block), offset); - next =3D offset + 1; - if (offset >=3D curr) - ctx->pos =3D blkoff + - next * sizeof(union afs_xdr_dirent); + if (!(block->hdr.bitmap[slot / 8] & + (1 << (slot % 8)))) { + _debug("ENT[%x]: Unused", base + slot); + next =3D slot + 1; + if (next >=3D pos) + ctx->pos =3D (base + next) * sizeof(union afs_xdr_dirent); continue; } =20 /* got a valid entry */ - dire =3D &block->dirents[offset]; + dire =3D &block->dirents[slot]; nlen =3D strnlen(dire->u.name, - sizeof(*block) - - offset * sizeof(union afs_xdr_dirent)); + (unsigned long)(block + 1) - (unsigned long)dire->u.name - 1); if (nlen > AFSNAMEMAX - 1) { - _debug("ENT[%zu]: name too long (len %u/%zu)", - blkoff / sizeof(union afs_xdr_dir_block), - offset, nlen); + _debug("ENT[%x]: Name too long (len %zx)", + base + slot, nlen); return afs_bad(dvnode, afs_file_error_dir_name_too_long); } =20 - _debug("ENT[%zu.%u]: %s %zu \"%s\"", - blkoff / sizeof(union afs_xdr_dir_block), offset, - (offset < curr ? "skip" : "fill"), + _debug("ENT[%x]: %s %zx \"%s\"", + base + slot, (slot < pos ? "skip" : "fill"), nlen, dire->u.name); =20 nr_slots =3D afs_dir_calc_slots(nlen); - next =3D offset + nr_slots; + next =3D slot + nr_slots; if (next > AFS_DIR_SLOTS_PER_BLOCK) { - _debug("ENT[%zu.%u]:" - " %u extends beyond end dir block" - " (len %zu)", - blkoff / sizeof(union afs_xdr_dir_block), - offset, next, nlen); + _debug("ENT[%x]: extends beyond end dir block (len %zx)", + base + slot, nlen); return afs_bad(dvnode, afs_file_error_dir_over_end); } =20 /* Check that the name-extension dirents are all allocated */ for (tmp =3D 1; tmp < nr_slots; tmp++) { - unsigned int ix =3D offset + tmp; - if (!(block->hdr.bitmap[ix / 8] & (1 << (ix % 8)))) { - _debug("ENT[%zu.u]:" - " %u unmarked extension (%u/%u)", - blkoff / sizeof(union afs_xdr_dir_block), - offset, tmp, nr_slots); + unsigned int xslot =3D slot + tmp; + + if (!(block->hdr.bitmap[xslot / 8] & (1 << (xslot % 8)))) { + _debug("ENT[%x]: Unmarked extension (%x/%x)", + base + slot, tmp, nr_slots); return afs_bad(dvnode, afs_file_error_dir_unmarked_ext); } } =20 /* skip if starts before the current position */ - if (offset < curr) { - if (next > curr) - ctx->pos =3D blkoff + next * sizeof(union afs_xdr_dirent); + if (slot < pos) { + if (next > pos) + ctx->pos =3D (base + next) * sizeof(union afs_xdr_dirent); continue; } =20 @@ -484,7 +440,7 @@ static int afs_dir_iterate_block(struct afs_vnode *dvno= de, return 0; } =20 - ctx->pos =3D blkoff + next * sizeof(union afs_xdr_dirent); + ctx->pos =3D (base + next) * sizeof(union afs_xdr_dirent); } =20 _leave(" =3D 1 [more]"); @@ -492,67 +448,97 @@ static int afs_dir_iterate_block(struct afs_vnode *dv= node, } =20 /* - * iterate through the data blob that lists the contents of an AFS directo= ry + * Iterate through a kmapped directory segment. 
*/ -static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx, - struct key *key, afs_dataversion_t *_dir_version) +static size_t afs_dir_iterate_step(void *iter_base, size_t progress, size_= t len, + void *priv, void *priv2) { - struct afs_vnode *dvnode =3D AFS_FS_I(dir); - union afs_xdr_dir_block *dblock; - struct afs_read *req; - struct folio *folio; - unsigned offset, size; + struct dir_context *ctx =3D priv2; + struct afs_vnode *dvnode =3D priv; int ret; =20 - _enter("{%lu},%u,,", dir->i_ino, (unsigned)ctx->pos); - - if (test_bit(AFS_VNODE_DELETED, &AFS_FS_I(dir)->flags)) { - _leave(" =3D -ESTALE"); - return -ESTALE; + if (WARN_ON_ONCE(progress % AFS_DIR_BLOCK_SIZE || + len % AFS_DIR_BLOCK_SIZE)) { + pr_err("Mis-iteration prog=3D%zx len=3D%zx\n", + progress % AFS_DIR_BLOCK_SIZE, + len % AFS_DIR_BLOCK_SIZE); + return len; } =20 - req =3D afs_read_dir(dvnode, key); - if (IS_ERR(req)) - return PTR_ERR(req); - *_dir_version =3D req->data_version; + do { + ret =3D afs_dir_iterate_block(dvnode, ctx, iter_base); + if (ret !=3D 1) + break; =20 - /* round the file position up to the next entry boundary */ - ctx->pos +=3D sizeof(union afs_xdr_dirent) - 1; - ctx->pos &=3D ~(sizeof(union afs_xdr_dirent) - 1); + ctx->pos =3D round_up(ctx->pos, AFS_DIR_BLOCK_SIZE); + iter_base +=3D AFS_DIR_BLOCK_SIZE; + len -=3D AFS_DIR_BLOCK_SIZE; + } while (len > 0); =20 - /* walk through the blocks in sequence */ - ret =3D 0; - while (ctx->pos < req->actual_len) { - /* Fetch the appropriate folio from the directory and re-add it - * to the LRU. We have all the pages pinned with an extra ref. - */ - folio =3D __filemap_get_folio(dir->i_mapping, ctx->pos / PAGE_SIZE, - FGP_ACCESSED, 0); - if (IS_ERR(folio)) { - ret =3D afs_bad(dvnode, afs_file_error_dir_missing_page); - break; - } + return len; +} =20 - offset =3D round_down(ctx->pos, sizeof(*dblock)) - folio_pos(folio); - size =3D min_t(loff_t, folio_size(folio), - req->actual_len - folio_pos(folio)); +/* + * Iterate through the directory folios under RCU conditions. 
+ */ +static int afs_dir_iterate_contents(struct inode *dir, struct dir_context = *ctx) +{ + struct afs_vnode *dvnode =3D AFS_FS_I(dir); + struct iov_iter iter; + unsigned long long i_size =3D i_size_read(dir); + int ret =3D 0; =20 - do { - dblock =3D kmap_local_folio(folio, offset); - ret =3D afs_dir_iterate_block(dvnode, ctx, dblock, - folio_pos(folio) + offset); - kunmap_local(dblock); - if (ret !=3D 1) - goto out; + /* Round the file position up to the next entry boundary */ + ctx->pos =3D round_up(ctx->pos, sizeof(union afs_xdr_dirent)); =20 - } while (offset +=3D sizeof(*dblock), offset < size); + if (i_size <=3D 0 || ctx->pos >=3D i_size) + return 0; =20 - ret =3D 0; - } + iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size); + iov_iter_advance(&iter, round_down(ctx->pos, AFS_DIR_BLOCK_SIZE)); + + iterate_folioq(&iter, iov_iter_count(&iter), dvnode, ctx, + afs_dir_iterate_step); + + if (ret =3D=3D -ESTALE) + afs_invalidate_dir(dvnode, afs_dir_invalid_iter_stale); + return ret; +} + +/* + * iterate through the data blob that lists the contents of an AFS directo= ry + */ +static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx, + struct file *file, afs_dataversion_t *_dir_version) +{ + struct afs_vnode *dvnode =3D AFS_FS_I(dir); + int retry_limit =3D 100; + int ret; + + _enter("{%lu},%llx,,", dir->i_ino, ctx->pos); + + do { + if (--retry_limit < 0) { + pr_warn("afs_read_dir(): Too many retries\n"); + ret =3D -ESTALE; + break; + } + ret =3D afs_read_dir(dvnode, file); + if (ret < 0) { + if (ret !=3D -ESTALE) + break; + if (test_bit(AFS_VNODE_DELETED, &AFS_FS_I(dir)->flags)) { + ret =3D -ESTALE; + break; + } + continue; + } + *_dir_version =3D inode_peek_iversion_raw(dir); + + ret =3D afs_dir_iterate_contents(dir, ctx); + up_read(&dvnode->validate_lock); + } while (ret =3D=3D -ESTALE); =20 -out: - up_read(&dvnode->validate_lock); - afs_put_read(req); _leave(" =3D %d", ret); return ret; } @@ -564,8 +550,7 @@ static int afs_readdir(struct file *file, struct dir_co= ntext *ctx) { afs_dataversion_t dir_version; =20 - return afs_dir_iterate(file_inode(file), ctx, afs_file_key(file), - &dir_version); + return afs_dir_iterate(file_inode(file), ctx, file, &dir_version); } =20 /* @@ -606,7 +591,7 @@ static bool afs_lookup_one_filldir(struct dir_context *= ctx, const char *name, * - just returns the FID the dentry name maps to if found */ static int afs_do_lookup_one(struct inode *dir, struct dentry *dentry, - struct afs_fid *fid, struct key *key, + struct afs_fid *fid, afs_dataversion_t *_dir_version) { struct afs_super_info *as =3D dir->i_sb->s_fs_info; @@ -620,7 +605,7 @@ static int afs_do_lookup_one(struct inode *dir, struct = dentry *dentry, _enter("{%lu},%p{%pd},", dir->i_ino, dentry, dentry); =20 /* search the directory */ - ret =3D afs_dir_iterate(dir, &cookie.ctx, key, _dir_version); + ret =3D afs_dir_iterate(dir, &cookie.ctx, NULL, _dir_version); if (ret < 0) { _leave(" =3D %d [iter]", ret); return ret; @@ -787,8 +772,7 @@ static bool afs_server_supports_ibulk(struct afs_vnode = *dvnode) * files in one go and create inodes for them. The inode of the file we w= ere * asked for is returned. 
*/ -static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentr= y, - struct key *key) +static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentr= y) { struct afs_lookup_cookie *cookie; struct afs_vnode_param *vp; @@ -816,7 +800,7 @@ static struct inode *afs_do_lookup(struct inode *dir, s= truct dentry *dentry, cookie->one_only =3D true; =20 /* search the directory */ - ret =3D afs_dir_iterate(dir, &cookie->ctx, key, &data_version); + ret =3D afs_dir_iterate(dir, &cookie->ctx, NULL, &data_version); if (ret < 0) goto out; =20 @@ -925,8 +909,7 @@ static struct inode *afs_do_lookup(struct inode *dir, s= truct dentry *dentry, /* * Look up an entry in a directory with @sys substitution. */ -static struct dentry *afs_lookup_atsys(struct inode *dir, struct dentry *d= entry, - struct key *key) +static struct dentry *afs_lookup_atsys(struct inode *dir, struct dentry *d= entry) { struct afs_sysnames *subs; struct afs_net *net =3D afs_i2net(dir); @@ -974,7 +957,6 @@ static struct dentry *afs_lookup_atsys(struct inode *di= r, struct dentry *dentry, afs_put_sysnames(subs); kfree(buf); out_p: - key_put(key); return ret; } =20 @@ -988,7 +970,6 @@ static struct dentry *afs_lookup(struct inode *dir, str= uct dentry *dentry, struct afs_fid fid =3D {}; struct inode *inode; struct dentry *d; - struct key *key; int ret; =20 _enter("{%llx:%llu},%p{%pd},", @@ -1006,15 +987,9 @@ static struct dentry *afs_lookup(struct inode *dir, s= truct dentry *dentry, return ERR_PTR(-ESTALE); } =20 - key =3D afs_request_key(dvnode->volume->cell); - if (IS_ERR(key)) { - _leave(" =3D %ld [key]", PTR_ERR(key)); - return ERR_CAST(key); - } - - ret =3D afs_validate(dvnode, key); + ret =3D afs_validate(dvnode, NULL); if (ret < 0) { - key_put(key); + afs_dir_unuse_cookie(dvnode, ret); _leave(" =3D %d [val]", ret); return ERR_PTR(ret); } @@ -1024,11 +999,10 @@ static struct dentry *afs_lookup(struct inode *dir, = struct dentry *dentry, dentry->d_name.name[dentry->d_name.len - 3] =3D=3D 's' && dentry->d_name.name[dentry->d_name.len - 2] =3D=3D 'y' && dentry->d_name.name[dentry->d_name.len - 1] =3D=3D 's') - return afs_lookup_atsys(dir, dentry, key); + return afs_lookup_atsys(dir, dentry); =20 afs_stat_v(dvnode, n_lookup); - inode =3D afs_do_lookup(dir, dentry, key); - key_put(key); + inode =3D afs_do_lookup(dir, dentry); if (inode =3D=3D ERR_PTR(-ENOENT)) inode =3D afs_try_auto_mntpt(dentry, dir); =20 @@ -1154,7 +1128,7 @@ static int afs_d_revalidate(struct dentry *dentry, un= signed int flags) afs_stat_v(dir, n_reval); =20 /* search the directory for this vnode */ - ret =3D afs_do_lookup_one(&dir->netfs.inode, dentry, &fid, key, &dir_vers= ion); + ret =3D afs_do_lookup_one(&dir->netfs.inode, dentry, &fid, &dir_version); switch (ret) { case 0: /* the filename maps to something */ @@ -1316,18 +1290,21 @@ static void afs_create_success(struct afs_operation= *op) =20 static void afs_create_edit_dir(struct afs_operation *op) { + struct netfs_cache_resources cres =3D {}; struct afs_vnode_param *dvp =3D &op->file[0]; struct afs_vnode_param *vp =3D &op->file[1]; struct afs_vnode *dvnode =3D dvp->vnode; =20 _enter("op=3D%08x", op->debug_id); =20 + fscache_begin_write_operation(&cres, afs_vnode_cache(dvnode)); down_write(&dvnode->validate_lock); if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) && dvnode->status.data_version =3D=3D dvp->dv_before + dvp->dv_delta) afs_edit_dir_add(dvnode, &op->dentry->d_name, &vp->fid, op->create.reason); up_write(&dvnode->validate_lock); + fscache_end_operation(&cres); } =20 
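The hook just above (create) and the rmdir, unlink and rename hooks that follow all repeat the same bracketing: begin a cache write operation, take validate_lock exclusively, apply the local edit only if the directory image is still valid and the server's data version advanced by exactly the predicted delta, then unlock and end the cache operation. A minimal sketch of that shared shape, expressed as a hypothetical helper (afs_dir_edit_locked() is illustrative only, not a function this patch adds):

static void afs_dir_edit_locked(struct afs_vnode *dvnode,
				struct afs_vnode_param *dvp,
				void (*edit)(struct afs_vnode *dvnode))
{
	struct netfs_cache_resources cres = {};

	/* Pin the cache object for writing while the image is edited. */
	fscache_begin_write_operation(&cres, afs_vnode_cache(dvnode));
	down_write(&dvnode->validate_lock);

	/* Edit only if the change we predicted is the change that happened. */
	if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
	    dvnode->status.data_version == dvp->dv_before + dvp->dv_delta)
		edit(dvnode);

	up_write(&dvnode->validate_lock);
	fscache_end_operation(&cres);
}
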
static void afs_create_put(struct afs_operation *op) @@ -1355,6 +1332,7 @@ static int afs_mkdir(struct mnt_idmap *idmap, struct = inode *dir, { struct afs_operation *op; struct afs_vnode *dvnode =3D AFS_FS_I(dir); + int ret; =20 _enter("{%llx:%llu},{%pd},%ho", dvnode->fid.vid, dvnode->fid.vnode, dentry, mode); @@ -1365,6 +1343,8 @@ static int afs_mkdir(struct mnt_idmap *idmap, struct = inode *dir, return PTR_ERR(op); } =20 + fscache_use_cookie(afs_vnode_cache(dvnode), true); + afs_op_set_vnode(op, 0, dvnode); op->file[0].dv_delta =3D 1; op->file[0].modification =3D true; @@ -1374,7 +1354,9 @@ static int afs_mkdir(struct mnt_idmap *idmap, struct = inode *dir, op->create.reason =3D afs_edit_dir_for_mkdir; op->mtime =3D current_time(dir); op->ops =3D &afs_mkdir_operation; - return afs_do_sync_operation(op); + ret =3D afs_do_sync_operation(op); + afs_dir_unuse_cookie(dvnode, ret); + return ret; } =20 /* @@ -1402,18 +1384,21 @@ static void afs_rmdir_success(struct afs_operation = *op) =20 static void afs_rmdir_edit_dir(struct afs_operation *op) { + struct netfs_cache_resources cres =3D {}; struct afs_vnode_param *dvp =3D &op->file[0]; struct afs_vnode *dvnode =3D dvp->vnode; =20 _enter("op=3D%08x", op->debug_id); afs_dir_remove_subdir(op->dentry); =20 + fscache_begin_write_operation(&cres, afs_vnode_cache(dvnode)); down_write(&dvnode->validate_lock); if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) && dvnode->status.data_version =3D=3D dvp->dv_before + dvp->dv_delta) afs_edit_dir_remove(dvnode, &op->dentry->d_name, afs_edit_dir_for_rmdir); up_write(&dvnode->validate_lock); + fscache_end_operation(&cres); } =20 static void afs_rmdir_put(struct afs_operation *op) @@ -1448,6 +1433,8 @@ static int afs_rmdir(struct inode *dir, struct dentry= *dentry) if (IS_ERR(op)) return PTR_ERR(op); =20 + fscache_use_cookie(afs_vnode_cache(dvnode), true); + afs_op_set_vnode(op, 0, dvnode); op->file[0].dv_delta =3D 1; op->file[0].modification =3D true; @@ -1476,10 +1463,13 @@ static int afs_rmdir(struct inode *dir, struct dent= ry *dentry) /* Not all systems that can host afs servers have ENOTEMPTY. 
*/ if (ret =3D=3D -EEXIST) ret =3D -ENOTEMPTY; +out: + afs_dir_unuse_cookie(dvnode, ret); return ret; =20 error: - return afs_put_operation(op); + ret =3D afs_put_operation(op); + goto out; } =20 /* @@ -1542,16 +1532,19 @@ static void afs_unlink_success(struct afs_operation= *op) =20 static void afs_unlink_edit_dir(struct afs_operation *op) { + struct netfs_cache_resources cres =3D {}; struct afs_vnode_param *dvp =3D &op->file[0]; struct afs_vnode *dvnode =3D dvp->vnode; =20 _enter("op=3D%08x", op->debug_id); + fscache_begin_write_operation(&cres, afs_vnode_cache(dvnode)); down_write(&dvnode->validate_lock); if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) && dvnode->status.data_version =3D=3D dvp->dv_before + dvp->dv_delta) afs_edit_dir_remove(dvnode, &op->dentry->d_name, afs_edit_dir_for_unlink); up_write(&dvnode->validate_lock); + fscache_end_operation(&cres); } =20 static void afs_unlink_put(struct afs_operation *op) @@ -1590,6 +1583,8 @@ static int afs_unlink(struct inode *dir, struct dentr= y *dentry) if (IS_ERR(op)) return PTR_ERR(op); =20 + fscache_use_cookie(afs_vnode_cache(dvnode), true); + afs_op_set_vnode(op, 0, dvnode); op->file[0].dv_delta =3D 1; op->file[0].modification =3D true; @@ -1636,10 +1631,10 @@ static int afs_unlink(struct inode *dir, struct den= try *dentry) afs_wait_for_operation(op); } =20 - return afs_put_operation(op); - error: - return afs_put_operation(op); + ret =3D afs_put_operation(op); + afs_dir_unuse_cookie(dvnode, ret); + return ret; } =20 static const struct afs_operation_ops afs_create_operation =3D { @@ -1673,6 +1668,8 @@ static int afs_create(struct mnt_idmap *idmap, struct= inode *dir, goto error; } =20 + fscache_use_cookie(afs_vnode_cache(dvnode), true); + afs_op_set_vnode(op, 0, dvnode); op->file[0].dv_delta =3D 1; op->file[0].modification =3D true; @@ -1683,7 +1680,9 @@ static int afs_create(struct mnt_idmap *idmap, struct= inode *dir, op->create.reason =3D afs_edit_dir_for_create; op->mtime =3D current_time(dir); op->ops =3D &afs_create_operation; - return afs_do_sync_operation(op); + ret =3D afs_do_sync_operation(op); + afs_dir_unuse_cookie(dvnode, ret); + return ret; =20 error: d_drop(dentry); @@ -1748,6 +1747,8 @@ static int afs_link(struct dentry *from, struct inode= *dir, goto error; } =20 + fscache_use_cookie(afs_vnode_cache(dvnode), true); + ret =3D afs_validate(vnode, op->key); if (ret < 0) goto error_op; @@ -1763,10 +1764,13 @@ static int afs_link(struct dentry *from, struct ino= de *dir, op->dentry_2 =3D from; op->ops =3D &afs_link_operation; op->create.reason =3D afs_edit_dir_for_link; - return afs_do_sync_operation(op); + ret =3D afs_do_sync_operation(op); + afs_dir_unuse_cookie(dvnode, ret); + return ret; =20 error_op: afs_put_operation(op); + afs_dir_unuse_cookie(dvnode, ret); error: d_drop(dentry); _leave(" =3D %d", ret); @@ -1810,6 +1814,8 @@ static int afs_symlink(struct mnt_idmap *idmap, struc= t inode *dir, goto error; } =20 + fscache_use_cookie(afs_vnode_cache(dvnode), true); + afs_op_set_vnode(op, 0, dvnode); op->file[0].dv_delta =3D 1; =20 @@ -1818,7 +1824,9 @@ static int afs_symlink(struct mnt_idmap *idmap, struc= t inode *dir, op->create.reason =3D afs_edit_dir_for_symlink; op->create.symlink =3D content; op->mtime =3D current_time(dir); - return afs_do_sync_operation(op); + ret =3D afs_do_sync_operation(op); + afs_dir_unuse_cookie(dvnode, ret); + return ret; =20 error: d_drop(dentry); @@ -1860,6 +1868,7 @@ static void afs_rename_success(struct afs_operation *= op) =20 static void afs_rename_edit_dir(struct 
afs_operation *op) { + struct netfs_cache_resources orig_cres =3D {}, new_cres =3D {}; struct afs_vnode_param *orig_dvp =3D &op->file[0]; struct afs_vnode_param *new_dvp =3D &op->file[1]; struct afs_vnode *orig_dvnode =3D orig_dvp->vnode; @@ -1876,6 +1885,10 @@ static void afs_rename_edit_dir(struct afs_operation= *op) op->rename.rehash =3D NULL; } =20 + fscache_begin_write_operation(&orig_cres, afs_vnode_cache(orig_dvnode)); + if (new_dvnode !=3D orig_dvnode) + fscache_begin_write_operation(&new_cres, afs_vnode_cache(new_dvnode)); + down_write(&orig_dvnode->validate_lock); if (test_bit(AFS_VNODE_DIR_VALID, &orig_dvnode->flags) && orig_dvnode->status.data_version =3D=3D orig_dvp->dv_before + orig_dv= p->dv_delta) @@ -1925,6 +1938,9 @@ static void afs_rename_edit_dir(struct afs_operation = *op) d_move(old_dentry, new_dentry); =20 up_write(&new_dvnode->validate_lock); + fscache_end_operation(&orig_cres); + if (new_dvnode !=3D orig_dvnode) + fscache_end_operation(&new_cres); } =20 static void afs_rename_put(struct afs_operation *op) @@ -1977,6 +1993,10 @@ static int afs_rename(struct mnt_idmap *idmap, struc= t inode *old_dir, if (IS_ERR(op)) return PTR_ERR(op); =20 + fscache_use_cookie(afs_vnode_cache(orig_dvnode), true); + if (new_dvnode !=3D orig_dvnode) + fscache_use_cookie(afs_vnode_cache(new_dvnode), true); + ret =3D afs_validate(vnode, op->key); afs_op_set_error(op, ret); if (ret < 0) @@ -2044,45 +2064,43 @@ static int afs_rename(struct mnt_idmap *idmap, stru= ct inode *old_dir, */ d_drop(old_dentry); =20 - return afs_do_sync_operation(op); + ret =3D afs_do_sync_operation(op); +out: + afs_dir_unuse_cookie(orig_dvnode, ret); + if (new_dvnode !=3D orig_dvnode) + afs_dir_unuse_cookie(new_dvnode, ret); + return ret; =20 error: - return afs_put_operation(op); -} - -/* - * Release a directory folio and clean up its private state if it's not bu= sy - * - return true if the folio can now be released, false if not - */ -static bool afs_dir_release_folio(struct folio *folio, gfp_t gfp_flags) -{ - struct afs_vnode *dvnode =3D AFS_FS_I(folio_inode(folio)); - - _enter("{{%llx:%llu}[%lu]}", dvnode->fid.vid, dvnode->fid.vnode, folio->i= ndex); - - folio_detach_private(folio); - - /* The directory will need reloading. */ - afs_invalidate_dir(dvnode, afs_dir_invalid_release_folio); - return true; + ret =3D afs_put_operation(op); + goto out; } =20 /* - * Invalidate part or all of a folio. + * Write the file contents to the cache as a single blob. */ -static void afs_dir_invalidate_folio(struct folio *folio, size_t offset, - size_t length) +int afs_single_writepages(struct address_space *mapping, + struct writeback_control *wbc) { - struct afs_vnode *dvnode =3D AFS_FS_I(folio_inode(folio)); - - _enter("{%lu},%zu,%zu", folio->index, offset, length); - - BUG_ON(!folio_test_locked(folio)); + struct afs_vnode *dvnode =3D AFS_FS_I(mapping->host); + struct iov_iter iter; + bool is_dir =3D (S_ISDIR(dvnode->netfs.inode.i_mode) && + !test_bit(AFS_VNODE_MOUNTPOINT, &dvnode->flags)); + int ret =3D 0; =20 - /* The directory will need reloading. */ - afs_invalidate_dir(dvnode, afs_dir_invalid_inval_folio); + /* Need to lock to prevent the folio queue and folios from being thrown + * away. + */ + down_read(&dvnode->validate_lock); + + if (is_dir ? 
+ test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) : + atomic64_read(&dvnode->cb_expires_at) !=3D AFS_NO_CB_PROMISE) { + iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, + i_size_read(&dvnode->netfs.inode)); + ret =3D netfs_writeback_single(mapping, wbc, &iter); + } =20 - /* we clean up only if the entire folio is being invalidated */ - if (offset =3D=3D 0 && length =3D=3D folio_size(folio)) - folio_detach_private(folio); + up_read(&dvnode->validate_lock); + return ret; } diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c index 5d092c8c0157..71cce884e434 100644 --- a/fs/afs/dir_edit.c +++ b/fs/afs/dir_edit.c @@ -10,6 +10,7 @@ #include #include #include +#include #include "internal.h" #include "xdr_fs.h" =20 @@ -105,23 +106,57 @@ static void afs_clear_contig_bits(union afs_xdr_dir_b= lock *block, } =20 /* - * Get a new directory folio. + * Get a specific block, extending the directory storage to cover it as ne= eded. */ -static struct folio *afs_dir_get_folio(struct afs_vnode *vnode, pgoff_t in= dex) +static union afs_xdr_dir_block *afs_dir_get_block(struct afs_dir_iter *ite= r, size_t block) { - struct address_space *mapping =3D vnode->netfs.inode.i_mapping; + struct folio_queue *fq; + struct afs_vnode *dvnode =3D iter->dvnode; struct folio *folio; + size_t blpos =3D block * AFS_DIR_BLOCK_SIZE; + size_t blend =3D (block + 1) * AFS_DIR_BLOCK_SIZE, fpos =3D iter->fpos; + int ret; + + if (dvnode->directory_size < blend) { + size_t cur_size =3D dvnode->directory_size; + + ret =3D netfs_alloc_folioq_buffer( + NULL, &dvnode->directory, &cur_size, blend, + mapping_gfp_mask(dvnode->netfs.inode.i_mapping)); + dvnode->directory_size =3D cur_size; + if (ret < 0) + goto fail; + } =20 - folio =3D __filemap_get_folio(mapping, index, - FGP_LOCK | FGP_ACCESSED | FGP_CREAT, - mapping->gfp_mask); - if (IS_ERR(folio)) { - afs_invalidate_dir(vnode, afs_dir_invalid_edit_get_block); - return NULL; + fq =3D iter->fq; + if (!fq) + fq =3D dvnode->directory; + + /* Search the folio queue for the folio containing the block... */ + for (; fq; fq =3D fq->next) { + for (int s =3D iter->fq_slot; s < folioq_count(fq); s++) { + size_t fsize =3D folioq_folio_size(fq, s); + + if (blend <=3D fpos + fsize) { + /* ... and then return the mapped block. */ + folio =3D folioq_folio(fq, s); + if (WARN_ON_ONCE(folio_pos(folio) !=3D fpos)) + goto fail; + iter->fq =3D fq; + iter->fq_slot =3D s; + iter->fpos =3D fpos; + return kmap_local_folio(folio, blpos - fpos); + } + fpos +=3D fsize; + } + iter->fq_slot =3D 0; } - if (!folio_test_private(folio)) - folio_attach_private(folio, (void *)1); - return folio; + +fail: + iter->fq =3D NULL; + iter->fq_slot =3D 0; + afs_invalidate_dir(dvnode, afs_dir_invalid_edit_get_block); + return NULL; } =20 /* @@ -209,9 +244,8 @@ void afs_edit_dir_add(struct afs_vnode *vnode, { union afs_xdr_dir_block *meta, *block; union afs_xdr_dirent *de; - struct folio *folio0, *folio; + struct afs_dir_iter iter =3D { .dvnode =3D vnode }; unsigned int need_slots, nr_blocks, b; - pgoff_t index; loff_t i_size; int slot; =20 @@ -224,16 +258,13 @@ void afs_edit_dir_add(struct afs_vnode *vnode, return; } =20 - folio0 =3D afs_dir_get_folio(vnode, 0); - if (!folio0) { - _leave(" [fgp]"); + meta =3D afs_dir_get_block(&iter, 0); + if (!meta) return; - } =20 /* Work out how many slots we're going to need. 
*/ need_slots =3D afs_dir_calc_slots(name->len); =20 - meta =3D kmap_local_folio(folio0, 0); if (i_size =3D=3D 0) goto new_directory; nr_blocks =3D i_size / AFS_DIR_BLOCK_SIZE; @@ -245,18 +276,17 @@ void afs_edit_dir_add(struct afs_vnode *vnode, /* If the directory extended into a new folio, then we need to * tack a new folio on the end. */ - index =3D b / AFS_DIR_BLOCKS_PER_PAGE; if (nr_blocks >=3D AFS_DIR_MAX_BLOCKS) goto error_too_many_blocks; - if (index >=3D folio_nr_pages(folio0)) { - folio =3D afs_dir_get_folio(vnode, index); - if (!folio) - goto error; - } else { - folio =3D folio0; - } =20 - block =3D kmap_local_folio(folio, b * AFS_DIR_BLOCK_SIZE - folio_pos(fol= io)); + /* Lower dir blocks have a counter in the header we can check. */ + if (b < AFS_DIR_BLOCKS_WITH_CTR && + meta->meta.alloc_ctrs[b] < need_slots) + continue; + + block =3D afs_dir_get_block(&iter, b); + if (!block) + goto error; =20 /* Abandon the edit if we got a callback break. */ if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags)) @@ -275,24 +305,16 @@ void afs_edit_dir_add(struct afs_vnode *vnode, afs_set_i_size(vnode, (b + 1) * AFS_DIR_BLOCK_SIZE); } =20 - /* Only lower dir blocks have a counter in the header. */ - if (b >=3D AFS_DIR_BLOCKS_WITH_CTR || - meta->meta.alloc_ctrs[b] >=3D need_slots) { - /* We need to try and find one or more consecutive - * slots to hold the entry. - */ - slot =3D afs_find_contig_bits(block, need_slots); - if (slot >=3D 0) { - _debug("slot %u", slot); - goto found_space; - } + /* We need to try and find one or more consecutive slots to + * hold the entry. + */ + slot =3D afs_find_contig_bits(block, need_slots); + if (slot >=3D 0) { + _debug("slot %u", slot); + goto found_space; } =20 kunmap_local(block); - if (folio !=3D folio0) { - folio_unlock(folio); - folio_put(folio); - } } =20 /* There are no spare slots of sufficient size, yet the operation @@ -307,8 +329,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, i_size =3D AFS_DIR_BLOCK_SIZE; afs_set_i_size(vnode, i_size); slot =3D AFS_DIR_RESV_BLOCKS0; - folio =3D folio0; - block =3D kmap_local_folio(folio, 0); + block =3D afs_dir_get_block(&iter, 0); nr_blocks =3D 1; b =3D 0; =20 @@ -328,10 +349,6 @@ void afs_edit_dir_add(struct afs_vnode *vnode, /* Adjust the bitmap. */ afs_set_contig_bits(block, slot, need_slots); kunmap_local(block); - if (folio !=3D folio0) { - folio_unlock(folio); - folio_put(folio); - } =20 /* Adjust the allocation counter. 
*/ if (b < AFS_DIR_BLOCKS_WITH_CTR) @@ -341,20 +358,16 @@ void afs_edit_dir_add(struct afs_vnode *vnode, afs_stat_v(vnode, n_dir_cr); _debug("Insert %s in %u[%u]", name->name, b, slot); =20 + netfs_single_mark_inode_dirty(&vnode->netfs.inode); + out_unmap: kunmap_local(meta); - folio_unlock(folio0); - folio_put(folio0); _leave(""); return; =20 already_invalidated: trace_afs_edit_dir(vnode, why, afs_edit_dir_create_inval, 0, 0, 0, 0, nam= e->name); kunmap_local(block); - if (folio !=3D folio0) { - folio_unlock(folio); - folio_put(folio); - } goto out_unmap; =20 error_too_many_blocks: @@ -376,9 +389,8 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, { union afs_xdr_dir_block *meta, *block; union afs_xdr_dirent *de; - struct folio *folio0, *folio; + struct afs_dir_iter iter =3D { .dvnode =3D vnode }; unsigned int need_slots, nr_blocks, b; - pgoff_t index; loff_t i_size; int slot; =20 @@ -393,31 +405,20 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, } nr_blocks =3D i_size / AFS_DIR_BLOCK_SIZE; =20 - folio0 =3D afs_dir_get_folio(vnode, 0); - if (!folio0) { - _leave(" [fgp]"); + meta =3D afs_dir_get_block(&iter, 0); + if (!meta) return; - } =20 /* Work out how many slots we're going to discard. */ need_slots =3D afs_dir_calc_slots(name->len); =20 - meta =3D kmap_local_folio(folio0, 0); - /* Find a block that has sufficient slots available. Each folio * contains two or more directory blocks. */ for (b =3D 0; b < nr_blocks; b++) { - index =3D b / AFS_DIR_BLOCKS_PER_PAGE; - if (index >=3D folio_nr_pages(folio0)) { - folio =3D afs_dir_get_folio(vnode, index); - if (!folio) - goto error; - } else { - folio =3D folio0; - } - - block =3D kmap_local_folio(folio, b * AFS_DIR_BLOCK_SIZE - folio_pos(fol= io)); + block =3D afs_dir_get_block(&iter, b); + if (!block) + goto error; =20 /* Abandon the edit if we got a callback break. */ if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags)) @@ -431,10 +432,6 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, } =20 kunmap_local(block); - if (folio !=3D folio0) { - folio_unlock(folio); - folio_put(folio); - } } =20 /* Didn't find the dirent to clobber. Download the directory again. */ @@ -455,34 +452,26 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, /* Adjust the bitmap. */ afs_clear_contig_bits(block, slot, need_slots); kunmap_local(block); - if (folio !=3D folio0) { - folio_unlock(folio); - folio_put(folio); - } =20 /* Adjust the allocation counter. 
*/ if (b < AFS_DIR_BLOCKS_WITH_CTR) meta->meta.alloc_ctrs[b] +=3D need_slots; =20 + netfs_single_mark_inode_dirty(&vnode->netfs.inode); + inode_set_iversion_raw(&vnode->netfs.inode, vnode->status.data_version); afs_stat_v(vnode, n_dir_rm); _debug("Remove %s from %u[%u]", name->name, b, slot); =20 out_unmap: kunmap_local(meta); - folio_unlock(folio0); - folio_put(folio0); _leave(""); return; =20 already_invalidated: + kunmap_local(block); trace_afs_edit_dir(vnode, why, afs_edit_dir_delete_inval, 0, 0, 0, 0, name->name); - kunmap_local(block); - if (folio !=3D folio0) { - folio_unlock(folio); - folio_put(folio); - } goto out_unmap; =20 error: @@ -500,9 +489,8 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode= , struct afs_vnode *new_d { union afs_xdr_dir_block *block; union afs_xdr_dirent *de; - struct folio *folio; + struct afs_dir_iter iter =3D { .dvnode =3D vnode }; unsigned int nr_blocks, b; - pgoff_t index; loff_t i_size; int slot; =20 @@ -513,19 +501,17 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vno= de, struct afs_vnode *new_d afs_invalidate_dir(vnode, afs_dir_invalid_edit_upd_bad_size); return; } + nr_blocks =3D i_size / AFS_DIR_BLOCK_SIZE; =20 /* Find a block that has sufficient slots available. Each folio * contains two or more directory blocks. */ for (b =3D 0; b < nr_blocks; b++) { - index =3D b / AFS_DIR_BLOCKS_PER_PAGE; - folio =3D afs_dir_get_folio(vnode, index); - if (!folio) + block =3D afs_dir_get_block(&iter, b); + if (!block) goto error; =20 - block =3D kmap_local_folio(folio, b * AFS_DIR_BLOCK_SIZE - folio_pos(fol= io)); - /* Abandon the edit if we got a callback break. */ if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags)) goto already_invalidated; @@ -535,8 +521,6 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode= , struct afs_vnode *new_d goto found_dirent; =20 kunmap_local(block); - folio_unlock(folio); - folio_put(folio); } =20 /* Didn't find the dirent to clobber. Download the directory again. 
*/ @@ -554,8 +538,7 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode= , struct afs_vnode *new_d ntohl(de->u.vnode), ntohl(de->u.unique), ".."); =20 kunmap_local(block); - folio_unlock(folio); - folio_put(folio); + netfs_single_mark_inode_dirty(&vnode->netfs.inode); inode_set_iversion_raw(&vnode->netfs.inode, vnode->status.data_version); =20 out: @@ -564,8 +547,6 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode= , struct afs_vnode *new_d =20 already_invalidated: kunmap_local(block); - folio_unlock(folio); - folio_put(folio); trace_afs_edit_dir(vnode, why, afs_edit_dir_update_inval, 0, 0, 0, 0, ".."); goto out; diff --git a/fs/afs/file.c b/fs/afs/file.c index a9d98d18407c..5bc36bfaa173 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -389,6 +389,14 @@ static int afs_init_request(struct netfs_io_request *r= req, struct file *file) rreq->netfs_priv =3D key; } break; + case NETFS_WRITEBACK: + case NETFS_WRITETHROUGH: + case NETFS_UNBUFFERED_WRITE: + case NETFS_DIO_WRITE: + if (S_ISREG(rreq->inode->i_mode)) + rreq->io_streams[0].avail =3D true; + break; + case NETFS_WRITEBACK_SINGLE: default: break; } diff --git a/fs/afs/inode.c b/fs/afs/inode.c index 495ecef91679..0ed1e5c35fef 100644 --- a/fs/afs/inode.c +++ b/fs/afs/inode.c @@ -110,7 +110,9 @@ static int afs_inode_init_from_status(struct afs_operat= ion *op, inode->i_op =3D &afs_dir_inode_operations; inode->i_fop =3D &afs_dir_file_operations; inode->i_mapping->a_ops =3D &afs_dir_aops; - mapping_set_large_folios(inode->i_mapping); + __set_bit(NETFS_ICTX_SINGLE_NO_UPLOAD, &vnode->netfs.flags); + /* Assume locally cached directory data will be valid. */ + __set_bit(AFS_VNODE_DIR_VALID, &vnode->flags); break; case AFS_FTYPE_SYMLINK: /* Symlinks with a mode of 0644 are actually mountpoints. 
*/ @@ -440,7 +442,8 @@ static void afs_get_inode_cache(struct afs_vnode *vnode) } __packed key; struct afs_vnode_cache_aux aux; =20 - if (vnode->status.type !=3D AFS_FTYPE_FILE) { + if (vnode->status.type !=3D AFS_FTYPE_FILE && + vnode->status.type !=3D AFS_FTYPE_DIR) { vnode->netfs.cache =3D NULL; return; } @@ -642,6 +645,7 @@ int afs_drop_inode(struct inode *inode) void afs_evict_inode(struct inode *inode) { struct afs_vnode_cache_aux aux; + struct afs_super_info *sbi =3D AFS_FS_S(inode->i_sb); struct afs_vnode *vnode =3D AFS_FS_I(inode); =20 _enter("{%llx:%llu.%d}", @@ -653,8 +657,21 @@ void afs_evict_inode(struct inode *inode) =20 ASSERTCMP(inode->i_ino, =3D=3D, vnode->fid.vnode); =20 + if ((S_ISDIR(inode->i_mode)) && + (inode->i_state & I_DIRTY) && + !sbi->dyn_root) { + struct writeback_control wbc =3D { + .sync_mode =3D WB_SYNC_ALL, + .for_sync =3D true, + .range_end =3D LLONG_MAX, + }; + + afs_single_writepages(inode->i_mapping, &wbc); + } + netfs_wait_for_outstanding_io(inode); truncate_inode_pages_final(&inode->i_data); + netfs_free_folioq_buffer(vnode->directory); =20 afs_set_cache_aux(vnode, &aux); netfs_clear_inode_writeback(inode, &aux); diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 20d2f723948d..1744a93aae27 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -720,7 +720,9 @@ struct afs_vnode { #define AFS_VNODE_NEW_CONTENT 8 /* Set if file has new content (create/tr= unc-0) */ #define AFS_VNODE_SILLY_DELETED 9 /* Set if file has been silly-deleted */ #define AFS_VNODE_MODIFYING 10 /* Set if we're performing a modification = op */ +#define AFS_VNODE_DIR_READ 11 /* Set if we've read a dir's contents */ =20 + struct folio_queue *directory; /* Directory contents */ struct list_head wb_keys; /* List of keys available for writeback */ struct list_head pending_locks; /* locks waiting to be granted */ struct list_head granted_locks; /* locks granted on this file */ @@ -729,6 +731,7 @@ struct afs_vnode { ktime_t locked_at; /* Time at which lock obtained */ enum afs_lock_state lock_state : 8; afs_lock_type_t lock_type : 8; + unsigned int directory_size; /* Amount of space in ->directory */ =20 /* outstanding callback notification on this file */ struct work_struct cb_work; /* Work for mmap'd files */ @@ -984,6 +987,16 @@ static inline void afs_invalidate_cache(struct afs_vno= de *vnode, unsigned int fl i_size_read(&vnode->netfs.inode), flags); } =20 +/* + * Directory iteration management. 
+ */ +struct afs_dir_iter { + struct afs_vnode *dvnode; + struct folio_queue *fq; + unsigned int fpos; + int fq_slot; +}; + #include =20 /*************************************************************************= ****/ @@ -1065,8 +1078,11 @@ extern const struct inode_operations afs_dir_inode_o= perations; extern const struct address_space_operations afs_dir_aops; extern const struct dentry_operations afs_fs_dentry_operations; =20 +ssize_t afs_read_single(struct afs_vnode *dvnode, struct file *file); extern void afs_d_release(struct dentry *); extern void afs_check_for_remote_deletion(struct afs_operation *); +int afs_single_writepages(struct address_space *mapping, + struct writeback_control *wbc); =20 /* * dir_edit.c diff --git a/fs/afs/super.c b/fs/afs/super.c index 7631302c1984..a9bee610674e 100644 --- a/fs/afs/super.c +++ b/fs/afs/super.c @@ -696,6 +696,8 @@ static struct inode *afs_alloc_inode(struct super_block= *sb) vnode->volume =3D NULL; vnode->lock_key =3D NULL; vnode->permit_cache =3D NULL; + vnode->directory =3D NULL; + vnode->directory_size =3D 0; =20 vnode->flags =3D 1 << AFS_VNODE_UNSET; vnode->lock_state =3D AFS_VNODE_LOCK_NONE; diff --git a/fs/afs/write.c b/fs/afs/write.c index 34107b55f834..17d188aaf101 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -179,8 +179,8 @@ void afs_issue_write(struct netfs_io_subrequest *subreq) */ void afs_begin_writeback(struct netfs_io_request *wreq) { - afs_get_writeback_key(wreq); - wreq->io_streams[0].avail =3D true; + if (S_ISREG(wreq->inode->i_mode)) + afs_get_writeback_key(wreq); } =20 /* diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h index 7cb5583efb91..d05f2c09efe3 100644 --- a/include/trace/events/afs.h +++ b/include/trace/events/afs.h @@ -930,9 +930,9 @@ TRACE_EVENT(afs_sent_data, ); =20 TRACE_EVENT(afs_dir_check_failed, - TP_PROTO(struct afs_vnode *vnode, loff_t off, loff_t i_size), + TP_PROTO(struct afs_vnode *vnode, loff_t off), =20 - TP_ARGS(vnode, off, i_size), + TP_ARGS(vnode, off), =20 TP_STRUCT__entry( __field(struct afs_vnode *, vnode) @@ -943,7 +943,7 @@ TRACE_EVENT(afs_dir_check_failed, TP_fast_assign( __entry->vnode =3D vnode; __entry->off =3D off; - __entry->i_size =3D i_size; + __entry->i_size =3D i_size_read(&vnode->netfs.inode); ), =20 TP_printk("vn=3D%p %llx/%llx",
From nobody Sat Nov 23 23:18:39 2024 From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v4 24/33] afs: Use netfslib for symlinks, allowing them to be cached Date: Fri, 8 Nov 2024 17:32:25 +0000 Message-ID: <20241108173236.1382366-25-dhowells@redhat.com> In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com> References: <20241108173236.1382366-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use netfslib to read symlinks, thereby allowing them to be cached by fscache and cachefiles.
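The subtlety in this conversion is lifetime: ->get_link() now returns a pointer into a kmapped folio in the vnode's folio_queue buffer, so the mapping and the folio reference must persist until the VFS has finished with the string. The delayed_call mechanism provides that guarantee. A minimal consumer-side sketch of the contract (illustrative only; use_the_string() is a hypothetical consumer):

	DEFINE_DELAYED_CALL(done);
	const char *content;

	/* afs_get_link() kmaps folio 0 of vnode->directory and registers
	 * afs_put_link() to undo that once the caller has finished.
	 */
	content = afs_get_link(dentry, d_inode(dentry), &done);
	if (!IS_ERR(content))
		use_the_string(content);

	/* Invokes afs_put_link(): kunmap_local() then folio_put(). */
	do_delayed_call(&done);
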
Signed-off-by: David Howells cc: Marc Dionne cc: Jeff Layton cc: linux-afs@lists.infradead.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/afs/file.c | 32 ------------------- fs/afs/inode.c | 64 +++++++++++++++++++++++++++++++++++--- fs/afs/internal.h | 4 ++- fs/afs/mntpt.c | 22 ++++++------- include/trace/events/afs.h | 1 + 5 files changed, 74 insertions(+), 49 deletions(-) diff --git a/fs/afs/file.c b/fs/afs/file.c index 5bc36bfaa173..48695a50d2f9 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -20,7 +20,6 @@ #include "internal.h" =20 static int afs_file_mmap(struct file *file, struct vm_area_struct *vma); -static int afs_symlink_read_folio(struct file *file, struct folio *folio); =20 static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *ite= r); static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos, @@ -61,13 +60,6 @@ const struct address_space_operations afs_file_aops =3D { .writepages =3D afs_writepages, }; =20 -const struct address_space_operations afs_symlink_aops =3D { - .read_folio =3D afs_symlink_read_folio, - .release_folio =3D netfs_release_folio, - .invalidate_folio =3D netfs_invalidate_folio, - .migrate_folio =3D filemap_migrate_folio, -}; - static const struct vm_operations_struct afs_vm_ops =3D { .open =3D afs_vm_open, .close =3D afs_vm_close, @@ -346,30 +338,6 @@ static void afs_issue_read(struct netfs_io_subrequest = *subreq) queue_work(system_long_wq, &subreq->work); } =20 -static int afs_symlink_read_folio(struct file *file, struct folio *folio) -{ - struct afs_vnode *vnode =3D AFS_FS_I(folio->mapping->host); - struct afs_read *fsreq; - int ret; - - fsreq =3D afs_alloc_read(GFP_NOFS); - if (!fsreq) - return -ENOMEM; - - fsreq->pos =3D folio_pos(folio); - fsreq->len =3D folio_size(folio); - fsreq->vnode =3D vnode; - fsreq->iter =3D &fsreq->def_iter; - iov_iter_xarray(&fsreq->def_iter, ITER_DEST, &folio->mapping->i_pages, - fsreq->pos, fsreq->len); - - ret =3D afs_fetch_data(fsreq->vnode, fsreq); - if (ret =3D=3D 0) - folio_mark_uptodate(folio); - folio_unlock(folio); - return ret; -} - static int afs_init_request(struct netfs_io_request *rreq, struct file *fi= le) { struct afs_vnode *vnode =3D AFS_FS_I(rreq->inode); diff --git a/fs/afs/inode.c b/fs/afs/inode.c index 0ed1e5c35fef..6934cc30a4ca 100644 --- a/fs/afs/inode.c +++ b/fs/afs/inode.c @@ -25,8 +25,60 @@ #include "internal.h" #include "afs_fs.h" =20 +static void afs_put_link(void *arg) +{ + struct folio *folio =3D virt_to_folio(arg); + + kunmap_local(arg); + folio_put(folio); +} + +const char *afs_get_link(struct dentry *dentry, struct inode *inode, + struct delayed_call *callback) +{ + struct afs_vnode *vnode =3D AFS_FS_I(inode); + struct folio *folio; + char *content; + ssize_t ret; + + if (atomic64_read(&vnode->cb_expires_at) =3D=3D AFS_NO_CB_PROMISE || + !test_bit(AFS_VNODE_DIR_READ, &vnode->flags)) { + if (!dentry) + return ERR_PTR(-ECHILD); + ret =3D afs_read_single(vnode, NULL); + if (ret < 0) + return ERR_PTR(ret); + } + + folio =3D folioq_folio(vnode->directory, 0); + folio_get(folio); + content =3D kmap_local_folio(folio, 0); + set_delayed_call(callback, afs_put_link, content); + return content; +} + +int afs_readlink(struct dentry *dentry, char __user *buffer, int buflen) +{ + DEFINE_DELAYED_CALL(done); + const char *content; + int len; + + content =3D afs_get_link(dentry, d_inode(dentry), &done); + if (IS_ERR(content)) { + do_delayed_call(&done); + return PTR_ERR(content); + } + + len =3D umin(strlen(content), buflen); + if (copy_to_user(buffer, 
content, len)) + len =3D -EFAULT; + do_delayed_call(&done); + return len; +} + static const struct inode_operations afs_symlink_inode_operations =3D { - .get_link =3D page_get_link, + .get_link =3D afs_get_link, + .readlink =3D afs_readlink, }; =20 static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode = *parent_vnode) @@ -124,13 +176,13 @@ static int afs_inode_init_from_status(struct afs_oper= ation *op, inode->i_mode =3D S_IFDIR | 0555; inode->i_op =3D &afs_mntpt_inode_operations; inode->i_fop =3D &afs_mntpt_file_operations; - inode->i_mapping->a_ops =3D &afs_symlink_aops; } else { inode->i_mode =3D S_IFLNK | status->mode; inode->i_op =3D &afs_symlink_inode_operations; - inode->i_mapping->a_ops =3D &afs_symlink_aops; } + inode->i_mapping->a_ops =3D &afs_dir_aops; inode_nohighmem(inode); + mapping_set_release_always(inode->i_mapping); break; default: dump_vnode(vnode, op->file[0].vnode !=3D vnode ? op->file[0].vnode : NUL= L); @@ -443,7 +495,8 @@ static void afs_get_inode_cache(struct afs_vnode *vnode) struct afs_vnode_cache_aux aux; =20 if (vnode->status.type !=3D AFS_FTYPE_FILE && - vnode->status.type !=3D AFS_FTYPE_DIR) { + vnode->status.type !=3D AFS_FTYPE_DIR && + vnode->status.type !=3D AFS_FTYPE_SYMLINK) { vnode->netfs.cache =3D NULL; return; } @@ -657,7 +710,8 @@ void afs_evict_inode(struct inode *inode) =20 ASSERTCMP(inode->i_ino, =3D=3D, vnode->fid.vnode); =20 - if ((S_ISDIR(inode->i_mode)) && + if ((S_ISDIR(inode->i_mode) || + S_ISLNK(inode->i_mode)) && (inode->i_state & I_DIRTY) && !sbi->dyn_root) { struct writeback_control wbc =3D { diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 1744a93aae27..7f170455cf25 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -1116,7 +1116,6 @@ extern void afs_dynroot_depopulate(struct super_block= *); * file.c */ extern const struct address_space_operations afs_file_aops; -extern const struct address_space_operations afs_symlink_aops; extern const struct inode_operations afs_file_inode_operations; extern const struct file_operations afs_file_operations; extern const struct netfs_request_ops afs_req_ops; @@ -1222,6 +1221,9 @@ extern void afs_fs_probe_cleanup(struct afs_net *); */ extern const struct afs_operation_ops afs_fetch_status_operation; =20 +const char *afs_get_link(struct dentry *dentry, struct inode *inode, + struct delayed_call *callback); +int afs_readlink(struct dentry *dentry, char __user *buffer, int buflen); extern void afs_vnode_commit_status(struct afs_operation *, struct afs_vno= de_param *); extern int afs_fetch_status(struct afs_vnode *, struct key *, bool, afs_ac= cess_t *); extern int afs_ilookup5_test_by_fid(struct inode *, void *); diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c index 297487ee8323..507c25a5b2cb 100644 --- a/fs/afs/mntpt.c +++ b/fs/afs/mntpt.c @@ -30,7 +30,7 @@ const struct file_operations afs_mntpt_file_operations = =3D { =20 const struct inode_operations afs_mntpt_inode_operations =3D { .lookup =3D afs_mntpt_lookup, - .readlink =3D page_readlink, + .readlink =3D afs_readlink, .getattr =3D afs_getattr, }; =20 @@ -118,9 +118,9 @@ static int afs_mntpt_set_params(struct fs_context *fc, = struct dentry *mntpt) ctx->volnamesz =3D sizeof(afs_root_volume) - 1; } else { /* read the contents of the AFS special symlink */ - struct page *page; + DEFINE_DELAYED_CALL(cleanup); + const char *content; loff_t size =3D i_size_read(d_inode(mntpt)); - char *buf; =20 if (src_as->cell) ctx->cell =3D afs_use_cell(src_as->cell, afs_cell_trace_use_mntpt); @@ -128,16 +128,16 @@ static int 
afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt)
 	if (size < 2 || size > PAGE_SIZE - 1)
 		return -EINVAL;
 
-	page = read_mapping_page(d_inode(mntpt)->i_mapping, 0, NULL);
-	if (IS_ERR(page))
-		return PTR_ERR(page);
+	content = afs_get_link(mntpt, d_inode(mntpt), &cleanup);
+	if (IS_ERR(content)) {
+		do_delayed_call(&cleanup);
+		return PTR_ERR(content);
+	}
 
-	buf = kmap(page);
 	ret = -EINVAL;
-	if (buf[size - 1] == '.')
-		ret = vfs_parse_fs_string(fc, "source", buf, size - 1);
-	kunmap(page);
-	put_page(page);
+	if (content[size - 1] == '.')
+		ret = vfs_parse_fs_string(fc, "source", content, size - 1);
+	do_delayed_call(&cleanup);
 	if (ret < 0)
 		return ret;
 
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index d05f2c09efe3..49a749672e38 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -422,6 +422,7 @@ enum yfs_cm_operation {
 	EM(afs_file_error_dir_over_end,		"DIR_ENT_OVER_END")	\
 	EM(afs_file_error_dir_small,		"DIR_SMALL")		\
 	EM(afs_file_error_dir_unmarked_ext,	"DIR_UNMARKED_EXT")	\
+	EM(afs_file_error_symlink_big,		"SYM_BIG")		\
 	EM(afs_file_error_mntpt,		"MNTPT_READ_FAILED")	\
 	E_(afs_file_error_writeback_fail,	"WRITEBACK_FAILED")

From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 25/33] afs: Eliminate afs_read
Date: Fri, 8 Nov 2024 17:32:26 +0000
Message-ID: <20241108173236.1382366-26-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Now that directory and symlink reads go through netfslib, the afs_read
struct is mostly redundant: almost all of its data is duplicated in the
netfs_io_request and netfs_io_subrequest structs, which are also
available any time we're doing a fetch.

Eliminate afs_read by moving the one field we still need to the afs_call
struct (the server may return a different amount of data than was asked
for, so we have to track how much of that remains) and by using the
netfs_io_subrequest directly instead.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/file.c      | 96 +++++++++------------------------------
 fs/afs/fsclient.c  | 55 +++++++++++------------
 fs/afs/inode.c     |  2 +
 fs/afs/internal.h  | 35 ++---------------
 fs/afs/yfsclient.c | 47 +++++++++++------------
 fs/netfs/main.c    |  2 +-
 6 files changed, 72 insertions(+), 165 deletions(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index 48695a50d2f9..b996f4419c0c 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -200,50 +200,12 @@ int afs_release(struct inode *inode, struct file *file)
 	return ret;
 }
 
-/*
- * Allocate a new read record.
- */
-struct afs_read *afs_alloc_read(gfp_t gfp)
-{
-	struct afs_read *req;
-
-	req = kzalloc(sizeof(struct afs_read), gfp);
-	if (req)
-		refcount_set(&req->usage, 1);
-
-	return req;
-}
-
-/*
- * Dispose of a ref to a read record.
- */ -void afs_put_read(struct afs_read *req) -{ - if (refcount_dec_and_test(&req->usage)) { - if (req->cleanup) - req->cleanup(req); - key_put(req->key); - kfree(req); - } -} - static void afs_fetch_data_notify(struct afs_operation *op) { - struct afs_read *req =3D op->fetch.req; - struct netfs_io_subrequest *subreq =3D req->subreq; - int error =3D afs_op_error(op); - - req->error =3D error; - if (subreq) { - subreq->rreq->i_size =3D req->file_size; - if (req->pos + req->actual_len >=3D req->file_size) - __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); - subreq->error =3D error; - netfs_read_subreq_terminated(subreq); - req->subreq =3D NULL; - } else if (req->done) { - req->done(req); - } + struct netfs_io_subrequest *subreq =3D op->fetch.subreq; + + subreq->error =3D afs_op_error(op); + netfs_read_subreq_terminated(subreq); } =20 static void afs_fetch_data_success(struct afs_operation *op) @@ -253,7 +215,7 @@ static void afs_fetch_data_success(struct afs_operation= *op) _enter("op=3D%08x", op->debug_id); afs_vnode_commit_status(op, &op->file[0]); afs_stat_v(vnode, n_fetches); - atomic_long_add(op->fetch.req->actual_len, &op->net->n_fetch_bytes); + atomic_long_add(op->fetch.subreq->transferred, &op->net->n_fetch_bytes); afs_fetch_data_notify(op); } =20 @@ -265,11 +227,10 @@ static void afs_fetch_data_aborted(struct afs_operati= on *op) =20 static void afs_fetch_data_put(struct afs_operation *op) { - op->fetch.req->error =3D afs_op_error(op); - afs_put_read(op->fetch.req); + op->fetch.subreq->error =3D afs_op_error(op); } =20 -static const struct afs_operation_ops afs_fetch_data_operation =3D { +const struct afs_operation_ops afs_fetch_data_operation =3D { .issue_afs_rpc =3D afs_fs_fetch_data, .issue_yfs_rpc =3D yfs_fs_fetch_data, .success =3D afs_fetch_data_success, @@ -281,55 +242,34 @@ static const struct afs_operation_ops afs_fetch_data_= operation =3D { /* * Fetch file data from the volume. 
*/ -int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req) +static void afs_read_worker(struct work_struct *work) { + struct netfs_io_subrequest *subreq =3D container_of(work, struct netfs_io= _subrequest, work); struct afs_operation *op; + struct afs_vnode *vnode =3D AFS_FS_I(subreq->rreq->inode); + struct key *key =3D subreq->rreq->netfs_priv; =20 _enter("%s{%llx:%llu.%u},%x,,,", vnode->volume->name, vnode->fid.vid, vnode->fid.vnode, vnode->fid.unique, - key_serial(req->key)); + key_serial(key)); =20 - op =3D afs_alloc_operation(req->key, vnode->volume); + op =3D afs_alloc_operation(key, vnode->volume); if (IS_ERR(op)) { - if (req->subreq) { - req->subreq->error =3D PTR_ERR(op); - netfs_read_subreq_terminated(req->subreq); - } - return PTR_ERR(op); + subreq->error =3D PTR_ERR(op); + netfs_read_subreq_terminated(subreq); + return; } =20 afs_op_set_vnode(op, 0, vnode); =20 - op->fetch.req =3D afs_get_read(req); + op->fetch.subreq =3D subreq; op->ops =3D &afs_fetch_data_operation; - return afs_do_sync_operation(op); -} - -static void afs_read_worker(struct work_struct *work) -{ - struct netfs_io_subrequest *subreq =3D container_of(work, struct netfs_io= _subrequest, work); - struct afs_vnode *vnode =3D AFS_FS_I(subreq->rreq->inode); - struct afs_read *fsreq; - - fsreq =3D afs_alloc_read(GFP_NOFS); - if (!fsreq) { - subreq->error =3D -ENOMEM; - return netfs_read_subreq_terminated(subreq); - } - - fsreq->subreq =3D subreq; - fsreq->pos =3D subreq->start + subreq->transferred; - fsreq->len =3D subreq->len - subreq->transferred; - fsreq->key =3D key_get(subreq->rreq->netfs_priv); - fsreq->vnode =3D vnode; - fsreq->iter =3D &subreq->io_iter; =20 trace_netfs_sreq(subreq, netfs_sreq_trace_submit); - afs_fetch_data(fsreq->vnode, fsreq); - afs_put_read(fsreq); + afs_do_sync_operation(op); } =20 static void afs_issue_read(struct netfs_io_subrequest *subreq) diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c index 784f7daab112..d9d224c95454 100644 --- a/fs/afs/fsclient.c +++ b/fs/afs/fsclient.c @@ -301,19 +301,19 @@ void afs_fs_fetch_status(struct afs_operation *op) static int afs_deliver_fs_fetch_data(struct afs_call *call) { struct afs_operation *op =3D call->op; + struct netfs_io_subrequest *subreq =3D op->fetch.subreq; struct afs_vnode_param *vp =3D &op->file[0]; - struct afs_read *req =3D op->fetch.req; const __be32 *bp; size_t count_before; int ret; =20 _enter("{%u,%zu,%zu/%llu}", call->unmarshall, call->iov_len, iov_iter_count(call->iter), - req->actual_len); + call->remaining); =20 switch (call->unmarshall) { case 0: - req->actual_len =3D 0; + call->remaining =3D 0; call->unmarshall++; if (call->operation_ID =3D=3D FSFETCHDATA64) { afs_extract_to_tmp64(call); @@ -323,8 +323,8 @@ static int afs_deliver_fs_fetch_data(struct afs_call *c= all) } fallthrough; =20 - /* Extract the returned data length into - * ->actual_len. This may indicate more or less data than was + /* Extract the returned data length into ->remaining. + * This may indicate more or less data than was * requested will be returned. 
*/ case 1: @@ -333,42 +333,41 @@ static int afs_deliver_fs_fetch_data(struct afs_call = *call) if (ret < 0) return ret; =20 - req->actual_len =3D be64_to_cpu(call->tmp64); - _debug("DATA length: %llu", req->actual_len); + call->remaining =3D be64_to_cpu(call->tmp64); + _debug("DATA length: %llu", call->remaining); =20 - if (req->actual_len =3D=3D 0) + if (call->remaining =3D=3D 0) goto no_more_data; =20 - call->iter =3D req->iter; - call->iov_len =3D min(req->actual_len, req->len); + call->iter =3D &subreq->io_iter; + call->iov_len =3D umin(call->remaining, subreq->len - subreq->transferre= d); call->unmarshall++; fallthrough; =20 /* extract the returned data */ case 2: count_before =3D call->iov_len; - _debug("extract data %zu/%llu", count_before, req->actual_len); + _debug("extract data %zu/%llu", count_before, call->remaining); =20 ret =3D afs_extract_data(call, true); - if (req->subreq) { - req->subreq->transferred +=3D count_before - call->iov_len; - netfs_read_subreq_progress(req->subreq); - } + subreq->transferred +=3D count_before - call->iov_len; + call->remaining -=3D count_before - call->iov_len; + netfs_read_subreq_progress(subreq); if (ret < 0) return ret; =20 call->iter =3D &call->def_iter; - if (req->actual_len <=3D req->len) + if (call->remaining) goto no_more_data; =20 /* Discard any excess data the server gave us */ - afs_extract_discard(call, req->actual_len - req->len); + afs_extract_discard(call, call->remaining); call->unmarshall =3D 3; fallthrough; =20 case 3: _debug("extract discard %zu/%llu", - iov_iter_count(call->iter), req->actual_len - req->len); + iov_iter_count(call->iter), call->remaining); =20 ret =3D afs_extract_data(call, true); if (ret < 0) @@ -390,8 +389,8 @@ static int afs_deliver_fs_fetch_data(struct afs_call *c= all) xdr_decode_AFSCallBack(&bp, call, &vp->scb); xdr_decode_AFSVolSync(&bp, &op->volsync); =20 - req->data_version =3D vp->scb.status.data_version; - req->file_size =3D vp->scb.status.size; + if (subreq->start + subreq->transferred >=3D vp->scb.status.size) + __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); =20 call->unmarshall++; fallthrough; @@ -426,8 +425,8 @@ static const struct afs_call_type afs_RXFSFetchData64 = =3D { */ static void afs_fs_fetch_data64(struct afs_operation *op) { + struct netfs_io_subrequest *subreq =3D op->fetch.subreq; struct afs_vnode_param *vp =3D &op->file[0]; - struct afs_read *req =3D op->fetch.req; struct afs_call *call; __be32 *bp; =20 @@ -443,10 +442,10 @@ static void afs_fs_fetch_data64(struct afs_operation = *op) bp[1] =3D htonl(vp->fid.vid); bp[2] =3D htonl(vp->fid.vnode); bp[3] =3D htonl(vp->fid.unique); - bp[4] =3D htonl(upper_32_bits(req->pos)); - bp[5] =3D htonl(lower_32_bits(req->pos)); + bp[4] =3D htonl(upper_32_bits(subreq->start + subreq->transferred)); + bp[5] =3D htonl(lower_32_bits(subreq->start + subreq->transferred)); bp[6] =3D 0; - bp[7] =3D htonl(lower_32_bits(req->len)); + bp[7] =3D htonl(lower_32_bits(subreq->len - subreq->transferred)); =20 call->fid =3D vp->fid; trace_afs_make_fs_call(call, &vp->fid); @@ -458,9 +457,9 @@ static void afs_fs_fetch_data64(struct afs_operation *o= p) */ void afs_fs_fetch_data(struct afs_operation *op) { + struct netfs_io_subrequest *subreq =3D op->fetch.subreq; struct afs_vnode_param *vp =3D &op->file[0]; struct afs_call *call; - struct afs_read *req =3D op->fetch.req; __be32 *bp; =20 if (test_bit(AFS_SERVER_FL_HAS_FS64, &op->server->flags)) @@ -472,16 +471,14 @@ void afs_fs_fetch_data(struct afs_operation *op) if (!call) return afs_op_nomem(op); =20 - 
req->call_debug_id =3D call->debug_id; - /* marshall the parameters */ bp =3D call->request; bp[0] =3D htonl(FSFETCHDATA); bp[1] =3D htonl(vp->fid.vid); bp[2] =3D htonl(vp->fid.vnode); bp[3] =3D htonl(vp->fid.unique); - bp[4] =3D htonl(lower_32_bits(req->pos)); - bp[5] =3D htonl(lower_32_bits(req->len)); + bp[4] =3D htonl(lower_32_bits(subreq->start + subreq->transferred)); + bp[5] =3D htonl(lower_32_bits(subreq->len + subreq->transferred)); =20 call->fid =3D vp->fid; trace_afs_make_fs_call(call, &vp->fid); diff --git a/fs/afs/inode.c b/fs/afs/inode.c index 6934cc30a4ca..0e3c43c40632 100644 --- a/fs/afs/inode.c +++ b/fs/afs/inode.c @@ -317,6 +317,8 @@ static void afs_apply_status(struct afs_operation *op, inode_set_ctime_to_ts(inode, t); inode_set_atime_to_ts(inode, t); } + if (op->ops =3D=3D &afs_fetch_data_operation) + op->fetch.subreq->rreq->i_size =3D status->size; } } =20 diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 7f170455cf25..39d2e29ed0e0 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -163,6 +163,7 @@ struct afs_call { spinlock_t state_lock; int error; /* error code */ u32 abort_code; /* Remote abort ID or 0 */ + unsigned long long remaining; /* How much is left to receive */ unsigned int max_lifespan; /* Maximum lifespan in secs to set if not 0 */ unsigned request_size; /* size of request data */ unsigned reply_max; /* maximum size of reply */ @@ -232,28 +233,6 @@ static inline struct key *afs_file_key(struct file *fi= le) return af->key; } =20 -/* - * Record of an outstanding read operation on a vnode. - */ -struct afs_read { - loff_t pos; /* Where to start reading */ - loff_t len; /* How much we're asking for */ - loff_t actual_len; /* How much we're actually getting */ - loff_t file_size; /* File size returned by server */ - struct key *key; /* The key to use to reissue the read */ - struct afs_vnode *vnode; /* The file being read into. 
*/ - struct netfs_io_subrequest *subreq; /* Fscache helper read request this b= elongs to */ - afs_dataversion_t data_version; /* Version number returned by server */ - refcount_t usage; - unsigned int call_debug_id; - unsigned int nr_pages; - int error; - void (*done)(struct afs_read *); - void (*cleanup)(struct afs_read *); - struct iov_iter *iter; /* Iterator representing the buffer */ - struct iov_iter def_iter; /* Default iterator */ -}; - /* * AFS superblock private data * - there's one superblock per volume @@ -911,7 +890,7 @@ struct afs_operation { bool new_negative; } rename; struct { - struct afs_read *req; + struct netfs_io_subrequest *subreq; } fetch; struct { afs_lock_type_t type; @@ -1118,21 +1097,13 @@ extern void afs_dynroot_depopulate(struct super_blo= ck *); extern const struct address_space_operations afs_file_aops; extern const struct inode_operations afs_file_inode_operations; extern const struct file_operations afs_file_operations; +extern const struct afs_operation_ops afs_fetch_data_operation; extern const struct netfs_request_ops afs_req_ops; =20 extern int afs_cache_wb_key(struct afs_vnode *, struct afs_file *); extern void afs_put_wb_key(struct afs_wb_key *); extern int afs_open(struct inode *, struct file *); extern int afs_release(struct inode *, struct file *); -extern int afs_fetch_data(struct afs_vnode *, struct afs_read *); -extern struct afs_read *afs_alloc_read(gfp_t); -extern void afs_put_read(struct afs_read *); - -static inline struct afs_read *afs_get_read(struct afs_read *req) -{ - refcount_inc(&req->usage); - return req; -} =20 /* * flock.c diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c index 368cf277d801..3718d852fabc 100644 --- a/fs/afs/yfsclient.c +++ b/fs/afs/yfsclient.c @@ -352,19 +352,19 @@ static int yfs_deliver_status_and_volsync(struct afs_= call *call) static int yfs_deliver_fs_fetch_data64(struct afs_call *call) { struct afs_operation *op =3D call->op; + struct netfs_io_subrequest *subreq =3D op->fetch.subreq; struct afs_vnode_param *vp =3D &op->file[0]; - struct afs_read *req =3D op->fetch.req; const __be32 *bp; size_t count_before; int ret; =20 _enter("{%u,%zu, %zu/%llu}", call->unmarshall, call->iov_len, iov_iter_count(call->iter), - req->actual_len); + call->remaining); =20 switch (call->unmarshall) { case 0: - req->actual_len =3D 0; + call->remaining =3D 0; afs_extract_to_tmp64(call); call->unmarshall++; fallthrough; @@ -379,42 +379,40 @@ static int yfs_deliver_fs_fetch_data64(struct afs_cal= l *call) if (ret < 0) return ret; =20 - req->actual_len =3D be64_to_cpu(call->tmp64); - _debug("DATA length: %llu", req->actual_len); + call->remaining =3D be64_to_cpu(call->tmp64); + _debug("DATA length: %llu", call->remaining); =20 - if (req->actual_len =3D=3D 0) + if (call->remaining =3D=3D 0) goto no_more_data; =20 - call->iter =3D req->iter; - call->iov_len =3D min(req->actual_len, req->len); + call->iter =3D &subreq->io_iter; + call->iov_len =3D min(call->remaining, subreq->len - subreq->transferred= ); call->unmarshall++; fallthrough; =20 /* extract the returned data */ case 2: count_before =3D call->iov_len; - _debug("extract data %zu/%llu", count_before, req->actual_len); + _debug("extract data %zu/%llu", count_before, call->remaining); =20 ret =3D afs_extract_data(call, true); - if (req->subreq) { - req->subreq->transferred +=3D count_before - call->iov_len; - netfs_read_subreq_progress(req->subreq); - } + subreq->transferred +=3D count_before - call->iov_len; + netfs_read_subreq_progress(subreq); if (ret < 0) return ret; =20 
 	call->iter = &call->def_iter;
-	if (req->actual_len <= req->len)
+	if (call->remaining)
 		goto no_more_data;
 
 	/* Discard any excess data the server gave us */
-	afs_extract_discard(call, req->actual_len - req->len);
+	afs_extract_discard(call, call->remaining);
 	call->unmarshall = 3;
 	fallthrough;
 
 case 3:
 	_debug("extract discard %zu/%llu",
-	       iov_iter_count(call->iter), req->actual_len - req->len);
+	       iov_iter_count(call->iter), call->remaining);
 
 	ret = afs_extract_data(call, true);
 	if (ret < 0)
@@ -439,8 +437,8 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
 	xdr_decode_YFSCallBack(&bp, call, &vp->scb);
 	xdr_decode_YFSVolSync(&bp, &op->volsync);
 
-	req->data_version = vp->scb.status.data_version;
-	req->file_size = vp->scb.status.size;
+	if (subreq->start + subreq->transferred >= vp->scb.status.size)
+		__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
 
 	call->unmarshall++;
 	fallthrough;
@@ -468,14 +466,15 @@ static const struct afs_call_type yfs_RXYFSFetchData64 = {
  */
 void yfs_fs_fetch_data(struct afs_operation *op)
 {
+	struct netfs_io_subrequest *subreq = op->fetch.subreq;
 	struct afs_vnode_param *vp = &op->file[0];
-	struct afs_read *req = op->fetch.req;
 	struct afs_call *call;
 	__be32 *bp;
 
-	_enter(",%x,{%llx:%llu},%llx,%llx",
+	_enter(",%x,{%llx:%llu},%llx,%zx",
 	       key_serial(op->key), vp->fid.vid, vp->fid.vnode,
-	       req->pos, req->len);
+	       subreq->start + subreq->transferred,
+	       subreq->len - subreq->transferred);
 
 	call = afs_alloc_flat_call(op->net, &yfs_RXYFSFetchData64,
 				   sizeof(__be32) * 2 +
@@ -487,15 +486,13 @@ void yfs_fs_fetch_data(struct afs_operation *op)
 	if (!call)
 		return afs_op_nomem(op);
 
-	req->call_debug_id = call->debug_id;
-
 	/* marshall the parameters */
 	bp = call->request;
 	bp = xdr_encode_u32(bp, YFSFETCHDATA64);
 	bp = xdr_encode_u32(bp, 0); /* RPC flags */
 	bp = xdr_encode_YFSFid(bp, &vp->fid);
-	bp = xdr_encode_u64(bp, req->pos);
-	bp = xdr_encode_u64(bp, req->len);
+	bp = xdr_encode_u64(bp, subreq->start + subreq->transferred);
+	bp = xdr_encode_u64(bp, subreq->len - subreq->transferred);
 	yfs_check_req(call, bp);
 
 	call->fid = vp->fid;
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 8c1922c0cb42..16760695e667 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -118,7 +118,7 @@ static int __init netfs_init(void)
 		goto error_reqpool;
 
 	netfs_subrequest_slab = kmem_cache_create("netfs_subrequest",
-					sizeof(struct netfs_io_subrequest), 0,
+					sizeof(struct netfs_io_subrequest) + 16, 0,
					SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT, NULL);
 	if (!netfs_subrequest_slab)

From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 26/33] afs: Fix cleanup of immediately failed async calls
Date: Fri, 8 Nov 2024 17:32:27 +0000
Message-ID: <20241108173236.1382366-27-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

If we manage to begin an async call, but fail to transmit any data on it
due to a signal, we then abort it, which causes a race
between the notification of call completion from rxrpc and our attempt to
cancel that notification. This race can result in calls not getting
cleaned up, instead lingering in /proc/net/rxrpc/calls as aborted with
code 6.

The notification will be necessary, however, for async FetchData to
terminate the netfs subrequest; and since we get a notification from
rxrpc upon completion of a call (aborted or otherwise), we can just
leave the cleanup to that notification.

Fix this by making the "error_do_abort:" case of afs_make_call() abort
the call and then abandon it to the notification handler.

Fixes: 34fa47612bfe ("afs: Fix race in async call refcounting")
Reported-by: Marc Dionne
Signed-off-by: David Howells
cc: linux-afs@lists.infradead.org
---
 fs/afs/internal.h          |  9 +++++++++
 fs/afs/rxrpc.c             | 12 +++++++++---
 include/trace/events/afs.h |  2 ++
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 39d2e29ed0e0..96fc466efd10 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -1336,6 +1336,15 @@ extern void afs_send_simple_reply(struct afs_call *, const void *, size_t);
 extern int afs_extract_data(struct afs_call *, bool);
 extern int afs_protocol_error(struct afs_call *, enum afs_eproto_cause);
 
+static inline void afs_see_call(struct afs_call *call, enum afs_call_trace why)
+{
+	int r = refcount_read(&call->ref);
+
+	trace_afs_call(call->debug_id, why, r,
+		       atomic_read(&call->net->nr_outstanding_calls),
+		       __builtin_return_address(0));
+}
+
 static inline void afs_make_op_call(struct afs_operation *op, struct afs_call *call,
 				    gfp_t gfp)
 {
diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
index 9f2a3bb56ec6..a122c6366ce1 100644
--- a/fs/afs/rxrpc.c
+++ b/fs/afs/rxrpc.c
@@ -430,11 +430,16 @@ void afs_make_call(struct afs_call *call, gfp_t gfp)
 	return;
 
 error_do_abort:
-	if (ret != -ECONNABORTED) {
+	if (ret != -ECONNABORTED)
 		rxrpc_kernel_abort_call(call->net->socket, rxcall,
 					RX_USER_ABORT, ret,
 					afs_abort_send_data_error);
-	} else {
+	if (call->async) {
+		afs_see_call(call, afs_call_trace_async_abort);
+		return;
+	}
+
+	if (ret == -ECONNABORTED) {
 		len = 0;
 		iov_iter_kvec(&msg.msg_iter, ITER_DEST, NULL, 0, 0);
 		rxrpc_kernel_recv_data(call->net->socket, rxcall,
@@ -445,6 +450,8 @@ void afs_make_call(struct afs_call *call, gfp_t gfp)
 	call->error = ret;
 	trace_afs_call_done(call);
 error_kill_call:
+	if (call->async)
+		afs_see_call(call, afs_call_trace_async_kill);
 	if (call->type->done)
 		call->type->done(call);
 
@@ -602,7 +609,6 @@ static void afs_deliver_to_call(struct afs_call *call)
 	abort_code = 0;
 call_complete:
 	afs_set_call_complete(call, ret, remote_abort);
-	state = AFS_CALL_COMPLETE;
 	goto done;
 }
 
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index 49a749672e38..cdb5f2af7799 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -118,6 +118,8 @@ enum yfs_cm_operation {
  */
 #define afs_call_traces \
 	EM(afs_call_trace_alloc,		"ALLOC") \
+	EM(afs_call_trace_async_abort,		"ASYAB") \
+	EM(afs_call_trace_async_kill,		"ASYKL") \
 	EM(afs_call_trace_free,			"FREE ") \
 	EM(afs_call_trace_get,			"GET  ") \
 	EM(afs_call_trace_put,			"PUT  ") \

From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 27/33] afs: Make {Y,}FS.FetchData an asynchronous operation
Date: Fri, 8 Nov 2024 17:32:28 +0000
Message-ID: <20241108173236.1382366-28-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Make FS.FetchData and YFS.FetchData asynchronous operations, in the sense
that the request is queued in AF_RXRPC and we then return to the caller
rather than waiting. Processing of the returned packets is then done
inline if the operation was initiated by a synchronous VFS/VM call
(readdir, read_folio, sync DIO, prep for write) or offloaded to a
workqueue if it was initiated by an asynchronous VM call (eg. readahead,
async DIO).

This reduces the chain of workqueues invoking workqueues and cuts out some
of the overhead, driving rxrpc data extraction and netfslib read
collection from a thread that's going to block to completion anyway if
possible.

The ->done() call op is also split, with ->immediate_cancel() handling
cancellation on failure to begin the call and ->done() handling the rest.
This means that the AFS async FetchData code doesn't try to terminate the
netfs subrequest twice.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/file.c         | 126 +++++++++++++++++++++++++++++++++++-----
 fs/afs/fs_operation.c |   2 +-
 fs/afs/fsclient.c     |   9 ++-
 fs/afs/internal.h     |  24 ++++++++
 fs/afs/main.c         |   2 +-
 fs/afs/rxrpc.c        |  25 ++-------
 fs/afs/vlclient.c     |   1 +
 fs/afs/write.c        |  12 ++++
 fs/afs/yfsclient.c    |   6 +-
 9 files changed, 170 insertions(+), 37 deletions(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index b996f4419c0c..c296efebb491 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -225,26 +225,111 @@ static void afs_fetch_data_aborted(struct afs_operation *op)
 	afs_fetch_data_notify(op);
 }
 
-static void afs_fetch_data_put(struct afs_operation *op)
-{
-	op->fetch.subreq->error = afs_op_error(op);
-}
-
 const struct afs_operation_ops afs_fetch_data_operation = {
 	.issue_afs_rpc	= afs_fs_fetch_data,
 	.issue_yfs_rpc	= yfs_fs_fetch_data,
 	.success	= afs_fetch_data_success,
 	.aborted	= afs_fetch_data_aborted,
 	.failed		= afs_fetch_data_notify,
-	.put		= afs_fetch_data_put,
 };
 
+static void afs_issue_read_call(struct afs_operation *op)
+{
+	op->call_responded = false;
+	op->call_error = 0;
+	op->call_abort_code = 0;
+	if (test_bit(AFS_SERVER_FL_IS_YFS, &op->server->flags))
+		yfs_fs_fetch_data(op);
+	else
+		afs_fs_fetch_data(op);
+}
+
+static void afs_end_read(struct afs_operation *op)
+{
+	if (op->call_responded && op->server)
+		set_bit(AFS_SERVER_FL_RESPONDING, &op->server->flags);
+
+	if (!afs_op_error(op))
+		afs_fetch_data_success(op);
+	else if (op->cumul_error.aborted)
+		afs_fetch_data_aborted(op);
+	else
+		afs_fetch_data_notify(op);
+
+	afs_end_vnode_operation(op);
+	afs_put_operation(op);
+}
+
+/*
+ * Perform I/O processing on an asynchronous call.  The work item carries a ref
+ * to the call struct that we either need to release or to pass on.
+ */ +static void afs_read_receive(struct afs_call *call) +{ + struct afs_operation *op =3D call->op; + enum afs_call_state state; + + _enter(""); + + state =3D READ_ONCE(call->state); + if (state =3D=3D AFS_CALL_COMPLETE) + return; + + while (state < AFS_CALL_COMPLETE && READ_ONCE(call->need_attention)) { + WRITE_ONCE(call->need_attention, false); + afs_deliver_to_call(call); + state =3D READ_ONCE(call->state); + } + + if (state < AFS_CALL_COMPLETE) { + netfs_read_subreq_progress(op->fetch.subreq); + if (rxrpc_kernel_check_life(call->net->socket, call->rxcall)) + return; + /* rxrpc terminated the call. */ + afs_set_call_complete(call, call->error, call->abort_code); + } + + op->call_abort_code =3D call->abort_code; + op->call_error =3D call->error; + op->call_responded =3D call->responded; + op->call =3D NULL; + call->op =3D NULL; + afs_put_call(call); + + /* If the call failed, then we need to crank the server rotation + * handle and try the next. + */ + if (afs_select_fileserver(op)) { + afs_issue_read_call(op); + return; + } + + afs_end_read(op); +} + +void afs_fetch_data_async_rx(struct work_struct *work) +{ + struct afs_call *call =3D container_of(work, struct afs_call, async_work); + + afs_read_receive(call); + afs_put_call(call); +} + +void afs_fetch_data_immediate_cancel(struct afs_call *call) +{ + if (call->async) { + afs_get_call(call, afs_call_trace_wake); + if (!queue_work(afs_async_calls, &call->async_work)) + afs_deferred_put_call(call); + flush_work(&call->async_work); + } +} + /* * Fetch file data from the volume. */ -static void afs_read_worker(struct work_struct *work) +static void afs_issue_read(struct netfs_io_subrequest *subreq) { - struct netfs_io_subrequest *subreq =3D container_of(work, struct netfs_io= _subrequest, work); struct afs_operation *op; struct afs_vnode *vnode =3D AFS_FS_I(subreq->rreq->inode); struct key *key =3D subreq->rreq->netfs_priv; @@ -269,13 +354,26 @@ static void afs_read_worker(struct work_struct *work) op->ops =3D &afs_fetch_data_operation; =20 trace_netfs_sreq(subreq, netfs_sreq_trace_submit); - afs_do_sync_operation(op); -} =20 -static void afs_issue_read(struct netfs_io_subrequest *subreq) -{ - INIT_WORK(&subreq->work, afs_read_worker); - queue_work(system_long_wq, &subreq->work); + if (subreq->rreq->origin =3D=3D NETFS_READAHEAD || + subreq->rreq->iocb) { + op->flags |=3D AFS_OPERATION_ASYNC; + + if (!afs_begin_vnode_operation(op)) { + subreq->error =3D afs_put_operation(op); + netfs_read_subreq_terminated(subreq); + return; + } + + if (!afs_select_fileserver(op)) { + afs_end_read(op); + return; + } + + afs_issue_read_call(op); + } else { + afs_do_sync_operation(op); + } } =20 static int afs_init_request(struct netfs_io_request *rreq, struct file *fi= le) diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c index 8488ff8183fa..0b1338d65ae6 100644 --- a/fs/afs/fs_operation.c +++ b/fs/afs/fs_operation.c @@ -256,7 +256,7 @@ bool afs_begin_vnode_operation(struct afs_operation *op) /* * Tidy up a filesystem cursor and unlock the vnode. 
*/ -static void afs_end_vnode_operation(struct afs_operation *op) +void afs_end_vnode_operation(struct afs_operation *op) { _enter(""); =20 diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c index d9d224c95454..1d9ecd5418d8 100644 --- a/fs/afs/fsclient.c +++ b/fs/afs/fsclient.c @@ -352,7 +352,6 @@ static int afs_deliver_fs_fetch_data(struct afs_call *c= all) ret =3D afs_extract_data(call, true); subreq->transferred +=3D count_before - call->iov_len; call->remaining -=3D count_before - call->iov_len; - netfs_read_subreq_progress(subreq); if (ret < 0) return ret; =20 @@ -409,14 +408,18 @@ static int afs_deliver_fs_fetch_data(struct afs_call = *call) static const struct afs_call_type afs_RXFSFetchData =3D { .name =3D "FS.FetchData", .op =3D afs_FS_FetchData, + .async_rx =3D afs_fetch_data_async_rx, .deliver =3D afs_deliver_fs_fetch_data, + .immediate_cancel =3D afs_fetch_data_immediate_cancel, .destructor =3D afs_flat_call_destructor, }; =20 static const struct afs_call_type afs_RXFSFetchData64 =3D { .name =3D "FS.FetchData64", .op =3D afs_FS_FetchData64, + .async_rx =3D afs_fetch_data_async_rx, .deliver =3D afs_deliver_fs_fetch_data, + .immediate_cancel =3D afs_fetch_data_immediate_cancel, .destructor =3D afs_flat_call_destructor, }; =20 @@ -436,6 +439,9 @@ static void afs_fs_fetch_data64(struct afs_operation *o= p) if (!call) return afs_op_nomem(op); =20 + if (op->flags & AFS_OPERATION_ASYNC) + call->async =3D true; + /* marshall the parameters */ bp =3D call->request; bp[0] =3D htonl(FSFETCHDATA64); @@ -1730,6 +1736,7 @@ static const struct afs_call_type afs_RXFSGetCapabili= ties =3D { .op =3D afs_FS_GetCapabilities, .deliver =3D afs_deliver_fs_get_capabilities, .done =3D afs_fileserver_probe_result, + .immediate_cancel =3D afs_fileserver_probe_result, .destructor =3D afs_fs_get_capabilities_destructor, }; =20 diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 96fc466efd10..cd2c4f85117d 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -202,11 +202,17 @@ struct afs_call_type { /* clean up a call */ void (*destructor)(struct afs_call *call); =20 + /* Async receive processing function */ + void (*async_rx)(struct work_struct *work); + /* Work function */ void (*work)(struct work_struct *work); =20 /* Call done function (gets called immediately on success or failure) */ void (*done)(struct afs_call *call); + + /* Handle a call being immediately cancelled. 
*/ + void (*immediate_cancel)(struct afs_call *call); }; =20 /* @@ -942,6 +948,7 @@ struct afs_operation { #define AFS_OPERATION_TRIED_ALL 0x0400 /* Set if we've tried all the file= servers */ #define AFS_OPERATION_RETRY_SERVER 0x0800 /* Set if we should retry the cu= rrent server */ #define AFS_OPERATION_DIR_CONFLICT 0x1000 /* Set if we detected a 3rd-part= y dir change */ +#define AFS_OPERATION_ASYNC 0x2000 /* Set if should run asynchronously */ }; =20 /* @@ -1104,6 +1111,8 @@ extern int afs_cache_wb_key(struct afs_vnode *, struc= t afs_file *); extern void afs_put_wb_key(struct afs_wb_key *); extern int afs_open(struct inode *, struct file *); extern int afs_release(struct inode *, struct file *); +void afs_fetch_data_async_rx(struct work_struct *work); +void afs_fetch_data_immediate_cancel(struct afs_call *call); =20 /* * flock.c @@ -1155,6 +1164,7 @@ extern void afs_fs_store_acl(struct afs_operation *); extern struct afs_operation *afs_alloc_operation(struct key *, struct afs_= volume *); extern int afs_put_operation(struct afs_operation *); extern bool afs_begin_vnode_operation(struct afs_operation *); +extern void afs_end_vnode_operation(struct afs_operation *op); extern void afs_wait_for_operation(struct afs_operation *); extern int afs_do_sync_operation(struct afs_operation *); =20 @@ -1326,6 +1336,7 @@ extern void afs_charge_preallocation(struct work_stru= ct *); extern void afs_put_call(struct afs_call *); void afs_deferred_put_call(struct afs_call *call); void afs_make_call(struct afs_call *call, gfp_t gfp); +void afs_deliver_to_call(struct afs_call *call); void afs_wait_for_call_to_complete(struct afs_call *call); extern struct afs_call *afs_alloc_flat_call(struct afs_net *, const struct afs_call_type *, @@ -1336,6 +1347,19 @@ extern void afs_send_simple_reply(struct afs_call *,= const void *, size_t); extern int afs_extract_data(struct afs_call *, bool); extern int afs_protocol_error(struct afs_call *, enum afs_eproto_cause); =20 +static inline struct afs_call *afs_get_call(struct afs_call *call, + enum afs_call_trace why) +{ + int r; + + __refcount_inc(&call->ref, &r); + + trace_afs_call(call->debug_id, why, r + 1, + atomic_read(&call->net->nr_outstanding_calls), + __builtin_return_address(0)); + return call; +} + static inline void afs_see_call(struct afs_call *call, enum afs_call_trace= why) { int r =3D refcount_read(&call->ref); diff --git a/fs/afs/main.c b/fs/afs/main.c index a14f6013e316..1ae0067f772d 100644 --- a/fs/afs/main.c +++ b/fs/afs/main.c @@ -177,7 +177,7 @@ static int __init afs_init(void) afs_wq =3D alloc_workqueue("afs", 0, 0); if (!afs_wq) goto error_afs_wq; - afs_async_calls =3D alloc_workqueue("kafsd", WQ_MEM_RECLAIM, 0); + afs_async_calls =3D alloc_workqueue("kafsd", WQ_MEM_RECLAIM | WQ_UNBOUND,= 0); if (!afs_async_calls) goto error_async; afs_lock_manager =3D alloc_workqueue("kafs_lockd", WQ_MEM_RECLAIM, 0); diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c index a122c6366ce1..886416ea1d96 100644 --- a/fs/afs/rxrpc.c +++ b/fs/afs/rxrpc.c @@ -149,7 +149,8 @@ static struct afs_call *afs_alloc_call(struct afs_net *= net, call->net =3D net; call->debug_id =3D atomic_inc_return(&rxrpc_debug_id); refcount_set(&call->ref, 1); - INIT_WORK(&call->async_work, afs_process_async_call); + INIT_WORK(&call->async_work, type->async_rx ?: afs_process_async_call); + INIT_WORK(&call->work, call->type->work); INIT_WORK(&call->free_work, afs_deferred_free_worker); init_waitqueue_head(&call->waitq); spin_lock_init(&call->state_lock); @@ -235,27 +236,12 @@ void 
afs_deferred_put_call(struct afs_call *call) schedule_work(&call->free_work); } =20 -static struct afs_call *afs_get_call(struct afs_call *call, - enum afs_call_trace why) -{ - int r; - - __refcount_inc(&call->ref, &r); - - trace_afs_call(call->debug_id, why, r + 1, - atomic_read(&call->net->nr_outstanding_calls), - __builtin_return_address(0)); - return call; -} - /* * Queue the call for actual work. */ static void afs_queue_call_work(struct afs_call *call) { if (call->type->work) { - INIT_WORK(&call->work, call->type->work); - afs_get_call(call, afs_call_trace_work); if (!queue_work(afs_wq, &call->work)) afs_put_call(call); @@ -452,8 +438,8 @@ void afs_make_call(struct afs_call *call, gfp_t gfp) error_kill_call: if (call->async) afs_see_call(call, afs_call_trace_async_kill); - if (call->type->done) - call->type->done(call); + if (call->type->immediate_cancel) + call->type->immediate_cancel(call); =20 /* We need to dispose of the extra ref we grabbed for an async call. * The call, however, might be queued on afs_async_calls and we need to @@ -508,7 +494,7 @@ static void afs_log_error(struct afs_call *call, s32 re= mote_abort) /* * deliver messages to a call */ -static void afs_deliver_to_call(struct afs_call *call) +void afs_deliver_to_call(struct afs_call *call) { enum afs_call_state state; size_t len; @@ -809,6 +795,7 @@ static int afs_deliver_cm_op_id(struct afs_call *call) return -ENOTSUPP; =20 trace_afs_cb_call(call); + call->work.func =3D call->type->work; =20 /* pass responsibility for the remainer of this message off to the * cache manager op */ diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c index cac75f89b64a..adc617a82a86 100644 --- a/fs/afs/vlclient.c +++ b/fs/afs/vlclient.c @@ -370,6 +370,7 @@ static const struct afs_call_type afs_RXVLGetCapabiliti= es =3D { .name =3D "VL.GetCapabilities", .op =3D afs_VL_GetCapabilities, .deliver =3D afs_deliver_vl_get_capabilities, + .immediate_cancel =3D afs_vlserver_probe_result, .done =3D afs_vlserver_probe_result, .destructor =3D afs_destroy_vl_get_capabilities, }; diff --git a/fs/afs/write.c b/fs/afs/write.c index 17d188aaf101..e87b55792aa8 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -193,6 +193,18 @@ void afs_retry_request(struct netfs_io_request *wreq, = struct netfs_io_stream *st list_first_entry(&stream->subrequests, struct netfs_io_subrequest, rreq_link); =20 + switch (wreq->origin) { + case NETFS_READAHEAD: + case NETFS_READPAGE: + case NETFS_READ_GAPS: + case NETFS_READ_SINGLE: + case NETFS_READ_FOR_WRITE: + case NETFS_DIO_READ: + return; + default: + break; + } + switch (subreq->error) { case -EACCES: case -EPERM: diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c index 3718d852fabc..f57c089f26ee 100644 --- a/fs/afs/yfsclient.c +++ b/fs/afs/yfsclient.c @@ -397,7 +397,6 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call = *call) =20 ret =3D afs_extract_data(call, true); subreq->transferred +=3D count_before - call->iov_len; - netfs_read_subreq_progress(subreq); if (ret < 0) return ret; =20 @@ -457,7 +456,9 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call = *call) static const struct afs_call_type yfs_RXYFSFetchData64 =3D { .name =3D "YFS.FetchData64", .op =3D yfs_FS_FetchData64, + .async_rx =3D afs_fetch_data_async_rx, .deliver =3D yfs_deliver_fs_fetch_data64, + .immediate_cancel =3D afs_fetch_data_immediate_cancel, .destructor =3D afs_flat_call_destructor, }; =20 @@ -486,6 +487,9 @@ void yfs_fs_fetch_data(struct afs_operation *op) if (!call) return afs_op_nomem(op); =20 + if (op->flags & 
AFS_OPERATION_ASYNC) + call->async =3D true; + /* marshall the parameters */ bp =3D call->request; bp =3D xdr_encode_u32(bp, YFSFETCHDATA64); From nobody Sat Nov 23 23:18:39 2024 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 917B522C75E for ; Fri, 8 Nov 2024 17:36:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731087384; cv=none; b=n/T83Skczuc2AipSw9syoACB2216wBd3727a4QmJefCZCg5SGpw7j+8tLSAs0+hwdmaPtgf6nnPnWo5/Ol2eWGAqkvHuPWcvqHDrVBrdyp4fbjrAN/xuC4GDZGtVFLPsOfJAd2GnrQYkksmkrYLuDNvm88vWqGT4JuM+ZnjjZ/Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731087384; c=relaxed/simple; bh=NqqUtcrr9705MOIsb6U7zbeojyQAbrAFnWz3Fu8mXVM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Bnlov9TkozpbkFNgaKXoWSGFQq8PBmRaxa5zDF7C4iAe3Eh9J4Cn2FDdt3alwMWOzacDGAm1QJNlg8h//LYuyL3lHGEe47wAyrhN/WHZqdJeZNGNQS7muHLCL+RNgcVfo5wolX1wj+gE8qUpSqYlLN61AOx7ackwatRXFpvUhak= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=FOxyrkEj; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="FOxyrkEj" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1731087378; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=0pxaLJUSCtKSncckc/jDUY2DSkQUPZU0ohB47Xxl2xk=; b=FOxyrkEjnVQxyYdBAO2LcgHnGQirSj8m9HsjY9Hs9S9ii/6p8sZX2dPxZ7wY4Mdn2Sy9zS 1p9LIy7QsrOvpQYWUCqI5vij0egNV1jXhnoHJQnwJOeqJ3UDpYvFhuHrNTM7PdaRnqxiRU o6h1DcEOSOPDIasMrnGox08iElKRHes= Received: from mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-290-VlmEWM-kOsmpd9dsiY2Geg-1; Fri, 08 Nov 2024 12:36:15 -0500 X-MC-Unique: VlmEWM-kOsmpd9dsiY2Geg-1 X-Mimecast-MFC-AGG-ID: VlmEWM-kOsmpd9dsiY2Geg Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 8D7FE1955F34; Fri, 8 Nov 2024 17:36:08 +0000 (UTC) Received: from warthog.procyon.org.uk.com (unknown [10.42.28.231]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id A7C09300019F; Fri, 8 Nov 2024 17:36:02 +0000 (UTC) From: David Howells To: Christian Brauner , Steve French , Matthew Wilcox Cc: David Howells , Jeff Layton , Gao Xiang , 
Dominique Martinet , Marc Dionne , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Eric Van Hensbergen , Ilya Dryomov , netfs@lists.linux.dev, linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v4 28/33] netfs: Change the read result collector to only use one work item Date: Fri, 8 Nov 2024 17:32:29 +0000 Message-ID: <20241108173236.1382366-29-dhowells@redhat.com> In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com> References: <20241108173236.1382366-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Change the way netfslib collects read results to do all the collection for a particular read request using a single work item that walks along the subrequest queue as subrequests make progress or complete, unlocking folios progressively rather than unlocking them in parallel as individual subrequests complete. The code is remodelled to be more like the write-side code, though only using a single stream. This makes it more directly comparable and thus easier to duplicate fixes between the two sides. This has a number of advantages: (1) It's simpler. There doesn't need to be a complex donation mechanism to handle mismatches between the size and alignment of subrequests and folios. The collector unlocks folios as the subrequests covering each folio complete. (2) It should cause less scheduler overhead: when a read gets split up into a lot of subrequests, there's a single work item in play rather than one work item per subrequest unlocking pages in parallel. Whilst the parallelism is nice in theory, in practice the vast majority of loads are sequential reads of the whole file, so committing a bunch of threads to unlocking folios out of order doesn't help in those cases. (3) It should make it easier to implement content decryption. A folio cannot be decrypted until all the requests that contribute to it have completed - and, again, most loads are sequential and so, most of the time, we want to begin decryption sequentially (though it's great if the decryption can happen in parallel). There is a disadvantage in that we're losing the ability to decrypt and unlock things on an as-things-arrive basis, which may affect some applications.
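
For illustration, here is a minimal, self-contained sketch of the single-collector pattern described above. It is not the netfslib code: struct sub, struct req and unlock_folios_up_to() are hypothetical simplifications standing in for netfs_io_subrequest, netfs_io_request and the folio-unlocking logic.

/* Sketch only: one collector walks the subrequest queue front-to-back,
 * advancing the collection point and stopping at the first subrequest
 * that is still in flight.  All names here are hypothetical.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct sub {
	struct sub *next;
	unsigned long long start;	/* file position this subrequest covers */
	size_t len;			/* length of the span */
	size_t transferred;		/* bytes transferred so far */
	bool in_progress;		/* still undergoing I/O */
};

struct req {
	struct sub *front;		/* head of the subrequest queue */
	unsigned long long collected_to; /* everything below this is finished */
};

/* Unlock and mark uptodate all folios lying wholly below 'pos' (stub). */
static void unlock_folios_up_to(struct req *rreq, unsigned long long pos)
{
	(void)rreq;
	(void)pos;
}

/* One collection pass, run from a single work item whenever any
 * subrequest reports progress or completion. */
static void collect(struct req *rreq)
{
	struct sub *front;

	while ((front = rreq->front) != NULL) {
		/* Progress may be partial: advance the collection point by
		 * however much the front subrequest has transferred. */
		rreq->collected_to = front->start + front->transferred;
		unlock_folios_up_to(rreq, rreq->collected_to);

		/* Stop at the first subrequest still in flight; nothing
		 * behind it may be collected out of order. */
		if (front->in_progress)
			break;

		/* The front is complete; drop it and look at the next. */
		rreq->front = front->next;
	}
}

int main(void)
{
	struct sub s2 = { NULL, 4096, 4096, 0, true };	/* still in flight */
	struct sub s1 = { &s2, 0, 4096, 4096, false };	/* complete */
	struct req rreq = { &s1, 0 };

	collect(&rreq);
	printf("collected_to=%llu\n", rreq.collected_to); /* 4096: stops at s2 */
	return 0;
}

The property the sketch captures is that folios are only ever unlocked in queue order, by one thread, which is what removes the need for a donation mechanism and makes sequential decryption straightforward.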
Signed-off-by: David Howells cc: Jeff Layton cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org --- fs/9p/vfs_addr.c | 5 +- fs/afs/dir.c | 8 +- fs/netfs/buffered_read.c | 138 ++++---- fs/netfs/direct_read.c | 62 ++-- fs/netfs/internal.h | 16 +- fs/netfs/main.c | 2 +- fs/netfs/objects.c | 34 +- fs/netfs/read_collect.c | 663 ++++++++++++++++++++--------------- fs/netfs/read_pgpriv2.c | 3 +- fs/netfs/read_retry.c | 207 ++++++----- fs/netfs/read_single.c | 37 +- fs/netfs/write_retry.c | 14 +- fs/smb/client/cifssmb.c | 2 + fs/smb/client/smb2pdu.c | 4 +- include/linux/netfs.h | 17 +- include/trace/events/netfs.h | 76 +--- 16 files changed, 670 insertions(+), 618 deletions(-) diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index e4144e1a10a9..b1c29fa08e82 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -79,9 +79,10 @@ static void v9fs_issue_read(struct netfs_io_subrequest *= subreq) __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); if (pos + total >=3D i_size_read(rreq->inode)) __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); - - if (!err) + if (!err && total) { + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); subreq->transferred +=3D total; + } =20 subreq->error =3D err; netfs_read_subreq_terminated(subreq); diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 86d3955a78cd..36b80449ef0e 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -325,8 +325,10 @@ static ssize_t afs_read_dir(struct afs_vnode *dvnode, = struct file *file) * haven't read it yet. */ if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) && - test_bit(AFS_VNODE_DIR_READ, &dvnode->flags)) + test_bit(AFS_VNODE_DIR_READ, &dvnode->flags)) { + ret =3D i_size; goto valid; + } =20 up_read(&dvnode->validate_lock); if (down_write_killable(&dvnode->validate_lock) < 0) @@ -346,11 +348,13 @@ static ssize_t afs_read_dir(struct afs_vnode *dvnode,= struct file *file) =20 set_bit(AFS_VNODE_DIR_VALID, &dvnode->flags); set_bit(AFS_VNODE_DIR_READ, &dvnode->flags); + } else { + ret =3D i_size; } =20 downgrade_write(&dvnode->validate_lock); valid: - return i_size; + return ret; =20 error_unlock: up_write(&dvnode->validate_lock); diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index 61287f6f6706..7036e9f12b07 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -121,12 +121,6 @@ static ssize_t netfs_prepare_read_iterator(struct netf= s_io_subrequest *subreq) =20 subreq->io_iter =3D rreq->buffer.iter; =20 - if (iov_iter_is_folioq(&subreq->io_iter)) { - subreq->curr_folioq =3D (struct folio_queue *)subreq->io_iter.folioq; - subreq->curr_folioq_slot =3D subreq->io_iter.folioq_slot; - subreq->curr_folio_order =3D subreq->curr_folioq->orders[subreq->curr_fo= lioq_slot]; - } - iov_iter_truncate(&subreq->io_iter, subreq->len); rolling_buffer_advance(&rreq->buffer, subreq->len); return subreq->len; @@ -147,19 +141,6 @@ static enum netfs_io_source netfs_cache_prepare_read(s= truct netfs_io_request *rr =20 } =20 -void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error,= bool was_async) -{ - struct netfs_io_subrequest *subreq =3D priv; - - if (transferred_or_error > 0) { - subreq->transferred +=3D transferred_or_error; - subreq->error =3D 0; - } else { - subreq->error =3D transferred_or_error; - } - schedule_work(&subreq->work); -} - /* * Issue a read against the cache. * - Eats the caller's ref on subreq. 
@@ -174,6 +155,47 @@ static void netfs_read_cache_to_pagecache(struct netfs= _io_request *rreq, netfs_cache_read_terminated, subreq); } =20 +static void netfs_issue_read(struct netfs_io_request *rreq, + struct netfs_io_subrequest *subreq) +{ + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; + + __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); + + /* We add to the end of the list whilst the collector may be walking + * the list. The collector only goes nextwards and uses the lock to + * remove entries off of the front. + */ + spin_lock(&rreq->lock); + list_add_tail(&subreq->rreq_link, &stream->subrequests); + if (list_is_first(&subreq->rreq_link, &stream->subrequests)) { + stream->front =3D subreq; + if (!stream->active) { + stream->collected_to =3D stream->front->start; + /* Store list pointers before active flag */ + smp_store_release(&stream->active, true); + } + } + + spin_unlock(&rreq->lock); + + switch (subreq->source) { + case NETFS_DOWNLOAD_FROM_SERVER: + rreq->netfs_ops->issue_read(subreq); + break; + case NETFS_READ_FROM_CACHE: + netfs_read_cache_to_pagecache(rreq, subreq); + break; + default: + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); + subreq->error =3D 0; + iov_iter_zero(subreq->len, &subreq->io_iter); + subreq->transferred =3D subreq->len; + netfs_read_subreq_terminated(subreq); + break; + } +} + /* * Perform a read to the pagecache from a series of sources of different t= ypes, * slicing up the region to be read according to available cache blocks and @@ -186,8 +208,6 @@ static void netfs_read_to_pagecache(struct netfs_io_req= uest *rreq) ssize_t size =3D rreq->len; int ret =3D 0; =20 - atomic_inc(&rreq->nr_outstanding); - do { struct netfs_io_subrequest *subreq; enum netfs_io_source source =3D NETFS_DOWNLOAD_FROM_SERVER; @@ -202,14 +222,6 @@ static void netfs_read_to_pagecache(struct netfs_io_re= quest *rreq) subreq->start =3D start; subreq->len =3D size; =20 - atomic_inc(&rreq->nr_outstanding); - spin_lock(&rreq->lock); - list_add_tail(&subreq->rreq_link, &rreq->subrequests); - subreq->prev_donated =3D rreq->prev_donated; - rreq->prev_donated =3D 0; - trace_netfs_sreq(subreq, netfs_sreq_trace_added); - spin_unlock(&rreq->lock); - source =3D netfs_cache_prepare_read(rreq, subreq, rreq->i_size); subreq->source =3D source; if (source =3D=3D NETFS_DOWNLOAD_FROM_SERVER) { @@ -238,24 +250,13 @@ static void netfs_read_to_pagecache(struct netfs_io_r= equest *rreq) if (rreq->netfs_ops->prepare_read) { ret =3D rreq->netfs_ops->prepare_read(subreq); if (ret < 0) { - atomic_dec(&rreq->nr_outstanding); netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); break; } trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); } - - slice =3D netfs_prepare_read_iterator(subreq); - if (slice < 0) { - atomic_dec(&rreq->nr_outstanding); - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); - ret =3D slice; - break; - } - - rreq->netfs_ops->issue_read(subreq); - goto done; + goto issue; } =20 fill_with_zeroes: @@ -263,55 +264,46 @@ static void netfs_read_to_pagecache(struct netfs_io_r= equest *rreq) subreq->source =3D NETFS_FILL_WITH_ZEROES; trace_netfs_sreq(subreq, netfs_sreq_trace_submit); netfs_stat(&netfs_n_rh_zero); - slice =3D netfs_prepare_read_iterator(subreq); - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); - subreq->error =3D 0; - netfs_read_subreq_terminated(subreq); - goto done; + goto issue; } =20 if (source =3D=3D NETFS_READ_FROM_CACHE) { trace_netfs_sreq(subreq, netfs_sreq_trace_submit); - slice =3D netfs_prepare_read_iterator(subreq); - 
netfs_read_cache_to_pagecache(rreq, subreq); - goto done; + goto issue; } =20 pr_err("Unexpected read source %u\n", source); WARN_ON_ONCE(1); break; =20 - done: + issue: + slice =3D netfs_prepare_read_iterator(subreq); + if (slice < 0) { + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); + ret =3D slice; + break; + } size -=3D slice; start +=3D slice; + if (size <=3D 0) { + smp_wmb(); /* Write lists before ALL_QUEUED. */ + set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags); + } + + netfs_issue_read(rreq, subreq); cond_resched(); } while (size > 0); =20 - if (atomic_dec_and_test(&rreq->nr_outstanding)) - netfs_rreq_terminated(rreq); + if (unlikely(size > 0)) { + smp_wmb(); /* Write lists before ALL_QUEUED. */ + set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags); + netfs_wake_read_collector(rreq); + } =20 /* Defer error return as we may need to wait for outstanding I/O. */ cmpxchg(&rreq->error, 0, ret); } =20 -/* - * Wait for the read operation to complete, successfully or otherwise. - */ -static int netfs_wait_for_read(struct netfs_io_request *rreq) -{ - int ret; - - trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); - wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, TASK_UNINTERRUPTIBLE); - ret =3D rreq->error; - if (ret =3D=3D 0 && rreq->submitted < rreq->len) { - trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); - ret =3D -EIO; - } - - return ret; -} - /** * netfs_readahead - Helper to manage a read request * @ractl: The description of the readahead request @@ -340,6 +332,8 @@ void netfs_readahead(struct readahead_control *ractl) if (IS_ERR(rreq)) return; =20 + __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags); + ret =3D netfs_begin_cache_read(rreq, ictx); if (ret =3D=3D -ENOMEM || ret =3D=3D -EINTR || ret =3D=3D -ERESTARTSYS) goto cleanup_free; @@ -456,7 +450,7 @@ static int netfs_read_gaps(struct file *file, struct fo= lio *folio) folio_put(sink); =20 ret =3D netfs_wait_for_read(rreq); - if (ret =3D=3D 0) { + if (ret >=3D 0) { flush_dcache_folio(folio); folio_mark_uptodate(folio); } @@ -744,7 +738,7 @@ int netfs_prefetch_for_write(struct file *file, struct = folio *folio, netfs_read_to_pagecache(rreq); ret =3D netfs_wait_for_read(rreq); netfs_put_request(rreq, false, netfs_rreq_trace_put_return); - return ret; + return ret < 0 ? 
ret : 0; =20 error_put: netfs_put_request(rreq, false, netfs_rreq_trace_put_discard); diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c index 1a20cc3979c7..dedcfc2bab2d 100644 --- a/fs/netfs/direct_read.c +++ b/fs/netfs/direct_read.c @@ -47,12 +47,11 @@ static void netfs_prepare_dio_read_iterator(struct netf= s_io_subrequest *subreq) */ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq) { + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; unsigned long long start =3D rreq->start; ssize_t size =3D rreq->len; int ret =3D 0; =20 - atomic_set(&rreq->nr_outstanding, 1); - do { struct netfs_io_subrequest *subreq; ssize_t slice; @@ -67,11 +66,18 @@ static int netfs_dispatch_unbuffered_reads(struct netfs= _io_request *rreq) subreq->start =3D start; subreq->len =3D size; =20 - atomic_inc(&rreq->nr_outstanding); + __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); + spin_lock(&rreq->lock); - list_add_tail(&subreq->rreq_link, &rreq->subrequests); - subreq->prev_donated =3D rreq->prev_donated; - rreq->prev_donated =3D 0; + list_add_tail(&subreq->rreq_link, &stream->subrequests); + if (list_is_first(&subreq->rreq_link, &stream->subrequests)) { + stream->front =3D subreq; + if (!stream->active) { + stream->collected_to =3D stream->front->start; + /* Store list pointers before active flag */ + smp_store_release(&stream->active, true); + } + } trace_netfs_sreq(subreq, netfs_sreq_trace_added); spin_unlock(&rreq->lock); =20 @@ -79,7 +85,6 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_i= o_request *rreq) if (rreq->netfs_ops->prepare_read) { ret =3D rreq->netfs_ops->prepare_read(subreq); if (ret < 0) { - atomic_dec(&rreq->nr_outstanding); netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); break; } @@ -87,20 +92,34 @@ static int netfs_dispatch_unbuffered_reads(struct netfs= _io_request *rreq) =20 netfs_prepare_dio_read_iterator(subreq); slice =3D subreq->len; - rreq->netfs_ops->issue_read(subreq); - size -=3D slice; start +=3D slice; rreq->submitted +=3D slice; + if (size <=3D 0) { + smp_wmb(); /* Write lists before ALL_QUEUED. */ + set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags); + } =20 + rreq->netfs_ops->issue_read(subreq); + + if (test_bit(NETFS_RREQ_PAUSE, &rreq->flags)) { + trace_netfs_rreq(rreq, netfs_rreq_trace_wait_pause); + wait_on_bit(&rreq->flags, NETFS_RREQ_PAUSE, TASK_UNINTERRUPTIBLE); + } + if (test_bit(NETFS_RREQ_FAILED, &rreq->flags)) + break; if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) && test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags)) break; cond_resched(); } while (size > 0); =20 - if (atomic_dec_and_test(&rreq->nr_outstanding)) - netfs_rreq_terminated(rreq); + if (unlikely(size > 0)) { + smp_wmb(); /* Write lists before ALL_QUEUED. 
*/ + set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags); + netfs_wake_read_collector(rreq); + } + return ret; } =20 @@ -133,21 +152,10 @@ static int netfs_unbuffered_read(struct netfs_io_requ= est *rreq, bool sync) goto out; } =20 - if (sync) { - trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); - wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, - TASK_UNINTERRUPTIBLE); - - ret =3D rreq->error; - if (ret =3D=3D 0 && rreq->submitted < rreq->len && - rreq->origin !=3D NETFS_DIO_READ) { - trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); - ret =3D -EIO; - } - } else { + if (sync) + ret =3D netfs_wait_for_read(rreq); + else ret =3D -EIOCBQUEUED; - } - out: _leave(" =3D %d", ret); return ret; @@ -215,8 +223,10 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb= *iocb, struct iov_iter *i =20 // TODO: Set up bounce buffer if needed =20 - if (!sync) + if (!sync) { rreq->iocb =3D iocb; + __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags); + } =20 ret =3D netfs_unbuffered_read(rreq, sync); if (ret < 0) diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h index e236f752af88..334bf9f6e6f2 100644 --- a/fs/netfs/internal.h +++ b/fs/netfs/internal.h @@ -82,17 +82,25 @@ static inline void netfs_see_request(struct netfs_io_re= quest *rreq, trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), what); } =20 +static inline void netfs_see_subrequest(struct netfs_io_subrequest *subreq, + enum netfs_sreq_ref_trace what) +{ + trace_netfs_sreq_ref(subreq->rreq->debug_id, subreq->debug_index, + refcount_read(&subreq->ref), what); +} + /* * read_collect.c */ -void netfs_read_termination_worker(struct work_struct *work); -void netfs_rreq_terminated(struct netfs_io_request *rreq); +void netfs_read_collection_worker(struct work_struct *work); +void netfs_wake_read_collector(struct netfs_io_request *rreq); +void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error,= bool was_async); +ssize_t netfs_wait_for_read(struct netfs_io_request *rreq); =20 /* * read_pgpriv2.c */ -void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_subrequest *subreq, - struct netfs_io_request *rreq, +void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_request *rreq, struct folio_queue *folioq, int slot); void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq); diff --git a/fs/netfs/main.c b/fs/netfs/main.c index 16760695e667..4e3e62040831 100644 --- a/fs/netfs/main.c +++ b/fs/netfs/main.c @@ -71,7 +71,7 @@ static int netfs_requests_seq_show(struct seq_file *m, vo= id *v) refcount_read(&rreq->ref), rreq->flags, rreq->error, - atomic_read(&rreq->nr_outstanding), + 0, rreq->start, rreq->submitted, rreq->len); seq_putc(m, '\n'); return 0; diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index dde4a679d9e2..dc6b41ef18b0 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -48,7 +48,7 @@ struct netfs_io_request *netfs_alloc_request(struct addre= ss_space *mapping, spin_lock_init(&rreq->lock); INIT_LIST_HEAD(&rreq->io_streams[0].subrequests); INIT_LIST_HEAD(&rreq->io_streams[1].subrequests); - INIT_LIST_HEAD(&rreq->subrequests); + init_waitqueue_head(&rreq->waitq); refcount_set(&rreq->ref, 1); =20 if (origin =3D=3D NETFS_READAHEAD || @@ -56,10 +56,12 @@ struct netfs_io_request *netfs_alloc_request(struct add= ress_space *mapping, origin =3D=3D NETFS_READ_GAPS || origin =3D=3D NETFS_READ_SINGLE || origin =3D=3D NETFS_READ_FOR_WRITE || - origin =3D=3D NETFS_DIO_READ) - INIT_WORK(&rreq->work, NULL); - else + origin =3D=3D NETFS_DIO_READ) { + INIT_WORK(&rreq->work, 
netfs_read_collection_worker); + rreq->io_streams[0].avail =3D true; + } else { INIT_WORK(&rreq->work, netfs_write_collection_worker); + } =20 __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); if (file && file->f_flags & O_NONBLOCK) @@ -93,14 +95,6 @@ void netfs_clear_subrequests(struct netfs_io_request *rr= eq, bool was_async) struct netfs_io_stream *stream; int s; =20 - while (!list_empty(&rreq->subrequests)) { - subreq =3D list_first_entry(&rreq->subrequests, - struct netfs_io_subrequest, rreq_link); - list_del(&subreq->rreq_link); - netfs_put_subrequest(subreq, was_async, - netfs_sreq_trace_put_clear); - } - for (s =3D 0; s < ARRAY_SIZE(rreq->io_streams); s++) { stream =3D &rreq->io_streams[s]; while (!list_empty(&stream->subrequests)) { @@ -192,21 +186,7 @@ struct netfs_io_subrequest *netfs_alloc_subrequest(str= uct netfs_io_request *rreq } =20 memset(subreq, 0, kmem_cache_size(cache)); - - switch (rreq->origin) { - case NETFS_READAHEAD: - case NETFS_READPAGE: - case NETFS_READ_GAPS: - case NETFS_READ_SINGLE: - case NETFS_READ_FOR_WRITE: - case NETFS_DIO_READ: - INIT_WORK(&subreq->work, netfs_read_subreq_termination_worker); - break; - default: - INIT_WORK(&subreq->work, NULL); - break; - } - + INIT_WORK(&subreq->work, NULL); INIT_LIST_HEAD(&subreq->rreq_link); refcount_set(&subreq->ref, 2); subreq->rreq =3D rreq; diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c index 9124c8c36f9d..7f3a3c056c6e 100644 --- a/fs/netfs/read_collect.c +++ b/fs/netfs/read_collect.c @@ -14,6 +14,14 @@ #include #include "internal.h" =20 +/* Notes made in the collector */ +#define HIT_PENDING 0x01 /* A front op was still pending */ +#define MADE_PROGRESS 0x04 /* Made progress cleaning up a stream or the fo= lio set */ +#define BUFFERED 0x08 /* The pagecache needs cleaning up */ +#define NEED_RETRY 0x10 /* A front op requests retrying */ +#define COPY_TO_CACHE 0x40 /* Need to copy subrequest to cache */ +#define ABANDON_SREQ 0x80 /* Need to abandon untransferred part of subrequ= est */ + /* * Clear the unread part of an I/O request. */ @@ -31,14 +39,18 @@ static void netfs_clear_unread(struct netfs_io_subreque= st *subreq) * cache the folio, we set the group to NETFS_FOLIO_COPY_TO_CACHE, mark it * dirty and let writeback handle it. */ -static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq, - struct netfs_io_request *rreq, +static void netfs_unlock_read_folio(struct netfs_io_request *rreq, struct folio_queue *folioq, int slot) { struct netfs_folio *finfo; struct folio *folio =3D folioq_folio(folioq, slot); =20 + if (unlikely(folio_pos(folio) < rreq->abandon_to)) { + trace_netfs_folio(folio, netfs_folio_trace_abandon); + goto just_unlock; + } + flush_dcache_folio(folio); folio_mark_uptodate(folio); =20 @@ -53,7 +65,7 @@ static void netfs_unlock_read_folio(struct netfs_io_subre= quest *subreq, kfree(finfo); } =20 - if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { + if (test_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags)) { if (!WARN_ON_ONCE(folio_get_private(folio) !=3D NULL)) { trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); folio_attach_private(folio, NETFS_FOLIO_COPY_TO_CACHE); @@ -64,10 +76,11 @@ static void netfs_unlock_read_folio(struct netfs_io_sub= request *subreq, } } else { // TODO: Use of PG_private_2 is deprecated. 
- if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) - netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot); + if (test_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags)) + netfs_pgpriv2_mark_copy_to_cache(rreq, folioq, slot); } =20 +just_unlock: if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { if (folio->index =3D=3D rreq->no_unlock_folio && test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) { @@ -82,237 +95,243 @@ static void netfs_unlock_read_folio(struct netfs_io_s= ubrequest *subreq, } =20 /* - * Unlock any folios that are now completely read. Returns true if the - * subrequest is removed from the list. + * Unlock any folios we've finished with. */ -static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq) +static void netfs_read_unlock_folios(struct netfs_io_request *rreq, + unsigned int *notes) { - struct netfs_io_subrequest *prev, *next; - struct netfs_io_request *rreq =3D subreq->rreq; - struct folio_queue *folioq =3D subreq->curr_folioq; - size_t avail, prev_donated, next_donated, fsize, part, excess; - loff_t fpos, start; - loff_t fend; - int slot =3D subreq->curr_folioq_slot; - - if (WARN(subreq->transferred > subreq->len, - "Subreq overread: R%x[%x] %zu > %zu", - rreq->debug_id, subreq->debug_index, - subreq->transferred, subreq->len)) - subreq->transferred =3D subreq->len; - - trace_netfs_folioq(folioq, netfs_trace_folioq_read_progress); -next_folio: - fsize =3D PAGE_SIZE << subreq->curr_folio_order; - fpos =3D round_down(subreq->start + subreq->consumed, fsize); - fend =3D fpos + fsize; - - if (WARN_ON_ONCE(!folioq) || - WARN_ON_ONCE(!folioq_folio(folioq, slot)) || - WARN_ON_ONCE(folioq_folio(folioq, slot)->index !=3D fpos / PAGE_SIZE)= ) { - pr_err("R=3D%08x[%x] s=3D%llx-%llx ctl=3D%zx/%zx/%zx sl=3D%u\n", - rreq->debug_id, subreq->debug_index, - subreq->start, subreq->start + subreq->transferred - 1, - subreq->consumed, subreq->transferred, subreq->len, - slot); - if (folioq) { - struct folio *folio =3D folioq_folio(folioq, slot); - - pr_err("folioq: fq=3D%x orders=3D%02x%02x%02x%02x %px\n", - folioq->debug_id, - folioq->orders[0], folioq->orders[1], - folioq->orders[2], folioq->orders[3], - folioq); - if (folio) - pr_err("folio: %llx-%llx ix=3D%llx o=3D%u qo=3D%u\n", - fpos, fend - 1, folio_pos(folio), folio_order(folio), - folioq_folio_order(folioq, slot)); - } - } + struct folio_queue *folioq =3D rreq->buffer.tail; + unsigned long long collected_to =3D rreq->collected_to; + unsigned int slot =3D rreq->buffer.first_tail_slot; =20 -donation_changed: - /* Try to consume the current folio if we've hit or passed the end of - * it. There's a possibility that this subreq doesn't start at the - * beginning of the folio, in which case we need to donate to/from the - * preceding subreq. - * - * We also need to include any potential donation back from the - * following subreq. 
- */ - prev_donated =3D READ_ONCE(subreq->prev_donated); - next_donated =3D READ_ONCE(subreq->next_donated); - if (prev_donated || next_donated) { - spin_lock(&rreq->lock); - prev_donated =3D subreq->prev_donated; - next_donated =3D subreq->next_donated; - subreq->start -=3D prev_donated; - subreq->len +=3D prev_donated; - subreq->transferred +=3D prev_donated; - prev_donated =3D subreq->prev_donated =3D 0; - if (subreq->transferred =3D=3D subreq->len) { - subreq->len +=3D next_donated; - subreq->transferred +=3D next_donated; - next_donated =3D subreq->next_donated =3D 0; + if (rreq->cleaned_to >=3D rreq->collected_to) + return; + + // TODO: Begin decryption + + if (slot >=3D folioq_nr_slots(folioq)) { + folioq =3D rolling_buffer_delete_spent(&rreq->buffer); + if (!folioq) { + rreq->front_folio_order =3D 0; + return; } - trace_netfs_sreq(subreq, netfs_sreq_trace_add_donations); - spin_unlock(&rreq->lock); + slot =3D 0; } =20 - avail =3D subreq->transferred; - if (avail =3D=3D subreq->len) - avail +=3D next_donated; - start =3D subreq->start; - if (subreq->consumed =3D=3D 0) { - start -=3D prev_donated; - avail +=3D prev_donated; - } else { - start +=3D subreq->consumed; - avail -=3D subreq->consumed; - } - part =3D umin(avail, fsize); - - trace_netfs_progress(subreq, start, avail, part); - - if (start + avail >=3D fend) { - if (fpos =3D=3D start) { - /* Flush, unlock and mark for caching any folio we've just read. */ - subreq->consumed =3D fend - subreq->start; - netfs_unlock_read_folio(subreq, rreq, folioq, slot); - folioq_mark2(folioq, slot); - if (subreq->consumed >=3D subreq->len) - goto remove_subreq; - } else if (fpos < start) { - excess =3D fend - subreq->start; - - spin_lock(&rreq->lock); - /* If we complete first on a folio split with the - * preceding subreq, donate to that subreq - otherwise - * we get the responsibility. - */ - if (subreq->prev_donated !=3D prev_donated) { - spin_unlock(&rreq->lock); - goto donation_changed; - } + for (;;) { + struct folio *folio; + unsigned long long fpos, fend; + unsigned int order; + size_t fsize; =20 - if (list_is_first(&subreq->rreq_link, &rreq->subrequests)) { - spin_unlock(&rreq->lock); - pr_err("Can't donate prior to front\n"); - goto bad; - } + if (*notes & COPY_TO_CACHE) + set_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags); =20 - prev =3D list_prev_entry(subreq, rreq_link); - WRITE_ONCE(prev->next_donated, prev->next_donated + excess); - subreq->start +=3D excess; - subreq->len -=3D excess; - subreq->transferred -=3D excess; - trace_netfs_donate(rreq, subreq, prev, excess, - netfs_trace_donate_tail_to_prev); - trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev); - - if (subreq->consumed >=3D subreq->len) - goto remove_subreq_locked; - spin_unlock(&rreq->lock); - } else { - pr_err("fpos > start\n"); - goto bad; - } + folio =3D folioq_folio(folioq, slot); + if (WARN_ONCE(!folio_test_locked(folio), + "R=3D%08x: folio %lx is not locked\n", + rreq->debug_id, folio->index)) + trace_netfs_folio(folio, netfs_folio_trace_not_locked); + + order =3D folioq_folio_order(folioq, slot); + rreq->front_folio_order =3D order; + fsize =3D PAGE_SIZE << order; + fpos =3D folio_pos(folio); + fend =3D umin(fpos + fsize, rreq->i_size); + + trace_netfs_collect_folio(rreq, folio, fend, collected_to); + + /* Unlock any folio we've transferred all of. 
*/ + if (collected_to < fend) + break; + + netfs_unlock_read_folio(rreq, folioq, slot); + WRITE_ONCE(rreq->cleaned_to, fpos + fsize); + *notes |=3D MADE_PROGRESS; =20 - /* Advance the rolling buffer to the next folio. */ + clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags); + + /* Clean up the head folioq. If we clear an entire folioq, then + * we can get rid of it provided it's not also the tail folioq + * being filled by the issuer. + */ + folioq_clear(folioq, slot); slot++; if (slot >=3D folioq_nr_slots(folioq)) { + folioq =3D rolling_buffer_delete_spent(&rreq->buffer); + if (!folioq) + goto done; slot =3D 0; - folioq =3D folioq->next; - subreq->curr_folioq =3D folioq; trace_netfs_folioq(folioq, netfs_trace_folioq_read_progress); } - subreq->curr_folioq_slot =3D slot; - if (folioq && folioq_folio(folioq, slot)) - subreq->curr_folio_order =3D folioq->orders[slot]; - cond_resched(); - goto next_folio; + + if (fpos + fsize >=3D collected_to) + break; } =20 - /* Deal with partial progress. */ - if (subreq->transferred < subreq->len) - return false; + rreq->buffer.tail =3D folioq; +done: + rreq->buffer.first_tail_slot =3D slot; +} =20 - /* Donate the remaining downloaded data to one of the neighbouring - * subrequests. Note that we may race with them doing the same thing. +/* + * Collect and assess the results of various read subrequests. We may nee= d to + * retry some of the results. + * + * Note that we have a sequence of subrequests, which may be drawing on + * different sources and may or may not be the same size or starting posit= ion + * and may not even correspond in boundary alignment. + */ +static void netfs_collect_read_results(struct netfs_io_request *rreq) +{ + struct netfs_io_subrequest *front, *remove; + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; + unsigned int notes; + + _enter("%llx-%llx", rreq->start, rreq->start + rreq->len); + trace_netfs_rreq(rreq, netfs_rreq_trace_collect); + trace_netfs_collect(rreq); + +reassess: + if (rreq->origin =3D=3D NETFS_READAHEAD || + rreq->origin =3D=3D NETFS_READPAGE || + rreq->origin =3D=3D NETFS_READ_FOR_WRITE) + notes =3D BUFFERED; + else + notes =3D 0; + + /* Remove completed subrequests from the front of the stream and + * advance the completion point. We stop when we hit something that's + * in progress. The issuer thread may be adding stuff to the tail + * whilst we're doing this. */ - spin_lock(&rreq->lock); + front =3D READ_ONCE(stream->front); + while (front) { + size_t transferred; =20 - if (subreq->prev_donated !=3D prev_donated || - subreq->next_donated !=3D next_donated) { + trace_netfs_collect_sreq(rreq, front); + _debug("sreq [%x] %llx %zx/%zx", + front->debug_index, front->start, front->transferred, front->len); + + if (stream->collected_to < front->start) { + trace_netfs_collect_gap(rreq, stream, front->start, 'F'); + stream->collected_to =3D front->start; + } + + if (test_bit(NETFS_SREQ_IN_PROGRESS, &front->flags)) + notes |=3D HIT_PENDING; + smp_rmb(); /* Read counters after IN_PROGRESS flag. */ + transferred =3D READ_ONCE(front->transferred); + + /* If we can now collect the next folio, do so. We don't want + * to defer this as we have to decide whether we need to copy + * to the cache or not, and that may differ between adjacent + * subreqs. + */ + if (notes & BUFFERED) { + size_t fsize =3D PAGE_SIZE << rreq->front_folio_order; + + /* Clear the tail of a short read. 
*/ + if (!(notes & HIT_PENDING) && + front->error =3D=3D 0 && + transferred < front->len && + (test_bit(NETFS_SREQ_HIT_EOF, &front->flags) || + test_bit(NETFS_SREQ_CLEAR_TAIL, &front->flags))) { + netfs_clear_unread(front); + transferred =3D front->transferred =3D front->len; + trace_netfs_sreq(front, netfs_sreq_trace_clear); + } + + stream->collected_to =3D front->start + transferred; + rreq->collected_to =3D stream->collected_to; + + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &front->flags)) + notes |=3D COPY_TO_CACHE; + + if (test_bit(NETFS_SREQ_FAILED, &front->flags)) { + rreq->abandon_to =3D front->start + front->len; + front->transferred =3D front->len; + transferred =3D front->len; + trace_netfs_rreq(rreq, netfs_rreq_trace_set_abandon); + } + if (front->start + transferred >=3D rreq->cleaned_to + fsize || + test_bit(NETFS_SREQ_HIT_EOF, &front->flags)) + netfs_read_unlock_folios(rreq, ¬es); + } else { + stream->collected_to =3D front->start + transferred; + rreq->collected_to =3D stream->collected_to; + } + + /* Stall if the front is still undergoing I/O. */ + if (notes & HIT_PENDING) + break; + + if (test_bit(NETFS_SREQ_FAILED, &front->flags)) { + if (!stream->failed) { + stream->error =3D front->error; + rreq->error =3D front->error; + set_bit(NETFS_RREQ_FAILED, &rreq->flags); + stream->failed =3D true; + } + notes |=3D MADE_PROGRESS | ABANDON_SREQ; + } else if (test_bit(NETFS_SREQ_NEED_RETRY, &front->flags)) { + stream->need_retry =3D true; + notes |=3D NEED_RETRY | MADE_PROGRESS; + break; + } else { + if (!stream->failed) + stream->transferred =3D stream->collected_to - rreq->start; + notes |=3D MADE_PROGRESS; + } + + /* Remove if completely consumed. */ + stream->source =3D front->source; + spin_lock(&rreq->lock); + + remove =3D front; + trace_netfs_sreq(front, netfs_sreq_trace_discard); + list_del_init(&front->rreq_link); + front =3D list_first_entry_or_null(&stream->subrequests, + struct netfs_io_subrequest, rreq_link); + stream->front =3D front; spin_unlock(&rreq->lock); - cond_resched(); - goto donation_changed; + netfs_put_subrequest(remove, false, + notes & ABANDON_SREQ ? + netfs_sreq_trace_put_cancel : + netfs_sreq_trace_put_done); } =20 - /* Deal with the trickiest case: that this subreq is in the middle of a - * folio, not touching either edge, but finishes first. In such a - * case, we donate to the previous subreq, if there is one, so that the - * donation is only handled when that completes - and remove this - * subreq from the list. - * - * If the previous subreq finished first, we will have acquired their - * donation and should be able to unlock folios and/or donate nextwards. - */ - if (!subreq->consumed && - !prev_donated && - !list_is_first(&subreq->rreq_link, &rreq->subrequests)) { - prev =3D list_prev_entry(subreq, rreq_link); - WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); - subreq->start +=3D subreq->len; - subreq->len =3D 0; - subreq->transferred =3D 0; - trace_netfs_donate(rreq, subreq, prev, subreq->len, - netfs_trace_donate_to_prev); - trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev); - goto remove_subreq_locked; - } + trace_netfs_collect_stream(rreq, stream); + trace_netfs_collect_state(rreq, rreq->collected_to, notes); =20 - /* If we can't donate down the chain, donate up the chain instead. 
*/ - excess =3D subreq->len - subreq->consumed + next_donated; + if (!(notes & BUFFERED)) + rreq->cleaned_to =3D rreq->collected_to; =20 - if (!subreq->consumed) - excess +=3D prev_donated; + if (notes & NEED_RETRY) + goto need_retry; + if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &rreq->flags)) { + trace_netfs_rreq(rreq, netfs_rreq_trace_unpause); + clear_bit_unlock(NETFS_RREQ_PAUSE, &rreq->flags); + wake_up_bit(&rreq->flags, NETFS_RREQ_PAUSE); + } =20 - if (list_is_last(&subreq->rreq_link, &rreq->subrequests)) { - rreq->prev_donated =3D excess; - trace_netfs_donate(rreq, subreq, NULL, excess, - netfs_trace_donate_to_deferred_next); - } else { - next =3D list_next_entry(subreq, rreq_link); - WRITE_ONCE(next->prev_donated, excess); - trace_netfs_donate(rreq, subreq, next, excess, - netfs_trace_donate_to_next); + if (notes & MADE_PROGRESS) { + //cond_resched(); + goto reassess; } - trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_next); - subreq->len =3D subreq->consumed; - subreq->transferred =3D subreq->consumed; - goto remove_subreq_locked; - -remove_subreq: - spin_lock(&rreq->lock); -remove_subreq_locked: - subreq->consumed =3D subreq->len; - list_del(&subreq->rreq_link); - spin_unlock(&rreq->lock); - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_consumed); - return true; - -bad: - /* Errr... prev and next both donated to us, but insufficient to finish - * the folio. + +out: + _leave(" =3D %x", notes); + return; + +need_retry: + /* Okay... We're going to have to retry parts of the stream. Note + * that any partially completed op will have had any wholly transferred + * folios removed from it. */ - printk("R=3D%08x[%x] s=3D%llx-%llx %zx/%zx/%zx\n", - rreq->debug_id, subreq->debug_index, - subreq->start, subreq->start + subreq->transferred - 1, - subreq->consumed, subreq->transferred, subreq->len); - printk("folio: %llx-%llx\n", fpos, fend - 1); - printk("donated: prev=3D%zx next=3D%zx\n", prev_donated, next_donated); - printk("s=3D%llx av=3D%zx part=3D%zx\n", start, avail, part); - BUG(); + _debug("retry"); + netfs_retry_reads(rreq); + goto out; } =20 /* @@ -321,12 +340,13 @@ static bool netfs_consume_read_data(struct netfs_io_s= ubrequest *subreq) static void netfs_rreq_assess_dio(struct netfs_io_request *rreq) { struct netfs_io_subrequest *subreq; + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; unsigned int i; =20 /* Collect unbuffered reads and direct reads, adding up the transfer * sizes until we find the first short or failed subrequest. 
*/ - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { + list_for_each_entry(subreq, &stream->subrequests, rreq_link) { rreq->transferred +=3D subreq->transferred; =20 if (subreq->transferred < subreq->len || @@ -363,22 +383,12 @@ static void netfs_rreq_assess_dio(struct netfs_io_req= uest *rreq) */ static void netfs_rreq_assess_single(struct netfs_io_request *rreq) { - struct netfs_io_subrequest *subreq; struct netfs_io_stream *stream =3D &rreq->io_streams[0]; =20 - subreq =3D list_first_entry_or_null(&stream->subrequests, - struct netfs_io_subrequest, rreq_link); - if (subreq) { - if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) - rreq->error =3D subreq->error; - else - rreq->transferred =3D subreq->transferred; - - if (!rreq->error && subreq->source =3D=3D NETFS_DOWNLOAD_FROM_SERVER && - fscache_resources_valid(&rreq->cache_resources)) { - trace_netfs_rreq(rreq, netfs_rreq_trace_dirty); - netfs_single_mark_inode_dirty(rreq->inode); - } + if (!rreq->error && stream->source =3D=3D NETFS_DOWNLOAD_FROM_SERVER && + fscache_resources_valid(&rreq->cache_resources)) { + trace_netfs_rreq(rreq, netfs_rreq_trace_dirty); + netfs_single_mark_inode_dirty(rreq->inode); } =20 if (rreq->iocb) { @@ -392,21 +402,32 @@ static void netfs_rreq_assess_single(struct netfs_io_= request *rreq) } =20 /* - * Assess the state of a read request and decide what to do next. + * Perform the collection of subrequests and folios. * * Note that we're in normal kernel thread context at this point, possibly * running on a workqueue. */ -void netfs_rreq_terminated(struct netfs_io_request *rreq) +static void netfs_read_collection(struct netfs_io_request *rreq) { - trace_netfs_rreq(rreq, netfs_rreq_trace_assess); + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; =20 - //netfs_rreq_is_still_valid(rreq); + netfs_collect_read_results(rreq); =20 - if (test_and_clear_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags)) { - netfs_retry_reads(rreq); + /* We're done when the app thread has finished posting subreqs and the + * queue is empty. + */ + if (!test_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags)) return; - } + smp_rmb(); /* Read ALL_QUEUED before subreq lists. */ + + if (!list_empty(&stream->subrequests)) + return; + + /* Okay, declare that all I/O is complete. */ + rreq->transferred =3D stream->transferred; + trace_netfs_rreq(rreq, netfs_rreq_trace_complete); + + //netfs_rreq_is_still_valid(rreq); =20 switch (rreq->origin) { case NETFS_DIO_READ: @@ -432,6 +453,33 @@ void netfs_rreq_terminated(struct netfs_io_request *rr= eq) netfs_pgpriv2_write_to_the_cache(rreq); } =20 +void netfs_read_collection_worker(struct work_struct *work) +{ + struct netfs_io_request *rreq =3D container_of(work, struct netfs_io_requ= est, work); + + netfs_see_request(rreq, netfs_rreq_trace_see_work); + if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) + netfs_read_collection(rreq); + netfs_put_request(rreq, false, netfs_rreq_trace_put_work); +} + +/* + * Wake the collection work item. + */ +void netfs_wake_read_collector(struct netfs_io_request *rreq) +{ + if (test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) { + if (!work_pending(&rreq->work)) { + netfs_get_request(rreq, netfs_rreq_trace_get_work); + if (!queue_work(system_unbound_wq, &rreq->work)) + netfs_put_request(rreq, true, netfs_rreq_trace_put_work_nq); + } + } else { + trace_netfs_rreq(rreq, netfs_rreq_trace_wake_queue); + wake_up(&rreq->waitq); + } +} + /** * netfs_read_subreq_progress - Note progress of a read operation. * @subreq: The read request that has terminated. 
@@ -445,17 +493,22 @@ void netfs_rreq_terminated(struct netfs_io_request *r= req) void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq =3D subreq->rreq; - - might_sleep(); + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; + size_t fsize =3D PAGE_SIZE << rreq->front_folio_order; =20 trace_netfs_sreq(subreq, netfs_sreq_trace_progress); =20 - if (subreq->transferred > subreq->consumed && + /* If we are at the head of the queue, wake up the collector, + * getting a ref to it if we were the ones to do so. + */ + if (subreq->start + subreq->transferred > rreq->cleaned_to + fsize && (rreq->origin =3D=3D NETFS_READAHEAD || rreq->origin =3D=3D NETFS_READPAGE || - rreq->origin =3D=3D NETFS_READ_FOR_WRITE)) { - netfs_consume_read_data(subreq); - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); + rreq->origin =3D=3D NETFS_READ_FOR_WRITE) && + list_is_first(&subreq->rreq_link, &stream->subrequests) + ) { + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); + netfs_wake_read_collector(rreq); } } EXPORT_SYMBOL(netfs_read_subreq_progress); @@ -479,8 +532,7 @@ EXPORT_SYMBOL(netfs_read_subreq_progress); void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq) { struct netfs_io_request *rreq =3D subreq->rreq; - - might_sleep(); + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; =20 switch (subreq->source) { case NETFS_READ_FROM_CACHE: @@ -493,83 +545,114 @@ void netfs_read_subreq_terminated(struct netfs_io_su= brequest *subreq) break; } =20 - if (rreq->origin !=3D NETFS_DIO_READ) { - /* Collect buffered reads. - * - * If the read completed validly short, then we can clear the - * tail before going on to unlock the folios. - */ - if (subreq->error =3D=3D 0 && subreq->transferred < subreq->len && - (test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags) || - test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags))) { - netfs_clear_unread(subreq); - subreq->transferred =3D subreq->len; - trace_netfs_sreq(subreq, netfs_sreq_trace_clear); - } - if (subreq->transferred > subreq->consumed && - (rreq->origin =3D=3D NETFS_READAHEAD || - rreq->origin =3D=3D NETFS_READPAGE || - rreq->origin =3D=3D NETFS_READ_FOR_WRITE)) { - netfs_consume_read_data(subreq); - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); - } - rreq->transferred +=3D subreq->transferred; - } - /* Deal with retry requests, short reads and errors. If we retry * but don't make progress, we abandon the attempt. 
*/ if (!subreq->error && subreq->transferred < subreq->len) { if (test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags)) { trace_netfs_sreq(subreq, netfs_sreq_trace_hit_eof); + } else if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { + trace_netfs_sreq(subreq, netfs_sreq_trace_need_retry); + } else if (test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags)) { + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); + trace_netfs_sreq(subreq, netfs_sreq_trace_partial_read); } else { + __set_bit(NETFS_SREQ_FAILED, &subreq->flags); + subreq->error =3D -ENODATA; trace_netfs_sreq(subreq, netfs_sreq_trace_short); - if (subreq->transferred > subreq->consumed) { - __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); - set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); - } else if (!__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags))= { - __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); - set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); - } else { - __set_bit(NETFS_SREQ_FAILED, &subreq->flags); - subreq->error =3D -ENODATA; - } } } =20 - trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); - if (unlikely(subreq->error < 0)) { trace_netfs_failure(rreq, subreq, subreq->error, netfs_fail_read); if (subreq->source =3D=3D NETFS_READ_FROM_CACHE) { netfs_stat(&netfs_n_rh_read_failed); + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); } else { netfs_stat(&netfs_n_rh_download_failed); - set_bit(NETFS_RREQ_FAILED, &rreq->flags); - rreq->error =3D subreq->error; + __set_bit(NETFS_SREQ_FAILED, &subreq->flags); } + trace_netfs_rreq(rreq, netfs_rreq_trace_set_pause); + set_bit(NETFS_RREQ_PAUSE, &rreq->flags); } =20 - if (atomic_dec_and_test(&rreq->nr_outstanding)) - netfs_rreq_terminated(rreq); + trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); + + clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags); =20 - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_terminated); + /* If we are at the head of the queue, wake up the collector. */ + if (list_is_first(&subreq->rreq_link, &stream->subrequests)) + netfs_wake_read_collector(rreq); + + netfs_put_subrequest(subreq, true, netfs_sreq_trace_put_terminated); } EXPORT_SYMBOL(netfs_read_subreq_terminated); =20 -/** - * netfs_read_subreq_termination_worker - Workqueue helper for read termin= ation - * @work: The subreq->work in the I/O request that has been terminated. - * - * Helper function to jump to netfs_read_subreq_terminated() from the - * subrequest work item. +/* + * Handle termination of a read from the cache. */ -void netfs_read_subreq_termination_worker(struct work_struct *work) +void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error,= bool was_async) { - struct netfs_io_subrequest *subreq =3D - container_of(work, struct netfs_io_subrequest, work); + struct netfs_io_subrequest *subreq =3D priv; =20 + if (transferred_or_error > 0) { + subreq->error =3D 0; + if (transferred_or_error > 0) { + subreq->transferred +=3D transferred_or_error; + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); + } + } else { + subreq->error =3D transferred_or_error; + } netfs_read_subreq_terminated(subreq); } -EXPORT_SYMBOL(netfs_read_subreq_termination_worker); + +/* + * Wait for the read operation to complete, successfully or otherwise. 
+ */ +ssize_t netfs_wait_for_read(struct netfs_io_request *rreq) +{ + struct netfs_io_subrequest *subreq; + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; + DEFINE_WAIT(myself); + ssize_t ret; + + for (;;) { + trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue); + prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE); + + subreq =3D list_first_entry_or_null(&stream->subrequests, + struct netfs_io_subrequest, rreq_link); + if (subreq && + (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags) || + test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))) + netfs_read_collection(rreq); + + if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) + break; + + schedule(); + trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue); + } + + finish_wait(&rreq->waitq, &myself); + + ret =3D rreq->error; + if (ret =3D=3D 0) { + ret =3D rreq->transferred; + switch (rreq->origin) { + case NETFS_DIO_READ: + case NETFS_READ_SINGLE: + ret =3D rreq->transferred; + break; + default: + if (rreq->submitted < rreq->len) { + trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); + ret =3D -EIO; + } + break; + } + } + + return ret; +} diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c index d84dccc44cab..a6acebc9b659 100644 --- a/fs/netfs/read_pgpriv2.c +++ b/fs/netfs/read_pgpriv2.c @@ -18,8 +18,7 @@ * third mark in the folio queue is used to indicate that this folio needs * writing. */ -void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_subrequest *subreq, - struct netfs_io_request *rreq, +void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_request *rreq, struct folio_queue *folioq, int slot) { diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c index 264f3cb6a7dc..8ca0558570c1 100644 --- a/fs/netfs/read_retry.c +++ b/fs/netfs/read_retry.c @@ -12,15 +12,8 @@ static void netfs_reissue_read(struct netfs_io_request *rreq, struct netfs_io_subrequest *subreq) { - struct iov_iter *io_iter =3D &subreq->io_iter; - - if (iov_iter_is_folioq(io_iter)) { - subreq->curr_folioq =3D (struct folio_queue *)io_iter->folioq; - subreq->curr_folioq_slot =3D io_iter->folioq_slot; - subreq->curr_folio_order =3D subreq->curr_folioq->orders[subreq->curr_fo= lioq_slot]; - } - - atomic_inc(&rreq->nr_outstanding); + __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); + __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); subreq->rreq->netfs_ops->issue_read(subreq); @@ -33,13 +26,12 @@ static void netfs_reissue_read(struct netfs_io_request = *rreq, static void netfs_retry_read_subrequests(struct netfs_io_request *rreq) { struct netfs_io_subrequest *subreq; - struct netfs_io_stream *stream0 =3D &rreq->io_streams[0]; - LIST_HEAD(sublist); - LIST_HEAD(queue); + struct netfs_io_stream *stream =3D &rreq->io_streams[0]; + struct list_head *next; =20 _enter("R=3D%x", rreq->debug_id); =20 - if (list_empty(&rreq->subrequests)) + if (list_empty(&stream->subrequests)) return; =20 if (rreq->netfs_ops->retry_request) @@ -52,7 +44,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_= request *rreq) !test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) { struct netfs_io_subrequest *subreq; =20 - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { + list_for_each_entry(subreq, &stream->subrequests, rreq_link) { if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) break; if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { @@ -73,48 +65,44 @@ static void netfs_retry_read_subrequests(struct 
netfs_i= o_request *rreq) * populating with smaller subrequests. In the event that the subreq * we just launched finishes before we insert the next subreq, it'll * fill in rreq->prev_donated instead. - + * * Note: Alternatively, we could split the tail subrequest right before * we reissue it and fix up the donations under lock. */ - list_splice_init(&rreq->subrequests, &queue); + next =3D stream->subrequests.next; =20 do { - struct netfs_io_subrequest *from; + struct netfs_io_subrequest *subreq =3D NULL, *from, *to, *tmp; struct iov_iter source; unsigned long long start, len; - size_t part, deferred_next_donated =3D 0; + size_t part; bool boundary =3D false; =20 /* Go through the subreqs and find the next span of contiguous * buffer that we then rejig (cifs, for example, needs the * rsize renegotiating) and reissue. */ - from =3D list_first_entry(&queue, struct netfs_io_subrequest, rreq_link); - list_move_tail(&from->rreq_link, &sublist); + from =3D list_entry(next, struct netfs_io_subrequest, rreq_link); + to =3D from; start =3D from->start + from->transferred; len =3D from->len - from->transferred; =20 - _debug("from R=3D%08x[%x] s=3D%llx ctl=3D%zx/%zx/%zx", + _debug("from R=3D%08x[%x] s=3D%llx ctl=3D%zx/%zx", rreq->debug_id, from->debug_index, - from->start, from->consumed, from->transferred, from->len); + from->start, from->transferred, from->len); =20 if (test_bit(NETFS_SREQ_FAILED, &from->flags) || !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) goto abandon; =20 - deferred_next_donated =3D from->next_donated; - while ((subreq =3D list_first_entry_or_null( - &queue, struct netfs_io_subrequest, rreq_link))) { - if (subreq->start !=3D start + len || - subreq->transferred > 0 || + list_for_each_continue(next, &stream->subrequests) { + subreq =3D list_entry(next, struct netfs_io_subrequest, rreq_link); + if (subreq->start + subreq->transferred !=3D start + len || + test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) || !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) break; - list_move_tail(&subreq->rreq_link, &sublist); - len +=3D subreq->len; - deferred_next_donated =3D subreq->next_donated; - if (test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags)) - break; + to =3D subreq; + len +=3D to->len; } =20 _debug(" - range: %llx-%llx %llx", start, start + len - 1, len); @@ -127,36 +115,28 @@ static void netfs_retry_read_subrequests(struct netfs= _io_request *rreq) source.count =3D len; =20 /* Work through the sublist. 
 */
-	while ((subreq = list_first_entry_or_null(
-			&sublist, struct netfs_io_subrequest, rreq_link))) {
-		list_del(&subreq->rreq_link);
-
+	subreq = from;
+	list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
+		if (!len)
+			break;
 		subreq->source	= NETFS_DOWNLOAD_FROM_SERVER;
 		subreq->start	= start - subreq->transferred;
 		subreq->len	= len   + subreq->transferred;
-		stream0->sreq_max_len = subreq->len;
-
 		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
 		__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
-
-		spin_lock(&rreq->lock);
-		list_add_tail(&subreq->rreq_link, &rreq->subrequests);
-		subreq->prev_donated += rreq->prev_donated;
-		rreq->prev_donated = 0;
 		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
-		spin_unlock(&rreq->lock);
-
-		BUG_ON(!len);

 		/* Renegotiate max_len (rsize) */
+		stream->sreq_max_len = subreq->len;
 		if (rreq->netfs_ops->prepare_read(subreq) < 0) {
 			trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
 			__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+			goto abandon;
 		}

-		part = umin(len, stream0->sreq_max_len);
-		if (unlikely(rreq->io_streams[0].sreq_max_segs))
-			part = netfs_limit_iter(&source, 0, part, stream0->sreq_max_segs);
+		part = umin(len, stream->sreq_max_len);
+		if (unlikely(stream->sreq_max_segs))
+			part = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);
 		subreq->len = subreq->transferred + part;
 		subreq->io_iter = source;
 		iov_iter_truncate(&subreq->io_iter, part);
@@ -166,58 +146,106 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 		if (!len) {
 			if (boundary)
 				__set_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
-			subreq->next_donated = deferred_next_donated;
 		} else {
 			__clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
-			subreq->next_donated = 0;
 		}

+		netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
 		netfs_reissue_read(rreq, subreq);
-		if (!len)
+		if (subreq == to)
 			break;
-
-		/* If we ran out of subrequests, allocate another. */
-		if (list_empty(&sublist)) {
-			subreq = netfs_alloc_subrequest(rreq);
-			if (!subreq)
-				goto abandon;
-			subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
-			subreq->start  = start;
-
-			/* We get two refs, but need just one. */
-			netfs_put_subrequest(subreq, false, netfs_sreq_trace_new);
-			trace_netfs_sreq(subreq, netfs_sreq_trace_split);
-			list_add_tail(&subreq->rreq_link, &sublist);
-		}
 	}

 	/* If we managed to use fewer subreqs, we can discard the
-	 * excess.
+	 * excess; if we used the same number, then we're done.
 	 */
-	while ((subreq = list_first_entry_or_null(
-			&sublist, struct netfs_io_subrequest, rreq_link))) {
-		trace_netfs_sreq(subreq, netfs_sreq_trace_discard);
-		list_del(&subreq->rreq_link);
-		netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
+	if (!len) {
+		if (subreq == to)
+			continue;
+		list_for_each_entry_safe_from(subreq, tmp,
+					      &stream->subrequests, rreq_link) {
+			trace_netfs_sreq(subreq, netfs_sreq_trace_discard);
+			list_del(&subreq->rreq_link);
+			netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
+			if (subreq == to)
+				break;
+		}
+		continue;
 	}

-	} while (!list_empty(&queue));
+	/* We ran out of subrequests, so we need to allocate some more
+	 * and insert them after.
+	 */
+	do {
+		subreq = netfs_alloc_subrequest(rreq);
+		if (!subreq) {
+			subreq = to;
+			goto abandon_after;
+		}
+		subreq->source		= NETFS_DOWNLOAD_FROM_SERVER;
+		subreq->start		= start;
+		subreq->len		= len;
+		subreq->debug_index	= atomic_inc_return(&rreq->subreq_counter);
+		subreq->stream_nr	= stream->stream_nr;
+		__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
+
+		trace_netfs_sreq_ref(rreq->debug_id, subreq->debug_index,
+				     refcount_read(&subreq->ref),
+				     netfs_sreq_trace_new);
+		netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+
+		list_add(&subreq->rreq_link, &to->rreq_link);
+		to = list_next_entry(to, rreq_link);
+		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+
+		stream->sreq_max_len	= umin(len, rreq->rsize);
+		stream->sreq_max_segs	= 0;
+		if (unlikely(stream->sreq_max_segs))
+			part = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);
+
+		netfs_stat(&netfs_n_rh_download);
+		if (rreq->netfs_ops->prepare_read(subreq) < 0) {
+			trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
+			__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+			goto abandon;
+		}
+
+		part = umin(len, stream->sreq_max_len);
+		subreq->len = subreq->transferred + part;
+		subreq->io_iter = source;
+		iov_iter_truncate(&subreq->io_iter, part);
+		iov_iter_advance(&source, part);
+
+		len -= part;
+		start += part;
+		if (!len && boundary) {
+			__set_bit(NETFS_SREQ_BOUNDARY, &to->flags);
+			boundary = false;
+		}
+
+		netfs_reissue_read(rreq, subreq);
+	} while (len);
+
+	} while (!list_is_head(next, &stream->subrequests));

 	return;

-	/* If we hit ENOMEM, fail all remaining subrequests */
+	/* If we hit an error, fail all remaining incomplete subrequests */
+abandon_after:
+	if (list_is_last(&subreq->rreq_link, &stream->subrequests))
+		return;
+	subreq = list_next_entry(subreq, rreq_link);
 abandon:
-	list_splice_init(&sublist, &queue);
-	list_for_each_entry(subreq, &queue, rreq_link) {
-		if (!subreq->error)
-			subreq->error = -ENOMEM;
-		__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
+	list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
+		if (!subreq->error &&
+		    !test_bit(NETFS_SREQ_FAILED, &subreq->flags) &&
+		    !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
+			continue;
+		subreq->error = -ENOMEM;
+		__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
 		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
 		__clear_bit(NETFS_SREQ_RETRYING, &subreq->flags);
 	}
-	spin_lock(&rreq->lock);
-	list_splice_tail_init(&queue, &rreq->subrequests);
-	spin_unlock(&rreq->lock);
 }

 /*
@@ -225,14 +253,19 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
  */
 void netfs_retry_reads(struct netfs_io_request *rreq)
 {
-	trace_netfs_rreq(rreq, netfs_rreq_trace_resubmit);
+	struct netfs_io_subrequest *subreq;
+	struct netfs_io_stream *stream = &rreq->io_streams[0];

-	atomic_inc(&rreq->nr_outstanding);
+	/* Wait for all outstanding I/O to quiesce before performing retries as
+	 * we may need to renegotiate the I/O sizes.
+	 */
+	list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+		wait_on_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS,
+			    TASK_UNINTERRUPTIBLE);
+	}

+	trace_netfs_rreq(rreq, netfs_rreq_trace_resubmit);
 	netfs_retry_read_subrequests(rreq);
-
-	if (atomic_dec_and_test(&rreq->nr_outstanding))
-		netfs_rreq_terminated(rreq);
 }

 /*
diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
index 2a66c5fde071..14bc61107182 100644
--- a/fs/netfs/read_single.c
+++ b/fs/netfs/read_single.c
@@ -77,6 +77,7 @@ static void netfs_single_read_cache(struct netfs_io_request *rreq,
 {
 	struct netfs_cache_resources *cres = &rreq->cache_resources;

+	_enter("R=%08x[%x]", rreq->debug_id, subreq->debug_index);
 	netfs_stat(&netfs_n_rh_read);
 	cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_FAIL,
 			netfs_cache_read_terminated, subreq);
@@ -88,28 +89,28 @@ static void netfs_single_read_cache(struct netfs_io_request *rreq,
  */
 static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
 {
+	struct netfs_io_stream *stream = &rreq->io_streams[0];
 	struct netfs_io_subrequest *subreq;
 	int ret = 0;

-	atomic_set(&rreq->nr_outstanding, 1);
-
 	subreq = netfs_alloc_subrequest(rreq);
-	if (!subreq) {
-		ret = -ENOMEM;
-		goto out;
-	}
+	if (!subreq)
+		return -ENOMEM;

 	subreq->source	= NETFS_DOWNLOAD_FROM_SERVER;
 	subreq->start	= 0;
 	subreq->len	= rreq->len;
 	subreq->io_iter	= rreq->buffer.iter;

-	atomic_inc(&rreq->nr_outstanding);
+	__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);

-	spin_lock_bh(&rreq->lock);
-	list_add_tail(&subreq->rreq_link, &rreq->subrequests);
+	spin_lock(&rreq->lock);
+	list_add_tail(&subreq->rreq_link, &stream->subrequests);
 	trace_netfs_sreq(subreq, netfs_sreq_trace_added);
-	spin_unlock_bh(&rreq->lock);
+	stream->front = subreq;
+	/* Store list pointers before active flag */
+	smp_store_release(&stream->active, true);
+	spin_unlock(&rreq->lock);

 	netfs_single_cache_prepare_read(rreq, subreq);
 	switch (subreq->source) {
@@ -137,14 +138,12 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
 		break;
 	}

-out:
-	if (atomic_dec_and_test(&rreq->nr_outstanding))
-		netfs_rreq_terminated(rreq);
+	smp_wmb(); /* Write lists before ALL_QUEUED. */
+	set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
 	return ret;
 cancel:
-	atomic_dec(&rreq->nr_outstanding);
 	netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
-	goto out;
+	return ret;
 }

 /**
@@ -185,13 +184,7 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
 	rreq->buffer.iter = *iter;
 	netfs_single_dispatch_read(rreq);

-	trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip);
-	wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS,
-		    TASK_UNINTERRUPTIBLE);
-
-	ret = rreq->error;
-	if (ret == 0)
-		ret = rreq->transferred;
+	ret = netfs_wait_for_read(rreq);
 	netfs_put_request(rreq, true, netfs_rreq_trace_put_return);
 	return ret;

diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c
index 2222c3a6b9d1..74def87abb01 100644
--- a/fs/netfs/write_retry.c
+++ b/fs/netfs/write_retry.c
@@ -94,15 +94,21 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
 	list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
 		if (!len)
 			break;
-		/* Renegotiate max_len (wsize) */
-		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+
+		subreq->start	= start;
+		subreq->len	= len;
 		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
 		__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
+		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+
+		/* Renegotiate max_len (wsize) */
+		stream->sreq_max_len = len;
 		stream->prepare_write(subreq);

-		part = min(len, stream->sreq_max_len);
+		part = umin(len, stream->sreq_max_len);
+		if (unlikely(stream->sreq_max_segs))
+			part = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);
 		subreq->len = part;
-		subreq->start = start;
 		subreq->transferred = 0;
 		len -= part;
 		start += part;
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index bdf1933cb0e2..107e3df97edc 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -1323,6 +1323,8 @@ cifs_readv_callback(struct mid_q_entry *mid)
 			__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
 			rdata->result = 0;
 		}
+		if (rdata->got_bytes)
+			__set_bit(NETFS_SREQ_MADE_PROGRESS, &rdata->subreq.flags);
 	}

 	rdata->credits.value = 0;
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index 0166eb42ce94..81165b2c6acf 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -4620,6 +4620,8 @@ smb2_readv_callback(struct mid_q_entry *mid)
 			__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
 			rdata->result = 0;
 		}
+		if (rdata->got_bytes)
+			__set_bit(NETFS_SREQ_MADE_PROGRESS, &rdata->subreq.flags);
 	}
 	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, rdata->credits.value,
 			      server->credits, server->in_flight,
@@ -4628,7 +4630,7 @@ smb2_readv_callback(struct mid_q_entry *mid)
 	rdata->subreq.error = rdata->result;
 	rdata->subreq.transferred += rdata->got_bytes;
 	trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_progress);
-	queue_work(cifsiod_wq, &rdata->subreq.work);
+	netfs_read_subreq_terminated(&rdata->subreq);
 	release_mid(mid);
 	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0,
 			      server->credits, server->in_flight,
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 5e21c6939c88..4af7208e1360 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -181,23 +181,17 @@ struct netfs_io_subrequest {
 	unsigned long long	start;		/* Where to start the I/O */
 	size_t			len;		/* Size of the I/O */
 	size_t			transferred;	/* Amount of data transferred */
-	size_t			consumed;	/* Amount of read data consumed */
-	size_t			prev_donated;	/* Amount of data donated from previous subreq */
-	size_t			next_donated;	/* Amount of data donated from next subreq */
 	refcount_t		ref;
 	short			error;		/* 0 or error that occurred */
 	unsigned short		debug_index;	/* Index in list (for debugging output) */
 	unsigned int		nr_segs;	/* Number of segs in io_iter */
 	enum netfs_io_source	source;		/* Where to read from/write to */
 	unsigned char		stream_nr;	/* I/O stream this belongs to */
-	unsigned char		curr_folioq_slot; /* Folio currently being read */
-	unsigned char		curr_folio_order; /* Order of folio */
-	struct folio_queue	*curr_folioq;	/* Queue segment in which current folio resides */
 	unsigned long		flags;
 #define NETFS_SREQ_COPY_TO_CACHE	0	/* Set if should copy the data to the cache */
 #define NETFS_SREQ_CLEAR_TAIL		1	/* Set if the rest of the read should be cleared */
 #define NETFS_SREQ_SEEK_DATA_READ	3	/* Set if ->read() should SEEK_DATA first */
-#define NETFS_SREQ_NO_PROGRESS		4	/* Set if we didn't manage to read any data */
+#define NETFS_SREQ_MADE_PROGRESS	4	/* Set if we managed to read more data */
 #define NETFS_SREQ_ONDEMAND		5	/* Set if it's from on-demand read mode */
 #define NETFS_SREQ_BOUNDARY		6	/* Set if ends on hard boundary (eg. ceph object) */
 #define NETFS_SREQ_HIT_EOF		7	/* Set if short due to EOF */
@@ -238,13 +232,13 @@ struct netfs_io_request {
 	struct netfs_cache_resources cache_resources;
 	struct readahead_control *ractl;	/* Readahead descriptor */
 	struct list_head	proc_link;	/* Link in netfs_iorequests */
-	struct list_head	subrequests;	/* Contributory I/O operations */
 	struct netfs_io_stream	io_streams[2];	/* Streams of parallel I/O operations */
 #define NR_IO_STREAMS 2 //wreq->nr_io_streams
 	struct netfs_group	*group;		/* Writeback group being written back */
 	struct rolling_buffer	buffer;		/* Unencrypted buffer */
 #define NETFS_ROLLBUF_PUT_MARK		ROLLBUF_MARK_1
 #define NETFS_ROLLBUF_PAGECACHE_MARK	ROLLBUF_MARK_2
+	wait_queue_head_t	waitq;		/* Processor waiter */
 	void			*netfs_priv;	/* Private data for the netfs */
 	void			*netfs_priv2;	/* Private data for the netfs */
 	struct bio_vec		*direct_bv;	/* DIO buffer list (when handling iovec-iter) */
@@ -255,7 +249,6 @@ struct netfs_io_request {
 	atomic_t		subreq_counter;	/* Next subreq->debug_index */
 	unsigned int		nr_group_rel;	/* Number of refs to release on ->group */
 	spinlock_t		lock;		/* Lock for queuing subreqs */
-	atomic_t		nr_outstanding;	/* Number of ops in progress */
 	unsigned long long	submitted;	/* Amount submitted for I/O so far */
 	unsigned long long	len;		/* Length of the request */
 	size_t			transferred;	/* Amount to be indicated as transferred */
@@ -267,15 +260,18 @@ struct netfs_io_request {
 	atomic64_t		issued_to;	/* Write issuer folio cursor */
 	unsigned long long	collected_to;	/* Point we've collected to */
 	unsigned long long	cleaned_to;	/* Position we've cleaned folios to */
+	unsigned long long	abandon_to;	/* Position to abandon folios to */
 	pgoff_t			no_unlock_folio; /* Don't unlock this folio after read */
-	size_t			prev_donated;	/* Fallback for subreq->prev_donated */
+	unsigned char		front_folio_order; /* Order (size) of front folio */
 	refcount_t		ref;
 	unsigned long		flags;
+#define NETFS_RREQ_OFFLOAD_COLLECTION	0	/* Offload collection to workqueue */
 #define NETFS_RREQ_COPY_TO_CACHE	1	/* Need to write to the cache */
 #define NETFS_RREQ_NO_UNLOCK_FOLIO	2	/* Don't unlock no_unlock_folio on completion */
 #define NETFS_RREQ_DONT_UNLOCK_FOLIOS	3	/* Don't unlock the folios on completion */
 #define NETFS_RREQ_FAILED		4	/* The request failed */
 #define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes */
+#define NETFS_RREQ_FOLIO_COPY_TO_CACHE	6	/* Copy current folio to cache from read */
 #define NETFS_RREQ_UPLOAD_TO_SERVER	8	/* Need to write to the server */
 #define NETFS_RREQ_NONBLOCK		9	/* Don't block if possible (O_NONBLOCK) */
 #define NETFS_RREQ_BLOCKED		10	/* We blocked */
@@ -440,7 +436,6 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 /* (Sub)request management API. */
 void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq);
 void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq);
-void netfs_read_subreq_termination_worker(struct work_struct *work);
 void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
 			  enum netfs_sreq_ref_trace what);
 void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index e8075c29ecf5..22eb77b1f5e6 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -50,18 +50,23 @@
 	EM(netfs_rreq_trace_assess,		"ASSESS ")	\
 	EM(netfs_rreq_trace_copy,		"COPY   ")	\
 	EM(netfs_rreq_trace_collect,		"COLLECT")	\
+	EM(netfs_rreq_trace_complete,		"COMPLET")	\
 	EM(netfs_rreq_trace_dirty,		"DIRTY  ")	\
 	EM(netfs_rreq_trace_done,		"DONE   ")	\
 	EM(netfs_rreq_trace_free,		"FREE   ")	\
 	EM(netfs_rreq_trace_redirty,		"REDIRTY")	\
 	EM(netfs_rreq_trace_resubmit,		"RESUBMT")	\
+	EM(netfs_rreq_trace_set_abandon,	"S-ABNDN")	\
 	EM(netfs_rreq_trace_set_pause,		"PAUSE  ")	\
 	EM(netfs_rreq_trace_unlock,		"UNLOCK ")	\
 	EM(netfs_rreq_trace_unlock_pgpriv2,	"UNLCK-2")	\
 	EM(netfs_rreq_trace_unmark,		"UNMARK ")	\
 	EM(netfs_rreq_trace_wait_ip,		"WAIT-IP")	\
 	EM(netfs_rreq_trace_wait_pause,		"WT-PAUS")	\
+	EM(netfs_rreq_trace_wait_queue,		"WAIT-Q ")	\
 	EM(netfs_rreq_trace_wake_ip,		"WAKE-IP")	\
+	EM(netfs_rreq_trace_wake_queue,		"WAKE-Q ")	\
+	EM(netfs_rreq_trace_woke_queue,		"WOKE-Q ")	\
 	EM(netfs_rreq_trace_unpause,		"UNPAUSE")	\
 	E_(netfs_rreq_trace_write_done,		"WR-DONE")

@@ -91,6 +96,8 @@
 	EM(netfs_sreq_trace_hit_eof,		"EOF  ")	\
 	EM(netfs_sreq_trace_io_progress,	"IO   ")	\
 	EM(netfs_sreq_trace_limited,		"LIMIT")	\
+	EM(netfs_sreq_trace_partial_read,	"PARTR")	\
+	EM(netfs_sreq_trace_need_retry,		"NRTRY")	\
 	EM(netfs_sreq_trace_prepare,		"PREP ")	\
 	EM(netfs_sreq_trace_prep_failed,	"PRPFL")	\
 	EM(netfs_sreq_trace_progress,		"PRGRS")	\
@@ -176,6 +183,7 @@
 	EM(netfs_folio_trace_mkwrite,		"mkwrite")	\
 	EM(netfs_folio_trace_mkwrite_plus,	"mkwrite+")	\
 	EM(netfs_folio_trace_not_under_wback,	"!wback")	\
+	EM(netfs_folio_trace_not_locked,	"!locked")	\
 	EM(netfs_folio_trace_put,		"put")		\
 	EM(netfs_folio_trace_read,		"read")		\
 	EM(netfs_folio_trace_read_done,		"read-done")	\
@@ -204,7 +212,6 @@
 	EM(netfs_trace_folioq_clear,		"clear")	\
 	EM(netfs_trace_folioq_delete,		"delete")	\
 	EM(netfs_trace_folioq_make_space,	"make-space")	\
-	EM(netfs_trace_folioq_prep_write,	"prep-wr")	\
 	EM(netfs_trace_folioq_rollbuf_init,	"roll-init")	\
 	E_(netfs_trace_folioq_read_progress,	"r-progress")

@@ -352,7 +359,7 @@ TRACE_EVENT(netfs_sreq,
 		    __entry->len	= sreq->len;
 		    __entry->transferred = sreq->transferred;
 		    __entry->start	= sreq->start;
-		    __entry->slot	= sreq->curr_folioq_slot;
+		    __entry->slot	= sreq->io_iter.folioq_slot;
 			   ),

 	    TP_printk("R=%08x[%x] %s %s f=%02x s=%llx %zx/%zx s=%u e=%d",
@@ -701,71 +708,6 @@ TRACE_EVENT(netfs_collect_stream,
 		      __entry->collected_to, __entry->front)
 	    );

-TRACE_EVENT(netfs_progress,
-	    TP_PROTO(const struct netfs_io_subrequest *subreq,
-		     unsigned long long start, size_t avail, size_t part),
-
-	    TP_ARGS(subreq, start, avail, part),
-
-	    TP_STRUCT__entry(
-		    __field(unsigned int,		rreq)
-		    __field(unsigned int,		subreq)
-		    __field(unsigned int,		consumed)
-		    __field(unsigned int,		transferred)
-		    __field(unsigned long long,		f_start)
-		    __field(unsigned int,		f_avail)
-		    __field(unsigned int,		f_part)
-		    __field(unsigned char,		slot)
-		    ),
-
-	    TP_fast_assign(
-		    __entry->rreq	= subreq->rreq->debug_id;
-		    __entry->subreq	= subreq->debug_index;
-		    __entry->consumed	= subreq->consumed;
-		    __entry->transferred = subreq->transferred;
-		    __entry->f_start	= start;
-		    __entry->f_avail	= avail;
-		    __entry->f_part	= part;
-		    __entry->slot	= subreq->curr_folioq_slot;
-			   ),
-
-	    TP_printk("R=%08x[%02x] s=%llx ct=%x/%x pa=%x/%x sl=%x",
-		      __entry->rreq, __entry->subreq, __entry->f_start,
-		      __entry->consumed, __entry->transferred,
-		      __entry->f_part, __entry->f_avail, __entry->slot)
-	    );
-
-TRACE_EVENT(netfs_donate,
-	    TP_PROTO(const struct netfs_io_request *rreq,
-		     const struct netfs_io_subrequest *from,
-		     const struct netfs_io_subrequest *to,
-		     size_t amount,
-		     enum netfs_donate_trace trace),
-
-	    TP_ARGS(rreq, from, to, amount, trace),
-
-	    TP_STRUCT__entry(
-		    __field(unsigned int,	rreq)
-		    __field(unsigned int,	from)
-		    __field(unsigned int,	to)
-		    __field(unsigned int,	amount)
-		    __field(enum netfs_donate_trace, trace)
-		    ),
-
-	    TP_fast_assign(
-		    __entry->rreq	= rreq->debug_id;
-		    __entry->from	= from->debug_index;
-		    __entry->to		= to ? to->debug_index : -1;
-		    __entry->amount	= amount;
-		    __entry->trace	= trace;
-			   ),
-
-	    TP_printk("R=%08x[%02x] -> [%02x] %s am=%x",
-		      __entry->rreq, __entry->from, __entry->to,
-		      __print_symbolic(__entry->trace, netfs_donate_traces),
-		      __entry->amount)
-	    );
-
 TRACE_EVENT(netfs_folioq,
	     TP_PROTO(const struct folio_queue *fq,
		      enum netfs_folioq_trace trace),

From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 29/33] afs: Make afs_mkdir() locally initialise a new directory's content
Date: Fri, 8 Nov 2024 17:32:30 +0000
Message-ID: <20241108173236.1382366-30-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Initialise a new directory's content locally when it is created by mkdir,
rather than downloading the content from the server, since we can predict
what it's going to look like.
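To illustrate the shape of that local initialisation, here is a minimal,
self-contained userspace sketch (not the kernel code: struct sketch_dirent
and sketch_init_dir() are made-up stand-ins for union afs_xdr_dirent and
the new afs_mkdir_init_dir(), with the real 32-byte slot layout and block
metadata elided).  The point is that both entries are fully determined by
fids already in hand at mkdir time, so no fetch from the server is needed:

	#include <arpa/inet.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	struct sketch_dirent {		/* simplified stand-in, not the on-wire layout */
		uint8_t  valid;
		uint32_t vnode;		/* stored big-endian, as on the wire */
		uint32_t unique;
		char     name[16];
	};

	/* Fill the two entries every fresh directory must carry: "."
	 * naming the directory itself and ".." naming its parent.
	 */
	static void sketch_init_dir(struct sketch_dirent de[2],
				    uint32_t self_vnode, uint32_t self_unique,
				    uint32_t parent_vnode, uint32_t parent_unique)
	{
		memset(de, 0, 2 * sizeof(*de));
		de[0].valid  = 1;
		de[0].vnode  = htonl(self_vnode);
		de[0].unique = htonl(self_unique);
		memcpy(de[0].name, ".", 2);

		de[1].valid  = 1;
		de[1].vnode  = htonl(parent_vnode);
		de[1].unique = htonl(parent_unique);
		memcpy(de[1].name, "..", 3);
	}

	int main(void)
	{
		struct sketch_dirent de[2];

		sketch_init_dir(de, 0x1234, 1, 0x1000, 7);
		printf(".  -> vnode %x\n.. -> vnode %x\n",
		       ntohl(de[0].vnode), ntohl(de[1].vnode));
		return 0;
	}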
Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/dir.c               |  3 +++
 fs/afs/dir_edit.c          | 49 ++++++++++++++++++++++++++++++++++++++
 fs/afs/internal.h          |  1 +
 include/trace/events/afs.h |  2 ++
 4 files changed, 55 insertions(+)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 36b80449ef0e..8c4c1029ea2f 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -1259,6 +1259,7 @@ void afs_check_for_remote_deletion(struct afs_operation *op)
  */
 static void afs_vnode_new_inode(struct afs_operation *op)
 {
+	struct afs_vnode_param *dvp = &op->file[0];
 	struct afs_vnode_param *vp = &op->file[1];
 	struct afs_vnode *vnode;
 	struct inode *inode;
@@ -1278,6 +1279,8 @@ static void afs_vnode_new_inode(struct afs_operation *op)

 	vnode = AFS_FS_I(inode);
 	set_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
+	if (S_ISDIR(inode->i_mode))
+		afs_mkdir_init_dir(vnode, dvp->vnode);
 	if (!afs_op_error(op))
 		afs_cache_permit(vnode, op->key, vnode->cb_break, &vp->scb);
 	d_instantiate(op->dentry, inode);
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index 71cce884e434..53178bb2d1a6 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -556,3 +556,52 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_d
 			   0, 0, 0, 0, "..");
 	goto out;
 }
+
+/*
+ * Initialise a new directory.  We need to fill in the "." and ".." entries.
+ */
+void afs_mkdir_init_dir(struct afs_vnode *dvnode, struct afs_vnode *parent_dvnode)
+{
+	union afs_xdr_dir_block *meta;
+	struct afs_dir_iter iter = { .dvnode = dvnode };
+	union afs_xdr_dirent *de;
+	unsigned int slot = AFS_DIR_RESV_BLOCKS0;
+	loff_t i_size;
+
+	i_size = i_size_read(&dvnode->netfs.inode);
+	if (i_size != AFS_DIR_BLOCK_SIZE) {
+		afs_invalidate_dir(dvnode, afs_dir_invalid_edit_add_bad_size);
+		return;
+	}
+
+	meta = afs_dir_get_block(&iter, 0);
+	if (!meta)
+		return;
+
+	afs_edit_init_block(meta, meta, 0);
+
+	de = &meta->dirents[slot];
+	de->u.valid	= 1;
+	de->u.vnode	= htonl(dvnode->fid.vnode);
+	de->u.unique	= htonl(dvnode->fid.unique);
+	memcpy(de->u.name, ".", 2);
+	trace_afs_edit_dir(dvnode, afs_edit_dir_for_mkdir, afs_edit_dir_mkdir, 0, slot,
+			   dvnode->fid.vnode, dvnode->fid.unique, ".");
+	slot++;
+
+	de = &meta->dirents[slot];
+	de->u.valid	= 1;
+	de->u.vnode	= htonl(parent_dvnode->fid.vnode);
+	de->u.unique	= htonl(parent_dvnode->fid.unique);
+	memcpy(de->u.name, "..", 3);
+	trace_afs_edit_dir(dvnode, afs_edit_dir_for_mkdir, afs_edit_dir_mkdir, 0, slot,
+			   parent_dvnode->fid.vnode, parent_dvnode->fid.unique, "..");
+
+	afs_set_contig_bits(meta, AFS_DIR_RESV_BLOCKS0, 2);
+	meta->meta.alloc_ctrs[0] -= 2;
+	kunmap_local(meta);
+
+	netfs_single_mark_inode_dirty(&dvnode->netfs.inode);
+	set_bit(AFS_VNODE_DIR_VALID, &dvnode->flags);
+	set_bit(AFS_VNODE_DIR_READ, &dvnode->flags);
+}
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index cd2c4f85117d..acae1b5bfc63 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -1078,6 +1078,7 @@ extern void afs_edit_dir_add(struct afs_vnode *, struct qstr *, struct afs_fid *
 extern void afs_edit_dir_remove(struct afs_vnode *, struct qstr *, enum afs_edit_dir_reason);
 void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_dvnode,
 				enum afs_edit_dir_reason why);
+void afs_mkdir_init_dir(struct afs_vnode *dvnode, struct afs_vnode *parent_vnode);

 /*
  * dir_silly.c
  */
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index cdb5f2af7799..c52fd83ca9b7 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -350,6 +350,7 @@ enum yfs_cm_operation {
 	EM(afs_dir_invalid_edit_add_no_slots,	"edit-add-no-slots") \
 	EM(afs_dir_invalid_edit_add_too_many_blocks, "edit-add-too-many-blocks") \
 	EM(afs_dir_invalid_edit_get_block,	"edit-get-block") \
+	EM(afs_dir_invalid_edit_mkdir,		"edit-mkdir") \
 	EM(afs_dir_invalid_edit_rem_bad_size,	"edit-rem-bad-size") \
 	EM(afs_dir_invalid_edit_rem_wrong_name,	"edit-rem-wrong_name") \
 	EM(afs_dir_invalid_edit_upd_bad_size,	"edit-upd-bad-size") \
@@ -371,6 +372,7 @@ enum yfs_cm_operation {
 	EM(afs_edit_dir_delete_error,		"d_err ") \
 	EM(afs_edit_dir_delete_inval,		"d_invl") \
 	EM(afs_edit_dir_delete_noent,		"d_nent") \
+	EM(afs_edit_dir_mkdir,			"mk_ent") \
 	EM(afs_edit_dir_update_dd,		"u_ddot") \
 	EM(afs_edit_dir_update_error,		"u_fail") \
 	EM(afs_edit_dir_update_inval,		"u_invl") \

From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 30/33] afs: Use the contained hashtable to search a directory
Date: Fri, 8 Nov 2024 17:32:31 +0000
Message-ID: <20241108173236.1382366-31-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Each directory image contains a hashtable with 128 buckets to speed up
searching.  Currently, kafs does not use this, but rather iterates over all
the occupied slots in the image, since that code can be shared with
readdir.

Switch kafs to use the hashtable for lookups to reduce latency.  Care must
be taken to guard against cyclic hash chains.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/Makefile     |   1 +
 fs/afs/dir.c        |  42 ++++----
 fs/afs/dir_edit.c   | 135 +++++++++++++--------
 fs/afs/dir_search.c | 227 ++++++++++++++++++++++++++++++++++++++++++++
 fs/afs/internal.h   |  18 ++++
 5 files changed, 350 insertions(+), 73 deletions(-)
 create mode 100644 fs/afs/dir_search.c

diff --git a/fs/afs/Makefile b/fs/afs/Makefile
index dcdc0f1bb76f..5efd7e13b304 100644
--- a/fs/afs/Makefile
+++ b/fs/afs/Makefile
@@ -11,6 +11,7 @@ kafs-y := \
 	cmservice.o \
 	dir.o \
 	dir_edit.o \
+	dir_search.o \
 	dir_silly.o \
 	dynroot.o \
 	file.o \
diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 8c4c1029ea2f..d195a42cea1d 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -88,8 +88,6 @@ struct afs_lookup_one_cookie {
 struct afs_lookup_cookie {
 	struct dir_context	ctx;
 	struct qstr		name;
-	bool			found;
-	bool			one_only;
 	unsigned short		nr_fids;
 	struct afs_fid		fids[50];
 };
@@ -309,7 +307,7 @@ ssize_t afs_read_single(struct afs_vnode *dvnode, struct file *file)
  * Read the directory into a folio_queue buffer in one go, scrubbing the
  * previous contents.  We return -ESTALE if the caller needs to call us again.
  */
-static ssize_t afs_read_dir(struct afs_vnode *dvnode, struct file *file)
+ssize_t afs_read_dir(struct afs_vnode *dvnode, struct file *file)
 	__acquires(&dvnode->validate_lock)
 {
 	ssize_t ret;
@@ -644,19 +642,10 @@ static bool afs_lookup_filldir(struct dir_context *ctx, const char *name,
 	BUILD_BUG_ON(sizeof(union afs_xdr_dir_block) != 2048);
 	BUILD_BUG_ON(sizeof(union afs_xdr_dirent) != 32);

-	if (cookie->found) {
-		if (cookie->nr_fids < 50) {
-			cookie->fids[cookie->nr_fids].vnode	= ino;
-			cookie->fids[cookie->nr_fids].unique	= dtype;
-			cookie->nr_fids++;
-		}
-	} else if (cookie->name.len == nlen &&
-		   memcmp(cookie->name.name, name, nlen) == 0) {
-		cookie->fids[1].vnode	= ino;
-		cookie->fids[1].unique	= dtype;
-		cookie->found = 1;
-		if (cookie->one_only)
-			return false;
+	if (cookie->nr_fids < 50) {
+		cookie->fids[cookie->nr_fids].vnode	= ino;
+		cookie->fids[cookie->nr_fids].unique	= dtype;
+		cookie->nr_fids++;
 	}

 	return cookie->nr_fids < 50;
@@ -784,6 +773,7 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry)
 	struct afs_vnode *dvnode = AFS_FS_I(dir), *vnode;
 	struct inode *inode = NULL, *ti;
 	afs_dataversion_t data_version = READ_ONCE(dvnode->status.data_version);
+	bool supports_ibulk;
 	long ret;
 	int i;

@@ -800,19 +790,19 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry)
 	cookie->nr_fids = 2; /* slot 1 is saved for the fid we actually want
			      * and slot 0 for the directory */

-	if (!afs_server_supports_ibulk(dvnode))
-		cookie->one_only = true;
-
-	/* search the directory */
-	ret = afs_dir_iterate(dir, &cookie->ctx, NULL, &data_version);
+	/* Search the directory for the named entry using the hash table... */
+	ret = afs_dir_search(dvnode, &dentry->d_name, &cookie->fids[1], &data_version);
 	if (ret < 0)
 		goto out;

-	dentry->d_fsdata = (void *)(unsigned long)data_version;
+	supports_ibulk = afs_server_supports_ibulk(dvnode);
+	if (supports_ibulk) {
+		/* ...then scan linearly from that point for entries to lookup-ahead. */
+		cookie->ctx.pos = (ret + 1) * AFS_DIR_DIRENT_SIZE;
+		afs_dir_iterate(dir, &cookie->ctx, NULL, &data_version);
+	}

-	ret = -ENOENT;
-	if (!cookie->found)
-		goto out;
+	dentry->d_fsdata = (void *)(unsigned long)data_version;

 	/* Check to see if we already have an inode for the primary fid. */
 	inode = ilookup5(dir->i_sb, cookie->fids[1].vnode,
@@ -871,7 +861,7 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry)
 	 * the whole operation.
 	 */
 	afs_op_set_error(op, -ENOTSUPP);
-	if (!cookie->one_only) {
+	if (supports_ibulk) {
 		op->ops = &afs_inline_bulk_status_operation;
 		afs_begin_vnode_operation(op);
 		afs_wait_for_operation(op);
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index 53178bb2d1a6..60a549f1d9c5 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -245,7 +245,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	union afs_xdr_dir_block *meta, *block;
 	union afs_xdr_dirent *de;
 	struct afs_dir_iter iter = { .dvnode = vnode };
-	unsigned int need_slots, nr_blocks, b;
+	unsigned int nr_blocks, b, entry;
 	loff_t i_size;
 	int slot;

@@ -263,7 +263,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 		return;

 	/* Work out how many slots we're going to need. */
-	need_slots = afs_dir_calc_slots(name->len);
+	iter.nr_slots = afs_dir_calc_slots(name->len);

 	if (i_size == 0)
 		goto new_directory;
@@ -281,7 +281,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,

 		/* Lower dir blocks have a counter in the header we can check. */
 		if (b < AFS_DIR_BLOCKS_WITH_CTR &&
-		    meta->meta.alloc_ctrs[b] < need_slots)
+		    meta->meta.alloc_ctrs[b] < iter.nr_slots)
 			continue;

 		block = afs_dir_get_block(&iter, b);
@@ -308,7 +308,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 		/* We need to try and find one or more consecutive slots to
 		 * hold the entry.
 		 */
-		slot = afs_find_contig_bits(block, need_slots);
+		slot = afs_find_contig_bits(block, iter.nr_slots);
 		if (slot >= 0) {
 			_debug("slot %u", slot);
 			goto found_space;
@@ -347,12 +347,18 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	de->u.name[name->len] = 0;

 	/* Adjust the bitmap. */
-	afs_set_contig_bits(block, slot, need_slots);
-	kunmap_local(block);
+	afs_set_contig_bits(block, slot, iter.nr_slots);

 	/* Adjust the allocation counter. */
 	if (b < AFS_DIR_BLOCKS_WITH_CTR)
-		meta->meta.alloc_ctrs[b] -= need_slots;
+		meta->meta.alloc_ctrs[b] -= iter.nr_slots;
+
+	/* Adjust the hash chain. */
+	entry = b * AFS_DIR_SLOTS_PER_BLOCK + slot;
+	iter.bucket = afs_dir_hash_name(name);
+	de->u.hash_next = meta->meta.hashtable[iter.bucket];
+	meta->meta.hashtable[iter.bucket] = htons(entry);
+	kunmap_local(block);

 	inode_inc_iversion_raw(&vnode->netfs.inode);
 	afs_stat_v(vnode, n_dir_cr);
@@ -387,12 +393,14 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 void afs_edit_dir_remove(struct afs_vnode *vnode,
 			 struct qstr *name, enum afs_edit_dir_reason why)
 {
-	union afs_xdr_dir_block *meta, *block;
-	union afs_xdr_dirent *de;
+	union afs_xdr_dir_block *meta, *block, *pblock;
+	union afs_xdr_dirent *de, *pde;
 	struct afs_dir_iter iter = { .dvnode = vnode };
-	unsigned int need_slots, nr_blocks, b;
+	struct afs_fid fid;
+	unsigned int b, slot, entry;
 	loff_t i_size;
-	int slot;
+	__be16 next;
+	int found;

 	_enter(",,{%d,%s},", name->len, name->name);

@@ -403,59 +411,90 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 		afs_invalidate_dir(vnode, afs_dir_invalid_edit_rem_bad_size);
 		return;
 	}
-	nr_blocks = i_size / AFS_DIR_BLOCK_SIZE;

-	meta = afs_dir_get_block(&iter, 0);
-	if (!meta)
+	if (!afs_dir_init_iter(&iter, name))
 		return;

-	/* Work out how many slots we're going to discard. */
-	need_slots = afs_dir_calc_slots(name->len);
-
-	/* Find a block that has sufficient slots available.  Each folio
-	 * contains two or more directory blocks.
-	 */
-	for (b = 0; b < nr_blocks; b++) {
-		block = afs_dir_get_block(&iter, b);
-		if (!block)
-			goto error;
-
-		/* Abandon the edit if we got a callback break. */
-		if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
-			goto already_invalidated;
-
-		if (b > AFS_DIR_BLOCKS_WITH_CTR ||
-		    meta->meta.alloc_ctrs[b] <= AFS_DIR_SLOTS_PER_BLOCK - 1 - need_slots) {
-			slot = afs_dir_scan_block(block, name, b);
-			if (slot >= 0)
-				goto found_dirent;
-		}
+	meta = afs_dir_find_block(&iter, 0);
+	if (!meta)
+		return;

-		kunmap_local(block);
+	/* Find the entry in the blob. */
+	found = afs_dir_search_bucket(&iter, name, &fid);
+	if (found < 0) {
+		/* Didn't find the dirent to clobber.  Re-download. */
+		trace_afs_edit_dir(vnode, why, afs_edit_dir_delete_noent,
+				   0, 0, 0, 0, name->name);
+		afs_invalidate_dir(vnode, afs_dir_invalid_edit_rem_wrong_name);
+		goto out_unmap;
 	}

-	/* Didn't find the dirent to clobber.  Download the directory again. */
-	trace_afs_edit_dir(vnode, why, afs_edit_dir_delete_noent,
-			   0, 0, 0, 0, name->name);
-	afs_invalidate_dir(vnode, afs_dir_invalid_edit_rem_wrong_name);
-	goto out_unmap;
+	entry = found;
+	b     = entry / AFS_DIR_SLOTS_PER_BLOCK;
+	slot  = entry % AFS_DIR_SLOTS_PER_BLOCK;

-found_dirent:
+	block = afs_dir_find_block(&iter, b);
+	if (!block)
+		goto error;
+	if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
+		goto already_invalidated;
+
+	/* Check and clear the entry. */
 	de = &block->dirents[slot];
+	if (de->u.valid != 1)
+		goto error_unmap;

 	trace_afs_edit_dir(vnode, why, afs_edit_dir_delete, b, slot,
 			   ntohl(de->u.vnode), ntohl(de->u.unique),
 			   name->name);

-	memset(de, 0, sizeof(*de) * need_slots);
-
 	/* Adjust the bitmap. */
-	afs_clear_contig_bits(block, slot, need_slots);
-	kunmap_local(block);
+	afs_clear_contig_bits(block, slot, iter.nr_slots);

 	/* Adjust the allocation counter. */
 	if (b < AFS_DIR_BLOCKS_WITH_CTR)
-		meta->meta.alloc_ctrs[b] += need_slots;
+		meta->meta.alloc_ctrs[b] += iter.nr_slots;
+
+	/* Clear the constituent entries. */
+	next = de->u.hash_next;
+	memset(de, 0, sizeof(*de) * iter.nr_slots);
+	kunmap_local(block);
+
+	/* Adjust the hash chain: if iter->prev_entry is 0, the hashtable head
+	 * index is previous; otherwise it's the slot number of the previous entry.
+	 */
+	if (!iter.prev_entry) {
+		__be16 prev_next = meta->meta.hashtable[iter.bucket];
+
+		if (unlikely(prev_next != htons(entry))) {
+			pr_warn("%llx:%llx:%x: not head of chain b=%x p=%x,%x e=%x %*s",
+				vnode->fid.vid, vnode->fid.vnode, vnode->fid.unique,
+				iter.bucket, iter.prev_entry, prev_next, entry,
+				name->len, name->name);
+			goto error;
+		}
+		meta->meta.hashtable[iter.bucket] = next;
+	} else {
+		unsigned int pb = iter.prev_entry / AFS_DIR_SLOTS_PER_BLOCK;
+		unsigned int ps = iter.prev_entry % AFS_DIR_SLOTS_PER_BLOCK;
+		__be16 prev_next;
+
+		pblock = afs_dir_find_block(&iter, pb);
+		if (!pblock)
+			goto error;
+		pde = &pblock->dirents[ps];
+		prev_next = pde->u.hash_next;
+		if (prev_next != htons(entry)) {
+			kunmap_local(pblock);
+			pr_warn("%llx:%llx:%x: not prev in chain b=%x p=%x,%x e=%x %*s",
+				vnode->fid.vid, vnode->fid.vnode, vnode->fid.unique,
+				iter.bucket, iter.prev_entry, prev_next, entry,
+				name->len, name->name);
+			goto error;
+		}
+		pde->u.hash_next = next;
+		kunmap_local(pblock);
+	}

 	netfs_single_mark_inode_dirty(&vnode->netfs.inode);

@@ -474,6 +513,8 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 			   0, 0, 0, 0, name->name);
 	goto out_unmap;

+error_unmap:
+	kunmap_local(block);
 error:
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_delete_error,
 			   0, 0, 0, 0, name->name);
diff --git a/fs/afs/dir_search.c b/fs/afs/dir_search.c
new file mode 100644
index 000000000000..b25bd892db4d
--- /dev/null
+++ b/fs/afs/dir_search.c
@@ -0,0 +1,227 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Search a directory's hash table.
+ *
+ * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * https://tools.ietf.org/html/draft-keiser-afs3-directory-object-00
+ */
+
+#include
+#include
+#include
+#include
+#include "internal.h"
+#include "afs_fs.h"
+#include "xdr_fs.h"
+
+/*
+ * Calculate the name hash.
+ */
+unsigned int afs_dir_hash_name(const struct qstr *name)
+{
+	const unsigned char *p = name->name;
+	unsigned int hash = 0, i;
+	int bucket;
+
+	for (i = 0; i < name->len; i++)
+		hash = (hash * 173) + p[i];
+	bucket = hash & (AFS_DIR_HASHTBL_SIZE - 1);
+	if (hash > INT_MAX) {
+		bucket = AFS_DIR_HASHTBL_SIZE - bucket;
+		bucket &= (AFS_DIR_HASHTBL_SIZE - 1);
+	}
+	return bucket;
+}
+
+/*
+ * Reset a directory iterator.
+ */
+static bool afs_dir_reset_iter(struct afs_dir_iter *iter)
+{
+	unsigned long long i_size = i_size_read(&iter->dvnode->netfs.inode);
+	unsigned int nblocks;
+
+	/* Work out the maximum number of steps we can take. */
+	nblocks = umin(i_size / AFS_DIR_BLOCK_SIZE, AFS_DIR_MAX_BLOCKS);
+	if (!nblocks)
+		return false;
+	iter->loop_check = nblocks * (AFS_DIR_SLOTS_PER_BLOCK - AFS_DIR_RESV_BLOCKS);
+	iter->prev_entry = 0; /* Hash head is previous */
+	return true;
+}
+
+/*
+ * Initialise a directory iterator for looking up a name.
+ */
+bool afs_dir_init_iter(struct afs_dir_iter *iter, const struct qstr *name)
+{
+	iter->nr_slots = afs_dir_calc_slots(name->len);
+	iter->bucket = afs_dir_hash_name(name);
+	return afs_dir_reset_iter(iter);
+}
+
+/*
+ * Get a specific block.
+ */
+union afs_xdr_dir_block *afs_dir_find_block(struct afs_dir_iter *iter, size_t block)
+{
+	struct folio_queue *fq = iter->fq;
+	struct afs_vnode *dvnode = iter->dvnode;
+	struct folio *folio;
+	size_t blpos = block * AFS_DIR_BLOCK_SIZE;
+	size_t blend = (block + 1) * AFS_DIR_BLOCK_SIZE, fpos = iter->fpos;
+	int slot = iter->fq_slot;
+
+	_enter("%zx,%d", block, slot);
+
+	if (iter->block) {
+		kunmap_local(iter->block);
+		iter->block = NULL;
+	}
+
+	if (dvnode->directory_size < blend)
+		goto fail;
+
+	if (!fq || blpos < fpos) {
+		fq = dvnode->directory;
+		slot = 0;
+		fpos = 0;
+	}
+
+	/* Search the folio queue for the folio containing the block... */
+	for (; fq; fq = fq->next) {
+		for (; slot < folioq_count(fq); slot++) {
+			size_t fsize = folioq_folio_size(fq, slot);
+
+			if (blend <= fpos + fsize) {
+				/* ... and then return the mapped block. */
+				folio = folioq_folio(fq, slot);
+				if (WARN_ON_ONCE(folio_pos(folio) != fpos))
+					goto fail;
+				iter->fq = fq;
+				iter->fq_slot = slot;
+				iter->fpos = fpos;
+				iter->block = kmap_local_folio(folio, blpos - fpos);
+				return iter->block;
+			}
+			fpos += fsize;
+		}
+		slot = 0;
+	}
+
+fail:
+	iter->fq = NULL;
+	iter->fq_slot = 0;
+	afs_invalidate_dir(dvnode, afs_dir_invalid_edit_get_block);
+	return NULL;
+}
+
+/*
+ * Search through a directory bucket.
+ */
+int afs_dir_search_bucket(struct afs_dir_iter *iter, const struct qstr *name,
+			  struct afs_fid *_fid)
+{
+	const union afs_xdr_dir_block *meta;
+	unsigned int entry;
+	int ret = -ESTALE;
+
+	meta = afs_dir_find_block(iter, 0);
+	if (!meta)
+		return -ESTALE;
+
+	entry = ntohs(meta->meta.hashtable[iter->bucket & (AFS_DIR_HASHTBL_SIZE - 1)]);
+	_enter("%x,%x", iter->bucket, entry);
+
+	while (entry) {
+		const union afs_xdr_dir_block *block;
+		const union afs_xdr_dirent *dire;
+		unsigned int blnum = entry / AFS_DIR_SLOTS_PER_BLOCK;
+		unsigned int slot  = entry % AFS_DIR_SLOTS_PER_BLOCK;
+		unsigned int resv  = (blnum == 0 ? AFS_DIR_RESV_BLOCKS0 : AFS_DIR_RESV_BLOCKS);
+
+		_debug("search %x", entry);
+
+		if (slot < resv) {
+			kdebug("slot out of range h=%x rs=%2x sl=%2x-%2x",
+			       iter->bucket, resv, slot, slot + iter->nr_slots - 1);
+			goto bad;
+		}
+
+		block = afs_dir_find_block(iter, blnum);
+		if (!block)
+			goto bad;
+		dire = &block->dirents[slot];
+
+		if (slot + iter->nr_slots <= AFS_DIR_SLOTS_PER_BLOCK &&
+		    memcmp(dire->u.name, name->name, name->len) == 0 &&
+		    dire->u.name[name->len] == '\0') {
+			_fid->vnode  = ntohl(dire->u.vnode);
+			_fid->unique = ntohl(dire->u.unique);
+			ret = entry;
+			goto found;
+		}
+
+		iter->prev_entry = entry;
+		entry = ntohs(dire->u.hash_next);
+		if (!--iter->loop_check) {
+			kdebug("dir chain loop h=%x", iter->bucket);
+			goto bad;
+		}
+	}
+
+	ret = -ENOENT;
found:
+	if (iter->block) {
+		kunmap_local(iter->block);
+		iter->block = NULL;
+	}
+
bad:
+	if (ret == -ESTALE)
+		afs_invalidate_dir(iter->dvnode, afs_dir_invalid_iter_stale);
+	_leave(" = %d", ret);
+	return ret;
+}
+
+/*
+ * Search the appropriate hash chain in the contents of an AFS directory.
+ */
+int afs_dir_search(struct afs_vnode *dvnode, struct qstr *name,
+		   struct afs_fid *_fid, afs_dataversion_t *_dir_version)
+{
+	struct afs_dir_iter iter = { .dvnode = dvnode, };
+	int ret, retry_limit = 3;
+
+	_enter("{%lu},,,", dvnode->netfs.inode.i_ino);
+
+	if (!afs_dir_init_iter(&iter, name))
+		return -ENOENT;
+	do {
+		if (--retry_limit < 0) {
+			pr_warn("afs_read_dir(): Too many retries\n");
+			ret = -ESTALE;
+			break;
+		}
+		ret = afs_read_dir(dvnode, NULL);
+		if (ret < 0) {
+			if (ret != -ESTALE)
+				break;
+			if (test_bit(AFS_VNODE_DELETED, &dvnode->flags)) {
+				ret = -ESTALE;
+				break;
+			}
+			continue;
+		}
+		*_dir_version = inode_peek_iversion_raw(&dvnode->netfs.inode);
+
+		ret = afs_dir_search_bucket(&iter, name, _fid);
+		up_read(&dvnode->validate_lock);
+		if (ret == -ESTALE)
+			afs_dir_reset_iter(&iter);
+	} while (ret == -ESTALE);
+
+	_leave(" = %d", ret);
+	return ret;
+}
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index acae1b5bfc63..b7d02c105340 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -978,9 +978,14 @@ static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int fl
  */
 struct afs_dir_iter {
 	struct afs_vnode	*dvnode;
+	union afs_xdr_dir_block	*block;
 	struct folio_queue	*fq;
 	unsigned int		fpos;
 	int			fq_slot;
+	unsigned int		loop_check;
+	u8			nr_slots;
+	u8			bucket;
+	unsigned int		prev_entry;
 };

 #include
@@ -1065,6 +1070,8 @@ extern const struct address_space_operations afs_dir_aops;
 extern const struct dentry_operations afs_fs_dentry_operations;

 ssize_t afs_read_single(struct afs_vnode *dvnode, struct file *file);
+ssize_t afs_read_dir(struct afs_vnode *dvnode, struct file *file)
+	__acquires(&dvnode->validate_lock);
 extern void afs_d_release(struct dentry *);
 extern void afs_check_for_remote_deletion(struct afs_operation *);
 int afs_single_writepages(struct address_space *mapping,
@@ -1080,6 +1087,17 @@ void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_d
 				enum afs_edit_dir_reason why);
 void afs_mkdir_init_dir(struct afs_vnode *dvnode, struct afs_vnode *parent_vnode);

+/*
+ * dir_search.c
+ */
+unsigned int afs_dir_hash_name(const struct qstr *name);
+bool afs_dir_init_iter(struct afs_dir_iter *iter, const struct qstr *name);
+union afs_xdr_dir_block *afs_dir_find_block(struct afs_dir_iter *iter, size_t block);
+int afs_dir_search_bucket(struct afs_dir_iter *iter, const struct qstr *name,
+			  struct afs_fid *_fid);
+int afs_dir_search(struct afs_vnode *dvnode, struct qstr *name,
+		   struct afs_fid *_fid, afs_dataversion_t *_dir_version);
+
 /*
  * dir_silly.c
  */

From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells,
    Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Eric Van Hensbergen, Ilya Dryomov,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 31/33] afs: Locally initialise the contents of a new symlink on creation
Date: Fri, 8 Nov 2024 17:32:32 +0000
Message-ID: <20241108173236.1382366-32-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Since we know what the contents of a symlink will be when we create it on
the server, initialise its contents locally too, to avoid the need to
download it.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/dir.c             |  2 ++
 fs/afs/inode.c           | 46 ++++++++++++++++++++++++++++++++--------
 fs/afs/internal.h        |  1 +
 fs/netfs/buffered_read.c |  2 +-
 fs/netfs/read_single.c   |  2 +-
 5 files changed, 45 insertions(+), 8 deletions(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index d195a42cea1d..b6a202fd9926 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -1271,6 +1271,8 @@ static void afs_vnode_new_inode(struct afs_operation *op)
 	set_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
 	if (S_ISDIR(inode->i_mode))
 		afs_mkdir_init_dir(vnode, dvp->vnode);
+	else if (S_ISLNK(inode->i_mode))
+		afs_init_new_symlink(vnode, op);
 	if (!afs_op_error(op))
 		afs_cache_permit(vnode, op->key, vnode->cb_break, &vp->scb);
 	d_instantiate(op->dentry, inode);
diff --git a/fs/afs/inode.c b/fs/afs/inode.c
index 0e3c43c40632..e9538e91f848 100644
--- a/fs/afs/inode.c
+++ b/fs/afs/inode.c
@@ -25,6 +25,24 @@
 #include "internal.h"
 #include "afs_fs.h"

+void afs_init_new_symlink(struct afs_vnode *vnode, struct afs_operation *op)
+{
+	size_t size = strlen(op->create.symlink) + 1;
+	size_t dsize = 0;
+	char *p;
+
+	if (netfs_alloc_folioq_buffer(NULL, &vnode->directory, &dsize, size,
+				      mapping_gfp_mask(vnode->netfs.inode.i_mapping)) < 0)
+		return;
+
+	vnode->directory_size = dsize;
+	p = kmap_local_folio(folioq_folio(vnode->directory, 0), 0);
+	memcpy(p, op->create.symlink, size);
+	kunmap_local(p);
+	set_bit(AFS_VNODE_DIR_READ, &vnode->flags);
+	netfs_single_mark_inode_dirty(&vnode->netfs.inode);
+}
+
 static void afs_put_link(void *arg)
 {
 	struct folio *folio = virt_to_folio(arg);
@@ -41,15 +59,31 @@ const char *afs_get_link(struct dentry *dentry, struct inode *inode,
 	char *content;
 	ssize_t ret;

-	if (atomic64_read(&vnode->cb_expires_at) == AFS_NO_CB_PROMISE ||
-	    !test_bit(AFS_VNODE_DIR_READ, &vnode->flags)) {
-		if (!dentry)
+	if (!dentry) {
+		/* RCU pathwalk.
		 */
+		if (!test_bit(AFS_VNODE_DIR_READ, &vnode->flags) || !afs_check_validity(vnode))
 			return ERR_PTR(-ECHILD);
-		ret = afs_read_single(vnode, NULL);
-		if (ret < 0)
-			return ERR_PTR(ret);
+		goto good;
 	}

+	if (test_bit(AFS_VNODE_DIR_READ, &vnode->flags))
+		goto fetch;
+
+	ret = afs_validate(vnode, NULL);
+	if (ret < 0)
+		return ERR_PTR(ret);
+
+	if (!test_and_clear_bit(AFS_VNODE_ZAP_DATA, &vnode->flags) &&
+	    test_bit(AFS_VNODE_DIR_READ, &vnode->flags))
+		goto good;
+
+fetch:
+	ret = afs_read_single(vnode, NULL);
+	if (ret < 0)
+		return ERR_PTR(ret);
+	set_bit(AFS_VNODE_DIR_READ, &vnode->flags);
+
+good:
 	folio = folioq_folio(vnode->directory, 0);
 	folio_get(folio);
 	content = kmap_local_folio(folio, 0);
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index b7d02c105340..90f407774a9a 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -1221,6 +1221,7 @@ extern void afs_fs_probe_cleanup(struct afs_net *);
  */
 extern const struct afs_operation_ops afs_fetch_status_operation;

+void afs_init_new_symlink(struct afs_vnode *vnode, struct afs_operation *op);
 const char *afs_get_link(struct dentry *dentry, struct inode *inode,
			 struct delayed_call *callback);
 int afs_readlink(struct dentry *dentry, char __user *buffer, int buflen);
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 7036e9f12b07..65d9dd71f65d 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -210,7 +210,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)

 	do {
 		struct netfs_io_subrequest *subreq;
-		enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER;
+		enum netfs_io_source source = NETFS_SOURCE_UNKNOWN;
 		ssize_t slice;

 		subreq = netfs_alloc_subrequest(rreq);
diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
index 14bc61107182..fea0ecdecc53 100644
--- a/fs/netfs/read_single.c
+++ b/fs/netfs/read_single.c
@@ -97,7 +97,7 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
 	if (!subreq)
 		return -ENOMEM;

-	subreq->source	= NETFS_DOWNLOAD_FROM_SERVER;
+	subreq->source	= NETFS_SOURCE_UNKNOWN;
 	subreq->start	= 0;
 	subreq->len	= rreq->len;
 	subreq->io_iter	= rreq->buffer.iter;

From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 32/33] afs: Add a tracepoint for afs_read_receive()
Date: Fri, 8 Nov 2024 17:32:33 +0000
Message-ID: <20241108173236.1382366-33-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Add a tracepoint for afs_read_receive() to allow potential missed wakeups
to be debugged.
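For background, the general shape of the bug class being hunted here is
sketched below.  This is not taken from the patch and uses userspace
pthreads purely as an illustration: a waiter that tests its condition
without proper ordering against the waker can miss a notification raised
in the window between its check and its sleep, which is exactly the kind
of event a tracepoint on the receive path makes visible in the trace log:

	#include <pthread.h>
	#include <stdbool.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
	static bool need_attention;

	/* BUGGY waiter: tests the flag outside the lock.  If the producer
	 * sets need_attention and signals between this check and the
	 * pthread_cond_wait() below, the wakeup is missed and the waiter
	 * sleeps with work pending.
	 */
	static void buggy_wait(void)
	{
		if (!need_attention) {			/* racy check */
			pthread_mutex_lock(&lock);
			pthread_cond_wait(&cond, &lock);	/* may never return */
			pthread_mutex_unlock(&lock);
		}
	}

	/* Correct waiter: the predicate is re-tested under the lock, in a
	 * loop, so a wakeup raised at any point cannot be lost.
	 */
	static void safe_wait(void)
	{
		pthread_mutex_lock(&lock);
		while (!need_attention)
			pthread_cond_wait(&cond, &lock);
		need_attention = false;
		pthread_mutex_unlock(&lock);
	}

	/* Producer side, for completeness. */
	static void notify(void)
	{
		pthread_mutex_lock(&lock);
		need_attention = true;
		pthread_cond_signal(&cond);
		pthread_mutex_unlock(&lock);
	}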
Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 fs/afs/file.c              |  1 +
 include/trace/events/afs.h | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index c296efebb491..fc15497608c6 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -274,6 +274,7 @@ static void afs_read_receive(struct afs_call *call)
 	state = READ_ONCE(call->state);
 	if (state == AFS_CALL_COMPLETE)
 		return;
+	trace_afs_read_recv(op, call);
 
 	while (state < AFS_CALL_COMPLETE && READ_ONCE(call->need_attention)) {
 		WRITE_ONCE(call->need_attention, false);
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index c52fd83ca9b7..2e92487f3f34 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -1775,6 +1775,36 @@ TRACE_EVENT(afs_make_call,
 		      __entry->fid.unique)
 	    );
 
+TRACE_EVENT(afs_read_recv,
+	    TP_PROTO(const struct afs_operation *op, const struct afs_call *call),
+
+	    TP_ARGS(op, call),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int, rreq)
+		    __field(unsigned int, sreq)
+		    __field(unsigned int, op)
+		    __field(unsigned int, op_flags)
+		    __field(unsigned int, call)
+		    __field(enum afs_call_state, call_state)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->op = op->debug_id;
+		    __entry->sreq = op->fetch.subreq->debug_index;
+		    __entry->rreq = op->fetch.subreq->rreq->debug_id;
+		    __entry->op_flags = op->flags;
+		    __entry->call = call->debug_id;
+		    __entry->call_state = call->state;
+			   ),
+
+	    TP_printk("R=%08x[%x] OP=%08x c=%08x cs=%x of=%x",
+		      __entry->rreq, __entry->sreq,
+		      __entry->op,
+		      __entry->call, __entry->call_state,
+		      __entry->op_flags)
+	    );
+
 #endif /* _TRACE_AFS_H */
 
 /* This part must be outside protection */
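For reference when reading the output: per the TP_printk() format above,
each afs_read_recv record shows the netfs read request debug ID and
subrequest index (R=), the afs operation debug ID (OP=), the afs call
debug ID (c=), the call state at the time of the trace (cs=) and the
operation flags (of=), all in hex.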
From nobody Sat Nov 23 23:18:39 2024
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    syzbot+af5c06208fa71bf31b16@syzkaller.appspotmail.com, Chang Yu
Subject: [PATCH v4 33/33] netfs: Report on NULL folioq in netfs_writeback_unlock_folios()
Date: Fri, 8 Nov 2024 17:32:34 +0000
Message-ID: <20241108173236.1382366-34-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

It seems that it's possible to get to netfs_writeback_unlock_folios()
with an empty rolling buffer during buffered writes.  This should not be
possible as the rolling buffer is initialised as the write request is set
up and thereafter maintains at least one folio_queue struct therein until
it gets destroyed.  This allows lockless addition and removal of
folio_queue structs in the buffer because, unlike with a ring buffer, the
producer and consumer each only need to look at and alter one pointer
into the buffer.

Now, the rolling buffer is only used for buffered I/O operations as
netfs_collect_write_results() should only call
netfs_writeback_unlock_folios() if the request is of origin type
NETFS_WRITEBACK, NETFS_WRITETHROUGH or NETFS_PGPRIV2_COPY_TO_CACHE.
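To illustrate that invariant, here is a minimal sketch of such a
single-producer/single-consumer rolling buffer in which each side only
ever moves its own pointer and at least one segment always stays queued.
This is not the actual netfs/folio_queue code; all the names below are
hypothetical:

	/* Hypothetical rolling buffer: starts with one segment
	 * (head == tail) and is never allowed to become empty, so the
	 * producer and consumer never contend for the same pointer and
	 * no lock is needed between them.
	 */
	struct seg {
		struct seg *next;
	};

	struct rolling {
		struct seg *head;	/* only moved by the consumer */
		struct seg *tail;	/* only moved by the producer */
	};

	/* Producer: publish a new segment after the current tail. */
	static void roll_append(struct rolling *r, struct seg *s)
	{
		s->next = NULL;
		smp_store_release(&r->tail->next, s);
		r->tail = s;
	}

	/* Consumer: detach the head segment unless it is the last one. */
	static struct seg *roll_pop(struct rolling *r)
	{
		struct seg *h = r->head;
		struct seg *n = smp_load_acquire(&h->next);

		if (!n)
			return NULL;	/* keep at least one segment */
		r->head = n;
		return h;		/* caller disposes of the old head */
	}

Because the consumer refuses to take the final segment, head can never
overtake tail, which is why finding the buffer entirely empty here points
to a lifetime bug rather than a normal race.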
So it would seem that one of the following occurred:

 (1) I/O started before the request was fully initialised,

 (2) the origin got switched mid-flow, or

 (3) the request has already been freed and this is a use-after-free
     error.

I think the last is the most likely.

Make netfs_writeback_unlock_folios() report information about the request
and its subrequests if folioq is seen to be NULL to try and help debug
this, throw a warning and return.

Note that this does not try to fix the problem.

Reported-by: syzbot+af5c06208fa71bf31b16@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=af5c06208fa71bf31b16
Signed-off-by: David Howells
cc: Chang Yu
Link: https://lore.kernel.org/r/ZxshMEW4U7MTgQYa@gmail.com/
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/write_collect.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index 3d8b87c8e6a6..4a1499167770 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -21,6 +21,34 @@
 #define NEED_RETRY		0x10	/* A front op requests retrying */
 #define SAW_FAILURE		0x20	/* One stream or hit a permanent failure */
 
+static void netfs_dump_request(const struct netfs_io_request *rreq)
+{
+	pr_err("Request R=%08x r=%d fl=%lx or=%x e=%ld\n",
+	       rreq->debug_id, refcount_read(&rreq->ref), rreq->flags,
+	       rreq->origin, rreq->error);
+	pr_err("  st=%llx tsl=%zx/%llx/%llx\n",
+	       rreq->start, rreq->transferred, rreq->submitted, rreq->len);
+	pr_err("  cci=%llx/%llx/%llx\n",
+	       rreq->cleaned_to, rreq->collected_to, atomic64_read(&rreq->issued_to));
+	pr_err("  iw=%pSR\n", rreq->netfs_ops->issue_write);
+	for (int i = 0; i < NR_IO_STREAMS; i++) {
+		const struct netfs_io_subrequest *sreq;
+		const struct netfs_io_stream *s = &rreq->io_streams[i];
+
+		pr_err("  str[%x] s=%x e=%d acnf=%u,%u,%u,%u\n",
+		       s->stream_nr, s->source, s->error,
+		       s->avail, s->active, s->need_retry, s->failed);
+		pr_err("  str[%x] ct=%llx t=%zx\n",
+		       s->stream_nr, s->collected_to, s->transferred);
+		list_for_each_entry(sreq, &s->subrequests, rreq_link) {
+			pr_err("  sreq[%x:%x] sc=%u s=%llx t=%zx/%zx r=%d f=%lx\n",
+			       sreq->stream_nr, sreq->debug_index, sreq->source,
+			       sreq->start, sreq->transferred, sreq->len,
+			       refcount_read(&sreq->ref), sreq->flags);
+		}
+	}
+}
+
 /*
  * Successful completion of write of a folio to the server and/or cache.  Note
  * that we are not allowed to lock the folio here on pain of deadlocking with
@@ -87,6 +115,12 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
 	unsigned long long collected_to = wreq->collected_to;
 	unsigned int slot = wreq->buffer.first_tail_slot;
 
+	if (WARN_ON_ONCE(!folioq)) {
+		pr_err("[!] Writeback unlock found empty rolling buffer!\n");
+		netfs_dump_request(wreq);
+		return;
+	}
+
 	if (wreq->origin == NETFS_PGPRIV2_COPY_TO_CACHE) {
 		if (netfs_pgpriv2_unlock_copied_folios(wreq))
 			*notes |= MADE_PROGRESS;
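A note on reading the dump (derived from the pr_err() calls above): the
"Request" line shows the request debug ID, refcount, flags, origin and
error; "st"/"tsl" show the start position and the
transferred/submitted/length byte counts; "cci" shows the
cleaned_to/collected_to/issued_to positions; "iw" identifies the
filesystem's issue_write handler; and each stream line is followed by
that stream's outstanding subrequests with their own offsets, sizes,
refcounts and flags.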