From: Baokun Li
Subject: [PATCH 1/2] fs: make the i_size_read/write helpers be smp_load_acquire/store_release()
Date: Mon, 22 Jan 2024 17:45:35 +0800
Message-ID: <20240122094536.198454-2-libaokun1@huawei.com>
In-Reply-To: <20240122094536.198454-1-libaokun1@huawei.com>
References: <20240122094536.198454-1-libaokun1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

In [Link], Linus mentions that acquire/release makes it clear which
_particular_ memory accesses are the ordered ones, and that it is unlikely
to make any performance difference, so it is much better to pair up the
release->acquire ordering than to have a "wmb->rmb" ordering.
===========================================================
update pagecache
folio_mark_uptodate(folio)
        smp_wmb()
        set_bit PG_uptodate
=== ↑↑↑ STLR ↑↑↑ ===
smp_store_release(&inode->i_size, i_size)

folio_test_uptodate(folio)
        test_bit PG_uptodate
        smp_rmb()
=== ↓↓↓ LDAR ↓↓↓ ===
smp_load_acquire(&inode->i_size)
copy_page_to_iter()
===========================================================

Calling smp_store_release() in i_size_write() ensures that the data in
the page and the PG_uptodate bit are updated before the isize is updated,
and calling smp_load_acquire() in i_size_read() ensures that it will not
read a newer isize than the data in the page. Therefore, this avoids the
buffered read/write inconsistency caused by Load-Load reordering.
Link: https://lore.kernel.org/r/CAHk-=wifOnmeJq+sn+2s-P46zw0SFEbw9BSCGgp2c5fYPtRPGw@mail.gmail.com/
Suggested-by: Linus Torvalds
Signed-off-by: Baokun Li
---
 include/linux/fs.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 06ecccbb5bfe..077849bfe89a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -907,7 +907,8 @@ static inline loff_t i_size_read(const struct inode *inode)
 	preempt_enable();
 	return i_size;
 #else
-	return inode->i_size;
+	/* Pairs with smp_store_release() in i_size_write() */
+	return smp_load_acquire(&inode->i_size);
 #endif
 }
 
@@ -929,7 +930,12 @@ static inline void i_size_write(struct inode *inode, loff_t i_size)
 	inode->i_size = i_size;
 	preempt_enable();
 #else
-	inode->i_size = i_size;
+	/*
+	 * Pairs with smp_load_acquire() in i_size_read() to ensure
+	 * changes related to inode size (such as page contents) are
+	 * visible before we see the changed inode size.
+	 */
+	smp_store_release(&inode->i_size, i_size);
 #endif
 }
 
-- 
2.31.1

From: Baokun Li
Subject: [PATCH 2/2] Revert "mm/filemap: avoid buffered read/write race to read inconsistent data"
Date: Mon, 22 Jan 2024 17:45:36 +0800
Message-ID: <20240122094536.198454-3-libaokun1@huawei.com>
In-Reply-To: <20240122094536.198454-1-libaokun1@huawei.com>
References: <20240122094536.198454-1-libaokun1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This reverts commit e2c27b803bb6 ("mm/filemap: avoid buffered read/write
race to read inconsistent data").
After making the i_size_read/write helpers be
smp_load_acquire/store_release(), it is already guaranteed that changes
to page contents are visible before we see the increased inode size, so
the extra smp_rmb() in filemap_read() can be removed.

Signed-off-by: Baokun Li
---
 mm/filemap.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 142864338ca4..bed844b07e87 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2608,15 +2608,6 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
 		goto put_folios;
 	end_offset = min_t(loff_t, isize, iocb->ki_pos + iter->count);
 
-	/*
-	 * Pairs with a barrier in
-	 * block_write_end()->mark_buffer_dirty() or other page
-	 * dirtying routines like iomap_write_end() to ensure
-	 * changes to page contents are visible before we see
-	 * increased inode size.
-	 */
-	smp_rmb();
-
 	/*
 	 * Once we start copying data, we don't want to be touching any
 	 * cachelines that might be contended:
-- 
2.31.1