On Mon, 8 Aug 2022, Matthew Wilcox wrote:
> On Mon, Aug 08, 2022 at 10:57:45AM -0400, Mikulas Patocka wrote:
> > On Mon, 8 Aug 2022, Matthew Wilcox wrote:
> >
> > > On Mon, Aug 08, 2022 at 10:26:10AM -0400, Mikulas Patocka wrote:
> > > > On Sun, 7 Aug 2022, Matthew Wilcox wrote:
> > > > > > +static __always_inline void set_buffer_locked(struct buffer_head *bh)
> > > > > > +{
> > > > > > +	set_bit(BH_Lock, &bh->b_state);
> > > > > > +}
> > > > > > +
> > > > > > +static __always_inline int buffer_locked(const struct buffer_head *bh)
> > > > > > +{
> > > > > > +	bool ret = test_bit(BH_Lock, &bh->b_state);
> > > > > > +	/*
> > > > > > +	 * pairs with smp_mb__after_atomic in unlock_buffer
> > > > > > +	 */
> > > > > > +	if (!ret)
> > > > > > +		smp_acquire__after_ctrl_dep();
> > > > > > +	return ret;
> > > > > > +}
> > > > >
> > > > > Are there places that think that lock/unlock buffer implies a memory
> > > > > barrier?
> > > >
> > > > There's this in fs/reiserfs:
> > > >
> > > > 	if (!buffer_dirty(bh) && !buffer_locked(bh)) {
> > > > 		reiserfs_free_jh(bh); <--- this could be moved before buffer_locked
> > >
> > > It might be better to think of buffer_locked() as
> > > buffer_someone_has_exclusive_access(). I can't see the problem with
> > > moving the reads in reiserfs_free_jh() before the read of buffer_locked.
> > >
> > > > 	if (buffer_locked((journal->j_header_bh))) {
> > > > 		...
> > > > 	}
> > > > 	journal->j_last_flush_trans_id = trans_id;
> > > > 	journal->j_first_unflushed_offset = offset;
> > > > 	jh = (struct reiserfs_journal_header *)(journal->j_header_bh->b_data); <--- this could be moved before buffer_locked
> > >
> > > I don't think b_data is going to be changed while someone else holds
> > > the buffer locked. That's initialised by set_bh_page(), which is an
> > > initialisation-time thing, before the BH is visible to any other thread.
> >
> > So, do you think that we don't need a barrier in buffer_locked()?
>
> That's my feeling. Of course, you might not be the only one confused,
> and if fs authors in general have made the mistake of thinking that
> buffer_locked is serialising, then it might be better to live up to
> that expectation.
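The reordering being debated above, which the earlier version of the patch
handled with smp_acquire__after_ctrl_dep(), is roughly the following (an
illustrative sketch, not actual reiserfs code; examine_journal_head() is a
hypothetical stand-in for the reads done by reiserfs_free_jh()):

	/*
	 * Illustrative sketch only.  buffer_locked() compiles down to a
	 * plain test_bit() load of bh->b_state.
	 */
	static void sketch(struct buffer_head *bh)
	{
		if (!buffer_locked(bh)) {	/* plain load of b_state */
			/*
			 * On a weakly ordered CPU, loads issued here may be
			 * satisfied before the b_state load above: the branch
			 * is only a control dependency, which orders later
			 * stores but not later loads.  The
			 * smp_acquire__after_ctrl_dep() in the earlier patch
			 * closed exactly this gap.
			 */
			examine_journal_head(bh);
		}
	}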
In my spadfs filesystem, I used lock_buffer/unlock_buffer to prevent the
system from seeing or writing back incomplete data. The pattern is
	lock_buffer(bh);
	... do several changes to the buffer that should appear atomically
	unlock_buffer(bh);
	mark_buffer_dirty(bh);
but it seems to be OK, because lock_buffer has acquire semantics and
unlock_buffer has release semantics. I'm not sure about buffer_locked -
perhaps it really doesn't need the barriers; spin_is_locked,
mutex_is_locked and rwsem_is_locked don't have any barriers either.
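For reference, this is why the pattern above is safe (lightly simplified
from include/linux/buffer_head.h and fs/buffer.c):

	static inline int trylock_buffer(struct buffer_head *bh)
	{
		/* test_and_set_bit_lock() has acquire semantics on success */
		return likely(!test_and_set_bit_lock(BH_Lock, &bh->b_state));
	}

	static inline void lock_buffer(struct buffer_head *bh)
	{
		might_sleep();
		if (!trylock_buffer(bh))
			__lock_buffer(bh);	/* sleeps, then takes the bit */
	}

	void unlock_buffer(struct buffer_head *bh)
	{
		clear_bit_unlock(BH_Lock, &bh->b_state);	/* release */
		smp_mb__after_atomic();
		wake_up_bit(&bh->b_state, BH_Lock);
	}

The stores done between lock_buffer() and unlock_buffer() therefore cannot
be observed outside the critical section.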
Here I'm sending the patch without the change to buffer_locked.
Mikulas
From: Mikulas Patocka <mpatocka@redhat.com>
Let's have a look at this piece of code in __bread_slow:

	get_bh(bh);
	bh->b_end_io = end_buffer_read_sync;
	submit_bh(REQ_OP_READ, 0, bh);
	wait_on_buffer(bh);
	if (buffer_uptodate(bh))
		return bh;

Neither wait_on_buffer nor buffer_uptodate contains a memory barrier.
Consequently, if someone calls sb_bread and then reads the buffer data,
the read of the buffer data may be executed before wait_on_buffer(bh) on
architectures with weak memory ordering, and it may return invalid data.
Fix this bug by adding a memory barrier to set_buffer_uptodate and an
acquire barrier to buffer_uptodate (in a similar way as
folio_test_uptodate and folio_mark_uptodate).
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Index: linux-2.6/include/linux/buffer_head.h
===================================================================
--- linux-2.6.orig/include/linux/buffer_head.h
+++ linux-2.6/include/linux/buffer_head.h
@@ -117,7 +117,6 @@ static __always_inline int test_clear_bu
  * of the form "mark_buffer_foo()". These are higher-level functions which
  * do something in addition to setting a b_state bit.
  */
-BUFFER_FNS(Uptodate, uptodate)
 BUFFER_FNS(Dirty, dirty)
 TAS_BUFFER_FNS(Dirty, dirty)
 BUFFER_FNS(Lock, locked)
@@ -135,6 +134,30 @@ BUFFER_FNS(Meta, meta)
 BUFFER_FNS(Prio, prio)
 BUFFER_FNS(Defer_Completion, defer_completion)
 
+static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
+{
+	/*
+	 * make it consistent with folio_mark_uptodate
+	 * pairs with smp_load_acquire in buffer_uptodate
+	 */
+	smp_mb__before_atomic();
+	set_bit(BH_Uptodate, &bh->b_state);
+}
+
+static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
+{
+	clear_bit(BH_Uptodate, &bh->b_state);
+}
+
+static __always_inline int buffer_uptodate(const struct buffer_head *bh)
+{
+	/*
+	 * make it consistent with folio_test_uptodate
+	 * pairs with smp_mb__before_atomic in set_buffer_uptodate
+	 */
+	return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0;
+}
+
 #define bh_offset(bh)		((unsigned long)(bh)->b_data & ~PAGE_MASK)
 
 /* If we *know* page->private refers to buffer_heads */
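To see the pairing end to end, consider a hypothetical caller (illustration
only, not part of the patch):

	/* Hypothetical caller; dst must be at least bh->b_size bytes. */
	static int read_my_block(struct super_block *sb, sector_t block,
				 void *dst)
	{
		struct buffer_head *bh = sb_bread(sb, block);

		if (!bh)
			return -EIO;
		/*
		 * __bread_slow() tested buffer_uptodate() after
		 * wait_on_buffer().  With this patch, that test is an
		 * smp_load_acquire() of b_state, pairing with the
		 * smp_mb__before_atomic() in set_buffer_uptodate() run by
		 * the I/O completion handler, so the reads of b_data below
		 * cannot be reordered before the uptodate check and are
		 * guaranteed to see the data written by the device.
		 */
		memcpy(dst, bh->b_data, bh->b_size);
		brelse(bh);
		return 0;
	}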
On Tue, Aug 9, 2022 at 11:32 AM Mikulas Patocka <mpatocka@redhat.com> wrote:
>
> Let's have a look at this piece of code in __bread_slow:
>
> 	get_bh(bh);
> 	bh->b_end_io = end_buffer_read_sync;
> 	submit_bh(REQ_OP_READ, 0, bh);
> 	wait_on_buffer(bh);
> 	if (buffer_uptodate(bh))
> 		return bh;
>
> Neither wait_on_buffer nor buffer_uptodate contains a memory barrier.
> Consequently, if someone calls sb_bread and then reads the buffer data,
> the read of the buffer data may be executed before wait_on_buffer(bh) on
> architectures with weak memory ordering, and it may return invalid data.
>
> Fix this bug by adding a memory barrier to set_buffer_uptodate and an
> acquire barrier to buffer_uptodate (in a similar way as
> folio_test_uptodate and folio_mark_uptodate).
Ok, I've applied this to my tree.
I still feel that we should probably take a long look at having the
proper "acquire/release" uses everywhere for the buffer / page / folio
flags, but that wouldn't really work for backporting to stable, so I
think that's a "future fixes/cleanup" thing.
Thanks,
Linus
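For comparison, the folio flags already follow the discipline Linus
describes; the functions the commit message cites look roughly like this
(paraphrased from include/linux/page-flags.h, simplified):

	static inline bool folio_test_uptodate(struct folio *folio)
	{
		bool ret = test_bit(PG_uptodate, folio_flags(folio, 0));
		/*
		 * Reads of the folio data must not be reordered before
		 * this load of the uptodate bit; see folio_mark_uptodate()
		 * for the writer side.
		 */
		if (ret)
			smp_rmb();
		return ret;
	}

	static __always_inline void __folio_mark_uptodate(struct folio *folio)
	{
		smp_wmb();	/* order the data writes before the bit */
		set_bit(PG_uptodate, folio_flags(folio, 0));
	}

The buffer_head patch uses smp_mb__before_atomic() and smp_load_acquire()
rather than smp_wmb()/smp_rmb(), but the pairing is the same: the flag
update is ordered against the data it guards.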
On Tue, Aug 09, 2022 at 02:32:13PM -0400, Mikulas Patocka wrote:
> From: Mikulas Patocka <mpatocka@redhat.com>
>
> Let's have a look at this piece of code in __bread_slow:
>
> 	get_bh(bh);
> 	bh->b_end_io = end_buffer_read_sync;
> 	submit_bh(REQ_OP_READ, 0, bh);
> 	wait_on_buffer(bh);
> 	if (buffer_uptodate(bh))
> 		return bh;
>
> Neither wait_on_buffer nor buffer_uptodate contains a memory barrier.
> Consequently, if someone calls sb_bread and then reads the buffer data,
> the read of the buffer data may be executed before wait_on_buffer(bh) on
> architectures with weak memory ordering, and it may return invalid data.
>
> Fix this bug by adding a memory barrier to set_buffer_uptodate and an
> acquire barrier to buffer_uptodate (in a similar way as
> folio_test_uptodate and folio_mark_uptodate).
>
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: stable@vger.kernel.org
>
> Index: linux-2.6/include/linux/buffer_head.h
> ===================================================================
> --- linux-2.6.orig/include/linux/buffer_head.h
> +++ linux-2.6/include/linux/buffer_head.h
> @@ -117,7 +117,6 @@ static __always_inline int test_clear_bu
>   * of the form "mark_buffer_foo()". These are higher-level functions which
>   * do something in addition to setting a b_state bit.
>   */
> -BUFFER_FNS(Uptodate, uptodate)
>  BUFFER_FNS(Dirty, dirty)
>  TAS_BUFFER_FNS(Dirty, dirty)
>  BUFFER_FNS(Lock, locked)
> @@ -135,6 +134,30 @@ BUFFER_FNS(Meta, meta)
>  BUFFER_FNS(Prio, prio)
>  BUFFER_FNS(Defer_Completion, defer_completion)
>  
> +static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
> +{
> +	/*
> +	 * make it consistent with folio_mark_uptodate
> +	 * pairs with smp_load_acquire in buffer_uptodate
> +	 */
> +	smp_mb__before_atomic();
> +	set_bit(BH_Uptodate, &bh->b_state);
> +}
> +
> +static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
> +{
> +	clear_bit(BH_Uptodate, &bh->b_state);
> +}
> +
> +static __always_inline int buffer_uptodate(const struct buffer_head *bh)
> +{
> +	/*
> +	 * make it consistent with folio_test_uptodate
> +	 * pairs with smp_mb__before_atomic in set_buffer_uptodate
> +	 */
> +	return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0;
> +}
> +
>  #define bh_offset(bh)		((unsigned long)(bh)->b_data & ~PAGE_MASK)
>  
>  /* If we *know* page->private refers to buffer_heads */
>