[PATCH ipsec-next v3 3/9] libbpf: Add BPF_CORE_WRITE_BITFIELD() macro

Posted by Daniel Xu 2 years ago
=== Motivation ===

Similar to reading from CO-RE bitfields, we need a CO-RE aware bitfield
writing wrapper to make the verifier happy.

Two alternatives to this approach are:

1. Use the upcoming `preserve_static_offset` [0] attribute to disable
   CO-RE on specific structs.
2. Use broader byte-sized writes to write to bitfields.

(1) is a bit hard to use. It requires specific and not-very-obvious
annotations in the bpftool-generated vmlinux.h. It's also not generally
available in released LLVM versions yet.

(2) makes the code quite hard to read and write. And especially if
BPF_CORE_READ_BITFIELD() is already being used, it makes more sense to
have an inverse helper for writing.
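
For a sense of what usage looks like, here is a minimal hypothetical
sketch -- `struct foo` and its `flags` bitfield are invented names for
illustration, not part of this patch:

        /* Hypothetical: read a CO-RE bitfield, set a bit, write it back. */
        static void set_flag(struct foo *f)
        {
                unsigned long long flags;

                flags = BPF_CORE_READ_BITFIELD(f, flags);
                BPF_CORE_WRITE_BITFIELD(f, flags, flags | 0x1);
        }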

=== Implementation details ===

Since the logic is a bit non-obvious, I thought it would be helpful
to explain exactly what's going on.

To start, it helps to explain what LSHIFT_U64 (lshift) and RSHIFT_U64
(rshift) are designed to mean. Consider the core of the
BPF_CORE_READ_BITFIELD() algorithm:

        val <<= __CORE_RELO(s, field, LSHIFT_U64);
                val = val >> __CORE_RELO(s, field, RSHIFT_U64);

Basically what happens is we lshift to clear the non-relevant (blank)
higher order bits. Then we rshift to bring the relevant bits (bitfield)
down to LSB position (while also clearing blank lower order bits). To
illustrate:

        Start:    ........XXX......
        Lshift:   XXX......00000000
        Rshift:   00000000000000XXX

where `.` means blank bit, `0` means 0 bit, and `X` means bitfield bit.

After the two operations, the bitfield is ready to be interpreted as a
regular integer.
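
As a concrete (made-up) instance: suppose a 3-bit field occupies bits
2..4 of a byte-sized load on a little-endian target, so the relocations
would work out to lshift = 59 and rshift = 61. Then:

        /* Stand-alone sketch, not BPF: relocation values are hand-picked
         * for this hypothetical field layout.
         */
        unsigned long long val = 0x14;  /* 0b10100: field bits 2..4 hold 0b101 */

        val <<= 59;     /* 0b101 now at bits 61..63; high-order blanks shifted out */
        val >>= 61;     /* val == 0b101 == 5, the bitfield as a regular integer */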

Next, we want to build an alternative (but more helpful) mental model
of lshift and rshift. That is, to consider:

* rshift as the total number of blank bits in the u64
* lshift as the number of blank bits to the left of the bitfield in the u64

Take a moment to consider why that is true by consulting the above
diagram.

With this insight, we can now define the following relationship:

              bitfield
                 _
                | |
        0.....00XXX0...00
        |      |   |    |
        |______|   |    |
         lshift    |    |
                   |____|
              (rshift - lshift)

That is, we know the number of higher order blank bits is just lshift.
And the number of lower order blank bits is (rshift - lshift).
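
Plugging the same made-up 3-bit example into this picture (little-endian
target, field at bits 2..4 of the loaded byte, so bit_off = 2, bit_sz = 3):

        lshift = 64 - (bit_off + bit_sz) = 64 - (2 + 3) = 59   /* high-order blanks */
        rshift = 64 - bit_sz             = 64 - 3       = 61   /* total blanks      */
        rpad   = rshift - lshift         = 61 - 59      =  2   /* low-order blanks  */

which matches the two blank bits (bits 0..1) sitting below the field.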

Finally, we can examine the core of the write side algorithm:

        mask = (~0ULL << rshift) >> lshift;   // 1
        nval = new_val;                       // 2
        nval = (nval << rpad) & mask;         // 3
        val = (val & ~mask) | nval;           // 4

(1): Compute a mask where the set bits are the bitfield bits. The first
     left shift zeros out exactly the number of blank bits, leaving a
     bitfield sized set of 1s. The subsequent right shift inserts the
     correct amount of higher order blank bits.
(2): Place the new value into a word sized container, nval.
(3): Place nval at the correct bit position and mask out blank bits.
(4): Mix the bitfield in with original surrounding blank bits.
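
To make the four steps concrete, here is a stand-alone user-space sketch
(not the macro itself) of the same read-modify-write on a single byte,
with the relocation values from the made-up 3-bit example hard-coded:

        /* Sketch only: write new_val into a 3-bit field at bits 2..4 of *p,
         * assuming lshift = 59 and rshift = 61 as in the example above.
         */
        static void write_bitfield3(unsigned char *p, unsigned long long new_val)
        {
                unsigned int lshift = 59, rshift = 61, rpad = rshift - lshift;
                unsigned long long nval, mask, val;

                val = *p;                               /* byte-sized load */

                mask = (~0ULL << rshift) >> lshift;     /* 1: mask == 0x1c (bits 2..4) */
                nval = new_val;                         /* 2 */
                nval = (nval << rpad) & mask;           /* 3 */
                val = (val & ~mask) | nval;             /* 4 */

                *p = val;                               /* byte-sized store */
        }

e.g. write_bitfield3(&b, 5) sets the field to 5 while leaving the other
five bits of b untouched.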

[0]: https://reviews.llvm.org/D133361
Co-authored-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Co-authored-by: Jonathan Lemon <jlemon@aviatrix.com>
Signed-off-by: Jonathan Lemon <jlemon@aviatrix.com>
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
---
 tools/lib/bpf/bpf_core_read.h | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index 1ac57bb7ac55..a7ffb80e3539 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -111,6 +111,40 @@ enum bpf_enum_value_kind {
 	val;								      \
 })
 
+/*
+ * Write to a bitfield, identified by s->field.
+ * This is the inverse of BPF_CORE_READ_BITFIELD().
+ */
+#define BPF_CORE_WRITE_BITFIELD(s, field, new_val) ({			\
+	void *p = (void *)s + __CORE_RELO(s, field, BYTE_OFFSET);	\
+	unsigned int byte_size = __CORE_RELO(s, field, BYTE_SIZE);	\
+	unsigned int lshift = __CORE_RELO(s, field, LSHIFT_U64);	\
+	unsigned int rshift = __CORE_RELO(s, field, RSHIFT_U64);	\
+	unsigned int rpad = rshift - lshift;				\
+	unsigned long long nval, mask, val;				\
+									\
+	asm volatile("" : "+r"(p));					\
+									\
+	switch (byte_size) {						\
+	case 1: val = *(unsigned char *)p; break;			\
+	case 2: val = *(unsigned short *)p; break;			\
+	case 4: val = *(unsigned int *)p; break;			\
+	case 8: val = *(unsigned long long *)p; break;			\
+	}								\
+									\
+	mask = (~0ULL << rshift) >> lshift;				\
+	nval = new_val;							\
+	nval = (nval << rpad) & mask;					\
+	val = (val & ~mask) | nval;					\
+									\
+	switch (byte_size) {						\
+	case 1: *(unsigned char *)p      = val; break;			\
+	case 2: *(unsigned short *)p     = val; break;			\
+	case 4: *(unsigned int *)p       = val; break;			\
+	case 8: *(unsigned long long *)p = val; break;			\
+	}								\
+})
+
 #define ___bpf_field_ref1(field)	(field)
 #define ___bpf_field_ref2(type, field)	(((typeof(type) *)0)->field)
 #define ___bpf_field_ref(args...)					    \
-- 
2.42.1
Re: [PATCH ipsec-next v3 3/9] libbpf: Add BPF_CORE_WRITE_BITFIELD() macro
Posted by Andrii Nakryiko 2 years ago
On Fri, Dec 1, 2023 at 12:24 PM Daniel Xu <dxu@dxuuu.xyz> wrote:
>
> === Motivation ===
>
> Similar to reading from CO-RE bitfields, we need a CO-RE aware bitfield
> writing wrapper to make the verifier happy.
>
> Two alternatives to this approach are:
>
> 1. Use the upcoming `preserve_static_offset` [0] attribute to disable
>    CO-RE on specific structs.
> 2. Use broader byte-sized writes to write to bitfields.
>
> (1) is a bit hard to use. It requires specific and not-very-obvious
> annotations to bpftool generated vmlinux.h. It's also not generally
> available in released LLVM versions yet.
>
> (2) makes the code quite hard to read and write. And especially if
> BPF_CORE_READ_BITFIELD() is already being used, it makes more sense to
> have an inverse helper for writing.
>
> === Implementation details ===
>
> Since the logic is a bit non-obvious, I thought it would be helpful
> to explain exactly what's going on.
>
> To start, it helps by explaining what LSHIFT_U64 (lshift) and RSHIFT_U64
> (rshift) is designed to mean. Consider the core of the
> BPF_CORE_READ_BITFIELD() algorithm:
>
>         val <<= __CORE_RELO(s, field, LSHIFT_U64);
>                 val = val >> __CORE_RELO(s, field, RSHIFT_U64);

nit: indentation is off?

>
> Basically what happens is we lshift to clear the non-relevant (blank)
> higher order bits. Then we rshift to bring the relevant bits (bitfield)
> down to LSB position (while also clearing blank lower order bits). To
> illustrate:
>
>         Start:    ........XXX......
>         Lshift:   XXX......00000000
>         Rshift:   00000000000000XXX
>
> where `.` means blank bit, `0` means 0 bit, and `X` means bitfield bit.
>
> After the two operations, the bitfield is ready to be interpreted as a
> regular integer.
>
> Next, we want to build an alternative (but more helpful) mental model
> on lshift and rshift. That is, to consider:
>
> * rshift as the total number of blank bits in the u64
> * lshift as number of blank bits left of the bitfield in the u64
>
> Take a moment to consider why that is true by consulting the above
> diagram.
>
> With this insight, we can now define the following relationship:
>
>               bitfield
>                  _
>                 | |
>         0.....00XXX0...00
>         |      |   |    |
>         |______|   |    |
>          lshift    |    |
>                    |____|
>               (rshift - lshift)
>
> That is, we know the number of higher order blank bits is just lshift.
> And the number of lower order blank bits is (rshift - lshift).
>

Nice diagrams and description, thanks!

> Finally, we can examine the core of the write side algorithm:
>
>         mask = (~0ULL << rshift) >> lshift;   // 1
>         nval = new_val;                       // 2
>         nval = (nval << rpad) & mask;         // 3
>         val = (val & ~mask) | nval;           // 4
>
> (1): Compute a mask where the set bits are the bitfield bits. The first
>      left shift zeros out exactly the number of blank bits, leaving a
>      bitfield sized set of 1s. The subsequent right shift inserts the
>      correct amount of higher order blank bits.
> (2): Place the new value into a word sized container, nval.
> (3): Place nval at the correct bit position and mask out blank bits.
> (4): Mix the bitfield in with original surrounding blank bits.
>
> [0]: https://reviews.llvm.org/D133361
> Co-authored-by: Eduard Zingerman <eddyz87@gmail.com>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> Co-authored-by: Jonathan Lemon <jlemon@aviatrix.com>
> Signed-off-by: Jonathan Lemon <jlemon@aviatrix.com>
> Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
> ---
>  tools/lib/bpf/bpf_core_read.h | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
>
> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> index 1ac57bb7ac55..a7ffb80e3539 100644
> --- a/tools/lib/bpf/bpf_core_read.h
> +++ b/tools/lib/bpf/bpf_core_read.h
> @@ -111,6 +111,40 @@ enum bpf_enum_value_kind {
>         val;                                                                  \
>  })
>
> +/*
> + * Write to a bitfield, identified by s->field.
> + * This is the inverse of BPF_CORE_READ_BITFIELD().
> + */
> +#define BPF_CORE_WRITE_BITFIELD(s, field, new_val) ({                  \
> +       void *p = (void *)s + __CORE_RELO(s, field, BYTE_OFFSET);       \
> +       unsigned int byte_size = __CORE_RELO(s, field, BYTE_SIZE);      \
> +       unsigned int lshift = __CORE_RELO(s, field, LSHIFT_U64);        \
> +       unsigned int rshift = __CORE_RELO(s, field, RSHIFT_U64);        \
> +       unsigned int rpad = rshift - lshift;                            \
> +       unsigned long long nval, mask, val;                             \
> +                                                                       \
> +       asm volatile("" : "+r"(p));                                     \
> +                                                                       \
> +       switch (byte_size) {                                            \
> +       case 1: val = *(unsigned char *)p; break;                       \
> +       case 2: val = *(unsigned short *)p; break;                      \
> +       case 4: val = *(unsigned int *)p; break;                        \
> +       case 8: val = *(unsigned long long *)p; break;                  \
> +       }                                                               \
> +                                                                       \
> +       mask = (~0ULL << rshift) >> lshift;                             \
> +       nval = new_val;                                                 \
> +       nval = (nval << rpad) & mask;                                   \
> +       val = (val & ~mask) | nval;                                     \

I'd simplify it to not need nval at all

val = (val & ~mask) | ((new_val << rpad) & mask);

I actually find it easier to follow and make sure we are not doing
anything unexpected. First part before |, we take old value and clear
bits we are about to set, second part after |, we take bitfield value,
shift it in position, and just in case mask it out if it's too big to
fit. Combine, done.

Other than that, it looks good.

> +                                                                       \
> +       switch (byte_size) {                                            \
> +       case 1: *(unsigned char *)p      = val; break;                  \
> +       case 2: *(unsigned short *)p     = val; break;                  \
> +       case 4: *(unsigned int *)p       = val; break;                  \
> +       case 8: *(unsigned long long *)p = val; break;                  \
> +       }                                                               \
> +})
> +
>  #define ___bpf_field_ref1(field)       (field)
>  #define ___bpf_field_ref2(type, field) (((typeof(type) *)0)->field)
>  #define ___bpf_field_ref(args...)                                          \
> --
> 2.42.1
>
Re: [PATCH ipsec-next v3 3/9] libbpf: Add BPF_CORE_WRITE_BITFIELD() macro
Posted by Daniel Xu 2 years ago
On Fri, Dec 01, 2023 at 03:49:30PM -0800, Andrii Nakryiko wrote:
> On Fri, Dec 1, 2023 at 12:24 PM Daniel Xu <dxu@dxuuu.xyz> wrote:
> >
> > === Motivation ===
> >
> > Similar to reading from CO-RE bitfields, we need a CO-RE aware bitfield
> > writing wrapper to make the verifier happy.
> >
> > Two alternatives to this approach are:
> >
> > 1. Use the upcoming `preserve_static_offset` [0] attribute to disable
> >    CO-RE on specific structs.
> > 2. Use broader byte-sized writes to write to bitfields.
> >
> > (1) is a bit hard to use. It requires specific and not-very-obvious
> > annotations to bpftool generated vmlinux.h. It's also not generally
> > available in released LLVM versions yet.
> >
> > (2) makes the code quite hard to read and write. And especially if
> > BPF_CORE_READ_BITFIELD() is already being used, it makes more sense to
> > have an inverse helper for writing.
> >
> > === Implementation details ===
> >
> > Since the logic is a bit non-obvious, I thought it would be helpful
> > to explain exactly what's going on.
> >
> > To start, it helps by explaining what LSHIFT_U64 (lshift) and RSHIFT_U64
> > (rshift) is designed to mean. Consider the core of the
> > BPF_CORE_READ_BITFIELD() algorithm:
> >
> >         val <<= __CORE_RELO(s, field, LSHIFT_U64);
> >                 val = val >> __CORE_RELO(s, field, RSHIFT_U64);
> 
> nit: indentation is off?

Oops, it's cuz I only deleted the SIGNED check. Will fix.
> 
> >
> > Basically what happens is we lshift to clear the non-relevant (blank)
> > higher order bits. Then we rshift to bring the relevant bits (bitfield)
> > down to LSB position (while also clearing blank lower order bits). To
> > illustrate:
> >
> >         Start:    ........XXX......
> >         Lshift:   XXX......00000000
> >         Rshift:   00000000000000XXX
> >
> > where `.` means blank bit, `0` means 0 bit, and `X` means bitfield bit.
> >
> > After the two operations, the bitfield is ready to be interpreted as a
> > regular integer.
> >
> > Next, we want to build an alternative (but more helpful) mental model
> > on lshift and rshift. That is, to consider:
> >
> > * rshift as the total number of blank bits in the u64
> > * lshift as number of blank bits left of the bitfield in the u64
> >
> > Take a moment to consider why that is true by consulting the above
> > diagram.
> >
> > With this insight, we can now define the following relationship:
> >
> >               bitfield
> >                  _
> >                 | |
> >         0.....00XXX0...00
> >         |      |   |    |
> >         |______|   |    |
> >          lshift    |    |
> >                    |____|
> >               (rshift - lshift)
> >
> > That is, we know the number of higher order blank bits is just lshift.
> > And the number of lower order blank bits is (rshift - lshift).
> >
> 
> Nice diagrams and description, thanks!

Thanks!

> 
> > Finally, we can examine the core of the write side algorithm:
> >
> >         mask = (~0ULL << rshift) >> lshift;   // 1
> >         nval = new_val;                       // 2
> >         nval = (nval << rpad) & mask;         // 3
> >         val = (val & ~mask) | nval;           // 4
> >
> > (1): Compute a mask where the set bits are the bitfield bits. The first
> >      left shift zeros out exactly the number of blank bits, leaving a
> >      bitfield sized set of 1s. The subsequent right shift inserts the
> >      correct amount of higher order blank bits.
> > (2): Place the new value into a word sized container, nval.
> > (3): Place nval at the correct bit position and mask out blank bits.
> > (4): Mix the bitfield in with original surrounding blank bits.
> >
> > [0]: https://reviews.llvm.org/D133361
> > Co-authored-by: Eduard Zingerman <eddyz87@gmail.com>
> > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > Co-authored-by: Jonathan Lemon <jlemon@aviatrix.com>
> > Signed-off-by: Jonathan Lemon <jlemon@aviatrix.com>
> > Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
> > ---
> >  tools/lib/bpf/bpf_core_read.h | 34 ++++++++++++++++++++++++++++++++++
> >  1 file changed, 34 insertions(+)
> >
> > diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> > index 1ac57bb7ac55..a7ffb80e3539 100644
> > --- a/tools/lib/bpf/bpf_core_read.h
> > +++ b/tools/lib/bpf/bpf_core_read.h
> > @@ -111,6 +111,40 @@ enum bpf_enum_value_kind {
> >         val;                                                                  \
> >  })
> >
> > +/*
> > + * Write to a bitfield, identified by s->field.
> > + * This is the inverse of BPF_CORE_READ_BITFIELD().
> > + */
> > +#define BPF_CORE_WRITE_BITFIELD(s, field, new_val) ({                  \
> > +       void *p = (void *)s + __CORE_RELO(s, field, BYTE_OFFSET);       \
> > +       unsigned int byte_size = __CORE_RELO(s, field, BYTE_SIZE);      \
> > +       unsigned int lshift = __CORE_RELO(s, field, LSHIFT_U64);        \
> > +       unsigned int rshift = __CORE_RELO(s, field, RSHIFT_U64);        \
> > +       unsigned int rpad = rshift - lshift;                            \
> > +       unsigned long long nval, mask, val;                             \
> > +                                                                       \
> > +       asm volatile("" : "+r"(p));                                     \
> > +                                                                       \
> > +       switch (byte_size) {                                            \
> > +       case 1: val = *(unsigned char *)p; break;                       \
> > +       case 2: val = *(unsigned short *)p; break;                      \
> > +       case 4: val = *(unsigned int *)p; break;                        \
> > +       case 8: val = *(unsigned long long *)p; break;                  \
> > +       }                                                               \
> > +                                                                       \
> > +       mask = (~0ULL << rshift) >> lshift;                             \
> > +       nval = new_val;                                                 \
> > +       nval = (nval << rpad) & mask;                                   \
> > +       val = (val & ~mask) | nval;                                     \
> 
> I'd simplify it to not need nval at all
> 
> val = (val & ~mask) | ((new_val << rpad) & mask);
> 
> I actually find it easier to follow and make sure we are not doing
> anything unexpected. First part before |, we take old value and clear
> bits we are about to set, second part after |, we take bitfield value,
> shift it in position, and just in case mask it out if it's too big to
> fit. Combine, done.
> 
> Other than that, it looks good.

I mostly left it there for the cast. Cuz injecting the `unsigned long
long` cast made the line really long. How about this instead?

diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index a7ffb80e3539..7325a12692a3 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -120,8 +120,8 @@ enum bpf_enum_value_kind {
        unsigned int byte_size = __CORE_RELO(s, field, BYTE_SIZE);      \
        unsigned int lshift = __CORE_RELO(s, field, LSHIFT_U64);        \
        unsigned int rshift = __CORE_RELO(s, field, RSHIFT_U64);        \
+       unsigned long long mask, val, nval = new_val;                   \
        unsigned int rpad = rshift - lshift;                            \
-       unsigned long long nval, mask, val;                             \
                                                                        \
        asm volatile("" : "+r"(p));                                     \
                                                                        \
@@ -133,9 +133,7 @@ enum bpf_enum_value_kind {
        }                                                               \
                                                                        \
        mask = (~0ULL << rshift) >> lshift;                             \
-       nval = new_val;                                                 \
-       nval = (nval << rpad) & mask;                                   \
-       val = (val & ~mask) | nval;                                     \
+       val = (val & ~mask) | ((nval << rpad) & mask);                  \
                                                                        \
        switch (byte_size) {                                            \
        case 1: *(unsigned char *)p      = val; break;                  \
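
(An aside on why the widening matters, with a hand-picked example not
from this patch: if new_val keeps a 32-bit type, the shift is evaluated
in 32-bit arithmetic, so high bits can be silently dropped -- and once
rpad >= 32 the shift is undefined behavior. For instance:

        unsigned int v = 0x01000000;                            /* bit 24 set */
        unsigned long long lost = v << 8;                       /* 32-bit shift: 0 */
        unsigned long long kept = (unsigned long long)v << 8;   /* bit 32 set */

Initializing nval as unsigned long long gets the same widening without
stretching the expression line.)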


Thanks,
Daniel
Re: [PATCH ipsec-next v3 3/9] libbpf: Add BPF_CORE_WRITE_BITFIELD() macro
Posted by Andrii Nakryiko 2 years ago
On Fri, Dec 1, 2023 at 4:13 PM Daniel Xu <dxu@dxuuu.xyz> wrote:
>
> On Fri, Dec 01, 2023 at 03:49:30PM -0800, Andrii Nakryiko wrote:
> > On Fri, Dec 1, 2023 at 12:24 PM Daniel Xu <dxu@dxuuu.xyz> wrote:
> > >
> > > === Motivation ===
> > >
> > > Similar to reading from CO-RE bitfields, we need a CO-RE aware bitfield
> > > writing wrapper to make the verifier happy.
> > >
> > > Two alternatives to this approach are:
> > >
> > > 1. Use the upcoming `preserve_static_offset` [0] attribute to disable
> > >    CO-RE on specific structs.
> > > 2. Use broader byte-sized writes to write to bitfields.
> > >
> > > (1) is a bit hard to use. It requires specific and not-very-obvious
> > > annotations to bpftool generated vmlinux.h. It's also not generally
> > > available in released LLVM versions yet.
> > >
> > > (2) makes the code quite hard to read and write. And especially if
> > > BPF_CORE_READ_BITFIELD() is already being used, it makes more sense to
> > > have an inverse helper for writing.
> > >
> > > === Implementation details ===
> > >
> > > Since the logic is a bit non-obvious, I thought it would be helpful
> > > to explain exactly what's going on.
> > >
> > > To start, it helps by explaining what LSHIFT_U64 (lshift) and RSHIFT_U64
> > > (rshift) is designed to mean. Consider the core of the
> > > BPF_CORE_READ_BITFIELD() algorithm:
> > >
> > >         val <<= __CORE_RELO(s, field, LSHIFT_U64);
> > >                 val = val >> __CORE_RELO(s, field, RSHIFT_U64);
> >
> > nit: indentation is off?
>
> Oops, it's cuz I only deleted the SIGNED check. Will fix.
> >
> > >
> > > Basically what happens is we lshift to clear the non-relevant (blank)
> > > higher order bits. Then we rshift to bring the relevant bits (bitfield)
> > > down to LSB position (while also clearing blank lower order bits). To
> > > illustrate:
> > >
> > >         Start:    ........XXX......
> > >         Lshift:   XXX......00000000
> > >         Rshift:   00000000000000XXX
> > >
> > > where `.` means blank bit, `0` means 0 bit, and `X` means bitfield bit.
> > >
> > > After the two operations, the bitfield is ready to be interpreted as a
> > > regular integer.
> > >
> > > Next, we want to build an alternative (but more helpful) mental model
> > > on lshift and rshift. That is, to consider:
> > >
> > > * rshift as the total number of blank bits in the u64
> > > * lshift as number of blank bits left of the bitfield in the u64
> > >
> > > Take a moment to consider why that is true by consulting the above
> > > diagram.
> > >
> > > With this insight, we can now define the following relationship:
> > >
> > >               bitfield
> > >                  _
> > >                 | |
> > >         0.....00XXX0...00
> > >         |      |   |    |
> > >         |______|   |    |
> > >          lshift    |    |
> > >                    |____|
> > >               (rshift - lshift)
> > >
> > > That is, we know the number of higher order blank bits is just lshift.
> > > And the number of lower order blank bits is (rshift - lshift).
> > >
> >
> > Nice diagrams and description, thanks!
>
> Thanks!
>
> >
> > > Finally, we can examine the core of the write side algorithm:
> > >
> > >         mask = (~0ULL << rshift) >> lshift;   // 1
> > >         nval = new_val;                       // 2
> > >         nval = (nval << rpad) & mask;         // 3
> > >         val = (val & ~mask) | nval;           // 4
> > >
> > > (1): Compute a mask where the set bits are the bitfield bits. The first
> > >      left shift zeros out exactly the number of blank bits, leaving a
> > >      bitfield sized set of 1s. The subsequent right shift inserts the
> > >      correct amount of higher order blank bits.
> > > (2): Place the new value into a word sized container, nval.
> > > (3): Place nval at the correct bit position and mask out blank bits.
> > > (4): Mix the bitfield in with original surrounding blank bits.
> > >
> > > [0]: https://reviews.llvm.org/D133361
> > > Co-authored-by: Eduard Zingerman <eddyz87@gmail.com>
> > > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > > Co-authored-by: Jonathan Lemon <jlemon@aviatrix.com>
> > > Signed-off-by: Jonathan Lemon <jlemon@aviatrix.com>
> > > Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
> > > ---
> > >  tools/lib/bpf/bpf_core_read.h | 34 ++++++++++++++++++++++++++++++++++
> > >  1 file changed, 34 insertions(+)
> > >
> > > diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> > > index 1ac57bb7ac55..a7ffb80e3539 100644
> > > --- a/tools/lib/bpf/bpf_core_read.h
> > > +++ b/tools/lib/bpf/bpf_core_read.h
> > > @@ -111,6 +111,40 @@ enum bpf_enum_value_kind {
> > >         val;                                                                  \
> > >  })
> > >
> > > +/*
> > > + * Write to a bitfield, identified by s->field.
> > > + * This is the inverse of BPF_CORE_READ_BITFIELD().
> > > + */
> > > +#define BPF_CORE_WRITE_BITFIELD(s, field, new_val) ({                  \
> > > +       void *p = (void *)s + __CORE_RELO(s, field, BYTE_OFFSET);       \
> > > +       unsigned int byte_size = __CORE_RELO(s, field, BYTE_SIZE);      \
> > > +       unsigned int lshift = __CORE_RELO(s, field, LSHIFT_U64);        \
> > > +       unsigned int rshift = __CORE_RELO(s, field, RSHIFT_U64);        \
> > > +       unsigned int rpad = rshift - lshift;                            \
> > > +       unsigned long long nval, mask, val;                             \
> > > +                                                                       \
> > > +       asm volatile("" : "+r"(p));                                     \
> > > +                                                                       \
> > > +       switch (byte_size) {                                            \
> > > +       case 1: val = *(unsigned char *)p; break;                       \
> > > +       case 2: val = *(unsigned short *)p; break;                      \
> > > +       case 4: val = *(unsigned int *)p; break;                        \
> > > +       case 8: val = *(unsigned long long *)p; break;                  \
> > > +       }                                                               \
> > > +                                                                       \
> > > +       mask = (~0ULL << rshift) >> lshift;                             \
> > > +       nval = new_val;                                                 \
> > > +       nval = (nval << rpad) & mask;                                   \
> > > +       val = (val & ~mask) | nval;                                     \
> >
> > I'd simplify it to not need nval at all
> >
> > val = (val & ~mask) | ((new_val << rpad) & mask);
> >
> > I actually find it easier to follow and make sure we are not doing
> > anything unexpected. First part before |, we take old value and clear
> > bits we are about to set, second part after |, we take bitfield value,
> > shift it in position, and just in case mask it out if it's too big to
> > fit. Combine, done.
> >
> > Other than that, it looks good.
>
> I mostly left it there for the cast. Cuz injecting the `unsigned long
> long` cast made the line really long. How about this instead?
>
> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> index a7ffb80e3539..7325a12692a3 100644
> --- a/tools/lib/bpf/bpf_core_read.h
> +++ b/tools/lib/bpf/bpf_core_read.h
> @@ -120,8 +120,8 @@ enum bpf_enum_value_kind {
>         unsigned int byte_size = __CORE_RELO(s, field, BYTE_SIZE);      \
>         unsigned int lshift = __CORE_RELO(s, field, LSHIFT_U64);        \
>         unsigned int rshift = __CORE_RELO(s, field, RSHIFT_U64);        \
> +       unsigned long long mask, val, nval = new_val;                   \
>         unsigned int rpad = rshift - lshift;                            \
> -       unsigned long long nval, mask, val;                             \
>                                                                         \
>         asm volatile("" : "+r"(p));                                     \
>                                                                         \
> @@ -133,9 +133,7 @@ enum bpf_enum_value_kind {
>         }                                                               \
>                                                                         \
>         mask = (~0ULL << rshift) >> lshift;                             \
> -       nval = new_val;                                                 \
> -       nval = (nval << rpad) & mask;                                   \
> -       val = (val & ~mask) | nval;                                     \
> +       val = (val & ~mask) | ((nval << rpad) & mask);                  \

sgtm

>                                                                         \
>         switch (byte_size) {                                            \
>         case 1: *(unsigned char *)p      = val; break;                  \
>
>
> Thanks,
> Daniel