[PATCH] bpf: fix umin/umax when lower bits fall outside u32 range
Fix the case where min/max adjustments on a conditional jump cross
u32 boundaries.
The scenario in which this bug was found is the following (see the full
BPF prog below):
insn 2: (bf) r0 = r2
...
insn 4: (3d) if r3 >= r0 goto pc+1
...
insn 9: (67) r2 <<= 7
...
insn 13: (65) if r7 s> 2 goto pc-12 # back to insn 2
Since this is a loop, at some later iteration both u32_min_value and
u32_max_value of r0 converge to 0, due to the shift of r2 on insn 9
followed by the assignment to r0 on insn 2.
When evaluating the FALSE branch of the conditional jump on insn 4,
the verifier needs to respect r0 > r3, so it raises umin of r0:
r0.umin = r3.umin + 1
This is done by this part of the code:
static void regs_refine_cond_op(...) {
...
case BPF_JLT:
...
u64 new_umin = max(reg1->umin_value + 1, reg2->umin_value);
But the problem is that u32_min_value/u32_max_value are still zero, so
the bounds need to be adjusted by reg_bounds_sync(), which calls
__reg_deduce_mixed_bounds() to perform the following tightening:
new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
...
reg->umin_value = max_t(u64, reg->umin_value, new_umin);
Here we hit the following situation: u64=[0x1,0xfffffff800000000] u32=[0x0,0x0]
Leading to:
new_umin = 0
reg->umin_value = max(1, 0) = 1
This is inconsistent, because now (u32)reg->umin_value > reg->u32_max_value.
The inconsistency propagates until it triggers:
verifier bug: REG INVARIANTS VIOLATION (true_reg2): range bounds violation
u64=[0x0, 0x7800000000] s64=[0x0, 0xffffffffffffffff]
u32=[0x80000000, 0x0] s32=[0x0, 0xffffffff] var_off=(0x0, 0x7800000000)
Fix __reg_deduce_mixed_bounds() to detect the case when the u32
boundaries are crossed and advance umin to the next 32-bit block (or
retreat umax to the previous one).
Example:
If we have:
u64=[0x1,0xfffffff800000000] u32=[0x0,0x0]
Go to the next 32-bit block:
u64=[0x100000000,0xfffffff800000000] u32=[0x0,0x0]
This way we keep the lower 32-bit bounds consistent.
The same applies to the max value, except we retreat to the previous
32-bit block.
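As a standalone model of the umin adjustment (illustrative only;
tighten_umin() is a hypothetical helper, the real change lives in
__reg_deduce_mixed_bounds()):

	static u64 tighten_umin(u64 umin, u32 u32_min, u32 u32_max)
	{
		u64 base = umin & ~0xffffffffULL;

		/* No value in the current 32-bit block has low 32 bits
		 * within [u32_min, u32_max] while being >= umin, so
		 * advance to the next block.
		 */
		if ((u32)umin > u32_max)
			base += 1ULL << 32;
		return base | u32_min;
	}

With the values above, tighten_umin(0x1, 0x0, 0x0) == 0x100000000.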
Full BPF prog for reference:
============================
NOTE: this was extracted from a Syzbot reproducer attached to another
bug report.
insn 0: (61) r2 = *(u32 *)(r1 + 48)
insn 1: (61) r3 = *(u32 *)(r1 + 76)
insn 2: (bf) r0 = r2
insn 3: (16) if w0 == 21502727 goto pc+16
insn 4: (3d) if r3 >= r0 goto pc+1
insn 5: (0f) r2 += r0
insn 6: (bc) w6 = (s16)w2
insn 7: (bf) r7 = (s32)r6
insn 8: (16) if w2 == 524047 goto pc+0
insn 9: (67) r2 <<= 7
insn 10: (36) if w6 >= -268376562 goto pc+0
insn 11: (bf) r5 = r0
insn 12: (0f) r5 += r6
insn 13: (65) if r7 s> 2 goto pc-12
insn 14: (07) r7 += 4194380
insn 15: (1f) r5 -= r7
insn 16: (bf) r4 = r5
insn 17: (07) r5 += -458749
insn 18: (ad) if r3 < r4 goto pc+1
insn 19: (95) exit
insn 20: (05) goto pc+0
insn 21: (95) exit
insn 22: (4d) if r11 & r9 goto pc-28203
insn 23: (99) if w3 >= -2094934247 goto pc-30763
The reproducer can be found at:
https://syzkaller.appspot.com/text?tag=ReproC&x=16773406580000
Fixes: c51d5ad6543c ("bpf: improve deduction of 64-bit bounds from 32-bit bounds")
Signed-off-by: Helen Koike <koike@igalia.com>
---
Hi all,
I'm not familiar with the verifier code base or the discussions around
it, but I did my best. I searched for prior discussion of this issue,
and the threads I found didn't seem related.
Please let me know if this case has already been discussed and/or if I
should take a different approach to solving it.
Tested on bpf/master branch.
Thanks in advance.
---
kernel/bpf/verifier.c | 26 +++++++++++++++++++++++---
1 file changed, 23 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a965b2c45bbe..ddac09c8a9e5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2702,9 +2702,29 @@ static void __reg_deduce_mixed_bounds(struct bpf_reg_state *reg)
__u64 new_umin, new_umax;
__s64 new_smin, new_smax;
- /* u32 -> u64 tightening, it's always well-formed */
- new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
- new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
+ /*
+ * If (u32)umin > u32_max, no value in the current upper-32-bit block
+ * satisfies [u32_min, u32_max] while being >= umin; advance umin to
+ * the next block. Otherwise apply standard u32->u64 tightening.
+ */
+ if ((u32)reg->umin_value > reg->u32_max_value)
+ new_umin = (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) |
+ reg->u32_min_value;
+ else
+ new_umin = (reg->umin_value & ~0xffffffffULL) |
+ reg->u32_min_value;
+
+ /*
+ * Symmetrically, if (u32)umax < u32_min, retreat umax to the
+ * previous block. Otherwise apply standard u32->u64 tightening.
+ */
+ if ((u32)reg->umax_value < reg->u32_min_value)
+ new_umax = (reg->umax_value & ~0xffffffffULL) - (1ULL << 32) |
+ reg->u32_max_value;
+ else
+ new_umax = (reg->umax_value & ~0xffffffffULL) |
+ reg->u32_max_value;
+
reg->umin_value = max_t(u64, reg->umin_value, new_umin);
reg->umax_value = min_t(u64, reg->umax_value, new_umax);
/* u32 -> s64 tightening, u32 range embedded into s64 preserves range validity */
--
2.53.0
Hi Helen,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/net]
[also build test WARNING on bpf-next/master bpf/master linus/master v7.0-rc6 next-20260327]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Helen-Koike/bpf-fix-umin-umax-when-lower-bits-fall-outside-u32-range/20260330-171706
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git net
patch link: https://lore.kernel.org/r/20260327194849.855397-1-koike%40igalia.com
patch subject: [PATCH] bpf: fix umin/umax when lower bits fall outside u32 range
config: i386-randconfig-013-20260330 (https://download.01.org/0day-ci/archive/20260331/202603310103.qoJg885q-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260331/202603310103.qoJg885q-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603310103.qoJg885q-lkp@intel.com/
All warnings (new ones prefixed by >>):
kernel/bpf/verifier.c: In function '__reg_deduce_mixed_bounds':
>> kernel/bpf/verifier.c:2699:63: warning: suggest parentheses around arithmetic in operand of '|' [-Wparentheses]
2699 | new_umin = (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
kernel/bpf/verifier.c:2710:63: warning: suggest parentheses around arithmetic in operand of '|' [-Wparentheses]
2710 | new_umax = (reg->umax_value & ~0xffffffffULL) - (1ULL << 32) |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
vim +2699 kernel/bpf/verifier.c
2675
2676 static void __reg_deduce_mixed_bounds(struct bpf_reg_state *reg)
2677 {
2678 /* Try to tighten 64-bit bounds from 32-bit knowledge, using 32-bit
2679 * values on both sides of 64-bit range in hope to have tighter range.
2680 * E.g., if r1 is [0x1'00000000, 0x3'80000000], and we learn from
2681 * 32-bit signed > 0 operation that s32 bounds are now [1; 0x7fffffff].
2682 * With this, we can substitute 1 as low 32-bits of _low_ 64-bit bound
2683 * (0x100000000 -> 0x100000001) and 0x7fffffff as low 32-bits of
2684 * _high_ 64-bit bound (0x380000000 -> 0x37fffffff) and arrive at a
2685 * better overall bounds for r1 as [0x1'000000001; 0x3'7fffffff].
2686 * We just need to make sure that derived bounds we are intersecting
2687 * with are well-formed ranges in respective s64 or u64 domain, just
2688 * like we do with similar kinds of 32-to-64 or 64-to-32 adjustments.
2689 */
2690 __u64 new_umin, new_umax;
2691 __s64 new_smin, new_smax;
2692
2693 /*
2694 * If (u32)umin > u32_max, no value in the current upper-32-bit block
2695 * satisfies [u32_min, u32_max] while being >= umin; advance umin to
2696 * the next block. Otherwise apply standard u32->u64 tightening.
2697 */
2698 if ((u32)reg->umin_value > reg->u32_max_value)
> 2699 new_umin = (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) |
2700 reg->u32_min_value;
2701 else
2702 new_umin = (reg->umin_value & ~0xffffffffULL) |
2703 reg->u32_min_value;
2704
2705 /*
2706 * Symmetrically, if (u32)umax < u32_min, retreat umax to the
2707 * previous block. Otherwise apply standard u32->u64 tightening.
2708 */
2709 if ((u32)reg->umax_value < reg->u32_min_value)
2710 new_umax = (reg->umax_value & ~0xffffffffULL) - (1ULL << 32) |
2711 reg->u32_max_value;
2712 else
2713 new_umax = (reg->umax_value & ~0xffffffffULL) |
2714 reg->u32_max_value;
2715
2716 reg->umin_value = max_t(u64, reg->umin_value, new_umin);
2717 reg->umax_value = min_t(u64, reg->umax_value, new_umax);
2718 /* u32 -> s64 tightening, u32 range embedded into s64 preserves range validity */
2719 new_smin = (reg->smin_value & ~0xffffffffULL) | reg->u32_min_value;
2720 new_smax = (reg->smax_value & ~0xffffffffULL) | reg->u32_max_value;
2721 reg->smin_value = max_t(s64, reg->smin_value, new_smin);
2722 reg->smax_value = min_t(s64, reg->smax_value, new_smax);
2723
2724 /* Here we would like to handle a special case after sign extending load,
2725 * when upper bits for a 64-bit range are all 1s or all 0s.
2726 *
2727 * Upper bits are all 1s when register is in a range:
2728 * [0xffff_ffff_0000_0000, 0xffff_ffff_ffff_ffff]
2729 * Upper bits are all 0s when register is in a range:
2730 * [0x0000_0000_0000_0000, 0x0000_0000_ffff_ffff]
2731 * Together this forms are continuous range:
2732 * [0xffff_ffff_0000_0000, 0x0000_0000_ffff_ffff]
2733 *
2734 * Now, suppose that register range is in fact tighter:
2735 * [0xffff_ffff_8000_0000, 0x0000_0000_ffff_ffff] (R)
2736 * Also suppose that it's 32-bit range is positive,
2737 * meaning that lower 32-bits of the full 64-bit register
2738 * are in the range:
2739 * [0x0000_0000, 0x7fff_ffff] (W)
2740 *
2741 * If this happens, then any value in a range:
2742 * [0xffff_ffff_0000_0000, 0xffff_ffff_7fff_ffff]
2743 * is smaller than a lowest bound of the range (R):
2744 * 0xffff_ffff_8000_0000
2745 * which means that upper bits of the full 64-bit register
2746 * can't be all 1s, when lower bits are in range (W).
2747 *
2748 * Note that:
2749 * - 0xffff_ffff_8000_0000 == (s64)S32_MIN
2750 * - 0x0000_0000_7fff_ffff == (s64)S32_MAX
2751 * These relations are used in the conditions below.
2752 */
2753 if (reg->s32_min_value >= 0 && reg->smin_value >= S32_MIN && reg->smax_value <= S32_MAX) {
2754 reg->smin_value = reg->s32_min_value;
2755 reg->smax_value = reg->s32_max_value;
2756 reg->umin_value = reg->s32_min_value;
2757 reg->umax_value = reg->s32_max_value;
2758 reg->var_off = tnum_intersect(reg->var_off,
2759 tnum_range(reg->smin_value, reg->smax_value));
2760 }
2761 }
2762
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Hi Helen,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/net]
[also build test WARNING on bpf-next/master bpf/master linus/master v7.0-rc6 next-20260327]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Helen-Koike/bpf-fix-umin-umax-when-lower-bits-fall-outside-u32-range/20260330-171706
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git net
patch link: https://lore.kernel.org/r/20260327194849.855397-1-koike%40igalia.com
patch subject: [PATCH] bpf: fix umin/umax when lower bits fall outside u32 range
config: x86_64-rhel-9.4-ltp (https://download.01.org/0day-ci/archive/20260330/202603301811.Bddu9uxI-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260330/202603301811.Bddu9uxI-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603301811.Bddu9uxI-lkp@intel.com/
All warnings (new ones prefixed by >>):
kernel/bpf/verifier.c: In function '__reg_deduce_mixed_bounds':
>> kernel/bpf/verifier.c:2699:63: warning: suggest parentheses around arithmetic in operand of '|' [-Wparentheses]
2699 | new_umin = (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
kernel/bpf/verifier.c:2710:63: warning: suggest parentheses around arithmetic in operand of '|' [-Wparentheses]
2710 | new_umax = (reg->umax_value & ~0xffffffffULL) - (1ULL << 32) |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
On Fri, 2026-03-27 at 16:48 -0300, Helen Koike wrote:
[...]
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index a965b2c45bbe..ddac09c8a9e5 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2702,9 +2702,29 @@ static void __reg_deduce_mixed_bounds(struct bpf_reg_state *reg)
> __u64 new_umin, new_umax;
> __s64 new_smin, new_smax;
>
> - /* u32 -> u64 tightening, it's always well-formed */
> - new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
> - new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
> + /*
> + * If (u32)umin > u32_max, no value in the current upper-32-bit block
> + * satisfies [u32_min, u32_max] while being >= umin; advance umin to
> + * the next block. Otherwise apply standard u32->u64 tightening.
> + */
> + if ((u32)reg->umin_value > reg->u32_max_value)
> + new_umin = (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) |
> + reg->u32_min_value;
> + else
> + new_umin = (reg->umin_value & ~0xffffffffULL) |
> + reg->u32_min_value;
What would happen if there is no next or previous 32-bit block?
E.g. if (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) wraps around.
I guess the argument is that if this happens, there is an invariant
violation already; but will the final register still be in an invariant
violation state?
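For instance (standalone arithmetic, illustrative values only):

	u64 umin = 0xffffffff00000001ULL;  /* top 32-bit block, (u32)umin == 1 */
	u32 u32_max = 0;                   /* (u32)umin > u32_max, so we advance */
	u64 next = (umin & ~0xffffffffULL) + (1ULL << 32);  /* wraps to 0 */

Here new_umin would end up smaller than the old umin.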
Useful picture:
N*2^32 (N+1)*2^32 (N+2)*2^32 (N+3)*2^32
||----|=====|--|----------||----|=====|-------------||--|-|=====|-------------||
|< b >| | |< b >| | |< b >|
| | | |
|<---------------+- a -+---------------->|
| |
|< t >| refined r0 range
Also, as this is based solely on unsigned ranges, the following case
is not covered, right?
N*2^32 (N+1)*2^32 (N+2)*2^32 (N+3)*2^32
||===|---------|------|===||===|----------------|===||===|---------|------|===||
|b >| | |< b||b >| |< b||b >| | |< b|
| | | |
|<-----+----------------- a --------------+-------->|
| |
|<---------------- t ------------->| refined r0 range
Would it be hard to implement something along the same lines as [2] to cover it?
> +
> + /*
> + * Symmetrically, if (u32)umax < u32_min, retreat umax to the
> + * previous block. Otherwise apply standard u32->u64 tightening.
> + */
> + if ((u32)reg->umax_value < reg->u32_min_value)
> + new_umax = (reg->umax_value & ~0xffffffffULL) - (1ULL << 32) |
> + reg->u32_max_value;
> + else
> + new_umax = (reg->umax_value & ~0xffffffffULL) |
> + reg->u32_max_value;
> +
> reg->umin_value = max_t(u64, reg->umin_value, new_umin);
> reg->umax_value = min_t(u64, reg->umax_value, new_umax);
> /* u32 -> s64 tightening, u32 range embedded into s64 preserves range validity */
I think we can move forward with this, as the fate of my RFC is
uncertain. Please add some selftests, e.g. from [1].
[1] https://lore.kernel.org/bpf/20260318-cnum-sync-bounds-v1-4-1f2e455ea711@gmail.com/
[2] https://github.com/eddyz87/cnum-verif/blob/master/cnum.c#L242
Hi Eduard,

Thanks for your replies and pointers, please see my comments below.

On 3/27/26 5:46 PM, Eduard Zingerman wrote:
> On Fri, 2026-03-27 at 16:48 -0300, Helen Koike wrote:
>
> [...]
>
>> +	if ((u32)reg->umin_value > reg->u32_max_value)
>> +		new_umin = (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) |
>> +			   reg->u32_min_value;
>> +	else
>> +		new_umin = (reg->umin_value & ~0xffffffffULL) |
>> +			   reg->u32_min_value;
>
> What would happen if there is no next or previous 32-bit block?
> E.g. if (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) wraps around.
> I guess the argument is that if this happens, there is an invariant
> violation already; but will the final register still be in an invariant
> violation state?

I thought about it and my conclusion was that we would be in trouble
anyway and the error is elsewhere (unless my understanding is wrong).
We could add a check here, but I guess reg_bounds_sanity_check()
already does that.

> Also, as this is based solely on unsigned ranges, the following case
> is not covered, right?

Right, this case is not covered.

> [...]
>
> Would it be hard to implement something along the same lines as [2] to
> cover it?

I'll check and get back to you.

> I think we can move forward with this, as the fate of my RFC is
> uncertain. Please add some selftests, e.g. from [1].

Ack, I'll check the signed case, test against your selftests and
re-submit the patches.

Thanks again!
Helen

> [1] https://lore.kernel.org/bpf/20260318-cnum-sync-bounds-v1-4-1f2e455ea711@gmail.com/
> [2] https://github.com/eddyz87/cnum-verif/blob/master/cnum.c#L242
On Fri, 2026-03-27 at 16:48 -0300, Helen Koike wrote:

[...]

> -	/* u32 -> u64 tightening, it's always well-formed */
> -	new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
> -	new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
> +	/*
> +	 * If (u32)umin > u32_max, no value in the current upper-32-bit block
> +	 * satisfies [u32_min, u32_max] while being >= umin; advance umin to
> +	 * the next block. Otherwise apply standard u32->u64 tightening.
> +	 */
> +	if ((u32)reg->umin_value > reg->u32_max_value)
> +		new_umin = (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) |
> +			   reg->u32_min_value;
> +	else
> +		new_umin = (reg->umin_value & ~0xffffffffULL) |
> +			   reg->u32_min_value;

[...]

Looks like it is the same case I identified in RFC [1].

[1] https://lore.kernel.org/bpf/20260318-cnum-sync-bounds-v1-4-1f2e455ea711@gmail.com/