[PATCH v2 05/13] xen/bitops: Implement generic_f?sl() in lib/

Posted by Andrew Cooper 6 months ago
generic_f?s() being static inline is the cause of much of the complexity
between the common and arch-specific bitops.h.

They appear to be static inline for constant-folding reasons (ARM uses them
for this), but there are better ways to achieve the same effect.
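
One such way (an illustrative sketch, not necessarily the form later patches
take) is to dispatch compile-time constants to a compiler builtin, which the
compiler folds, and leave everything else to the out-of-line helper:

  #define flsl(x)                                          \
      (__builtin_constant_p(x)                             \
       ? ((x) ? BITS_PER_LONG - __builtin_clzl(x) : 0)     \
       : generic_flsl(x))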

It is presumptuous to assume that an unrolled binary search is the right algorithm to
use on all microarchitectures.  Indeed, it's not for the eventual users, but
that can be addressed at a later point.

It is also nonsense to implement the int form as the base primitive and
construct the long form from 2x int in 64-bit builds, when it's just one extra
step to operate at the native register width.

Therefore, implement generic_f?sl() in lib/.  They're not actually needed in
x86/ARM/PPC by the end of the cleanup (i.e. the functions will be dropped by
the linker), and they're only expected to be needed by RISC-V on hardware which
lacks the Zbb extension.

Implement generic_fls() in terms of generic_flsl() for now, but this will be
cleaned up in due course.

Provide basic runtime testing using __constructor inside the lib/ file.  This
is important, as it means testing runs if and only if generic_f?sl() are used
elsewhere in Xen.
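
For reference, RUNTIME_CHECK() comes from xen/boot-check.h; a plausible shape
for it (an assumption for illustration; the real macro may differ) is:

  #define RUNTIME_CHECK(fn, val, res)                      \
      do {                                                 \
          if ( fn(val) != (res) )                          \
              panic("%s(%s) != %s\n", #fn, #val, #res);    \
      } while ( 0 )

Because lib-y objects are linked from an archive, the constructor is only
pulled in when some other object references generic_f?sl(), which is what
gives the if-and-only-if property above.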

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: Shawn Anastasio <sanastasio@raptorengineering.com>
CC: consulting@bugseng.com <consulting@bugseng.com>
CC: Simone Ballarin <simone.ballarin@bugseng.com>
CC: Federico Serafini <federico.serafini@bugseng.com>
CC: Nicola Vetrini <nicola.vetrini@bugseng.com>

v2:
 * New

I suspect we want to swap CONFIG_DEBUG for CONFIG_BOOT_UNIT_TESTS in due
course.  These ought to be usable in a release build too.
---
 xen/arch/arm/include/asm/bitops.h |  2 +-
 xen/arch/ppc/include/asm/bitops.h |  2 +-
 xen/include/xen/bitops.h          | 89 ++-----------------------------
 xen/lib/Makefile                  |  2 +
 xen/lib/generic-ffsl.c            | 65 ++++++++++++++++++++++
 xen/lib/generic-flsl.c            | 68 +++++++++++++++++++++++
 6 files changed, 142 insertions(+), 86 deletions(-)
 create mode 100644 xen/lib/generic-ffsl.c
 create mode 100644 xen/lib/generic-flsl.c

diff --git a/xen/arch/arm/include/asm/bitops.h b/xen/arch/arm/include/asm/bitops.h
index 199252201291..ec1cf7b9b323 100644
--- a/xen/arch/arm/include/asm/bitops.h
+++ b/xen/arch/arm/include/asm/bitops.h
@@ -150,7 +150,7 @@ static inline int fls(unsigned int x)
         int ret;
 
         if (__builtin_constant_p(x))
-               return generic_fls(x);
+               return generic_flsl(x);
 
         asm("clz\t%"__OP32"0, %"__OP32"1" : "=r" (ret) : "r" (x));
         return 32 - ret;
diff --git a/xen/arch/ppc/include/asm/bitops.h b/xen/arch/ppc/include/asm/bitops.h
index bea655796d64..ab692d01717b 100644
--- a/xen/arch/ppc/include/asm/bitops.h
+++ b/xen/arch/ppc/include/asm/bitops.h
@@ -172,7 +172,7 @@ static inline int __test_and_clear_bit(int nr, volatile void *addr)
 }
 
 #define flsl(x) generic_flsl(x)
-#define fls(x) generic_fls(x)
+#define fls(x) generic_flsl(x)
 #define ffs(x) ({ unsigned int t_ = (x); fls(t_ & -t_); })
 #define ffsl(x) ({ unsigned long t_ = (x); flsl(t_ & -t_); })
 
diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
index 9b40f20381a2..cd405df96180 100644
--- a/xen/include/xen/bitops.h
+++ b/xen/include/xen/bitops.h
@@ -15,91 +15,12 @@
     (((~0ULL) << (l)) & (~0ULL >> (BITS_PER_LLONG - 1 - (h))))
 
 /*
- * ffs: find first bit set. This is defined the same way as
- * the libc and compiler builtin ffs routines, therefore
- * differs in spirit from the above ffz (man ffs).
- */
-
-static inline int generic_ffs(unsigned int x)
-{
-    int r = 1;
-
-    if (!x)
-        return 0;
-    if (!(x & 0xffff)) {
-        x >>= 16;
-        r += 16;
-    }
-    if (!(x & 0xff)) {
-        x >>= 8;
-        r += 8;
-    }
-    if (!(x & 0xf)) {
-        x >>= 4;
-        r += 4;
-    }
-    if (!(x & 3)) {
-        x >>= 2;
-        r += 2;
-    }
-    if (!(x & 1)) {
-        x >>= 1;
-        r += 1;
-    }
-    return r;
-}
-
-/*
- * fls: find last bit set.
+ * Find First/Last Set bit.
+ *
+ * Bits are labelled from 1.  Returns 0 if given 0.
  */
-
-static inline int generic_fls(unsigned int x)
-{
-    int r = 32;
-
-    if (!x)
-        return 0;
-    if (!(x & 0xffff0000u)) {
-        x <<= 16;
-        r -= 16;
-    }
-    if (!(x & 0xff000000u)) {
-        x <<= 8;
-        r -= 8;
-    }
-    if (!(x & 0xf0000000u)) {
-        x <<= 4;
-        r -= 4;
-    }
-    if (!(x & 0xc0000000u)) {
-        x <<= 2;
-        r -= 2;
-    }
-    if (!(x & 0x80000000u)) {
-        x <<= 1;
-        r -= 1;
-    }
-    return r;
-}
-
-#if BITS_PER_LONG == 64
-
-static inline int generic_ffsl(unsigned long x)
-{
-    return !x || (u32)x ? generic_ffs(x) : generic_ffs(x >> 32) + 32;
-}
-
-static inline int generic_flsl(unsigned long x)
-{
-    u32 h = x >> 32;
-
-    return h ? generic_fls(h) + 32 : generic_fls(x);
-}
-
-#else
-# define generic_ffsl generic_ffs
-# define generic_flsl generic_fls
-#endif
+unsigned int __pure generic_ffsl(unsigned long x);
+unsigned int __pure generic_flsl(unsigned long x);
 
 /*
  * Include this here because some architectures need generic_ffs/fls in
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index e63798e1d452..a48541596470 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -4,6 +4,8 @@ lib-y += bsearch.o
 lib-y += ctors.o
 lib-y += ctype.o
 lib-y += find-next-bit.o
+lib-y += generic-ffsl.o
+lib-y += generic-flsl.o
 lib-y += list-sort.o
 lib-y += memchr.o
 lib-y += memchr_inv.o
diff --git a/xen/lib/generic-ffsl.c b/xen/lib/generic-ffsl.c
new file mode 100644
index 000000000000..804cbd752efe
--- /dev/null
+++ b/xen/lib/generic-ffsl.c
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <xen/bitops.h>
+#include <xen/boot-check.h>
+#include <xen/init.h>
+
+unsigned int generic_ffsl(unsigned long x)
+{
+    unsigned int r = 1;
+
+    if ( !x )
+        return 0;
+
+#if BITS_PER_LONG > 32
+    if ( !(x & 0xffffffffU) )
+    {
+        x >>= 32;
+        r += 32;
+    }
+#endif
+    if ( !(x & 0xffff) )
+    {
+        x >>= 16;
+        r += 16;
+    }
+    if ( !(x & 0xff) )
+    {
+        x >>= 8;
+        r += 8;
+    }
+    if ( !(x & 0xf) )
+    {
+        x >>= 4;
+        r += 4;
+    }
+    if ( !(x & 3) )
+    {
+        x >>= 2;
+        r += 2;
+    }
+    if ( !(x & 1) )
+    {
+        x >>= 1;
+        r += 1;
+    }
+
+    return r;
+}
+
+#ifdef CONFIG_DEBUG
+static void __init __constructor test_generic_ffsl(void)
+{
+    RUNTIME_CHECK(generic_ffsl, 0, 0);
+    RUNTIME_CHECK(generic_ffsl, 1, 1);
+    RUNTIME_CHECK(generic_ffsl, 3, 1);
+    RUNTIME_CHECK(generic_ffsl, 7, 1);
+    RUNTIME_CHECK(generic_ffsl, 6, 2);
+
+    RUNTIME_CHECK(generic_ffsl, 1UL << (BITS_PER_LONG - 1), BITS_PER_LONG);
+#if BITS_PER_LONG > 32
+    RUNTIME_CHECK(generic_ffsl, 1UL << 32, 33);
+    RUNTIME_CHECK(generic_ffsl, 1UL << 63, 64);
+#endif
+}
+#endif /* CONFIG_DEBUG */
diff --git a/xen/lib/generic-flsl.c b/xen/lib/generic-flsl.c
new file mode 100644
index 000000000000..e4543aeaf100
--- /dev/null
+++ b/xen/lib/generic-flsl.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <xen/bitops.h>
+#include <xen/boot-check.h>
+#include <xen/init.h>
+
+/* Mask of type UL with the upper x bits set. */
+#define UPPER_MASK(x) (~0UL << (BITS_PER_LONG - (x)))
+
+unsigned int generic_flsl(unsigned long x)
+{
+    unsigned int r = BITS_PER_LONG;
+
+    if ( !x )
+        return 0;
+
+#if BITS_PER_LONG > 32
+    if ( !(x & UPPER_MASK(32)) )
+    {
+        x <<= 32;
+        r -= 32;
+    }
+#endif
+    if ( !(x & UPPER_MASK(16)) )
+    {
+        x <<= 16;
+        r -= 16;
+    }
+    if ( !(x & UPPER_MASK(8)) )
+    {
+        x <<= 8;
+        r -= 8;
+    }
+    if ( !(x & UPPER_MASK(4)) )
+    {
+        x <<= 4;
+        r -= 4;
+    }
+    if ( !(x & UPPER_MASK(2)) )
+    {
+        x <<= 2;
+        r -= 2;
+    }
+    if ( !(x & UPPER_MASK(1)) )
+    {
+        x <<= 1;
+        r -= 1;
+    }
+
+    return r;
+}
+
+#ifdef CONFIG_DEBUG
+static void __init __constructor test_generic_flsl(void)
+{
+    RUNTIME_CHECK(generic_flsl, 0, 0);
+    RUNTIME_CHECK(generic_flsl, 1, 1);
+    RUNTIME_CHECK(generic_flsl, 3, 2);
+    RUNTIME_CHECK(generic_flsl, 7, 3);
+    RUNTIME_CHECK(generic_flsl, 6, 3);
+
+    RUNTIME_CHECK(generic_flsl, 1 | (1UL << (BITS_PER_LONG - 1)), BITS_PER_LONG);
+#if BITS_PER_LONG > 32
+    RUNTIME_CHECK(generic_flsl, 1 | (1UL << 32), 33);
+    RUNTIME_CHECK(generic_flsl, 1 | (1UL << 63), 64);
+#endif
+}
+#endif /* CONFIG_DEBUG */
-- 
2.30.2


Re: [PATCH v2 05/13] xen/bitops: Implement generic_f?sl() in lib/
Posted by Jan Beulich 5 months, 4 weeks ago
On 24.05.2024 22:03, Andrew Cooper wrote:
> generic_f?s() being static inline is the cause of much of the complexity
> between the common and arch-specific bitops.h.
> 
> They appear to be static inline for constant-folding reasons (ARM uses them
> for this), but there are better ways to achieve the same effect.
> 
> It is presumptuous to assume that an unrolled binary search is the right algorithm to
> use on all microarchitectures.  Indeed, it's not for the eventual users, but
> that can be addressed at a later point.
> 
> It is also nonsense to implement the int form as the base primitive and
> construct the long form from 2x int in 64-bit builds, when it's just one extra
> step to operate at the native register width.
> 
> Therefore, implement generic_f?sl() in lib/.  They're not actually needed in
> x86/ARM/PPC by the end of the cleanup (i.e. the functions will be dropped by
> the linker), and they're only expected to be needed by RISC-V on hardware which
> lacks the Zbb extension.
> 
> Implement generic_fls() in terms of generic_flsl() for now, but this will be
> cleaned up in due course.
> 
> Provide basic runtime testing using __constructor inside the lib/ file.  This
> is important, as it means testing runs if and only if generic_f?sl() are used
> elsewhere in Xen.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with a suggestion and a question.

> I suspect we want to swap CONFIG_DEBUG for CONFIG_BOOT_UNIT_TESTS in due
> course.  These ought to be usable in a release build too.

+1

> --- /dev/null
> +++ b/xen/lib/generic-ffsl.c
> @@ -0,0 +1,65 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <xen/bitops.h>
> +#include <xen/boot-check.h>
> +#include <xen/init.h>
> +
> +unsigned int generic_ffsl(unsigned long x)
> +{
> +    unsigned int r = 1;
> +
> +    if ( !x )
> +        return 0;
> +
> +#if BITS_PER_LONG > 32

To be future-proof, perhaps ahead of this

#if BITS_PER_LONG > 64
# error "..."
#endif

or a functionally similar BUILD_BUG_ON()?
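
For illustration, the BUILD_BUG_ON() form would sit inside the function body
(just a sketch):

  unsigned int generic_ffsl(unsigned long x)
  {
      /* Fail the build should unsigned long ever grow beyond 64 bits. */
      BUILD_BUG_ON(BITS_PER_LONG > 64);
      ...
  }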

> --- /dev/null
> +++ b/xen/lib/generic-flsl.c
> @@ -0,0 +1,68 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <xen/bitops.h>
> +#include <xen/boot-check.h>
> +#include <xen/init.h>
> +
> +/* Mask of type UL with the upper x bits set. */
> +#define UPPER_MASK(x) (~0UL << (BITS_PER_LONG - (x)))
> +
> +unsigned int generic_flsl(unsigned long x)
> +{
> +    unsigned int r = BITS_PER_LONG;
> +
> +    if ( !x )
> +        return 0;
> +
> +#if BITS_PER_LONG > 32
> +    if ( !(x & UPPER_MASK(32)) )
> +    {
> +        x <<= 32;
> +        r -= 32;
> +    }
> +#endif
> +    if ( !(x & UPPER_MASK(16)) )
> +    {
> +        x <<= 16;
> +        r -= 16;
> +    }
> +    if ( !(x & UPPER_MASK(8)) )
> +    {
> +        x <<= 8;
> +        r -= 8;
> +    }
> +    if ( !(x & UPPER_MASK(4)) )
> +    {
> +        x <<= 4;
> +        r -= 4;
> +    }
> +    if ( !(x & UPPER_MASK(2)) )
> +    {
> +        x <<= 2;
> +        r -= 2;
> +    }
> +    if ( !(x & UPPER_MASK(1)) )
> +    {
> +        x <<= 1;
> +        r -= 1;
> +    }
> +
> +    return r;
> +}

While, as you say, the expectation is for this code to not commonly come
into actual use, I still find the algorithm a little inefficient in terms
of the constants used, specifically considering how they would need
instantiating in resulting assembly. It may be that Arm's fancy constant-
move insns can actually efficiently synthesize them, but I think on most
other architectures it would be more efficient (and presumably no less
efficient on Arm) to shift the "remaining" value right, thus allowing for
successively smaller (and hence easier to instantiate) constants to be
used.
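
Concretely, such a right-shifting variant might look like this (purely a
sketch, needing no wide constants at all):

  unsigned int flsl_alt(unsigned long x)
  {
      unsigned int r = 0;

  #if BITS_PER_LONG > 32
      if ( x >> 32 ) { x >>= 32; r += 32; }
  #endif
      if ( x >> 16 ) { x >>= 16; r += 16; }
      if ( x >> 8 )  { x >>= 8;  r += 8;  }
      if ( x >> 4 )  { x >>= 4;  r += 4;  }
      if ( x >> 2 )  { x >>= 2;  r += 2;  }
      if ( x >> 1 )  { x >>= 1;  r += 1;  }

      /* x is now 0 or 1, and flsl(0) wants to be 0. */
      return r + x;
  }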

Jan
Re: [PATCH v2 05/13] xen/bitops: Implement generic_f?sl() in lib/
Posted by Stefano Stabellini 5 months, 3 weeks ago
On Mon, 27 May 2024, Jan Beulich wrote:
> On 24.05.2024 22:03, Andrew Cooper wrote:
> > generic_f?s() being static inline is the cause of much of the complexity
> > between the common and arch-specific bitops.h.
> > 
> > They appear to be static inline for constant-folding reasons (ARM uses them
> > for this), but there are better ways to achieve the same effect.
> > 
> > It is presumptuous to assume that an unrolled binary search is the right algorithm to
> > use on all microarchitectures.  Indeed, it's not for the eventual users, but
> > that can be addressed at a later point.
> > 
> > It is also nonsense to implement the int form as the base primitive and
> > construct the long form from 2x int in 64-bit builds, when it's just one extra
> > step to operate at the native register width.
> > 
> > Therefore, implement generic_f?sl() in lib/.  They're not actually needed in
> > x86/ARM/PPC by the end of the cleanup (i.e. the functions will be dropped by
> > the linker), and they're only expected to be needed by RISC-V on hardware which
> > lacks the Zbb extension.
> > 
> > Implement generic_fls() in terms of generic_flsl() for now, but this will be
> > cleaned up in due course.
> > 
> > Provide basic runtime testing using __constructor inside the lib/ file.  This
> > is important, as it means testing runs if and only if generic_f?sl() are used
> > elsewhere in Xen.
> > 
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with a suggestion and a question.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> > I suspect we want to swap CONFIG_DEBUG for CONFIG_BOOT_UNIT_TESTS in due
> > course.  These ought to be usable in a release build too.
> 
> +1

+1
Re: [PATCH v2 05/13] xen/bitops: Implement generic_f?sl() in lib/
Posted by Andrew Cooper 5 months, 3 weeks ago
On 27/05/2024 9:44 am, Jan Beulich wrote:
> On 24.05.2024 22:03, Andrew Cooper wrote:
>> generic_f?s() being static inline is the cause of much of the complexity
>> between the common and arch-specific bitops.h.
>>
>> They appear to be static inline for constant-folding reasons (ARM uses them
>> for this), but there are better ways to achieve the same effect.
>>
>> It is presumptuous to assume that an unrolled binary search is the right algorithm to
>> use on all microarchitectures.  Indeed, it's not for the eventual users, but
>> that can be addressed at a later point.
>>
>> It is also nonsense to implement the int form as the base primitive and
>> construct the long form from 2x int in 64-bit builds, when it's just one extra
>> step to operate at the native register width.
>>
>> Therefore, implement generic_f?sl() in lib/.  They're not actually needed in
>> x86/ARM/PPC by the end of the cleanup (i.e. the functions will be dropped by
>> the linker), and they're only expected to be needed by RISC-V on hardware which
>> lacks the Zbb extension.
>>
>> Implement generic_fls() in terms of generic_flsl() for now, but this will be
>> cleaned up in due course.
>>
>> Provide basic runtime testing using __constructor inside the lib/ file.  This
>> is important, as it means testing runs if and only if generic_f?sl() are used
>> elsewhere in Xen.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> with a suggestion and a question.
>
>> I suspect we want to swap CONFIG_DEBUG for CONFIG_BOOT_UNIT_TESTS in due
>> course.  These ought to be usable in a release build too.
> +1

Actually - I might as well do this now.  Start as we mean to go on.

>
>> --- /dev/null
>> +++ b/xen/lib/generic-ffsl.c
>> @@ -0,0 +1,65 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +#include <xen/bitops.h>
>> +#include <xen/boot-check.h>
>> +#include <xen/init.h>
>> +
>> +unsigned int generic_ffsl(unsigned long x)
>> +{
>> +    unsigned int r = 1;
>> +
>> +    if ( !x )
>> +        return 0;
>> +
>> +#if BITS_PER_LONG > 32
> To be future-proof, perhaps ahead of this
>
> #if BITS_PER_LONG > 64
> # error "..."
> #endif
>
> or a functionally similar BUILD_BUG_ON()?

Good point.  I'll fold this in to both files.

>
>> --- /dev/null
>> +++ b/xen/lib/generic-flsl.c
>> @@ -0,0 +1,68 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +#include <xen/bitops.h>
>> +#include <xen/boot-check.h>
>> +#include <xen/init.h>
>> +
>> +/* Mask of type UL with the upper x bits set. */
>> +#define UPPER_MASK(x) (~0UL << (BITS_PER_LONG - (x)))
>> +
>> +unsigned int generic_flsl(unsigned long x)
>> +{
>> +    unsigned int r = BITS_PER_LONG;
>> +
>> +    if ( !x )
>> +        return 0;
>> +
>> +#if BITS_PER_LONG > 32
>> +    if ( !(x & UPPER_MASK(32)) )
>> +    {
>> +        x <<= 32;
>> +        r -= 32;
>> +    }
>> +#endif
>> +    if ( !(x & UPPER_MASK(16)) )
>> +    {
>> +        x <<= 16;
>> +        r -= 16;
>> +    }
>> +    if ( !(x & UPPER_MASK(8)) )
>> +    {
>> +        x <<= 8;
>> +        r -= 8;
>> +    }
>> +    if ( !(x & UPPER_MASK(4)) )
>> +    {
>> +        x <<= 4;
>> +        r -= 4;
>> +    }
>> +    if ( !(x & UPPER_MASK(2)) )
>> +    {
>> +        x <<= 2;
>> +        r -= 2;
>> +    }
>> +    if ( !(x & UPPER_MASK(1)) )
>> +    {
>> +        x <<= 1;
>> +        r -= 1;
>> +    }
>> +
>> +    return r;
>> +}
> While, as you say, the expectation is for this code to not commonly come
> into actual use, I still find the algorithm a little inefficient in terms
> of the constants used, specifically considering how they would need
> instantiating in resulting assembly. It may be that Arm's fancy constant-
> move insns can actually efficiently synthesize them, but I think on most
> other architectures it would be more efficient (and presumably no less
> efficient on Arm) to shift the "remaining" value right, thus allowing for
> successively smaller (and hence easier to instantiate) constants to be
> used.

ARM can only synthesise UPPER_MASK(16) and narrower masks, I think.

That said, I'm not concerned about the (in)efficiency seeing as this
doesn't get included in x86/ARM/PPC builds by the end of the series.

It's RISC-V which matters, and I'm pretty sure this is the wrong
algorithm to be using.

Incidentally, this algorithm is terrible for superscalar pipelines,
because each branch is inherently unpredictable.

Both these files want rewriting based on an analysis of the H-capable
Zbb-incapable RISC-V cores which exist.

I expect that what we actually want is the De Bruijn form, which is an
O(1) algorithm given a decent hardware multiplier.  If not, there's a
loop form which I expect would still be better than this.
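
For reference, a minimal sketch of the 32-bit De Bruijn form, using the
well-known 0x077CB531 multiplier and assuming a 32-bit unsigned int
(debruijn_ffs is just an illustrative name; a 64-bit version needs a wider
constant and table):

  static const unsigned char debruijn32[32] = {
       0,  1, 28,  2, 29, 14, 24,  3, 30, 22, 20, 15, 25, 17,  4,  8,
      31, 27, 13, 23, 21, 19, 16,  7, 26, 12, 18,  6, 11,  5, 10,  9,
  };

  unsigned int debruijn_ffs(unsigned int x)
  {
      if ( !x )
          return 0;

      /* Isolate the lowest set bit.  The multiply places a unique 5-bit
       * pattern in the top bits for each bit position, indexing the table. */
      return debruijn32[((x & -x) * 0x077CB531u) >> 27] + 1;
  }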

~Andrew