[Xen-devel] [PATCH 0/4] bitops: hweight<N>() improvements
Posted by Jan Beulich 4 years, 10 months ago
1: bitops: speed up hweight<N>()
2: x86: further speed-up to hweight{32,64}()
3: RFC: Arm64: further speed-up to hweight{32,64}()
4: x86: use POPCNT for hweight<N>() when available

Jan



[Xen-devel] [PATCH 1/4] bitops: speed up hweight<N>()
Posted by Jan Beulich 4 years, 10 months ago
Algorithmically this gets us in line with current Linux, where the same
change did happen about 13 years ago. See in particular Linux commits
f9b4192923 ("bitops: hweight() speedup") and 0136611c62 ("optimize
hweight64 for x86_64").

Kconfig changes for actually setting HAS_FAST_MULTIPLY will follow.

Take the opportunity and change generic_hweight64()'s return type to
unsigned int.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -31,6 +31,9 @@ config HAS_DEVICE_TREE
 config HAS_EX_TABLE
 	bool
 
+config HAS_FAST_MULTIPLY
+	bool
+
 config MEM_ACCESS_ALWAYS_ON
 	bool
 
--- a/xen/include/xen/bitops.h
+++ b/xen/include/xen/bitops.h
@@ -153,41 +153,54 @@ static __inline__ int get_count_order(un
 
 static inline unsigned int generic_hweight32(unsigned int w)
 {
-    unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
-    res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
-    res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
-    res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
-    return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
+    w -= (w >> 1) & 0x55555555;
+    w =  (w & 0x33333333) + ((w >> 2) & 0x33333333);
+    w =  (w + (w >> 4)) & 0x0f0f0f0f;
+
+#ifdef CONFIG_HAS_FAST_MULTIPLY
+    return (w * 0x01010101) >> 24;
+#else
+    w += w >> 8;
+
+    return (w + (w >> 16)) & 0xff;
+#endif
 }
 
 static inline unsigned int generic_hweight16(unsigned int w)
 {
-    unsigned int res = (w & 0x5555) + ((w >> 1) & 0x5555);
-    res = (res & 0x3333) + ((res >> 2) & 0x3333);
-    res = (res & 0x0F0F) + ((res >> 4) & 0x0F0F);
-    return (res & 0x00FF) + ((res >> 8) & 0x00FF);
+    w -= ((w >> 1) & 0x5555);
+    w =  (w & 0x3333) + ((w >> 2) & 0x3333);
+    w =  (w + (w >> 4)) & 0x0f0f;
+
+    return (w + (w >> 8)) & 0xff;
 }
 
 static inline unsigned int generic_hweight8(unsigned int w)
 {
-    unsigned int res = (w & 0x55) + ((w >> 1) & 0x55);
-    res = (res & 0x33) + ((res >> 2) & 0x33);
-    return (res & 0x0F) + ((res >> 4) & 0x0F);
+    w -= ((w >> 1) & 0x55);
+    w =  (w & 0x33) + ((w >> 2) & 0x33);
+
+    return (w + (w >> 4)) & 0x0f;
 }
 
-static inline unsigned long generic_hweight64(__u64 w)
+static inline unsigned int generic_hweight64(uint64_t w)
 {
 #if BITS_PER_LONG < 64
     return generic_hweight32((unsigned int)(w >> 32)) +
         generic_hweight32((unsigned int)w);
 #else
-    u64 res;
-    res = (w & 0x5555555555555555ul) + ((w >> 1) & 0x5555555555555555ul);
-    res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
-    res = (res & 0x0F0F0F0F0F0F0F0Ful) + ((res >> 4) & 0x0F0F0F0F0F0F0F0Ful);
-    res = (res & 0x00FF00FF00FF00FFul) + ((res >> 8) & 0x00FF00FF00FF00FFul);
-    res = (res & 0x0000FFFF0000FFFFul) + ((res >> 16) & 0x0000FFFF0000FFFFul);
-    return (res & 0x00000000FFFFFFFFul) + ((res >> 32) & 0x00000000FFFFFFFFul);
+    w -= (w >> 1) & 0x5555555555555555ul;
+    w =  (w & 0x3333333333333333ul) + ((w >> 2) & 0x3333333333333333ul);
+    w =  (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0ful;
+
+# ifdef CONFIG_HAS_FAST_MULTIPLY
+    return (w * 0x0101010101010101ul) >> 56;
+# else
+    w += w >> 8;
+    w += w >> 16;
+
+    return (w + (w >> 32)) & 0xFF;
+# endif
 #endif
 }
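
As a quick standalone illustration (not part of the patch, and purely a
hypothetical user-space test), the multiply-based reduction can be checked
against the compiler's builtin like this:

#include <assert.h>
#include <stdio.h>

/* Mirrors the new generic_hweight32(), multiply variant. */
static unsigned int hweight32_test(unsigned int w)
{
    w -= (w >> 1) & 0x55555555;                      /* 2-bit partial sums */
    w  = (w & 0x33333333) + ((w >> 2) & 0x33333333); /* 4-bit partial sums */
    w  = (w + (w >> 4)) & 0x0f0f0f0f;                /* 8-bit partial sums */
    return (w * 0x01010101) >> 24;                   /* sum the four bytes */
}

int main(void)
{
    assert(hweight32_test(0x00000000u) == 0);
    assert(hweight32_test(0xffffffffu) == 32);
    assert(hweight32_test(0xdeadbeefu) ==
           (unsigned int)__builtin_popcount(0xdeadbeefu));
    printf("hweight32 checks passed\n");
    return 0;
}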
 





Re: [Xen-devel] [PATCH 1/4] bitops: speed up hweight<N>()
Posted by Andrew Cooper 4 years, 10 months ago
On 31/05/2019 02:51, Jan Beulich wrote:
> Algorithmically this gets us in line with current Linux, where the same
> change did happen about 13 years ago. See in particular Linux commits
> f9b4192923 ("bitops: hweight() speedup") and 0136611c62 ("optimize
> hweight64 for x86_64").
>
> Kconfig changes for actually setting HAS_FAST_MULTIPLY will follow.
>
> Take the opportunity and change generic_hweight64()'s return type to
> unsigned int.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

The code in Linux could do with a bit of cleanup.  Do you have patches
prepared?  I do have one further suggestion.

> --- a/xen/include/xen/bitops.h
> +++ b/xen/include/xen/bitops.h
> @@ -153,41 +153,54 @@ static __inline__ int get_count_order(un
>  
>  static inline unsigned int generic_hweight32(unsigned int w)
>  {
> -    unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
> -    res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
> -    res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
> -    res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
> -    return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
> +    w -= (w >> 1) & 0x55555555;
> +    w =  (w & 0x33333333) + ((w >> 2) & 0x33333333);
> +    w =  (w + (w >> 4)) & 0x0f0f0f0f;
> +
> +#ifdef CONFIG_HAS_FAST_MULTIPLY
> +    return (w * 0x01010101) >> 24;
> +#else
> +    w += w >> 8;
> +
> +    return (w + (w >> 16)) & 0xff;
> +#endif

This would be slightly shorter, less liable to bitrot, and IMO cleaner,
to do

if ( IS_ENABLED(CONFIG_HAS_FAST_MULTIPLY) )
    w = (w * 0x01010101) >> 24;
else
    w += w >> 8;

return w;

which removes the #ifdef-ary fully, and in particular, avoids having
different return logic.

~Andrew

Re: [Xen-devel] [PATCH 1/4] bitops: speed up hweight<N>()
Posted by Jan Beulich 4 years, 10 months ago
>>> On 31.05.19 at 21:40, <andrew.cooper3@citrix.com> wrote:
> On 31/05/2019 02:51, Jan Beulich wrote:
>> Algorithmically this gets us in line with current Linux, where the same
>> change did happen about 13 years ago. See in particular Linux commits
>> f9b4192923 ("bitops: hweight() speedup") and 0136611c62 ("optimize
>> hweight64 for x86_64").
>>
>> Kconfig changes for actually setting HAS_FAST_MULTIPLY will follow.
>>
>> Take the opportunity and change generic_hweight64()'s return type to
>> unsigned int.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> The code in Linux could do with a bit of cleanup.  Do you have patches
> prepared?

Not yet, no. I'll try to do this eventually, but it isn't a priority
for me.

>> --- a/xen/include/xen/bitops.h
>> +++ b/xen/include/xen/bitops.h
>> @@ -153,41 +153,54 @@ static __inline__ int get_count_order(un
>>  
>>  static inline unsigned int generic_hweight32(unsigned int w)
>>  {
>> -    unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
>> -    res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
>> -    res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
>> -    res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
>> -    return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
>> +    w -= (w >> 1) & 0x55555555;
>> +    w =  (w & 0x33333333) + ((w >> 2) & 0x33333333);
>> +    w =  (w + (w >> 4)) & 0x0f0f0f0f;
>> +
>> +#ifdef CONFIG_HAS_FAST_MULTIPLY
>> +    return (w * 0x01010101) >> 24;
>> +#else
>> +    w += w >> 8;
>> +
>> +    return (w + (w >> 16)) & 0xff;
>> +#endif
> 
> This would be slightly shorter, less liable to bitrot, and IMO cleaner,
> to do
> 
> if ( IS_ENABLED(CONFIG_HAS_FAST_MULTIPLY) )
>     w = (w * 0x01010101) >> 24;
> else
>     w += w >> 8;
> 
> return w;
> 
> which removes the #ifdef-ary fully, and in particular, avoids having
> different return logic.

Hmm, yes, I can switch to such a model, though I suspect this will make
Coverity point out unreachable code.

Jan



Re: [Xen-devel] [PATCH 1/4] bitops: speed up hweight<N>()
Posted by Jan Beulich 4 years, 10 months ago
>>> On 31.05.19 at 21:40, <andrew.cooper3@citrix.com> wrote:
> On 31/05/2019 02:51, Jan Beulich wrote:
>> --- a/xen/include/xen/bitops.h
>> +++ b/xen/include/xen/bitops.h
>> @@ -153,41 +153,54 @@ static __inline__ int get_count_order(un
>>  
>>  static inline unsigned int generic_hweight32(unsigned int w)
>>  {
>> -    unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
>> -    res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
>> -    res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
>> -    res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
>> -    return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
>> +    w -= (w >> 1) & 0x55555555;
>> +    w =  (w & 0x33333333) + ((w >> 2) & 0x33333333);
>> +    w =  (w + (w >> 4)) & 0x0f0f0f0f;
>> +
>> +#ifdef CONFIG_HAS_FAST_MULTIPLY
>> +    return (w * 0x01010101) >> 24;
>> +#else
>> +    w += w >> 8;
>> +
>> +    return (w + (w >> 16)) & 0xff;
>> +#endif
> 
> This would be slightly shorter, less liable to bitrot, and IMO cleaner,
> to do
> 
> if ( IS_ENABLED(CONFIG_HAS_FAST_MULTIPLY) )
>     w = (w * 0x01010101) >> 24;
> else
>     w += w >> 8;
> 
> return w;

Would you be okay with

static inline unsigned int generic_hweight32(unsigned int w)
{
    w -= (w >> 1) & 0x55555555;
    w =  (w & 0x33333333) + ((w >> 2) & 0x33333333);
    w =  (w + (w >> 4)) & 0x0f0f0f0f;

    if ( IS_ENABLED(CONFIG_HAS_FAST_MULTIPLY) )
        return (w * 0x01010101) >> 24;

    w += w >> 8;

    return (w + (w >> 16)) & 0xff;
}

despite there then still being two return statements?

Jan



Re: [Xen-devel] [PATCH 1/4] bitops: speed up hweight<N>()
Posted by Andrew Cooper 4 years, 10 months ago
On 03/06/2019 11:02, Jan Beulich wrote:
>>>> On 31.05.19 at 21:40, <andrew.cooper3@citrix.com> wrote:
>> On 31/05/2019 02:51, Jan Beulich wrote:
>>> --- a/xen/include/xen/bitops.h
>>> +++ b/xen/include/xen/bitops.h
>>> @@ -153,41 +153,54 @@ static __inline__ int get_count_order(un
>>>  
>>>  static inline unsigned int generic_hweight32(unsigned int w)
>>>  {
>>> -    unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
>>> -    res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
>>> -    res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
>>> -    res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
>>> -    return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
>>> +    w -= (w >> 1) & 0x55555555;
>>> +    w =  (w & 0x33333333) + ((w >> 2) & 0x33333333);
>>> +    w =  (w + (w >> 4)) & 0x0f0f0f0f;
>>> +
>>> +#ifdef CONFIG_HAS_FAST_MULTIPLY
>>> +    return (w * 0x01010101) >> 24;
>>> +#else
>>> +    w += w >> 8;
>>> +
>>> +    return (w + (w >> 16)) & 0xff;
>>> +#endif
>> This would be slightly shorter, less liable to bitrot, and IMO cleaner,
>> to do
>>
>> if ( IS_ENABLED(CONFIG_HAS_FAST_MULTIPLY) )
>>     w = (w * 0x01010101) >> 24;
>> else
>>     w += w >> 8;
>>
>> return w;
> Would you be okay with
>
> static inline unsigned int generic_hweight32(unsigned int w)
> {
>     w -= (w >> 1) & 0x55555555;
>     w =  (w & 0x33333333) + ((w >> 2) & 0x33333333);
>     w =  (w + (w >> 4)) & 0x0f0f0f0f;
>
>     if ( IS_ENABLED(CONFIG_HAS_FAST_MULTIPLY) )
>         return (w * 0x01010101) >> 24;
>
>     w += w >> 8;
>
>     return (w + (w >> 16)) & 0xff;
> }
>
> despite there then still being two return statements?

Yeah - this form does read like a regular function.

~Andrew

[Xen-devel] [PATCH 2/4] x86: further speed-up to hweight{32,64}()
Posted by Jan Beulich 4 years, 10 months ago
According to Linux commit 0136611c62 ("optimize hweight64 for x86_64")
this is a further improvement over the variant using only bitwise
operations. It's also a slight further code size reduction.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -12,6 +12,7 @@ config X86
 	select HAS_CPUFREQ
 	select HAS_EHCI
 	select HAS_EX_TABLE
+	select HAS_FAST_MULTIPLY
 	select HAS_GDBSX
 	select HAS_IOPORTS
 	select HAS_KEXEC



Re: [Xen-devel] [PATCH 2/4] x86: further speed-up to hweight{32, 64}()
Posted by Andrew Cooper 4 years, 10 months ago
On 31/05/2019 02:52, Jan Beulich wrote:
> According to Linux commit 0136611c62 ("optimize hweight64 for x86_64")
> this is a further improvement over the variant using only bitwise
> operations. It's also a slight further code size reduction.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

This should also include ARM64, which also unconditionally selects
HAS_FAST_MULTIPLY in Linux.

As for the x86 side of things, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -12,6 +12,7 @@ config X86
>  	select HAS_CPUFREQ
>  	select HAS_EHCI
>  	select HAS_EX_TABLE
> +	select HAS_FAST_MULTIPLY
>  	select HAS_GDBSX
>  	select HAS_IOPORTS
>  	select HAS_KEXEC
>
>


Re: [Xen-devel] [PATCH 2/4] x86: further speed-up to hweight{32, 64}()
Posted by Jan Beulich 4 years, 10 months ago
>>> On 31.05.19 at 21:23, <andrew.cooper3@citrix.com> wrote:
> On 31/05/2019 02:52, Jan Beulich wrote:
>> According to Linux commit 0136611c62 ("optimize hweight64 for x86_64")
>> this is a further improvement over the variant using only bitwise
>> operations. It's also a slight further code size reduction.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> This should also include ARM64, which also unconditionally selects
> HAS_FAST_MULTIPLY in Linux.

I've very intentionally split the Arm change from the x86 one:
Looking at the generated code I'm unconvinced this is a win
there, and hence I'd prefer if someone could measure this. It
is for this reason that patch 3 was actually sent as RFC.

> As for the x86 side of things, Acked-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

Thanks.

Jan



[Xen-devel] [PATCH RFC 3/4] Arm64: further speed-up to hweight{32, 64}()
Posted by Jan Beulich 4 years, 10 months ago
According to Linux commit e75bef2a4f ("arm64: Select
ARCH_HAS_FAST_MULTIPLIER") this is a further improvement over the
variant using only bitwise operations on at least some hardware, and no
worse on others.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: To be honest I'm not fully convinced this is a win in particular in
     the hweight32() case, as there's no actual shift insn which gets
     replaced by the multiplication. Even for hweight64() the compiler
     could emit better code and avoid the explicit shift by 32 (which it
     emits at least for me).

--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -12,6 +12,7 @@ config ARM_32
 config ARM_64
 	def_bool y
 	depends on 64BIT
+	select HAS_FAST_MULTIPLY
 
 config ARM
 	def_bool y



Re: [Xen-devel] [PATCH RFC 3/4] Arm64: further speed-up to hweight{32, 64}()
Posted by Julien Grall 4 years, 10 months ago
Hi Jan,

On 5/31/19 10:53 AM, Jan Beulich wrote:
> According to Linux commit e75bef2a4f ("arm64: Select
> ARCH_HAS_FAST_MULTIPLIER") this is a further improvement over the
> variant using only bitwise operations on at least some hardware, and no
> worse on other.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> RFC: To be honest I'm not fully convinced this is a win in particular in
>       the hweight32() case, as there's no actual shift insn which gets
>       replaced by the multiplication. Even for hweight64() the compiler
>       could emit better code and avoid the explicit shift by 32 (which it
>       emits at least for me).

I can see a multiplication instruction used in both hweight32() and 
hweight64() with the compiler I am using.

I would expect the compiler could easily replace a multiply by a series 
of shifts, but it would be more difficult to do the inverse.

Also, this has been in Linux for a year now, so I am assuming Linux 
folks are happy with the changes (CCing Robin just in case I missed 
anything). Therefore I am happy to give it a go on Xen as well.

Cheers,

> 
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -12,6 +12,7 @@ config ARM_32
>   config ARM_64
>   	def_bool y
>   	depends on 64BIT
> +	select HAS_FAST_MULTIPLY
>   
>   config ARM
>   	def_bool y
> 
> 

-- 
Julien Grall

Re: [Xen-devel] [PATCH RFC 3/4] Arm64: further speed-up to hweight{32, 64}()
Posted by Robin Murphy 4 years, 10 months ago
On 04/06/2019 17:11, Julien Grall wrote:
> Hi Jan,
> 
> On 5/31/19 10:53 AM, Jan Beulich wrote:
>> According to Linux commit e75bef2a4f ("arm64: Select
>> ARCH_HAS_FAST_MULTIPLIER") this is a further improvement over the
>> variant using only bitwise operations on at least some hardware, and no
>> worse on other.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> RFC: To be honest I'm not fully convinced this is a win in particular in
>>       the hweight32() case, as there's no actual shift insn which gets
>>       replaced by the multiplication. Even for hweight64() the compiler
>>       could emit better code and avoid the explicit shift by 32 (which it
>>       emits at least for me).
> 
> I can see multiplication instruction used in both hweight32() and 
> hweight64() with the compiler I am using.
> 
> I would expect the compiler could easily replace a multiply by a series 
> of shift but it would be more difficult to do the invert.
> 
> Also, this has been in Linux for a year now, so I am assuming Linux 
> folks are happy with changes (CCing Robin just in case I missed 
> anything). Therefore I am happy to give it a go on Xen as well.

IIRC it did look like Linux's hweight() routines could possibly be 
further tuned for the A64 ISA to shave off one or two more instructions, 
but it's yet to be proven that performance is anywhere near critical 
enough to justify maintaining arch-specific versions. It costs us 
basically nothing to switch between the two generic implementations 
though, so since the smaller-and-no-slower code can be considered a net 
win (however minor), there seemed no reason *not* to apply the existing 
option appropriately.

Robin.

> 
> Cheers,
> 
>>
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -12,6 +12,7 @@ config ARM_32
>>   config ARM_64
>>       def_bool y
>>       depends on 64BIT
>> +    select HAS_FAST_MULTIPLY
>>   config ARM
>>       def_bool y
>>
>>
> 

Re: [Xen-devel] [PATCH RFC 3/4] Arm64: further speed-up to hweight{32, 64}()
Posted by Jan Beulich 4 years, 10 months ago
>>> On 04.06.19 at 18:11, <julien.grall@arm.com> wrote:
> On 5/31/19 10:53 AM, Jan Beulich wrote:
>> According to Linux commit e75bef2a4f ("arm64: Select
>> ARCH_HAS_FAST_MULTIPLIER") this is a further improvement over the
>> variant using only bitwise operations on at least some hardware, and no
>> worse on other.
>> 
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> RFC: To be honest I'm not fully convinced this is a win in particular in
>>       the hweight32() case, as there's no actual shift insn which gets
>>       replaced by the multiplication. Even for hweight64() the compiler
>>       could emit better code and avoid the explicit shift by 32 (which it
>>       emits at least for me).
> 
> I can see multiplication instruction used in both hweight32() and 
> hweight64() with the compiler I am using.

That is for which exact implementation? What I was referring to as
"could emit better code" was the multiplication-free variant, where
the compiler fails to recognize (afaict) another opportunity to fold
a shift into an arithmetic instruction:

	add	x0, x0, x0, lsr #4
	and	x0, x0, #0xf0f0f0f0f0f0f0f
	add	x0, x0, x0, lsr #8
	add	x0, x0, x0, lsr #16
>>>	lsr	x1, x0, #32
>>>	add	w0, w1, w0
	and	w0, w0, #0xff
	ret

Afaict the two marked insns could be replaced by

	add	x0, x0, x0, lsr #32

With only a sequence of add-s then remaining, I'm having
difficulty seeing how the use of mul+lsr would actually help:

	add	x0, x0, x0, lsr #4
	and	x0, x0, #0xf0f0f0f0f0f0f0f
	mov	x1, #0x101010101010101
	mul	x0, x0, x1
	lsr	x0, x0, #56
	ret

But of course I know nothing about throughput and latency
of such add-s with one of their operands shifted first. And
yes, the variant using mul is, compared with the better
optimized case, still one insn smaller.

> I would expect the compiler could easily replace a multiply by a series 
> of shift but it would be more difficult to do the invert.
> 
> Also, this has been in Linux for a year now, so I am assuming Linux 
> folks are happy with changes (CCing Robin just in case I missed 
> anything). Therefore I am happy to give it a go on Xen as well.

In which case - can I take this as an ack, or do you want to first
pursue the discussion?

Jan



Re: [Xen-devel] [PATCH RFC 3/4] Arm64: further speed-up to hweight{32, 64}()
Posted by Julien Grall 4 years, 10 months ago
Hi Jan,

On 05/06/2019 08:42, Jan Beulich wrote:
>>>> On 04.06.19 at 18:11, <julien.grall@arm.com> wrote:
>> On 5/31/19 10:53 AM, Jan Beulich wrote:
>>> According to Linux commit e75bef2a4f ("arm64: Select
>>> ARCH_HAS_FAST_MULTIPLIER") this is a further improvement over the
>>> variant using only bitwise operations on at least some hardware, and no
>>> worse on other.
>>>
>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> RFC: To be honest I'm not fully convinced this is a win in particular in
>>>        the hweight32() case, as there's no actual shift insn which gets
>>>        replaced by the multiplication. Even for hweight64() the compiler
>>>        could emit better code and avoid the explicit shift by 32 (which it
>>>        emits at least for me).
>>
>> I can see multiplication instruction used in both hweight32() and
>> hweight64() with the compiler I am using.
> 
> That is for which exact implementation?

A simple call to hweight64().

> What I was referring to as
> "could emit better code" was the multiplication-free variant, where
> the compiler fails to recognize (afaict) another opportunity to fold
> a shift into an arithmetic instruction:
> 
> 	add	x0, x0, x0, lsr #4
> 	and	x0, x0, #0xf0f0f0f0f0f0f0f
> 	add	x0, x0, x0, lsr #8
> 	add	x0, x0, x0, lsr #16
>>>> 	lsr	x1, x0, #32
>>>> 	add	w0, w1, w0
> 	and	w0, w0, #0xff
> 	ret
> 
> Afaict the two marked insns could be replaced by
> 
> 	add	x0, x0, x0, lsr #32

I am not a compiler expert. Anyway this likely depends on the version of the 
compiler you are using. They are becoming smarter and smarter.

> 
> With there only a sequence of add-s remaining, I'm having
> difficulty seeing how the use of mul+lsr would actually help:
> 
> 	add	x0, x0, x0, lsr #4
> 	and	x0, x0, #0xf0f0f0f0f0f0f0f
> 	mov	x1, #0x101010101010101
> 	mul	x0, x0, x1
> 	lsr	x0, x0, #56
> 	ret
> 
> But of course I know nothing about throughput and latency
> of such add-s with one of their operands shifted first. And
> yes, the variant using mul is, comparing with the better
> optimized case, still one insn smaller.
The commit message in Linux (and Robin's answer) is pretty clear. It may improve 
things on some cores but does not make them worse on others.

> 
>> I would expect the compiler could easily replace a multiply by a series
>> of shift but it would be more difficult to do the invert.
>>
>> Also, this has been in Linux for a year now, so I am assuming Linux
>> folks are happy with changes (CCing Robin just in case I missed
>> anything). Therefore I am happy to give it a go on Xen as well.
> 
> In which case - can I take this as an ack, or do you want to first
> pursue the discussion?

I will commit it later on with another bunch of patches.

Cheers,

-- 
Julien Grall

Re: [Xen-devel] [PATCH RFC 3/4] Arm64: further speed-up to hweight{32, 64}()
Posted by Jan Beulich 4 years, 10 months ago
>>> On 05.06.19 at 11:24, <julien.grall@arm.com> wrote:
> On 05/06/2019 08:42, Jan Beulich wrote:
>>>>> On 04.06.19 at 18:11, <julien.grall@arm.com> wrote:
>>> On 5/31/19 10:53 AM, Jan Beulich wrote:
>>>> According to Linux commit e75bef2a4f ("arm64: Select
>>>> ARCH_HAS_FAST_MULTIPLIER") this is a further improvement over the
>>>> variant using only bitwise operations on at least some hardware, and no
>>>> worse on other.
>>>>
>>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> RFC: To be honest I'm not fully convinced this is a win in particular in
>>>>        the hweight32() case, as there's no actual shift insn which gets
>>>>        replaced by the multiplication. Even for hweight64() the compiler
>>>>        could emit better code and avoid the explicit shift by 32 (which it
>>>>        emits at least for me).
>>>
>>> I can see multiplication instruction used in both hweight32() and
>>> hweight64() with the compiler I am using.
>> 
>> That is for which exact implementation?
> 
> A simple call hweight64().

That's as ambiguous as it was before: Are you talking about
un-patched code (before this series), or code with the patches up to at
least this one applied? What I'm trying to understand is whether your
compiler is smart enough to use MUL without us telling it to.

Jan



Re: [Xen-devel] [PATCH RFC 3/4] Arm64: further speed-up to hweight{32, 64}()
Posted by Julien Grall 4 years, 10 months ago

On 05/06/2019 10:59, Jan Beulich wrote:
>>>> On 05.06.19 at 11:24, <julien.grall@arm.com> wrote:
>> On 05/06/2019 08:42, Jan Beulich wrote:
>>>>>> On 04.06.19 at 18:11, <julien.grall@arm.com> wrote:
>>>> On 5/31/19 10:53 AM, Jan Beulich wrote:
>>>>> According to Linux commit e75bef2a4f ("arm64: Select
>>>>> ARCH_HAS_FAST_MULTIPLIER") this is a further improvement over the
>>>>> variant using only bitwise operations on at least some hardware, and no
>>>>> worse on other.
>>>>>
>>>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>> ---
>>>>> RFC: To be honest I'm not fully convinced this is a win in particular in
>>>>>         the hweight32() case, as there's no actual shift insn which gets
>>>>>         replaced by the multiplication. Even for hweight64() the compiler
>>>>>         could emit better code and avoid the explicit shift by 32 (which it
>>>>>         emits at least for me).
>>>>
>>>> I can see multiplication instruction used in both hweight32() and
>>>> hweight64() with the compiler I am using.
>>>
>>> That is for which exact implementation?
>>
>> A simple call hweight64().
> 
> That's as ambiguous as it was before: Are you talking about
> un-patched code (before this series), or patches up to at
> least this one applied. What I'm trying to understand is if your
> compiler is smart enough to use MUL without us telling it to.

unsigned int test_hweight64(uint64_t t)
{
     return hweight64(t);
}

I looked at the difference between before and after this patch. Before, the 
multiplication is not used. After, it is used, with fewer instructions emitted.

Cheers,

-- 
Julien Grall

Re: [Xen-devel] [PATCH RFC 3/4] Arm64: further speed-up to hweight{32, 64}()
Posted by Julien Grall 4 years, 10 months ago

On 6/5/19 10:24 AM, Julien Grall wrote:
> I will commit it later on with another bunch of patches.

Pushed now.

Thank you,

-- 
Julien Grall

[Xen-devel] [PATCH 4/4] x86: use POPCNT for hweight<N>() when available
Posted by Jan Beulich 4 years, 10 months ago
This is faster than using the software implementation, and the insn is
available on all half-way recent hardware. Therefore convert
generic_hweight<N>() to out-of-line functions (without affecting Arm)
and use alternatives patching to replace the function calls.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Note: Using "g" instead of "X" as the dummy constraint in hweight64()
      and hweight32() produces, contrary to what one might expect,
      slightly better code with gcc 8.

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -31,6 +31,7 @@ obj-y += emul-i8254.o
 obj-y += extable.o
 obj-y += flushtlb.o
 obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
+obj-y += hweight.o
 obj-y += hypercall.o
 obj-y += i387.o
 obj-y += i8259.o
@@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
 efi/mkreloc: efi/mkreloc.c
 	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
 
+nocov-y += hweight.o
+hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
+
 .PHONY: clean
 clean::
 	rm -f asm-offsets.s *.lds boot/*.o boot/*~ boot/core boot/mkelf32
--- /dev/null
+++ b/xen/arch/x86/hweight.c
@@ -0,0 +1,28 @@
+#define generic_hweight64 _hweight64
+#define generic_hweight32 _hweight32
+#define generic_hweight16 _hweight16
+#define generic_hweight8  _hweight8
+
+#include <xen/compiler.h>
+
+#undef inline
+#define inline always_inline
+
+#include <xen/bitops.h>
+
+#undef generic_hweight8
+#undef generic_hweight16
+#undef generic_hweight32
+#undef generic_hweight64
+
+#define HWEIGHT(n)                                             \
+typeof(_hweight##n) generic_hweight##n;                        \
+unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \
+{                                                              \
+    return _hweight##n(x);                                     \
+}
+
+HWEIGHT(64)
+HWEIGHT(32)
+HWEIGHT(16)
+HWEIGHT( 8)
--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
     return r + 1;
 }
 
+/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
+#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
+#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
+
 /**
  * hweightN - returns the hamming weight of a N-bit word
  * @x: the word to weigh
  *
  * The Hamming Weight of a number is the total number of bits set in it.
  */
-#define hweight64(x) generic_hweight64(x)
-#define hweight32(x) generic_hweight32(x)
-#define hweight16(x) generic_hweight16(x)
-#define hweight8(x) generic_hweight8(x)
+#define hweight_(n, x, insn, setup, cout, cin) ({                         \
+    unsigned int res_;                                                    \
+    /*                                                                    \
+     * For the function call the POPCNT input register needs to be marked \
+     * modified as well. Set up a local variable of appropriate type      \
+     * for this purpose.                                                  \
+     */                                                                   \
+    typeof((uint##n##_t)(x) + 0U) val_ = (x);                             \
+    alternative_io(setup "; call generic_hweight" #n,                     \
+                   insn, X86_FEATURE_POPCNT,                              \
+                   ASM_OUTPUT2([res] "=a" (res_), [val] cout (val_)),     \
+                   [src] cin (val_));                                     \
+    res_;                                                                 \
+})
+#define hweight64(x) hweight_(64, x, POPCNT_64, "", "+D", "g")
+#define hweight32(x) hweight_(32, x, POPCNT_32, "", "+D", "g")
+#define hweight16(x) hweight_(16, x, "movzwl %w[src], %[val]; " POPCNT_32, \
+                              "mov %[src], %[val]", "=&D", "rm")
+#define hweight8(x)  hweight_( 8, x, "movzbl %b[src], %[val]; " POPCNT_32, \
+                              "mov %[src], %[val]", "=&D", "rm")
 
 #endif /* _X86_BITOPS_H */
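
For reference, the effect of the patching can be modelled in plain C
(illustrative only; Xen patches the call site once at boot rather than
branching, and the _model() names below are made up):

#include <stdint.h>
#include <stdio.h>

/* Fallback equivalent of the out-of-line generic_hweight64(). */
static unsigned int generic_hweight64_model(uint64_t w)
{
    w -= (w >> 1) & 0x5555555555555555ull;
    w  = (w & 0x3333333333333333ull) + ((w >> 2) & 0x3333333333333333ull);
    w  = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0full;
    return (w * 0x0101010101010101ull) >> 56;
}

/* Pick between the POPCNT instruction and the generic routine at runtime. */
static unsigned int hweight64_model(uint64_t w)
{
    if ( __builtin_cpu_supports("popcnt") )
        return __builtin_popcountll(w);
    return generic_hweight64_model(w);
}

int main(void)
{
    __builtin_cpu_init();
    printf("%u\n", hweight64_model(0xdeadbeefcafef00dull));
    return 0;
}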




Re: [Xen-devel] [PATCH 4/4] x86: use POPCNT for hweight<N>() when available
Posted by Andrew Cooper 4 years, 10 months ago
On 31/05/2019 02:54, Jan Beulich wrote:
> This is faster than using the software implementation, and the insn is
> available on all half-way recent hardware. Therefore convert
> generic_hweight<N>() to out-of-line functions (without affecting Arm)
> and use alternatives patching to replace the function calls.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

So, I trust you weren't expecting to just ack this and let it go in?

The principle of the patch (use popcnt when available) is an improvement
which I'm entirely in agreement with, but everything else is a problem.

The long and the short of it is that I'm not going to accept any version
of this which isn't the Linux version.

From a microarchitectural standpoint, the tradeoff between fractional
register scheduling flexibility (which in practice is largely bound
anyway by real function calls in surrounding code) and increased icache
pressure/coldness (from the redundant function copies) falls largely in
favour of the Linux way of doing it, a cold icache line is
disproportionally more expensive than requiring the compiler to order
its registers differently (especially as all non-obsolete processors
these days have zero-cost register renaming internally, for the purpose
of superscalar execution).

If however you can demonstrate that a number of CPU microarchitects are
wrong about what is faster in practice, then I will happily accept a
patch which matches Linux, once you've improved the performance of Linux.

> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>  efi/mkreloc: efi/mkreloc.c
>  	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>  
> +nocov-y += hweight.o

Irrespective of the exact specifics of how the patch ends up, I don't
think the nocov restriction is a direction we want to take.

Coverage may not be a thing used in production, but when it is used for
development, it needs to not have random holes missing in the results data.

> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
> +

Does this work with Clang?


> --- /dev/null
> +++ b/xen/arch/x86/hweight.c
> @@ -0,0 +1,28 @@
> +#define generic_hweight64 _hweight64
> +#define generic_hweight32 _hweight32
> +#define generic_hweight16 _hweight16
> +#define generic_hweight8  _hweight8
> +
> +#include <xen/compiler.h>
> +
> +#undef inline
> +#define inline always_inline
> +
> +#include <xen/bitops.h>
> +
> +#undef generic_hweight8
> +#undef generic_hweight16
> +#undef generic_hweight32
> +#undef generic_hweight64
> +
> +#define HWEIGHT(n)                                             \
> +typeof(_hweight##n) generic_hweight##n;                        \
> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \

A question to the rest of xen-devel.  Is there anyone else who can
actually work out what this construct is doing?

I'd like to get a feel for how many people can even follow some of our C.
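
(For the record, what the construct works out to can be checked with a
standalone snippet; hypothetical, not part of the patch, and assuming a
32-bit int as on all Xen targets:)

#include <stdint.h>

/* typeof((uintN_t)0 + 0U): the 8-, 16- and 32-bit cases promote to
 * unsigned int, while the 64-bit case stays uint64_t, so
 * generic_hweight8/16/32() take unsigned int and generic_hweight64()
 * takes uint64_t. */
_Static_assert(_Generic((uint8_t)0  + 0U, unsigned int: 1, default: 0),
               "promotes to unsigned int");
_Static_assert(_Generic((uint16_t)0 + 0U, unsigned int: 1, default: 0),
               "promotes to unsigned int");
_Static_assert(_Generic((uint32_t)0 + 0U, unsigned int: 1, default: 0),
               "promotes to unsigned int");
_Static_assert(_Generic((uint64_t)0 + 0U, uint64_t: 1, default: 0),
               "stays uint64_t");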

> --- a/xen/include/asm-x86/bitops.h
> +++ b/xen/include/asm-x86/bitops.h
> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>      return r + 1;
>  }
>  
> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"

So (the dangers of false micro-optimisation aside), POPCNT_32 will
probably be better off using a redundant %ds prefix.

The reason is that the result needs padding to 5 bytes, as the original
instruction is `call disp32`, meaning that a single byte nop needs
inserting.  The version with a single nop takes two decode ports as
opposed to one, and single byte nops are forced to take an execution
delay for backwards compatibility with DoS.
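
Concretely, the padded form being suggested would be something like
(illustrative only; not what the patch currently uses):

/* 0x3E is a %ds segment-override prefix, ignored in 64-bit mode; it pads
 * the 4-byte POPCNT to the 5 bytes of the original `call disp32`, so no
 * single-byte nop is needed. */
#define POPCNT_32_PADDED ".byte 0x3E, 0xF3, 0x0F, 0xB8, 0xC7"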

OTOH, I also bet that noone could observe a difference without using
perf counters and fetch/decode uarch events.

~Andrew

Re: [Xen-devel] [PATCH 4/4] x86: use POPCNT for hweight<N>() when available
Posted by Jan Beulich 4 years, 10 months ago
>>> On 31.05.19 at 22:43, <andrew.cooper3@citrix.com> wrote:
> On 31/05/2019 02:54, Jan Beulich wrote:
>> This is faster than using the software implementation, and the insn is
>> available on all half-way recent hardware. Therefore convert
>> generic_hweight<N>() to out-of-line functions (without affecting Arm)
>> and use alternatives patching to replace the function calls.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> So, I trust you weren't expecting to just ack this and let it go in?
> 
> The principle of the patch (use popcnt when available) is an improvement
> which I'm entirely in agreement with, but everything else is a problem.
> 
> The long and the short of it is that I'm not going to accept any version
> of this which isn't the Linux version.

You're kidding. We want to move away from assembly wherever we
can, and you demand new assembly code?

> From a microarchitectural standpoint, the tradeoff between fractional
> register scheduling flexibility (which in practice is largely bound
> anyway by real function calls in surrounding code) and increased icache
> pressure/coldness (from the redundant function copies) falls largely in
> favour of the Linux way of doing it, a cold icache line is
> disproportionally more expensive than requiring the compiler to order
> its registers differently (especially as all non-obsolete processors
> these days have zero-cost register renaming internally, for the purpose
> of superscalar execution).

I'm afraid I'm struggling heavily as to what you're wanting to tell
me here: Where's the difference (in this regard) between the
change here and the way how Linux does it? Both emit a CALL
insn with registers set up suitably for it, and both patch it with a
POPCNT insn using the registers as demanded by the CALL.

The difference to Linux is what gets called, not how the patching
works (afaict). I'm simply not buying the combination of arguments
and effects of the removal of the use of -ffixed-*.

>> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>>  efi/mkreloc: efi/mkreloc.c
>>  	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>  
>> +nocov-y += hweight.o
> 
> Irrespective of the exact specifics of how the patch ends up, I don't
> think the nocov restriction is a direction we want to take.
> 
> Coverage may not be a thing used in production, but when it is used for
> development, it needs to not have random holes missing in the results data.

Sure, but then we can't avoid saving/restoring the callee-clobbered
registers in the functions to be called. Which in turn means I see no
way of avoiding code duplication (be it in C or assembly) of the
generic_hweight<N>() implementations.

>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>> +
> 
> Does this work with Clang?

I have no idea.

>> --- /dev/null
>> +++ b/xen/arch/x86/hweight.c
>> @@ -0,0 +1,28 @@
>> +#define generic_hweight64 _hweight64
>> +#define generic_hweight32 _hweight32
>> +#define generic_hweight16 _hweight16
>> +#define generic_hweight8  _hweight8
>> +
>> +#include <xen/compiler.h>
>> +
>> +#undef inline
>> +#define inline always_inline
>> +
>> +#include <xen/bitops.h>
>> +
>> +#undef generic_hweight8
>> +#undef generic_hweight16
>> +#undef generic_hweight32
>> +#undef generic_hweight64
>> +
>> +#define HWEIGHT(n)                                             \
>> +typeof(_hweight##n) generic_hweight##n;                        \
>> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \
> 
> A question to the rest of xen-devel.  Is there anyone else who can
> actually work out what this construct is doing?
> 
> I'd like to get a feel for how many people can even follow some of our C.

I know you don't like such constructs, but you likely also know
that I don't like the redundancy resulting when not using them.
You've vetoed a change by Roger in this direction recently.
While I did accept this (as the code we have is fine as is as
well), I don't think your personal taste should rule out such
uses. If anything, may I ask for clear guidelines (to be put into
./CODING_STYLE after having reached consensus) which parts
of the C language are fine to use, and which ones aren't?

>> --- a/xen/include/asm-x86/bitops.h
>> +++ b/xen/include/asm-x86/bitops.h
>> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>>      return r + 1;
>>  }
>>  
>> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
>> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
>> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
> 
> So (the dangers of false micro-optimsiation aside), POPCNT_32 will
> probably be better using a redundant %ds prefix.

For the use in hweight32() - perhaps. But not for the uses in
hweight{16,8}(), as there the original code and the replacement fully
match up in length.

> The reason is that the result needs padding to 5 bytes, as the original
> instruction is `call disp32`, meaning that a single byte nop needs
> inserting.  The version with a single nop takes two decode ports as
> opposed to one, and single byte nops are forced to take an execution
> delay for backwards compatibility with DoS.
> 
> OTOH, I also bet that noone could observe a difference without using
> perf counters and fetch/decode uarch events.

Plus this is then a more general problem to address, not something
to specifically worry about here. Would you have asked for an
explicit override if the insn was written using a proper mnemonic
(i.e. if we didn't need to cope with incapable binutils)?

Furthermore, if we were to follow Linux, we'd rather use an empty
REX prefix here.
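
Something like the following, with the 0x40 byte carrying no REX bits and
sitting between the mandatory 0xF3 prefix and the 0x0F escape (again
illustrative only, not part of this patch):

#define POPCNT_32_REX ".byte 0xF3, 0x40, 0x0F, 0xB8, 0xC7"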

Jan


Re: [Xen-devel] [PATCH 4/4] x86: use POPCNT for hweight<N>() when available
Posted by Jan Beulich 4 years, 10 months ago
>>> On 03.06.19 at 10:13, <JBeulich@suse.com> wrote:
>>>> On 31.05.19 at 22:43, <andrew.cooper3@citrix.com> wrote:
>> On 31/05/2019 02:54, Jan Beulich wrote:
>>> This is faster than using the software implementation, and the insn is
>>> available on all half-way recent hardware. Therefore convert
>>> generic_hweight<N>() to out-of-line functions (without affecting Arm)
>>> and use alternatives patching to replace the function calls.
>>>
>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> So, I trust you weren't expecting to just ack this and let it go in?
>> 
>> The principle of the patch (use popcnt when available) is an improvement
>> which I'm entirely in agreement with, but everything else is a problem.
>> 
>> The long and the short of it is that I'm not going to accept any version
>> of this which isn't the Linux version.
> 
> You're kidding. We want to move away from assembly wherever we
> can, and you demand new assembly code?
> 
>> From a microarchitectural standpoint, the tradeoff between fractional
>> register scheduling flexibility (which in practice is largely bound
>> anyway by real function calls in surrounding code) and increased icache
>> pressure/coldness (from the redundant function copies) falls largely in
>> favour of the Linux way of doing it, a cold icache line is
>> disproportionally more expensive than requiring the compiler to order
>> its registers differently (especially as all non-obsolete processors
>> these days have zero-cost register renaming internally, for the purpose
>> of superscalar execution).
> 
> I'm afraid I'm struggling heavily as to what you're wanting to tell
> me here: Where's the difference (in this regard) between the
> change here and the way how Linux does it? Both emit a CALL
> insn with registers set up suitably for it, and both patch it with a
> POPCNT insn using the registers as demanded by the CALL.

Having thought about this some more, in an attempt to try to
understand (a) what you mean and (b) how you want things
to be done "your way", I'm afraid I've got more confused: Your
reply reminds me heavily of the discussion we had on the BMI2
patching series I had done (and now dropped): There you
complained about me _not_ using fixed registers and hence
potentially calling frequent i-cache-cold lines to be accessed.
While my original plan was to use a similar approach here, I
specifically went the opposite way to avoid similar complaints
of yours. Just to find that you use the (apparently) same
argument again. As a result I can only conclude that I'm now
pretty unclear on what model you would actually approve of.

Jan



Re: [Xen-devel] [PATCH 4/4] x86: use POPCNT for hweight<N>() when available
Posted by Andrew Cooper 4 years, 10 months ago
On 03/06/2019 09:13, Jan Beulich wrote:
>>>> On 31.05.19 at 22:43, <andrew.cooper3@citrix.com> wrote:
>> On 31/05/2019 02:54, Jan Beulich wrote:
>>> This is faster than using the software implementation, and the insn is
>>> available on all half-way recent hardware. Therefore convert
>>> generic_hweight<N>() to out-of-line functions (without affecting Arm)
>>> and use alternatives patching to replace the function calls.
>>>
>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> So, I trust you weren't expecting to just ack this and let it go in?
>>
>> The principle of the patch (use popcnt when available) is an improvement
>> which I'm entirely in agreement with, but everything else is a problem.
>>
>> The long and the short of it is that I'm not going to accept any version
>> of this which isn't the Linux version.
> You're kidding. We want to move away from assembly wherever we
> can, and you demand new assembly code?
>
>> From a microarchitectural standpoint, the tradeoff between fractional
>> register scheduling flexibility (which in practice is largely bound
>> anyway by real function calls in surrounding code) and increased icache
>> pressure/coldness (from the redundant function copies) falls largely in
>> favour of the Linux way of doing it, a cold icache line is
>> disproportionally more expensive than requiring the compiler to order
>> its registers differently (especially as all non-obsolete processors
>> these days have zero-cost register renaming internally, for the purpose
>> of superscalar execution).
> I'm afraid I'm struggling heavily as to what you're wanting to tell
> me here: Where's the difference (in this regard) between the
> change here and the way how Linux does it? Both emit a CALL
> insn with registers set up suitably for it, and both patch it with a
> POPCNT insn using the registers as demanded by the CALL.
>
> The difference to Linux is what gets called, not how the patching
> works (afaict). I'm simply not buying the combination of arguments
> and effects of the removal of the use of -ffixed-*.

The patch description made it look as if it was still variadic on the
input registers, and throughout the entire patch, I failed to work out
what the -ffixed-r* was actually trying to achieve.

After actually trying the code locally, I see that isn't the case.  I
apologise for my original reaction.

>
>>> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>>>  efi/mkreloc: efi/mkreloc.c
>>>  	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>>  
>>> +nocov-y += hweight.o
>> Irrespective of the exact specifics of how the patch ends up, I don't
>> think the nocov restriction is a direction we want to take.
>>
>> Coverage may not be a thing used in production, but when it is used for
>> development, it needs to not have random holes missing in the results data.
> Sure, but then we can't avoid saving/restoring the callee clobbered
> registers in the to be called functions.

Why is this of concern?

a) it is the compiler's job to DTRT, and the sum total of GCC's coverage
data appears to be "addq $1, mumble(%rip)"

b) coverage is just one of several things which might add
instrumentation, ubsan being the other example which Xen already supports.

>  Which in turn means I see no
> way of avoiding code duplications (be it in C or assembly) of the
> generic_hweight<N>() implementations.
>
>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>>> +
>> Does this work with Clang?
> I have no idea.

I do - every time I have a suspicion, the answer is invariably no.

clang: error: unknown argument: '-ffixed-rcx'
clang: error: unknown argument: '-ffixed-rdx'
clang: error: unknown argument: '-ffixed-rsi'
clang: error: unknown argument '-ffixed-r8', did you mean '-ffixed-r9'?
clang: error: unknown argument '-ffixed-r10', did you mean '-ffixed-r19'?
clang: error: unknown argument '-ffixed-r11', did you mean '-ffixed-r19'?

As it turns out, the missing -ffixed-r9 is a Hexagon option only, and
isn't applicable for x86.

>
>>> --- /dev/null
>>> +++ b/xen/arch/x86/hweight.c
>>> @@ -0,0 +1,28 @@
>>> +#define generic_hweight64 _hweight64
>>> +#define generic_hweight32 _hweight32
>>> +#define generic_hweight16 _hweight16
>>> +#define generic_hweight8  _hweight8
>>> +
>>> +#include <xen/compiler.h>
>>> +
>>> +#undef inline
>>> +#define inline always_inline
>>> +
>>> +#include <xen/bitops.h>
>>> +
>>> +#undef generic_hweight8
>>> +#undef generic_hweight16
>>> +#undef generic_hweight32
>>> +#undef generic_hweight64
>>> +
>>> +#define HWEIGHT(n)                                             \
>>> +typeof(_hweight##n) generic_hweight##n;                        \
>>> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \
>> A question to the rest of xen-devel.  Is there anyone else who can
>> actually work out what this construct is doing?
>>
>> I'd like to get a feel for how many people can even follow some of our C.
> I know you don't like such constructs, but you likely also know
> that I don't like the redundancy resulting when not using them.
> You've vetoed a change by Roger in this direction recently.
> While I did accept this (as the code we have is fine as is as
> well), I don't think your personal taste should rule out such
> uses. If anything, may I ask for clear guidelines (to be put into
> ./CODING_STYLE after having reached consensus) which parts
> of the C language are fine to use, and which ones aren't?

If it were up to me, I'd reject any use of constructs like this.  They
are fundamentally incompatible with easy-to-follow code, and I value
that as a far more important property than the source file being a few
lines shorter.

Particularly in this case, it really is just obfuscation, because the
longhand version can be written shorter than the HWEIGHT macro alone,
let alone its invocations:

unsigned int generic_hweight8 (unsigned int x) { return _hweight8(x);  }
unsigned int generic_hweight16(unsigned int x) { return _hweight16(x); }
unsigned int generic_hweight32(unsigned int x) { return _hweight32(x); }
unsigned int generic_hweight64(uint64_t x)     { return _hweight64(x); }

is a far easier to read result.  That said, the point is moot because
this file still doesn't compile.

>
>>> --- a/xen/include/asm-x86/bitops.h
>>> +++ b/xen/include/asm-x86/bitops.h
>>> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>>>      return r + 1;
>>>  }
>>>  
>>> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
>>> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
>>> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
>> So (the dangers of false micro-optimsiation aside), POPCNT_32 will
>> probably be better using a redundant %ds prefix.
> For the use in hweight32() - perhaps. But not for the uses in
> hweight{16,8}(), as there original code and replacement fully
> match up in lengths.

Looking at the compiler-generated code, the 32- and 16-bit versions are identical
other than the final imul.  The 8-bit case is very similar, but can
actually get away with imm8 rather than imm32.

There is one caller each for the 8- and 16-bit versions, both of which are
in is_multicast_dest() in the vlapic emulation code.  This particular
example would actually benefit from Linux's approach of aliasing the 8-
and 16-bit versions to the 32-bit version.
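
Something along these lines (hypothetical sketch, not the literal Linux or
Xen code):

/* Alias the narrow cases onto the 32-bit path, so only a single patched
 * implementation is needed; the casts guarantee zero-extension. */
#define hweight16(x) hweight32((uint16_t)(x))
#define hweight8(x)  hweight32((uint8_t)(x))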

The only user which is in any way a fastpath is in __bitmap_weight(),
and I've got another cleanup patch for that.  (I also bet that doing an
x86-specific vectorised version of __bitmap_weight() using POPCNT's
memory operand would be a perf win, but that can wait until some free
time and some proper profiling support appears.)

Given that this is a helper which is unlikely to ever be rewritten, and
there doesn't appear to be an obvious way to get Clang to avoid using
specific registers, I'd still recommend the Linux way, with the 32- and
64-bit versions custom-written in assembly, and not worrying about the
duplication.

>
>> The reason is that the result needs padding to 5 bytes, as the original
>> instruction is `call disp32`, meaning that a single byte nop needs
>> inserting.  The version with a single nop takes two decode ports as
>> opposed to one, and single byte nops are forced to take an execution
>> delay for backwards compatibility with DoS.
>>
>> OTOH, I also bet that noone could observe a difference without using
>> perf counters and fetch/decode uarch events.
> Plus this is then a more general problem to address, not something
> to specifically worry about here. Would you have asked for an
> explicit override if the insn was written using a proper mnemonic
> (i.e. if we didn't need to cope with incapable binutils)?

I probably wouldn't have noticed, but as alluded to earlier, I really
don't expect it matters.

Talking of binutils, when are we going to get around to upping our
minimum version?

~Andrew

Re: [Xen-devel] [PATCH 4/4] x86: use POPCNT for hweight<N>() when available
Posted by Jan Beulich 4 years, 10 months ago
>>> On 04.06.19 at 21:07, <andrew.cooper3@citrix.com> wrote:
> On 03/06/2019 09:13, Jan Beulich wrote:
>>>>> On 31.05.19 at 22:43, <andrew.cooper3@citrix.com> wrote:
>>> On 31/05/2019 02:54, Jan Beulich wrote:
>>>> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>>>>  efi/mkreloc: efi/mkreloc.c
>>>>  	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>>>  
>>>> +nocov-y += hweight.o
>>> Irrespective of the exact specifics of how the patch ends up, I don't
>>> think the nocov restriction is a direction we want to take.
>>>
>>> Coverage may not be a thing used in production, but when it is used for
>>> development, it needs to not have random holes in the results data.
>> Sure, but then we can't avoid saving/restoring the callee-clobbered
>> registers in the to-be-called functions.
> 
> Why is this of concern?
> 
> a) it's the compiler's job to DTRT, and the sum total of GCC's coverage
> data appears to be "addq $1, mumble(%rip)"
> 
> b) coverage is just one of several things which might add
> instrumentation, ubsan being another example which Xen already supports.

Hmm, have I been under the wrong impression that, like for -p, a
function call _might_ get inserted at the beginning of functions? Is
it spelled out somewhere what exact code can be inserted? The
command line option descriptions alone don't ... What we need to
avoid are function calls; I believe any other sort of code insertion
ought to be okay.

However - having seen your suggestion to follow Linux and use
assembly implementations: How would not adding coverage
instrumentation here differ from there not being any in assembly
code?

As to ubsan - yes, this should be treated the same way as the
coverage instrumentation. I.e. unless there's a guarantee that
it can't result in emitted function calls, it would need to be
suppressed.

In fact I was considering adding a build step to verify that the
object file has no external dependencies, i.e. in particular doesn't
call any functions.

>>  Which in turn means I see no
>> way of avoiding code duplications (be it in C or assembly) of the
>> generic_hweight<N>() implementations.
>>
>>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>>>> +
>>> Does this work with Clang?
>> I have no idea.
> 
> I do - every time I have a suspicion, the answer is invariably no.
> 
> clang: error: unknown argument: '-ffixed-rcx'
> clang: error: unknown argument: '-ffixed-rdx'
> clang: error: unknown argument: '-ffixed-rsi'
> clang: error: unknown argument '-ffixed-r8', did you mean '-ffixed-r9'?
> clang: error: unknown argument '-ffixed-r10', did you mean '-ffixed-r19'?
> clang: error: unknown argument '-ffixed-r11', did you mean '-ffixed-r19'?
> 
> As it turns out, the missing -ffixed-r9 is a Hexagon option only, and
> isn't applicable for x86.

And -ffixed-r19 can hardly be x86 either. Do they expect the 32-bit
register names? https://clang.llvm.org/docs/UsersManual.html has no
mention of -ffixed-* at all, despite the hints the compiler gives in
the output you've quoted.

I have to admit that I find it pretty frustrating to have to deal with
such issues: If they claim to be gcc compatible, there shouldn't be a
need to think twice before using basically every not-entirely-standard
command line option. However much I like being able to have code
compilable with multiple compilers (not least because one may spot
issues another doesn't), I don't think this is a basis to work from.
I.e. without a sufficient level of compatibility on their part, I'd
like to raise the question of whether we really need to retain clang
compatibility in our code.

>>>> --- /dev/null
>>>> +++ b/xen/arch/x86/hweight.c
>>>> @@ -0,0 +1,28 @@
>>>> +#define generic_hweight64 _hweight64
>>>> +#define generic_hweight32 _hweight32
>>>> +#define generic_hweight16 _hweight16
>>>> +#define generic_hweight8  _hweight8
>>>> +
>>>> +#include <xen/compiler.h>
>>>> +
>>>> +#undef inline
>>>> +#define inline always_inline
>>>> +
>>>> +#include <xen/bitops.h>
>>>> +
>>>> +#undef generic_hweight8
>>>> +#undef generic_hweight16
>>>> +#undef generic_hweight32
>>>> +#undef generic_hweight64
>>>> +
>>>> +#define HWEIGHT(n)                                             \
>>>> +typeof(_hweight##n) generic_hweight##n;                        \
>>>> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \
>>> A question to the rest of xen-devel.  Is there anyone else who can
>>> actually work out what this construct is doing?
>>>
>>> I'd like to get a feel for how many people can even follow some of our C.
>> I know you don't like such constructs, but you likely also know
>> that I don't like the redundancy resulting when not using them.
>> You've vetoed a change by Roger in this direction recently.
>> While I did accept this (as the code we have is fine as is as
>> well), I don't think your personal taste should rule out such
>> uses. If anything, may I ask for clear guidelines (to be put into
>> ./CODING_STYLE after having reached consensus) which parts
>> of the C language are fine to use, and which ones aren't?
> 
> If it were up to me, I'd reject any use of constructs like this.  They
> are fundamentally incompatible with easy-to-follow code, and I value
> that as a far more important property than the source file being a few
> lines shorter.
> 
> Particularly in this case, it really is just obfuscation, because the
> longhand version can be written shorter than the HWEIGHT macro alone,
> let alone its invocations:
> 
> unsigned int generic_hweight8 (unsigned int x) { return _hweight8(x);  }
> unsigned int generic_hweight16(unsigned int x) { return _hweight16(x); }
> unsigned int generic_hweight32(unsigned int x) { return _hweight32(x); }
> unsigned int generic_hweight64(uint64_t x)     { return _hweight64(x); }
> 
> is a far easier to read result.

Okay, I'll switch. But you realize that this loses a property I had
specifically added: My original variant also ensured _hweight<N>()
and generic_hweight<N>() have matching types.
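
For anyone trying to follow the construct, it resolves (via the usual
arithmetic conversions; this is just my reading, not code from the
patch) to roughly:

typeof((uint8_t)0  + 0U) a;  /* unsigned int */
typeof((uint16_t)0 + 0U) b;  /* unsigned int */
typeof((uint32_t)0 + 0U) c;  /* unsigned int */
typeof((uint64_t)0 + 0U) d;  /* uint64_t */

i.e. the parameter type always matches what _hweight<N>() takes.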

>  That said, the point is moot because this file still doesn't compile.

With clang you mean? Or with gcc (where it obviously does compile
for me, so I'd appreciate more detail).

>>>> --- a/xen/include/asm-x86/bitops.h
>>>> +++ b/xen/include/asm-x86/bitops.h
>>>> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>>>>      return r + 1;
>>>>  }
>>>>  
>>>> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
>>>> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
>>>> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
>>> So (the dangers of false micro-optimisation aside), POPCNT_32 will
>>> probably be better using a redundant %ds prefix.
>> For the use in hweight32() - perhaps. But not for the uses in
>> hweight{16,8}(), as there original code and replacement fully
>> match up in lengths.
> 
> Looking at the compiler-generated code, the 32- and 16-bit versions are
> identical other than the final imul.  The 8-bit case is very similar,
> but can actually get away with an imm8 rather than an imm32.

What imul? What imm8 / imm32? Are you talking about
generic_hweight<N>()? The code fragment above is the to-be-
patched-in code, which shouldn't require anything like this.

> There is one caller each of the 8- and 16-bit versions, both of which
> are in is_multicast_dest() in the vlapic emulation code.  This
> particular case would actually benefit from Linux's approach of
> aliasing the 8- and 16-bit versions to the 32-bit one.

Possibly. However, for one thing I don't think specific use sites
should influence the decision here. And for another, I don't think
hweight() of any form is needed in those two cases in the first place.
All they want to know is whether more than one bit is set, which can
be achieved with a standard is-power-of-2 check.
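
A minimal sketch of such a check (the helper name is made up for
illustration; bool is assumed to be available as elsewhere in the tree):

static inline bool multiple_bits_set(unsigned int mask)
{
    /*
     * Clearing the lowest set bit leaves a non-zero value only if at
     * least two bits were set.
     */
    return mask & (mask - 1);
}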

> The only user which is in any way a fastpath is in __bitmap_weight(),
> and I've got another cleanup patch for that.  (I also bet that an
> x86-specific vectorised version of __bitmap_weight() using POPCNT's
> memory operand would be a perf win, but that can wait until some free
> time and some proper profiling support appear.)
> 
> Given that this is a helper which is unlikely to ever be rewritten, and
> there doesn't appear to be an obvious way to get Clang to avoid using
> specific registers, I'd still recommend the Linux way, with the 32- and
> 64-bit versions custom-written in assembly, and not worry about the
> duplication.

While you're likely right about the "never be rewritten" part (leaving
aside the logic change in patch 1 of the series, which - had we had
an assembly implementation already - would have meant a rewrite),
I'm still rather puzzled by your suggestion to extend the amount of
assembly code we have, when it can easily be written in C.

Anyway, if I were to clone Linux's code, I wouldn't use it entirely
as is (in particular I'd like to stick to clobbering %rdi at the call
site, rather than preserving it in the functions). Yet you seem to be
demanding that we do exactly that.

> Talking of binutils, when are we going to get around to upping our
> minimum version?

I guess as soon as someone cares enough to submit a proposal for a
new minimum version that covers all the environments we care about
being able to build on. IIRC there's already some breakage anyway for
the oldest version we claim to support.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel