The compilers provide builtin functions equivalent to the kernel's
ffs(), __ffs() and ffz(). The kernel uses optimized assembly which
produces better code than the builtin functions. However, such
assembly code cannot be optimized when applied to constant
expressions.
This series relies on __builtin_constant_p to select the optimal solution:
* use the kernel's assembly for non-constant expressions
* use the compiler's __builtin functions for constant expressions.
** Statistics **
Patch 1/2 optimizes 26.7% of ffs() calls and patch 2/2 optimizes 27.9%
of __ffs() and ffz() calls (details of the calculation in each patch).
** Changelog **
v3 -> v4:
* (no changes to the code; only the commit messages were modified)
* Remove the note and link to Nick's message in patch 1/2, cf.:
https://lore.kernel.org/all/CAKwvOdnnDaiJcV1gr9vV+ya-jWxx7+2KJNTDThyFctVDOgt9zQ@mail.gmail.com/
* Add Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> tag in patch 2/2.
v2 -> v3:
* Redacted out the instructions after ret and before the next
  function in the assembly output.
* Added a note and a link to Nick's message on the constant-propagation
  missed optimization in clang:
https://lore.kernel.org/all/CAKwvOdnH_gYv4qRN9pKY7jNTQK95xNeH1w1KZJJmvCkh8xJLBg@mail.gmail.com/
* Fix a copy/paste typo in the statistics of patch 1/2. The number of
  occurrences before the patches is 1081, not 3607 (the 26.7%
  reduction remains correct).
* Rename the functions as follows:
- __varible_ffs() -> variable___ffs()
- __variable_ffz() -> variable_ffz()
* Add Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> tag in patch 1/2.
Vincent Mailhol (2):
x86/asm/bitops: ffs: use __builtin_ffs to evaluate constant
expressions
x86/asm/bitops: __ffs,ffz: use __builtin_ctzl to evaluate constant
expressions
arch/x86/include/asm/bitops.h | 64 +++++++++++++++++++++--------------
1 file changed, 38 insertions(+), 26 deletions(-)
--
2.35.1
On Sun 24 Jul. 2022 at 00:15, Vincent Mailhol <mailhol.vincent@wanadoo.fr> wrote:
> [...]

Hi Thomas, Ingo, Borislav, Dave and Peter,

This patch series [1] has been waiting for more than two months
already. So far, I have not heard back from any of the x86
maintainers. Do you see anything wrong with this series? If not, any
chance that one of you could pick it up?

[1] https://lore.kernel.org/all/20220625072645.251828-1-mailhol.vincent@wanadoo.fr/#t

Thank you,

Yours sincerely,
Vincent Mailhol
On Fri, Jul 29, 2022 at 08:24:58PM +0900, Vincent MAILHOL wrote:
> This patch series [1] has been waiting for more than two months
> already. So far, I have not heard back from any of the x86
> maintainers. Do you see anything wrong with this series? If not, any
> chance that one of you could pick it up?
They're on my todo list but you have to be patient. If you haven't
heard, we're still busy with this thing called retbleed.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
On Fri. 29 Jul. 2022 at 21:22, Borislav Petkov <bp@alien8.de> wrote:
> On Fri, Jul 29, 2022 at 08:24:58PM +0900, Vincent MAILHOL wrote:
> > This patch series [1] has been waiting for more than two months
> > already. So far, I have not heard back from any of the x86
> > maintainers. Do you see anything wrong with this series? If not,
> > any chance that one of you could pick it up?
>
> They're on my todo list but you have to be patient. If you haven't
> heard, we're still busy with this thing called retbleed.

Understood, and thanks for the update! If this is on your radar, then
no more worries on my side. I wish you courage and good luck with the
retbleed fix!

Yours sincerely,
Vincent Mailhol