On arm64 we don't align assembly functions in the same way as C
functions. This somewhat limits the utility of
CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B for testing.

Follow the example of x86, and align assembly functions in the same way
as C functions.
I've tested this by selecting CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B=y,
building and booting a kernel, and looking for misaligned text symbols
(the second grep below filters out symbols whose address ends in 0x00,
0x40, 0x80, or 0xc0, i.e. those which are 64-byte aligned):

Before:
# grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
322

After:
# grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
115
The remaining unaligned text symbols are a combination of non-function
labels in assembly and early position-independent code, which is built
with '-Os'.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Florent Revest <revest@chromium.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/linkage.h | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index 1436fa1cde24d..df18a3446ce82 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -5,8 +5,14 @@
 #include <asm/assembler.h>
 #endif
 
-#define __ALIGN		.align 2
-#define __ALIGN_STR	".align 2"
+#if CONFIG_FUNCTION_ALIGNMENT > 0
+#define ARM64_FUNCTION_ALIGNMENT	CONFIG_FUNCTION_ALIGNMENT
+#else
+#define ARM64_FUNCTION_ALIGNMENT	4
+#endif
+
+#define __ALIGN		.balign ARM64_FUNCTION_ALIGNMENT
+#define __ALIGN_STR	".balign " #ARM64_FUNCTION_ALIGNMENT
 
 /*
  * When using in-kernel BTI we need to ensure that PCS-conformant
--
2.30.2
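
For context on the directives above: GAS's '.align 2' on arm64 is a
power-of-two request, i.e. 2^2 = 4 bytes, while '.balign' takes the byte
count directly, so the old and new defaults are equivalent. One wrinkle
in the patch as posted: the preprocessor's '#' operator only stringizes
parameters of function-like macros, so a working __ALIGN_STR would
likely need __stringify() from <linux/stringify.h>. A minimal sketch of
the apparently intended expansions, assuming CONFIG_FUNCTION_ALIGNMENT=64
(illustrative only, not the committed code):

  #include <linux/stringify.h>

  #if CONFIG_FUNCTION_ALIGNMENT > 0
  #define ARM64_FUNCTION_ALIGNMENT	CONFIG_FUNCTION_ALIGNMENT
  #else
  #define ARM64_FUNCTION_ALIGNMENT	4
  #endif

  /* Emitted before each assembly function via linux/linkage.h: */
  #define __ALIGN	.balign ARM64_FUNCTION_ALIGNMENT
  /* expands to: .balign 64 */

  /* Used from C (e.g. inline asm), so it must be a string literal: */
  #define __ALIGN_STR	".balign " __stringify(ARM64_FUNCTION_ALIGNMENT)
  /* -> ".balign " "64", i.e. ".balign 64" after string concatenation */
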
On Mon, Jan 09, 2023 at 01:58:23PM +0000, Mark Rutland wrote:
> diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
> index 1436fa1cde24d..df18a3446ce82 100644
> --- a/arch/arm64/include/asm/linkage.h
> +++ b/arch/arm64/include/asm/linkage.h
> @@ -5,8 +5,14 @@
>  #include <asm/assembler.h>
>  #endif
> 
> -#define __ALIGN		.align 2
> -#define __ALIGN_STR	".align 2"
> +#if CONFIG_FUNCTION_ALIGNMENT > 0
> +#define ARM64_FUNCTION_ALIGNMENT	CONFIG_FUNCTION_ALIGNMENT
> +#else
> +#define ARM64_FUNCTION_ALIGNMENT	4
> +#endif
> +
> +#define __ALIGN		.balign ARM64_FUNCTION_ALIGNMENT
> +#define __ALIGN_STR	".balign " #ARM64_FUNCTION_ALIGNMENT

Isn't that much the same as having ARM64 select FUNCTION_ALIGNMENT_4B
and simply removing all these lines and relying on the default
behaviour?

On Tue, Jan 10, 2023 at 09:35:18PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 09, 2023 at 01:58:23PM +0000, Mark Rutland wrote:
> 
> > diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
> > index 1436fa1cde24d..df18a3446ce82 100644
> > --- a/arch/arm64/include/asm/linkage.h
> > +++ b/arch/arm64/include/asm/linkage.h
> > @@ -5,8 +5,14 @@
> >  #include <asm/assembler.h>
> >  #endif
> > 
> > -#define __ALIGN		.align 2
> > -#define __ALIGN_STR	".align 2"
> > +#if CONFIG_FUNCTION_ALIGNMENT > 0
> > +#define ARM64_FUNCTION_ALIGNMENT	CONFIG_FUNCTION_ALIGNMENT
> > +#else
> > +#define ARM64_FUNCTION_ALIGNMENT	4
> > +#endif
> > +
> > +#define __ALIGN		.balign ARM64_FUNCTION_ALIGNMENT
> > +#define __ALIGN_STR	".balign " #ARM64_FUNCTION_ALIGNMENT
> 
> Isn't that much the same as having ARM64 select FUNCTION_ALIGNMENT_4B
> and simply removing all these lines and relying on the default
> behaviour?

Yes, it is.

I was focussed on obviously retaining the existing semantic by default, and I
missed that was possible by selecting FUNCTION_ALIGNMENT_4B.

Thanks,
Mark.

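For reference, a rough sketch of the alternative Peter describes,
assuming the generic CONFIG_FUNCTION_ALIGNMENT machinery introduced for
x86 (exact spellings approximate): with arm64 selecting
FUNCTION_ALIGNMENT_4B in its Kconfig, CONFIG_FUNCTION_ALIGNMENT defaults
to 4 and the arch override in asm/linkage.h can be dropped in favour of
the generic definitions:

  /*
   * Generic default in include/linux/linkage.h (approximate), which
   * applies once the arch stops defining __ALIGN itself:
   */
  #ifndef __ALIGN
  #define __ALIGN	.balign CONFIG_FUNCTION_ALIGNMENT
  #define __ALIGN_STR	__stringify(__ALIGN)
  #endif

This keeps the existing 4-byte default while still letting options such
as CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B raise CONFIG_FUNCTION_ALIGNMENT,
as on x86.
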
On Tue, Jan 10, 2023 at 09:35:18PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 09, 2023 at 01:58:23PM +0000, Mark Rutland wrote:
> 
> > diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
> > index 1436fa1cde24d..df18a3446ce82 100644
> > --- a/arch/arm64/include/asm/linkage.h
> > +++ b/arch/arm64/include/asm/linkage.h
> > @@ -5,8 +5,14 @@
> >  #include <asm/assembler.h>
> >  #endif
> > 
> > -#define __ALIGN		.align 2
> > -#define __ALIGN_STR	".align 2"
> > +#if CONFIG_FUNCTION_ALIGNMENT > 0
> > +#define ARM64_FUNCTION_ALIGNMENT	CONFIG_FUNCTION_ALIGNMENT
> > +#else
> > +#define ARM64_FUNCTION_ALIGNMENT	4
> > +#endif
> > +
> > +#define __ALIGN		.balign ARM64_FUNCTION_ALIGNMENT
> > +#define __ALIGN_STR	".balign " #ARM64_FUNCTION_ALIGNMENT
> 
> Isn't that much the same as having ARM64 select FUNCTION_ALIGNMENT_4B
> and simply removing all these lines and relying on the default
> behaviour?

There's a proposal (with some rough performance claims) to select
FUNCTION_ALIGNMENT_16B over at:

https://lore.kernel.org/r/20221208053649.540891-1-almasrymina@google.com

so we could just go with that?

Will

On Tue, Jan 10, 2023 at 08:43:20PM +0000, Will Deacon wrote:
> On Tue, Jan 10, 2023 at 09:35:18PM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 09, 2023 at 01:58:23PM +0000, Mark Rutland wrote:
> > 
> > > diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
> > > index 1436fa1cde24d..df18a3446ce82 100644
> > > --- a/arch/arm64/include/asm/linkage.h
> > > +++ b/arch/arm64/include/asm/linkage.h
> > > @@ -5,8 +5,14 @@
> > >  #include <asm/assembler.h>
> > >  #endif
> > > 
> > > -#define __ALIGN		.align 2
> > > -#define __ALIGN_STR	".align 2"
> > > +#if CONFIG_FUNCTION_ALIGNMENT > 0
> > > +#define ARM64_FUNCTION_ALIGNMENT	CONFIG_FUNCTION_ALIGNMENT
> > > +#else
> > > +#define ARM64_FUNCTION_ALIGNMENT	4
> > > +#endif
> > > +
> > > +#define __ALIGN		.balign ARM64_FUNCTION_ALIGNMENT
> > > +#define __ALIGN_STR	".balign " #ARM64_FUNCTION_ALIGNMENT
> > 
> > Isn't that much the same as having ARM64 select FUNCTION_ALIGNMENT_4B
> > and simply removing all these lines and relying on the default
> > behaviour?
> 
> There's a proposal (with some rough performance claims) to select
> FUNCTION_ALIGNMENT_16B over at:
> 
> https://lore.kernel.org/r/20221208053649.540891-1-almasrymina@google.com
> 
> so we could just go with that?

I reckon it'd be worth having that as a separate patch atop, to split the
infrastructure from the actual change, but I'm happy to go with 16B
immediately if you'd prefer. It'd be nice if we could get some numbers...

Thanks,
Mark.