The pt_regs registers are 64-bit on arm64, and should be u64 when
manipulated. Correct this so that we aren't truncating the address
during br/blr sequences.
Fixes: efb07ac534e2 ("arm64: probes: Add GCS support to bl/blr/ret")
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
arch/arm64/kernel/probes/simulate-insn.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/probes/simulate-insn.c b/arch/arm64/kernel/probes/simulate-insn.c
index 97ed4db75417..89fbeb32107e 100644
--- a/arch/arm64/kernel/probes/simulate-insn.c
+++ b/arch/arm64/kernel/probes/simulate-insn.c
@@ -145,7 +145,7 @@ void __kprobes
 simulate_br_blr(u32 opcode, long addr, struct pt_regs *regs)
 {
 	int xn = (opcode >> 5) & 0x1f;
-	int b_target = get_x_reg(regs, xn);
+	u64 b_target = get_x_reg(regs, xn);
 
 	if (((opcode >> 21) & 0x3) == 1)
 		if (update_lr(regs, addr + 4))
@@ -160,7 +160,7 @@ simulate_ret(u32 opcode, long addr, struct pt_regs *regs)
 	u64 ret_addr;
 	int err = 0;
 	int xn = (opcode >> 5) & 0x1f;
-	unsigned long r_target = get_x_reg(regs, xn);
+	u64 r_target = get_x_reg(regs, xn);
 
 	if (user_mode(regs) && task_gcs_el0_enabled(current)) {
 		ret_addr = pop_user_gcs(&err);
--
2.50.1
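
For context, here is a minimal stand-alone sketch of the truncation the commit message describes; it is not kernel code, and the address value and variable names (xn_val, as_int, as_u64) are made up for illustration. On arm64 an 'int' is 32 bits wide, so assigning a 64-bit register value to it discards the upper half of the branch target, whereas a u64 preserves the full address.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Example 64-bit kernel-style address read from an Xn register. */
	uint64_t xn_val = 0xffff800080123456ULL;

	int      as_int = (int)xn_val;	/* truncated to the low 32 bits */
	uint64_t as_u64 = xn_val;	/* full 64-bit value preserved */

	/*
	 * The int value sign-extends back to 0xffffffff80123456 when
	 * widened, not the original 0xffff800080123456.
	 */
	printf("int: %#llx\n", (unsigned long long)as_int);
	printf("u64: %#llx\n", (unsigned long long)as_u64);
	return 0;
}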
On Thu, 18 Sep 2025 12:54:24 -0500, Jeremy Linton wrote:
> The pt_regs registers are 64-bit on arm64, and should be u64 when
> manipulated. Correct this so that we aren't truncating the address
> during br/blr sequences.
>
>
Applied to arm64 (for-next/uprobes), thanks!
[1/1] arm64: probes: Fix incorrect bl/blr address and register usage
https://git.kernel.org/arm64/c/ea87c5536aa8
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev
On Thu, Sep 18, 2025 at 12:54:24PM -0500, Jeremy Linton wrote:
> The pt_regs registers are 64-bit on arm64, and should be u64 when
> manipulated. Correct this so that we aren't truncating the address
> during br/blr sequences.
>
> Fixes: efb07ac534e2 ("arm64: probes: Add GCS support to bl/blr/ret")
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
> arch/arm64/kernel/probes/simulate-insn.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/probes/simulate-insn.c b/arch/arm64/kernel/probes/simulate-insn.c
> index 97ed4db75417..89fbeb32107e 100644
> --- a/arch/arm64/kernel/probes/simulate-insn.c
> +++ b/arch/arm64/kernel/probes/simulate-insn.c
> @@ -145,7 +145,7 @@ void __kprobes
>  simulate_br_blr(u32 opcode, long addr, struct pt_regs *regs)
>  {
>  	int xn = (opcode >> 5) & 0x1f;
> -	int b_target = get_x_reg(regs, xn);
> +	u64 b_target = get_x_reg(regs, xn);
>  
>  	if (((opcode >> 21) & 0x3) == 1)
>  		if (update_lr(regs, addr + 4))
> @@ -160,7 +160,7 @@ simulate_ret(u32 opcode, long addr, struct pt_regs *regs)
>  	u64 ret_addr;
>  	int err = 0;
>  	int xn = (opcode >> 5) & 0x1f;
> -	unsigned long r_target = get_x_reg(regs, xn);
> +	u64 r_target = get_x_reg(regs, xn);
This part isn't really doing anything, since 'unsigned long' is already 64 bits wide on arm64, but it's probably worth being consistent with 'u64' anyway. I'll pick this up, thanks.
Will
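
To make the point above concrete, here is a rough, hypothetical sketch (not from the kernel tree) of why the second hunk is only cosmetic: arm64 Linux is an LP64 target, so 'unsigned long' is already 64 bits wide and holds a full register value, while the 32-bit 'int' in the first hunk genuinely truncates. The static asserts below hold when built for aarch64.

#include <stdint.h>

/* On LP64 targets such as arm64, unsigned long matches u64 in width... */
_Static_assert(sizeof(unsigned long) == sizeof(uint64_t),
	       "unsigned long holds a full 64-bit register value");

/* ...whereas int is only 32 bits and cannot hold a full register. */
_Static_assert(sizeof(int) < sizeof(uint64_t),
	       "int truncates a 64-bit register value");

int main(void) { return 0; }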