From: Guo Ren <guoren@linux.alibaba.com>
The current kprobe code causes a misaligned load at the probe point.
Fix it by using two half-word loads instead.
Fixes: c22b0bcb1dd0 ("riscv: Add kprobes supported")
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Link: https://lore.kernel.org/linux-riscv/878rhig9zj.fsf@all.your.base.are.belong.to.us/
Reported-by: Bjorn Topel <bjorn.topel@gmail.com>
---
arch/riscv/kernel/probes/kprobes.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
index 41c7481afde3..c1160629cef4 100644
--- a/arch/riscv/kernel/probes/kprobes.c
+++ b/arch/riscv/kernel/probes/kprobes.c
@@ -74,7 +74,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 		return -EILSEQ;
 
 	/* copy instruction */
-	p->opcode = *p->addr;
+	p->opcode = (kprobe_opcode_t)(*(u16 *)probe_addr);
+	if (GET_INSN_LENGTH(p->opcode) == 4)
+		p->opcode |= (kprobe_opcode_t)(*(u16 *)(probe_addr + 2)) << 16;
 
 	/* decode instruction */
 	switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
--
2.36.1
guoren@kernel.org writes:

> From: Guo Ren <guoren@linux.alibaba.com>
>
> The current kprobe would cause a misaligned load for the probe point.
> This patch fixup it with two half-word loads instead.
>
> Fixes: c22b0bcb1dd0 ("riscv: Add kprobes supported")
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Link: https://lore.kernel.org/linux-riscv/878rhig9zj.fsf@all.your.base.are.belong.to.us/
> Reported-by: Bjorn Topel <bjorn.topel@gmail.com>
> ---
>  arch/riscv/kernel/probes/kprobes.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
> index 41c7481afde3..c1160629cef4 100644
> --- a/arch/riscv/kernel/probes/kprobes.c
> +++ b/arch/riscv/kernel/probes/kprobes.c
> @@ -74,7 +74,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
>  		return -EILSEQ;
>
>  	/* copy instruction */
> -	p->opcode = *p->addr;
> +	p->opcode = (kprobe_opcode_t)(*(u16 *)probe_addr);
> +	if (GET_INSN_LENGTH(p->opcode) == 4)
> +		p->opcode |= (kprobe_opcode_t)(*(u16 *)(probe_addr + 2))
> 		<< 16;

Ugh, those casts. :-( What about the memcpy variant you had in the other
thread?
On Wed, Feb 1, 2023 at 5:40 PM Björn Töpel <bjorn@kernel.org> wrote:
>
> guoren@kernel.org writes:
>
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > The current kprobe would cause a misaligned load for the probe point.
> > This patch fixup it with two half-word loads instead.
> >
> > Fixes: c22b0bcb1dd0 ("riscv: Add kprobes supported")
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Link: https://lore.kernel.org/linux-riscv/878rhig9zj.fsf@all.your.base.are.belong.to.us/
> > Reported-by: Bjorn Topel <bjorn.topel@gmail.com>
> > ---
> > arch/riscv/kernel/probes/kprobes.c | 4 +++-
> > 1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
> > index 41c7481afde3..c1160629cef4 100644
> > --- a/arch/riscv/kernel/probes/kprobes.c
> > +++ b/arch/riscv/kernel/probes/kprobes.c
> > @@ -74,7 +74,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
> > 		return -EILSEQ;
> >
> > 	/* copy instruction */
> > -	p->opcode = *p->addr;
> > +	p->opcode = (kprobe_opcode_t)(*(u16 *)probe_addr);
> > +	if (GET_INSN_LENGTH(p->opcode) == 4)
> > +		p->opcode |= (kprobe_opcode_t)(*(u16 *)(probe_addr + 2))
> > 		<< 16;
>
> Ugh, those casts. :-( What about the memcpy variant you had in the other
> thread?
The memcpy version would force load probe_addr + 2. This one would
save an lh operation. The code text guarantees half-word alignment. No
misaligned load happened. Second, kprobe wouldn't write the last half
of 32b instruction.

-- 
Best Regards
 Guo Ren
Guo Ren <guoren@kernel.org> writes:

> On Wed, Feb 1, 2023 at 5:40 PM Björn Töpel <bjorn@kernel.org> wrote:
>>
>> guoren@kernel.org writes:
>>
>> > From: Guo Ren <guoren@linux.alibaba.com>
>> >
>> > The current kprobe would cause a misaligned load for the probe point.
>> > This patch fixup it with two half-word loads instead.
>> >
>> > Fixes: c22b0bcb1dd0 ("riscv: Add kprobes supported")
>> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
>> > Signed-off-by: Guo Ren <guoren@kernel.org>
>> > Link: https://lore.kernel.org/linux-riscv/878rhig9zj.fsf@all.your.base.are.belong.to.us/
>> > Reported-by: Bjorn Topel <bjorn.topel@gmail.com>
>> > ---
>> > arch/riscv/kernel/probes/kprobes.c | 4 +++-
>> > 1 file changed, 3 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
>> > index 41c7481afde3..c1160629cef4 100644
>> > --- a/arch/riscv/kernel/probes/kprobes.c
>> > +++ b/arch/riscv/kernel/probes/kprobes.c
>> > @@ -74,7 +74,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
>> > 		return -EILSEQ;
>> >
>> > 	/* copy instruction */
>> > -	p->opcode = *p->addr;
>> > +	p->opcode = (kprobe_opcode_t)(*(u16 *)probe_addr);
>> > +	if (GET_INSN_LENGTH(p->opcode) == 4)
>> > +		p->opcode |= (kprobe_opcode_t)(*(u16 *)(probe_addr + 2))
>> > 		<< 16;
>>
>> Ugh, those casts. :-( What about the memcpy variant you had in the other
>> thread?
> The memcpy version would force load probe_addr + 2. This one would
> save an lh operation. The code text guarantees half-word alignment. No
> misaligned load happened. Second, kprobe wouldn't write the last half
> of 32b instruction.

Ok, something more readable, like:

diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
index f21592d20306..3602352ba175 100644
--- a/arch/riscv/kernel/probes/kprobes.c
+++ b/arch/riscv/kernel/probes/kprobes.c
@@ -50,14 +50,16 @@ static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
 
 int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
-	unsigned long probe_addr = (unsigned long)p->addr;
+	u16 *insn = (u16 *)p->addr;
 
-	if (probe_addr & 0x1)
+	if ((uintptr_t)insn & 0x1)
 		return -EILSEQ;
 
 	/* copy instruction */
-	p->opcode = *p->addr;
-
+	p->opcode = *insn++;
+	if (GET_INSN_LENGTH(p->opcode) == 4)
+		p->opcode |= *insn << 16;
+
 	/* decode instruction */
 	switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
 	case INSN_REJECTED:	/* insn not supported */


Björn
On 2 Feb 2023, at 09:48, Björn Töpel <bjorn@kernel.org> wrote:
>
> Guo Ren <guoren@kernel.org> writes:
>
>> On Wed, Feb 1, 2023 at 5:40 PM Björn Töpel <bjorn@kernel.org> wrote:
>>>
>>> guoren@kernel.org writes:
>>>
>>>> From: Guo Ren <guoren@linux.alibaba.com>
>>>>
>>>> The current kprobe would cause a misaligned load for the probe point.
>>>> This patch fixup it with two half-word loads instead.
>>>>
>>>> Fixes: c22b0bcb1dd0 ("riscv: Add kprobes supported")
>>>> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
>>>> Signed-off-by: Guo Ren <guoren@kernel.org>
>>>> Link: https://lore.kernel.org/linux-riscv/878rhig9zj.fsf@all.your.base.are.belong.to.us/
>>>> Reported-by: Bjorn Topel <bjorn.topel@gmail.com>
>>>> ---
>>>> arch/riscv/kernel/probes/kprobes.c | 4 +++-
>>>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
>>>> index 41c7481afde3..c1160629cef4 100644
>>>> --- a/arch/riscv/kernel/probes/kprobes.c
>>>> +++ b/arch/riscv/kernel/probes/kprobes.c
>>>> @@ -74,7 +74,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
>>>> 		return -EILSEQ;
>>>>
>>>> 	/* copy instruction */
>>>> -	p->opcode = *p->addr;
>>>> +	p->opcode = (kprobe_opcode_t)(*(u16 *)probe_addr);
>>>> +	if (GET_INSN_LENGTH(p->opcode) == 4)
>>>> +		p->opcode |= (kprobe_opcode_t)(*(u16 *)(probe_addr + 2))
>>>> 		<< 16;
>>>
>>> Ugh, those casts. :-( What about the memcpy variant you had in the other
>>> thread?
>> The memcpy version would force load probe_addr + 2. This one would
>> save an lh operation. The code text guarantees half-word alignment. No
>> misaligned load happened. Second, kprobe wouldn't write the last half
>> of 32b instruction.
>
> Ok, something more readable, like:
>
> diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
> index f21592d20306..3602352ba175 100644
> --- a/arch/riscv/kernel/probes/kprobes.c
> +++ b/arch/riscv/kernel/probes/kprobes.c
> @@ -50,14 +50,16 @@ static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
>
>  int __kprobes arch_prepare_kprobe(struct kprobe *p)
>  {
> -	unsigned long probe_addr = (unsigned long)p->addr;
> +	u16 *insn = (u16 *)p->addr;
>
> -	if (probe_addr & 0x1)
> +	if ((uintptr_t)insn & 0x1)
>  		return -EILSEQ;
>
>  	/* copy instruction */
> -	p->opcode = *p->addr;
> -
> +	p->opcode = *insn++;
> +	if (GET_INSN_LENGTH(p->opcode) == 4)
> +		p->opcode |= *insn << 16;

*insn gets promoted to int not unsigned so this is UB if bit 15 is set.

Jess

> +
>  	/* decode instruction */
>  	switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
>  	case INSN_REJECTED:	/* insn not supported */
>
>
> Björn
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
Jessica Clarke <jrtc27@jrtc27.com> writes:

>> +	p->opcode = *insn++;
>> +	if (GET_INSN_LENGTH(p->opcode) == 4)
>> +		p->opcode |= *insn << 16;
>
> *insn gets promoted to int not unsigned so this is UB if bit 15 is set.

Ugh. Good catch! I guess we can't get rid of *that* explicit cast to
kprobe_opcode_t here...
On Thu, Feb 2, 2023 at 10:36 PM Björn Töpel <bjorn@kernel.org> wrote:
>
> Jessica Clarke <jrtc27@jrtc27.com> writes:
>
> >> +	p->opcode = *insn++;
> >> +	if (GET_INSN_LENGTH(p->opcode) == 4)
> >> +		p->opcode |= *insn << 16;
> >
> > *insn gets promoted to int not unsigned so this is UB if bit 15 is set.
>
> Ugh. Good catch! I guess we can't get rid of *that* explicit cast to
> kprobe_opcode_t here...
Hi Bjorn & Jessica,

Thx for reviewing. The new version came out:
https://lore.kernel.org/linux-riscv/20230204063531.740220-1-guoren@kernel.org/

-- 
Best Regards
 Guo Ren