From: gyutrange <wlsrbwjd643@naver.com>
VNCR/TLBI VA reconstruction currently uses bit 48 as the sign bit,
but for 48-bit virtual addresses the correct sign bit is bit 47.
Using 48 can mis-canonicalize addresses in the negative half and may
cause missed invalidations.
Although VNCR_EL2 encodes other architectural fields (RESS, BADDR;
see Arm ARM D24.2.206), sign_extend64() interprets its second argument
as the index of the sign bit. Passing 48 prevents propagation of the
canonical sign bit for 48-bit VAs.
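
For illustration, here is a minimal userspace sketch that mirrors the
semantics of the kernel's sign_extend64() helper (the VNCR_EL2 value
below is a hypothetical example with bit 47 set, chosen only to show
the difference between the two indices):

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Same semantics as the kernel helper: 'index' is the 0-based
   * position of the bit treated as the sign bit.
   */
  static inline int64_t sign_extend64(uint64_t value, int index)
  {
          uint8_t shift = 63 - index;

          return (int64_t)(value << shift) >> shift;
  }

  int main(void)
  {
          uint64_t vncr = 0x0000800000001000ULL;  /* bit 47 set */

          /* index 48: bit 48 is clear, so the value is left unchanged */
          printf("48: %016llx\n",
                 (unsigned long long)sign_extend64(vncr, 48));
          /* index 47: bits 63:48 are filled from bit 47 */
          printf("47: %016llx\n",
                 (unsigned long long)sign_extend64(vncr, 47));
          return 0;
  }
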
Impact:
- Incorrect canonicalization of VAs with bit47=1
- Potential stale VNCR pseudo-TLB entries after TLBI or MMU notifier
- Possible incorrect translation/permissions or DoS when combined
with other issues
Fixes: 667304740537 ("KVM: arm64: Mask out non-VA bits from TLBI VA* on VNCR invalidation")
Cc: stable@vger.kernel.org
Reported-by: DongHa Lee <gap-dev@example.com>
Reported-by: Gyujeong Jin <wlsrbwjd7232@gmail.com>
Reported-by: Daehyeon Ko <4ncient@example.com>
Reported-by: Geonha Lee <leegn4a@example.com>
Reported-by: Hyungyu Oh <dqpc_lover@example.com>
Reported-by: Jaewon Yang <r4mbb1@example.com>
Signed-off-by: Gyujeong Jin <wlsrbwjd7232@gmail.com>
---
arch/arm64/kvm/nested.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 77db81bae86f..eaa6dd9da086 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1169,7 +1169,7 @@ int kvm_vcpu_allocate_vncr_tlb(struct kvm_vcpu *vcpu)
static u64 read_vncr_el2(struct kvm_vcpu *vcpu)
{
- return (u64)sign_extend64(__vcpu_sys_reg(vcpu, VNCR_EL2), 48);
+ return (u64)sign_extend64(__vcpu_sys_reg(vcpu, VNCR_EL2), 47);
}
static int kvm_translate_vncr(struct kvm_vcpu *vcpu)
--
2.43.0
On Mon, Sep 01, 2025 at 11:15:51PM +0900, Gyujeong Jin wrote:
> From: gyutrange <wlsrbwjd643@naver.com>

Does not match your signed-off-by line :(
On Mon, Sep 01, 2025 at 11:15:51PM +0900, Gyujeong Jin wrote:
> From: gyutrange <wlsrbwjd643@naver.com>
>
> VNCR/TLBI VA reconstruction currently uses bit 48 as the sign bit,
> but for 48-bit virtual addresses the correct sign bit is bit 47.
> Using 48 can mis-canonicalize addresses in the negative half and may
> cause missed invalidations.
>
> Although VNCR_EL2 encodes other architectural fields (RESS, BADDR;
> see Arm ARM D24.2.206), sign_extend64() interprets its second argument
> as the index of the sign bit. Passing 48 prevents propagation of the
> canonical sign bit for 48-bit VAs.
>
> Impact:
> - Incorrect canonicalization of VAs with bit47=1
> - Potential stale VNCR pseudo-TLB entries after TLBI or MMU notifier
> - Possible incorrect translation/permissions or DoS when combined
>   with other issues
>
> Fixes: 667304740537 ("KVM: arm64: Mask out non-VA bits from TLBI VA* on VNCR invalidation")
> Cc: stable@vger.kernel.org
> Reported-by: DongHa Lee <gap-dev@example.com>
> Reported-by: Gyujeong Jin <wlsrbwjd7232@gmail.com>
> Reported-by: Daehyeon Ko <4ncient@example.com>
> Reported-by: Geonha Lee <leegn4a@example.com>
> Reported-by: Hyungyu Oh <dqpc_lover@example.com>
> Reported-by: Jaewon Yang <r4mbb1@example.com>

Please do not use fake email addresses.
On Mon, 01 Sep 2025 15:15:51 +0100,
Gyujeong Jin <wlsrbwjd7232@gmail.com> wrote:
>
> From: gyutrange <wlsrbwjd643@naver.com>
>
> VNCR/TLBI VA reconstruction currently uses bit 48 as the sign bit,
> but for 48-bit virtual addresses the correct sign bit is bit 47.

No, that's not the case. Bit 55 is used at all times to determine
which half of the address space a VA gets resolved from.

> Using 48 can mis-canonicalize addresses in the negative half and may
> cause missed invalidations.
>
> Although VNCR_EL2 encodes other architectural fields (RESS, BADDR;
> see Arm ARM D24.2.206), sign_extend64() interprets its second argument
> as the index of the sign bit. Passing 48 prevents propagation of the
> canonical sign bit for 48-bit VAs.
>
> Impact:
> - Incorrect canonicalization of VAs with bit47=1

No. We are not trying to make the VA canonical.

> - Potential stale VNCR pseudo-TLB entries after TLBI or MMU notifier

No. The pseudo TLB is never created in the first place.

> - Possible incorrect translation/permissions or DoS when combined
>   with other issues

Please explain, as "other issues" is not a valid argument.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
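
For reference, the bit-55 behaviour described above can be sketched as
follows (illustrative helper only, not a kernel API):

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * On arm64, VA bit 55 selects the half of the address space (and thus
   * the translation table base) a VA is resolved from, independent of
   * the configured VA size.
   */
  static inline bool va_in_upper_half(uint64_t va)
  {
          return (va >> 55) & 1;
  }
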