[PATCH] arm64: kvm: Fix incorrect VNCR invalidation range calculation
Posted by p@sswd.pw 4 weeks, 1 day ago
From: leedongha <gapdev2004@gmail.com>

The code for invalidating VNCR entries in both kvm_invalidate_vncr_ipa()
and invalidate_vncr_va() incorrectly uses a bitwise AND with `(size - 1)`
instead of `~(size - 1)` to align the start address. This keeps only the
offset bits within the block instead of rounding the address down to the
start of the block.
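
For example, a minimal userspace sketch of the two masks (illustrative
values only; `pa` and `size` stand in for vt->wr.pa and the computed
block size):

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t size = 0x1000;      /* 4KiB block, illustrative */
        uint64_t pa   = 0x40001234;  /* hypothetical input address */

        /* Buggy mask: keeps only the offset within the block. */
        printf("0x%" PRIx64 "\n", pa & (size - 1));   /* 0x234 */

        /* Fixed mask: rounds down to the start of the block. */
        printf("0x%" PRIx64 "\n", pa & ~(size - 1));  /* 0x40001000 */
        return 0;
    }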

This bug may cause stale VNCR TLB entries to remain valid even after a
TLBI or an MMU notifier invalidation, leading to incorrect memory
translation and unexpected guest behavior.
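
With the buggy mask and the illustrative values above, the computed
range is [0x234, 0x1234) rather than [0x40001000, 0x40002000), so a
TLBI covering [0x40001000, 0x40002000) trips the `ipa_end <= start`
early-out below and the entry is skipped, leaving it stale.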

Credit
Team 0xB6 in bob14:
DongHa Lee (@GAP-dev)
Gyujeong Jin (@G1uN4sh)
Daehyeon Ko (@4ncienth)
Geonha Lee (@leegn4a)
Hyungyu Oh (@DQPC_lover)
Jaewon Yang (@R4mbb1)

Signed-off-by: leedongha <gapdev2004@gmail.com>
---
 arch/arm64/kvm/nested.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 77db81bae86f..d0ddce877b5d 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -847,7 +847,7 @@ static void kvm_invalidate_vncr_ipa(struct kvm *kvm, u64 start, u64 end)
 
 		ipa_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift,
 							    vt->wr.level));
-		ipa_start = vt->wr.pa & (ipa_size - 1);
+		ipa_start = vt->wr.pa & ~(ipa_size - 1);
 		ipa_end = ipa_start + ipa_size;
 
 		if (ipa_end <= start || ipa_start >= end)
@@ -887,7 +887,7 @@ static void invalidate_vncr_va(struct kvm *kvm,
 
 		va_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift,
 							   vt->wr.level));
-		va_start = vt->gva & (va_size - 1);
+		va_start = vt->gva & ~(va_size - 1);
 		va_end = va_start + va_size;
 
 		switch (scope->type) {
-- 
2.43.0
Re: [PATCH] arm64: kvm: Fix incorrect VNCR invalidation range calculation
Posted by Marc Zyngier 4 weeks ago
On Wed, 03 Sep 2025 13:39:49 +0100,
p@sswd.pw wrote:
> 
> From: leedongha <gapdev2004@gmail.com>
> 
> The code for invalidating VNCR entries in both kvm_invalidate_vncr_ipa()
> and invalidate_vncr_va() incorrectly uses a bitwise AND with `(size - 1)`
> instead of `~(size - 1)` to align the start address. This keeps only the
> offset bits within the block instead of rounding the address down to the
> start of the block.
> 
> This bug may cause stale VNCR TLB entries to remain valid even after a
> TLBI or an MMU notifier invalidation, leading to incorrect memory
> translation and unexpected guest behavior.
> 
> Credit
> Team 0xB6 in bob14:
> DongHa Lee (@GAP-dev)
> Gyujeong Jin (@G1uN4sh)
> Daehyeon Ko (@4ncienth)
> Geonha Lee (@leegn4a)
> Hyungyu Oh (@DQPC_lover)
> Jaewon Yang (@R4mbb1)
> 
> Signed-off-by: leedongha <gapdev2004@gmail.com>

The SoB of the person sending the patch is required.

> ---
>  arch/arm64/kvm/nested.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 77db81bae86f..d0ddce877b5d 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -847,7 +847,7 @@ static void kvm_invalidate_vncr_ipa(struct kvm *kvm, u64 start, u64 end)
>  
>  		ipa_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift,
>  							    vt->wr.level));
> -		ipa_start = vt->wr.pa & (ipa_size - 1);
> +		ipa_start = vt->wr.pa & ~(ipa_size - 1);
>  		ipa_end = ipa_start + ipa_size;
>  
>  		if (ipa_end <= start || ipa_start >= end)
> @@ -887,7 +887,7 @@ static void invalidate_vncr_va(struct kvm *kvm,
>  
>  		va_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift,
>  							   vt->wr.level));
> -		va_start = vt->gva & (va_size - 1);
> +		va_start = vt->gva & ~(va_size - 1);
>  		va_end = va_start + va_size;
>  
>  		switch (scope->type) {

Yup, absolutely correct. Thanks a lot for spotting this.
With the above nit addressed:

Reviewed-by: Marc Zyngier <maz@kernel.org>

	M.

-- 
Jazz isn't dead. It just smells funny.
[PATCH v2] KVM: arm64: nv: Fix incorrect VNCR invalidation range calculation
Posted by p@sswd.pw 3 weeks, 6 days ago
From: leedongha <p@sswd.pw>

The code for invalidating VNCR entries in both kvm_invalidate_vncr_ipa()
and invalidate_vncr_va() incorrectly uses a bitwise AND with `(size - 1)`
instead of `~(size - 1)` to align the start address. This keeps only the
offset bits within the block instead of rounding the address down to the
start of the block.

This bug may cause stale VNCR TLB entries to remain valid even after a
TLBI or an MMU notifier invalidation, leading to incorrect memory
translation and unexpected guest behavior.

Credit
Team 0xB6 in bob14:
DongHa Lee (@GAP-dev)
Gyujeong Jin (@gyutrange)
Daehyeon Ko (@4ncienth)
Geonha Lee (@leegn4a)
Hyungyu Oh (@ohhyungyu)
Jaewon Yang (@R4mbb)

Link: https://lore.kernel.org/r/20250903123949.24858-1-p@sswd.pw
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Dongha Lee <p@sswd.pw>

---
Changes in v2:
- Match DCO with From: (p@sswd.pw)
- Use KVM: arm64: nv: prefix for clarity
---
 arch/arm64/kvm/nested.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 77db81bae86f..d0ddce877b5d 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -847,7 +847,7 @@ static void kvm_invalidate_vncr_ipa(struct kvm *kvm, u64 start, u64 end)
 
 		ipa_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift,
 							    vt->wr.level));
-		ipa_start = vt->wr.pa & (ipa_size - 1);
+		ipa_start = vt->wr.pa & ~(ipa_size - 1);
 		ipa_end = ipa_start + ipa_size;
 
 		if (ipa_end <= start || ipa_start >= end)
@@ -887,7 +887,7 @@ static void invalidate_vncr_va(struct kvm *kvm,
 
 		va_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift,
 							   vt->wr.level));
-		va_start = vt->gva & (va_size - 1);
+		va_start = vt->gva & ~(va_size - 1);
 		va_end = va_start + va_size;
 
 		switch (scope->type) {
-- 
2.43.0
Re: [PATCH v2] KVM: arm64: nv: Fix incorrect VNCR invalidation range calculation
Posted by Oliver Upton 3 weeks, 6 days ago
Hi Dongha,

Thanks for respinning. Please send new versions of a patch series as a
new thread (i.e. don't specify In-Reply-To); it helps a lot with patch
organization on the receiving side.

On Fri, Sep 05, 2025 at 05:30:08PM +0900, p@sswd.pw wrote:
> From: leedongha <p@sswd.pw>
> 
> The code for invalidating VNCR entries in both kvm_invalidate_vncr_ipa()
> and invalidate_vncr_va() incorrectly uses a bitwise AND with `(size - 1)`
> instead of `~(size - 1)` to align the start address. This keeps only the
> offset bits within the block instead of rounding the address down to the
> start of the block.
> 
> This bug may cause stale VNCR TLB entries to remain valid even after a
> TLBI or an MMU notifier invalidation, leading to incorrect memory
> translation and unexpected guest behavior.
> 
> Credit
> Team 0xB6 in bob14:
> DongHa Lee (@GAP-dev)
> Gyujeong Jin (@gyutrange)
> Daehyeon Ko (@4ncienth)
> Geonha Lee (@leegn4a)
> Hyungyu Oh (@ohhyungyu)
> Jaewon Yang (@R4mbb)
> 
> Link: https://lore.kernel.org/r/20250903123949.24858-1-p@sswd.pw
> Reviewed-by: Marc Zyngier <maz@kernel.org>
> Signed-off-by: Dongha Lee <p@sswd.pw>

This SOB still doesn't match the one you used to author the patch.
Please make sure the author and SOB lines are an exact match, both name
and email.

Otherwise this looks good to me. I will apply it if you can respin once
more.

Thanks,
Oliver