[PATCH v3] KVM: arm64: Check range args for pKVM mem transitions

Posted by Vincent Donnefort 2 months ago
There's currently no verification for host issued ranges in most of the
pKVM memory transitions. The end boundary might therefore be subject to
overflow and later checks could be evaded.
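
As a rough sketch (using hyp_pfn_to_phys() as elsewhere in
mem_protect.c), an unchecked pfn/nr_pages pair lets the computed end of
the range wrap around and slip past a naive "end <= limit" comparison:

	u64 phys = hyp_pfn_to_phys(pfn);	/* pfn << PAGE_SHIFT */
	u64 size = nr_pages * PAGE_SIZE;	/* wraps for a huge nr_pages */
	u64 end  = phys + size;			/* may wrap back below phys */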

Close this loophole with an additional pfn_range_is_valid() check on a
per public function basis. Once this check has passed, it is safe to
convert pfn and nr_pages into a phys_addr_t and a size.

host_unshare_guest transition is already protected via
__check_host_shared_guest(), while assert_host_shared_guest() callers
are already ignoring host checks.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

---

v2 -> v3: 
   * Test range against PA-range and make the func phys specific.

v1 -> v2:
   * Also check for (nr_pages * PAGE_SIZE) overflow. (Quentin)
   * Rename to check_range_args().

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index ddc8beb55eee..49db32f3ddf7 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -367,6 +367,19 @@ static int host_stage2_unmap_dev_all(void)
 	return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr);
 }
 
+/*
+ * Ensure the PFN range is contained within PA-range.
+ *
+ * This check is also robust to overflows and is therefore a requirement before
+ * using a pfn/nr_pages pair from an untrusted source.
+ */
+static bool pfn_range_is_valid(u64 pfn, u64 nr_pages)
+{
+	u64 limit = BIT(kvm_phys_shift(&host_mmu.arch.mmu) - PAGE_SHIFT);
+
+	return pfn < limit && ((limit - pfn) >= nr_pages);
+}
+
 struct kvm_mem_range {
 	u64 start;
 	u64 end;
@@ -776,6 +789,9 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages)
 	void *virt = __hyp_va(phys);
 	int ret;
 
+	if (!pfn_range_is_valid(pfn, nr_pages))
+		return -EINVAL;
+
 	host_lock_component();
 	hyp_lock_component();
 
@@ -804,6 +820,9 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
 	u64 virt = (u64)__hyp_va(phys);
 	int ret;
 
+	if (!pfn_range_is_valid(pfn, nr_pages))
+		return -EINVAL;
+
 	host_lock_component();
 	hyp_lock_component();
 
@@ -887,6 +906,9 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages)
 	u64 size = PAGE_SIZE * nr_pages;
 	int ret;
 
+	if (!pfn_range_is_valid(pfn, nr_pages))
+		return -EINVAL;
+
 	host_lock_component();
 	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_OWNED);
 	if (!ret)
@@ -902,6 +924,9 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 	u64 size = PAGE_SIZE * nr_pages;
 	int ret;
 
+	if (!pfn_range_is_valid(pfn, nr_pages))
+		return -EINVAL;
+
 	host_lock_component();
 	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED);
 	if (!ret)
@@ -945,6 +970,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;
 
+	if (!pfn_range_is_valid(pfn, nr_pages))
+		return -EINVAL;
+
 	ret = __guest_check_transition_size(phys, ipa, nr_pages, &size);
 	if (ret)
 		return ret;

base-commit: 7ea30958b3054f5e488fa0b33c352723f7ab3a2a
-- 
2.51.0.869.ge66316f041-goog
Re: [PATCH v3] KVM: arm64: Check range args for pKVM mem transitions
Posted by Marc Zyngier 1 month, 2 weeks ago
On Thu, 16 Oct 2025 17:45:41 +0100, Vincent Donnefort wrote:
> There's currently no verification for host issued ranges in most of the
> pKVM memory transitions. The end boundary might therefore be subject to
> overflow and later checks could be evaded.
> 
> Close this loophole with an additional pfn_range_is_valid() check on a
> per public function basis. Once this check has passed, it is safe to
> convert pfn and nr_pages into a phys_addr_t and a size.
> 
> [...]

Applied to fixes, thanks!

[1/1] KVM: arm64: Check range args for pKVM mem transitions
      commit: f71f7afd0a0cd3f044cd2f8aba71a1a7229df762

Cheers,

	M.
-- 
Without deviation from the norm, progress is not possible.
Re: [PATCH v3] KVM: arm64: Check range args for pKVM mem transitions
Posted by Sebastian Ene 1 month, 2 weeks ago
On Thu, Oct 16, 2025 at 05:45:41PM +0100, Vincent Donnefort wrote:
> There's currently no verification for host issued ranges in most of the
> pKVM memory transitions. The end boundary might therefore be subject to
> overflow and later checks could be evaded.
> 
> Close this loophole with an additional pfn_range_is_valid() check on a
> per public function basis. Once this check has passed, it is safe to
> convert pfn and nr_pages into a phys_addr_t and a size.
> 
> host_unshare_guest transition is already protected via
> __check_host_shared_guest(), while assert_host_shared_guest() callers
> are already ignoring host checks.
> 
> Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
> 
> ---
> 
> v2 -> v3: 
>    * Test range against PA-range and make the func phys specific.
> 
> v1 -> v2:
>    * Also check for (nr_pages * PAGE_SIZE) overflow. (Quentin)
>    * Rename to check_range_args().
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index ddc8beb55eee..49db32f3ddf7 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -367,6 +367,19 @@ static int host_stage2_unmap_dev_all(void)
>  	return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr);
>  }

Hello Vincent,

>  
> +/*
> + * Ensure the PFN range is contained within PA-range.
> + *
> + * This check is also robust to overflows and is therefore a requirement before
> + * using a pfn/nr_pages pair from an untrusted source.
> + */
> +static bool pfn_range_is_valid(u64 pfn, u64 nr_pages)
> +{
> +	u64 limit = BIT(kvm_phys_shift(&host_mmu.arch.mmu) - PAGE_SHIFT);
> +
> +	return pfn < limit && ((limit - pfn) >= nr_pages);
> +}
> +

This newly introduced function is probably fine to call without the
host lock held, as long as no one modifies the vtcr field of the
host_mmu structure. While searching, I couldn't find a place where it
is directly modified, so this should be fine.

>  struct kvm_mem_range {
>  	u64 start;
>  	u64 end;
> @@ -776,6 +789,9 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages)
>  	void *virt = __hyp_va(phys);
>  	int ret;
>  
> +	if (!pfn_range_is_valid(pfn, nr_pages))
> +		return -EINVAL;
> +
>  	host_lock_component();
>  	hyp_lock_component();
>  
> @@ -804,6 +820,9 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
>  	u64 virt = (u64)__hyp_va(phys);
>  	int ret;
>  
> +	if (!pfn_range_is_valid(pfn, nr_pages))
> +		return -EINVAL;
> +
>  	host_lock_component();
>  	hyp_lock_component();
>  
> @@ -887,6 +906,9 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages)
>  	u64 size = PAGE_SIZE * nr_pages;
>  	int ret;
>  
> +	if (!pfn_range_is_valid(pfn, nr_pages))
> +		return -EINVAL;
> +
>  	host_lock_component();
>  	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_OWNED);
>  	if (!ret)
> @@ -902,6 +924,9 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
>  	u64 size = PAGE_SIZE * nr_pages;
>  	int ret;
>  
> +	if (!pfn_range_is_valid(pfn, nr_pages))
> +		return -EINVAL;
> +
>  	host_lock_component();
>  	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED);
>  	if (!ret)
> @@ -945,6 +970,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
>  	if (prot & ~KVM_PGTABLE_PROT_RWX)
>  		return -EINVAL;
>  
> +	if (!pfn_range_is_valid(pfn, nr_pages))
> +		return -EINVAL;
> +

I think we don't need it here because __pkvm_host_share_guest has the
__guest_check_transition_size verification in place which limits
nr_pages.  

>  	ret = __guest_check_transition_size(phys, ipa, nr_pages, &size);
>  	if (ret)
>  		return ret;
> 
> base-commit: 7ea30958b3054f5e488fa0b33c352723f7ab3a2a
> -- 
> 2.51.0.869.ge66316f041-goog
>

Other than that this looks good, thanks
Sebastian
Re: [PATCH v3] KVM: arm64: Check range args for pKVM mem transitions
Posted by Vincent Donnefort 1 month, 2 weeks ago
On Thu, Oct 30, 2025 at 06:09:31AM +0000, Sebastian Ene wrote:
> On Thu, Oct 16, 2025 at 05:45:41PM +0100, Vincent Donnefort wrote:
> > There's currently no verification for host issued ranges in most of the
> > pKVM memory transitions. The end boundary might therefore be subject to
> > overflow and later checks could be evaded.
> > 
> > Close this loophole with an additional pfn_range_is_valid() check on a
> > per public function basis. Once this check has passed, it is safe to
> > convert pfn and nr_pages into a phys_addr_t and a size.
> > 
> > host_unshare_guest transition is already protected via
> > __check_host_shared_guest(), while assert_host_shared_guest() callers
> > are already ignoring host checks.
> > 
> > Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
> > 
> > ---
> > 
> > v2 -> v3: 
> >    * Test range against PA-range and make the func phys specific.
> > 
> > v1 -> v2:
> >    * Also check for (nr_pages * PAGE_SIZE) overflow. (Quentin)
> >    * Rename to check_range_args().
> > 
> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index ddc8beb55eee..49db32f3ddf7 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -367,6 +367,19 @@ static int host_stage2_unmap_dev_all(void)
> >  	return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr);
> >  }
> 
> Hello Vincent,
> 
> >  
> > +/*
> > + * Ensure the PFN range is contained within PA-range.
> > + *
> > + * This check is also robust to overflows and is therefore a requirement before
> > + * using a pfn/nr_pages pair from an untrusted source.
> > + */
> > +static bool pfn_range_is_valid(u64 pfn, u64 nr_pages)
> > +{
> > +	u64 limit = BIT(kvm_phys_shift(&host_mmu.arch.mmu) - PAGE_SHIFT);
> > +
> > +	return pfn < limit && ((limit - pfn) >= nr_pages);
> > +}
> > +
> 
> This newly introduced function is probably fine to be called without the host lock held as long
> as no one modifies the vtcr field from the host.mmu structure. While
> searching I couldn't find a place where this is directly modified so
> this is probably fine. 
> 
> >  struct kvm_mem_range {
> >  	u64 start;
> >  	u64 end;
> > @@ -776,6 +789,9 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages)
> >  	void *virt = __hyp_va(phys);
> >  	int ret;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	hyp_lock_component();
> >  
> > @@ -804,6 +820,9 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
> >  	u64 virt = (u64)__hyp_va(phys);
> >  	int ret;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	hyp_lock_component();
> >  
> > @@ -887,6 +906,9 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages)
> >  	u64 size = PAGE_SIZE * nr_pages;
> >  	int ret;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_OWNED);
> >  	if (!ret)
> > @@ -902,6 +924,9 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
> >  	u64 size = PAGE_SIZE * nr_pages;
> >  	int ret;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED);
> >  	if (!ret)
> > @@ -945,6 +970,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
> >  	if (prot & ~KVM_PGTABLE_PROT_RWX)
> >  		return -EINVAL;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> 
> I think we don't need it here because __pkvm_host_share_guest has the
> __guest_check_transition_size verification in place which limits
> nr_pages.  

__guest_check_transition_size() will only limit the range to PMD_SIZE,
which can be quite a big number on systems with pages larger than 4KiB.
So I believe this is still a loophole worth fixing.
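
For reference, a rough sketch of the sizes involved, assuming PMD-level
block mappings (PTRS_PER_PTE base pages per block):

	 4KiB pages: PMD_SIZE =  512 *  4KiB =   2MiB (nr_pages up to  512)
	16KiB pages: PMD_SIZE = 2048 * 16KiB =  32MiB (nr_pages up to 2048)
	64KiB pages: PMD_SIZE = 8192 * 64KiB = 512MiB (nr_pages up to 8192)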

> 
> >  	ret = __guest_check_transition_size(phys, ipa, nr_pages, &size);
> >  	if (ret)
> >  		return ret;
> > 
> > base-commit: 7ea30958b3054f5e488fa0b33c352723f7ab3a2a
> > -- 
> > 2.51.0.869.ge66316f041-goog
> >
> 
> Other than that this looks good, thanks
> Sebastian

Thanks for having a look at the patch.