[PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates

Ard Biesheuvel posted 6 patches 1 year ago
Posted by Ard Biesheuvel 1 year ago
From: Ard Biesheuvel <ardb@kernel.org>

The pKVM stage-2 mapping code relies on an invalid physical address to
signal to the internal API that only the annotations of descriptors
should be updated. These annotations are stored in the high bits of
invalid descriptors covering memory that has been donated to protected
guests, and is therefore unmapped from the host stage-2 page tables.

Given that these invalid PAs are never stored into the descriptors, it
is better to rely on an explicit flag, to clarify the API and to avoid
confusion regarding whether or not the output address of a descriptor
can ever be invalid to begin with (which is not the case with LPA2).

That removes a dependency on the logic that reasons about the maximum PA
range, which differs on LPA2-capable CPUs depending on whether LPA2 is
enabled, and will be clarified further in subsequent patches.

Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kvm/hyp/pgtable.c | 33 ++++++--------------
 1 file changed, 10 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 40bd55966540..ed600126161a 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -35,14 +35,6 @@ static bool kvm_pgtable_walk_skip_cmo(const struct kvm_pgtable_visit_ctx *ctx)
 	return unlikely(ctx->flags & KVM_PGTABLE_WALK_SKIP_CMO);
 }
 
-static bool kvm_phys_is_valid(u64 phys)
-{
-	u64 parange_max = kvm_get_parange_max();
-	u8 shift = id_aa64mmfr0_parange_to_phys_shift(parange_max);
-
-	return phys < BIT(shift);
-}
-
 static bool kvm_block_mapping_supported(const struct kvm_pgtable_visit_ctx *ctx, u64 phys)
 {
 	u64 granule = kvm_granule_size(ctx->level);
@@ -53,7 +45,7 @@ static bool kvm_block_mapping_supported(const struct kvm_pgtable_visit_ctx *ctx,
 	if (granule > (ctx->end - ctx->addr))
 		return false;
 
-	if (kvm_phys_is_valid(phys) && !IS_ALIGNED(phys, granule))
+	if (!IS_ALIGNED(phys, granule))
 		return false;
 
 	return IS_ALIGNED(ctx->addr, granule);
@@ -587,6 +579,9 @@ struct stage2_map_data {
 
 	/* Force mappings to page granularity */
 	bool				force_pte;
+
+	/* Walk should update owner_id only */
+	bool				annotation;
 };
 
 u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
@@ -885,18 +880,7 @@ static u64 stage2_map_walker_phys_addr(const struct kvm_pgtable_visit_ctx *ctx,
 {
 	u64 phys = data->phys;
 
-	/*
-	 * Stage-2 walks to update ownership data are communicated to the map
-	 * walker using an invalid PA. Avoid offsetting an already invalid PA,
-	 * which could overflow and make the address valid again.
-	 */
-	if (!kvm_phys_is_valid(phys))
-		return phys;
-
-	/*
-	 * Otherwise, work out the correct PA based on how far the walk has
-	 * gotten.
-	 */
+	/* Work out the correct PA based on how far the walk has gotten */
 	return phys + (ctx->addr - ctx->start);
 }
 
@@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
 	if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
 		return false;
 
+	if (data->annotation && ctx->level == KVM_PGTABLE_LAST_LEVEL)
+		return true;
+
 	return kvm_block_mapping_supported(ctx, phys);
 }
 
@@ -923,7 +910,7 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
 	if (!stage2_leaf_mapping_allowed(ctx, data))
 		return -E2BIG;
 
-	if (kvm_phys_is_valid(phys))
+	if (!data->annotation)
 		new = kvm_init_valid_leaf_pte(phys, data->attr, ctx->level);
 	else
 		new = kvm_init_invalid_leaf_owner(data->owner_id);
@@ -1085,11 +1072,11 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 {
 	int ret;
 	struct stage2_map_data map_data = {
-		.phys		= KVM_PHYS_INVALID,
 		.mmu		= pgt->mmu,
 		.memcache	= mc,
 		.owner_id	= owner_id,
 		.force_pte	= true,
+		.annotation	= true,
 	};
 	struct kvm_pgtable_walker walker = {
 		.cb		= stage2_map_walker,
-- 
2.47.1.613.gc27f4b7a9f-goog
Re: [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
Posted by Quentin Perret 1 year ago
On Thursday 12 Dec 2024 at 09:18:46 (+0100), Ard Biesheuvel wrote:
> @@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
>  	if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
>  		return false;
>  
> +	if (data->annotation && ctx->level == KVM_PGTABLE_LAST_LEVEL)
> +		return true;
> +

I don't think it's a problem, but what's the rationale for checking
ctx->level here? The data->force_pte logic should already do this for us
and be somewhat orthogonal to data->annotation, no?

Either way, the patch looks good to me

  Reviewed-by: Quentin Perret <qperret@google.com>

Cheers,
Quentin

>  	return kvm_block_mapping_supported(ctx, phys);
>  }
Re: [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
Posted by Ard Biesheuvel 1 year ago
On Thu, 12 Dec 2024 at 12:33, Quentin Perret <qperret@google.com> wrote:
>
> On Thursday 12 Dec 2024 at 09:18:46 (+0100), Ard Biesheuvel wrote:
> > @@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
> >       if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
> >               return false;
> >
> > +     if (data->annotation && ctx->level == KVM_PGTABLE_LAST_LEVEL)
> > +             return true;
> > +
>
> I don't think it's a problem, but what's the rationale for checking
> ctx->level here? The data->force_pte logic should already do this for us
> and be somewhat orthogonal to data->annotation, no?
>

So you are saying this could be

> > +     if (data->annotation)
> > +             return true;

right? That hides the fact that we expect data->annotation to imply
data->force_pte, but other than that, it should work the same, yes.
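To spell that out, the two variants can be sketched in standalone C
(hypothetical stand-ins for the kernel code; the
kvm_block_mapping_supported() fallback is omitted here for brevity):

```c
#include <stdbool.h>

/* Illustrative stand-in for KVM_PGTABLE_LAST_LEVEL. */
#define LAST_LEVEL 3

/* As posted: annotation walks re-check the level explicitly. */
static bool leaf_allowed_explicit(bool force_pte, bool annotation, int level)
{
	if (force_pte && level < LAST_LEVEL)
		return false;
	return annotation && level == LAST_LEVEL;
}

/*
 * As suggested: rely on annotation implying force_pte, so the level
 * check above has already rejected every non-leaf level by the time
 * the annotation flag is consulted.
 */
static bool leaf_allowed_implied(bool force_pte, bool annotation, int level)
{
	if (force_pte && level < LAST_LEVEL)
		return false;
	return annotation;
}
```

Under the invariant that annotation implies force_pte, the two return
the same result at every level; they only diverge if an annotation
walk is ever run without force_pte set.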

> Either way, the patch looks good to me
>
>   Reviewed-by: Quentin Perret <qperret@google.com>
>

Thanks!
Re: [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
Posted by Quentin Perret 1 year ago
On Thursday 12 Dec 2024 at 12:44:38 (+0100), Ard Biesheuvel wrote:
> On Thu, 12 Dec 2024 at 12:33, Quentin Perret <qperret@google.com> wrote:
> >
> > On Thursday 12 Dec 2024 at 09:18:46 (+0100), Ard Biesheuvel wrote:
> > > @@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
> > >       if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
> > >               return false;
> > >
> > > +     if (data->annotation && ctx->level == KVM_PGTABLE_LAST_LEVEL)
> > > +             return true;
> > > +
> >
> > I don't think it's a problem, but what's the rationale for checking
> > ctx->level here? The data->force_pte logic should already do this for us
> > and be somewhat orthogonal to data->annotation, no?
> >
> 
> So you are saying this could be
> 
> > > +     if (data->annotation)
> > > +             return true;
> 
> right?

Yep, exactly.

> That hides the fact that we expect data->annotation to imply
> data->force_pte, but other than that, it should work the same, yes.

Eventually we'll want to make the two orthogonal to each other (e.g. to
annotate blocks when donating huge pages to protected guests), but
that'll require more work so again I don't mind that check in the
current code. We can always get rid of it when annotations on blocks
are supported.

Cheers,
Quentin