clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
size as an argument. But they also rely on fixmap, which can only map a
single PAGE_SIZE page.

With the upcoming stage-2 huge mappings for pKVM np-guests, those
callbacks will get size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE basis
until the whole range is done.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 31173c694695..23544928a637 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,28 @@ static void guest_s2_put_page(void *addr)
static void clean_dcache_guest_page(void *va, size_t size)
{
- __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
- hyp_fixmap_unmap();
+ WARN_ON(!PAGE_ALIGNED(size));
+
+ while (size) {
+ __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+ PAGE_SIZE);
+ hyp_fixmap_unmap();
+ va += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
}
static void invalidate_icache_guest_page(void *va, size_t size)
{
- __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
- hyp_fixmap_unmap();
+ WARN_ON(!PAGE_ALIGNED(size));
+
+ while (size) {
+ __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+ PAGE_SIZE);
+ hyp_fixmap_unmap();
+ va += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
}
int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
--
2.49.0.1015.ga840276032-goog
On Fri, 09 May 2025 14:16:57 +0100,
Vincent Donnefort <vdonnefort@google.com> wrote:
>
> clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
> size as an argument. But they also rely on fixmap, which can only map a
> single PAGE_SIZE page.
>
> With the upcoming stage-2 huge mappings for pKVM np-guests, those
> callbacks will get size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE basis
> until the whole range is done.
>
> Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 31173c694695..23544928a637 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -219,14 +219,28 @@ static void guest_s2_put_page(void *addr)
>
> static void clean_dcache_guest_page(void *va, size_t size)
> {
> - __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> - hyp_fixmap_unmap();
> + WARN_ON(!PAGE_ALIGNED(size));

What if "va" isn't aligned?

> +
> + while (size) {
> + __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> + PAGE_SIZE);
> + hyp_fixmap_unmap();
> + va += PAGE_SIZE;
> + size -= PAGE_SIZE;
> + }

I know pKVM dies on WARN, but this code "looks" unsafe. Can you align
va and size to be on page boundaries, so that we are 100% sure the
loop terminates?

> }
>
> static void invalidate_icache_guest_page(void *va, size_t size)
> {
> - __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> - hyp_fixmap_unmap();
> + WARN_ON(!PAGE_ALIGNED(size));
> +
> + while (size) {
> + __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> + PAGE_SIZE);
> + hyp_fixmap_unmap();
> + va += PAGE_SIZE;
> + size -= PAGE_SIZE;
> + }

Same here.

Thanks,

M.
--
Without deviation from the norm, progress is not possible.
Hi,
Thanks for having a look at the series.
On Fri, May 16, 2025 at 01:15:00PM +0100, Marc Zyngier wrote:
> On Fri, 09 May 2025 14:16:57 +0100,
> Vincent Donnefort <vdonnefort@google.com> wrote:
> >
> > clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
> > size as an argument. But they also rely on fixmap, which can only map a
> > single PAGE_SIZE page.
> >
> > With the upcoming stage-2 huge mappings for pKVM np-guests, those
> > callbacks will get size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE basis
> > until the whole range is done.
> >
> > Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index 31173c694695..23544928a637 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -219,14 +219,28 @@ static void guest_s2_put_page(void *addr)
> >
> > static void clean_dcache_guest_page(void *va, size_t size)
> > {
> > - __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> > - hyp_fixmap_unmap();
> > + WARN_ON(!PAGE_ALIGNED(size));
>
> What if "va" isn't aligned?
So the only callers are either for PAGE_SIZE or PMD_SIZE with the right
addr alignment.
But happy to make this more future-proof, after all an ALIGN() is quite cheap.
>
> > +
> > + while (size) {
> > + __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> > + PAGE_SIZE);
> > + hyp_fixmap_unmap();
> > + va += PAGE_SIZE;
> > + size -= PAGE_SIZE;
> > + }
>
> I know pKVM dies on WARN, but this code "looks" unsafe. Can you align
> va and size to be on page boundaries, so that we are 100% sure the
> loop terminates?
>
> > }
> >
> > static void invalidate_icache_guest_page(void *va, size_t size)
> > {
> > - __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> > - hyp_fixmap_unmap();
> > + WARN_ON(!PAGE_ALIGNED(size));
> > +
> > + while (size) {
> > + __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> > + PAGE_SIZE);
> > + hyp_fixmap_unmap();
> > + va += PAGE_SIZE;
> > + size -= PAGE_SIZE;
> > + }
>
> Same here.
>
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
On Fri, 16 May 2025 18:53:27 +0100,
Vincent Donnefort <vdonnefort@google.com> wrote:
>
> Hi,
>
> Thanks for having a look at the series.
>
> On Fri, May 16, 2025 at 01:15:00PM +0100, Marc Zyngier wrote:
> > On Fri, 09 May 2025 14:16:57 +0100,
> > Vincent Donnefort <vdonnefort@google.com> wrote:
> > >
> > > clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
> > > size as an argument. But they also rely on fixmap, which can only map a
> > > single PAGE_SIZE page.
> > >
> > > With the upcoming stage-2 huge mappings for pKVM np-guests, those
> > > callbacks will get size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE basis
> > > until the whole range is done.
> > >
> > > Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
> > >
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > > index 31173c694695..23544928a637 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > > @@ -219,14 +219,28 @@ static void guest_s2_put_page(void *addr)
> > >
> > > static void clean_dcache_guest_page(void *va, size_t size)
> > > {
> > > - __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> > > - hyp_fixmap_unmap();
> > > + WARN_ON(!PAGE_ALIGNED(size));
> >
> > What if "va" isn't aligned?
>
> So the only callers are either for PAGE_SIZE or PMD_SIZE with the right
> addr alignment.
>
> But happy to make this more future-proof, after all an ALIGN() is quite cheap.
Exactly. I'd rather have too many of those instead of something that
may not terminate. If anything, it makes the code easier to reason about.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.