On 23/10/2024 05:03, Gavin Shan wrote:
> On 10/5/24 1:27 AM, Steven Price wrote:
>> From: Suzuki K Poulose <suzuki.poulose@arm.com>
>>
>> Keep track of the number of pages allocated for the top level PGD,
>> rather than computing it every time (though we need it only twice now).
>> This will be used later by Arm CCA KVM changes.
>>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> arch/arm64/include/asm/kvm_pgtable.h | 2 ++
>> arch/arm64/kvm/hyp/pgtable.c | 5 +++--
>> 2 files changed, 5 insertions(+), 2 deletions(-)
>>
>
> If we really want to keep the number of pages in the top-level PGD,
> then the existing helper kvm_pgtable_stage2_pgd_size(), which serves
> the same purpose, needs to be replaced by
> (struct kvm_pgtable::pgd_pages << PAGE_SHIFT) and then removed.
>
> The alternative would be to just use kvm_pgtable_stage2_pgd_size()
> instead of introducing struct kvm_pgtable::pgd_pages, since the value
> is only needed in the slow paths where a realm is created or
> destroyed.

I think just dropping this patch and using kvm_pgtable_stage2_pgd_size()
in the slow paths makes sense. Originally there was some issue with the
value being hard to obtain in the relevant path, but I can't see any
problem now.
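
i.e. the realm paths can recompute the size from the VTCR rather than
caching it. Roughly (untested sketch; rmi_granule_delegate() and the
error label are stand-ins for whatever the CCA series ends up with):

	/* Delegate each top-level PGD page to the RMM. */
	size_t pgd_size = kvm_pgtable_stage2_pgd_size(kvm->arch.mmu.vtcr);
	phys_addr_t pgd_phys = virt_to_phys(kvm->arch.mmu.pgt->pgd);
	size_t i;
	int ret;

	for (i = 0; i < pgd_size; i += PAGE_SIZE) {
		ret = rmi_granule_delegate(pgd_phys + i);
		if (ret)
			goto out_undelegate;
	}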
Thanks,
Steve
>> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
>> index 03f4c3d7839c..25b512756200 100644
>> --- a/arch/arm64/include/asm/kvm_pgtable.h
>> +++ b/arch/arm64/include/asm/kvm_pgtable.h
>> @@ -404,6 +404,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
>>   * struct kvm_pgtable - KVM page-table.
>>   * @ia_bits:		Maximum input address size, in bits.
>>   * @start_level:	Level at which the page-table walk starts.
>> + * @pgd_pages:		Number of pages in the entry level of the page-table.
>>   * @pgd:		Pointer to the first top-level entry of the page-table.
>>   * @mm_ops:		Memory management callbacks.
>>   * @mmu:		Stage-2 KVM MMU struct. Unused for stage-1 page-tables.
>> @@ -414,6 +415,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
>>  struct kvm_pgtable {
>>  	u32					ia_bits;
>>  	s8					start_level;
>> +	u8					pgd_pages;
>>  	kvm_pteref_t				pgd;
>>  	struct kvm_pgtable_mm_ops		*mm_ops;
>> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>> index b11bcebac908..9e1be28c3dc9 100644
>> --- a/arch/arm64/kvm/hyp/pgtable.c
>> +++ b/arch/arm64/kvm/hyp/pgtable.c
>> @@ -1534,7 +1534,8 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
>>  	u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr);
>>  	s8 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;
>>  
>> -	pgd_sz = kvm_pgd_pages(ia_bits, start_level) * PAGE_SIZE;
>> +	pgt->pgd_pages = kvm_pgd_pages(ia_bits, start_level);
>> +	pgd_sz = pgt->pgd_pages * PAGE_SIZE;
>>  	pgt->pgd = (kvm_pteref_t)mm_ops->zalloc_pages_exact(pgd_sz);
>>  	if (!pgt->pgd)
>>  		return -ENOMEM;
>> @@ -1586,7 +1587,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
>>  	};
>>  
>>  	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
>> -	pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level) * PAGE_SIZE;
>> +	pgd_sz = pgt->pgd_pages * PAGE_SIZE;
>>  	pgt->mm_ops->free_pages_exact(kvm_dereference_pteref(&walker, pgt->pgd), pgd_sz);
>>  	pgt->pgd = NULL;
>>  }
>
> Thanks,
> Gavin
>