This patch set continues the MMU enhancements, focusing on TLB search, in
particular the functions helper_invtlb_page_asid() and
helper_invtlb_page_asid_or_g(). Their code is similar to
loongarch_tlb_search(), so a common API, loongarch_tlb_search_cb(), is
added and shared by these functions.
The QEMU TLB flush is also optimized: invalidate_tlb_entry() is used to
flush a single TLB entry rather than flushing all TLB entries.
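To illustrate the idea behind the common search API, here is a minimal
standalone sketch of a callback-based TLB walk. The structures and names
(tlb_entry, tlb_search_cb, invalidate_cb, TLB_SIZE) are illustrative
assumptions, not QEMU's actual types or signatures; it only shows how one
search loop can serve both the ASID-only and the ASID-or-global invtlb
cases while invalidating just the matching entry.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical simplified TLB entry -- not QEMU's actual structure. */
typedef struct {
    uint64_t vpn;     /* virtual page number */
    uint16_t asid;    /* address space ID */
    bool     global;  /* G bit: entry matches any ASID */
    bool     valid;
} tlb_entry;

#define TLB_SIZE 8

/* Callback invoked for each matching entry; the common search walks the
 * TLB once and lets each caller decide what to do with a hit. */
typedef void (*tlb_match_cb)(tlb_entry *e, void *opaque);

static void tlb_search_cb(tlb_entry *tlb, uint64_t vpn, uint16_t asid,
                          bool match_global, tlb_match_cb cb, void *opaque)
{
    for (size_t i = 0; i < TLB_SIZE; i++) {
        tlb_entry *e = &tlb[i];

        if (!e->valid || e->vpn != vpn) {
            continue;
        }
        /* ASID match, or global entry when the caller asks for it
         * (the invtlb_page_asid_or_g case). */
        if (e->asid == asid || (match_global && e->global)) {
            cb(e, opaque);
        }
    }
}

/* Example callback: invalidate only the matching entry, mirroring the
 * idea of flushing one TLB entry instead of the whole TLB. */
static void invalidate_cb(tlb_entry *e, void *opaque)
{
    int *count = opaque;

    e->valid = false;
    (*count)++;
}
```

With this shape, helper_invtlb_page_asid() would pass match_global=false
and helper_invtlb_page_asid_or_g() match_global=true, both reusing the
same walk instead of duplicating the search loop.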
---
v2 ... v3:
1. Remove the optimization that flushed the QEMU TLB only for
   MMU_USER_IDX; now both the MMU_KERNEL_IDX and MMU_USER_IDX bitmaps
   are added.
2. Add the RESERVE field to register CSR_STLBPS; the RESERVE field is
   kept zero when register CSR_STLBPS is changed.
v1 ... v2:
1. Add a bugfix patch for the CSR_STLBPS page size set issue.
2. Add TLB entry invalidation in function invalidate_tlb(), so that it
   can be used by both helper_invtlb_page_asid() and
   helper_invtlb_page_asid_or_g().
---
Bibo Mao (12):
target/loongarch: Use mmu idx bitmap method when flush TLB
target/loongarch: Add parameter tlb pointer with fill_tlb_entry
target/loongarch: Reduce TLB flush with helper_tlbwr
target/loongarch: Update TLB index selection method
target/loongarch: Fix page size set issue with CSR_STLBPS
target/loongarch: Add tlb search callback in loongarch_tlb_search()
target/loongarch: Add common API loongarch_tlb_search_cb()
target/loongarch: Use loongarch_tlb_search_cb in
helper_invtlb_page_asid_or_g
target/loongarch: Use loongarch_tlb_search_cb in
helper_invtlb_page_asid
target/loongarch: Invalid tlb entry in invalidate_tlb()
target/loongarch: Only flush one TLB entry in
helper_invtlb_page_asid_or_g()
target/loongarch: Only flush one TLB entry in
helper_invtlb_page_asid()
target/loongarch/cpu-csr.h | 1 +
target/loongarch/tcg/csr_helper.c | 5 +-
target/loongarch/tcg/tlb_helper.c | 197 +++++++++++++++++++-----------
3 files changed, 132 insertions(+), 71 deletions(-)
base-commit: 8415b0619f65bff12f10c774659df92d3f61daca
--
2.39.3