[PATCH v2] softmmu: Always initialize xlat in address_space_translate_for_iotlb

Richard Henderson posted 1 patch 1 year, 10 months ago
git fetch https://github.com/patchew-project/qemu tags/patchew/20220621153829.366423-1-richard.henderson@linaro.org
Maintainers: Paolo Bonzini <pbonzini@redhat.com>, Peter Xu <peterx@redhat.com>, David Hildenbrand <david@redhat.com>, "Philippe Mathieu-Daudé" <f4bug@amsat.org>
[PATCH v2] softmmu: Always initialize xlat in address_space_translate_for_iotlb
Posted by Richard Henderson 1 year, 10 months ago
The bug is an uninitialized memory read along the translate_fail
path: *xlat is left unset, so a garbage value is later passed to
iotlb_to_section, which can lead to a crash in io_readx/io_writex.
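
To make the failure mode concrete, here is a minimal self-contained
model of the lookup (not QEMU code; the real logic lives in
iotlb_to_section() in softmmu/physmem.c, and the TARGET_PAGE_BITS
value is assumed for illustration):

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_BITS 12
    #define TARGET_PAGE_MASK (~(uint64_t)((1 << TARGET_PAGE_BITS) - 1))

    /* Model: the page-offset bits of the iotlb value index sections[]. */
    static int iotlb_to_section_index(uint64_t xlat, int sections_nb)
    {
        int section_index = xlat & ~TARGET_PAGE_MASK;
        assert(section_index < sections_nb);   /* garbage xlat trips this,
                                                  or reads out of bounds */
        return section_index;
    }

    int main(void)
    {
        uint64_t garbage_xlat = 0xdeadbeef;        /* stand-in for the stale value */
        iotlb_to_section_index(garbage_xlat, 16);  /* index 0xeef: aborts */
        return 0;
    }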

The bug may be fixed by writing any value whose bits in
~TARGET_PAGE_MASK are zero, so that the call to iotlb_to_section
using the xlat'ed address returns io_mem_unassigned, as desired by
the translate_fail path.
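
Concretely, any value with the page-offset bits clear selects
sections[0], which is PHYS_SECTION_UNASSIGNED; a minimal sketch of
the arithmetic (constants assumed for illustration):

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_BITS 12
    #define TARGET_PAGE_MASK (~(uint64_t)((1 << TARGET_PAGE_BITS) - 1))
    #define PHYS_SECTION_UNASSIGNED 0   /* sections[0] -> io_mem_unassigned */

    int main(void)
    {
        uint64_t xlat = 0x40000000;     /* hypothetical page-aligned address */
        /* Page-offset bits are zero, so the lookup lands on section 0. */
        assert((xlat & ~TARGET_PAGE_MASK) == PHYS_SECTION_UNASSIGNED);
        return 0;
    }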

It is most useful to record the original physical page address,
which will eventually be logged by memory_region_access_valid
when the access is rejected by unassigned_mem_accepts.
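
For reference, the rejection comes from the unassigned region's
accepts hook (paraphrased from softmmu/memory.c; the logging itself
happens in its caller, memory_region_access_valid):

    /* io_mem_unassigned accepts no access at all, so
     * memory_region_access_valid() rejects it and logs @addr. */
    static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
                                       unsigned size, bool is_write,
                                       MemTxAttrs attrs)
    {
        return false;
    }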

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Move the assignment to xlat down to translate_fail,
    add an assertion that the page offset is 0.
---
 softmmu/physmem.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index fb16be57a6..dc3c3e5f2e 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -669,7 +669,7 @@ void tcg_iommu_init_notifier_list(CPUState *cpu)
 
 /* Called from RCU critical section */
 MemoryRegionSection *
-address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
+address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
                                   hwaddr *xlat, hwaddr *plen,
                                   MemTxAttrs attrs, int *prot)
 {
@@ -678,6 +678,7 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
     IOMMUMemoryRegionClass *imrc;
     IOMMUTLBEntry iotlb;
     int iommu_idx;
+    hwaddr addr = orig_addr;
     AddressSpaceDispatch *d =
         qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
 
@@ -722,6 +723,16 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
     return section;
 
 translate_fail:
+    /*
+     * We should be given a page-aligned address -- certainly
+     * tlb_set_page_with_attrs() does so.  The page offset of xlat
+     * is used to index sections[], and PHYS_SECTION_UNASSIGNED = 0.
+     * The page portion of xlat will be logged by memory_region_access_valid()
+     * when this memory access is rejected, so use the original untranslated
+     * physical address.
+     */
+    assert((orig_addr & ~TARGET_PAGE_MASK) == 0);
+    *xlat = orig_addr;
     return &d->map.sections[PHYS_SECTION_UNASSIGNED];
 }
 
-- 
2.34.1
Re: [PATCH v2] softmmu: Always initialize xlat in address_space_translate_for_iotlb
Posted by Peter Maydell 1 year, 10 months ago
On Tue, 21 Jun 2022 at 16:38, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The bug is an uninitialized memory read along the translate_fail
> path: *xlat is left unset, so a garbage value is later passed to
> iotlb_to_section, which can lead to a crash in io_readx/io_writex.
>
> The bug may be fixed by writing any value whose bits in
> ~TARGET_PAGE_MASK are zero, so that the call to iotlb_to_section
> using the xlat'ed address returns io_mem_unassigned, as desired by
> the translate_fail path.
>
> It is most useful to record the original physical page address,
> which will eventually be logged by memory_region_access_valid
> when the access is rejected by unassigned_mem_accepts.
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM