[PATCH v3] x86/hvm/dom0: fix PVH initrd and metadata placement

A Zephyr image consists of multiple non-contiguous load segments
that reside in different RAM regions. For instance:
ELF: phdr: paddr=0x1000 memsz=0x8000
ELF: phdr: paddr=0x100000 memsz=0x28a90
ELF: phdr: paddr=0x128aa0 memsz=0x7560
ELF: memory: 0x1000 -> 0x130000

However, the logic that determines the best placement for dom0 initrd
and metadata assumes that the image is fully contained in a single RAM
region, and does not take into account the cases where (with start/end
denoting the RAM region under evaluation):
(1) start > kernel_start && end > kernel_end
(2) start < kernel_start && end < kernel_end
(3) start > kernel_start && end < kernel_end

In case (1), the evaluation will result in end = kernel_start, i.e.
end < start, and will load the initrd in the middle of the kernel image.
In case (2), the evaluation will result in start = kernel_end, i.e.
end < start, and will load the initrd at kernel_end, which is outside
the memory region under evaluation.
In case (3), the evaluation will result in either end = kernel_start or
start = kernel_end, but in both cases end < start, and will either load
the initrd in the middle of the kernel image or, arbitrarily, at
kernel_end.
In all three cases end - start wraps around (the arithmetic is
unsigned), so the subsequent size check passes and a bogus address is
returned.
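
For illustration, below is a minimal, self-contained sketch of the
pre-patch per-region check and how it misfires for case (2).  The
helper and the numbers are made up for demonstration (the addresses
loosely follow the Zephyr log above); this is not the real
find_memory().

    #include <stdint.h>

    typedef uint64_t paddr_t;
    #define INVALID_PADDR (~(paddr_t)0)

    /* Simplified per-region check mirroring the pre-patch logic. */
    static paddr_t old_check(paddr_t start, paddr_t end,
                             paddr_t kernel_start, paddr_t kernel_end,
                             paddr_t size)
    {
        if ( end <= kernel_start || start >= kernel_end )
            ; /* No overlap, nothing to do. */
        else if ( kernel_start - start > end - kernel_end )
            end = kernel_start;
        else
            start = kernel_end;

        if ( end - start >= size )
            return start;

        return INVALID_PADDR;
    }

    /*
     * Case (2): RAM region [0, 0x9000) covers only the start of the
     * kernel span [0x1000, 0x130000).  kernel_start - start is 0x1000
     * while end - kernel_end wraps to a huge unsigned value, so start
     * is set to kernel_end.  end - start then wraps as well, the size
     * check passes, and an address outside the region is returned:
     * old_check(0x0, 0x9000, 0x1000, 0x130000, 0x2000) == 0x130000.
     */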

Reorganize the conditionals to cover the so far unconsidered cases as
well, uniformly returning the lowest available address.
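
Applied to the same made-up numbers, the reworked check (an annotated
excerpt of the new logic in the diff below) simply skips such a region
instead of handing back a bogus address:

    /* start = 0x0, end = 0x9000, kernel_start = 0x1000, kernel_end = 0x130000 */
    if ( end <= kernel_start || start >= kernel_end )    /* false: partial overlap */
    {
        if ( end - start >= size )
            return start;
    }
    else if ( kernel_start > start && kernel_start - start >= size )
        return start;      /* not taken: only 0x1000 bytes free below the kernel */
    else if ( kernel_end < end && end - kernel_end >= size )
        return kernel_end; /* not taken: kernel_end is not below end */
    /* Nothing is returned; the loop moves on to the next RAM region. */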

Fixes: 73b47eea2104 ('x86/dom0: improve PVH initrd and metadata placement')
Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---

Changes in v3:
    - mention Zephyr in commit message (Roger)

Changes in v2:
    - cover further cases of overlap (Jan)
    - mention with an in-code comment that a proper, more fine-grained
      solution can be implemented using a rangeset (Roger)

 xen/arch/x86/hvm/dom0_build.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index c7d47d0d4c..62debc7415 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -515,16 +515,23 @@ static paddr_t __init find_memory(
 
         ASSERT(IS_ALIGNED(start, PAGE_SIZE) && IS_ALIGNED(end, PAGE_SIZE));
 
+        /*
+         * NB: Even better would be to use rangesets to determine a suitable
+         * range, in particular in case a kernel requests multiple heavily
+         * discontiguous regions (which right now we fold all into one big
+         * region).
+         */
         if ( end <= kernel_start || start >= kernel_end )
-            ; /* No overlap, nothing to do. */
+        {
+            /* No overlap, just check whether the region is large enough. */
+            if ( end - start >= size )
+                return start;
+        }
         /* Deal with the kernel already being loaded in the region. */
-        else if ( kernel_start - start > end - kernel_end )
-            end = kernel_start;
-        else
-            start = kernel_end;
-
-        if ( end - start >= size )
+        else if ( kernel_start > start && kernel_start - start >= size )
             return start;
+        else if ( kernel_end < end && end - kernel_end >= size )
+            return kernel_end;
     }
 
     return INVALID_PADDR;
-- 
2.34.1