RMRRs are set up ahead of populating the p2m and hence the ASSERT when
populating the low 1MB needs to be relaxed when it finds an existing
entry: it's either RAM or a RMRR resulting from the IOMMU setup.

Rework the logic a bit and introduce a local mfn variable in order to
assert that, if the gfn is populated and not RAM, it is an identity map.

Fixes: 6b4f6a31ac ('x86/PVH: de-duplicate mappings for first Mb of Dom0 memory')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/hvm/dom0_build.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index b00e45885c..739bb8adb6 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -466,11 +466,17 @@ static int __init pvh_populate_p2m(struct domain *d)
     for ( i = rc = 0; i < MB1_PAGES; ++i )
     {
         p2m_type_t p2mt;
+        mfn_t mfn;
 
-        if ( mfn_eq(get_gfn_query(d, i, &p2mt), INVALID_MFN) )
+        mfn = get_gfn_query(d, i, &p2mt);
+        if ( mfn_eq(mfn, INVALID_MFN) )
             rc = set_mmio_p2m_entry(d, _gfn(i), _mfn(i), PAGE_ORDER_4K);
-        else
-            ASSERT(p2mt == p2m_ram_rw);
+        else if ( p2mt != p2m_ram_rw && !mfn_eq(mfn, _mfn(i)) )
+            /*
+             * If the p2m entry is already set it must belong to a RMRR and
+             * already be identity mapped, or be a RAM region.
+             */
+                ASSERT_UNREACHABLE();
         put_gfn(d, i);
         if ( rc )
         {
--
2.34.1
On 24.01.2022 16:23, Roger Pau Monne wrote:
> RMRRs are set up ahead of populating the p2m and hence the ASSERT when
> populating the low 1MB needs to be relaxed when it finds an existing
> entry: it's either RAM or a RMRR resulting from the IOMMU setup.
>
> Rework the logic a bit and introduce a local mfn variable in order to
> assert that, if the gfn is populated and not RAM, it is an identity map.
>
> Fixes: 6b4f6a31ac ('x86/PVH: de-duplicate mappings for first Mb of Dom0 memory')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit ...
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -466,11 +466,17 @@ static int __init pvh_populate_p2m(struct domain *d)
>      for ( i = rc = 0; i < MB1_PAGES; ++i )
>      {
>          p2m_type_t p2mt;
> +        mfn_t mfn;
> 
> -        if ( mfn_eq(get_gfn_query(d, i, &p2mt), INVALID_MFN) )
> +        mfn = get_gfn_query(d, i, &p2mt);
... preferably with this becoming the initializer of the variable then
and ...
> +        if ( mfn_eq(mfn, INVALID_MFN) )
>              rc = set_mmio_p2m_entry(d, _gfn(i), _mfn(i), PAGE_ORDER_4K);
> -        else
> -            ASSERT(p2mt == p2m_ram_rw);
> +        else if ( p2mt != p2m_ram_rw && !mfn_eq(mfn, _mfn(i)) )
> +            /*
> +             * If the p2m entry is already set it must belong to a RMRR and
> +             * already be identity mapped, or be a RAM region.
> +             */
> +                ASSERT_UNREACHABLE();
... (not just preferably) indentation reduced by a level here. (I wonder
why you didn't simply extend the existing ASSERT() - ASSERT_UNREACHABLE()
is nice in certain cases, but the expression it logs is entirely unhelpful
for disambiguating the reason without looking at the source.)
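
For illustration only, a rough sketch (not compile-tested, and not the
patch as posted) of the shape the above suggestions point at, with the
query as the variable's initializer and the existing ASSERT() extended
rather than replaced:

    for ( i = rc = 0; i < MB1_PAGES; ++i )
    {
        p2m_type_t p2mt;
        /* Query result becomes the initializer, as suggested above. */
        mfn_t mfn = get_gfn_query(d, i, &p2mt);

        if ( mfn_eq(mfn, INVALID_MFN) )
            rc = set_mmio_p2m_entry(d, _gfn(i), _mfn(i), PAGE_ORDER_4K);
        else
            /*
             * An existing entry must either be RAM or an identity mapping
             * put in place by the earlier RMRR/IOMMU setup.
             */
            ASSERT(p2mt == p2m_ram_rw || mfn_eq(mfn, _mfn(i)));
        put_gfn(d, i);
        /* ... error handling unchanged ... */
    }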
Jan