[PATCH v1 5/6] xen: move domain_use_host_layout() to common header

Oleksii Kurochko posted 6 patches 1 month, 4 weeks ago
There is a newer version of this series
[PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Oleksii Kurochko 1 month, 4 weeks ago
domain_use_host_layout() is generic enough to be moved to the
common header xen/domain.h.

Wrap domain_use_host_layout() with "#ifndef domain_use_host_layout"
to allow architectures to override it if needed.

No functional change.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/arm/include/asm/domain.h | 14 --------------
 xen/include/xen/domain.h          | 16 ++++++++++++++++
 2 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 758ad807e461..1a04fe658c97 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -29,20 +29,6 @@ enum domain_type {
 #define is_64bit_domain(d) (0)
 #endif
 
-/*
- * Is the domain using the host memory layout?
- *
- * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
- * To avoid any trouble finding space, it is easier to force using the
- * host memory layout.
- *
- * The hardware domain will use the host layout regardless of
- * direct-mapped because some OS may rely on a specific address ranges
- * for the devices.
- */
-#define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
-                                   is_hardware_domain(d))
-
 struct vtimer {
     struct vcpu *v;
     int irq;
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 93c0fd00c1d7..40487825ad91 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -62,6 +62,22 @@ void domid_free(domid_t domid);
 #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
 #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
 
+/*
+ * Is the domain using the host memory layout?
+ *
+ * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
+ * To avoid any trouble finding space, it is easier to force using the
+ * host memory layout.
+ *
+ * The hardware domain will use the host layout regardless of
+ * direct-mapped because some OS may rely on a specific address ranges
+ * for the devices.
+ */
+#ifndef domain_use_host_layout
+# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
+                                    is_hardware_domain(d))
+#endif
+
 /*
  * Arch-specifics.
  */
-- 
2.52.0
Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Jan Beulich 1 month, 3 weeks ago
On 12.02.2026 17:21, Oleksii Kurochko wrote:
> domain_use_host_layout() is generic enough to be moved to the
> common header xen/domain.h.

Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...

> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
>  #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>  #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>  
> +/*
> + * Is the domain using the host memory layout?
> + *
> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
> + * To avoid any trouble finding space, it is easier to force using the
> + * host memory layout.
> + *
> + * The hardware domain will use the host layout regardless of
> + * direct-mapped because some OS may rely on a specific address ranges
> + * for the devices.
> + */
> +#ifndef domain_use_host_layout
> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> +                                    is_hardware_domain(d))

... is_domain_direct_mapped() isn't something that I'd like to see further
proliferate in common (non-DT) code.

Jan
Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Stefano Stabellini 1 month, 3 weeks ago
On Mon, 16 Feb 2026, Jan Beulich wrote:
> On 12.02.2026 17:21, Oleksii Kurochko wrote:
> > domain_use_host_layout() is generic enough to be moved to the
> > common header xen/domain.h.
> 
> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
> 
> > [...]
> > +#ifndef domain_use_host_layout
> > +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> > +                                    is_hardware_domain(d))
> 
> ... is_domain_direct_mapped() isn't something that I'd like to see further
> proliferate in common (non-DT) code.

Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
domain) on x86 as well. In fact, we already have a working prototype,
although it is not suitable for upstream yet.

In addition to the PSP use case that we discussed a few months ago,
where the PSP is not behind an IOMMU and therefore exchanged addresses
must be 1:1 mapped, we also have a new use case. We are running the full
Xen-based automotive stack on an Azure instance where SVM (vmentry and
vmexit) is available, but an IOMMU is not present. All virtual machines
are configured as PVH.
Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Jan Beulich 1 month, 3 weeks ago
On 16.02.2026 19:42, Stefano Stabellini wrote:
> On Mon, 16 Feb 2026, Jan Beulich wrote:
>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>> domain_use_host_layout() is generic enough to be moved to the
>>> common header xen/domain.h.
>>
>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>
>>> [...]
>>> +#ifndef domain_use_host_layout
>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>> +                                    is_hardware_domain(d))
>>
>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>> proliferate in common (non-DT) code.
> 
> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
> domain) on x86 as well. In fact, we already have a working prototype,
> although it is not suitable for upstream yet.
> 
> In addition to the PSP use case that we discussed a few months ago,
> where the PSP is not behind an IOMMU and therefore exchanged addresses
> must be 1:1 mapped, we also have a new use case. We are running the full
> Xen-based automotive stack on an Azure instance where SVM (vmentry and
> vmexit) is available, but an IOMMU is not present. All virtual machines
> are configured as PVH.

Hmm. Then adjustments need making, for commentary and macro to be correct
on x86. First and foremost none of what is there is true for PV.

Jan
Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Oleksii Kurochko 1 month, 3 weeks ago
On 2/17/26 8:34 AM, Jan Beulich wrote:
> On 16.02.2026 19:42, Stefano Stabellini wrote:
>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>> domain_use_host_layout() is generic enough to be moved to the
>>>> common header xen/domain.h.
>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>
>>>> [...]
>>>> +#ifndef domain_use_host_layout
>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>> +                                    is_hardware_domain(d))
>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>> proliferate in common (non-DT) code.
>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>> domain) on x86 as well. In fact, we already have a working prototype,
>> although it is not suitable for upstream yet.
>>
>> In addition to the PSP use case that we discussed a few months ago,
>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>> must be 1:1 mapped, we also have a new use case. We are running the full
>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>> vmexit) is available, but an IOMMU is not present. All virtual machines
>> are configured as PVH.
> Hmm. Then adjustments need making, for commentary and macro to be correct
> on x86. First and foremost none of what is there is true for PV.

Since is_domain_direct_mapped() always returns false on x86, the
domain_use_host_layout() macro will return an incorrect value for non-hardware
domains (dom0?). And since PV domains are not auto-translated domains, they
are always direct-mapped, so technically is_domain_direct_mapped() (or
domain_use_host_layout()) should return true in that case.

(I assume it is also true for every domain except HVM, according to the comment
/* HVM guests are translated.  PV guests are not. */ in xc_dom_translated and
the comment above the definition of XENFEAT_direct_mapped: /* ...not
auto_translated domains (x86 only) are always direct-mapped */.)

Is my understanding correct?

Then isn't that a problem with how is_domain_direct_mapped() is defined
for x86? Shouldn't it be defined as:
   #define is_domain_direct_mapped(d) (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))

Or would it be better to move "!paging_mode_translate(d) || " into the
definition of domain_use_host_layout()?

Could you please explain what is wrong with the comment? Probably, except for:
   * To avoid any trouble finding space, it is easier to force using the
   * host memory layout.
everything else should be true for x86.

~ Oleksii


Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Jan Beulich 1 month, 3 weeks ago
On 18.02.2026 13:58, Oleksii Kurochko wrote:
> 
> On 2/17/26 8:34 AM, Jan Beulich wrote:
>> On 16.02.2026 19:42, Stefano Stabellini wrote:
>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>> domain_use_host_layout() is generic enough to be moved to the
>>>>> common header xen/domain.h.
>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>>
>>>>> [...]
>>>>> +#ifndef domain_use_host_layout
>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>>> +                                    is_hardware_domain(d))
>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>>> proliferate in common (non-DT) code.
>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>>> domain) on x86 as well. In fact, we already have a working prototype,
>>> although it is not suitable for upstream yet.
>>>
>>> In addition to the PSP use case that we discussed a few months ago,
>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>>> must be 1:1 mapped, we also have a new use case. We are running the full
>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>>> vmexit) is available, but an IOMMU is not present. All virtual machines
>>> are configured as PVH.
>> Hmm. Then adjustments need making, for commentary and macro to be correct
>> on x86. First and foremost none of what is there is true for PV.
> 
> As is_domain_direct_mapped() returns always false for x86, so
> domain_use_host_layout macro will return incorrect value for non-hardware
> domains (dom0?). And as PV domains are not auto_translated domains so are
> always direct-mapped, so technically is_domain_direct_mapped() (or
> domain_use_host_layout()) should return true in such case.

Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
some special purpose (absence of IOMMU iirc).

> (I assume it is also true for every domain except HVM according to the comment
> /* HVM guests are translated.  PV guests are not. */ in xc_dom_translated and
> the comment above definition of XENFEAT_direct_mapped: /* ...not auto_translated
> domains (x86 only) are always direct-mapped*/).
> 
> Is my understanding correct?
> 
> Then isn't that a problem of how is_domain_direct_mapped() is defined
> for x86? Shouldn't it be defined like:
>    #define is_domain_direct_mapped(d) (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))
> 
> Would it be better to move "!paging_mode_translate(d) || " to the definition
> of domain_use_host_layout()?
> 
> Could you please explain what is wrong with the comment? Probably, except:
>    * To avoid any trouble finding space, it is easier to force using the
>    * host memory layout.
> everything else should be true for x86.

"The hardware domain will use ..." isn't true for PV Dom0.

Jan

Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Oleksii Kurochko 1 month, 3 weeks ago
On 2/18/26 2:12 PM, Jan Beulich wrote:
> On 18.02.2026 13:58, Oleksii Kurochko wrote:
>> On 2/17/26 8:34 AM, Jan Beulich wrote:
>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>>> domain_use_host_layout() is generic enough to be moved to the
>>>>>> common header xen/domain.h.
>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>>>
>>>>>> [...]
>>>>>> +#ifndef domain_use_host_layout
>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>>>> +                                    is_hardware_domain(d))
>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>>>> proliferate in common (non-DT) code.
>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>>>> domain) on x86 as well. In fact, we already have a working prototype,
>>>> although it is not suitable for upstream yet.
>>>>
>>>> In addition to the PSP use case that we discussed a few months ago,
>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>>>> must be 1:1 mapped, we also have a new use case. We are running the full
>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
>>>> are configured as PVH.
>>> Hmm. Then adjustments need making, for commentary and macro to be correct
>>> on x86. First and foremost none of what is there is true for PV.
>> As is_domain_direct_mapped() returns always false for x86, so
>> domain_use_host_layout macro will return incorrect value for non-hardware
>> domains (dom0?). And as PV domains are not auto_translated domains so are
>> always direct-mapped, so technically is_domain_direct_mapped() (or
>> domain_use_host_layout()) should return true in such case.
> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
> some special purpose (absence of IOMMU iirc).

I drew that conclusion from the comments in the code referenced below:
  - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
  - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107

Also, the commit that introduced it (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
mentions that:
   XENFEAT_direct_mapped is always set for not auto-translated guests.

>
>> (I assume it is also true for every domain except HVM according to the comment
>> /* HVM guests are translated.  PV guests are not. */ in xc_dom_translated and
>> the comment above definition of XENFEAT_direct_mapped: /* ...not auto_translated
>> domains (x86 only) are always direct-mapped*/).
>>
>> Is my understanding correct?
>>
>> Then isn't that a problem of how is_domain_direct_mapped() is defined
>> for x86? Shouldn't it be defined like:
>>     #define is_domain_direct_mapped(d) (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))
>>
>> Would it be better to move "!paging_mode_translate(d) || " to the definition
>> of domain_use_host_layout()?
>>
>> Could you please explain what is wrong with the comment? Probably, except:
>>     * To avoid any trouble finding space, it is easier to force using the
>>     * host memory layout.
>> everything else should be true for x86.
> "The hardware domain will use ..." isn't true for PV Dom0.

And then plain is_hardware_domain(d) inside the macro isn't correct either, right?
So it should be (... || (!is_pv_domain(d) && is_hardware_domain(d))).

~ Oleksii


Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Jan Beulich 1 month, 3 weeks ago
On 18.02.2026 15:38, Oleksii Kurochko wrote:
> On 2/18/26 2:12 PM, Jan Beulich wrote:
>> On 18.02.2026 13:58, Oleksii Kurochko wrote:
>>> On 2/17/26 8:34 AM, Jan Beulich wrote:
>>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
>>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>>>> domain_use_host_layout() is generic enough to be moved to the
>>>>>>> common header xen/domain.h.
>>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>>>>
>>>>>>> [...]
>>>>>>> +#ifndef domain_use_host_layout
>>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>>>>> +                                    is_hardware_domain(d))
>>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>>>>> proliferate in common (non-DT) code.
>>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>>>>> domain) on x86 as well. In fact, we already have a working prototype,
>>>>> although it is not suitable for upstream yet.
>>>>>
>>>>> In addition to the PSP use case that we discussed a few months ago,
>>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>>>>> must be 1:1 mapped, we also have a new use case. We are running the full
>>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
>>>>> are configured as PVH.
>>>> Hmm. Then adjustments need making, for commentary and macro to be correct
>>>> on x86. First and foremost none of what is there is true for PV.
>>> As is_domain_direct_mapped() returns always false for x86, so
>>> domain_use_host_layout macro will return incorrect value for non-hardware
>>> domains (dom0?). And as PV domains are not auto_translated domains so are
>>> always direct-mapped, so technically is_domain_direct_mapped() (or
>>> domain_use_host_layout()) should return true in such case.
>> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
>> some special purpose (absence of IOMMU iirc).
> 
> I made such conclusion because of the comments in the code mentioned below:
>   - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
>   - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107
> 
> Also, in the comment where it is introduced (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
> is mentioned that:
>    XENFEAT_direct_mapped is always set for not auto-translated guests.

Hmm, this you're right with, and XENVER_get_features handling indeed has

            if ( !paging_mode_translate(d) || is_domain_direct_mapped(d) )
                fi.submap |= (1U << XENFEAT_direct_mapped);

Which now I have a vague recollection of not having been happy with back at
the time. Based solely on the GFN == MFN statement this may be correct, but
"GFN" is a questionable term for PV in the first place. See how e.g.
common/memory.c resorts to using GPFN and GMFN, in line with commentary in
public/memory.h.

What the above demonstrates quite well though is that there's no direct
relationship between XENFEAT_direct_mapped and is_domain_direct_mapped().

>>> (I assume it is also true for every domain except HVM according to the comment
>>> /* HVM guests are translated.  PV guests are not. */ in xc_dom_translated and
>>> the comment above definition of XENFEAT_direct_mapped: /* ...not auto_translated
>>> domains (x86 only) are always direct-mapped*/).
>>>
>>> Is my understanding correct?
>>>
>>> Then isn't that a problem of how is_domain_direct_mapped() is defined
>>> for x86? Shouldn't it be defined like:
>>>     #define is_domain_direct_mapped(d) (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))
>>>
>>> Would it be better to move "!paging_mode_translate(d) || " to the definition
>>> of domain_use_host_layout()?
>>>
>>> Could you please explain what is wrong with the comment? Probably, except:
>>>     * To avoid any trouble finding space, it is easier to force using the
>>>     * host memory layout.
>>> everything else should be true for x86.
>> "The hardware domain will use ..." isn't true for PV Dom0.
> 
> And then just pure is_hardware_domain(d) inside macros isn't correct too, right?
> So it should be (... || (!is_pv_domain(d) && is_hardware_domain(d)))

Stefano, please can you guide Oleksii to put there something which is both
correct and will cover your intended use case as well?

Jan

Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Stefano Stabellini 1 month, 2 weeks ago
On Wed, 18 Feb 2026, Jan Beulich wrote:
> On 18.02.2026 15:38, Oleksii Kurochko wrote:
> > On 2/18/26 2:12 PM, Jan Beulich wrote:
> >> On 18.02.2026 13:58, Oleksii Kurochko wrote:
> >>> On 2/17/26 8:34 AM, Jan Beulich wrote:
> >>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
> >>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
> >>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
> >>>>>>> domain_use_host_layout() is generic enough to be moved to the
> >>>>>>> common header xen/domain.h.
> >>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
> >>>>>>
> >>>>>>> [...]
> >>>>>>> +#ifndef domain_use_host_layout
> >>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> >>>>>>> +                                    is_hardware_domain(d))
> >>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
> >>>>>> proliferate in common (non-DT) code.
> >>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
> >>>>> domain) on x86 as well. In fact, we already have a working prototype,
> >>>>> although it is not suitable for upstream yet.
> >>>>>
> >>>>> In addition to the PSP use case that we discussed a few months ago,
> >>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
> >>>>> must be 1:1 mapped, we also have a new use case. We are running the full
> >>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
> >>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
> >>>>> are configured as PVH.
> >>>> Hmm. Then adjustments need making, for commentary and macro to be correct
> >>>> on x86. First and foremost none of what is there is true for PV.
> >>> As is_domain_direct_mapped() returns always false for x86, so
> >>> domain_use_host_layout macro will return incorrect value for non-hardware
> >>> domains (dom0?). And as PV domains are not auto_translated domains so are
> >>> always direct-mapped, so technically is_domain_direct_mapped() (or
> >>> domain_use_host_layout()) should return true in such case.
> >> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
> >> some special purpose (absence of IOMMU iirc).
> > 
> > I made such conclusion because of the comments in the code mentioned below:
> >   - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
> >   - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107
> > 
> > Also, in the comment where it is introduced (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
> > is mentioned that:
> >    XENFEAT_direct_mapped is always set for not auto-translated guests.
> 
> Hmm, this you're right with, and XENVER_get_features handling indeed has
> 
>             if ( !paging_mode_translate(d) || is_domain_direct_mapped(d) )
>                 fi.submap |= (1U << XENFEAT_direct_mapped);
> 
> Which now I have a vague recollection of not having been happy with back at
> the time. Based solely on the GFN == MFN statement this may be correct, but
> "GFN" is a questionable term for PV in the first place. See how e.g.
> common/memory.c resorts to using GPFN and GMFN, in line with commentary in
> public/memory.h.
> 
> What the above demonstrates quite well though is that there's no direct
> relationship between XENFEAT_direct_mapped and is_domain_direct_mapped().

Let's start from the easy case: domain_use_host_layout.

domain_use_host_layout is meant to indicate whether the domain memory
map (e.g. the address of the interrupt controller, the start of RAM,
etc.) matches the host memory map or not.

It is implemented as:

#define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
                                   is_hardware_domain(d))

Because on ARM there are two cases:
1) the hardware domain always uses the host layout
2) non-hardware domains only use the host layout when direct-mapped
(more on that later)


I think this can be generalized and made arch-neutral, with the caveat
that it should return false for PV guests, as Jan mentioned. After all,
the virtual interrupt controller in a PV domain doesn't start at the
same guest physical address as the real interrupt controller. The
comment can be improved, but let's get to it after we talk about
is_domain_direct_mapped().


is_domain_direct_mapped is meant to indicate that a domain's memory is
allocated 1:1 such that GFN == MFN. is_domain_direct_mapped is easily
applicable as-is to PVH and HVM guests where there are two stages of
translation.

What about PV guests? One could take the stance that, given that there
is no real GFN space, GFN is always the same as MFN. But this is
more philosophical than practical.

Practically, is_domain_direct_mapped() triggers a different code path in
xen/common/memory.c:populate_physmap for contiguous 1:1 memory
allocations, which is probably undesirable for PV guests.

Practically, there is a related flag exposed to Linux:
XENFEAT_direct_mapped. For HVM/PVH guests it makes sense for it to be
one and the same as is_domain_direct_mapped(). This flag is used by
Linux to know whether it can use swiotlb-xen or not. Specifically,
swiotlb-xen is only usable when XENFEAT_direct_mapped is enabled for ARM
guests, and the same principle could apply to HVM/PVH guests too. What
about PV guests? They also make use of swiotlb-xen, and
XENFEAT_direct_mapped is set to true for PV guests today.


In conclusion, is_domain_direct_mapped() was born for auto-translated
guests and is meant to trigger large contiguous memory allocations in
Xen and permit the usage of swiotlb-xen in Linux. For PV guests, while
we want swiotlb-xen and the XENFEAT_direct_mapped flag is already set to
true, we don't want to change the memory allocation scheme.

So I think is_domain_direct_mapped() should always be false on x86:
- for PV guests it should always be false
- for PVH/HVM guests it could be true, but that is currently
  unimplemented (AMD is working on an implementation)

For compatibility and functionality, XENFEAT_direct_mapped should be
left as is.

The implementation of domain_use_host_layout() can be moved to common
code with a change:


/*
 * Is the auto-translated domain using the host memory layout?
 *
 * domain_use_host_layout() is always False for PV guests.
 *
 * Direct-mapped domains (auto-translated domains with memory allocated
 * contiguously and mapped 1:1 so that GFN == MFN) always use the host
 * memory layout to avoid address clashes.
 *
 * The hardware domain will use the host layout (regardless of being
 * direct-mapped) because some OSes may rely on specific address ranges
 * for the devices. PV Dom0, like any other PV guest, has
 * domain_use_host_layout() returning False.
 */
#define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
                                   (paging_mode_translate(d) && \
                                    is_hardware_domain(d)))
Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
Posted by Stefano Stabellini 1 month, 2 weeks ago
On Fri, 27 Feb 2026, Stefano Stabellini wrote:
> On Wed, 18 Feb 2026, Jan Beulich wrote:
> > On 18.02.2026 15:38, Oleksii Kurochko wrote:
> > > On 2/18/26 2:12 PM, Jan Beulich wrote:
> > >> On 18.02.2026 13:58, Oleksii Kurochko wrote:
> > >>> On 2/17/26 8:34 AM, Jan Beulich wrote:
> > >>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
> > >>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
> > >>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
> > >>>>>>> domain_use_host_layout() is generic enough to be moved to the
> > >>>>>>> common header xen/domain.h.
> > >>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
> > >>>>>>
> > >>>>>>> --- a/xen/include/xen/domain.h
> > >>>>>>> +++ b/xen/include/xen/domain.h
> > >>>>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
> > >>>>>>>    #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
> > >>>>>>>    #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
> > >>>>>>>    
> > >>>>>>> +/*
> > >>>>>>> + * Is the domain using the host memory layout?
> > >>>>>>> + *
> > >>>>>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
> > >>>>>>> + * To avoid any trouble finding space, it is easier to force using the
> > >>>>>>> + * host memory layout.
> > >>>>>>> + *
> > >>>>>>> + * The hardware domain will use the host layout regardless of
> > >>>>>>> + * direct-mapped because some OS may rely on a specific address ranges
> > >>>>>>> + * for the devices.
> > >>>>>>> + */
> > >>>>>>> +#ifndef domain_use_host_layout
> > >>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> > >>>>>>> +                                    is_hardware_domain(d))
> > >>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
> > >>>>>> proliferate in common (non-DT) code.
> > >>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
> > >>>>> domain) on x86 as well. In fact, we already have a working prototype,
> > >>>>> although it is not suitable for upstream yet.
> > >>>>>
> > >>>>> In addition to the PSP use case that we discussed a few months ago,
> > >>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
> > >>>>> must be 1:1 mapped, we also have a new use case. We are running the full
> > >>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
> > >>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
> > >>>>> are configured as PVH.
> > >>>> Hmm. Then adjustments need making, for commentary and macro to be correct
> > >>>> on x86. First and foremost none of what is there is true for PV.
> > >>> As is_domain_direct_mapped() returns always false for x86, so
> > >>> domain_use_host_layout macro will return incorrect value for non-hardware
> > >>> domains (dom0?). And as PV domains are not auto_translated domains so are
> > >>> always direct-mapped, so technically is_domain_direct_mapped() (or
> > >>> domain_use_host_layout()) should return true in such case.
> > >> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
> > >> some special purpose (absence of IOMMU iirc).
> > > 
> > > I made such conclusion because of the comments in the code mentioned below:
> > >   - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
> > >   - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107
> > > 
> > > Also, in the comment where it is introduced (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
> > > is mentioned that:
> > >    XENFEAT_direct_mapped is always set for not auto-translated guests.
> > 
> > Hmm, this you're right with, and XENVER_get_features handling indeed has
> > 
> >             if ( !paging_mode_translate(d) || is_domain_direct_mapped(d) )
> >                 fi.submap |= (1U << XENFEAT_direct_mapped);
> > 
> > Which now I have a vague recollection of not having been happy with back at
> > the time. Based solely on the GFN == MFN statement this may be correct, but
> > "GFN" is a questionable term for PV in the first place. See how e.g.
> > common/memory.c resorts to using GPFN and GMFN, in line with commentary in
> > public/memory.h.
> > 
> > What the above demonstrates quite well though is that there's no direct
> > relationship between XENFEAT_direct_mapped and is_domain_direct_mapped().
> 
> Let's start from the easy case: domain_use_host_layout.
> 
> domain_use_host_layout is meant to indicate whether the domain memory
> map (e.g. the address of the interrupt controller, the start of RAM,
> etc.) matches the host memory map or not.
> 
> It is implemented as:
> 
> #define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>                                    is_hardware_domain(d))
> 
> Because on ARM there are two cases:
> 1) hardware domain is always using the host layout
> 2) non-hardware domain only use the host layout when directly mapped
> (more on the later)
> 
> 
> I think this can be generalized and made arch-neutral with the caveat
> that it should return False for PV guests as Jan mentioned. After all
> the virtual interrupt controller in a PV domain doesn't start at the
> same guest physical address as the real interrupt controller. The
> comment can be improved, but let's get to it after we talk about
> is_domain_direct_mapped.
> 
> 
> is_domain_direct_mapped is meant to indicate that a domain's memory is
> allocated 1:1 such that GFN == MFN. is_domain_direct_mapped is easily
> applicable as-is to PVH and HVM guests where there are two stages of
> translation.
> 
> What about PV guests? One could take the stance that, since there is
> no real GFN space, GFN is always the same as MFN. But this is more
> philosophical than practical.
> 
> Practically, is_domain_direct_mapped() triggers a different code path in
> xen/common/memory.c:populate_physmap for contiguous 1:1 memory
> allocations which is probably undesirable for PV guests.
> 
> Practically, there is a related flag exposed to Linux:
> XENFEAT_direct_mapped. For HVM/PVH guests it makes sense for it to be
> one and the same as is_domain_direct_mapped(). This flag is used by
> Linux to know whether it can use swiotlb-xen or not. Specifically,
> swiotlb-xen is only usable when XENFEAT_direct_mapped is enabled for
> ARM guests, and the same principle could apply to HVM/PVH guests too.
> What about PV guests? They also make use of swiotlb-xen, and
> XENFEAT_direct_mapped is set to True for PV guests today.
> 
> 
> In conclusion, is_domain_direct_mapped() was born for auto-translated
> guests and is meant to trigger large contiguous memory allocations in
> Xen and to permit the usage of swiotlb-xen in Linux. For PV guests,
> while we want swiotlb-xen and the XENFEAT_direct_mapped flag is already
> set to True, we don't want to change the memory allocation scheme.
> 
> So I think is_domain_direct_mapped() should always be False on x86:
> - PV guests should always be False
> - PVH/HVM guests could be True, but it is currently unimplemented (AMD
>   is working on an implementation)
> 
> For compatibility and functionality, XENFEAT_direct_mapped should be
> left as is.
> 
> The implementation of domain_use_host_layout() can be moved to common
> code with a change:
> 
> 
> /*
>  * Is the auto-translated domain using the host memory layout?
>  *
>  * domain_use_host_layout() is always False for PV guests.
>  *
>  * Direct-mapped domains (auto-translated domains with memory allocated
>  * contiguously and mapped 1:1 so that GFN == MFN) always use the host
>  * memory layout to avoid address clashes.
>  *
>  * The hardware domain will use the host layout (regardless of being
>  * direct-mapped) because some OSes may rely on specific address ranges
>  * for the devices. PV Dom0, like any other PV guest, has
>  * domain_use_host_layout() returning False.
>  */
> #define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>                                    (paging_mode_translate(d) && \
>                                     is_hardware_domain(d)))

I'll add one thing. While I think it is clear that XENFEAT_direct_mapped
should remain as is and that is_domain_direct_mapped() should always be
False for PV guests, given that domain_use_host_layout() is not
currently used on x86 it is debatable how it should be implemented.

For PVH/HVM, domain_use_host_layout() can easily be aligned with ARM.

For PV DomUs, it will return False and there is no issue.

For PV Dom0, I would argue it should return False because the concept
of "host memory layout" is about the addresses of virtual platform
devices (interrupt controller, UART, etc.) in the guest physical
address space. PV guests don't have virtual devices mapped at specific
guest physical addresses; they use hypercalls instead. But I can see it
could be argued either way once EFI/ACPI tables are taken into
consideration. For now, domain_use_host_layout() is unused on x86, so
it doesn't make a difference.