When running Xen with a PVH dom0 and an HVM domU, a pirq is mapped
for the domU's passthrough device by using the GSI; see
xen_pt_realize->xc_physdev_map_pirq and
pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then calls
into Xen, but in hvm_physdev_op() PHYSDEVOP_map_pirq is not allowed,
because currd is the PVH dom0 and PVH has no X86_EMU_USE_PIRQ flag,
so the call fails the has_pirq() check.

So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
PHYSDEVOP_unmap_pirq so that the failure path can unmap the pirq.
Also add a new check to prevent a self-map when the caller has no
X86_EMU_USE_PIRQ flag.
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/hypercall.c |  2 ++
 xen/arch/x86/physdev.c       | 24 ++++++++++++++++++++++++
 2 files changed, 26 insertions(+)
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 56fbb69ab201..d49fb8b548a3 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -74,6 +74,8 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     {
     case PHYSDEVOP_map_pirq:
     case PHYSDEVOP_unmap_pirq:
+        break;
+
     case PHYSDEVOP_eoi:
     case PHYSDEVOP_irq_status_query:
     case PHYSDEVOP_get_free_pirq:
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 7efa17cf4c1e..1337f95171cd 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -305,11 +305,23 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_map_pirq: {
         physdev_map_pirq_t map;
         struct msi_info msi;
+        struct domain *d;
 
         ret = -EFAULT;
         if ( copy_from_guest(&map, arg, 1) != 0 )
             break;
 
+        d = rcu_lock_domain_by_any_id(map.domid);
+        if ( d == NULL )
+            return -ESRCH;
+        /* If caller is the same HVM guest as current, check pirq flag */
+        if ( !is_pv_domain(d) && !has_pirq(d) && map.domid == DOMID_SELF )
+        {
+            rcu_unlock_domain(d);
+            return -EOPNOTSUPP;
+        }
+        rcu_unlock_domain(d);
+
         switch ( map.type )
         {
         case MAP_PIRQ_TYPE_MSI_SEG:
@@ -343,11 +355,23 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_unmap_pirq: {
         struct physdev_unmap_pirq unmap;
+        struct domain *d;
 
         ret = -EFAULT;
         if ( copy_from_guest(&unmap, arg, 1) != 0 )
             break;
 
+        d = rcu_lock_domain_by_any_id(unmap.domid);
+        if ( d == NULL )
+            return -ESRCH;
+        /* If caller is the same HVM guest as current, check pirq flag */
+        if ( !is_pv_domain(d) && !has_pirq(d) && unmap.domid == DOMID_SELF )
+        {
+            rcu_unlock_domain(d);
+            return -EOPNOTSUPP;
+        }
+        rcu_unlock_domain(d);
+
         ret = physdev_unmap_pirq(unmap.domid, unmap.pirq);
         break;
     }
--
2.34.1
On 16.05.2024 11:52, Jiqian Chen wrote:
> When running Xen with a PVH dom0 and an HVM domU, a pirq is mapped
> for the domU's passthrough device by using the GSI; see
> xen_pt_realize->xc_physdev_map_pirq and
> pci_add_dm_done->xc_physdev_map_pirq.
xen_pt_realize() is in qemu, which imo wants saying here (for being a
different repo), all the more so since pci_add_dm_done() is in libxl.
> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -74,6 +74,8 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>      {
>      case PHYSDEVOP_map_pirq:
>      case PHYSDEVOP_unmap_pirq:
> +        break;
I think this could do with a comment as to why it's permitted as well as giving
a reference to where further restrictions are enforced (or simply mentioning
the constraint of this only being permitted for management of other domains).
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -305,11 +305,23 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>      case PHYSDEVOP_map_pirq: {
>          physdev_map_pirq_t map;
>          struct msi_info msi;
> +        struct domain *d;
>
>          ret = -EFAULT;
>          if ( copy_from_guest(&map, arg, 1) != 0 )
>              break;
>
> +        d = rcu_lock_domain_by_any_id(map.domid);
> +        if ( d == NULL )
> +            return -ESRCH;
> +        /* If caller is the same HVM guest as current, check pirq flag */
The caller is always current. What I think you mean is "caller is same as
the subject domain". I'm also having trouble with seeing the usefulness
of saying "check pirq flag". Instead I think you want to state the
restriction here that you actually mean to enforce (which would also mean
mentioning PVH in some way, to distinguish from the "normal HVM" case).
> +        if ( !is_pv_domain(d) && !has_pirq(d) && map.domid == DOMID_SELF )
You exclude DOMID_SELF but not the domain's ID? Why not simply check d
being current->domain, thus covering both cases? Plus you could use
rcu_lock_domain_by_id() to exclude DOMID_SELF, and you could use
rcu_lock_remote_domain_by_id() to exclude the local domain altogether.
Finally I'm not even sure you need the RCU lock here (else you could
use knownalive_domain_from_domid()). But perhaps that's better to cover
the qemu-in-stubdom case, which we have to consider potentially malicious.
I'm also inclined to suggest to use is_hvm_domain() here in favor of
!is_pv_domain().
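For illustration, the condition might then reduce to something like the
following sketch (using the d == current->domain form mentioned above;
not a concrete patch):

    /* Reject a self-map attempt by a domain lacking emulated PIRQs. */
    if ( is_hvm_domain(d) && !has_pirq(d) && d == current->domain )

which covers both the DOMID_SELF and the explicit-own-domid cases.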
Jan
On 2024/5/16 21:29, Jan Beulich wrote:
> On 16.05.2024 11:52, Jiqian Chen wrote:
>> When running Xen with a PVH dom0 and an HVM domU, a pirq is mapped
>> for the domU's passthrough device by using the GSI; see
>> xen_pt_realize->xc_physdev_map_pirq and
>> pci_add_dm_done->xc_physdev_map_pirq.
>
> xen_pt_realize() is in qemu, which imo wants saying here (for being a
> different repo), all the more so since pci_add_dm_done() is in libxl.
OK, I will describe more here (in qemu and in libxl).
>
>> --- a/xen/arch/x86/hvm/hypercall.c
>> +++ b/xen/arch/x86/hvm/hypercall.c
>> @@ -74,6 +74,8 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>      {
>>      case PHYSDEVOP_map_pirq:
>>      case PHYSDEVOP_unmap_pirq:
>> +        break;
>
> I think this could do with a comment as to why it's permitted as well as giving
> a reference to where further restrictions are enforced (or simply mentioning
> the constraint of this only being permitted for management of other domains).
Thanks, will add:
    /*
     * Only permitted for management of other domains.
     * Further restrictions are enforced in do_physdev_op().
     */
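The permitted-cases block in hvm_physdev_op() would then read roughly as
follows (a sketch of where the comment lands; the final wording may
differ):

    case PHYSDEVOP_map_pirq:
    case PHYSDEVOP_unmap_pirq:
        /*
         * Only permitted for management of other domains.
         * Further restrictions are enforced in do_physdev_op().
         */
        break;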
>
>> --- a/xen/arch/x86/physdev.c
>> +++ b/xen/arch/x86/physdev.c
>> @@ -305,11 +305,23 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>      case PHYSDEVOP_map_pirq: {
>>          physdev_map_pirq_t map;
>>          struct msi_info msi;
>> +        struct domain *d;
>>
>>          ret = -EFAULT;
>>          if ( copy_from_guest(&map, arg, 1) != 0 )
>>              break;
>>
>> +        d = rcu_lock_domain_by_any_id(map.domid);
>> +        if ( d == NULL )
>> +            return -ESRCH;
>> +        /* If caller is the same HVM guest as current, check pirq flag */
>
> The caller is always current. What I think you mean is "caller is same as
> the subject domain".
Yes, I want to prevent a self-map when the subject domain (domU) doesn't have the X86_EMU_USE_PIRQ flag.
> I'm also having trouble with seeing the usefulness of saying "check
> pirq flag". Instead I think you want to state the restriction here
> that you actually mean to enforce (which would also mean mentioning
> PVH in some way, to distinguish from the "normal HVM" case).
Yes, PVH and HVM domains without the X86_EMU_USE_PIRQ flag.
If an HVM domain has the X86_EMU_USE_PIRQ flag, map_pirq should be permitted.
I will change this comment to:
    /* Prevent self-map when domain has no X86_EMU_USE_PIRQ flag */
>
>> +        if ( !is_pv_domain(d) && !has_pirq(d) && map.domid == DOMID_SELF )
>
> You exclude DOMID_SELF but not the domain's ID? Why not simply check d
> being current->domain, thus covering both cases?
> Plus you could use rcu_lock_domain_by_id() to exclude DOMID_SELF, and you could use
> rcu_lock_remote_domain_by_id() to exclude the local domain altogether.
But there is a case where an HVM domain that holds the PIRQ flag does this
pirq map with DOMID_SELF; see physdev_map_pirq().
I think changing to check d being current->domain is more suitable.
> Finally I'm not even sure you need the RCU lock here (else you could
> use knownalive_domain_from_domid()). But perhaps that's better to cover
> the qemu-in-stubdom case, which we have to consider potentially malicious.
Yes, for potential safety reasons, let's keep the RCU lock.
>
> I'm also inclined to suggest to use is_hvm_domain() here in favor of
> !is_pv_domain().
OK, will change to is_hvm_domain() in the next version.
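
Folding the agreed changes together (is_hvm_domain(), the d ==
current->domain check, and the reworded comment), the map side of the
v2 check might look roughly like this sketch (the unmap side would
mirror it; the actual v2 patch may differ):

        d = rcu_lock_domain_by_any_id(map.domid);
        if ( d == NULL )
            return -ESRCH;

        /* Prevent self-map when domain has no X86_EMU_USE_PIRQ flag */
        if ( is_hvm_domain(d) && !has_pirq(d) && d == current->domain )
        {
            rcu_unlock_domain(d);
            return -EOPNOTSUPP;
        }
        rcu_unlock_domain(d);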
>
> Jan
--
Best regards,
Jiqian Chen.