[Xen-devel] [RFC PATCH 00/16] xenhost support

Ankur Arora posted 16 patches 10 weeks ago
Failed in applying to current master (apply log)
arch/x86/include/asm/xen/hypercall.h       | 239 +++++---
arch/x86/include/asm/xen/hypervisor.h      |   3 +-
arch/x86/pci/xen.c                         |  18 +-
arch/x86/xen/Makefile                      |   3 +-
arch/x86/xen/enlighten.c                   | 101 ++--
arch/x86/xen/enlighten_hvm.c               | 185 ++++--
arch/x86/xen/enlighten_pv.c                | 144 ++++-
arch/x86/xen/enlighten_pvh.c               |  25 +-
arch/x86/xen/grant-table.c                 |  71 ++-
arch/x86/xen/irq.c                         |  75 ++-
arch/x86/xen/mmu_pv.c                      |   6 +-
arch/x86/xen/p2m.c                         |  24 +-
arch/x86/xen/pci-swiotlb-xen.c             |   1 +
arch/x86/xen/setup.c                       |   1 +
arch/x86/xen/smp.c                         |  25 +-
arch/x86/xen/smp_hvm.c                     |  17 +-
arch/x86/xen/smp_pv.c                      |  27 +-
arch/x86/xen/suspend_hvm.c                 |   6 +-
arch/x86/xen/suspend_pv.c                  |  14 +-
arch/x86/xen/time.c                        |  32 +-
arch/x86/xen/xen-asm_32.S                  |   2 +-
arch/x86/xen/xen-asm_64.S                  |   2 +-
arch/x86/xen/xen-head.S                    |  11 +-
arch/x86/xen/xen-ops.h                     |   8 +-
arch/x86/xen/xenhost.c                     | 102 ++++
drivers/block/xen-blkback/blkback.c        |  56 +-
drivers/block/xen-blkback/common.h         |   2 +-
drivers/block/xen-blkback/xenbus.c         |  65 +--
drivers/block/xen-blkfront.c               | 105 ++--
drivers/input/misc/xen-kbdfront.c          |   2 +-
drivers/net/xen-netback/hash.c             |   7 +-
drivers/net/xen-netback/interface.c        |  15 +-
drivers/net/xen-netback/netback.c          |  11 +-
drivers/net/xen-netback/rx.c               |   3 +-
drivers/net/xen-netback/xenbus.c           |  81 +--
drivers/net/xen-netfront.c                 | 122 ++--
drivers/pci/xen-pcifront.c                 |   6 +-
drivers/tty/hvc/hvc_xen.c                  |   2 +-
drivers/xen/acpi.c                         |   2 +
drivers/xen/balloon.c                      |  21 +-
drivers/xen/cpu_hotplug.c                  |  16 +-
drivers/xen/events/Makefile                |   1 -
drivers/xen/events/events_2l.c             | 198 +++----
drivers/xen/events/events_base.c           | 381 +++++++------
drivers/xen/events/events_fifo.c           |   4 +-
drivers/xen/events/events_internal.h       |  78 +--
drivers/xen/evtchn.c                       |  24 +-
drivers/xen/fallback.c                     |   9 +-
drivers/xen/features.c                     |  33 +-
drivers/xen/gntalloc.c                     |  21 +-
drivers/xen/gntdev.c                       |  26 +-
drivers/xen/grant-table.c                  | 632 ++++++++++++---------
drivers/xen/manage.c                       |  37 +-
drivers/xen/mcelog.c                       |   2 +-
drivers/xen/pcpu.c                         |   2 +-
drivers/xen/platform-pci.c                 |  12 +-
drivers/xen/preempt.c                      |   1 +
drivers/xen/privcmd.c                      |   5 +-
drivers/xen/sys-hypervisor.c               |  14 +-
drivers/xen/time.c                         |   4 +-
drivers/xen/xen-balloon.c                  |  16 +-
drivers/xen/xen-pciback/xenbus.c           |   2 +-
drivers/xen/xen-scsiback.c                 |   5 +-
drivers/xen/xen-selfballoon.c              |   2 +
drivers/xen/xenbus/xenbus.h                |  45 +-
drivers/xen/xenbus/xenbus_client.c         |  40 +-
drivers/xen/xenbus/xenbus_comms.c          | 121 ++--
drivers/xen/xenbus/xenbus_dev_backend.c    |  30 +-
drivers/xen/xenbus/xenbus_dev_frontend.c   |  22 +-
drivers/xen/xenbus/xenbus_probe.c          | 247 +++++---
drivers/xen/xenbus/xenbus_probe_backend.c  |  20 +-
drivers/xen/xenbus/xenbus_probe_frontend.c |  66 ++-
drivers/xen/xenbus/xenbus_xs.c             | 192 ++++---
drivers/xen/xenfs/xenstored.c              |   7 +-
drivers/xen/xlate_mmu.c                    |   4 +-
include/xen/balloon.h                      |   4 +-
include/xen/events.h                       |  45 +-
include/xen/features.h                     |  17 +-
include/xen/grant_table.h                  |  83 +--
include/xen/xen-ops.h                      |  10 +-
include/xen/xen.h                          |   3 +
include/xen/xenbus.h                       |  54 +-
include/xen/xenhost.h                      | 302 ++++++++++
83 files changed, 2826 insertions(+), 1653 deletions(-)
create mode 100644 arch/x86/xen/xenhost.c
create mode 100644 include/xen/xenhost.h

[Xen-devel] [RFC PATCH 00/16] xenhost support

Posted by Ankur Arora 10 weeks ago
Hi all,

This is an RFC for xenhost support, as outlined by Juergen here:
https://lkml.org/lkml/2019/4/8/67.

The high level idea is to provide an abstraction of the Xen
communication interface, as a xenhost_t.

xenhost_t exposes ops for communication between the guest and Xen
(hypercall, cpuid, shared_info/vcpu_info, evtchn, grant table and, on
top of those, xenbus and ballooning), and these can differ based on the
kind of underlying Xen: regular, local, or nested.

(Since this abstraction is largely about guest -- xenhost communication,
no ops are needed for timer, clock, scheduling, memory (MMU, P2M), VCPU
management, etc.)
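The ops-table shape described above can be sketched roughly as follows. This is a hypothetical illustration only: all type, field, and function names here are invented for the sketch (the series defines the real interface in include/xen/xenhost.h).

```c
/*
 * Hypothetical sketch of the xenhost_t ops abstraction described above.
 * All names are illustrative; the real interface lives in
 * include/xen/xenhost.h in the posted series.
 */
#include <stdint.h>
#include <stddef.h>

typedef struct xenhost xenhost_t;

/* Per-xenhost communication ops; each kind of Xen fills these in. */
struct xenhost_ops {
	long (*hypercall)(xenhost_t *xh, unsigned int op, void *arg);
	void (*setup_shared_info)(xenhost_t *xh);
	int  (*grant_map)(xenhost_t *xh, uint32_t gref, void **vaddr);
	int  (*evtchn_send)(xenhost_t *xh, uint32_t port);
};

enum xenhost_type {
	XENHOST_REGULAR,	/* Lx-guest to Lx-Xen */
	XENHOST_LOCAL,		/* Xen-like interface in the guest's address space */
	XENHOST_NESTED,		/* L1-guest to L0-Xen */
};

struct xenhost {
	enum xenhost_type type;
	const struct xenhost_ops *ops;
};

/* Callers dispatch through a xenhost instead of a single global Xen. */
static inline long xh_hypercall(xenhost_t *xh, unsigned int op, void *arg)
{
	return xh->ops->hypercall(xh, op, arg);
}
```

The point of the indirection is that the same caller code works whether the underlying Xen is regular, local, or nested; only the ops table differs.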

Xenhost use-cases:

Regular-Xen: the standard Xen interface presented to a guest,
specifically for communication between Lx-guest and Lx-Xen.

Local-Xen: a Xen-like interface which runs in the same address space as
the guest (dom0). This can act as the default xenhost.

The major ways it differs from a regular Xen interface are in presenting
a different hypercall interface (a function call instead of a
syscall/vmcall), and in an inability to do grant-mappings: since
local-Xen exists in the same address space as the guest, there's no way
for it to cheaply change the physical page that a GFN maps to (assuming
no P2M tables.)

Nested-Xen: this channel is to Xen one level removed: from L1-guest to
L0-Xen. The use case is that we want L0-dom0 backends to talk to
L1-dom0 frontend drivers, which can then present PV devices that can in
turn be used by the L1-dom0 backend drivers as raw underlying devices.
The interfaces themselves broadly remain similar.

Note: L0-Xen and L1-Xen represent Xen running at that nesting level,
and L0-guest and L1-guest represent guests that are children of Xen
at that nesting level. Lx represents any level.

Patches 1-7,
  "x86/xen: add xenhost_t interface"
  "x86/xen: cpuid support in xenhost_t"
  "x86/xen: make hypercall_page generic"
  "x86/xen: hypercall support for xenhost_t"
  "x86/xen: add feature support in xenhost_t"
  "x86/xen: add shared_info support to xenhost_t"
  "x86/xen: make vcpu_info part of xenhost_t"
abstract out interfaces that setup hypercalls/cpuid/shared_info/vcpu_info etc.

Patch 8, "x86/xen: irq/upcall handling with multiple xenhosts"
sets up the upcall and pv_irq ops based on vcpu_info.

Patch 9, "xen/evtchn: support evtchn in xenhost_t" adds xenhost based
evtchn support for evtchn_2l.

Patches 10 and 16, "xen/balloon: support ballooning in xenhost_t" and
"xen/grant-table: host_addr fixup in mapping on xenhost_r0",
implement support for GNTTABOP_map_grant_ref for xenhosts of type
xenhost_r0 (local xenhost.)

Patch 12, "xen/xenbus: support xenbus frontend/backend with xenhost_t",
reworks xenbus so that its frontend and backend can be bootstrapped
separately via separate xenhosts.

Remaining patches, 11, 13, 14, 15:
  "xen/grant-table: make grant-table xenhost aware"
  "drivers/xen: gnttab, evtchn, xenbus API changes"
  "xen/blk: gnttab, evtchn, xenbus API changes"
  "xen/net: gnttab, evtchn, xenbus API changes"
are mostly mechanical changes for APIs that now take xenhost_t *
as parameter.
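The shape of those mechanical changes can be sketched as follows. This is a toy illustration only, not code from the series, and the function names are hypothetical; the point is just that state which used to be global becomes per-xenhost once the parameter is threaded through.

```c
/*
 * Toy illustration of the mechanical API change: interfaces that used
 * to assume a single implicit Xen now take an explicit xenhost_t *, so
 * state (here a grant-reference counter) becomes per-xenhost. All
 * names are hypothetical.
 */
#include <stdint.h>

typedef struct xenhost {
	uint32_t next_gref;	/* per-xenhost grant-table state */
} xenhost_t;

/* Old style: one implicit Xen interface, global state. */
static uint32_t global_next_gref;
static uint32_t gnttab_alloc_gref_old(int domid)
{
	(void)domid;
	return global_next_gref++;
}

/*
 * New style: the caller names the xenhost, so e.g. frontend and
 * backend drivers can each be bound to a different underlying Xen.
 */
static uint32_t gnttab_alloc_gref(xenhost_t *xh, int domid)
{
	(void)domid;
	return xh->next_gref++;
}
```

With two xenhost instances, each call operates only on the state of the xenhost it was handed, which is what lets frontend and backend sides of a driver talk to different Xens.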

The code itself is RFC quality and is mostly meant to get feedback before
proceeding further. Also note that the FIFO logic and some Xen drivers
(input, pciback, scsi, etc.) are mostly unchanged, and so will not build.


Please take a look.

Thanks
Ankur


Ankur Arora (16):

  x86/xen: add xenhost_t interface
  x86/xen: cpuid support in xenhost_t
  x86/xen: make hypercall_page generic
  x86/xen: hypercall support for xenhost_t
  x86/xen: add feature support in xenhost_t
  x86/xen: add shared_info support to xenhost_t
  x86/xen: make vcpu_info part of xenhost_t
  x86/xen: irq/upcall handling with multiple xenhosts
  xen/evtchn: support evtchn in xenhost_t
  xen/balloon: support ballooning in xenhost_t
  xen/grant-table: make grant-table xenhost aware
  xen/xenbus: support xenbus frontend/backend with xenhost_t
  drivers/xen: gnttab, evtchn, xenbus API changes
  xen/blk: gnttab, evtchn, xenbus API changes
  xen/net: gnttab, evtchn, xenbus API changes
  xen/grant-table: host_addr fixup in mapping on xenhost_r0

-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [RFC PATCH 00/16] xenhost support

Posted by Juergen Gross 6 weeks ago
On 09.05.19 19:25, Ankur Arora wrote:
> Hi all,
> 
> This is an RFC for xenhost support, outlined here by Juergen here:
> https://lkml.org/lkml/2019/4/8/67.

First: thanks for all the effort you've put into this series!

> The high level idea is to provide an abstraction of the Xen
> communication interface, as a xenhost_t.
> 
> xenhost_t expose ops for communication between the guest and Xen
> (hypercall, cpuid, shared_info/vcpu_info, evtchn, grant-table and on top
> of those, xenbus, ballooning), and these can differ based on the kind
> of underlying Xen: regular, local, and nested.

I'm not sure we need to abstract away hypercalls and cpuid. I believe in
case of nested Xen all contacts to the L0 hypervisor should be done via
the L1 hypervisor. So we might need to issue some kind of passthrough
hypercall when e.g. granting a page to L0 dom0, but this should be
handled via the grant abstraction (events should be similar).

So IMO we should drop patches 2-5.

> (Since this abstraction is largely about guest -- xenhost communication,
> no ops are needed for timer, clock, sched, memory (MMU, P2M), VCPU mgmt.
> etc.)
> 
> Xenhost use-cases:
> 
> Regular-Xen: the standard Xen interface presented to a guest,
> specifically for comunication between Lx-guest and Lx-Xen.
> 
> Local-Xen: a Xen like interface which runs in the same address space as
> the guest (dom0). This, can act as the default xenhost.
> 
> The major ways it differs from a regular Xen interface is in presenting
> a different hypercall interface (call instead of a syscall/vmcall), and
> in an inability to do grant-mappings: since local-Xen exists in the same
> address space as Xen, there's no way for it to cheaply change the
> physical page that a GFN maps to (assuming no P2M tables.)
> 
> Nested-Xen: this channel is to Xen, one level removed: from L1-guest to
> L0-Xen. The use case is that we want L0-dom0-backends to talk to
> L1-dom0-frontend drivers which can then present PV devices which can
> in-turn be used by the L1-dom0-backend drivers as raw underlying devices.
> The interfaces themselves, broadly remain similar.
> 
> Note: L0-Xen, L1-Xen represent Xen running at that nesting level
> and L0-guest, L1-guest represent guests that are children of Xen
> at that nesting level. Lx, represents any level.
> 
> Patches 1-7,
>    "x86/xen: add xenhost_t interface"
>    "x86/xen: cpuid support in xenhost_t"
>    "x86/xen: make hypercall_page generic"
>    "x86/xen: hypercall support for xenhost_t"
>    "x86/xen: add feature support in xenhost_t"
>    "x86/xen: add shared_info support to xenhost_t"
>    "x86/xen: make vcpu_info part of xenhost_t"
> abstract out interfaces that setup hypercalls/cpuid/shared_info/vcpu_info etc.
> 
> Patch 8, "x86/xen: irq/upcall handling with multiple xenhosts"
> sets up the upcall and pv_irq ops based on vcpu_info.
> 
> Patch 9, "xen/evtchn: support evtchn in xenhost_t" adds xenhost based
> evtchn support for evtchn_2l.
> 
> Patches 10 and 16, "xen/balloon: support ballooning in xenhost_t" and
> "xen/grant-table: host_addr fixup in mapping on xenhost_r0"
> implement support from GNTTABOP_map_grant_ref for xenhosts of type
> xenhost_r0 (xenhost local.)
> 
> Patch 12, "xen/xenbus: support xenbus frontend/backend with xenhost_t"
> makes xenbus so that both its frontend and backend can be bootstrapped
> separately via separate xenhosts.
> 
> Remaining patches, 11, 13, 14, 15:
>    "xen/grant-table: make grant-table xenhost aware"
>    "drivers/xen: gnttab, evtchn, xenbus API changes"
>    "xen/blk: gnttab, evtchn, xenbus API changes"
>    "xen/net: gnttab, evtchn, xenbus API changes"
> are mostly mechanical changes for APIs that now take xenhost_t *
> as parameter.
> 
> The code itself is RFC quality, and is mostly meant to get feedback before
> proceeding further. Also note that the FIFO logic and some Xen drivers
> (input, pciback, scsi etc) are mostly unchanged, so will not build.
> 
> 
> Please take a look.


Juergen


Re: [Xen-devel] [RFC PATCH 00/16] xenhost support

Posted by Ankur Arora 6 weeks ago
On 2019-06-07 7:51 a.m., Juergen Gross wrote:
> On 09.05.19 19:25, Ankur Arora wrote:
>> Hi all,
>>
>> This is an RFC for xenhost support, outlined here by Juergen here:
>> https://lkml.org/lkml/2019/4/8/67.
> 
> First: thanks for all the effort you've put into this series!
> 
>> The high level idea is to provide an abstraction of the Xen
>> communication interface, as a xenhost_t.
>>
>> xenhost_t expose ops for communication between the guest and Xen
>> (hypercall, cpuid, shared_info/vcpu_info, evtchn, grant-table and on top
>> of those, xenbus, ballooning), and these can differ based on the kind
>> of underlying Xen: regular, local, and nested.
> 
> I'm not sure we need to abstract away hypercalls and cpuid. I believe in
> case of nested Xen all contacts to the L0 hypervisor should be done via
> the L1 hypervisor. So we might need to issue some kind of passthrough
Yes, that does make sense. This also allows the L1 hypervisor to
control which hypercalls can be nested.
As for cpuid, what about nested feature discovery such as in
gnttab_need_v2()?
(Though for this particular case, the hypercall should be fine.)

> hypercall when e.g. granting a page to L0 dom0, but this should be
> handled via the grant abstraction (events should be similar).
> 
> So IMO we should drop patches 2-5.
For 3-5, I'd like to prune them to provide a limited hypercall
registration ability -- this is meant to be used for the
xenhost_r0/xenhost_local case.
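(For illustration, a minimal sketch of what such a limited registration hook could look like for the local case -- all names here are hypothetical, not the posted code:)

```c
/*
 * Hypothetical sketch of a limited hypercall-registration hook for the
 * xenhost_r0 (local) case: "Xen" lives in the guest's address space,
 * so a hypercall is a plain indirect call rather than a syscall/vmcall.
 * All names are illustrative.
 */
#include <errno.h>
#include <stddef.h>

typedef long (*xen_hypercall_fn)(unsigned int op, void *arg);

static xen_hypercall_fn xenhost_r0_hypercall_fn;

/* Called once by the local-Xen implementation during bring-up. */
static int xenhost_r0_register_hypercall(xen_hypercall_fn fn)
{
	if (!fn)
		return -EINVAL;
	if (xenhost_r0_hypercall_fn)
		return -EBUSY;
	xenhost_r0_hypercall_fn = fn;
	return 0;
}

/* The xenhost_r0 hypercall op dispatches via a direct function call. */
static long xenhost_r0_hypercall(unsigned int op, void *arg)
{
	if (!xenhost_r0_hypercall_fn)
		return -ENOSYS;
	return xenhost_r0_hypercall_fn(op, arg);
}
```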

Ankur

> 
>> (Since this abstraction is largely about guest -- xenhost communication,
>> no ops are needed for timer, clock, sched, memory (MMU, P2M), VCPU mgmt.
>> etc.)
>>
>> Xenhost use-cases:
>>
>> Regular-Xen: the standard Xen interface presented to a guest,
>> specifically for comunication between Lx-guest and Lx-Xen.
>>
>> Local-Xen: a Xen like interface which runs in the same address space as
>> the guest (dom0). This, can act as the default xenhost.
>>
>> The major ways it differs from a regular Xen interface is in presenting
>> a different hypercall interface (call instead of a syscall/vmcall), and
>> in an inability to do grant-mappings: since local-Xen exists in the same
>> address space as Xen, there's no way for it to cheaply change the
>> physical page that a GFN maps to (assuming no P2M tables.)
>>
>> Nested-Xen: this channel is to Xen, one level removed: from L1-guest to
>> L0-Xen. The use case is that we want L0-dom0-backends to talk to
>> L1-dom0-frontend drivers which can then present PV devices which can
>> in-turn be used by the L1-dom0-backend drivers as raw underlying devices.
>> The interfaces themselves, broadly remain similar.
>>
>> Note: L0-Xen, L1-Xen represent Xen running at that nesting level
>> and L0-guest, L1-guest represent guests that are children of Xen
>> at that nesting level. Lx, represents any level.
>>
>> Patches 1-7,
>>    "x86/xen: add xenhost_t interface"
>>    "x86/xen: cpuid support in xenhost_t"
>>    "x86/xen: make hypercall_page generic"
>>    "x86/xen: hypercall support for xenhost_t"
>>    "x86/xen: add feature support in xenhost_t"
>>    "x86/xen: add shared_info support to xenhost_t"
>>    "x86/xen: make vcpu_info part of xenhost_t"
>> abstract out interfaces that setup 
>> hypercalls/cpuid/shared_info/vcpu_info etc.
>>
>> Patch 8, "x86/xen: irq/upcall handling with multiple xenhosts"
>> sets up the upcall and pv_irq ops based on vcpu_info.
>>
>> Patch 9, "xen/evtchn: support evtchn in xenhost_t" adds xenhost based
>> evtchn support for evtchn_2l.
>>
>> Patches 10 and 16, "xen/balloon: support ballooning in xenhost_t" and
>> "xen/grant-table: host_addr fixup in mapping on xenhost_r0"
>> implement support from GNTTABOP_map_grant_ref for xenhosts of type
>> xenhost_r0 (xenhost local.)
>>
>> Patch 12, "xen/xenbus: support xenbus frontend/backend with xenhost_t"
>> makes xenbus so that both its frontend and backend can be bootstrapped
>> separately via separate xenhosts.
>>
>> Remaining patches, 11, 13, 14, 15:
>>    "xen/grant-table: make grant-table xenhost aware"
>>    "drivers/xen: gnttab, evtchn, xenbus API changes"
>>    "xen/blk: gnttab, evtchn, xenbus API changes"
>>    "xen/net: gnttab, evtchn, xenbus API changes"
>> are mostly mechanical changes for APIs that now take xenhost_t *
>> as parameter.
>>
>> The code itself is RFC quality, and is mostly meant to get feedback 
>> before
>> proceeding further. Also note that the FIFO logic and some Xen drivers
>> (input, pciback, scsi etc) are mostly unchanged, so will not build.
>>
>>
>> Please take a look.
> 
> 
> Juergen



Re: [Xen-devel] [RFC PATCH 00/16] xenhost support

Posted by Joao Martins 6 weeks ago
On 6/7/19 3:51 PM, Juergen Gross wrote:
> On 09.05.19 19:25, Ankur Arora wrote:
>> Hi all,
>>
>> This is an RFC for xenhost support, outlined here by Juergen here:
>> https://lkml.org/lkml/2019/4/8/67.
> 
> First: thanks for all the effort you've put into this series!
> 
>> The high level idea is to provide an abstraction of the Xen
>> communication interface, as a xenhost_t.
>>
>> xenhost_t expose ops for communication between the guest and Xen
>> (hypercall, cpuid, shared_info/vcpu_info, evtchn, grant-table and on top
>> of those, xenbus, ballooning), and these can differ based on the kind
>> of underlying Xen: regular, local, and nested.
> 
> I'm not sure we need to abstract away hypercalls and cpuid. I believe in
> case of nested Xen all contacts to the L0 hypervisor should be done via
> the L1 hypervisor. So we might need to issue some kind of passthrough
> hypercall when e.g. granting a page to L0 dom0, but this should be
> handled via the grant abstraction (events should be similar).
> 
Just to be clear: by "kind of passthrough hypercall" do you mean that
(e.g. for every access/modify of grant-table frames) you would proxy the
hypercall to L0-Xen via L1-Xen?

> So IMO we should drop patches 2-5.
> 
>> (Since this abstraction is largely about guest -- xenhost communication,
>> no ops are needed for timer, clock, sched, memory (MMU, P2M), VCPU mgmt.
>> etc.)
>>
>> Xenhost use-cases:
>>
>> Regular-Xen: the standard Xen interface presented to a guest,
>> specifically for comunication between Lx-guest and Lx-Xen.
>>
>> Local-Xen: a Xen like interface which runs in the same address space as
>> the guest (dom0). This, can act as the default xenhost.
>>
>> The major ways it differs from a regular Xen interface is in presenting
>> a different hypercall interface (call instead of a syscall/vmcall), and
>> in an inability to do grant-mappings: since local-Xen exists in the same
>> address space as Xen, there's no way for it to cheaply change the
>> physical page that a GFN maps to (assuming no P2M tables.)
>>
>> Nested-Xen: this channel is to Xen, one level removed: from L1-guest to
>> L0-Xen. The use case is that we want L0-dom0-backends to talk to
>> L1-dom0-frontend drivers which can then present PV devices which can
>> in-turn be used by the L1-dom0-backend drivers as raw underlying devices.
>> The interfaces themselves, broadly remain similar.
>>
>> Note: L0-Xen, L1-Xen represent Xen running at that nesting level
>> and L0-guest, L1-guest represent guests that are children of Xen
>> at that nesting level. Lx, represents any level.
>>
>> Patches 1-7,
>>    "x86/xen: add xenhost_t interface"
>>    "x86/xen: cpuid support in xenhost_t"
>>    "x86/xen: make hypercall_page generic"
>>    "x86/xen: hypercall support for xenhost_t"
>>    "x86/xen: add feature support in xenhost_t"
>>    "x86/xen: add shared_info support to xenhost_t"
>>    "x86/xen: make vcpu_info part of xenhost_t"
>> abstract out interfaces that setup hypercalls/cpuid/shared_info/vcpu_info etc.
>>
>> Patch 8, "x86/xen: irq/upcall handling with multiple xenhosts"
>> sets up the upcall and pv_irq ops based on vcpu_info.
>>
>> Patch 9, "xen/evtchn: support evtchn in xenhost_t" adds xenhost based
>> evtchn support for evtchn_2l.
>>
>> Patches 10 and 16, "xen/balloon: support ballooning in xenhost_t" and
>> "xen/grant-table: host_addr fixup in mapping on xenhost_r0"
>> implement support from GNTTABOP_map_grant_ref for xenhosts of type
>> xenhost_r0 (xenhost local.)
>>
>> Patch 12, "xen/xenbus: support xenbus frontend/backend with xenhost_t"
>> makes xenbus so that both its frontend and backend can be bootstrapped
>> separately via separate xenhosts.
>>
>> Remaining patches, 11, 13, 14, 15:
>>    "xen/grant-table: make grant-table xenhost aware"
>>    "drivers/xen: gnttab, evtchn, xenbus API changes"
>>    "xen/blk: gnttab, evtchn, xenbus API changes"
>>    "xen/net: gnttab, evtchn, xenbus API changes"
>> are mostly mechanical changes for APIs that now take xenhost_t *
>> as parameter.
>>
>> The code itself is RFC quality, and is mostly meant to get feedback before
>> proceeding further. Also note that the FIFO logic and some Xen drivers
>> (input, pciback, scsi etc) are mostly unchanged, so will not build.
>>
>>
>> Please take a look.
> 
> 
> Juergen
> 


Re: [Xen-devel] [RFC PATCH 00/16] xenhost support

Posted by Juergen Gross 6 weeks ago
On 07.06.19 17:22, Joao Martins wrote:
> On 6/7/19 3:51 PM, Juergen Gross wrote:
>> On 09.05.19 19:25, Ankur Arora wrote:
>>> Hi all,
>>>
>>> This is an RFC for xenhost support, outlined here by Juergen here:
>>> https://lkml.org/lkml/2019/4/8/67.
>>
>> First: thanks for all the effort you've put into this series!
>>
>>> The high level idea is to provide an abstraction of the Xen
>>> communication interface, as a xenhost_t.
>>>
>>> xenhost_t expose ops for communication between the guest and Xen
>>> (hypercall, cpuid, shared_info/vcpu_info, evtchn, grant-table and on top
>>> of those, xenbus, ballooning), and these can differ based on the kind
>>> of underlying Xen: regular, local, and nested.
>>
>> I'm not sure we need to abstract away hypercalls and cpuid. I believe in
>> case of nested Xen all contacts to the L0 hypervisor should be done via
>> the L1 hypervisor. So we might need to issue some kind of passthrough
>> hypercall when e.g. granting a page to L0 dom0, but this should be
>> handled via the grant abstraction (events should be similar).
>>
> Just to be clear: By "kind of passthrough hypercall" you mean (e.g. for every
> access/modify of grant table frames) you would proxy hypercall to L0 Xen via L1 Xen?

It might be possible to spare some hypercalls by directly writing to
grant frames mapped into L1 dom0, but in general you are right.


Juergen


Re: [Xen-devel] [RFC PATCH 00/16] xenhost support

Posted by Ankur Arora 6 weeks ago
On 2019-06-07 9:21 a.m., Juergen Gross wrote:
> On 07.06.19 17:22, Joao Martins wrote:
>> On 6/7/19 3:51 PM, Juergen Gross wrote:
>>> On 09.05.19 19:25, Ankur Arora wrote:
>>>> Hi all,
>>>>
>>>> This is an RFC for xenhost support, outlined here by Juergen here:
>>>> https://lkml.org/lkml/2019/4/8/67.
>>>
>>> First: thanks for all the effort you've put into this series!
>>>
>>>> The high level idea is to provide an abstraction of the Xen
>>>> communication interface, as a xenhost_t.
>>>>
>>>> xenhost_t expose ops for communication between the guest and Xen
>>>> (hypercall, cpuid, shared_info/vcpu_info, evtchn, grant-table and on 
>>>> top
>>>> of those, xenbus, ballooning), and these can differ based on the kind
>>>> of underlying Xen: regular, local, and nested.
>>>
>>> I'm not sure we need to abstract away hypercalls and cpuid. I believe in
>>> case of nested Xen all contacts to the L0 hypervisor should be done via
>>> the L1 hypervisor. So we might need to issue some kind of passthrough
>>> hypercall when e.g. granting a page to L0 dom0, but this should be
>>> handled via the grant abstraction (events should be similar).
>>>
>> Just to be clear: By "kind of passthrough hypercall" you mean (e.g. 
>> for every
>> access/modify of grant table frames) you would proxy hypercall to L0 
>> Xen via L1 Xen?
> 
> It might be possible to spare some hypercalls by directly writing to
> grant frames mapped into L1 dom0, but in general you are right.
Wouldn't we still need map/unmap_grant_ref?
AFAICS, both the xenhost_direct and the xenhost_indirect cases should be
very similar (apart from the need to proxy in the indirect case.)

Ankur

> 
> 
> Juergen
> 

