[Xen-devel] [PATCH 0/9] Per vcpu vm_event channels
Posted by Petre Pircalabu 4 years, 11 months ago
This patchset adds a new mechanism for sending synchronous vm_event
requests and handling vm_event responses without using a ring.
Because each synchronous request pauses the vcpu until the corresponding
response is handled, the request can be stored in a slotted memory buffer
(one slot per vcpu) shared between the hypervisor and the controlling domain.

The main advantages of this approach are:
- the memory used to hold the requests/responses can be allocated
dynamically, so the size of vm_event_request_t/vm_event_response_t can
grow without being constrained by the ring's one-page limit;
- the ring's waitqueue logic becomes unnecessary, because the vcpu
sending the request remains blocked until a response is received.
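For illustration only, a minimal sketch of what a per-vcpu slot could look
like; none of the names below are taken from this series, they simply
capture the idea of one fixed-size slot per vcpu, shared with the monitor
and signalled over a per-vcpu event channel:

#include <stdint.h>

/* Placeholder names, not the series' actual layout. */
enum demo_slot_state {
    DEMO_SLOT_IDLE,       /* no request pending for this vcpu           */
    DEMO_SLOT_SUBMITTED,  /* Xen wrote a request; the vcpu is paused    */
    DEMO_SLOT_FINISHED,   /* the monitor wrote a response; vcpu resumes */
};

/*
 * One slot per vcpu, living in memory shared between Xen and the
 * controlling domain.  Because the vcpu stays paused until its response
 * arrives, a single slot per vcpu suffices and no ring or waitqueue is
 * needed; the payload area is sized when the channel is set up, so the
 * request/response structures are not limited to the ring's single page.
 */
struct demo_vm_event_slot {
    uint32_t state;   /* enum demo_slot_state, flipped by Xen/monitor */
    uint32_t port;    /* per-vcpu event channel used for signalling   */
    uint8_t  data[];  /* request on submit, response on completion    */
};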



Petre Pircalabu (9):
  tools/libxc: Consistent usage of xc_vm_event_* interface
  vm_event: Define VM_EVENT type
  vm_event: Make ‘local’ functions ‘static’
  vm_event: Remove "ring" suffix from vm_event_check_ring
  vm_event: Simplify vm_event interface
  vm_event: Move struct vm_event_domain to vm_event.c
  vm_event: Decouple implementation details from interface.
  vm_event: Add vm_event_ng interface
  xen-access: Add support for vm_event_ng interface

 tools/libxc/include/xenctrl.h        |  56 +---
 tools/libxc/xc_mem_paging.c          |  23 +-
 tools/libxc/xc_memshr.c              |  34 ---
 tools/libxc/xc_monitor.c             |  46 ++-
 tools/libxc/xc_private.h             |  16 +-
 tools/libxc/xc_vm_event.c            | 175 +++++++-----
 tools/tests/xen-access/Makefile      |   7 +-
 tools/tests/xen-access/vm-event-ng.c | 210 ++++++++++++++
 tools/tests/xen-access/vm-event.c    | 193 +++++++++++++
 tools/tests/xen-access/xen-access.c  | 408 ++++++++++-----------------
 tools/tests/xen-access/xen-access.h  |  91 ++++++
 tools/xenpaging/xenpaging.c          |  42 +--
 xen/arch/arm/mem_access.c            |   2 +-
 xen/arch/x86/mm.c                    |   5 +
 xen/arch/x86/mm/mem_access.c         |   4 +-
 xen/arch/x86/mm/mem_paging.c         |   2 +-
 xen/arch/x86/mm/mem_sharing.c        |   5 +-
 xen/arch/x86/mm/p2m.c                |  11 +-
 xen/common/Makefile                  |   1 +
 xen/common/domctl.c                  |   7 +
 xen/common/mem_access.c              |   2 +-
 xen/common/monitor.c                 |   4 +-
 xen/common/vm_event.c                | 527 ++++++++++++++++++-----------------
 xen/common/vm_event_ng.c             | 449 +++++++++++++++++++++++++++++
 xen/drivers/passthrough/pci.c        |   2 +-
 xen/include/public/domctl.h          | 101 +++----
 xen/include/public/memory.h          |   2 +
 xen/include/public/vm_event.h        |  47 ++++
 xen/include/xen/sched.h              |  25 +-
 xen/include/xen/vm_event.h           |  80 +++++-
 30 files changed, 1720 insertions(+), 857 deletions(-)
 create mode 100644 tools/tests/xen-access/vm-event-ng.c
 create mode 100644 tools/tests/xen-access/vm-event.c
 create mode 100644 tools/tests/xen-access/xen-access.h
 create mode 100644 xen/common/vm_event_ng.c

-- 
2.7.4


Re: [Xen-devel] [PATCH 0/9] Per vcpu vm_event channels
Posted by Tamas K Lengyel 4 years, 11 months ago
On Thu, May 30, 2019 at 7:18 AM Petre Pircalabu
<ppircalabu@bitdefender.com> wrote:
>
> This patchset adds a new mechanism of sending synchronous vm_event
> requests and handling vm_event responses without using a ring.
> As each synchronous request pauses the vcpu until the corresponding
> response is handled, it can be stored in a slotted memory buffer
> (one per vcpu) shared between the hypervisor and the controlling domain.
>
> The main advantages of this approach are:
> - the ability to dynamicaly allocate the necessary memory used to hold
> the requests/responses (the size of vm_event_request_t/vm_event_response_t
> can grow unrestricted by the ring's one page limitation)
> - the ring's waitqueue logic is unnecessary in this case because the
> vcpu sending the request is blocked until a response is received.

Hi Petre,
could you push this series as a git branch somewhere?

Thanks,
Tamas

Re: [Xen-devel] [PATCH 0/9] Per vcpu vm_event channels
Posted by Petre Ovidiu PIRCALABU 4 years, 11 months ago
On Thu, 2019-05-30 at 08:27 -0700, Tamas K Lengyel wrote:
> On Thu, May 30, 2019 at 7:18 AM Petre Pircalabu
> <ppircalabu@bitdefender.com> wrote:
> > 
> > This patchset adds a new mechanism of sending synchronous vm_event
> > requests and handling vm_event responses without using a ring.
> > As each synchronous request pauses the vcpu until the corresponding
> > response is handled, it can be stored in a slotted memory buffer
> > (one per vcpu) shared between the hypervisor and the controlling
> > domain.
> > 
> > The main advantages of this approach are:
> > - the ability to dynamicaly allocate the necessary memory used to
> > hold
> > the requests/responses (the size of
> > vm_event_request_t/vm_event_response_t
> > can grow unrestricted by the ring's one page limitation)
> > - the ring's waitqueue logic is unnecessary in this case because
> > the
> > vcpu sending the request is blocked until a response is received.
> 
> Hi Petre,
> could you push this series as a git branch somewhere?
> 
> Thanks,
> Tamas

Hi Tamas,

I've pushed the changes to 
https://github.com/petrepircalabu/xen/tree/vm_event_ng/devel

Thank you very much for your support,
Petre

Re: [Xen-devel] [PATCH 0/9] Per vcpu vm_event channels
Posted by Andrew Cooper 4 years, 11 months ago
On 30/05/2019 07:18, Petre Pircalabu wrote:
> This patchset adds a new mechanism of sending synchronous vm_event
> requests and handling vm_event responses without using a ring.
> As each synchronous request pauses the vcpu until the corresponding
> response is handled, it can be stored in a slotted memory buffer
> (one per vcpu) shared between the hypervisor and the controlling domain.
>
> The main advantages of this approach are:
> - the ability to dynamicaly allocate the necessary memory used to hold
> the requests/responses (the size of vm_event_request_t/vm_event_response_t
> can grow unrestricted by the ring's one page limitation)
> - the ring's waitqueue logic is unnecessary in this case because the
> vcpu sending the request is blocked until a response is received.
>

Before I review patches 7-9 for more than stylistic things, can you
briefly describe what's next?

AFAICT, this introduces a second interface between Xen and the agent,
which is limited to synchronous events only and exclusively uses a
slotted system per vcpu, with a per-vcpu event channel?

What (if any) are the future development plans, and what are the plans
for deprecating the use of the old interface?  (The answers to these
will affect my review of the new interface).

~Andrew

Re: [Xen-devel] [PATCH 0/9] Per vcpu vm_event channels
Posted by Petre Ovidiu PIRCALABU 4 years, 10 months ago
On Fri, 2019-05-31 at 17:25 -0700, Andrew Cooper wrote:
> On 30/05/2019 07:18, Petre Pircalabu wrote:
> > This patchset adds a new mechanism of sending synchronous vm_event
> > requests and handling vm_event responses without using a ring.
> > As each synchronous request pauses the vcpu until the corresponding
> > response is handled, it can be stored in a slotted memory buffer
> > (one per vcpu) shared between the hypervisor and the controlling
> > domain.
> > 
> > The main advantages of this approach are:
> > - the ability to dynamicaly allocate the necessary memory used to
> > hold
> > the requests/responses (the size of
> > vm_event_request_t/vm_event_response_t
> > can grow unrestricted by the ring's one page limitation)
> > - the ring's waitqueue logic is unnecessary in this case because
> > the
> > vcpu sending the request is blocked until a response is received.
> > 
> 
> Before I review patches 7-9 for more than stylistic things, can you
> briefly describe what's next?
> 
> AFACT, this introduces a second interface between Xen and the agent,
> which is limited to synchronous events only, and exclusively uses
> slotted system per vcpu, with a per-vcpu event channel?

Using a distinct interface was proposed by George in order to allow the
existing vm_event clients to run unmodified.
> 
> What (if any) are the future development plans, and what are the
> plans
> for deprecating the use of the old interface?  (The answers to these
> will affect my review of the new interface).
> 
> ~Andrew
> 
At the moment we only use sync vm_events, so the "one slot per
vcpu" approach suits us. Also, by allocating the vm_event
requests/responses dynamically, we can increase their size without
suffering the performance drop incurred when using the ring
(+waitqueue).
We don't have a schedule for deprecating the legacy (ring-based)
interface yet, but we will adapt the new interface based on the
feedback we receive from other vm_event users.
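
For illustration only, a rough sketch of how a monitor could consume such
per-vcpu channels; the xenevtchn_* calls are the standard libxenevtchn
API, while NR_VCPUS, get_channel_port() and process_slot() are
placeholders rather than anything defined by this series:

#include <stdint.h>
#include <xenevtchn.h>

#define NR_VCPUS 4  /* assume a 4-vcpu guest for the example */

/* Placeholders for whatever the NG interface actually exposes. */
extern evtchn_port_t get_channel_port(uint32_t domid, unsigned int vcpu);
extern void process_slot(unsigned int vcpu); /* read request, write response */

static int run_monitor(uint32_t domid)
{
    xenevtchn_handle *xce = xenevtchn_open(NULL, 0);
    int local[NR_VCPUS];
    unsigned int i;

    if ( !xce )
        return -1;

    /* One interdomain event channel per vcpu instead of one shared ring. */
    for ( i = 0; i < NR_VCPUS; i++ )
        local[i] = xenevtchn_bind_interdomain(xce, domid,
                                              get_channel_port(domid, i));

    for ( ;; )
    {
        /* Block until some vcpu has placed a request in its slot. */
        xenevtchn_port_or_error_t port = xenevtchn_pending(xce);

        if ( port < 0 )
            break;

        for ( i = 0; i < NR_VCPUS; i++ )
        {
            if ( local[i] != port )
                continue;
            process_slot(i);             /* handle this vcpu's request     */
            xenevtchn_notify(xce, port); /* signal Xen to unpause the vcpu */
            break;
        }

        xenevtchn_unmask(xce, port);
    }

    xenevtchn_close(xce);
    return 0;
}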

