[PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private memory
Posted by Chao Peng 2 years, 3 months ago
This is v4 of this series, which tries to implement fd-based KVM
guest private memory. The patches are based on the latest kvm/queue
branch commit:

  fea31d169094 KVM: x86/pmu: Fix available_event_types check for
               REF_CPU_CYCLES event

Introduction
------------
In general this patch series introduces a fd-based memslot which provides
guest memory through a memory file descriptor fd[offset,size] instead of
hva/size. The fd can be created from a supported memory filesystem
like tmpfs/hugetlbfs etc., which we refer to as the memory backing store.
KVM and the memory backing store exchange callbacks when such a memslot
gets created. At runtime KVM calls into the callbacks provided by the
backing store to get the pfn for a given fd+offset. The memory backing
store in turn calls into KVM callbacks when userspace fallocates/punches
holes on the fd, to notify KVM to map/unmap secondary MMU page tables.

Compared to the existing hva-based memslot, this new type of memslot
allows guest memory to be unmapped from host userspace (e.g. QEMU) and
even from the kernel itself, therefore reducing the attack surface and
preventing bugs.

Based on this fd-based memslot, we can build guest private memory that
is going to be used in confidential computing environments such as Intel
TDX and AMD SEV. When supported, the memory backing store can provide
more enforcement on the fd and KVM can use a single memslot to hold both
the private and shared parts of the guest memory.

mm extension
------------
Introduces a new F_SEAL_INACCESSIBLE seal for shmem and a new
MFD_INACCESSIBLE flag for memfd_create(). A file created with these flags
cannot be accessed via read(), write(), mmap() etc., i.e. via normal MMU
operations. The file content can only be used through the newly
introduced memfile_notifier extension.
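
As a rough illustration (not part of the series), userspace would create
such a backing store fd roughly as below. The fallback MFD_INACCESSIBLE
value is only an assumption so the sketch builds against unpatched
headers; the authoritative definition is the one this series adds to
include/uapi/linux/memfd.h:

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <unistd.h>
  #include <err.h>

  #ifndef MFD_INACCESSIBLE
  #define MFD_INACCESSIBLE 0x0008U  /* assumed value, use the patched header */
  #endif

  int main(void)
  {
          int fd = memfd_create("guest-private-mem", MFD_INACCESSIBLE);

          if (fd < 0)
                  err(1, "memfd_create");

          /* read()/write()/mmap() on this fd are expected to fail. */
          close(fd);
          return 0;
  }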

The memfile_notifier extension provides two sets of callbacks for KVM to
interact with the memory backing store:
  - memfile_notifier_ops: callbacks for the memory backing store to
    notify KVM when memory gets allocated/invalidated.
  - memfile_pfn_ops: callbacks for KVM to call into the memory backing
    store to request memory pages for guest private memory.
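
For orientation, here is a rough, non-authoritative sketch of the shape
of these two callback sets; the actual definitions live in
include/linux/memfile_notifier.h in this series and may differ in detail:

  struct memfile_notifier;

  /* Backing store -> KVM: allocation/invalidation notifications. */
  struct memfile_notifier_ops {
          void (*invalidate_page_range)(struct memfile_notifier *notifier,
                                        pgoff_t start, pgoff_t end);
          void (*fallocate)(struct memfile_notifier *notifier,
                            pgoff_t start, pgoff_t end);
  };

  /* KVM -> backing store: pfn lookup for a given file offset. */
  struct memfile_pfn_ops {
          long (*get_lock_pfn)(struct inode *inode, pgoff_t offset,
                               int *order);
          void (*put_unlock_pfn)(unsigned long pfn);
  };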

memslot extension
-----------------
Add the private fd and the fd offset to the existing 'shared' memslot so
that both private and shared guest memory can live in one single memslot.
A page in the memslot is either private or shared. A page is private only
when it is already allocated in the backing store fd; in all other cases
it is treated as shared, which includes pages already mapped as shared as
well as pages that have not been mapped yet. This means the memory backing
store is the single source of truth for which pages are private.
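
Conceptually, the extended memslot passed by userspace looks roughly like
the sketch below; the field names and padding here are illustrative, the
actual uAPI change is in include/uapi/linux/kvm.h of this series:

  struct kvm_userspace_memory_region_ext {
          struct kvm_userspace_memory_region region; /* existing hva-based part */
          __u64 private_offset;   /* offset into the backing store fd */
          __u32 private_fd;       /* the inaccessible memfd */
          __u32 padding[5];
  };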

Private memory map/unmap and conversion
---------------------------------------
Userspace's map/unmap operations are done via the fallocate() syscall on
the backing store fd:
  - map: default fallocate() with mode=0.
  - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
A map/unmap triggers the memfile_notifier_ops above to let KVM map/unmap
the secondary MMU page tables.
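
For example, minimal userspace helpers for the two conversions might look
like this (a sketch, not part of the series; note that fallocate(2)
requires FALLOC_FL_KEEP_SIZE together with FALLOC_FL_PUNCH_HOLE):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <err.h>

  /* Convert a range to private: allocate pages in the backing store. */
  static void map_private(int fd, off_t offset, off_t len)
  {
          if (fallocate(fd, 0, offset, len))
                  err(1, "fallocate(map)");
  }

  /* Convert a range back to shared: punch a hole in the backing store. */
  static void unmap_private(int fd, off_t offset, off_t len)
  {
          if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                        offset, len))
                  err(1, "fallocate(punch)");
  }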

Test
----
To test the new functionality of this series, the TDX patchset is needed.
Since the TDX patchset has not been merged yet, I did two kinds of tests:

-  Regression test on kvm/queue (this series)
   Most of the new code is not covered; I only tested building and
   booting.

-  Functional test on the latest TDX code
   The series is rebased onto the latest TDX code and the new
   functionality is tested.

For the TDX tests please see the repos below:
Linux: https://github.com/chao-p/linux/tree/privmem-v4.3
QEMU: https://github.com/chao-p/qemu/tree/privmem-v4

And an example QEMU command line:
-object tdx-guest,id=tdx \
-object memory-backend-memfd-private,id=ram1,size=2G \
-machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1

Changelog
---------
v4:
  - Decoupled the callbacks between KVM and mm from memfd and used the
    new name 'memfile_notifier'.
  - Supported registering multiple memslots to the same backing store.
  - Added per-memslot pfn_ops instead of per-system.
  - Reworked the invalidation part.
  - Improved the new KVM uAPIs (private memslot extension and memory
    error) per Sean's suggestions.
  - Addressed many other minor comments from v3.
v3:
  - Added locking protection when calling
    invalidate_page_range/fallocate callbacks.
  - Changed the memslot structure to keep using useraddr for shared
    memory.
  - Re-organized F_SEAL_INACCESSIBLE and MEMFD_OPS.
  - Added MFD_INACCESSIBLE flag to force F_SEAL_INACCESSIBLE.
  - Commit message improvements.
  - Many small fixes for comments from the last version.

Links of previous discussions
-----------------------------
[1] Original design proposal:
https://lkml.kernel.org/kvm/20210824005248.200037-1-seanjc@google.com/
[2] Updated proposal and RFC patch v1:
https://lkml.kernel.org/linux-fsdevel/20211111141352.26311-1-chao.p.peng@linux.intel.com/
[3] Patch v3: https://lkml.org/lkml/2021/12/23/283

Chao Peng (11):
  mm/memfd: Introduce MFD_INACCESSIBLE flag
  mm: Introduce memfile_notifier
  mm/shmem: Support memfile_notifier
  KVM: Extend the memslot to support fd-based private memory
  KVM: Use kvm_userspace_memory_region_ext
  KVM: Add KVM_EXIT_MEMORY_ERROR exit
  KVM: Use memfile_pfn_ops to obtain pfn for private pages
  KVM: Handle page fault for private memory
  KVM: Register private memslot to memory backing store
  KVM: Zap existing KVM mappings when pages changed in the private fd
  KVM: Expose KVM_MEM_PRIVATE

Kirill A. Shutemov (1):
  mm/shmem: Introduce F_SEAL_INACCESSIBLE

 arch/x86/kvm/Kconfig             |   1 +
 arch/x86/kvm/mmu/mmu.c           |  73 +++++++++++-
 arch/x86/kvm/mmu/paging_tmpl.h   |  11 +-
 arch/x86/kvm/x86.c               |  12 +-
 include/linux/kvm_host.h         |  49 +++++++-
 include/linux/memfile_notifier.h |  53 +++++++++
 include/linux/shmem_fs.h         |   4 +
 include/uapi/linux/fcntl.h       |   1 +
 include/uapi/linux/kvm.h         |  17 +++
 include/uapi/linux/memfd.h       |   1 +
 mm/Kconfig                       |   4 +
 mm/Makefile                      |   1 +
 mm/memfd.c                       |  20 +++-
 mm/memfile_notifier.c            |  99 ++++++++++++++++
 mm/shmem.c                       | 121 +++++++++++++++++++-
 virt/kvm/kvm_main.c              | 188 +++++++++++++++++++++++++++----
 16 files changed, 614 insertions(+), 41 deletions(-)
 create mode 100644 include/linux/memfile_notifier.h
 create mode 100644 mm/memfile_notifier.c

-- 
2.17.1


Re: [PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private memory
Posted by Steven Price 2 years, 2 months ago
On 18/01/2022 13:21, Chao Peng wrote:
> This is the v4 of this series which try to implement the fd-based KVM
> guest private memory. The patches are based on latest kvm/queue branch
> commit:
> 
>   fea31d169094 KVM: x86/pmu: Fix available_event_types check for
>                REF_CPU_CYCLES event
> 
> Introduction
> ------------
> In general this patch series introduce fd-based memslot which provides
> guest memory through memory file descriptor fd[offset,size] instead of
> hva/size. The fd can be created from a supported memory filesystem
> like tmpfs/hugetlbfs etc. which we refer as memory backing store. KVM
> and the the memory backing store exchange callbacks when such memslot
> gets created. At runtime KVM will call into callbacks provided by the
> backing store to get the pfn with the fd+offset. Memory backing store
> will also call into KVM callbacks when userspace fallocate/punch hole
> on the fd to notify KVM to map/unmap secondary MMU page tables.
> 
> Comparing to existing hva-based memslot, this new type of memslot allows
> guest memory unmapped from host userspace like QEMU and even the kernel
> itself, therefore reduce attack surface and prevent bugs.
> 
> Based on this fd-based memslot, we can build guest private memory that
> is going to be used in confidential computing environments such as Intel
> TDX and AMD SEV. When supported, the memory backing store can provide
> more enforcement on the fd and KVM can use a single memslot to hold both
> the private and shared part of the guest memory. 

This looks like it will be useful for Arm's Confidential Compute
Architecture (CCA) too - in particular we need a way of ensuring that
user space cannot 'trick' the kernel into accessing memory which has
been delegated to a realm (i.e. protected guest), and a memfd seems like
a good match.

Some comments below.

> mm extension
> ---------------------
> Introduces new F_SEAL_INACCESSIBLE for shmem and new MFD_INACCESSIBLE
> flag for memfd_create(), the file created with these flags cannot read(),
> write() or mmap() etc via normal MMU operations. The file content can
> only be used with the newly introduced memfile_notifier extension.

For Arm CCA we are expecting to seed the realm with an initial memory
contents (e.g. kernel and initrd) which will then be measured before
execution starts. The 'obvious' way of doing this with a memfd would be
to populate parts of the memfd then seal it with F_SEAL_INACCESSIBLE.

However as things stand it's not possible to set the INACCESSIBLE seal
after creating a memfd (F_ALL_SEALS hasn't been updated to include it).

One potential workaround would be for arm64 to provide a custom KVM
ioctl to effectively memcpy() into the guest's protected memory which
would only be accessible before the guest has started. The drawback is
that it requires two copies of the data during guest setup.

Do you think things could be relaxed so the F_SEAL_INACCESSIBLE flag
could be set after a memfd has been created (and partially populated)?

Thanks,

Steve

> The memfile_notifier extension provides two sets of callbacks for KVM to
> interact with the memory backing store:
>   - memfile_notifier_ops: callbacks for memory backing store to notify
>     KVM when memory gets allocated/invalidated.
>   - memfile_pfn_ops: callbacks for KVM to call into memory backing store
>     to request memory pages for guest private memory.
> 
> memslot extension
> -----------------
> Add the private fd and the fd offset to existing 'shared' memslot so that
> both private/shared guest memory can live in one single memslot. A page in
> the memslot is either private or shared. A page is private only when it's
> already allocated in the backing store fd, all the other cases it's treated
> as shared, this includes those already mapped as shared as well as those
> having not been mapped. This means the memory backing store is the place
> which tells the truth of which page is private.
> 
> Private memory map/unmap and conversion
> ---------------------------------------
> Userspace's map/unmap operations are done by fallocate() ioctl on the
> backing store fd.
>   - map: default fallocate() with mode=0.
>   - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> The map/unmap will trigger above memfile_notifier_ops to let KVM map/unmap
> secondary MMU page tables.
> 
> Test
> ----
> To test the new functionalities of this patch TDX patchset is needed.
> Since TDX patchset has not been merged so I did two kinds of test:
> 
> -  Regresion test on kvm/queue (this patch)
>    Most new code are not covered. I only tested building and booting.
> 
> -  New Funational test on latest TDX code
>    The patch is rebased to latest TDX code and tested the new
>    funcationalities.
> 
> For TDX test please see below repos:
> Linux: https://github.com/chao-p/linux/tree/privmem-v4.3
> QEMU: https://github.com/chao-p/qemu/tree/privmem-v4
> 
> And an example QEMU command line:
> -object tdx-guest,id=tdx \
> -object memory-backend-memfd-private,id=ram1,size=2G \
> -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> 
> Changelog
> ----------
> v4:
>   - Decoupled the callbacks between KVM/mm from memfd and use new
>     name 'memfile_notifier'.
>   - Supported register multiple memslots to the same backing store.
>   - Added per-memslot pfn_ops instead of per-system.
>   - Reworked the invalidation part.
>   - Improved new KVM uAPIs (private memslot extension and memory
>     error) per Sean's suggestions.
>   - Addressed many other minor fixes for comments from v3.
> v3:
>   - Added locking protection when calling
>     invalidate_page_range/fallocate callbacks.
>   - Changed memslot structure to keep use useraddr for shared memory.
>   - Re-organized F_SEAL_INACCESSIBLE and MEMFD_OPS.
>   - Added MFD_INACCESSIBLE flag to force F_SEAL_INACCESSIBLE.
>   - Commit message improvement.
>   - Many small fixes for comments from the last version.
> 
> Links of previous discussions
> -----------------------------
> [1] Original design proposal:
> https://lkml.kernel.org/kvm/20210824005248.200037-1-seanjc@google.com/
> [2] Updated proposal and RFC patch v1:
> https://lkml.kernel.org/linux-fsdevel/20211111141352.26311-1-chao.p.peng@linux.intel.com/
> [3] Patch v3: https://lkml.org/lkml/2021/12/23/283
> 
> Chao Peng (11):
>   mm/memfd: Introduce MFD_INACCESSIBLE flag
>   mm: Introduce memfile_notifier
>   mm/shmem: Support memfile_notifier
>   KVM: Extend the memslot to support fd-based private memory
>   KVM: Use kvm_userspace_memory_region_ext
>   KVM: Add KVM_EXIT_MEMORY_ERROR exit
>   KVM: Use memfile_pfn_ops to obtain pfn for private pages
>   KVM: Handle page fault for private memory
>   KVM: Register private memslot to memory backing store
>   KVM: Zap existing KVM mappings when pages changed in the private fd
>   KVM: Expose KVM_MEM_PRIVATE
> 
> Kirill A. Shutemov (1):
>   mm/shmem: Introduce F_SEAL_INACCESSIBLE
> 
>  arch/x86/kvm/Kconfig             |   1 +
>  arch/x86/kvm/mmu/mmu.c           |  73 +++++++++++-
>  arch/x86/kvm/mmu/paging_tmpl.h   |  11 +-
>  arch/x86/kvm/x86.c               |  12 +-
>  include/linux/kvm_host.h         |  49 +++++++-
>  include/linux/memfile_notifier.h |  53 +++++++++
>  include/linux/shmem_fs.h         |   4 +
>  include/uapi/linux/fcntl.h       |   1 +
>  include/uapi/linux/kvm.h         |  17 +++
>  include/uapi/linux/memfd.h       |   1 +
>  mm/Kconfig                       |   4 +
>  mm/Makefile                      |   1 +
>  mm/memfd.c                       |  20 +++-
>  mm/memfile_notifier.c            |  99 ++++++++++++++++
>  mm/shmem.c                       | 121 +++++++++++++++++++-
>  virt/kvm/kvm_main.c              | 188 +++++++++++++++++++++++++++----
>  16 files changed, 614 insertions(+), 41 deletions(-)
>  create mode 100644 include/linux/memfile_notifier.h
>  create mode 100644 mm/memfile_notifier.c
>
Re: [PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private memory
Posted by Nakajima, Jun 2 years, 2 months ago
> On Jan 28, 2022, at 8:47 AM, Steven Price <steven.price@arm.com> wrote:
> 
> On 18/01/2022 13:21, Chao Peng wrote:
>> This is the v4 of this series which try to implement the fd-based KVM
>> guest private memory. The patches are based on latest kvm/queue branch
>> commit:
>> 
>>  fea31d169094 KVM: x86/pmu: Fix available_event_types check for
>>               REF_CPU_CYCLES event
>> 
>> Introduction
>> ------------
>> In general this patch series introduce fd-based memslot which provides
>> guest memory through memory file descriptor fd[offset,size] instead of
>> hva/size. The fd can be created from a supported memory filesystem
>> like tmpfs/hugetlbfs etc. which we refer as memory backing store. KVM
>> and the the memory backing store exchange callbacks when such memslot
>> gets created. At runtime KVM will call into callbacks provided by the
>> backing store to get the pfn with the fd+offset. Memory backing store
>> will also call into KVM callbacks when userspace fallocate/punch hole
>> on the fd to notify KVM to map/unmap secondary MMU page tables.
>> 
>> Comparing to existing hva-based memslot, this new type of memslot allows
>> guest memory unmapped from host userspace like QEMU and even the kernel
>> itself, therefore reduce attack surface and prevent bugs.
>> 
>> Based on this fd-based memslot, we can build guest private memory that
>> is going to be used in confidential computing environments such as Intel
>> TDX and AMD SEV. When supported, the memory backing store can provide
>> more enforcement on the fd and KVM can use a single memslot to hold both
>> the private and shared part of the guest memory. 
> 
> This looks like it will be useful for Arm's Confidential Compute
> Architecture (CCA) too - in particular we need a way of ensuring that
> user space cannot 'trick' the kernel into accessing memory which has
> been delegated to a realm (i.e. protected guest), and a memfd seems like
> a good match.

Good to hear that it will be useful for ARM’s CCA as well.

> 
> Some comments below.
> 
>> mm extension
>> ---------------------
>> Introduces new F_SEAL_INACCESSIBLE for shmem and new MFD_INACCESSIBLE
>> flag for memfd_create(), the file created with these flags cannot read(),
>> write() or mmap() etc via normal MMU operations. The file content can
>> only be used with the newly introduced memfile_notifier extension.
> 
> For Arm CCA we are expecting to seed the realm with an initial memory
> contents (e.g. kernel and initrd) which will then be measured before
> execution starts. The 'obvious' way of doing this with a memfd would be
> to populate parts of the memfd then seal it with F_SEAL_INACCESSIBLE.

As far as I understand, we have the same problem with TDX, where a guest TD (Trust Domain) starts in private memory. We seed the private memory typically with a guest firmware, and the initial image (plaintext) is copied to somewhere in QEMU memory (from disk, for example) for that purpose; this location is not associated with the target GPA.

Upon a (new) ioctl from QEMU, KVM requests the TDX Module to copy the pages to private memory (by encrypting), specifying the target GPA, using a TDX interface function (TDH.MEM.PAGE.ADD). The actual pages for the private memory are allocated by the callbacks provided by the backing store during the “copy” operation.

We extended the existing KVM_MEMORY_ENCRYPT_OP (ioctl) for the above. 

> 
> However as things stand it's not possible to set the INACCESSIBLE seal
> after creating a memfd (F_ALL_SEALS hasn't been updated to include it).
> 
> One potential workaround would be for arm64 to provide a custom KVM
> ioctl to effectively memcpy() into the guest's protected memory which
> would only be accessible before the guest has started. The drawback is
> that it requires two copies of the data during guest setup.

So, the guest pages are not encrypted in the realm?

I think you could do the same thing, i.e. KVM copies the pages to the realm, where pages are allocated by the backing store. But, yes, it will have two copies of the data at that time unless encrypted.

> 
> Do you think things could be relaxed so the F_SEAL_INACCESSIBLE flag
> could be set after a memfd has been created (and partially populated)?
> 

I think F_SEAL_INACCESSIBLE could be deferred to the point where measurement of the initial image is done (we call this “build-time” measurement in TDX). For example, if we add a callback to activate F_SEAL_INACCESSIBLE and KVM calls it before the measurement time, does that work for you?

--- 
Jun



Re: [PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private memory
Posted by Steven Price 2 years, 2 months ago
Hi Jun,

On 02/02/2022 02:28, Nakajima, Jun wrote:
> 
>> On Jan 28, 2022, at 8:47 AM, Steven Price <steven.price@arm.com> wrote:
>>
>> On 18/01/2022 13:21, Chao Peng wrote:
>>> This is the v4 of this series which try to implement the fd-based KVM
>>> guest private memory. The patches are based on latest kvm/queue branch
>>> commit:
>>>
>>>  fea31d169094 KVM: x86/pmu: Fix available_event_types check for
>>>               REF_CPU_CYCLES event
>>>
>>> Introduction
>>> ------------
>>> In general this patch series introduce fd-based memslot which provides
>>> guest memory through memory file descriptor fd[offset,size] instead of
>>> hva/size. The fd can be created from a supported memory filesystem
>>> like tmpfs/hugetlbfs etc. which we refer as memory backing store. KVM
>>> and the the memory backing store exchange callbacks when such memslot
>>> gets created. At runtime KVM will call into callbacks provided by the
>>> backing store to get the pfn with the fd+offset. Memory backing store
>>> will also call into KVM callbacks when userspace fallocate/punch hole
>>> on the fd to notify KVM to map/unmap secondary MMU page tables.
>>>
>>> Comparing to existing hva-based memslot, this new type of memslot allows
>>> guest memory unmapped from host userspace like QEMU and even the kernel
>>> itself, therefore reduce attack surface and prevent bugs.
>>>
>>> Based on this fd-based memslot, we can build guest private memory that
>>> is going to be used in confidential computing environments such as Intel
>>> TDX and AMD SEV. When supported, the memory backing store can provide
>>> more enforcement on the fd and KVM can use a single memslot to hold both
>>> the private and shared part of the guest memory. 
>>
>> This looks like it will be useful for Arm's Confidential Compute
>> Architecture (CCA) too - in particular we need a way of ensuring that
>> user space cannot 'trick' the kernel into accessing memory which has
>> been delegated to a realm (i.e. protected guest), and a memfd seems like
>> a good match.
> 
> Good to hear that it will be useful for ARM’s CCA as well.
> 
>>
>> Some comments below.
>>
>>> mm extension
>>> ---------------------
>>> Introduces new F_SEAL_INACCESSIBLE for shmem and new MFD_INACCESSIBLE
>>> flag for memfd_create(), the file created with these flags cannot read(),
>>> write() or mmap() etc via normal MMU operations. The file content can
>>> only be used with the newly introduced memfile_notifier extension.
>>
>> For Arm CCA we are expecting to seed the realm with an initial memory
>> contents (e.g. kernel and initrd) which will then be measured before
>> execution starts. The 'obvious' way of doing this with a memfd would be
>> to populate parts of the memfd then seal it with F_SEAL_INACCESSIBLE.
> 
> As far as I understand, we have the same problem with TDX, where a guest TD (Trust Domain) starts in private memory. We seed the private memory typically with a guest firmware, and the initial image (plaintext) is copied to somewhere in QEMU memory (from disk, for example) for that purpose; this location is not associated with the target GPA.
> 
> Upon a (new) ioctl from QEMU, KVM requests the TDX Module to copy the pages to private memory (by encrypting) specifying the target GPA, using a TDX interface function (TDH.MEM.PAGE.ADD). The actual pages for the private memory is allocated by the callbacks provided by the backing store during the “copy” operation.
> 
> We extended the existing KVM_MEMORY_ENCRYPT_OP (ioctl) for the above. 

Ok, so if I understand correctly QEMU would do something along the lines of:

1. Use memfd_create(...MFD_INACCESSIBLE) to allocate private memory for
the guest.

2. ftruncate/fallocate the memfd to back the appropriate areas of the memfd.

3. Create a memslot in KVM pointing to the memfd

4. Load the 'guest firmware' (kernel/initrd or similar) into VMM memory

5. Use the KVM_MEMORY_ENCRYPT_OP to request the 'guest firmware' be
copied into the private memory. The ioctl would temporarily pin the
pages and ask the TDX module to copy (& encrypt) the data into the
private memory, unpinning after the copy.

6. QEMU can then free the unencrypted copy of the guest firmware.

>>
>> However as things stand it's not possible to set the INACCESSIBLE seal
>> after creating a memfd (F_ALL_SEALS hasn't been updated to include it).
>>
>> One potential workaround would be for arm64 to provide a custom KVM
>> ioctl to effectively memcpy() into the guest's protected memory which
>> would only be accessible before the guest has started. The drawback is
>> that it requires two copies of the data during guest setup.
> 
> So, the guest pages are not encrypted in the realm?

The pages are likely to be encrypted, but architecturally it doesn't
matter - the hardware prevents the 'Normal World' accessing the pages
when they are assigned to the realm. Encryption is only necessary to
protect against hardware attacks (e.g. bus snooping).

> I think you could do the same thing, i.e. KVM copies the pages to the realm, where pages are allocated by the backing store. But, yes, it will have two copies of the data at that time unless encrypted. .

I'm not sure I follow the "unless encrypted" part of that.

>>
>> Do you think things could be relaxed so the F_SEAL_INACCESSIBLE flag
>> could be set after a memfd has been created (and partially populated)?
>>
> 
> I think F_SEAL_INACCESSIBLE could be deferred to the point where measurement of the initial image is done (we call “build-time” measurement in TDX). For example, if we add a callback to activate F_SEAL_INACCESSIBLE and KVM calls it before such the measurement time, does that work for you?

Yes, if it's possible to defer setting the F_SEAL_INACCESSIBLE then it
should be possible for QEMU to load the initial 'guest firmware'
straight into the memfd. Then to launch the guest the trusted component
only needs to protect and measure the populated pages - there's no need
to copy the data from one set of pages to another. This removes the need
to have two copies of the initial image in memory at the point of measuring.

Thanks,

Steve
Re: [PATCH v4 00/12] KVM: mm: fd-based approach for supporting KVM guest private memory
Posted by Nakajima, Jun 2 years, 2 months ago
> On Feb 2, 2022, at 1:23 AM, Steven Price <steven.price@arm.com> wrote:
> 
> Hi Jun,
> 
> On 02/02/2022 02:28, Nakajima, Jun wrote:
>> 
>>> On Jan 28, 2022, at 8:47 AM, Steven Price <steven.price@arm.com> wrote:
>>> 
>>> On 18/01/2022 13:21, Chao Peng wrote:
>>>> This is the v4 of this series which try to implement the fd-based KVM
>>>> guest private memory. The patches are based on latest kvm/queue branch
>>>> commit:
>>>> 
>>>> fea31d169094 KVM: x86/pmu: Fix available_event_types check for
>>>>              REF_CPU_CYCLES event
>>>> 
>>>> Introduction
>>>> ------------
>>>> In general this patch series introduce fd-based memslot which provides
>>>> guest memory through memory file descriptor fd[offset,size] instead of
>>>> hva/size. The fd can be created from a supported memory filesystem
>>>> like tmpfs/hugetlbfs etc. which we refer as memory backing store. KVM
>>>> and the the memory backing store exchange callbacks when such memslot
>>>> gets created. At runtime KVM will call into callbacks provided by the
>>>> backing store to get the pfn with the fd+offset. Memory backing store
>>>> will also call into KVM callbacks when userspace fallocate/punch hole
>>>> on the fd to notify KVM to map/unmap secondary MMU page tables.
>>>> 
>>>> Comparing to existing hva-based memslot, this new type of memslot allows
>>>> guest memory unmapped from host userspace like QEMU and even the kernel
>>>> itself, therefore reduce attack surface and prevent bugs.
>>>> 
>>>> Based on this fd-based memslot, we can build guest private memory that
>>>> is going to be used in confidential computing environments such as Intel
>>>> TDX and AMD SEV. When supported, the memory backing store can provide
>>>> more enforcement on the fd and KVM can use a single memslot to hold both
>>>> the private and shared part of the guest memory. 
>>> 
>>> This looks like it will be useful for Arm's Confidential Compute
>>> Architecture (CCA) too - in particular we need a way of ensuring that
>>> user space cannot 'trick' the kernel into accessing memory which has
>>> been delegated to a realm (i.e. protected guest), and a memfd seems like
>>> a good match.
>> 
>> Good to hear that it will be useful for ARM’s CCA as well.
>> 
>>> 
>>> Some comments below.
>>> 
>>>> mm extension
>>>> ---------------------
>>>> Introduces new F_SEAL_INACCESSIBLE for shmem and new MFD_INACCESSIBLE
>>>> flag for memfd_create(), the file created with these flags cannot read(),
>>>> write() or mmap() etc via normal MMU operations. The file content can
>>>> only be used with the newly introduced memfile_notifier extension.
>>> 
>>> For Arm CCA we are expecting to seed the realm with an initial memory
>>> contents (e.g. kernel and initrd) which will then be measured before
>>> execution starts. The 'obvious' way of doing this with a memfd would be
>>> to populate parts of the memfd then seal it with F_SEAL_INACCESSIBLE.
>> 
>> As far as I understand, we have the same problem with TDX, where a guest TD (Trust Domain) starts in private memory. We seed the private memory typically with a guest firmware, and the initial image (plaintext) is copied to somewhere in QEMU memory (from disk, for example) for that purpose; this location is not associated with the target GPA.
>> 
>> Upon a (new) ioctl from QEMU, KVM requests the TDX Module to copy the pages to private memory (by encrypting) specifying the target GPA, using a TDX interface function (TDH.MEM.PAGE.ADD). The actual pages for the private memory is allocated by the callbacks provided by the backing store during the “copy” operation.
>> 
>> We extended the existing KVM_MEMORY_ENCRYPT_OP (ioctl) for the above. 

Hi Steve,

> 
> Ok, so if I understand correctly QEMU would do something along the lines of:
> 
> 1. Use memfd_create(...MFD_INACCESSIBLE) to allocate private memory for
> the guest.
> 
> 2. ftruncate/fallocate the memfd to back the appropriate areas of the memfd.
> 
> 3. Create a memslot in KVM pointing to the memfd
> 
> 4. Load the 'guest firmware' (kernel/initrd or similar) into VMM memory
> 
> 5. Use the KVM_MEMORY_ENCRYPT_OP to request the 'guest firmware' be
> copied into the private memory. The ioctl would temporarily pin the
> pages and ask the TDX module to copy (& encrypt) the data into the
> private memory, unpinning after the copy.
> 
> 6. QEMU can then free the unencrypted copy of the guest firmware.

Yes, this is correct. We pin and unpin the pages one-by-one today, though.


> 
>>> 
>>> However as things stand it's not possible to set the INACCESSIBLE seal
>>> after creating a memfd (F_ALL_SEALS hasn't been updated to include it).
>>> 
>>> One potential workaround would be for arm64 to provide a custom KVM
>>> ioctl to effectively memcpy() into the guest's protected memory which
>>> would only be accessible before the guest has started. The drawback is
>>> that it requires two copies of the data during guest setup.
>> 
>> So, the guest pages are not encrypted in the realm?
> 
> The pages are likely to be encrypted, but architecturally it doesn't
> matter - the hardware prevents the 'Normal World' accessing the pages
> when they are assigned to the realm. Encryption is only necessary to
> protect against hardware attacks (e.g. bus snooping).
> 
>> I think you could do the same thing, i.e. KVM copies the pages to the realm, where pages are allocated by the backing store. But, yes, it will have two copies of the data at that time unless encrypted. .
> 
> I'm not sure I follow the "unless encrypted" part of that.

What I meant is, the encrypted one is not a copy because the host software cannot recreate it (from the TDX architecture point of view). Practically we “move” the pages to private memory as we pin and unpin the pages one-by-one (above).


> 
>>> 
>>> Do you think things could be relaxed so the F_SEAL_INACCESSIBLE flag
>>> could be set after a memfd has been created (and partially populated)?
>>> 
>> 
>> I think F_SEAL_INACCESSIBLE could be deferred to the point where measurement of the initial image is done (we call “build-time” measurement in TDX). For example, if we add a callback to activate F_SEAL_INACCESSIBLE and KVM calls it before such the measurement time, does that work for you?
> 
> Yes, if it's possible to defer setting the F_SEAL_INACCESSIBLE then it
> should be possible for QEMU to load the initial 'guest firmware'
> straight into the memfd. Then to launch the guest the trusted component
> only needs to protect and measure the populated pages - there's no need
> to copy the data from one set of pages to another. This removes the need
> to have two copies of the initial image in memory at the point of measuring.
> 

Ok. This should be useful for the existing VMs. We’ll take a look at the details.


--- 
Jun