[RFC 0/4] Adding Virtual Memory Fuses to Xen

Posted by Smith, Jackson 1 year, 11 months ago
Hi Xen Developers,

My team at Riverside Research is currently spending IRAD funding
to prototype next-generation secure hypervisor design ideas
on Xen. In particular, we are prototyping the idea of Virtual
Memory Fuses for Software Enclaves, as described in this paper:
https://www.nspw.org/papers/2020/nspw2020-brookes.pdf. Note that
that paper talks about OS/Process while we have implemented the idea
for Hypervisor/VM.

Our goal is to emulate something akin to Intel SGX or AMD SEV,
but using only existing virtual memory features common in all
processors. The basic idea is not to map guest memory into the
hypervisor so that a compromised hypervisor cannot compromise
(e.g. read/write) the guest. This idea has been proposed before,
however, Virtual Memory Fuses go one step further; they delete the
hypervisor's mappings to its own page tables, essentially locking
the virtual memory configuration for the lifetime of the system. This
creates what we call "Software Enclaves", ensuring that an adversary
with arbitrary code execution in the hypervisor STILL cannot read/write
guest memory.

With this technique, we protect the integrity and confidentiality of
guest memory. However, a compromised hypervisor can still read/write
register state during traps, or refuse to schedule a guest, denying
service. We also recognize that because this technique precludes
modifying Xen's page tables after startup, it may not be compatible
with all of Xen's potential use cases. On the other hand, there are
some use cases (in particular statically defined embedded systems)
where our technique could be adopted with minimal friction.

With this in mind our goal is to work with the Xen community to
upstream this work as an optional feature. At this point, we have
a prototype implementation of VMF on Xen (the contents of this RFC
patch series) that supports dom0less guests on arm64. By sharing
our prototype, we hope to socialize our idea, gauge interest, and
hopefully gain useful feedback as we work toward upstreaming.

** IMPLEMENTATION **
In our current setup we have a static configuration with dom0 and
one or two domUs. Soon after boot, Dom0 issues a hypercall through
the xenctrl interface to blow the fuse for the domU. In the future,
we could also add code to support blowing the fuse automatically on
startup, before any domains are un-paused.

Our Xen/arm64 prototype creates Software Enclaves in two steps,
represented by these two functions defined in xen/vmf.h:
void vmf_unmap_guest(struct domain *d);
void vmf_lock_xen_pgtables(void);

In the first, Xen removes mappings to the guest(s). On arm64, Xen
keeps a reference to all of guest memory in the directmap. Right now,
we simply walk all of the guest second stage tables and remove them
from the directmap, although there is probably a more elegant method
for this.

Second, Xen removes mappings to its own page tables.
On arm64, this also involves manipulating the directmap. One challenge
here is that as we start to unmap our tables from the directmap,
we can't use the directmap to walk them. Our solution here is also a
bit less elegant: we temporarily insert a recursive mapping and use
that to remove page table entries.

** LIMITATIONS and other closing thoughts **
The current Xen code has obviously been implemented under the
assumption that new pages can be mapped, and that guest virtual
addresses can be read, so this technique will break some Xen
features. However, in the general case (in particular for static
workloads where the number of guests is not changed after boot)
we've seen that Xen rarely needs to access guest memory or adjust
its page tables.

We see a lot of potential synergy with other Xen initiatives like
Hyperlaunch for static domain allocation, or SEV support driving new
hypercall interfaces that don't require reading guest memory. These
features would allow VMF (Virtual Memory Fuses) to work with more
configurations and architectures than our current prototype, which
only supports static configurations on arm64.

We have not yet studied how the prototype VMF implementation impacts
performance. On the surface, there should be no significant changes.
However, cache effects from splitting the directmap superpages could
introduce a performance cost.

Additionally, walking all the tables to retroactively remove guest
memory introduces extra latency. This could be optimized
by reworking the Xen code to remove the directmap. We've toyed with
the idea, but haven't attempted it yet.

Finally, our initial testing suggests that Xen never reads guest memory
(in a static, non-dom0-enhanced configuration), but we have not explored
this thoroughly.
We know at least these things work:
	Dom0less virtual serial terminal
	Domain scheduling
We are aware that these things currently depend on accessible guest
memory:
	Some hypercalls take guest pointers as arguments
	Virtualized MMIO on arm needs to decode certain load/store
	instructions

It's likely that other Xen features require guest memory access.

Also, there is currently a lot of debug code that isn't needed for
normal operation, but assumes the ability to read guest memory or
walk page tables in an exceptional case. The Xen codebase will need
to be audited for these cases, and proper guards inserted so this
code doesn't page-fault.

Thanks for allowing us to share our work with you. We are really
excited about it, and we look forward to hearing your feedback. We
figure those working with Xen on a day-to-day basis will likely
uncover details we have overlooked.

Jackson
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago

On 13/12/2022 19:48, Smith, Jackson wrote:
> Hi Xen Developers,

Hi Jackson,

Thanks for sharing the prototype with the community. Some 
questions/remarks below.

> My team at Riverside Research is currently spending IRAD funding
> to prototype next-generation secure hypervisor design ideas
> on Xen. In particular, we are prototyping the idea of Virtual
> Memory Fuses for Software Enclaves, as described in this paper:
> https://www.nspw.org/papers/2020/nspw2020-brookes.pdf. Note that
> that paper talks about OS/Process while we have implemented the idea
> for Hypervisor/VM.
> 
> Our goal is to emulate something akin to Intel SGX or AMD SEV,
> but using only existing virtual memory features common in all
> processors. The basic idea is not to map guest memory into the
> hypervisor so that a compromised hypervisor cannot compromise
> (e.g. read/write) the guest. This idea has been proposed before,
> however, Virtual Memory Fuses go one step further; they delete the
> hypervisor's mappings to its own page tables, essentially locking
> the virtual memory configuration for the lifetime of the system. This
> creates what we call "Software Enclaves", ensuring that an adversary
> with arbitrary code execution in the hypervisor STILL cannot read/write
> guest memory.

I am confused: if the attacker is able to execute arbitrary code, then 
what prevents them from writing code to map/unmap the page?

Skimming through the paper (pages 5-6), it looks like you would need to 
implement extra defenses in Xen to prevent mapping/unmapping a page.

> 
> With this technique, we protect the integrity and confidentiality of
> guest memory. However, a compromised hypervisor can still read/write
> register state during traps, or refuse to schedule a guest, denying
> service. We also recognize that because this technique precludes
> modifying Xen's page tables after startup, it may not be compatible
> with all of Xen's potential use cases. On the other hand, there are
> some uses cases (in particular statically defined embedded systems)
> where our technique could be adopted with minimal friction.

From what you wrote, this sounds very much like the project Citrix and 
Amazon worked on called "Secret-free hypervisor", with a twist: in your 
case, you want to prevent the hypervisor from mapping/unmapping the guest 
memory.

You can find some details in [1]. The code is x86 only, but I don't see 
any major blocker to porting it to arm64.

> 
> With this in mind our goal is to work with the Xen community to
> upstream this work as an optional feature. At this point, we have
> a prototype implementation of VMF on Xen (the contents of this RFC
> patch series) that supports dom0less guests on arm 64. By sharing
> our prototype, we hope to socialize our idea, gauge interest, and
> hopefully gain useful feedback as we work toward upstreaming.
> 
> ** IMPLEMENTATION **
> In our current setup we have a static configuration with dom0 and
> one or two domUs. Soon after boot, Dom0 issues a hypercall through
> the xenctrl interface to blow the fuse for the domU. In the future,
> we could also add code to support blowing the fuse automatically on
> startup, before any domains are un-paused.
> 
> Our Xen/arm64 prototype creates Software Enclaves in two steps,
> represented by these two functions defined in xen/vmf.h:
> void vmf_unmap_guest(struct domain *d);
> void vmf_lock_xen_pgtables(void);
> 
> In the first, the Xen removes mappings to the guest(s) On arm64, Xen
> keeps a reference to all of guest memory in the directmap. Right now,
> we simply walk all of the guest second stage tables and remove them
> from the directmap, although there is probably a more elegant method
> for this.

IIUC, you first map all the RAM and then remove the pages. What you 
could do instead is map only the memory required for Xen's use. The 
rest would be left unmapped.

This would be similar to what we are doing on arm32. We have a split 
heap: only the xenheap is mapped. The pages from the domheap will be 
mapped on demand.

Another approach would be to have a single heap where pages used by Xen 
are mapped in the page-tables when allocated (this is what the 
secret-free hypervisor is doing).

If you don't want to keep the page-tables mapped, then it sounds like you 
want the first approach.

> 
> Second, the Xen removes mappings to its own page tables.
> On arm64, this also involves manipulating the directmap. One challenge
> here is that as we start to unmap our tables from the directmap,
> we can't use the directmap to walk them. Our solution here is also
> bit less elegant, we temporarily insert a recursive mapping and use
> that to remove page table entries.

See above.

> 
> ** LIMITATIONS and other closing thoughts **
> The current Xen code has obviously been implemented under the
> assumption that new pages can be mapped, and that guest virtual
> addresses can be read, so this technique will break some Xen
> features. However, in the general case

Can you clarify your definition of "general case"? From my PoV, it is a 
lot more common to have guests with PV emulated devices rather than with 
devices attached. So it will be mandatory to access part of the memory 
(e.g. the grant table).

> (in particular for static
> workloads where the number of guest's is not changed after boot)

That very much depends on how you configure your guests. If they have 
devices assigned then possibly yes. Otherwise see above.

> Finally, our initial testing suggests that Xen never reads guest memory
> (in a static, non-dom0-enchanced configuration), but have not really
> explored this thoroughly.
> We know at least these things work:
> 	Dom0less virtual serial terminal
> 	Domain scheduling
> We are aware that these things currently depend on accessible guest
> memory:
> 	Some hypercalls take guest pointers as arguments

There are not many hypercalls that don't take guest pointers.

> 	Virtualized MMIO on arm needs to decode certain load/store
> 	instructions

On Arm, this can be avoided if the guest OS is not using such 
instructions. In fact they were only added to cater for "broken" guest OSes.

Also, this will probably be a lot more difficult on x86 as, AFAIK, there 
is no instruction syndrome. So you will need to decode the instruction 
in order to emulate the access.

> 
> It's likely that other Xen features require guest memory access.

For Arm, guest memory access is also needed when using the GICv3 ITS 
and/or second-level SMMU (still in RFC).

For x86, if you don't want to access the guest memory, then you may need 
to restrict to PVH as for HVM we need to emulate some devices in QEMU. 
That said, I am not sure PVH is even feasible.

Cheers,

[1] 
https://www.youtube.com/watch?v=RKJOwIkCnB4&list=PLYyw7IQjL-zFYmEoZEYswoVuXrHvXAWxj&index=5

-- 
Julien Grall
RE: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Smith, Jackson 1 year, 11 months ago
Hi Julien,

-----Original Message-----
From: Julien Grall <julien@xen.org>
Sent: Tuesday, December 13, 2022 3:55 PM
To: Smith, Jackson <rsmith@RiversideResearch.org>
>
> On 13/12/2022 19:48, Smith, Jackson wrote:
> > Hi Xen Developers,
>
> Hi Jackson,
>
> Thanks for sharing the prototype with the community. Some
> questions/remarks below.
>
> > My team at Riverside Research is currently spending IRAD funding to
> > prototype next-generation secure hypervisor design ideas on Xen. In
> > particular, we are prototyping the idea of Virtual Memory Fuses for
> > Software Enclaves, as described in this paper:
> > https://www.nspw.org/papers/2020/nspw2020-brookes.pdf. Note
> that that
> > paper talks about OS/Process while we have implemented the idea
> for
> > Hypervisor/VM.
> >
> > Our goal is to emulate something akin to Intel SGX or AMD SEV, but
> > using only existing virtual memory features common in all
processors.
> > The basic idea is not to map guest memory into the hypervisor so
> that
> > a compromised hypervisor cannot compromise (e.g. read/write) the
> > guest. This idea has been proposed before, however, Virtual Memory
> > Fuses go one step further; they delete the hypervisor's mappings to
> > its own page tables, essentially locking the virtual memory
> > configuration for the lifetime of the system. This creates what we
> > call "Software Enclaves", ensuring that an adversary with arbitrary
> > code execution in the hypervisor STILL cannot read/write guest
> memory.
>
> I am confused, if the attacker is able to execute arbitrary code, then
> what prevent them to write code to map/unmap the page?
>
> Skimming through the paper (pages 5-6), it looks like you would need
> to implement extra defense in Xen to be able to prevent map/unmap a
> page.
>

The key piece is deleting all virtual mappings to Xen's page table
structures. From the paper (4.4.1 last paragraph), "Because all memory
accesses operate through the MMU, even page table memory needs
corresponding page table entries in order to be written to." Without a
virtual mapping to the page table, no code can modify the page table
because it cannot read or write the table. Therefore the mappings to the
guest cannot be restored even with arbitrary code execution.

> >
> > With this technique, we protect the integrity and confidentiality of
> > guest memory. However, a compromised hypervisor can still
> read/write
> > register state during traps, or refuse to schedule a guest, denying
> > service. We also recognize that because this technique precludes
> > modifying Xen's page tables after startup, it may not be compatible
> > with all of Xen's potential use cases. On the other hand, there are
> > some uses cases (in particular statically defined embedded systems)
> > where our technique could be adopted with minimal friction.
>
>  From what you wrote, this sounds very much like the project Citrix
and
> Amazon worked on called "Secret-free hypervisor" with a twist. In your
> case, you want to prevent the hypervisor to map/unmap the guest
> memory.
>
> You can find some details in [1]. The code is x86 only, but I don't
see
> any major blocker to port it on arm64.
>

Yes, we are familiar with the "secret-free hypervisor" work. As you
point out, both our work and the secret-free hypervisor remove the
directmap region to mitigate the risk of leaking sensitive guest
secrets. However, our work is slightly different because it additionally
prevents attackers from tricking Xen into remapping a guest. 

We see our goals and the secret-free hypervisor goals as orthogonal.
While the secret-free hypervisor views guests as untrusted and wants to
keep compromised guests from leaking secrets, our work comes from the
perspective of an individual guest trying to protect its secrets from
the rest of the stack. So it wouldn't be unreasonable to say "I want a
hypervisor that is 'secret-free' and implements VMF". We see them as 
different techniques with overlapping implementations.

> >
> > With this in mind our goal is to work with the Xen community to
> > upstream this work as an optional feature. At this point, we have a
> > prototype implementation of VMF on Xen (the contents of this RFC
> patch
> > series) that supports dom0less guests on arm 64. By sharing our
> > prototype, we hope to socialize our idea, gauge interest, and
> > hopefully gain useful feedback as we work toward upstreaming.
> >
> > ** IMPLEMENTATION **
> > In our current setup we have a static configuration with dom0 and
> one
> > or two domUs. Soon after boot, Dom0 issues a hypercall through the
> > xenctrl interface to blow the fuse for the domU. In the future, we
> > could also add code to support blowing the fuse automatically on
> > startup, before any domains are un-paused.
> >
> > Our Xen/arm64 prototype creates Software Enclaves in two steps,
> > represented by these two functions defined in xen/vmf.h:
> > void vmf_unmap_guest(struct domain *d); void
> > vmf_lock_xen_pgtables(void);
> >
> > In the first, the Xen removes mappings to the guest(s) On arm64, Xen
> > keeps a reference to all of guest memory in the directmap. Right
now,
> > we simply walk all of the guest second stage tables and remove them
> > from the directmap, although there is probably a more elegant
> method
> > for this.
>
> IIUC, you first map all the RAM and then remove the pages. What you
> could do instead is to map only the memory required for Xen use. The
> rest would be left unmapped.
>
> This would be similar to what we are doing on arm32. We have a split
> heap. Only the xenheap is mapped. The pages from the domheap will
> be mapped ondemand.

Yes, I think that would work. Xen can temporarily map guest memory
in the domheap when loading guests. When the system finishes booting, we
can prevent the hypervisor from mapping pages by unmapping the domheap
root tables. We could start by adding an option to enable split xenheap
on arm64.

> Another approach, would be to have a single heap where pages used
> by Xen are mapped in the page-tables when allocated (this is what
> secret-free hypervisor is doing is).
>
> If you don't map to keep the page-tables around, then it sounds like
> you want the first approach.
>
> >
> > Second, the Xen removes mappings to its own page tables.
> > On arm64, this also involves manipulating the directmap. One
> challenge
> > here is that as we start to unmap our tables from the directmap, we
> > can't use the directmap to walk them. Our solution here is also bit
> > less elegant, we temporarily insert a recursive mapping and use that
> > to remove page table entries.
>
> See above.

Using the split xenheap approach means we don't have to worry about
unmapping guest pagetables or Xen's dynamically allocated tables.

We still need to unmap the handful of static pagetables that are
declared at the top of xen/arch/arm/mm.c. Remember our goal is to
prevent Xen from reading or writing its own page tables. We can't just
unmap these static tables without shattering because they end up part of
the superpages that map the xen binary. We're probably only shattering a
single superpage for this right now. Maybe we can move the static tables
to a superpage aligned region of the binary and pad that region so we
can unmap an entire superpage without shattering? In the future we might
adjust the boot code to avoid the dependency on static page table
locations.

>
> >
> > ** LIMITATIONS and other closing thoughts ** The current Xen code
> has
> > obviously been implemented under the assumption that new pages
> can be
> > mapped, and that guest virtual addresses can be read, so this
> > technique will break some Xen features. However, in the general case
>
> Can you clarify your definition of "general case"? From my PoV, it is
a
> lot more common to have guest with PV emulated device rather than
> with device attached. So it will be mandatory to access part of the
> memory (e.g. grant table).

Yes "general case" may have been poor wording on my part. I wanted to
say that configurations exist that do not require reading guest memory,
not that this was the most common (or even a common) case.

>
> > (in particular for static
> > workloads where the number of guest's is not changed after boot)
>
> That very much depend on how you configure your guest. If they have
> device assigned then possibly yes. Otherwise see above.

Yes right now we are assuming only assigned devices, no PV or emulated
ones.

>
> > Finally, our initial testing suggests that Xen never reads guest
> > memory (in a static, non-dom0-enchanced configuration), but have
> not
> > really explored this thoroughly.
> > We know at least these things work:
> > 	Dom0less virtual serial terminal
> > 	Domain scheduling
> > We are aware that these things currently depend on accessible guest
> > memory:
> > 	Some hypercalls take guest pointers as arguments
>
> There are not many hypercalls that don't take guest pointers.
>
> > 	Virtualized MMIO on arm needs to decode certain load/store
> > 	instructions
>
> On Arm, this can be avoided of the guest OS is not using such
> instruction. In fact they were only added to cater "broken" guest OS.
>

What do you mean by "broken" guests?

I see in the Arm ARM where it discusses interpreting the syndrome
register. But I'm not understanding which instructions populate the
syndrome register and which do not. Why are guests using instructions
that don't populate the syndrome register considered "broken"? Is there
somewhere I can look to learn more?

> Also, this will probably be a lot more difficult on x86 as, AFAIK,
there
> is
> no instruction syndrome. So you will need to decode the instruction in
> order to emulate the access.
>
> >
> > It's likely that other Xen features require guest memory access.
>
> For Arm, guest memory access is also needed when using the GICv3 ITS
> and/or second-level SMMU (still in RFC).
>

Thanks for pointing this out. We will be sure to make note of these
limitations going forward.

>
> For x86, if you don't want to access the guest memory, then you may
> need to restrict to PVH as for HVM we need to emulate some devices in
> QEMU.
> That said, I am not sure PVH is even feasible.
>

Is that mostly in reference to the need to decode instructions on x86,
or are there other reasons why you feel it might not be feasible to apply 
this to Xen on x86?

Thanks for taking the time to consider our work. I think our next step
is to rethink the implementation in terms of the split xenheap design
and try to avoid the need for superpage shattering, so I'll work on
that before pushing the idea further.

Thanks,
Jackson
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago

On 15/12/2022 19:27, Smith, Jackson wrote:
> Hi Julien,

Hi Jackson,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, December 13, 2022 3:55 PM
> To: Smith, Jackson <rsmith@RiversideResearch.org>
>>
>> On 13/12/2022 19:48, Smith, Jackson wrote:
>>> Hi Xen Developers,
>>
>> Hi Jackson,
>>
>> Thanks for sharing the prototype with the community. Some
>> questions/remarks below.
>>
>>> My team at Riverside Research is currently spending IRAD funding to
>>> prototype next-generation secure hypervisor design ideas on Xen. In
>>> particular, we are prototyping the idea of Virtual Memory Fuses for
>>> Software Enclaves, as described in this paper:
>>> https://www.nspw.org/papers/2020/nspw2020-brookes.pdf. Note
>> that that
>>> paper talks about OS/Process while we have implemented the idea
>> for
>>> Hypervisor/VM.
>>>
>>> Our goal is to emulate something akin to Intel SGX or AMD SEV, but
>>> using only existing virtual memory features common in all
> processors.
>>> The basic idea is not to map guest memory into the hypervisor so
>> that
>>> a compromised hypervisor cannot compromise (e.g. read/write) the
>>> guest. This idea has been proposed before, however, Virtual Memory
>>> Fuses go one step further; they delete the hypervisor's mappings to
>>> its own page tables, essentially locking the virtual memory
>>> configuration for the lifetime of the system. This creates what we
>>> call "Software Enclaves", ensuring that an adversary with arbitrary
>>> code execution in the hypervisor STILL cannot read/write guest
>> memory.
>>
>> I am confused, if the attacker is able to execute arbitrary code, then
>> what prevent them to write code to map/unmap the page?
>>
>> Skimming through the paper (pages 5-6), it looks like you would need
>> to implement extra defense in Xen to be able to prevent map/unmap a
>> page.
>>
> 
> The key piece is deleting all virtual mappings to Xen's page table
> structures. From the paper (4.4.1 last paragraph), "Because all memory
> accesses operate through the MMU, even page table memory needs
> corresponding page table entries in order to be written to." Without a
> virtual mapping to the page table, no code can modify the page table
> because it cannot read or write the table. Therefore the mappings to the
> guest cannot be restored even with arbitrary code execution.
I don't think this is sufficient. Even if the page-tables are not part of 
the virtual mapping, an attacker could still modify TTBR0_EL2 (a system 
register holding a host physical address). So, with a bit more work, 
you can gain access to everything (see more below).

AFAICT, this problem is pointed out in the paper (section 4.4.1):

"The remaining attack vector. Unfortunately, deleting the page
table mappings does not stop the kernel from creating an entirely
new page table with the necessary mappings and switching to it
as the active context. Although this would be very difficult for
an attacker, switching to a new context with a carefully crafted
new page table structure could compromise the VMFE."

I believe this will be easier to do in Xen because the virtual layout 
is not very complex.

It would be a matter of inserting a new entry in the root table you 
control. A rough sequence would be:
    1) Allocate a page
    2) Prepare the page to act as a root (e.g. mapping of your code...)
    3) Map the "existing" root as a writable.
    4) Update TTBR0_EL2 to point to your new root
    5) Add a mapping in the "old" root
    6) Switch to the old root

So can you outline how you plan to prevent/mitigate it?

> 
>>>
>>> With this technique, we protect the integrity and confidentiality of
>>> guest memory. However, a compromised hypervisor can still
>> read/write
>>> register state during traps, or refuse to schedule a guest, denying
>>> service. We also recognize that because this technique precludes
>>> modifying Xen's page tables after startup, it may not be compatible
>>> with all of Xen's potential use cases. On the other hand, there are
>>> some uses cases (in particular statically defined embedded systems)
>>> where our technique could be adopted with minimal friction.
>>
>>   From what you wrote, this sounds very much like the project Citrix
> and
>> Amazon worked on called "Secret-free hypervisor" with a twist. In your
>> case, you want to prevent the hypervisor to map/unmap the guest
>> memory.
>>
>> You can find some details in [1]. The code is x86 only, but I don't
> see
>> any major blocker to port it on arm64.
>>
> 
> Yes, we are familiar with the "secret-free hypervisor" work. As you
> point out, both our work and the secret-free hypervisor remove the
> directmap region to mitigate the risk of leaking sensitive guest
> secrets. However, our work is slightly different because it additionally
> prevents attackers from tricking Xen into remapping a guest.

I understand your goal, but I don't think it is achieved (see above). 
You would need an entity to prevent writes to TTBR0_EL2 in order to fully 
protect it.

> 
> We see our goals and the secret-free hypervisor goals as orthogonal.
> While the secret-free hypervisor views guests as untrusted and wants to
> keep compromised guests from leaking secrets, our work comes from the
> perspective of an individual guest trying to protect its secrets from
> the rest of the stack. So it wouldn't be unreasonable to say "I want a
> hypervisor that is 'secret-free' and implements VMF". We see them as
> different techniques with overlapping implementations.

I can see why you want to divide them. But to me, if you have VMF, then 
you have a secret-free hypervisor in terms of implementation.

The major difference is how the xenheap is dealt with. At the moment, 
for the implementation we are looking to still use the same heap.

However there are a few drawbacks in terms of page usage:
   * A page can be allocated anywhere in the memory map. So you can end 
up allocating an L1 (Arm) or L3 (x86) table just for a single page
   * Contiguous pages may be allocated at different times.
   * Page-tables can be empty

x86 has some logic to handle the last two points, but Arm doesn't have 
it yet. I feel this is quite complex (in particular because of the 
break-before-make requirement).

So one solution would be to use a split heap. The trouble is that 
xenheap memory would be more "limited". That might be OK for VMF; I need 
to think a bit more for the secret-free hypervisor.

Another solution would be to use vmap() (which would not be possible 
for VMF).

> Using the split xenheap approach means we don't have to worry about
> unmapping guest pagetables or xen's dynamically allocated tables.
> 
> We still need to unmap the handful of static pagetables that are
> declared at the top of xen/arch/arm/mm.c. Remember our goal is to
> prevent Xen from reading or writing its own page tables. We can't just
> unmap these static tables without shattering because they end up part of
> the superpages that map the xen binary. We're probably only shattering a
> single superpage for this right now. Maybe we can move the static tables
> to a superpage aligned region of the binary and pad that region so we
> can unmap an entire superpage without shattering?

For static pages you don't even need to shatter superpages because Xen 
is mapped with 4KB pages.

> In the future we might
> adjust the boot code to avoid the dependency on static page table
> locations.

You will always need at least a few static page tables for initially 
switching the MMU on. Now, you could possibly allocate a new set from 
Xen's heap and then switch to it.

But I am not sure this is worth the trouble if you can easily unmap the 
static version afterwards.

>>
>>> Finally, our initial testing suggests that Xen never reads guest
>>> memory (in a static, non-dom0-enchanced configuration), but have
>> not
>>> really explored this thoroughly.
>>> We know at least these things work:
>>> 	Dom0less virtual serial terminal
>>> 	Domain scheduling
>>> We are aware that these things currently depend on accessible guest
>>> memory:
>>> 	Some hypercalls take guest pointers as arguments
>>
>> There are not many hypercalls that don't take guest pointers.
>>
>>> 	Virtualized MMIO on arm needs to decode certain load/store
>>> 	instructions
>>
>> On Arm, this can be avoided of the guest OS is not using such
>> instruction. In fact they were only added to cater "broken" guest OS.
>>
> 
> What do you mean by "broken" guests?
> 
> I see in the arm ARM where it discusses interpreting the syndrome
> register. But I'm not understanding which instructions populate the
> syndrome register and which do not. Why are guests using instructions
> that don't populate the syndrome register considered "broken"?

The short answer is that they can't be easily/safely decoded, as Xen reads 
from the data cache but the processor fetches instructions from the 
instruction cache. There are situations where the two could mismatch. For 
more details...

> Is there
> somewhere I can look to learn more?
... you can read [1], [2].


> 
>> Also, this will probably be a lot more difficult on x86 as, AFAIK,
> there
>> is
>> no instruction syndrome. So you will need to decode the instruction in
>> order to emulate the access.
>>
>>>
>>> It's likely that other Xen features require guest memory access.
>>
>> For Arm, guest memory access is also needed when using the GICv3 ITS
>> and/or second-level SMMU (still in RFC).
>>
> 
> Thanks for pointing this out. We will be sure to make note of these
> limitations going forward.
> 
>>
>> For x86, if you don't want to access the guest memory, then you may
>> need to restrict to PVH as for HVM we need to emulate some devices in
>> QEMU.
>> That said, I am not sure PVH is even feasible.
>>
> 
> Is that mostly in reference to the need decode instructions on x86 or
> are there other reasons why you feel it might not be feasible to apply
> this to Xen on x86?

I am not aware of any other. But it would probably be best to ask 
someone more knowledgeable than me on x86.

Cheers,

[1] 
https://lore.kernel.org/xen-devel/e2d041b2-3b38-f19b-2d8e-3a255b0ac07e@amd.com/
[2] 
https://lore.kernel.org/xen-devel/20211126131459.2bbc81ad@donnerap.cambridge.arm.com


-- 
Julien Grall
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Stefano Stabellini 1 year, 11 months ago
On Thu, 15 Dec 2022, Julien Grall wrote:
> > > On 13/12/2022 19:48, Smith, Jackson wrote:
> > > > Hi Xen Developers,
> > > 
> > > Hi Jackson,
> > > 
> > > Thanks for sharing the prototype with the community. Some
> > > questions/remarks below.
> > > 
> > > > My team at Riverside Research is currently spending IRAD funding to
> > > > prototype next-generation secure hypervisor design ideas on Xen. In
> > > > particular, we are prototyping the idea of Virtual Memory Fuses for
> > > > Software Enclaves, as described in this paper:
> > > > https://www.nspw.org/papers/2020/nspw2020-brookes.pdf. Note
> > > that that
> > > > paper talks about OS/Process while we have implemented the idea
> > > for
> > > > Hypervisor/VM.
> > > > 
> > > > Our goal is to emulate something akin to Intel SGX or AMD SEV, but
> > > > using only existing virtual memory features common in all
> > processors.
> > > > The basic idea is not to map guest memory into the hypervisor so
> > > that
> > > > a compromised hypervisor cannot compromise (e.g. read/write) the
> > > > guest. This idea has been proposed before, however, Virtual Memory
> > > > Fuses go one step further; they delete the hypervisor's mappings to
> > > > its own page tables, essentially locking the virtual memory
> > > > configuration for the lifetime of the system. This creates what we
> > > > call "Software Enclaves", ensuring that an adversary with arbitrary
> > > > code execution in the hypervisor STILL cannot read/write guest
> > > memory.
> > > 
> > > I am confused, if the attacker is able to execute arbitrary code, then
> > > what prevent them to write code to map/unmap the page?
> > > 
> > > Skimming through the paper (pages 5-6), it looks like you would need
> > > to implement extra defense in Xen to be able to prevent map/unmap a
> > > page.
> > > 
> > 
> > The key piece is deleting all virtual mappings to Xen's page table
> > structures. From the paper (4.4.1 last paragraph), "Because all memory
> > accesses operate through the MMU, even page table memory needs
> > corresponding page table entries in order to be written to." Without a
> > virtual mapping to the page table, no code can modify the page table
> > because it cannot read or write the table. Therefore the mappings to the
> > guest cannot be restored even with arbitrary code execution.
>
> I don't think this is sufficient. Even if the page-tables are not part of the
> virtual mapping, an attacker could still modify TTBR0_EL2 (that's a system
> register holding a host physical address). So, with a bit more work, you can
> gain access to everything (see more below).
> 
> AFAICT, this problem is pointed out in the paper (section 4.4.1):
> 
> "The remaining attack vector. Unfortunately, deleting the page
> table mappings does not stop the kernel from creating an entirely
> new page table with the necessary mappings and switching to it
> as the active context. Although this would be very difficult for
> an attacker, switching to a new context with a carefully crafted
> new page table structure could compromise the VMFE."
> 
> I believe this will be easier to do it in Xen because the virtual layout is
> not very complex.
> 
> It would be a matter of inserting a new entry in the root table you control. A
> rough sequence would be:
>    1) Allocate a page
>    2) Prepare the page to act as a root (e.g. mapping of your code...)
>    3) Map the "existing" root as writable.
>    4) Update TTBR0_EL2 to point to your new root
>    5) Add a mapping in the "old" root
>    6) Switch to the old root
> 
> So can you outline how you plan to prevent/mitigate it?

[...]

> > Yes, we are familiar with the "secret-free hypervisor" work. As you
> > point out, both our work and the secret-free hypervisor remove the
> > directmap region to mitigate the risk of leaking sensitive guest
> > secrets. However, our work is slightly different because it additionally
> > prevents attackers from tricking Xen into remapping a guest.
> 
> I understand your goal, but I don't think this is achieved (see above). You
> would need an entity to prevent write to TTBR0_EL2 in order to fully protect
> it.

Without a way to stop Xen from reading/writing TTBR0_EL2, we cannot
claim that the guest's secrets are 100% safe.

But the attacker would have to follow the sequence you outlined above to
change Xen's pagetables and remap guest memory before accessing it. It
is an additional obstacle for attackers that want to steal other guests'
secrets. The code that the attacker would need to inject into Xen would
need to be bigger and more complex.

Every little helps :-)
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago
Hi Stefano,

On 16/12/2022 01:46, Stefano Stabellini wrote:
> On Thu, 15 Dec 2022, Julien Grall wrote:
>>>> On 13/12/2022 19:48, Smith, Jackson wrote:
>>> Yes, we are familiar with the "secret-free hypervisor" work. As you
>>> point out, both our work and the secret-free hypervisor remove the
>>> directmap region to mitigate the risk of leaking sensitive guest
>>> secrets. However, our work is slightly different because it additionally
>>> prevents attackers from tricking Xen into remapping a guest.
>>
>> I understand your goal, but I don't think this is achieved (see above). You
>> would need an entity to prevent write to TTBR0_EL2 in order to fully protect
>> it.
> 
> Without a way to stop Xen from reading/writing TTBR0_EL2, we cannot
> claim that the guest's secrets are 100% safe.
> 
> But the attacker would have to follow the sequence you outlines above to
> change Xen's pagetables and remap guest memory before accessing it. It
> is an additional obstacle for attackers that want to steal other guests'
> secrets. The size of the code that the attacker would need to inject in
> Xen would need to be bigger and more complex.

Right, that's why I wrote with a bit more work. However, the nuance you 
mention doesn't seem to be present in the cover letter:

"This creates what we call "Software Enclaves", ensuring that an 
adversary with arbitrary code execution in the hypervisor STILL cannot 
read/write guest memory."

So if the end goal is really to protect against *all* sorts of arbitrary 
code, then I think we should have a rough idea of how this will look in 
Xen.

From a brief look, it doesn't look like it would be possible to prevent 
modification of TTBR0_EL2 (even from EL3). We would need to investigate 
whether there are other bits in the architecture to help us.

> 
> Every little helps :-)

I can see how making the life of the attacker more difficult is 
appealing. Yet, the goal needs to be clarified and the risk with the 
approach acknowledged (see above).

Cheers,

-- 
Julien Grall
RE: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Smith, Jackson 1 year, 11 months ago
-----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Friday, December 16, 2022 3:39 AM
>
> Hi Stefano,
>
> On 16/12/2022 01:46, Stefano Stabellini wrote:
> > On Thu, 15 Dec 2022, Julien Grall wrote:
> >>>> On 13/12/2022 19:48, Smith, Jackson wrote:
> >>> Yes, we are familiar with the "secret-free hypervisor" work. As you
> >>> point out, both our work and the secret-free hypervisor remove the
> >>> directmap region to mitigate the risk of leaking sensitive guest
> >>> secrets. However, our work is slightly different because it
> >>> additionally prevents attackers from tricking Xen into remapping a
> guest.
> >>
> >> I understand your goal, but I don't think this is achieved (see
> >> above). You would need an entity to prevent write to TTBR0_EL2 in
> >> order to fully protect it.
> >
> > Without a way to stop Xen from reading/writing TTBR0_EL2, we cannot
> > claim that the guest's secrets are 100% safe.
> >
> > But the attacker would have to follow the sequence you outlines above
> > to change Xen's pagetables and remap guest memory before accessing it.
> > It is an additional obstacle for attackers that want to steal other
> > guests' secrets. The size of the code that the attacker would need to
> > inject in Xen would need to be bigger and more complex.
>
> Right, that's why I wrote with a bit more work. However, the nuance
> you mention doesn't seem to be present in the cover letter:
>
> "This creates what we call "Software Enclaves", ensuring that an
> adversary with arbitrary code execution in the hypervisor STILL cannot
> read/write guest memory."
>
> So if the end goal if really to protect against *all* sort of arbitrary
> code, then I think we should have a rough idea how this will look like
> in Xen.
>
> From a brief look, it doesn't look like it would be possible to prevent
> modification to TTBR0_EL2 (even from EL3). We would need to
> investigate if there are other bits in the architecture to help us.
>
> >
> > Every little helps :-)
>
> I can see how making the life of the attacker more difficult is 
> appealing.
> Yet, the goal needs to be clarified and the risk with the approach
> acknowledged (see above).
>

You're right, we should have mentioned this weakness in our first email.
Sorry about the oversight! This is definitely still a limitation that we
have not yet overcome. However, we do think that the increase in
attacker workload that you and Stefano are discussing could still be
valuable to security conscious Xen users.

It would be nice to find additional architecture features that we could 
use to close this hole on Arm, but none stand out to me either.

With this limitation in mind, what are the next steps we should take to 
get this feature supported for the Xen community? Is the increase in 
attacker workload meaningful enough to justify the inclusion of VMF in 
Xen?

Thanks,
Jackson

RE: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Stefano Stabellini 1 year, 11 months ago
On Tue, 20 Dec 2022, Smith, Jackson wrote:
> > Hi Stefano,
> >
> > On 16/12/2022 01:46, Stefano Stabellini wrote:
> > > On Thu, 15 Dec 2022, Julien Grall wrote:
> > >>>> On 13/12/2022 19:48, Smith, Jackson wrote:
> > >>> Yes, we are familiar with the "secret-free hypervisor" work. As
> you
> > >>> point out, both our work and the secret-free hypervisor remove the
> > >>> directmap region to mitigate the risk of leaking sensitive guest
> > >>> secrets. However, our work is slightly different because it
> > >>> additionally prevents attackers from tricking Xen into remapping a
> > guest.
> > >>
> > >> I understand your goal, but I don't think this is achieved (see
> > >> above). You would need an entity to prevent write to TTBR0_EL2 in
> > >> order to fully protect it.
> > >
> > > Without a way to stop Xen from reading/writing TTBR0_EL2, we
> > cannot
> > > claim that the guest's secrets are 100% safe.
> > >
> > > But the attacker would have to follow the sequence you outlines
> > above
> > > to change Xen's pagetables and remap guest memory before
> > accessing it.
> > > It is an additional obstacle for attackers that want to steal other
> > guests'
> > > secrets. The size of the code that the attacker would need to inject
> > > in Xen would need to be bigger and more complex.
> >
> > Right, that's why I wrote with a bit more work. However, the nuance
> > you mention doesn't seem to be present in the cover letter:
> >
> > "This creates what we call "Software Enclaves", ensuring that an
> > adversary with arbitrary code execution in the hypervisor STILL cannot
> > read/write guest memory."
> >
> > So if the end goal if really to protect against *all* sort of
> arbitrary 
> > code,
> > then I think we should have a rough idea how this will look like in
> Xen.
> >
> >  From a brief look, it doesn't look like it would be possible to
> prevent
> > modification to TTBR0_EL2 (even from EL3). We would need to
> > investigate if there are other bits in the architecture to help us.
> >
> > >
> > > Every little helps :-)
> >
> > I can see how making the life of the attacker more difficult is 
> > appealing.
> > Yet, the goal needs to be clarified and the risk with the approach
> > acknowledged (see above).
> >
> 
> You're right, we should have mentioned this weakness in our first email.
> Sorry about the oversight! This is definitely still a limitation that we
> have not yet overcome. However, we do think that the increase in
> attacker workload that you and Stefano are discussing could still be
> valuable to security conscious Xen users.
> 
> It would nice to find additional architecture features that we can use
> to close this hole on arm, but there aren't any that stand out to me
> either.
> 
> With this limitation in mind, what are the next steps we should take to
> support this feature for the xen community? Is this increase in attacker
> workload meaningful enough to justify the inclusion of VMF in Xen?

I think it could be valuable as an additional obstacle for the attacker
to overcome. The next step would be to port your series on top of
Julien's "Remove the directmap" patch series
https://marc.info/?l=xen-devel&m=167119090721116

Julien, what do you think?
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago
Hi Stefano,

On 22/12/2022 00:38, Stefano Stabellini wrote:
> On Tue, 20 Dec 2022, Smith, Jackson wrote:
>>> Hi Stefano,
>>>
>>> On 16/12/2022 01:46, Stefano Stabellini wrote:
>>>> On Thu, 15 Dec 2022, Julien Grall wrote:
>>>>>>> On 13/12/2022 19:48, Smith, Jackson wrote:
>>>>>> Yes, we are familiar with the "secret-free hypervisor" work. As
>> you
>>>>>> point out, both our work and the secret-free hypervisor remove the
>>>>>> directmap region to mitigate the risk of leaking sensitive guest
>>>>>> secrets. However, our work is slightly different because it
>>>>>> additionally prevents attackers from tricking Xen into remapping a
>>> guest.
>>>>>
>>>>> I understand your goal, but I don't think this is achieved (see
>>>>> above). You would need an entity to prevent write to TTBR0_EL2 in
>>>>> order to fully protect it.
>>>>
>>>> Without a way to stop Xen from reading/writing TTBR0_EL2, we
>>> cannot
>>>> claim that the guest's secrets are 100% safe.
>>>>
>>>> But the attacker would have to follow the sequence you outlines
>>> above
>>>> to change Xen's pagetables and remap guest memory before
>>> accessing it.
>>>> It is an additional obstacle for attackers that want to steal other
>>> guests'
>>>> secrets. The size of the code that the attacker would need to inject
>>>> in Xen would need to be bigger and more complex.
>>>
>>> Right, that's why I wrote with a bit more work. However, the nuance
>>> you mention doesn't seem to be present in the cover letter:
>>>
>>> "This creates what we call "Software Enclaves", ensuring that an
>>> adversary with arbitrary code execution in the hypervisor STILL cannot
>>> read/write guest memory."
>>>
>>> So if the end goal if really to protect against *all* sort of
>> arbitrary
>>> code,
>>> then I think we should have a rough idea how this will look like in
>> Xen.
>>>
>>>   From a brief look, it doesn't look like it would be possible to
>> prevent
>>> modification to TTBR0_EL2 (even from EL3). We would need to
>>> investigate if there are other bits in the architecture to help us.
>>>
>>>>
>>>> Every little helps :-)
>>>
>>> I can see how making the life of the attacker more difficult is
>>> appealing.
>>> Yet, the goal needs to be clarified and the risk with the approach
>>> acknowledged (see above).
>>>
>>
>> You're right, we should have mentioned this weakness in our first email.
>> Sorry about the oversight! This is definitely still a limitation that we
>> have not yet overcome. However, we do think that the increase in
>> attacker workload that you and Stefano are discussing could still be
>> valuable to security conscious Xen users.
>>
>> It would nice to find additional architecture features that we can use
>> to close this hole on arm, but there aren't any that stand out to me
>> either.
>>
>> With this limitation in mind, what are the next steps we should take to
>> support this feature for the xen community? Is this increase in attacker
>> workload meaningful enough to justify the inclusion of VMF in Xen?
> 
> I think it could be valuable as an additional obstacle for the attacker
> to overcome. The next step would be to port your series on top of
> Julien's "Remove the directmap" patch series
> https://marc.info/?l=xen-devel&m=167119090721116
> 
> Julien, what do you think?

If we want Xen to be used in confidential compute, then we need a 
compelling story and to prove that we are at least as secure as other 
hypervisors.

So I think we need to investigate a few areas:
    * Can we protect the TTBR? I don't think this can be done with the 
HW. But maybe I overlooked it.
    * Can VMF be extended to more use-cases? For instance, for 
hypercalls, we could have a bounce buffer.
    * If we can't fully secure VMF, can the attack surface be reduced 
(e.g. disable hypercalls at runtime/compile time)? Could we use a 
different architecture (I am thinking of something like pKVM [1])?

Cheers,

[1] https://lwn.net/Articles/836693/

-- 
Julien Grall
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Demi Marie Obenour 1 year, 11 months ago
On Thu, Dec 22, 2022 at 09:52:11AM +0000, Julien Grall wrote:
> Hi Stefano,
> 
> On 22/12/2022 00:38, Stefano Stabellini wrote:
> > On Tue, 20 Dec 2022, Smith, Jackson wrote:
> > > > Hi Stefano,
> > > > 
> > > > On 16/12/2022 01:46, Stefano Stabellini wrote:
> > > > > On Thu, 15 Dec 2022, Julien Grall wrote:
> > > > > > > > On 13/12/2022 19:48, Smith, Jackson wrote:
> > > > > > > Yes, we are familiar with the "secret-free hypervisor" work. As
> > > you
> > > > > > > point out, both our work and the secret-free hypervisor remove the
> > > > > > > directmap region to mitigate the risk of leaking sensitive guest
> > > > > > > secrets. However, our work is slightly different because it
> > > > > > > additionally prevents attackers from tricking Xen into remapping a
> > > > guest.
> > > > > > 
> > > > > > I understand your goal, but I don't think this is achieved (see
> > > > > > above). You would need an entity to prevent write to TTBR0_EL2 in
> > > > > > order to fully protect it.
> > > > > 
> > > > > Without a way to stop Xen from reading/writing TTBR0_EL2, we
> > > > cannot
> > > > > claim that the guest's secrets are 100% safe.
> > > > > 
> > > > > But the attacker would have to follow the sequence you outlines
> > > > above
> > > > > to change Xen's pagetables and remap guest memory before
> > > > accessing it.
> > > > > It is an additional obstacle for attackers that want to steal other
> > > > guests'
> > > > > secrets. The size of the code that the attacker would need to inject
> > > > > in Xen would need to be bigger and more complex.
> > > > 
> > > > Right, that's why I wrote with a bit more work. However, the nuance
> > > > you mention doesn't seem to be present in the cover letter:
> > > > 
> > > > "This creates what we call "Software Enclaves", ensuring that an
> > > > adversary with arbitrary code execution in the hypervisor STILL cannot
> > > > read/write guest memory."
> > > > 
> > > > So if the end goal if really to protect against *all* sort of
> > > arbitrary
> > > > code,
> > > > then I think we should have a rough idea how this will look like in
> > > Xen.
> > > > 
> > > >   From a brief look, it doesn't look like it would be possible to
> > > prevent
> > > > modification to TTBR0_EL2 (even from EL3). We would need to
> > > > investigate if there are other bits in the architecture to help us.
> > > > 
> > > > > 
> > > > > Every little helps :-)
> > > > 
> > > > I can see how making the life of the attacker more difficult is
> > > > appealing.
> > > > Yet, the goal needs to be clarified and the risk with the approach
> > > > acknowledged (see above).
> > > > 
> > > 
> > > You're right, we should have mentioned this weakness in our first email.
> > > Sorry about the oversight! This is definitely still a limitation that we
> > > have not yet overcome. However, we do think that the increase in
> > > attacker workload that you and Stefano are discussing could still be
> > > valuable to security conscious Xen users.
> > > 
> > > It would nice to find additional architecture features that we can use
> > > to close this hole on arm, but there aren't any that stand out to me
> > > either.
> > > 
> > > With this limitation in mind, what are the next steps we should take to
> > > support this feature for the xen community? Is this increase in attacker
> > > workload meaningful enough to justify the inclusion of VMF in Xen?
> > 
> > I think it could be valuable as an additional obstacle for the attacker
> > to overcome. The next step would be to port your series on top of
> > Julien's "Remove the directmap" patch series
> > https://marc.info/?l=xen-devel&m=167119090721116
> > 
> > Julien, what do you think?
> 
> If we want Xen to be used in confidential compute, then we need a compelling
> story and prove that we are at least as secure as other hypervisors.
> 
> So I think we need to investigate a few areas:
>    * Can we protect the TTBR? I don't think this can be done with the HW.
> But maybe I overlook it.

This can be done by running most of Xen at a lower EL, and having only a
small trusted (and hopefully formally verified) kernel run at EL2.

>    * Can VMF be extended to more use-cases? For instances, for hypercalls,
> we could have bounce buffer.
>    * If we can't fully secure VMF, can the attack surface be reduced (e.g.
> disable hypercalls at runtime/compile time)? Could we use a different
> architecture (I am thinking something like pKVM [1])?
> 
> Cheers,
> 
> [1] https://lwn.net/Articles/836693/

pKVM has been formally verified already, in the form of seKVM.  So there
very much is precedent for this.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago

On 22/12/2022 10:14, Demi Marie Obenour wrote:
> On Thu, Dec 22, 2022 at 09:52:11AM +0000, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 22/12/2022 00:38, Stefano Stabellini wrote:
>>> On Tue, 20 Dec 2022, Smith, Jackson wrote:
>>>>> Hi Stefano,
>>>>>
>>>>> On 16/12/2022 01:46, Stefano Stabellini wrote:
>>>>>> On Thu, 15 Dec 2022, Julien Grall wrote:
>>>>>>>>> On 13/12/2022 19:48, Smith, Jackson wrote:
>>>>>>>> Yes, we are familiar with the "secret-free hypervisor" work. As
>>>> you
>>>>>>>> point out, both our work and the secret-free hypervisor remove the
>>>>>>>> directmap region to mitigate the risk of leaking sensitive guest
>>>>>>>> secrets. However, our work is slightly different because it
>>>>>>>> additionally prevents attackers from tricking Xen into remapping a
>>>>> guest.
>>>>>>>
>>>>>>> I understand your goal, but I don't think this is achieved (see
>>>>>>> above). You would need an entity to prevent write to TTBR0_EL2 in
>>>>>>> order to fully protect it.
>>>>>>
>>>>>> Without a way to stop Xen from reading/writing TTBR0_EL2, we
>>>>> cannot
>>>>>> claim that the guest's secrets are 100% safe.
>>>>>>
>>>>>> But the attacker would have to follow the sequence you outlines
>>>>> above
>>>>>> to change Xen's pagetables and remap guest memory before
>>>>> accessing it.
>>>>>> It is an additional obstacle for attackers that want to steal other
>>>>> guests'
>>>>>> secrets. The size of the code that the attacker would need to inject
>>>>>> in Xen would need to be bigger and more complex.
>>>>>
>>>>> Right, that's why I wrote with a bit more work. However, the nuance
>>>>> you mention doesn't seem to be present in the cover letter:
>>>>>
>>>>> "This creates what we call "Software Enclaves", ensuring that an
>>>>> adversary with arbitrary code execution in the hypervisor STILL cannot
>>>>> read/write guest memory."
>>>>>
>>>>> So if the end goal if really to protect against *all* sort of
>>>> arbitrary
>>>>> code,
>>>>> then I think we should have a rough idea how this will look like in
>>>> Xen.
>>>>>
>>>>>    From a brief look, it doesn't look like it would be possible to
>>>> prevent
>>>>> modification to TTBR0_EL2 (even from EL3). We would need to
>>>>> investigate if there are other bits in the architecture to help us.
>>>>>
>>>>>>
>>>>>> Every little helps :-)
>>>>>
>>>>> I can see how making the life of the attacker more difficult is
>>>>> appealing.
>>>>> Yet, the goal needs to be clarified and the risk with the approach
>>>>> acknowledged (see above).
>>>>>
>>>>
>>>> You're right, we should have mentioned this weakness in our first email.
>>>> Sorry about the oversight! This is definitely still a limitation that we
>>>> have not yet overcome. However, we do think that the increase in
>>>> attacker workload that you and Stefano are discussing could still be
>>>> valuable to security conscious Xen users.
>>>>
>>>> It would nice to find additional architecture features that we can use
>>>> to close this hole on arm, but there aren't any that stand out to me
>>>> either.
>>>>
>>>> With this limitation in mind, what are the next steps we should take to
>>>> support this feature for the xen community? Is this increase in attacker
>>>> workload meaningful enough to justify the inclusion of VMF in Xen?
>>>
>>> I think it could be valuable as an additional obstacle for the attacker
>>> to overcome. The next step would be to port your series on top of
>>> Julien's "Remove the directmap" patch series
>>> https://marc.info/?l=xen-devel&m=167119090721116
>>>
>>> Julien, what do you think?
>>
>> If we want Xen to be used in confidential compute, then we need a compelling
>> story and prove that we are at least as secure as other hypervisors.
>>
>> So I think we need to investigate a few areas:
>>     * Can we protect the TTBR? I don't think this can be done with the HW.
>> But maybe I overlook it.
> 
> This can be done by running most of Xen at a lower EL, and having only a
> small trusted (and hopefully formally verified) kernel run at EL2.

This is what I hinted at in my 3rd bullet. :) I didn't consider this for 
the first bullet because the goal of that question is to figure out 
whether we can leave all of Xen running in EL2 and still have the same 
guarantee.

Cheers,

-- 
Julien Grall
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Demi Marie Obenour 1 year, 11 months ago
On Thu, Dec 22, 2022 at 10:21:57AM +0000, Julien Grall wrote:
> 
> 
> On 22/12/2022 10:14, Demi Marie Obenour wrote:
> > On Thu, Dec 22, 2022 at 09:52:11AM +0000, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 22/12/2022 00:38, Stefano Stabellini wrote:
> > > > On Tue, 20 Dec 2022, Smith, Jackson wrote:
> > > > > > Hi Stefano,
> > > > > > 
> > > > > > On 16/12/2022 01:46, Stefano Stabellini wrote:
> > > > > > > On Thu, 15 Dec 2022, Julien Grall wrote:
> > > > > > > > > > On 13/12/2022 19:48, Smith, Jackson wrote:
> > > > > > > > > Yes, we are familiar with the "secret-free hypervisor" work. As
> > > > > you
> > > > > > > > > point out, both our work and the secret-free hypervisor remove the
> > > > > > > > > directmap region to mitigate the risk of leaking sensitive guest
> > > > > > > > > secrets. However, our work is slightly different because it
> > > > > > > > > additionally prevents attackers from tricking Xen into remapping a
> > > > > > guest.
> > > > > > > > 
> > > > > > > > I understand your goal, but I don't think this is achieved (see
> > > > > > > > above). You would need an entity to prevent write to TTBR0_EL2 in
> > > > > > > > order to fully protect it.
> > > > > > > 
> > > > > > > Without a way to stop Xen from reading/writing TTBR0_EL2, we
> > > > > > cannot
> > > > > > > claim that the guest's secrets are 100% safe.
> > > > > > > 
> > > > > > > But the attacker would have to follow the sequence you outlines
> > > > > > above
> > > > > > > to change Xen's pagetables and remap guest memory before
> > > > > > accessing it.
> > > > > > > It is an additional obstacle for attackers that want to steal other
> > > > > > guests'
> > > > > > > secrets. The size of the code that the attacker would need to inject
> > > > > > > in Xen would need to be bigger and more complex.
> > > > > > 
> > > > > > Right, that's why I wrote with a bit more work. However, the nuance
> > > > > > you mention doesn't seem to be present in the cover letter:
> > > > > > 
> > > > > > "This creates what we call "Software Enclaves", ensuring that an
> > > > > > adversary with arbitrary code execution in the hypervisor STILL cannot
> > > > > > read/write guest memory."
> > > > > > 
> > > > > > So if the end goal if really to protect against *all* sort of
> > > > > arbitrary
> > > > > > code,
> > > > > > then I think we should have a rough idea how this will look like in
> > > > > Xen.
> > > > > > 
> > > > > >    From a brief look, it doesn't look like it would be possible to
> > > > > prevent
> > > > > > modification to TTBR0_EL2 (even from EL3). We would need to
> > > > > > investigate if there are other bits in the architecture to help us.
> > > > > > 
> > > > > > > 
> > > > > > > Every little helps :-)
> > > > > > 
> > > > > > I can see how making the life of the attacker more difficult is
> > > > > > appealing.
> > > > > > Yet, the goal needs to be clarified and the risk with the approach
> > > > > > acknowledged (see above).
> > > > > > 
> > > > > 
> > > > > You're right, we should have mentioned this weakness in our first email.
> > > > > Sorry about the oversight! This is definitely still a limitation that we
> > > > > have not yet overcome. However, we do think that the increase in
> > > > > attacker workload that you and Stefano are discussing could still be
> > > > > valuable to security conscious Xen users.
> > > > > 
> > > > > It would nice to find additional architecture features that we can use
> > > > > to close this hole on arm, but there aren't any that stand out to me
> > > > > either.
> > > > > 
> > > > > With this limitation in mind, what are the next steps we should take to
> > > > > support this feature for the xen community? Is this increase in attacker
> > > > > workload meaningful enough to justify the inclusion of VMF in Xen?
> > > > 
> > > > I think it could be valuable as an additional obstacle for the attacker
> > > > to overcome. The next step would be to port your series on top of
> > > > Julien's "Remove the directmap" patch series
> > > > https://marc.info/?l=xen-devel&m=167119090721116
> > > > 
> > > > Julien, what do you think?
> > > 
> > > If we want Xen to be used in confidential compute, then we need a compelling
> > > story, and to prove that we are at least as secure as other hypervisors.
> > > 
> > > So I think we need to investigate a few areas:
> > >     * Can we protect the TTBR? I don't think this can be done with the HW.
> > > But maybe I overlook it.
> > 
> > This can be done by running most of Xen at a lower EL, and having only a
> > small trusted (and hopefully formally verified) kernel run at EL2.
> 
> This is what I hinted at in my 3rd bullet. :) I didn't consider this for the
> first bullet because the goal of this question is to figure out whether we
> can leave all Xen running in EL2 and still have the same guarantee.

It should be possible (see Google Native Client) but whether or not it
is useful is questionable.  I expect the complexity of the needed
compiler patches and binary-level static analysis to be greater than
that of running most of Xen at a lower exception level.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Demi Marie Obenour 1 year, 11 months ago
On Tue, Dec 20, 2022 at 10:17:24PM +0000, Smith, Jackson wrote:
> -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > Sent: Friday, December 16, 2022 3:39 AM
> >
> > [snip]
> 
> You're right, we should have mentioned this weakness in our first email.
> Sorry about the oversight! This is definitely still a limitation that we
> have not yet overcome. However, we do think that the increase in
> attacker workload that you and Stefano are discussing could still be
> valuable to security conscious Xen users.
> 
> It would be nice to find additional architecture features that we can use
> to close this hole on arm, but there aren't any that stand out to me
> either.
> 
> With this limitation in mind, what are the next steps we should take to
> support this feature for the Xen community? Is this increase in attacker
> workload meaningful enough to justify the inclusion of VMF in Xen?

Personally, I don’t think so.  The kinds of workloads VMF is usable
for (no hypercalls) are likely easily portable to other hypervisors,
including formally verified microkernels such as seL4 that provide a
significantly higher level of assurance.  seL4’s proofs do need to be
ported to each particular board, but this is fairly simple.  Conversely,
workloads that need Xen’s features cannot use VMF, so VMF again is not
suitable.

Have you considered other approaches to improving security, such as
fuzzing Xen’s hypercall interface or even using formal methods?  Those
would benefit all users of Xen, not merely a small subset who already
have alternatives available.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Stefano Stabellini 1 year, 11 months ago
On Tue, 20 Dec 2022, Demi Marie Obenour wrote:
> On Tue, Dec 20, 2022 at 10:17:24PM +0000, Smith, Jackson wrote:
> > [snip]
> 
> Personally, I don’t think so.  The kinds of workloads VMF is usable
> for (no hypercalls) are likely easily portable to other hypervisors,
> including formally verified microkernels such as seL4 that provide... 

What other hypervisors might or might not do should not be a factor in
this discussion and it would be best to leave it aside.

From an AMD/Xilinx point of view, most of our customers using Xen in
production today don't use any hypercalls in one or more of their VMs.
Xen is great for these use-cases and it is rather common in embedded.
It is certainly a different configuration from what most have come to
expect from Xen on the server/desktop x86 side. There is no question
that guests without hypercalls are important for Xen on ARM.

As a Xen community we have a long history and strong interest in making
Xen more secure and also, more recently, safer (in the ISO 26262
safety-certification sense). The VMF work is very well aligned with both
of these efforts and any additional burden to attackers is certainly
good for Xen.

Now the question is what changes are necessary and how to make them to
the codebase. And if it turns out that some of the changes are not
applicable or too complex to accept, the decision will be made purely
from a code maintenance point of view and will have nothing to do with
VMs making no hypercalls being unimportant (i.e. if we don't accept one
or more patches, it is not going to be because the use-case is
unimportant or because of what other hypervisors might or might not do).
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago
Hi Stefano,

On 22/12/2022 00:53, Stefano Stabellini wrote:
> On Tue, 20 Dec 2022, Demi Marie Obenour wrote:
>> On Tue, Dec 20, 2022 at 10:17:24PM +0000, Smith, Jackson wrote:
>>> [snip]
>>
>> Personally, I don’t think so.  The kinds of workloads VMF is usable
>> for (no hypercalls) are likely easily portable to other hypervisors,
>> including formally verified microkernels such as seL4 that provide...
> 
> What other hypervisors might or might not do should not be a factor in
> this discussion and it would be best to leave it aside.

To be honest, Demi has a point. At the moment, VMF is a very niche
use-case (see more below). So you would end up using less than 10% of
the normal Xen on Arm code. A lot of people will likely wonder why they
should use Xen in this case.

> 
>  From an AMD/Xilinx point of view, most of our customers using Xen in
> production today don't use any hypercalls in one or more of their VMs.
This suggests a mix of guests are running (some using hypercalls and
others not). That would not be possible if you were using VMF.

> Xen is great for these use-cases and it is rather common in embedded.
> It is certainly a different configuration from what most have come to
> expect from Xen on the server/desktop x86 side. There is no question
> that guests without hypercalls are important for Xen on ARM.
>
> As a Xen community we have a long history and strong interest in making
> Xen more secure and also, more recently, safer (in the ISO 26262
> safety-certification sense). The VMF work is very well aligned with both
> of these efforts and any additional burden to attackers is certainly
> good for Xen.

I agree that we have a strong focus on making Xen more secure. However, 
we also need to look at the use cases for it. As it stands, there will be no:
   - IOREQ use (don't think about emulating TPM)
   - GICv3 ITS
   - stage-1 SMMUv3
   - decoding of instructions when there is no syndrome
   - hypercalls (including event channels)
   - dom0

That's a lot of Xen features that can't be used. Effectively you will
make Xen more "secure" for very few users.

> 
> Now the question is what changes are necessary and how to make them to
> the codebase. And if it turns out that some of the changes are not
> applicable or too complex to accept, the decision will be made purely
> from a code maintenance point of view and will have nothing to do with
> VMs making no hypercalls being unimportant (i.e. if we don't accept one
> or more patches, it is not going to be because the use-case is
> unimportant or because of what other hypervisors might or might not do).
I disagree, I think this is also about use cases. On paper VMF looks
great, but so far it still has a big flaw (the TTBR can be changed)
and it would greatly restrict what you can do.

To me, if you can't secure the TTBR, then there are other ways to improve
the security of Xen for the same setup and more.

The biggest attack surface of Xen on Arm today is the hypercall
interface. So if you remove hypercall access from the guest (or even
compile hypercalls out), then there is much less chance for an attacker
to compromise Xen.

This is not exactly the same guarantee as VMF. But as I wrote before, if 
the attacker has access to Xen, then you are already doomed because you 
have to assume they can switch the TTBR.

Cheers,

-- 
Julien Grall

Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Stefano Stabellini 1 year, 11 months ago
On Thu, 22 Dec 2022, Julien Grall wrote:
> > What other hypervisors might or might not do should not be a factor in
> > this discussion and it would be best to leave it aside.
> 
> To be honest, Demi has a point. At the moment, VMF is a very niche use-case
> (see more below). So you would end up using less than 10% of the normal Xen
> on Arm code. A lot of people will likely wonder why they should use Xen in
> this case.

[...]

> >  From an AMD/Xilinx point of view, most of our customers using Xen in
> > production today don't use any hypercalls in one or more of their VMs.
> This suggests a mix of guests are running (some using hypercalls and others
> not). That would not be possible if you were using VMF.

It is true that the current limitations are very restrictive.

In embedded, we have a few pure static partitioning deployments where no
hypercalls are required (Linux is using hypercalls today but it could do
without), so maybe VMF could be enabled, but admittedly in those cases
the main focus today is safety and fault tolerance, rather than
confidential computing.


> > Xen is great for these use-cases and it is rather common in embedded.
> > It is certainly a different configuration from what most have come to
> > expect from Xen on the server/desktop x86 side. There is no question
> > that guests without hypercalls are important for Xen on ARM.
> >
> > As a Xen community we have a long history and strong interest in making
> > Xen more secure and also, more recently, safer (in the ISO 26262
> > safety-certification sense). The VMF work is very well aligned with both
> > of these efforts and any additional burden to attackers is certainly
> > good for Xen.
> 
> I agree that we have a strong focus on making Xen more secure. However, we
> also need to look at the use cases for it. As it stands, there will be no:
>   - IOREQ use (don't think about emulating TPM)
>   - GICv3 ITS
>   - stage-1 SMMUv3
>   - decoding of instructions when there is no syndrome
>   - hypercalls (including event channels)
>   - dom0
> 
> That's a lot of Xen features that can't be used. Effectively you will make
> Xen more "secure" for very few users.

Among these, the main problems affecting AMD/Xilinx users today would be:
- decoding of instructions
- hypercalls, especially event channels

Decoding of instructions would affect all our deployments. For
hypercalls, even in static partitioning deployments, sometimes event
channels are used for VM-to-VM notifications.


> > Now the question is what changes are necessary and how to make them to
> > the codebase. And if it turns out that some of the changes are not
> > applicable or too complex to accept, the decision will be made purely
> > from a code maintenance point of view and will have nothing to do with
> > VMs making no hypercalls being unimportant (i.e. if we don't accept one
> > or more patches, it is not going to be because the use-case is
> > unimportant or because of what other hypervisors might or might not do).
> I disagree, I think this is also about use cases. On paper VMF looks
> great, but so far it still has a big flaw (the TTBR can be changed) and it
> would greatly restrict what you can do.

We would need to be very clear in the commit messages and documentation
that with the current version of VMF we do *not* achieve confidential
computing and we do *not* offer protections comparable to AMD SEV. It is
still possible for Xen to access guest data; it is just a bit harder.

From an implementation perspective, if we can find a way to implement it
that would be easy to maintain, then it might still be worth it. It
would probably take only a small amount of changes on top of the "Remove
the directmap" series to make it so "map_domain_page" doesn't work
anymore after boot.

That might be worth exploring if you and Jackson agree?


One thing that would make it much more widely applicable is your idea of
hypercall bounce buffers. VMF might work with hypercalls if the guest
always uses the same buffer to pass hypercall parameters to Xen. That
one buffer could remain mapped in Xen for the lifetime of the VM and the
VM would know to use it only to pass parameters to Xen.
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 10 months ago
Hi Stefano,

On 22/12/2022 21:28, Stefano Stabellini wrote:
> On Thu, 22 Dec 2022, Julien Grall wrote:
>> [snip]
> 
> We would need to be very clear in the commit messages and documentation
> that with the current version of VMF we do *not* achieve confidential
> computing and we do *not* offer protections comparable to AMD SEV. It is
> still possible for Xen to access guest data, it is just a bit harder.
> 
>  From an implementation perspective, if we can find a way to implement it
> that would be easy to maintain, then it might still be worth it. It
> would probably take only a small amount of changes on top of the "Remove
> the directmap" series to make it so "map_domain_page" doesn't work
> anymore after boot.

None of the callers of map_domain_page() expect the function to fail. So
some treewide changes will be needed in order to deal with
map_domain_page() not working. This is not something I am willing to
accept if the only user is VMF (at the moment I can't think of any other).

So instead, we would need to come up with a way where map_domain_page() 
will never be called at runtime when VMF is in use (maybe by compiling 
out some code?). I haven't really looked in detail to say whether
that's feasible.

> 
> That might be worth exploring if you and Jackson agree?

I am OK to continue exploring it because I think some bits will still
be useful in general. As for the full solution, I will wait and see the
results before deciding whether this is something that I would be happy
to merge/maintain.

Cheers,

-- 
Julien Grall
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Demi Marie Obenour 1 year, 11 months ago
On Wed, Dec 21, 2022 at 04:53:46PM -0800, Stefano Stabellini wrote:
> On Tue, 20 Dec 2022, Demi Marie Obenour wrote:
> > On Tue, Dec 20, 2022 at 10:17:24PM +0000, Smith, Jackson wrote:
> > > [snip]
> > 
> > Personally, I don’t think so.  The kinds of workloads VMF is usable
> > for (no hypercalls) are likely easily portable to other hypervisors,
> > including formally verified microkernels such as seL4 that provide... 
> 
> What other hypervisors might or might not do should not be a factor in
> this discussion and it would be best to leave it aside.

Indeed so, sorry.

> From an AMD/Xilinx point of view, most of our customers using Xen in
> production today don't use any hypercalls in one or more of their VMs.
> Xen is great for these use-cases and it is rather common in embedded.
> It is certainly a different configuration from what most have come to
> expect from Xen on the server/desktop x86 side. There is no question
> that guests without hypercalls are important for Xen on ARM.

I was completely unaware of this.

> As a Xen community we have a long history and strong interest in making
> Xen more secure and also, more recently, safer (in the ISO 26262
> safety-certification sense). The VMF work is very well aligned with both
> of these efforts and any additional burden to attackers is certainly
> good for Xen.

That it is.

> Now the question is what changes are necessary and how to make them to
> the codebase. And if it turns out that some of the changes are not
> applicable or too complex to accept, the decision will be made purely
> from a code maintenance point of view and will have nothing to do with
> VMs making no hypercalls being unimportant (i.e. if we don't accept one
> or more patches, it is not going to be because the use-case is
> unimportant or because of what other hypervisors might or might not do).

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Demi Marie Obenour 1 year, 11 months ago
On Tue, Dec 13, 2022 at 08:55:28PM +0000, Julien Grall wrote:
> On 13/12/2022 19:48, Smith, Jackson wrote:
> > Hi Xen Developers,
> 
> Hi Jackson,
> 
> Thanks for sharing the prototype with the community. Some questions/remarks
> below.

[snip]

> > With this technique, we protect the integrity and confidentiality of
> > guest memory. However, a compromised hypervisor can still read/write
> > register state during traps, or refuse to schedule a guest, denying
> > service. We also recognize that because this technique precludes
> > modifying Xen's page tables after startup, it may not be compatible
> > with all of Xen's potential use cases. On the other hand, there are
> > some uses cases (in particular statically defined embedded systems)
> > where our technique could be adopted with minimal friction.
> 
> From what you wrote, this sounds very much like the project Citrix and
> Amazon worked on called "Secret-free hypervisor" with a twist. In your case,
> you want to prevent the hypervisor to map/unmap the guest memory.
> 
> You can find some details in [1]. The code is x86 only, but I don't see any
> major blocker to port it on arm64.

Is there any way the secret-free hypervisor code could be upstreamed?
My understanding is that it would enable guests to use SMT without
risking the host, which would be amazing.

> > 	Virtualized MMIO on arm needs to decode certain load/store
> > 	instructions
> 
> On Arm, this can be avoided of the guest OS is not using such instruction.
> In fact they were only added to cater "broken" guest OS.
> 
> Also, this will probably be a lot more difficult on x86 as, AFAIK, there is
> no instruction syndrome. So you will need to decode the instruction in order
> to emulate the access.

Is requiring the guest to emulate such instructions itself an option?
μXen, SEV-SNP, and TDX all do this.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago
Hi Demi,

On 13/12/2022 22:22, Demi Marie Obenour wrote:
> On Tue, Dec 13, 2022 at 08:55:28PM +0000, Julien Grall wrote:
>> On 13/12/2022 19:48, Smith, Jackson wrote:
>>> Hi Xen Developers,
>>
>> Hi Jackson,
>>
>> Thanks for sharing the prototype with the community. Some questions/remarks
>> below.
> 
> [snip]
> 
>>> With this technique, we protect the integrity and confidentiality of
>>> guest memory. However, a compromised hypervisor can still read/write
>>> register state during traps, or refuse to schedule a guest, denying
>>> service. We also recognize that because this technique precludes
>>> modifying Xen's page tables after startup, it may not be compatible
>>> with all of Xen's potential use cases. On the other hand, there are
>>> some uses cases (in particular statically defined embedded systems)
>>> where our technique could be adopted with minimal friction.
>>
>>  From what you wrote, this sounds very much like the project Citrix and
>> Amazon worked on called "Secret-free hypervisor" with a twist. In your case,
>> you want to prevent the hypervisor to map/unmap the guest memory.
>>
>> You can find some details in [1]. The code is x86 only, but I don't see any
>> major blocker to port it on arm64.
> 
> Is there any way the secret-free hypervisor code could be upstreamed?

I have posted a new version with also a PoC for arm64:

https://lore.kernel.org/xen-devel/20221216114853.8227-1-julien@xen.org/T/#t

For convenience, I have also pushed a branch to my personal git:

https://xenbits.xen.org/gitweb/?p=people/julieng/xen-unstable.git;a=summary

branch no-directmap-v1

Cheers,

-- 
Julien Grall
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago
Hi Demi,

On 13/12/2022 22:22, Demi Marie Obenour wrote:
> On Tue, Dec 13, 2022 at 08:55:28PM +0000, Julien Grall wrote:
>> On 13/12/2022 19:48, Smith, Jackson wrote:
>>> Hi Xen Developers,
>>
>> Hi Jackson,
>>
>> Thanks for sharing the prototype with the community. Some questions/remarks
>> below.
> 
> [snip]
> 
>>> With this technique, we protect the integrity and confidentiality of
>>> guest memory. However, a compromised hypervisor can still read/write
>>> register state during traps, or refuse to schedule a guest, denying
>>> service. We also recognize that because this technique precludes
>>> modifying Xen's page tables after startup, it may not be compatible
>>> with all of Xen's potential use cases. On the other hand, there are
>>> some uses cases (in particular statically defined embedded systems)
>>> where our technique could be adopted with minimal friction.
>>
>>  From what you wrote, this sounds very much like the project Citrix and
>> Amazon worked on called "Secret-free hypervisor" with a twist. In your case,
>> you want to prevent the hypervisor to map/unmap the guest memory.
>>
>> You can find some details in [1]. The code is x86 only, but I don't see any
>> major blocker to port it on arm64.
> 
> Is there any way the secret-free hypervisor code could be upstreamed?
This has been on my todo list for more than a year, but I haven't yet 
found anyone to finish the work.

I need to have a look at how much of the original work is left to do. 
Would you be interested in contributing?

> My understanding is that it would enable guests to use SMT without
> risking the host, which would be amazing.
> 
>>> 	Virtualized MMIO on arm needs to decode certain load/store
>>> 	instructions
>>
>> On Arm, this can be avoided of the guest OS is not using such instruction.
>> In fact they were only added to cater "broken" guest OS.
>>
>> Also, this will probably be a lot more difficult on x86 as, AFAIK, there is
>> no instruction syndrome. So you will need to decode the instruction in order
>> to emulate the access.
> 
> Is requiring the guest to emulate such instructions itself an option?
> μXen, SEV-SNP, and TDX all do this.

I am not very familiar with this, so a few questions:
  * Does this mean the OS needs to be modified?
  * What happens for emulated devices?

Cheers,

-- 
Julien Grall

Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Julien Grall 1 year, 11 months ago
Hi,

On 13/12/2022 23:05, Julien Grall wrote:
> On 13/12/2022 22:22, Demi Marie Obenour wrote:
>> On Tue, Dec 13, 2022 at 08:55:28PM +0000, Julien Grall wrote:
>>> On 13/12/2022 19:48, Smith, Jackson wrote:
>>>> Hi Xen Developers,
>>>
>>> Hi Jackson,
>>>
>>> Thanks for sharing the prototype with the community. Some 
>>> questions/remarks
>>> below.
>>
>> [snip]
>>
>>>> With this technique, we protect the integrity and confidentiality of
>>>> guest memory. However, a compromised hypervisor can still read/write
>>>> register state during traps, or refuse to schedule a guest, denying
>>>> service. We also recognize that because this technique precludes
>>>> modifying Xen's page tables after startup, it may not be compatible
>>>> with all of Xen's potential use cases. On the other hand, there are
>>>> some uses cases (in particular statically defined embedded systems)
>>>> where our technique could be adopted with minimal friction.
>>>
>>>  From what you wrote, this sounds very much like the project Citrix and
>>> Amazon worked on called "Secret-free hypervisor" with a twist. In 
>>> your case,
>>> you want to prevent the hypervisor to map/unmap the guest memory.
>>>
>>> You can find some details in [1]. The code is x86 only, but I don't 
>>> see any
>>> major blocker to port it on arm64.
>>
>> Is there any way the secret-free hypervisor code could be upstreamed?
> This has been in my todo list for more than year but didn't yet find 
> anyone to finish the work.
> 
> I need to have a look how much left the original work it is left to do. 

I have looked at the series. It looks like there are only 16 patches 
left to be reviewed.

They are two years old but the code hasn't changed much since. So I will 
look at porting them over the next few days and hopefully I can respin 
the series before Christmas.

Cheers,
-- 
Julien Grall

Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
Posted by Demi Marie Obenour 1 year, 11 months ago
On Tue, Dec 13, 2022 at 11:05:49PM +0000, Julien Grall wrote:
> Hi Demi,
> 
> On 13/12/2022 22:22, Demi Marie Obenour wrote:
> > On Tue, Dec 13, 2022 at 08:55:28PM +0000, Julien Grall wrote:
> > > On 13/12/2022 19:48, Smith, Jackson wrote:
> > > > Hi Xen Developers,
> > > 
> > > Hi Jackson,
> > > 
> > > Thanks for sharing the prototype with the community. Some questions/remarks
> > > below.
> > 
> > [snip]
> > 
> > > > With this technique, we protect the integrity and confidentiality of
> > > > guest memory. However, a compromised hypervisor can still read/write
> > > > register state during traps, or refuse to schedule a guest, denying
> > > > service. We also recognize that because this technique precludes
> > > > modifying Xen's page tables after startup, it may not be compatible
> > > > with all of Xen's potential use cases. On the other hand, there are
> > > > some uses cases (in particular statically defined embedded systems)
> > > > where our technique could be adopted with minimal friction.
> > > 
> > >  From what you wrote, this sounds very much like the project Citrix and
> > > Amazon worked on called "Secret-free hypervisor" with a twist. In your case,
> > > you want to prevent the hypervisor to map/unmap the guest memory.
> > > 
> > > You can find some details in [1]. The code is x86 only, but I don't see any
> > > major blocker to port it on arm64.
> > 
> > Is there any way the secret-free hypervisor code could be upstreamed?
> This has been in my todo list for more than year but didn't yet find anyone
> to finish the work.
> 
> I need to have a look how much left the original work it is left to do.
> Would you be interested to contribute?

That’s up to Marek.  My understanding is that it would allow guests to
use SMT if (and only if) they do not rely on any form of in-guest
sandboxing (at least as far as confidentiality is concerned).  In Qubes
OS, most guests should satisfy this criterion.  The main exceptions are
guests that run a web browser or that use the sandboxed indexing
functionality of tracker3.  In particular, Marek’s builders and other
qubes that do CPU-intensive workloads could benefit significantly.

> > My understanding is that it would enable guests to use SMT without
> > risking the host, which would be amazing.
> > 
> > > > 	Virtualized MMIO on arm needs to decode certain load/store
> > > > 	instructions
> > > 
> > > On Arm, this can be avoided of the guest OS is not using such instruction.
> > > In fact they were only added to cater "broken" guest OS.
> > > 
> > > Also, this will probably be a lot more difficult on x86 as, AFAIK, there is
> > > no instruction syndrome. So you will need to decode the instruction in order
> > > to emulate the access.
> > 
> > Is requiring the guest to emulate such instructions itself an option?
> > μXen, SEV-SNP, and TDX all do this.
> 
> 
> I am not very familiar with this. So a few questions:
>  * Does this mean the OS needs to be modified?

Any form of confidential computing requires that the OS be modified to
treat the devices (such as disk and network interfaces) that it receives
from the host as untrusted, so such modification will be needed anyway.
Therefore, this is not an obstacle.  Conversely, cases where modifying
the guest is not possible invariably consider the host to be trusted,
unless I am missing something.

In contexts where the host is trusted, and the goal is e.g. to get rid
of the hypervisor’s instruction emulator, one approach would be to
inject some emulation code into the guest that runs with guest kernel
privileges and has full read/write access to all guest memory.  The
emulation code is normally hidden by second-level page tables, so even
the guest kernel cannot observe or tamper with it.  When the hypervisor
needs to emulate an instruction, it switches to a second-level page
table in which this code and its stack are visible, the emulation logic
performs the needed emulation and returns to the hypervisor, and the
guest is never aware that anything unusual has happened.

>  * What happen for emulated device?

Using emulated devices in a setup where the emulator is not trusted
makes no sense anyway, so I don’t think this question is relevant.  The
only reason to use emulated devices is legacy compatibility, and the
legacy OSs that require them will consider them to be trusted.
Therefore, relying on emulated devices would defeat the purpose.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab