Leaving shadow setup just to the L1TF tasklet means running Dom0 on a
minimally acceptable shadow memory pool, rather than what normally
would be used (also, for example, for PVH). Populate the pool before
triggering the tasklet, on a best effort basis (again like done for
PVH).
Signed-off-by: Jan Beulich <jbeulich@suse.com>
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1298,7 +1298,7 @@ int shadow_set_allocation(struct domain
 {
     struct page_info *sp;
 
-    ASSERT(paging_locked_by_me(d));
+    ASSERT(paging_locked_by_me(d) || system_state < SYS_STATE_active);
 
     if ( pages > 0 )
     {
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -21,6 +21,7 @@
 #include <asm/page.h>
 #include <asm/pv/mm.h>
 #include <asm/setup.h>
+#include <asm/shadow.h>
 
 /* Allow ring-3 access in long mode as guest cannot use ring 1 ... */
 #define BASE_PROT (_PAGE_PRESENT|_PAGE_RW|_PAGE_ACCESSED|_PAGE_USER)
@@ -933,7 +934,17 @@ int __init dom0_construct_pv(struct doma
 #ifdef CONFIG_SHADOW_PAGING
     if ( opt_dom0_shadow )
     {
+        bool preempted;
+
         printk("Switching dom0 to using shadow paging\n");
+
+        do {
+            preempted = false;
+            shadow_set_allocation(d, dom0_paging_pages(d, nr_pages),
+                                  &preempted);
+            process_pending_softirqs();
+        } while ( preempted );
+
         tasklet_schedule(&d->arch.paging.shadow.pv_l1tf_tasklet);
     }
 #endif
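
For context, the PVH dom0 build referenced in the description uses
essentially the same best-effort loop, just through the generic paging
layer rather than calling shadow_set_allocation() directly. A rough
sketch of that pattern (from memory; the exact surrounding code in
xen/arch/x86/hvm/dom0_build.c may differ in detail):

    /* Preallocate the paging pool for dom0, retrying until no further
     * preemption is signalled, processing softirqs in between. */
    do {
        preempted = false;
        paging_set_allocation(d, dom0_paging_pages(d, nr_pages),
                              &preempted);
        process_pending_softirqs();
    } while ( preempted );
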
At 15:03 +0200 on 30 Aug (1630335824), Jan Beulich wrote:
> Leaving shadow setup just to the L1TF tasklet means running Dom0 on a
> minimally acceptable shadow memory pool, rather than what normally
> would be used (also, for example, for PVH). Populate the pool before
> triggering the tasklet, on a best effort basis (again like done for
> PVH).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

On 30/08/2021 14:03, Jan Beulich wrote:
> @@ -933,7 +934,17 @@ int __init dom0_construct_pv(struct doma
>  #ifdef CONFIG_SHADOW_PAGING
>      if ( opt_dom0_shadow )
>      {
> +        bool preempted;
> +
>          printk("Switching dom0 to using shadow paging\n");
> +
> +        do {
> +            preempted = false;
> +            shadow_set_allocation(d, dom0_paging_pages(d, nr_pages),
> +                                  &preempted);
> +            process_pending_softirqs();
> +        } while ( preempted );

This isn't correct.  The shadow pool is needed even without
opt_dom0_shadow, because some downstreams have elected not to retain
upstream's security vulnerability in default setting of opt_pv_l1tf_hwdom.

Also, dom0_paging_pages() isn't a trivial calculation, so should be
called once and cached.

~Andrew

On 31.08.2021 15:47, Andrew Cooper wrote:
> On 30/08/2021 14:03, Jan Beulich wrote:
>> @@ -933,7 +934,17 @@ int __init dom0_construct_pv(struct doma
>>  #ifdef CONFIG_SHADOW_PAGING
>>      if ( opt_dom0_shadow )
>>      {
>> +        bool preempted;
>> +
>>          printk("Switching dom0 to using shadow paging\n");
>> +
>> +        do {
>> +            preempted = false;
>> +            shadow_set_allocation(d, dom0_paging_pages(d, nr_pages),
>> +                                  &preempted);
>> +            process_pending_softirqs();
>> +        } while ( preempted );
>
> This isn't correct.  The shadow pool is needed even without
> opt_dom0_shadow, because some downstreams have elected not to retain
> upstream's security vulnerability in default setting of opt_pv_l1tf_hwdom.

Are you suggesting to set up a (perhaps large) shadow pool just in case
we need to enable shadow mode on Dom0? And all of this memory to then
remain unused in the majority of cases? Plus even if so, I'd view this
as a 2nd, independent step, largely orthogonal to the handling of
"dom0=shadow". If somebody really wanted that, I think this should be
driven by an explicit setting of the shadow pool size, indicating the
admin is willing to waste the memory.

I'm further puzzled by "not to retain upstream's security vulnerability"
- are you saying upstream is vulnerable in some way, while perhaps you
(XenServer) are not? In general I don't think I view downstream
decisions as a driving factor for what upstream does, when the result
is deliberately different behavior from upstream.

> Also, dom0_paging_pages() isn't a trivial calculation, so should be
> called once and cached.

Sure, can do that. You did notice though that all I did is take PVH's
similar code?

Jan
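
On the caching point, the reworked hunk might then look roughly like
this (a sketch only; the paging_pages local and its type are
illustrative, not taken from a posted revision):

    bool preempted;
    unsigned long paging_pages = dom0_paging_pages(d, nr_pages);

    printk("Switching dom0 to using shadow paging\n");

    do {
        preempted = false;
        /* Reuse the cached value so the sizing is computed only once. */
        shadow_set_allocation(d, paging_pages, &preempted);
        process_pending_softirqs();
    } while ( preempted );

    tasklet_schedule(&d->arch.paging.shadow.pv_l1tf_tasklet);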