Since we can't do CALL/RET until GS is restored and CR[04] pinning is
of dubious value in this code path, simply write the stored values.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
arch/x86/power/cpu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -208,11 +208,11 @@ static void notrace __restore_processor_
 #else
 /* CONFIG X86_64 */
 	native_wrmsrl(MSR_EFER, ctxt->efer);
-	native_write_cr4(ctxt->cr4);
+	asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
 #endif
 	native_write_cr3(ctxt->cr3);
 	native_write_cr2(ctxt->cr2);
-	native_write_cr0(ctxt->cr0);
+	asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
 
 	/* Restore the IDT. */
 	native_load_idt(&ctxt->idt);
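
For context, native_write_cr4(), which the hunk above replaces with a raw
MOV, is a regular function: reaching it needs a working CALL/RET, and its
pinning/WARN path needs even more kernel state. A simplified sketch of it
(modeled on arch/x86/kernel/cpu/common.c; abridged, details may differ):

void native_write_cr4(unsigned long val)
{
	unsigned long bits_changed = 0;

set_register:
	/* The actual register write, same as the open-coded MOV above. */
	asm volatile("mov %0,%%cr4" : "+r" (val) : : "memory");

	if (static_branch_likely(&cr_pinning)) {
		if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
			/* Force the pinned bits back on and rewrite CR4. */
			bits_changed = (val & cr4_pinned_bits) ^ cr4_pinned_bits;
			val |= cr4_pinned_bits;
			goto set_register;
		}
		/* Warn only after the register has been corrected. */
		WARN_ONCE(bits_changed, "pinned CR4 bits changed: 0x%lx!?\n",
			  bits_changed);
	}
}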
* Peter Zijlstra <peterz@infradead.org> wrote:
> Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> of dubious value in this code path, simply write the stored values.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> arch/x86/power/cpu.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> --- a/arch/x86/power/cpu.c
> +++ b/arch/x86/power/cpu.c
> @@ -208,11 +208,11 @@ static void notrace __restore_processor_
> #else
> /* CONFIG X86_64 */
> native_wrmsrl(MSR_EFER, ctxt->efer);
> - native_write_cr4(ctxt->cr4);
> + asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
> #endif
> native_write_cr3(ctxt->cr3);
> native_write_cr2(ctxt->cr2);
> - native_write_cr0(ctxt->cr0);
> + asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
Yeah, so CR pinning protects against easily accessible 'gadget'
functions that exploits can call to disable HW protection features in the
CR register.
__restore_processor_state() might be such a gadget if an exploit can pass
in a well-prepared 'struct saved_context' on the stack.
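
A purely illustrative sketch of that worry (hypothetical attack code, not
from any posted patch):

/*
 * Hypothetical gadget use: with the raw MOVs above, nothing re-asserts
 * the pinned bits, so a crafted context clears the HW protections.
 */
struct saved_context evil = { };

evil.cr0 = native_read_cr0() & ~X86_CR0_WP;	/* kernel text writable */
evil.cr4 = native_read_cr4() & ~(X86_CR4_SMEP | X86_CR4_SMAP);
/* ... remaining fields filled in well enough to survive the call ... */

/* hijacked indirect call, e.g. via a corrupted function pointer: */
__restore_processor_state(&evil);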
Can we set up cr0/cr4 after we have a proper GS, or is that a
chicken-and-egg scenario?
Thanks,
Ingo
On Fri, Jan 13, 2023 at 02:16:44PM +0100, Ingo Molnar wrote:
>
> * Peter Zijlstra <peterz@infradead.org> wrote:
>
> > Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> > of dubious value in this code path, simply write the stored values.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> > arch/x86/power/cpu.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > --- a/arch/x86/power/cpu.c
> > +++ b/arch/x86/power/cpu.c
> > @@ -208,11 +208,11 @@ static void notrace __restore_processor_
> > #else
> > /* CONFIG X86_64 */
> > native_wrmsrl(MSR_EFER, ctxt->efer);
> > - native_write_cr4(ctxt->cr4);
> > + asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
>
> > #endif
> > native_write_cr3(ctxt->cr3);
> > native_write_cr2(ctxt->cr2);
> > - native_write_cr0(ctxt->cr0);
> > + asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
>
> Yeah, so CR pinning protects against easily accessible 'gadget'
> functions that exploits can call to disable HW protection features in the
> CR register.
>
> __restore_processor_state() might be such a gadget if an exploit can pass
> in a well-prepared 'struct saved_context' on the stack.
Given the extent of saved_context, I think it's a total loss. The best we
can do is something like the last patch here, which disallows indirect
calls to this function entirely (on appropriate builds/hardware).
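
Presumably something along these lines (an assumption about the mechanism,
since the referenced patch isn't quoted here): with CONFIG_X86_KERNEL_IBT,
a function that lacks the ENDBR landing pad cannot be the target of an
indirect call, so annotating it suffices:

/*
 * Sketch only -- assuming the patch relies on IBT: without ENDBR, any
 * indirect CALL/JMP into this function raises #CP on IBT-capable
 * hardware; direct calls keep working.
 */
static void notrace __noendbr __restore_processor_state(struct saved_context *ctxt)
{
	...
}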
> Can we set up cr0/cr4 after we have a proper GS, or is that a
> chicken-and-egg scenario?
Can be done, but given the state we're in, I'd rather have the simplest
possible rules; calling out to functions with dodgy CR[04] is
'suboptimal' as well.
If people really worry about this I suppose we can call the full
native_write_cr4() later to double check the value in the context or
something.
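
In sketch form (an assumption about what that follow-up would look like,
not a posted patch):

	/*
	 * After GS is restored and CALL/RET are safe again, go back
	 * through the pinning-aware helpers so a context with cleared
	 * WP/SMEP/SMAP bits gets corrected and warned about.
	 */
	native_write_cr4(ctxt->cr4);
	native_write_cr0(ctxt->cr0);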
On Thu, Jan 12, 2023 at 03:31:43PM +0100, Peter Zijlstra wrote:
> Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> of dubious value in this code path, simply write the stored values.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Kees Cook <keescook@chromium.org>

--
Kees Cook