[PATCH] x86/startup: Let the caller retrieve encryption mask

Posted by Khalid Ali 3 months, 3 weeks ago
From: Khalid Ali <khaliidcaliy@gmail.com>

Don't return the encryption mask from __startup_64(); let it do the
actual work itself, including encrypting the kernel. The caller
already has access to the encryption mask and can retrieve it
directly: C code can include arch/x86/include/asm/mem_encrypt.h and
call sme_get_me_mask(), while assembly code such as startup_64 can
read sme_me_mask directly when CONFIG_AMD_MEM_ENCRYPT is set. This
also makes startup_64 consistent with the way the
secondary_startup_64_no_verify label is handled. On Intel CPUs the
mask is not needed at all, so retrieve it only if
CONFIG_AMD_MEM_ENCRYPT is set.

Signed-off-by: Khalid Ali <khaliidcaliy@gmail.com>
---
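For reference, a minimal sketch of the C-side idiom after this change
(assuming the caller can include the header; sme_get_me_mask() already
returns 0 when SME is inactive or CONFIG_AMD_MEM_ENCRYPT is not set):

	#include <asm/mem_encrypt.h>	/* sme_get_me_mask() */

	/* CR3 modifier the caller now derives itself: the SME mask, or 0 */
	unsigned long cr3_modifier = sme_get_me_mask();

Assembly callers such as startup_64 read sme_me_mask(%rip) directly
under #ifdef CONFIG_AMD_MEM_ENCRYPT, as in the head_64.S hunk below.
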
 arch/x86/boot/startup/map_kernel.c | 11 +++--------
 arch/x86/include/asm/setup.h       |  2 +-
 arch/x86/kernel/head_64.S          | 12 +++++-------
 3 files changed, 9 insertions(+), 16 deletions(-)

diff --git a/arch/x86/boot/startup/map_kernel.c b/arch/x86/boot/startup/map_kernel.c
index 332dbe6688c4..6fdb340e9147 100644
--- a/arch/x86/boot/startup/map_kernel.c
+++ b/arch/x86/boot/startup/map_kernel.c
@@ -30,7 +30,7 @@ static inline bool check_la57_support(void)
 	return true;
 }
 
-static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
+static void __head sme_postprocess_startup(struct boot_params *bp,
 						    pmdval_t *pmd,
 						    unsigned long p2v_offset)
 {
@@ -68,11 +68,6 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
 		}
 	}
 
-	/*
-	 * Return the SME encryption mask (if SME is active) to be used as a
-	 * modifier for the initial pgdir entry programmed into CR3.
-	 */
-	return sme_get_me_mask();
 }
 
 /*
@@ -84,7 +79,7 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
  * the 1:1 mapping of memory. Kernel virtual addresses can be determined by
  * subtracting p2v_offset from the RIP-relative address.
  */
-unsigned long __head __startup_64(unsigned long p2v_offset,
+void __head __startup_64(unsigned long p2v_offset,
 				  struct boot_params *bp)
 {
 	pmd_t (*early_pgts)[PTRS_PER_PMD] = rip_rel_ptr(early_dynamic_pgts);
@@ -213,5 +208,5 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
 	for (; i < PTRS_PER_PMD; i++)
 		pmd[i] &= ~_PAGE_PRESENT;
 
-	return sme_postprocess_startup(bp, pmd, p2v_offset);
+	sme_postprocess_startup(bp, pmd, p2v_offset);
 }
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 692af46603a1..29ea24bb85ff 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -50,7 +50,7 @@ extern unsigned long acpi_realmode_flags;
 
 extern void reserve_standard_io_resources(void);
 extern void i386_reserve_resources(void);
-extern unsigned long __startup_64(unsigned long p2v_offset, struct boot_params *bp);
+extern void __startup_64(unsigned long p2v_offset, struct boot_params *bp);
 extern void startup_64_setup_gdt_idt(void);
 extern void startup_64_load_idt(void *vc_handler);
 extern void early_setup_idt(void);
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 3e9b3a3bd039..8b50bdd90927 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -106,18 +106,16 @@ SYM_CODE_START_NOALIGN(startup_64)
 
 	/*
 	 * Perform pagetable fixups. Additionally, if SME is active, encrypt
-	 * the kernel and retrieve the modifier (SME encryption mask if SME
-	 * is active) to be added to the initial pgdir entry that will be
-	 * programmed into CR3.
-	 */
+	 * the kernel.
+	 /
 	movq	%r15, %rsi
 	call	__startup_64
 
 	/* Form the CR3 value being sure to include the CR3 modifier */
-	leaq	early_top_pgt(%rip), %rcx
-	addq	%rcx, %rax
-
+	leaq	early_top_pgt(%rip), %rax
+	
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+	addq	sme_me_mask(%rip), %rax
 	mov	%rax, %rdi
 
 	/*
-- 
2.49.0
Re: [PATCH] x86/startup: Let the caller retrieve encryption mask
Posted by kernel test robot 3 months, 3 weeks ago
Hi Khalid,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/core]
[also build test WARNING on tip/master linus/master v6.16-rc2 next-20250616]
[cannot apply to tip/auto-latest]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Khalid-Ali/x86-startup-Let-the-caller-retrieve-encryption-mask/20250616-204024
base:   tip/x86/core
patch link:    https://lore.kernel.org/r/20250616123605.927-1-khaliidcaliy%40gmail.com
patch subject: [PATCH] x86/startup: Let the caller retrieve encryption mask
config: x86_64-buildonly-randconfig-001-20250617 (https://download.01.org/0day-ci/archive/20250617/202506171012.Ji3c5sJh-lkp@intel.com/config)
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250617/202506171012.Ji3c5sJh-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506171012.Ji3c5sJh-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> arch/x86/kernel/head_64.S:114:2: warning: '/*' within block comment [-Wcomment]
     114 |         /* Form the CR3 value being sure to include the CR3 modifier */
         |         ^
   1 warning generated.
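
The warning comes from the truncated comment terminator in the hunk
above: the closing "*/" was replaced with a bare "/", so the block
comment opened before "movq %r15, %rsi" is never closed and swallows
the following "/* Form the CR3 value ... */" line. Presumably the
comment was meant to be closed as:

	/*
	 * Perform pagetable fixups. Additionally, if SME is active, encrypt
	 * the kernel.
	 */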


vim +114 arch/x86/kernel/head_64.S

bcce829083339b arch/x86/kernel/head_64.S Michael Roth          2022-02-09   96  
04633df0c43d71 arch/x86/kernel/head_64.S Borislav Petkov       2015-11-05   97  	/* Sanitize CPU configuration */
04633df0c43d71 arch/x86/kernel/head_64.S Borislav Petkov       2015-11-05   98  	call verify_cpu
04633df0c43d71 arch/x86/kernel/head_64.S Borislav Petkov       2015-11-05   99  
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  100  	/*
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  101  	 * Derive the kernel's physical-to-virtual offset from the physical and
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  102  	 * virtual addresses of common_startup_64().
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  103  	 */
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  104  	leaq	common_startup_64(%rip), %rdi
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  105  	subq	.Lcommon_startup_64(%rip), %rdi
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  106  
5868f3651fa0df arch/x86/kernel/head_64.S Tom Lendacky          2017-07-17  107  	/*
5868f3651fa0df arch/x86/kernel/head_64.S Tom Lendacky          2017-07-17  108  	 * Perform pagetable fixups. Additionally, if SME is active, encrypt
8b4b3620a6b25c arch/x86/kernel/head_64.S Khalid Ali            2025-06-16  109  	 * the kernel.
8b4b3620a6b25c arch/x86/kernel/head_64.S Khalid Ali            2025-06-16  110  	 /
2f69a81ad68732 arch/x86/kernel/head_64.S Ard Biesheuvel        2023-08-07  111  	movq	%r15, %rsi
c88d71508e36b5 arch/x86/kernel/head_64.S Kirill A. Shutemov    2017-06-06  112  	call	__startup_64
^1da177e4c3f41 arch/x86_64/kernel/head.S Linus Torvalds        2005-04-16  113  
5868f3651fa0df arch/x86/kernel/head_64.S Tom Lendacky          2017-07-17 @114  	/* Form the CR3 value being sure to include the CR3 modifier */
8b4b3620a6b25c arch/x86/kernel/head_64.S Khalid Ali            2025-06-16  115  	leaq	early_top_pgt(%rip), %rax
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  116) 	
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  117) #ifdef CONFIG_AMD_MEM_ENCRYPT
8b4b3620a6b25c arch/x86/kernel/head_64.S Khalid Ali            2025-06-16  118  	addq	sme_me_mask(%rip), %rax
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  119) 	mov	%rax, %rdi
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  120) 
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  121) 	/*
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  122) 	 * For SEV guests: Verify that the C-bit is correct. A malicious
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  123) 	 * hypervisor could lie about the C-bit position to perform a ROP
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  124) 	 * attack on the guest by writing to the unencrypted stack and wait for
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  125) 	 * the next RET instruction.
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  126) 	 */
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  127) 	call	sev_verify_cbit
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  128) #endif
30579c8baa5b4b arch/x86/kernel/head_64.S Borislav Petkov (AMD  2023-11-30  129) 
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  130  	/*
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  131  	 * Switch to early_top_pgt which still has the identity mappings
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  132  	 * present.
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  133  	 */
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  134  	movq	%rax, %cr3
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  135  
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  136  	/* Branch to the common startup code at its kernel virtual address */
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  137  	ANNOTATE_RETPOLINE_SAFE
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  138  	jmp	*.Lcommon_startup_64(%rip)
37818afd15fe72 arch/x86/kernel/head_64.S Jiri Slaby            2019-10-11  139  SYM_CODE_END(startup_64)
37818afd15fe72 arch/x86/kernel/head_64.S Jiri Slaby            2019-10-11  140  
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  141  	__INITRODATA
093562198e1a63 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-12-05  142  SYM_DATA_LOCAL(.Lcommon_startup_64, .quad common_startup_64)
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  143  
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  144  	.text
bc7b11c04ee9c9 arch/x86/kernel/head_64.S Jiri Slaby            2019-10-11  145  SYM_CODE_START(secondary_startup_64)
fb799447ae2974 arch/x86/kernel/head_64.S Josh Poimboeuf        2023-03-01  146  	UNWIND_HINT_END_OF_STACK
3e3f069504344c arch/x86/kernel/head_64.S Peter Zijlstra        2022-03-08  147  	ANNOTATE_NOENDBR
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  148  	/*
1256276c98dbcf arch/x86/kernel/head_64.S Konrad Rzeszutek Wilk 2013-02-25  149  	 * At this point the CPU runs in 64bit mode CS.L = 1 CS.D = 0,
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  150  	 * and someone has loaded a mapped page table.
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  151  	 *
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  152  	 * We come here either from startup_64 (using physical addresses)
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  153  	 * or from trampoline.S (using virtual addresses).
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  154  	 *
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  155  	 * Using virtual addresses from trampoline.S removes the need
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  156  	 * to have any identity mapped pages in the kernel page table
1ab60e0f72f71e arch/x86_64/kernel/head.S Vivek Goyal           2007-05-02  157  	 * after the boot processor executes this code.
^1da177e4c3f41 arch/x86_64/kernel/head.S Linus Torvalds        2005-04-16  158  	 */
^1da177e4c3f41 arch/x86_64/kernel/head.S Linus Torvalds        2005-04-16  159  
04633df0c43d71 arch/x86/kernel/head_64.S Borislav Petkov       2015-11-05  160  	/* Sanitize CPU configuration */
04633df0c43d71 arch/x86/kernel/head_64.S Borislav Petkov       2015-11-05  161  	call verify_cpu
04633df0c43d71 arch/x86/kernel/head_64.S Borislav Petkov       2015-11-05  162  
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  163  	/*
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  164  	 * The secondary_startup_64_no_verify entry point is only used by
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  165  	 * SEV-ES guests. In those guests the call to verify_cpu() would cause
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  166  	 * #VC exceptions which can not be handled at this stage of secondary
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  167  	 * CPU bringup.
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  168  	 *
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  169  	 * All non SEV-ES systems, especially Intel systems, need to execute
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  170  	 * verify_cpu() above to make sure NX is enabled.
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  171  	 */
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  172  SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
fb799447ae2974 arch/x86/kernel/head_64.S Josh Poimboeuf        2023-03-01  173  	UNWIND_HINT_END_OF_STACK
3e3f069504344c arch/x86/kernel/head_64.S Peter Zijlstra        2022-03-08  174  	ANNOTATE_NOENDBR
3ecacdbd23956a arch/x86/kernel/head_64.S Joerg Roedel          2020-09-07  175  
2f69a81ad68732 arch/x86/kernel/head_64.S Ard Biesheuvel        2023-08-07  176  	/* Clear %R15 which holds the boot_params pointer on the boot CPU */
721f791ce1cddf arch/x86/kernel/head_64.S Uros Bizjak           2024-01-24  177  	xorl	%r15d, %r15d
2f69a81ad68732 arch/x86/kernel/head_64.S Ard Biesheuvel        2023-08-07  178  
d6a41f184dcea0 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  179  	/* Derive the runtime physical address of init_top_pgt[] */
d6a41f184dcea0 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  180  	movq	phys_base(%rip), %rax
d6a41f184dcea0 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  181  	addq	$(init_top_pgt - __START_KERNEL_map), %rax
d6a41f184dcea0 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  182  
5868f3651fa0df arch/x86/kernel/head_64.S Tom Lendacky          2017-07-17  183  	/*
5868f3651fa0df arch/x86/kernel/head_64.S Tom Lendacky          2017-07-17  184  	 * Retrieve the modifier (SME encryption mask if SME is active) to be
5868f3651fa0df arch/x86/kernel/head_64.S Tom Lendacky          2017-07-17  185  	 * added to the initial pgdir entry that will be programmed into CR3.
5868f3651fa0df arch/x86/kernel/head_64.S Tom Lendacky          2017-07-17  186  	 */
469693d8f62299 arch/x86/kernel/head_64.S Michael Roth          2022-02-09  187  #ifdef CONFIG_AMD_MEM_ENCRYPT
d6a41f184dcea0 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  188  	addq	sme_me_mask(%rip), %rax
469693d8f62299 arch/x86/kernel/head_64.S Michael Roth          2022-02-09  189  #endif
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  190  	/*
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  191  	 * Switch to the init_top_pgt here, away from the trampoline_pgd and
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  192  	 * unmap the identity mapped ranges.
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  193  	 */
828263957611c2 arch/x86/kernel/head_64.S Ard Biesheuvel        2024-02-27  194  	movq	%rax, %cr3
5868f3651fa0df arch/x86/kernel/head_64.S Tom Lendacky          2017-07-17  195  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Re: [PATCH] x86/startup: Let the caller retrieve encryption mask
Posted by Borislav Petkov 3 months, 3 weeks ago
On Mon, Jun 16, 2025 at 12:34:13PM +0000, Khalid Ali wrote:
> From: Khalid Ali <khaliidcaliy@gmail.com>
> 
> Don't return the encryption mask from __startup_64(); let it do the
> actual work itself, including encrypting the kernel. The caller
> already has access to the encryption mask and can retrieve it
> directly: C code can include arch/x86/include/asm/mem_encrypt.h and
> call sme_get_me_mask(), while assembly code such as startup_64 can
> read sme_me_mask directly when CONFIG_AMD_MEM_ENCRYPT is set. This
> also makes startup_64 consistent with the way the
> secondary_startup_64_no_verify label is handled. On Intel CPUs the
> mask is not needed at all, so retrieve it only if
> CONFIG_AMD_MEM_ENCRYPT is set.

What Thomas told you about structuring commit messages:

https://lore.kernel.org/r/875xgziprs.ffs@tglx

Do it please and then send patches.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Re: [PATCH] x86/startup: Let the caller retrieve encryption mask
Posted by Khalid Ali 3 months, 3 weeks ago
> What Thomas told you about structuring commit messages:
>
> https://lore.kernel.org/r/875xgziprs.ffs@tglx
>
> Do it please and then send patches.
>
> Thx.

I apologize; I was aware of it, but English isn't my native language.
I was following the doc as best I could. I have already sent v2.

Thanks
Khalid Ali