1 | Introduction | 1 | Introduction |
---|---|---|---|
2 | ------------ | 2 | ------------ |
3 | 3 | ||
4 | Secure AVIC is a new hardware feature in the AMD64 architecture to | 4 | Secure AVIC is a new hardware feature in the AMD64 architecture to |
5 | allow SEV-SNP guests to prevent hypervisor from generating unexpected | 5 | allow SEV-SNP guests to prevent the hypervisor from generating |
6 | interrupts to a vCPU or otherwise violate architectural assumptions | 6 | unexpected interrupts to a vCPU or otherwise violating architectural |
7 | around APIC behavior. | 7 | assumptions around APIC behavior. |
8 | 8 | ||
9 | One of the significant differences from AVIC or emulated x2APIC is that | 9 | One of the significant differences from AVIC or emulated x2APIC is that |
10 | Secure AVIC uses a guest-owned and managed APIC backing page. It also | 10 | Secure AVIC uses a guest-owned and managed APIC backing page. It also |
11 | introduces additional fields in both the VMCB and the Secure AVIC backing | 11 | introduces additional fields in both the VMCB and the Secure AVIC backing |
12 | page to aid the guest in limiting which interrupt vectors can be injected | 12 | page to aid the guest in limiting which interrupt vectors can be injected |
... | ... | ||
40 | the guest APIC backing page which can be modified directly by the | 40 | the guest APIC backing page which can be modified directly by the |
41 | guest: | 41 | guest: |
42 | 42 | ||
43 | a. ALLOWED_IRR | 43 | a. ALLOWED_IRR |
44 | 44 | ||
45 | ALLOWED_IRR vector indicates the interrupt vectors which the guest | 45 | ALLOWED_IRR reg offset indicates the interrupt vectors which the guest |
46 | allows the hypervisor to send. The combination of host-controlled | 46 | allows the hypervisor to send. The combination of host-controlled |
47 | REQUESTED_IRR vectors (part of VMCB) and ALLOWED_IRR is used by | 47 | REQUESTED_IRR vectors (part of VMCB) and ALLOWED_IRR is used by |
48 | hardware to update the IRR vectors of the Guest APIC backing page. | 48 | hardware to update the IRR vectors of the Guest APIC backing page. |
49 | 49 | ||
50 | #Offset #bits Description | 50 | #Offset #bits Description |
... | ... | ||
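To make the gating concrete, here is a minimal C sketch of the IRR update described above (a conceptual model of the hardware behavior, not code from this series; the helper name and the flat 8 x 32-bit layout are illustrative):

    /*
     * Conceptual model: a hypervisor-requested vector becomes pending in
     * the guest IRR only if the guest has set the matching ALLOWED_IRR
     * bit. Eight 32-bit banks cover vectors 0-255.
     */
    static void hw_update_irr(u32 irr[8], const u32 requested_irr[8],
                              const u32 allowed_irr[8])
    {
        int i;

        for (i = 0; i < 8; i++)
            irr[i] |= requested_irr[i] & allowed_irr[i];
    }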
61 | b. NMI Request | 61 | b. NMI Request |
62 | 62 | ||
63 | #Offset #bits Description | 63 | #Offset #bits Description |
64 | 278h 0 Set by Guest to request Virtual NMI | 64 | 278h 0 Set by Guest to request Virtual NMI |
65 | 65 | ||
66 | The guest needs to set the NMI Request register to allow the hypervisor | ||
67 | to inject a vNMI to it. | ||
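For example, opting in to hypervisor-injected vNMIs amounts to a single write of bit 0 at offset 278h of the backing page. A hedged sketch (set_reg() is the backing-page accessor added later in this series; the wrapper name is hypothetical):

    #define SAVIC_NMI_REQ 0x278 /* NMI Request register, bit 0 */

    /* Sketch: allow the hypervisor to inject a vNMI to this vCPU. */
    static void savic_request_nmi(void)
    {
        set_reg(SAVIC_NMI_REQ, 1);
    }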
66 | 68 | ||
67 | LAPIC Timer Support | 69 | LAPIC Timer Support |
68 | ------------------- | 70 | ------------------- |
69 | LAPIC timer is emulated by hypervisor. So, the APIC_LVTT, APIC_TMICT, | 71 | LAPIC timer is emulated by the hypervisor. So, the APIC_LVTT, APIC_TMICT, |
70 | APIC_TDCR and APIC_TMCCT APIC registers are not read from/written to the guest | 72 | APIC_TDCR and APIC_TMCCT APIC registers are not read from/written to the guest |
71 | APIC backing page and are communicated to the hypervisor using SVM_EXIT_MSR | 73 | APIC backing page and are communicated to the hypervisor using SVM_EXIT_MSR |
72 | VMGEXIT. | 74 | VMGEXIT. |
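A sketch of the resulting write-path split (the routing follows the description above; set_reg() is the backing-page accessor from this series, the wrapper name is hypothetical, and native_apic_msr_write() is the existing x2APIC MSR write that is intercepted and completed by the #VC handler via the SVM_EXIT_MSR VMGEXIT):

    /*
     * Sketch: timer registers are forwarded to the hypervisor; everything
     * else is serviced from the guest APIC backing page.
     */
    static void savic_reg_write(u32 reg, u32 val)
    {
        switch (reg) {
        case APIC_LVTT:
        case APIC_TMICT:
        case APIC_TDCR:
            native_apic_msr_write(reg, val); /* emulated timer -> hypervisor */
            break;
        default:
            set_reg(reg, val); /* guest APIC backing page */
        }
    }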
73 | 75 | ||
74 | IPI Support | 76 | IPI Support |
75 | ----------- | 77 | ----------- |
76 | Only SELF_IPI is accelerated by Secure AVIC hardware. Other IPIs require | 78 | Only SELF_IPI is accelerated by Secure AVIC hardware. Other IPIs require |
77 | writing (from the Secure AVIC driver) to the IRR vector of the target CPU | 79 | writing (from the Secure AVIC driver) to the IRR vector of the target CPU |
78 | backing page and then issuing VMGEXIT for the hypervisor to notify the | 80 | backing page and then issuing VMGEXIT for the hypervisor to notify the |
79 | target vCPU. | 81 | target vCPU. |
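Roughly, one cross-vCPU IPI then looks as below (a sketch assuming the per-CPU apic_page allocation from later in this series; APIC_IRR banks are 32 bits each, spaced 16 bytes apart, and the hypervisor notification is left as a placeholder rather than a concrete GHCB call):

    /*
     * Sketch: mark the vector pending in the target CPU's backing page,
     * then ask the hypervisor to notify the target vCPU.
     */
    static void savic_send_cross_ipi(unsigned int cpu, unsigned int vector)
    {
        struct apic_page *ap = per_cpu_ptr(apic_page, cpu);
        unsigned int offset = APIC_IRR + (vector / 32) * 0x10;

        set_bit(vector % 32, (unsigned long *)&ap->regs[offset >> 2]);

        /* VMGEXIT here so the hypervisor kicks the target vCPU */
    }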
80 | 82 | ||
81 | Driver Implementation Open Points | 83 | KEXEC Support |
82 | --------------------------------- | 84 | ------------- |
85 | A Secure AVIC enabled guest can kexec to another kernel which also has | ||
86 | Secure AVIC enabled, as the hypervisor keeps the Secure AVIC feature bit set in | ||
87 | sev_status. | ||
88 | |||
89 | Open Points | ||
90 | ----------- | ||
83 | 91 | ||
84 | The Secure AVIC driver only supports physical destination mode. If | 92 | The Secure AVIC driver only supports physical destination mode. If |
85 | logical destination mode needs to be supported, then a separate x2apic | 93 | logical destination mode needs to be supported, then a separate x2apic |
86 | driver would be required. | 94 | driver would be required. |
87 | 95 | ||
88 | Setting of ALLOWED_IRR vectors is done from vector.c for IOAPIC and MSI | ||
89 | interrupts. ALLOWED_IRR vector is not cleared when an interrupt vector | ||
90 | migrates to different CPU. Using a cleaner approach to manage and | ||
91 | configure allowed vectors needs more work. | ||
92 | |||
93 | 96 | ||
94 | Testing | 97 | Testing |
95 | ------- | 98 | ------- |
96 | 99 | ||
97 | This series is based on top of commit 196145c606d0 "Merge | 100 | This series is based on top of commit 535bd326c565 "Merge branch into |
98 | tag 'clk-fixes-for-linus' of | 101 | tip/master: 'x86/tdx'" of tip/tip master branch. |
99 | git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux." | ||
100 | 102 | ||
101 | Host Secure AVIC support patch series is at [1]. | 103 | Host Secure AVIC support patch series is at [1]. |
104 | |||
105 | QEMU support patch is at [2]. | ||
106 | |||
107 | QEMU commandline for testing Secure AVIC enabled guest: | ||
108 | |||
109 | qemu-system-x86_64 <...> -object sev-snp-guest,id=sev0,policy=0xb0000,cbitpos=51,reduced-phys-bits=1,allowed-sev-features=true,secure-avic=true | ||
102 | 110 | ||
103 | The following tests were done: | 111 | The following tests were done: |
104 | 112 | ||
105 | 1) Boot to prompt using initramfs and Ubuntu fs. | 113 | 1) Boot to prompt using initramfs and Ubuntu fs. |
106 | 2) Verified timer and IPI as part of the guest bootup. | 114 | 2) Verified timer and IPI as part of the guest bootup. |
107 | 3) Verified long run SCF TORTURE IPI test. | 115 | 3) Verified long run SCF TORTURE IPI test. |
108 | 4) Verified FIO test with NVME passthrough. | ||
109 | 116 | ||
110 | [1] https://github.com/AMDESE/linux-kvm/tree/savic-host | 117 | [1] https://github.com/AMDESE/linux-kvm/tree/savic-host-latest |
118 | [2] https://github.com/AMDESE/qemu/tree/secure-avic | ||
111 | 119 | ||
112 | Kishon Vijay Abraham I (11): | 120 | Changes since v2 |
121 | |||
122 | - Removed RFC tag. | ||
123 | - Change config rule to not select AMD_SECURE_AVIC config if | ||
124 | AMD_MEM_ENCRYPT config is enabled. | ||
125 | - Fix broken backing page GFP_KERNEL allocation in setup_local_APIC(). | ||
126 | Use alloc_percpu() for APIC backing pages allocation during Secure | ||
127 | AVIC driver probe. | ||
128 | - Remove code to check for duplicate APIC_ID returned by the | ||
129 | Hypervisor. Topology evaluation code already does that during boot. | ||
130 | - Fix missing update_vector() callback invocation during vector | ||
131 | cleanup paths. Invoke update_vector() during setup and teardown | ||
132 | of a vector. | ||
133 | - Reuse find_highest_vector() from kvm/lapic.c. | ||
134 | - Change savic_register_gpa/savic_unregister_gpa() interface to be | ||
135 | invoked only for the local CPU. | ||
136 | - Misc cleanups. | ||
137 | |||
138 | Kishon Vijay Abraham I (2): | ||
139 | x86/sev: Initialize VGIF for secondary VCPUs for Secure AVIC | ||
140 | x86/sev: Enable NMI support for Secure AVIC | ||
141 | |||
142 | Neeraj Upadhyay (15): | ||
113 | x86/apic: Add new driver for Secure AVIC | 143 | x86/apic: Add new driver for Secure AVIC |
114 | x86/apic: Initialize Secure AVIC APIC backing page | 144 | x86/apic: Initialize Secure AVIC APIC backing page |
115 | x86/apic: Initialize APIC backing page for Secure AVIC | 145 | x86/apic: Populate .read()/.write() callbacks of Secure AVIC driver |
146 | x86/apic: Initialize APIC ID for Secure AVIC | ||
116 | x86/apic: Add update_vector callback for Secure AVIC | 147 | x86/apic: Add update_vector callback for Secure AVIC |
117 | x86/apic: Add support to send IPI for Secure AVIC | 148 | x86/apic: Add support to send IPI for Secure AVIC |
118 | x86/apic: Support LAPIC timer for Secure AVIC | 149 | x86/apic: Support LAPIC timer for Secure AVIC |
119 | x86/sev: Initialize VGIF for secondary VCPUs for Secure AVIC | ||
120 | x86/apic: Add support to send NMI IPI for Secure AVIC | 150 | x86/apic: Add support to send NMI IPI for Secure AVIC |
121 | x86/apic: Allow NMI to be injected from hypervisor for Secure AVIC | 151 | x86/apic: Allow NMI to be injected from hypervisor for Secure AVIC |
122 | x86/sev: Enable NMI support for Secure AVIC | 152 | x86/apic: Read and write LVT* APIC registers from HV for SAVIC guests |
153 | x86/apic: Handle EOI writes for SAVIC guests | ||
154 | x86/apic: Add kexec support for Secure AVIC | ||
155 | x86/apic: Enable Secure AVIC in Control MSR | ||
156 | x86/sev: Prevent SECURE_AVIC_CONTROL MSR interception for Secure AVIC | ||
157 | guests | ||
123 | x86/sev: Indicate SEV-SNP guest supports Secure AVIC | 158 | x86/sev: Indicate SEV-SNP guest supports Secure AVIC |
124 | 159 | ||
125 | Neeraj Upadhyay (3): | 160 | arch/x86/Kconfig | 13 + |
126 | x86/apic: Populate .read()/.write() callbacks of Secure AVIC driver | 161 | arch/x86/boot/compressed/sev.c | 10 +- |
127 | x86/apic: Initialize APIC ID for Secure AVIC | ||
128 | x86/apic: Enable Secure AVIC in Control MSR | ||
129 | |||
130 | arch/x86/Kconfig | 12 + | ||
131 | arch/x86/boot/compressed/sev.c | 3 +- | ||
132 | arch/x86/coco/core.c | 3 + | 162 | arch/x86/coco/core.c | 3 + |
133 | arch/x86/coco/sev/core.c | 91 +++++- | 163 | arch/x86/coco/sev/core.c | 131 +++++++- |
134 | arch/x86/include/asm/apic.h | 3 + | 164 | arch/x86/include/asm/apic-emul.h | 28 ++ |
165 | arch/x86/include/asm/apic.h | 12 + | ||
135 | arch/x86/include/asm/apicdef.h | 2 + | 166 | arch/x86/include/asm/apicdef.h | 2 + |
136 | arch/x86/include/asm/msr-index.h | 9 +- | 167 | arch/x86/include/asm/msr-index.h | 9 +- |
137 | arch/x86/include/asm/sev.h | 6 + | 168 | arch/x86/include/asm/sev.h | 8 + |
138 | arch/x86/include/uapi/asm/svm.h | 1 + | 169 | arch/x86/include/uapi/asm/svm.h | 3 + |
139 | arch/x86/kernel/apic/Makefile | 1 + | 170 | arch/x86/kernel/apic/Makefile | 1 + |
140 | arch/x86/kernel/apic/apic.c | 4 + | 171 | arch/x86/kernel/apic/apic.c | 7 + |
141 | arch/x86/kernel/apic/vector.c | 8 + | 172 | arch/x86/kernel/apic/init.c | 3 + |
142 | arch/x86/kernel/apic/x2apic_savic.c | 480 ++++++++++++++++++++++++++++ | 173 | arch/x86/kernel/apic/vector.c | 53 +++- |
174 | arch/x86/kernel/apic/x2apic_savic.c | 467 ++++++++++++++++++++++++++++ | ||
175 | arch/x86/kvm/lapic.c | 23 +- | ||
143 | include/linux/cc_platform.h | 8 + | 176 | include/linux/cc_platform.h | 8 + |
144 | 14 files changed, 621 insertions(+), 10 deletions(-) | 177 | 17 files changed, 742 insertions(+), 39 deletions(-) |
178 | create mode 100644 arch/x86/include/asm/apic-emul.h | ||
145 | create mode 100644 arch/x86/kernel/apic/x2apic_savic.c | 179 | create mode 100644 arch/x86/kernel/apic/x2apic_savic.c |
146 | 180 | ||
181 | base-commit: 535bd326c5657fe570f41b1f76941e449d9e2062 | ||
147 | -- | 182 | -- |
148 | 2.34.1 | 183 | 2.34.1 |
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
---|---|---|---|
2 | |||
3 | The Secure AVIC feature provides SEV-SNP guests hardware acceleration | 1 | The Secure AVIC feature provides SEV-SNP guests hardware acceleration |
4 | for performance sensitive APIC accesses while securely managing the | 2 | for performance sensitive APIC accesses while securely managing the |
5 | guest-owned APIC state through the use of a private APIC backing page. | 3 | guest-owned APIC state through the use of a private APIC backing page. |
6 | This helps prevent malicious hypervisor from generating unexpected | 4 | This helps prevent the hypervisor from generating unexpected interrupts |
7 | interrupts for a vCPU or otherwise violate architectural assumptions | 5 | for a vCPU or otherwise violating architectural assumptions around APIC |
8 | around APIC behavior. | 6 | behavior. |
9 | 7 | ||
10 | Add a new x2APIC driver that will serve as the base of the Secure AVIC | 8 | Add a new x2APIC driver that will serve as the base of the Secure AVIC |
11 | support. It is initially the same as the x2APIC phys driver, but will be | 9 | support. It is initially the same as the x2APIC phys driver, but will be |
12 | modified as features of Secure AVIC are implemented. | 10 | modified as features of Secure AVIC are implemented. |
13 | 11 | ||
12 | If the hypervisor sets the Secure AVIC bit in SEV_STATUS and the bit is | ||
13 | not set in SNP_FEATURES_PRESENT, maintain the current behavior to | ||
14 | enforce guest termination. | ||
15 | |||
16 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
14 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 17 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
15 | Co-developed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
16 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 18 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
17 | --- | 19 | --- |
18 | arch/x86/Kconfig | 12 +++ | 20 | Changes since v2: |
21 | |||
22 | - Do not autoselect AMD_SECURE_AVIC config when AMD_MEM_ENCRYPT config | ||
23 | is enabled. Make AMD_SECURE_AVIC depend on AMD_MEM_ENCRYPT. | ||
24 | - Misc cleanups. | ||
25 | |||
26 | arch/x86/Kconfig | 13 ++++ | ||
19 | arch/x86/boot/compressed/sev.c | 1 + | 27 | arch/x86/boot/compressed/sev.c | 1 + |
20 | arch/x86/coco/core.c | 3 + | 28 | arch/x86/coco/core.c | 3 + |
21 | arch/x86/include/asm/msr-index.h | 4 +- | 29 | arch/x86/include/asm/msr-index.h | 4 +- |
22 | arch/x86/kernel/apic/Makefile | 1 + | 30 | arch/x86/kernel/apic/Makefile | 1 + |
23 | arch/x86/kernel/apic/x2apic_savic.c | 112 ++++++++++++++++++++++++++++ | 31 | arch/x86/kernel/apic/x2apic_savic.c | 109 ++++++++++++++++++++++++++++ |
24 | include/linux/cc_platform.h | 8 ++ | 32 | include/linux/cc_platform.h | 8 ++ |
25 | 7 files changed, 140 insertions(+), 1 deletion(-) | 33 | 7 files changed, 138 insertions(+), 1 deletion(-) |
26 | create mode 100644 arch/x86/kernel/apic/x2apic_savic.c | 34 | create mode 100644 arch/x86/kernel/apic/x2apic_savic.c |
27 | 35 | ||
28 | diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig | 36 | diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig |
29 | index XXXXXXX..XXXXXXX 100644 | 37 | index XXXXXXX..XXXXXXX 100644 |
30 | --- a/arch/x86/Kconfig | 38 | --- a/arch/x86/Kconfig |
31 | +++ b/arch/x86/Kconfig | 39 | +++ b/arch/x86/Kconfig |
32 | @@ -XXX,XX +XXX,XX @@ config X86_X2APIC | 40 | @@ -XXX,XX +XXX,XX @@ config X86_X2APIC |
33 | 41 | ||
34 | If you don't know what to do here, say N. | 42 | If in doubt, say Y. |
35 | 43 | ||
36 | +config AMD_SECURE_AVIC | 44 | +config AMD_SECURE_AVIC |
37 | + bool "AMD Secure AVIC" | 45 | + bool "AMD Secure AVIC" |
38 | + depends on X86_X2APIC && AMD_MEM_ENCRYPT | 46 | + depends on AMD_MEM_ENCRYPT && X86_X2APIC |
39 | + help | 47 | + help |
40 | + This enables AMD Secure AVIC support on guests that have this feature. | 48 | + Enable this to get AMD Secure AVIC support on guests that have this feature. |
41 | + | 49 | + |
42 | + AMD Secure AVIC provides hardware acceleration for performance sensitive | 50 | + AMD Secure AVIC provides hardware acceleration for performance sensitive |
43 | + APIC accesses and support for managing guest owned APIC state for SEV-SNP | 51 | + APIC accesses and support for managing guest owned APIC state for SEV-SNP |
44 | + guests. | 52 | + guests. Secure AVIC does not support xAPIC mode. It has a functional |
53 | + dependency on x2APIC being enabled in the guest. |
45 | + | 54 | + |
46 | + If you don't know what to do here, say N. | 55 | + If you don't know what to do here, say N. |
47 | + | 56 | + |
48 | config X86_POSTED_MSI | 57 | config X86_POSTED_MSI |
49 | bool "Enable MSI and MSI-x delivery by posted interrupts" | 58 | bool "Enable MSI and MSI-x delivery by posted interrupts" |
... | ... | ||
54 | +++ b/arch/x86/boot/compressed/sev.c | 63 | +++ b/arch/x86/boot/compressed/sev.c |
55 | @@ -XXX,XX +XXX,XX @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code) | 64 | @@ -XXX,XX +XXX,XX @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code) |
56 | MSR_AMD64_SNP_VMSA_REG_PROT | \ | 65 | MSR_AMD64_SNP_VMSA_REG_PROT | \ |
57 | MSR_AMD64_SNP_RESERVED_BIT13 | \ | 66 | MSR_AMD64_SNP_RESERVED_BIT13 | \ |
58 | MSR_AMD64_SNP_RESERVED_BIT15 | \ | 67 | MSR_AMD64_SNP_RESERVED_BIT15 | \ |
59 | + MSR_AMD64_SNP_SECURE_AVIC_ENABLED | \ | 68 | + MSR_AMD64_SNP_SECURE_AVIC | \ |
60 | MSR_AMD64_SNP_RESERVED_MASK) | 69 | MSR_AMD64_SNP_RESERVED_MASK) |
61 | 70 | ||
62 | /* | 71 | /* |
63 | diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c | 72 | diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c |
64 | index XXXXXXX..XXXXXXX 100644 | 73 | index XXXXXXX..XXXXXXX 100644 |
... | ... | ||
67 | @@ -XXX,XX +XXX,XX @@ static bool noinstr amd_cc_platform_has(enum cc_attr attr) | 76 | @@ -XXX,XX +XXX,XX @@ static bool noinstr amd_cc_platform_has(enum cc_attr attr) |
68 | case CC_ATTR_HOST_SEV_SNP: | 77 | case CC_ATTR_HOST_SEV_SNP: |
69 | return cc_flags.host_sev_snp; | 78 | return cc_flags.host_sev_snp; |
70 | 79 | ||
71 | + case CC_ATTR_SNP_SECURE_AVIC: | 80 | + case CC_ATTR_SNP_SECURE_AVIC: |
72 | + return sev_status & MSR_AMD64_SNP_SECURE_AVIC_ENABLED; | 81 | + return sev_status & MSR_AMD64_SNP_SECURE_AVIC; |
73 | + | 82 | + |
74 | default: | 83 | default: |
75 | return false; | 84 | return false; |
76 | } | 85 | } |
77 | diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h | 86 | diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h |
... | ... | ||
82 | #define MSR_AMD64_SNP_VMSA_REG_PROT BIT_ULL(MSR_AMD64_SNP_VMSA_REG_PROT_BIT) | 91 | #define MSR_AMD64_SNP_VMSA_REG_PROT BIT_ULL(MSR_AMD64_SNP_VMSA_REG_PROT_BIT) |
83 | #define MSR_AMD64_SNP_SMT_PROT_BIT 17 | 92 | #define MSR_AMD64_SNP_SMT_PROT_BIT 17 |
84 | #define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT) | 93 | #define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT) |
85 | -#define MSR_AMD64_SNP_RESV_BIT 18 | 94 | -#define MSR_AMD64_SNP_RESV_BIT 18 |
86 | +#define MSR_AMD64_SNP_SECURE_AVIC_BIT 18 | 95 | +#define MSR_AMD64_SNP_SECURE_AVIC_BIT 18 |
87 | +#define MSR_AMD64_SNP_SECURE_AVIC_ENABLED BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT) | 96 | +#define MSR_AMD64_SNP_SECURE_AVIC BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT) |
88 | +#define MSR_AMD64_SNP_RESV_BIT 19 | 97 | +#define MSR_AMD64_SNP_RESV_BIT 19 |
89 | #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) | 98 | #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) |
90 | 99 | #define MSR_AMD64_RMP_BASE 0xc0010132 | |
91 | #define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f | 100 | #define MSR_AMD64_RMP_END 0xc0010133 |
92 | diff --git a/arch/x86/kernel/apic/Makefile b/arch/x86/kernel/apic/Makefile | 101 | diff --git a/arch/x86/kernel/apic/Makefile b/arch/x86/kernel/apic/Makefile |
93 | index XXXXXXX..XXXXXXX 100644 | 102 | index XXXXXXX..XXXXXXX 100644 |
94 | --- a/arch/x86/kernel/apic/Makefile | 103 | --- a/arch/x86/kernel/apic/Makefile |
95 | +++ b/arch/x86/kernel/apic/Makefile | 104 | +++ b/arch/x86/kernel/apic/Makefile |
96 | @@ -XXX,XX +XXX,XX @@ ifeq ($(CONFIG_X86_64),y) | 105 | @@ -XXX,XX +XXX,XX @@ ifeq ($(CONFIG_X86_64),y) |
... | ... | ||
111 | +/* | 120 | +/* |
112 | + * AMD Secure AVIC Support (SEV-SNP Guests) | 121 | + * AMD Secure AVIC Support (SEV-SNP Guests) |
113 | + * | 122 | + * |
114 | + * Copyright (C) 2024 Advanced Micro Devices, Inc. | 123 | + * Copyright (C) 2024 Advanced Micro Devices, Inc. |
115 | + * | 124 | + * |
116 | + * Author: Kishon Vijay Abraham I <kvijayab@amd.com> | 125 | + * Author: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
117 | + */ | 126 | + */ |
118 | + | 127 | + |
119 | +#include <linux/cpumask.h> | 128 | +#include <linux/cpumask.h> |
120 | +#include <linux/cc_platform.h> | 129 | +#include <linux/cc_platform.h> |
121 | + | 130 | + |
... | ... | ||
127 | +static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) | 136 | +static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) |
128 | +{ | 137 | +{ |
129 | + return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); | 138 | + return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); |
130 | +} | 139 | +} |
131 | + | 140 | + |
132 | +static void x2apic_savic_send_IPI(int cpu, int vector) | 141 | +static void x2apic_savic_send_ipi(int cpu, int vector) |
133 | +{ | 142 | +{ |
134 | + u32 dest = per_cpu(x86_cpu_to_apicid, cpu); | 143 | + u32 dest = per_cpu(x86_cpu_to_apicid, cpu); |
135 | + | 144 | + |
136 | + /* x2apic MSRs are special and need a special fence: */ | 145 | + /* x2apic MSRs are special and need a special fence: */ |
137 | + weak_wrmsr_fence(); | 146 | + weak_wrmsr_fence(); |
138 | + __x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL); | 147 | + __x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL); |
139 | +} | 148 | +} |
140 | + | 149 | + |
141 | +static void | 150 | +static void __send_ipi_mask(const struct cpumask *mask, int vector, bool excl_self) |
142 | +__send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest) | ||
143 | +{ | 151 | +{ |
144 | + unsigned long query_cpu; | 152 | + unsigned long query_cpu; |
145 | + unsigned long this_cpu; | 153 | + unsigned long this_cpu; |
146 | + unsigned long flags; | 154 | + unsigned long flags; |
147 | + | 155 | + |
... | ... | ||
150 | + | 158 | + |
151 | + local_irq_save(flags); | 159 | + local_irq_save(flags); |
152 | + | 160 | + |
153 | + this_cpu = smp_processor_id(); | 161 | + this_cpu = smp_processor_id(); |
154 | + for_each_cpu(query_cpu, mask) { | 162 | + for_each_cpu(query_cpu, mask) { |
155 | + if (apic_dest == APIC_DEST_ALLBUT && this_cpu == query_cpu) | 163 | + if (excl_self && this_cpu == query_cpu) |
156 | + continue; | 164 | + continue; |
157 | + __x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu), | 165 | + __x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu), |
158 | + vector, APIC_DEST_PHYSICAL); | 166 | + vector, APIC_DEST_PHYSICAL); |
159 | + } | 167 | + } |
160 | + local_irq_restore(flags); | 168 | + local_irq_restore(flags); |
161 | +} | 169 | +} |
162 | + | 170 | + |
163 | +static void x2apic_savic_send_IPI_mask(const struct cpumask *mask, int vector) | 171 | +static void x2apic_savic_send_ipi_mask(const struct cpumask *mask, int vector) |
164 | +{ | 172 | +{ |
165 | + __send_IPI_mask(mask, vector, APIC_DEST_ALLINC); | 173 | + __send_ipi_mask(mask, vector, false); |
166 | +} | 174 | +} |
167 | + | 175 | + |
168 | +static void x2apic_savic_send_IPI_mask_allbutself(const struct cpumask *mask, int vector) | 176 | +static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, int vector) |
169 | +{ | 177 | +{ |
170 | + __send_IPI_mask(mask, vector, APIC_DEST_ALLBUT); | 178 | + __send_ipi_mask(mask, vector, true); |
171 | +} | 179 | +} |
172 | + | 180 | + |
173 | +static int x2apic_savic_probe(void) | 181 | +static int x2apic_savic_probe(void) |
174 | +{ | 182 | +{ |
175 | + if (!cc_platform_has(CC_ATTR_SNP_SECURE_AVIC)) | 183 | + if (!cc_platform_has(CC_ATTR_SNP_SECURE_AVIC)) |
... | ... | ||
178 | + if (!x2apic_mode) { | 186 | + if (!x2apic_mode) { |
179 | + pr_err("Secure AVIC enabled in non x2APIC mode\n"); | 187 | + pr_err("Secure AVIC enabled in non x2APIC mode\n"); |
180 | + snp_abort(); | 188 | + snp_abort(); |
181 | + } | 189 | + } |
182 | + | 190 | + |
183 | + pr_info("Secure AVIC Enabled\n"); | ||
184 | + | ||
185 | + return 1; | 191 | + return 1; |
186 | +} | 192 | +} |
187 | + | 193 | + |
188 | +static struct apic apic_x2apic_savic __ro_after_init = { | 194 | +static struct apic apic_x2apic_savic __ro_after_init = { |
189 | + | 195 | + |
... | ... | ||
201 | + .x2apic_set_max_apicid = true, | 207 | + .x2apic_set_max_apicid = true, |
202 | + .get_apic_id = x2apic_get_apic_id, | 208 | + .get_apic_id = x2apic_get_apic_id, |
203 | + | 209 | + |
204 | + .calc_dest_apicid = apic_default_calc_apicid, | 210 | + .calc_dest_apicid = apic_default_calc_apicid, |
205 | + | 211 | + |
206 | + .send_IPI = x2apic_savic_send_IPI, | 212 | + .send_IPI = x2apic_savic_send_ipi, |
207 | + .send_IPI_mask = x2apic_savic_send_IPI_mask, | 213 | + .send_IPI_mask = x2apic_savic_send_ipi_mask, |
208 | + .send_IPI_mask_allbutself = x2apic_savic_send_IPI_mask_allbutself, | 214 | + .send_IPI_mask_allbutself = x2apic_savic_send_ipi_mask_allbutself, |
209 | + .send_IPI_allbutself = x2apic_send_IPI_allbutself, | 215 | + .send_IPI_allbutself = x2apic_send_IPI_allbutself, |
210 | + .send_IPI_all = x2apic_send_IPI_all, | 216 | + .send_IPI_all = x2apic_send_IPI_all, |
211 | + .send_IPI_self = x2apic_send_IPI_self, | 217 | + .send_IPI_self = x2apic_send_IPI_self, |
212 | + .nmi_to_offline_cpu = true, | 218 | + .nmi_to_offline_cpu = true, |
213 | + | 219 | + |
... | ... |
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
---|---|---|---|
2 | |||
3 | With Secure AVIC, the APIC backing page is owned and managed by the guest. | 1 | With Secure AVIC, the APIC backing page is owned and managed by the guest. |
4 | Allocate APIC backing page for all guest CPUs. In addition, add a | 2 | Allocate and initialize APIC backing page for all guest CPUs. |
5 | setup() APIC callback. This callback is used by Secure AVIC driver to | 3 | |
6 | initialize APIC backing page area for each CPU. | 4 | The NPT entry for a vCPU's APIC backing page must always be present |
7 | 5 | when the vCPU is running in order for Secure AVIC to function. A | |
8 | Allocate APIC backing page memory area in chunks of 2M, so that | 6 | VMEXIT_BUSY is returned on VMRUN and the vCPU cannot be resumed if |
9 | backing page memory is mapped using full huge pages. Without this, | 7 | the NPT entry for the APIC backing page is not present. Notify the |
10 | if there are private to shared page state conversions for any | 8 | hypervisor of the GPA of the vCPU's APIC backing page by using the |
11 | non-backing-page allocation which is part of the same huge page as the | 9 | SVM_VMGEXIT_SECURE_AVIC GHCB protocol event. Before executing VMRUN, |
12 | one containing a backing page, hypervisor splits the huge page into 4K | 10 | the hypervisor makes use of this information to make sure the APIC backing |
13 | pages. Splitting of APIC backing page area into individual 4K pages can | 11 | page is mapped in NPT. |
14 | result in performance impact, due to TLB pressure. | 12 | |
15 | 13 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> | |
16 | Secure AVIC requires that vCPU's APIC backing page's NPT entry is always | ||
17 | present while that vCPU is running. If APIC backing page's NPT entry is | ||
18 | not present, a VMEXIT_BUSY is returned on VMRUN and the vCPU cannot | ||
19 | be resumed after that point. To handle this, invoke sev_notify_savic_gpa() | ||
20 | in Secure AVIC driver's setup() callback. This triggers SVM_VMGEXIT_SECURE_ | ||
21 | AVIC_GPA exit for the hypervisor to note GPA of the vCPU's APIC | ||
22 | backing page. Hypervisor uses this information to ensure that the APIC | ||
23 | backing page is mapped in NPT before invoking VMRUN. | ||
24 | |||
25 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 14 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
26 | Co-developed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
27 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 15 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
28 | --- | 16 | --- |
29 | 17 | Changes since v2: | |
30 | GHCB spec update for SVM_VMGEXIT_SECURE_AVIC_GPA NAE event is | 18 | |
31 | part of the draft spec: | 19 | - Fix broken AP bringup due to GFP_KERNEL allocation in setup() |
32 | 20 | callback. | |
33 | https://lore.kernel.org/linux-coco/3453675d-ca29-4715-9c17-10b56b3af17e@amd.com/T/#u | 21 | - Define apic_page struct and allocate per CPU APIC backing pages |
34 | 22 | for all CPUs in Secure AVIC driver probe. | |
35 | arch/x86/coco/sev/core.c | 22 +++++++++++++++++ | 23 | - Change savic_register_gpa() to only allow local CPU GPA |
24 | registration. | ||
25 | - Misc cleanups. | ||
26 | |||
27 | arch/x86/coco/sev/core.c | 27 +++++++++++++++++++ | ||
28 | arch/x86/coco/sev/core.c | 27 +++++++++++++++++++ | ||
36 | arch/x86/include/asm/apic.h | 1 + | 29 | arch/x86/include/asm/apic.h | 1 + |
37 | arch/x86/include/asm/sev.h | 2 ++ | 30 | arch/x86/include/asm/sev.h | 2 ++ |
38 | arch/x86/include/uapi/asm/svm.h | 1 + | 31 | arch/x86/include/uapi/asm/svm.h | 3 +++ |
39 | arch/x86/kernel/apic/apic.c | 2 ++ | 32 | arch/x86/kernel/apic/apic.c | 2 ++ |
40 | arch/x86/kernel/apic/x2apic_savic.c | 38 +++++++++++++++++++++++++++++ | 33 | arch/x86/kernel/apic/x2apic_savic.c | 42 +++++++++++++++++++++++++++++ |
41 | 6 files changed, 66 insertions(+) | 34 | 6 files changed, 77 insertions(+) |
42 | 35 | ||
43 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c | 36 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c |
44 | index XXXXXXX..XXXXXXX 100644 | 37 | index XXXXXXX..XXXXXXX 100644 |
45 | --- a/arch/x86/coco/sev/core.c | 38 | --- a/arch/x86/coco/sev/core.c |
46 | +++ b/arch/x86/coco/sev/core.c | 39 | +++ b/arch/x86/coco/sev/core.c |
47 | @@ -XXX,XX +XXX,XX @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) | 40 | @@ -XXX,XX +XXX,XX @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) |
48 | return ret; | 41 | return ret; |
49 | } | 42 | } |
50 | 43 | ||
51 | +enum es_result sev_notify_savic_gpa(u64 gpa) | 44 | +enum es_result savic_register_gpa(u64 gpa) |
52 | +{ | 45 | +{ |
53 | + struct ghcb_state state; | 46 | + struct ghcb_state state; |
54 | + struct es_em_ctxt ctxt; | 47 | + struct es_em_ctxt ctxt; |
55 | + unsigned long flags; | 48 | + unsigned long flags; |
49 | + enum es_result res; | ||
56 | + struct ghcb *ghcb; | 50 | + struct ghcb *ghcb; |
57 | + int ret = 0; | ||
58 | + | 51 | + |
59 | + local_irq_save(flags); | 52 | + local_irq_save(flags); |
60 | + | 53 | + |
61 | + ghcb = __sev_get_ghcb(&state); | 54 | + ghcb = __sev_get_ghcb(&state); |
62 | + | 55 | + |
63 | + vc_ghcb_invalidate(ghcb); | 56 | + vc_ghcb_invalidate(ghcb); |
64 | + | 57 | + |
65 | + ret = sev_es_ghcb_hv_call(ghcb, &ctxt, SVM_VMGEXIT_SECURE_AVIC_GPA, gpa, 0); | 58 | + /* Register GPA for the local CPU */ |
59 | + ghcb_set_rax(ghcb, -1ULL); | ||
60 | + ghcb_set_rbx(ghcb, gpa); | ||
61 | + res = sev_es_ghcb_hv_call(ghcb, &ctxt, SVM_VMGEXIT_SECURE_AVIC, | ||
62 | + SVM_VMGEXIT_SECURE_AVIC_REGISTER_GPA, 0); | ||
66 | + | 63 | + |
67 | + __sev_put_ghcb(&state); | 64 | + __sev_put_ghcb(&state); |
68 | + | 65 | + |
69 | + local_irq_restore(flags); | 66 | + local_irq_restore(flags); |
70 | + return ret; | 67 | + |
68 | + return res; | ||
71 | +} | 69 | +} |
72 | + | 70 | + |
73 | static void snp_register_per_cpu_ghcb(void) | 71 | static void snp_register_per_cpu_ghcb(void) |
74 | { | 72 | { |
75 | struct sev_es_runtime_data *data; | 73 | struct sev_es_runtime_data *data; |
... | ... | ||
87 | void (*init_apic_ldr)(void); | 85 | void (*init_apic_ldr)(void); |
88 | diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h | 86 | diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h |
89 | index XXXXXXX..XXXXXXX 100644 | 87 | index XXXXXXX..XXXXXXX 100644 |
90 | --- a/arch/x86/include/asm/sev.h | 88 | --- a/arch/x86/include/asm/sev.h |
91 | +++ b/arch/x86/include/asm/sev.h | 89 | +++ b/arch/x86/include/asm/sev.h |
92 | @@ -XXX,XX +XXX,XX @@ u64 snp_get_unsupported_features(u64 status); | 90 | @@ -XXX,XX +XXX,XX @@ int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_req *req |
93 | u64 sev_get_status(void); | 91 | |
94 | void sev_show_status(void); | 92 | void __init snp_secure_tsc_prepare(void); |
95 | void snp_update_svsm_ca(void); | 93 | void __init snp_secure_tsc_init(void); |
96 | +enum es_result sev_notify_savic_gpa(u64 gpa); | 94 | +enum es_result savic_register_gpa(u64 gpa); |
97 | 95 | ||
98 | #else /* !CONFIG_AMD_MEM_ENCRYPT */ | 96 | #else /* !CONFIG_AMD_MEM_ENCRYPT */ |
99 | 97 | ||
100 | @@ -XXX,XX +XXX,XX @@ static inline u64 snp_get_unsupported_features(u64 status) { return 0; } | 98 | @@ -XXX,XX +XXX,XX @@ static inline int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_ |
101 | static inline u64 sev_get_status(void) { return 0; } | 99 | struct snp_guest_request_ioctl *rio) { return -ENODEV; } |
102 | static inline void sev_show_status(void) { } | 100 | static inline void __init snp_secure_tsc_prepare(void) { } |
103 | static inline void snp_update_svsm_ca(void) { } | 101 | static inline void __init snp_secure_tsc_init(void) { } |
104 | +static inline enum es_result sev_notify_savic_gpa(u64 gpa) { return ES_UNSUPPORTED; } | 102 | +static inline enum es_result savic_register_gpa(u64 gpa) { return ES_UNSUPPORTED; } |
105 | 103 | ||
106 | #endif /* CONFIG_AMD_MEM_ENCRYPT */ | 104 | #endif /* CONFIG_AMD_MEM_ENCRYPT */ |
107 | 105 | ||
108 | diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h | 106 | diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h |
109 | index XXXXXXX..XXXXXXX 100644 | 107 | index XXXXXXX..XXXXXXX 100644 |
110 | --- a/arch/x86/include/uapi/asm/svm.h | 108 | --- a/arch/x86/include/uapi/asm/svm.h |
111 | +++ b/arch/x86/include/uapi/asm/svm.h | 109 | +++ b/arch/x86/include/uapi/asm/svm.h |
112 | @@ -XXX,XX +XXX,XX @@ | 110 | @@ -XXX,XX +XXX,XX @@ |
113 | #define SVM_VMGEXIT_AP_CREATE 1 | 111 | #define SVM_VMGEXIT_AP_CREATE 1 |
114 | #define SVM_VMGEXIT_AP_DESTROY 2 | 112 | #define SVM_VMGEXIT_AP_DESTROY 2 |
115 | #define SVM_VMGEXIT_SNP_RUN_VMPL 0x80000018 | 113 | #define SVM_VMGEXIT_SNP_RUN_VMPL 0x80000018 |
116 | +#define SVM_VMGEXIT_SECURE_AVIC_GPA 0x8000001a | 114 | +#define SVM_VMGEXIT_SECURE_AVIC 0x8000001a |
115 | +#define SVM_VMGEXIT_SECURE_AVIC_REGISTER_GPA 0 | ||
116 | +#define SVM_VMGEXIT_SECURE_AVIC_UNREGISTER_GPA 1 | ||
117 | #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd | 117 | #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd |
118 | #define SVM_VMGEXIT_TERM_REQUEST 0x8000fffe | 118 | #define SVM_VMGEXIT_TERM_REQUEST 0x8000fffe |
119 | #define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code) \ | 119 | #define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code) \ |
120 | diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c | 120 | diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c |
121 | index XXXXXXX..XXXXXXX 100644 | 121 | index XXXXXXX..XXXXXXX 100644 |
... | ... | ||
143 | #include <asm/apic.h> | 143 | #include <asm/apic.h> |
144 | #include <asm/sev.h> | 144 | #include <asm/sev.h> |
145 | 145 | ||
146 | #include "local.h" | 146 | #include "local.h" |
147 | 147 | ||
148 | +static DEFINE_PER_CPU(void *, apic_backing_page); | 148 | +/* APIC_EILVTn(3) is the last defined APIC register. */ |
149 | +static DEFINE_PER_CPU(bool, savic_setup_done); | 149 | +#define NR_APIC_REGS (APIC_EILVTn(4) >> 2) |
150 | + | ||
151 | +struct apic_page { | ||
152 | + union { | ||
153 | + u32 regs[NR_APIC_REGS]; | ||
154 | + u8 bytes[PAGE_SIZE]; | ||
155 | + }; | ||
156 | +} __aligned(PAGE_SIZE); | ||
157 | + | ||
158 | +static struct apic_page __percpu *apic_page __ro_after_init; | ||
150 | + | 159 | + |
151 | static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) | 160 | static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) |
152 | { | 161 | { |
153 | return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); | 162 | return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); |
154 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_IPI_mask_allbutself(const struct cpumask *mask, in | 163 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, in |
155 | __send_IPI_mask(mask, vector, APIC_DEST_ALLBUT); | 164 | __send_ipi_mask(mask, vector, true); |
156 | } | 165 | } |
157 | 166 | ||
158 | +static void x2apic_savic_setup(void) | 167 | +static void x2apic_savic_setup(void) |
159 | +{ | 168 | +{ |
160 | + void *backing_page; | 169 | + void *backing_page; |
161 | + enum es_result ret; | 170 | + enum es_result ret; |
162 | + unsigned long gpa; | 171 | + unsigned long gpa; |
163 | + | 172 | + |
164 | + if (this_cpu_read(savic_setup_done)) | 173 | + backing_page = this_cpu_ptr(apic_page); |
165 | + return; | ||
166 | + | ||
167 | + backing_page = this_cpu_read(apic_backing_page); | ||
168 | + gpa = __pa(backing_page); | 174 | + gpa = __pa(backing_page); |
169 | + ret = sev_notify_savic_gpa(gpa); | 175 | + |
176 | + /* | ||
177 | + * The NPT entry for a vCPU's APIC backing page must always be | ||
178 | + * present when the vCPU is running in order for Secure AVIC to | ||
179 | + * function. A VMEXIT_BUSY is returned on VMRUN and the vCPU cannot | ||
180 | + * be resumed if the NPT entry for the APIC backing page is not | ||
181 | + * present. Notify GPA of the vCPU's APIC backing page to the | ||
182 | + * hypervisor by calling savic_register_gpa(). Before executing | ||
183 | + * VMRUN, the hypervisor makes use of this information to make sure | ||
184 | + * the APIC backing page is mapped in NPT. | ||
185 | + */ | ||
186 | + ret = savic_register_gpa(gpa); | ||
170 | + if (ret != ES_OK) | 187 | + if (ret != ES_OK) |
171 | + snp_abort(); | 188 | + snp_abort(); |
172 | + this_cpu_write(savic_setup_done, true); | ||
173 | +} | 189 | +} |
174 | + | 190 | + |
175 | static int x2apic_savic_probe(void) | 191 | static int x2apic_savic_probe(void) |
176 | { | 192 | { |
177 | + void *backing_pages; | ||
178 | + unsigned int cpu; | ||
179 | + size_t sz; | ||
180 | + int i; | ||
181 | + | ||
182 | if (!cc_platform_has(CC_ATTR_SNP_SECURE_AVIC)) | 193 | if (!cc_platform_has(CC_ATTR_SNP_SECURE_AVIC)) |
183 | return 0; | ||
184 | |||
185 | @@ -XXX,XX +XXX,XX @@ static int x2apic_savic_probe(void) | 194 | @@ -XXX,XX +XXX,XX @@ static int x2apic_savic_probe(void) |
186 | snp_abort(); | 195 | snp_abort(); |
187 | } | 196 | } |
188 | 197 | ||
189 | + sz = ALIGN(num_possible_cpus() * SZ_4K, SZ_2M); | 198 | + apic_page = alloc_percpu(struct apic_page); |
190 | + backing_pages = kzalloc(sz, GFP_ATOMIC); | 199 | + if (!apic_page) |
191 | + if (!backing_pages) | ||
192 | + snp_abort(); | 200 | + snp_abort(); |
193 | + | 201 | + |
194 | + i = 0; | ||
195 | + for_each_possible_cpu(cpu) { | ||
196 | + per_cpu(apic_backing_page, cpu) = backing_pages + i * SZ_4K; | ||
197 | + i++; | ||
198 | + } | ||
199 | + | ||
200 | pr_info("Secure AVIC Enabled\n"); | ||
201 | |||
202 | return 1; | 202 | return 1; |
203 | } | ||
204 | |||
203 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { | 205 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { |
204 | .name = "secure avic x2apic", | 206 | .name = "secure avic x2apic", |
205 | .probe = x2apic_savic_probe, | 207 | .probe = x2apic_savic_probe, |
206 | .acpi_madt_oem_check = x2apic_savic_acpi_madt_oem_check, | 208 | .acpi_madt_oem_check = x2apic_savic_acpi_madt_oem_check, |
207 | + .setup = x2apic_savic_setup, | 209 | + .setup = x2apic_savic_setup, |
208 | 210 | ||
209 | .dest_mode_logical = false, | 211 | .dest_mode_logical = false, |
210 | 212 | ||
211 | -- | 213 | -- |
212 | 2.34.1 | 214 | 2.34.1 |
1 | Add read() and write() APIC callback functions to read and write x2APIC | ||
---|---|---|---|
2 | registers directly from the guest APIC backing page of a vCPU. | ||
3 | |||
1 | The x2APIC registers are mapped at an offset within the guest APIC | 4 | The x2APIC registers are mapped at an offset within the guest APIC |
2 | backing page which is the same as their x2APIC MMIO offset. Secure AVIC | 5 | backing page which is the same as their x2APIC MMIO offset. Secure AVIC |
3 | adds new registers such as ALLOWED_IRRs (which are at 4-byte offset | 6 | adds new registers such as ALLOWED_IRRs (which are at 4-byte offset |
4 | within the IRR register offset range) and NMI_REQ to the APIC register | 7 | within the IRR register offset range) and NMI_REQ to the APIC register |
5 | space. In addition, the APIC_ID register is writable and configured by | 8 | space. |
6 | guest. | ||
7 | 9 | ||
8 | Add read() and write() APIC callback functions to read and write x2APIC | 11 | result in a #VC exception (for non-accelerated register accesses) with |
9 | registers directly from the guest APIC backing page. | 12 | error code VMEXIT_AVIC_NOACCEL. The #VC exception handler can read/write |
10 | 12 | error code VMEXIT_AVIC_NOACCEL. The VC exception handler can read/write | |
11 | The default .read()/.write() callbacks of x2APIC drivers perform | 13 | the x2APIC register in the guest APIC backing page to complete the |
12 | a rdmsr/wrmsr of the x2APIC registers. When Secure AVIC is enabled, | 14 | rdmsr/wrmsr. Since doing this would increase the latency of accessing |
13 | these would result in #VC exception (for non-accelerated register | 16 | and handling reads/writes in the #VC exception handler, directly read/write APIC |
14 | accesses). The #VC exception handler reads/write the x2APIC register | 16 | and handling reads/writes in VC exception, directly read/write APIC |
15 | in the guest APIC backing page. Since this would increase the latency | 17 | registers from/to the guest APIC backing page of the vCPU in read() |
16 | of accessing x2APIC registers, the read() and write() callbacks of | 18 | and write() callbacks of the Secure AVIC APIC driver. |
17 | Secure AVIC driver directly reads/writes to the guest APIC backing page. | ||
18 | 19 | ||
19 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 20 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
20 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 21 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
21 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 22 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
22 | --- | 23 | --- |
24 | Changes since v2: | ||
25 | - Use this_cpu_ptr() instead of type casting in get_reg() and | ||
26 | set_reg(). | ||
27 | |||
23 | arch/x86/include/asm/apicdef.h | 2 + | 28 | arch/x86/include/asm/apicdef.h | 2 + |
24 | arch/x86/kernel/apic/x2apic_savic.c | 107 +++++++++++++++++++++++++++- | 29 | arch/x86/kernel/apic/x2apic_savic.c | 116 +++++++++++++++++++++++++++- |
25 | 2 files changed, 107 insertions(+), 2 deletions(-) | 30 | 2 files changed, 116 insertions(+), 2 deletions(-) |
26 | 31 | ||
27 | diff --git a/arch/x86/include/asm/apicdef.h b/arch/x86/include/asm/apicdef.h | 32 | diff --git a/arch/x86/include/asm/apicdef.h b/arch/x86/include/asm/apicdef.h |
28 | index XXXXXXX..XXXXXXX 100644 | 33 | index XXXXXXX..XXXXXXX 100644 |
29 | --- a/arch/x86/include/asm/apicdef.h | 34 | --- a/arch/x86/include/asm/apicdef.h |
30 | +++ b/arch/x86/include/asm/apicdef.h | 35 | +++ b/arch/x86/include/asm/apicdef.h |
... | ... | ||
51 | #include <asm/sev.h> | 56 | #include <asm/sev.h> |
52 | @@ -XXX,XX +XXX,XX @@ static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) | 57 | @@ -XXX,XX +XXX,XX @@ static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) |
53 | return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); | 58 | return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); |
54 | } | 59 | } |
55 | 60 | ||
56 | +static inline u32 get_reg(char *page, int reg_off) | 61 | +static __always_inline u32 get_reg(unsigned int offset) |
57 | +{ | 62 | +{ |
58 | + return READ_ONCE(*((u32 *)(page + reg_off))); | 63 | + return READ_ONCE(this_cpu_ptr(apic_page)->regs[offset >> 2]); |
59 | +} | 64 | +} |
60 | + | 65 | + |
61 | +static inline void set_reg(char *page, int reg_off, u32 val) | 66 | +static __always_inline void set_reg(unsigned int offset, u32 val) |
62 | +{ | 67 | +{ |
63 | + WRITE_ONCE(*((u32 *)(page + reg_off)), val); | 68 | + WRITE_ONCE(this_cpu_ptr(apic_page)->regs[offset >> 2], val); |
64 | +} | 69 | +} |
65 | + | 70 | + |
66 | +#define SAVIC_ALLOWED_IRR_OFFSET 0x204 | 71 | +#define SAVIC_ALLOWED_IRR 0x204 |
67 | + | 72 | + |
68 | +static u32 x2apic_savic_read(u32 reg) | 73 | +static u32 x2apic_savic_read(u32 reg) |
69 | +{ | 74 | +{ |
70 | + void *backing_page = this_cpu_read(apic_backing_page); | 75 | + /* |
71 | + | 76 | + * When Secure AVIC is enabled, rdmsr/wrmsr of APIC registers |
77 | + * result in VC exception (for non-accelerated register accesses) | ||
78 | + * with VMEXIT_AVIC_NOACCEL error code. The VC exception handler | ||
79 | + * can read/write the x2APIC register in the guest APIC backing page. | ||
80 | + * Since doing this would increase the latency of accessing x2APIC | ||
81 | + * registers, instead of doing rdmsr/wrmsr based accesses and | ||
82 | + * handling apic register reads/writes in VC exception, the read() | ||
83 | + * and write() callbacks directly read/write APIC register from/to | ||
84 | + * the vCPU APIC backing page. | ||
85 | + */ | ||
72 | + switch (reg) { | 86 | + switch (reg) { |
73 | + case APIC_LVTT: | 87 | + case APIC_LVTT: |
74 | + case APIC_TMICT: | 88 | + case APIC_TMICT: |
75 | + case APIC_TMCCT: | 89 | + case APIC_TMCCT: |
76 | + case APIC_TDCR: | 90 | + case APIC_TDCR: |
... | ... | ||
91 | + case APIC_EFEAT: | 105 | + case APIC_EFEAT: |
92 | + case APIC_ECTRL: | 106 | + case APIC_ECTRL: |
93 | + case APIC_SEOI: | 107 | + case APIC_SEOI: |
94 | + case APIC_IER: | 108 | + case APIC_IER: |
95 | + case APIC_EILVTn(0) ... APIC_EILVTn(3): | 109 | + case APIC_EILVTn(0) ... APIC_EILVTn(3): |
96 | + return get_reg(backing_page, reg); | 110 | + return get_reg(reg); |
97 | + case APIC_ISR ... APIC_ISR + 0x70: | 111 | + case APIC_ISR ... APIC_ISR + 0x70: |
98 | + case APIC_TMR ... APIC_TMR + 0x70: | 112 | + case APIC_TMR ... APIC_TMR + 0x70: |
99 | + WARN_ONCE(!IS_ALIGNED(reg, 16), "Reg offset %#x not aligned at 16 bytes", reg); | 113 | + if (WARN_ONCE(!IS_ALIGNED(reg, 16), |
100 | + return get_reg(backing_page, reg); | 114 | + "APIC reg read offset 0x%x not aligned at 16 bytes", reg)) |
115 | + return 0; | ||
116 | + return get_reg(reg); | ||
101 | + /* IRR and ALLOWED_IRR offset range */ | 117 | + /* IRR and ALLOWED_IRR offset range */ |
102 | + case APIC_IRR ... APIC_IRR + 0x74: | 118 | + case APIC_IRR ... APIC_IRR + 0x74: |
103 | + /* | 119 | + /* |
104 | + * Either aligned at 16 bytes for valid IRR reg offset or a | 120 | + * Either aligned at 16 bytes for valid IRR reg offset or a |
105 | + * valid Secure AVIC ALLOWED_IRR offset. | 121 | + * valid Secure AVIC ALLOWED_IRR offset. |
106 | + */ | 122 | + */ |
107 | + WARN_ONCE(!(IS_ALIGNED(reg, 16) || IS_ALIGNED(reg - SAVIC_ALLOWED_IRR_OFFSET, 16)), | 123 | + if (WARN_ONCE(!(IS_ALIGNED(reg, 16) || |
108 | + "Misaligned IRR/ALLOWED_IRR reg offset %#x", reg); | 124 | + IS_ALIGNED(reg - SAVIC_ALLOWED_IRR, 16)), |
109 | + return get_reg(backing_page, reg); | 125 | + "Misaligned IRR/ALLOWED_IRR APIC reg read offset 0x%x", reg)) |
126 | + return 0; | ||
127 | + return get_reg(reg); | ||
110 | + default: | 128 | + default: |
111 | + pr_err("Permission denied: read of Secure AVIC reg offset %#x\n", reg); | 129 | + pr_err("Permission denied: read of Secure AVIC reg offset 0x%x\n", reg); |
112 | + return 0; | 130 | + return 0; |
113 | + } | 131 | + } |
114 | +} | 132 | +} |
115 | + | 133 | + |
116 | +#define SAVIC_NMI_REQ_OFFSET 0x278 | 134 | +#define SAVIC_NMI_REQ 0x278 |
117 | + | 135 | + |
118 | +static void x2apic_savic_write(u32 reg, u32 data) | 136 | +static void x2apic_savic_write(u32 reg, u32 data) |
119 | +{ | 137 | +{ |
120 | + void *backing_page = this_cpu_read(apic_backing_page); | ||
121 | + | ||
122 | + switch (reg) { | 138 | + switch (reg) { |
123 | + case APIC_LVTT: | 139 | + case APIC_LVTT: |
124 | + case APIC_LVT0: | 140 | + case APIC_LVT0: |
125 | + case APIC_LVT1: | 141 | + case APIC_LVT1: |
126 | + case APIC_TMICT: | 142 | + case APIC_TMICT: |
127 | + case APIC_TDCR: | 143 | + case APIC_TDCR: |
128 | + case APIC_SELF_IPI: | 144 | + case APIC_SELF_IPI: |
129 | + /* APIC_ID is writable and configured by guest for Secure AVIC */ | ||
130 | + case APIC_ID: | ||
131 | + case APIC_TASKPRI: | 145 | + case APIC_TASKPRI: |
132 | + case APIC_EOI: | 146 | + case APIC_EOI: |
133 | + case APIC_SPIV: | 147 | + case APIC_SPIV: |
134 | + case SAVIC_NMI_REQ_OFFSET: | 148 | + case SAVIC_NMI_REQ: |
135 | + case APIC_ESR: | 149 | + case APIC_ESR: |
136 | + case APIC_ICR: | 150 | + case APIC_ICR: |
137 | + case APIC_LVTTHMR: | 151 | + case APIC_LVTTHMR: |
138 | + case APIC_LVTPC: | 152 | + case APIC_LVTPC: |
139 | + case APIC_LVTERR: | 153 | + case APIC_LVTERR: |
140 | + case APIC_ECTRL: | 154 | + case APIC_ECTRL: |
141 | + case APIC_SEOI: | 155 | + case APIC_SEOI: |
142 | + case APIC_IER: | 156 | + case APIC_IER: |
143 | + case APIC_EILVTn(0) ... APIC_EILVTn(3): | 157 | + case APIC_EILVTn(0) ... APIC_EILVTn(3): |
144 | + set_reg(backing_page, reg, data); | 158 | + set_reg(reg, data); |
145 | + break; | 159 | + break; |
146 | + /* ALLOWED_IRR offsets are writable */ | 160 | + /* ALLOWED_IRR offsets are writable */ |
147 | + case SAVIC_ALLOWED_IRR_OFFSET ... SAVIC_ALLOWED_IRR_OFFSET + 0x70: | 161 | + case SAVIC_ALLOWED_IRR ... SAVIC_ALLOWED_IRR + 0x70: |
148 | + if (IS_ALIGNED(reg - SAVIC_ALLOWED_IRR_OFFSET, 16)) { | 162 | + if (IS_ALIGNED(reg - SAVIC_ALLOWED_IRR, 16)) { |
149 | + set_reg(backing_page, reg, data); | 163 | + set_reg(reg, data); |
150 | + break; | 164 | + break; |
151 | + } | 165 | + } |
152 | + fallthrough; | 166 | + fallthrough; |
153 | + default: | 167 | + default: |
154 | + pr_err("Permission denied: write to Secure AVIC reg offset %#x\n", reg); | 168 | + pr_err("Permission denied: write to Secure AVIC reg offset 0x%x\n", reg); |
155 | + } | 169 | + } |
156 | +} | 170 | +} |
157 | + | 171 | + |
158 | static void x2apic_savic_send_IPI(int cpu, int vector) | 172 | static void x2apic_savic_send_ipi(int cpu, int vector) |
159 | { | 173 | { |
160 | u32 dest = per_cpu(x86_cpu_to_apicid, cpu); | 174 | u32 dest = per_cpu(x86_cpu_to_apicid, cpu); |
161 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { | 175 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { |
162 | .send_IPI_self = x2apic_send_IPI_self, | 176 | .send_IPI_self = x2apic_send_IPI_self, |
163 | .nmi_to_offline_cpu = true, | 177 | .nmi_to_offline_cpu = true, |
... | ... |
New patch | |||
---|---|---|---|
1 | Initialize the APIC ID in the Secure AVIC APIC backing page with | ||
2 | the APIC_ID MSR value read from the hypervisor. CPU topology evaluation |
3 | later during boot would catch and report any duplicate APIC ID for | ||
4 | two CPUs. | ||
1 | 5 | ||
6 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
7 | --- | ||
8 | Changes since v2: | ||
9 | - Drop duplicate APIC ID checks. | ||
10 | |||
11 | arch/x86/kernel/apic/x2apic_savic.c | 13 +++++++++++++ | ||
12 | 1 file changed, 13 insertions(+) | ||
13 | |||
14 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | ||
15 | index XXXXXXX..XXXXXXX 100644 | ||
16 | --- a/arch/x86/kernel/apic/x2apic_savic.c | ||
17 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | ||
18 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, in | ||
19 | __send_ipi_mask(mask, vector, true); | ||
20 | } | ||
21 | |||
22 | +static void init_apic_page(void) | ||
23 | +{ | ||
24 | + u32 apic_id; | ||
25 | + | ||
26 | + /* | ||
27 | + * Before Secure AVIC is enabled, APIC MSR reads are intercepted. |
28 | + * An APIC_ID MSR read returns the value from the hypervisor. |
29 | + */ | ||
30 | + apic_id = native_apic_msr_read(APIC_ID); | ||
31 | + set_reg(APIC_ID, apic_id); | ||
32 | +} | ||
33 | + | ||
34 | static void x2apic_savic_setup(void) | ||
35 | { | ||
36 | void *backing_page; | ||
37 | enum es_result ret; | ||
38 | unsigned long gpa; | ||
39 | |||
40 | + init_apic_page(); | ||
41 | backing_page = this_cpu_ptr(apic_page); | ||
42 | gpa = __pa(backing_page); | ||
43 | |||
44 | -- | ||
45 | 2.34.1 |
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
---|---|---|---|
2 | |||
3 | Add update_vector callback to set/clear ALLOWED_IRR field in | 1 | Add update_vector callback to set/clear ALLOWED_IRR field in |
4 | the APIC backing page. The allowed IRR vector indicates the | 2 | a vCPU's APIC backing page for external vectors. The ALLOWED_IRR |
5 | interrupt vectors which the guest allows the hypervisor to | 3 | field indicates the interrupt vectors which the guest allows the |
6 | send (typically for emulated devices). ALLOWED_IRR is meant | 4 | hypervisor to send (typically for emulated devices). Interrupt |
7 | to be used specifically for vectors that the hypervisor is | 5 | vectors used exclusively by the guest itself and the vectors which |
8 | allowed to inject, such as device interrupts. Interrupt | 6 | are not emulated by the hypervisor, such as IPI vectors, are part |
9 | vectors used exclusively by the guest itself (like IPI vectors) | 7 | of system vectors and are not set in the ALLOWED_IRR. |
10 | should not be allowed to be injected into the guest for security | 8 | |
11 | reasons. | 9 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
12 | |||
13 | The update_vector callback is invoked from APIC vector domain | ||
14 | whenever a vector is allocated, freed or moved. | ||
15 | |||
16 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 10 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
17 | Co-developed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
18 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 11 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
19 | --- | 12 | --- |
20 | arch/x86/include/asm/apic.h | 2 ++ | 13 | Changes since v2: |
21 | arch/x86/kernel/apic/vector.c | 8 ++++++++ | 14 | |
22 | arch/x86/kernel/apic/x2apic_savic.c | 21 +++++++++++++++++++++ | 15 | - Associate update_vector() invocation with vector allocation/free |
23 | 3 files changed, 31 insertions(+) | 16 | calls. |
17 | - Cleanup and simplify vector bitmap calculation for ALLOWED_IRR. | ||
18 | |||
19 | arch/x86/include/asm/apic.h | 2 + | ||
20 | arch/x86/include/asm/apic.h | 2 + | ||
21 | arch/x86/kernel/apic/vector.c | 59 +++++++++++++++++++++++------ | ||
22 | arch/x86/kernel/apic/x2apic_savic.c | 20 ++++++++++ | ||
23 | 3 files changed, 69 insertions(+), 12 deletions(-) | ||
24 | 24 | ||
25 | diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h | 25 | diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h |
26 | index XXXXXXX..XXXXXXX 100644 | 26 | index XXXXXXX..XXXXXXX 100644 |
27 | --- a/arch/x86/include/asm/apic.h | 27 | --- a/arch/x86/include/asm/apic.h |
28 | +++ b/arch/x86/include/asm/apic.h | 28 | +++ b/arch/x86/include/asm/apic.h |
... | ... | ||
37 | 37 | ||
38 | diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c | 38 | diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c |
39 | index XXXXXXX..XXXXXXX 100644 | 39 | index XXXXXXX..XXXXXXX 100644 |
40 | --- a/arch/x86/kernel/apic/vector.c | 40 | --- a/arch/x86/kernel/apic/vector.c |
41 | +++ b/arch/x86/kernel/apic/vector.c | 41 | +++ b/arch/x86/kernel/apic/vector.c |
42 | @@ -XXX,XX +XXX,XX @@ static void apic_update_irq_cfg(struct irq_data *irqd, unsigned int vector, | ||
43 | apicd->hw_irq_cfg.dest_apicid); | ||
44 | } | ||
45 | |||
46 | -static void apic_update_vector(struct irq_data *irqd, unsigned int newvec, | ||
47 | - unsigned int newcpu) | ||
48 | +static inline void apic_update_vector(unsigned int cpu, unsigned int vector, bool set) | ||
49 | +{ | ||
50 | + if (apic->update_vector) | ||
51 | + apic->update_vector(cpu, vector, set); | ||
52 | +} | ||
53 | + | ||
54 | +static int irq_alloc_vector(const struct cpumask *dest, bool resvd, unsigned int *cpu) | ||
55 | +{ | ||
56 | + int vector; | ||
57 | + | ||
58 | + vector = irq_matrix_alloc(vector_matrix, dest, resvd, cpu); | ||
59 | + | ||
60 | + if (vector >= 0) | ||
61 | + apic_update_vector(*cpu, vector, true); | ||
62 | + | ||
63 | + return vector; | ||
64 | +} | ||
65 | + | ||
66 | +static int irq_alloc_managed_vector(unsigned int *cpu) | ||
67 | +{ | ||
68 | + int vector; | ||
69 | + | ||
70 | + vector = irq_matrix_alloc_managed(vector_matrix, vector_searchmask, cpu); | ||
71 | + | ||
72 | + if (vector >= 0) | ||
73 | + apic_update_vector(*cpu, vector, true); | ||
74 | + | ||
75 | + return vector; | ||
76 | +} | ||
77 | + | ||
78 | +static void irq_free_vector(unsigned int cpu, unsigned int vector, bool managed) | ||
79 | +{ | ||
80 | + apic_update_vector(cpu, vector, false); | ||
81 | + irq_matrix_free(vector_matrix, cpu, vector, managed); | ||
82 | +} | ||
83 | + | ||
84 | +static void apic_chipd_update_vector(struct irq_data *irqd, unsigned int newvec, | ||
85 | + unsigned int newcpu) | ||
86 | { | ||
87 | struct apic_chip_data *apicd = apic_chip_data(irqd); | ||
88 | struct irq_desc *desc = irq_data_to_desc(irqd); | ||
42 | @@ -XXX,XX +XXX,XX @@ static void apic_update_vector(struct irq_data *irqd, unsigned int newvec, | 89 | @@ -XXX,XX +XXX,XX @@ static void apic_update_vector(struct irq_data *irqd, unsigned int newvec, |
43 | apicd->prev_cpu = apicd->cpu; | 90 | apicd->prev_cpu = apicd->cpu; |
44 | WARN_ON_ONCE(apicd->cpu == newcpu); | 91 | WARN_ON_ONCE(apicd->cpu == newcpu); |
45 | } else { | 92 | } else { |
46 | + if (apic->update_vector) | 93 | - irq_matrix_free(vector_matrix, apicd->cpu, apicd->vector, |
47 | + apic->update_vector(apicd->cpu, apicd->vector, false); | 94 | - managed); |
48 | irq_matrix_free(vector_matrix, apicd->cpu, apicd->vector, | 95 | + irq_free_vector(apicd->cpu, apicd->vector, managed); |
49 | managed); | ||
50 | } | 96 | } |
51 | @@ -XXX,XX +XXX,XX @@ static void apic_update_vector(struct irq_data *irqd, unsigned int newvec, | 97 | |
52 | apicd->cpu = newcpu; | 98 | setnew: |
53 | BUG_ON(!IS_ERR_OR_NULL(per_cpu(vector_irq, newcpu)[newvec])); | 99 | @@ -XXX,XX +XXX,XX @@ assign_vector_locked(struct irq_data *irqd, const struct cpumask *dest) |
54 | per_cpu(vector_irq, newcpu)[newvec] = desc; | 100 | if (apicd->move_in_progress || !hlist_unhashed(&apicd->clist)) |
55 | + if (apic->update_vector) | 101 | return -EBUSY; |
56 | + apic->update_vector(apicd->cpu, apicd->vector, true); | 102 | |
103 | - vector = irq_matrix_alloc(vector_matrix, dest, resvd, &cpu); | ||
104 | + vector = irq_alloc_vector(dest, resvd, &cpu); | ||
105 | trace_vector_alloc(irqd->irq, vector, resvd, vector); | ||
106 | if (vector < 0) | ||
107 | return vector; | ||
108 | - apic_update_vector(irqd, vector, cpu); | ||
109 | + apic_chipd_update_vector(irqd, vector, cpu); | ||
110 | apic_update_irq_cfg(irqd, vector, cpu); | ||
111 | |||
112 | return 0; | ||
113 | @@ -XXX,XX +XXX,XX @@ assign_managed_vector(struct irq_data *irqd, const struct cpumask *dest) | ||
114 | /* set_affinity might call here for nothing */ | ||
115 | if (apicd->vector && cpumask_test_cpu(apicd->cpu, vector_searchmask)) | ||
116 | return 0; | ||
117 | - vector = irq_matrix_alloc_managed(vector_matrix, vector_searchmask, | ||
118 | - &cpu); | ||
119 | + vector = irq_alloc_managed_vector(&cpu); | ||
120 | trace_vector_alloc_managed(irqd->irq, vector, vector); | ||
121 | if (vector < 0) | ||
122 | return vector; | ||
123 | - apic_update_vector(irqd, vector, cpu); | ||
124 | + apic_chipd_update_vector(irqd, vector, cpu); | ||
125 | apic_update_irq_cfg(irqd, vector, cpu); | ||
126 | return 0; | ||
57 | } | 127 | } |
58 | 128 | @@ -XXX,XX +XXX,XX @@ static void clear_irq_vector(struct irq_data *irqd) | |
59 | static void vector_assign_managed_shutdown(struct irq_data *irqd) | 129 | apicd->prev_cpu); |
130 | |||
131 | per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_SHUTDOWN; | ||
132 | - irq_matrix_free(vector_matrix, apicd->cpu, vector, managed); | ||
133 | + irq_free_vector(apicd->cpu, vector, managed); | ||
134 | apicd->vector = 0; | ||
135 | |||
136 | /* Clean up move in progress */ | ||
137 | @@ -XXX,XX +XXX,XX @@ static void clear_irq_vector(struct irq_data *irqd) | ||
138 | return; | ||
139 | |||
140 | per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_SHUTDOWN; | ||
141 | - irq_matrix_free(vector_matrix, apicd->prev_cpu, vector, managed); | ||
142 | + irq_free_vector(apicd->prev_cpu, vector, managed); | ||
143 | apicd->prev_vector = 0; | ||
144 | apicd->move_in_progress = 0; | ||
145 | hlist_del_init(&apicd->clist); | ||
60 | @@ -XXX,XX +XXX,XX @@ static bool vector_configure_legacy(unsigned int virq, struct irq_data *irqd, | 146 | @@ -XXX,XX +XXX,XX @@ static bool vector_configure_legacy(unsigned int virq, struct irq_data *irqd, |
61 | if (irqd_is_activated(irqd)) { | 147 | if (irqd_is_activated(irqd)) { |
62 | trace_vector_setup(virq, true, 0); | 148 | trace_vector_setup(virq, true, 0); |
63 | apic_update_irq_cfg(irqd, apicd->vector, apicd->cpu); | 149 | apic_update_irq_cfg(irqd, apicd->vector, apicd->cpu); |
64 | + if (apic->update_vector) | 150 | + apic_update_vector(apicd->cpu, apicd->vector, true); |
65 | + apic->update_vector(apicd->cpu, apicd->vector, true); | ||
66 | } else { | 151 | } else { |
67 | /* Release the vector */ | 152 | /* Release the vector */ |
68 | apicd->can_reserve = true; | 153 | apicd->can_reserve = true; |
69 | irqd_set_can_reserve(irqd); | 154 | @@ -XXX,XX +XXX,XX @@ static void free_moved_vector(struct apic_chip_data *apicd) |
70 | clear_irq_vector(irqd); | 155 | * affinity mask comes online. |
71 | + if (apic->update_vector) | 156 | */ |
72 | + apic->update_vector(apicd->cpu, apicd->vector, false); | 157 | trace_vector_free_moved(apicd->irq, cpu, vector, managed); |
73 | realloc = true; | 158 | - irq_matrix_free(vector_matrix, cpu, vector, managed); |
74 | } | 159 | + irq_free_vector(cpu, vector, managed); |
75 | raw_spin_unlock_irqrestore(&vector_lock, flags); | 160 | per_cpu(vector_irq, cpu)[vector] = VECTOR_UNUSED; |
161 | hlist_del_init(&apicd->clist); | ||
162 | apicd->prev_vector = 0; | ||
76 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | 163 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c |
77 | index XXXXXXX..XXXXXXX 100644 | 164 | index XXXXXXX..XXXXXXX 100644 |
78 | --- a/arch/x86/kernel/apic/x2apic_savic.c | 165 | --- a/arch/x86/kernel/apic/x2apic_savic.c |
79 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | 166 | +++ b/arch/x86/kernel/apic/x2apic_savic.c |
80 | @@ -XXX,XX +XXX,XX @@ | 167 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, in |
81 | 168 | __send_ipi_mask(mask, vector, true); | |
82 | #include "local.h" | ||
83 | |||
84 | +#define VEC_POS(v) ((v) & (32 - 1)) | ||
85 | +#define REG_POS(v) (((v) >> 5) << 4) | ||
86 | + | ||
87 | static DEFINE_PER_CPU(void *, apic_backing_page); | ||
88 | static DEFINE_PER_CPU(bool, savic_setup_done); | ||
89 | |||
90 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_IPI_mask_allbutself(const struct cpumask *mask, in | ||
91 | __send_IPI_mask(mask, vector, APIC_DEST_ALLBUT); | ||
92 | } | 169 | } |
93 | 170 | ||
94 | +static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set) | 171 | +static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set) |
95 | +{ | 172 | +{ |
96 | + void *backing_page; | 173 | + struct apic_page *ap = per_cpu_ptr(apic_page, cpu); |
97 | + unsigned long *reg; | 174 | + unsigned long *sirr = (unsigned long *) &ap->bytes[SAVIC_ALLOWED_IRR]; |
98 | + int reg_off; | 175 | + unsigned int bit; |
99 | + | 176 | + |
100 | + backing_page = per_cpu(apic_backing_page, cpu); | 177 | + /* |
101 | + reg_off = SAVIC_ALLOWED_IRR_OFFSET + REG_POS(vector); | 178 | + * The registers are 32-bit wide and 16-byte aligned. |
102 | + reg = (unsigned long *)((char *)backing_page + reg_off); | 179 | + * Compensate for the resulting bit number spacing. |
180 | + */ | ||
181 | + bit = vector + 96 * (vector / 32); | ||
103 | + | 182 | + |
104 | + if (set) | 183 | + if (set) |
105 | + test_and_set_bit(VEC_POS(vector), reg); | 184 | + set_bit(bit, sirr); |
106 | + else | 185 | + else |
107 | + test_and_clear_bit(VEC_POS(vector), reg); | 186 | + clear_bit(bit, sirr); |
108 | +} | 187 | +} |
109 | + | 188 | + |
110 | static void init_backing_page(void *backing_page) | 189 | static void init_apic_page(void) |
111 | { | 190 | { |
112 | u32 hv_apic_id; | 191 | u32 apic_id; |
113 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { | 192 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { |
114 | .eoi = native_apic_msr_eoi, | 193 | .eoi = native_apic_msr_eoi, |
115 | .icr_read = native_x2apic_icr_read, | 194 | .icr_read = native_x2apic_icr_read, |
116 | .icr_write = native_x2apic_icr_write, | 195 | .icr_write = native_x2apic_icr_write, |
117 | + | 196 | + |
118 | + .update_vector = x2apic_savic_update_vector, | 197 | + .update_vector = x2apic_savic_update_vector, |
119 | }; | 198 | }; |
120 | 199 | ||
121 | apic_driver(apic_x2apic_savic); | 200 | apic_driver(apic_x2apic_savic); |
122 | -- | 201 | -- |
123 | 2.34.1 | 202 | 2.34.1 |
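
Note: the "bit number spacing" arithmetic that v3 adds above (bit = vector + 96 * (vector / 32)) can be sanity-checked in isolation. Below is a minimal standalone userspace C sketch, not the kernel code; the helper name is illustrative:

#include <assert.h>
#include <stdio.h>

/*
 * Each 32-bit APIC register covers 32 vectors but is 16-byte (128-bit)
 * aligned in the backing page, so, viewed as one flat bitmap, consecutive
 * register banks sit 96 padding bits apart.
 */
static unsigned int savic_irr_bit(unsigned int vector)
{
        return vector + 96 * (vector / 32);
}

int main(void)
{
        assert(savic_irr_bit(0) == 0);     /* bit 0 of the first register  */
        assert(savic_irr_bit(32) == 128);  /* bit 0 of the second register */
        assert(savic_irr_bit(33) == 129);  /* bit 1 of the second register */

        /* Equivalent split form: 128 * (vector / 32) + vector % 32. */
        for (unsigned int v = 0; v < 256; v++)
                assert(savic_irr_bit(v) == 128 * (v / 32) + v % 32);

        printf("bit spacing checks passed\n");
        return 0;
}

This also explains why the v2 VEC_POS()/REG_POS() macros could be dropped: one flat bit index replaces the separate register-offset and bit-position computations.
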
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
---|---|---|---|
2 | |||
3 | With Secure AVIC only Self-IPI is accelerated. To handle all the | 1 | With Secure AVIC only Self-IPI is accelerated. To handle all the |
4 | other IPIs, add new callbacks for sending IPIs, which write to the | 2 | other IPIs, add new callbacks for sending IPIs, which write to the
5 | IRR of the target guest APIC backing page (after decoding the ICR | 3 | IRR of the target guest vCPU's APIC backing page and then issue a
6 | register) and then issue a VMGEXIT for the hypervisor to notify the | 4 | GHCB protocol MSR write event for the hypervisor to notify the
7 | target vCPU. | 5 | target vCPU. |
8 | 6 | ||
7 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
9 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 8 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
10 | Co-developed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
11 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 9 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
12 | --- | 10 | --- |
13 | arch/x86/coco/sev/core.c | 25 +++++ | 11 | Changes since v2: |
12 | - Simplify vector updates in bitmap. | ||
13 | - Cleanup icr_data parcelling and unparcelling. | ||
14 | - Misc cleanups. | ||
15 | - Fix warning reported by kernel test robot. | ||
16 | |||
17 | arch/x86/coco/sev/core.c | 40 ++++++- | ||
14 | arch/x86/include/asm/sev.h | 2 + | 18 | arch/x86/include/asm/sev.h | 2 + |
15 | arch/x86/kernel/apic/x2apic_savic.c | 152 +++++++++++++++++++++++++--- | 19 | arch/x86/kernel/apic/x2apic_savic.c | 164 ++++++++++++++++++++++------ |
16 | 3 files changed, 166 insertions(+), 13 deletions(-) | 20 | 3 files changed, 167 insertions(+), 39 deletions(-) |
17 | 21 | ||
18 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c | 22 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c |
19 | index XXXXXXX..XXXXXXX 100644 | 23 | index XXXXXXX..XXXXXXX 100644 |
20 | --- a/arch/x86/coco/sev/core.c | 24 | --- a/arch/x86/coco/sev/core.c |
21 | +++ b/arch/x86/coco/sev/core.c | 25 | +++ b/arch/x86/coco/sev/core.c |
22 | @@ -XXX,XX +XXX,XX @@ enum es_result sev_ghcb_msr_read(u64 msr, u64 *value) | 26 | @@ -XXX,XX +XXX,XX @@ static enum es_result __vc_handle_secure_tsc_msrs(struct pt_regs *regs, bool wri |
27 | return ES_OK; | ||
28 | } | ||
29 | |||
30 | -static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) | ||
31 | +static enum es_result __vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt, bool write) | ||
32 | { | ||
33 | struct pt_regs *regs = ctxt->regs; | ||
34 | enum es_result ret; | ||
35 | - bool write; | ||
36 | - | ||
37 | - /* Is it a WRMSR? */ | ||
38 | - write = ctxt->insn.opcode.bytes[1] == 0x30; | ||
39 | |||
40 | switch (regs->cx) { | ||
41 | case MSR_SVSM_CAA: | ||
42 | @@ -XXX,XX +XXX,XX @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) | ||
23 | return ret; | 43 | return ret; |
24 | } | 44 | } |
25 | 45 | ||
26 | +enum es_result sev_ghcb_msr_write(u64 msr, u64 value) | 46 | +static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) |
27 | +{ | 47 | +{ |
48 | + return __vc_handle_msr(ghcb, ctxt, ctxt->insn.opcode.bytes[1] == 0x30); | ||
49 | +} | ||
50 | + | ||
51 | +void savic_ghcb_msr_write(u32 reg, u64 value) | ||
52 | +{ | ||
53 | + u64 msr = APIC_BASE_MSR + (reg >> 4); | ||
28 | + struct pt_regs regs = { | 54 | + struct pt_regs regs = { |
29 | + .cx = msr, | 55 | + .cx = msr, |
30 | + .ax = lower_32_bits(value), | 56 | + .ax = lower_32_bits(value), |
31 | + .dx = upper_32_bits(value) | 57 | + .dx = upper_32_bits(value) |
32 | + }; | 58 | + }; |
... | ... | ||
39 | + local_irq_save(flags); | 65 | + local_irq_save(flags); |
40 | + ghcb = __sev_get_ghcb(&state); | 66 | + ghcb = __sev_get_ghcb(&state); |
41 | + vc_ghcb_invalidate(ghcb); | 67 | + vc_ghcb_invalidate(ghcb); |
42 | + | 68 | + |
43 | + ret = __vc_handle_msr(ghcb, &ctxt, true); | 69 | + ret = __vc_handle_msr(ghcb, &ctxt, true); |
70 | + if (ret != ES_OK) { | ||
71 | + pr_err("Secure AVIC msr (0x%llx) write returned error (%d)\n", msr, ret); | ||
72 | + /* MSR writes should never fail. Any failure is fatal error for SNP guest */ | ||
73 | + snp_abort(); | ||
74 | + } | ||
44 | + | 75 | + |
45 | + __sev_put_ghcb(&state); | 76 | + __sev_put_ghcb(&state); |
46 | + local_irq_restore(flags); | 77 | + local_irq_restore(flags); |
47 | + | 78 | +} |
48 | + return ret; | 79 | + |
49 | +} | 80 | enum es_result savic_register_gpa(u64 gpa) |
50 | + | ||
51 | enum es_result sev_notify_savic_gpa(u64 gpa) | ||
52 | { | 81 | { |
53 | struct ghcb_state state; | 82 | struct ghcb_state state; |
54 | diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h | 83 | diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h |
55 | index XXXXXXX..XXXXXXX 100644 | 84 | index XXXXXXX..XXXXXXX 100644 |
56 | --- a/arch/x86/include/asm/sev.h | 85 | --- a/arch/x86/include/asm/sev.h |
57 | +++ b/arch/x86/include/asm/sev.h | 86 | +++ b/arch/x86/include/asm/sev.h |
58 | @@ -XXX,XX +XXX,XX @@ void sev_show_status(void); | 87 | @@ -XXX,XX +XXX,XX @@ int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_req *req |
59 | void snp_update_svsm_ca(void); | 88 | void __init snp_secure_tsc_prepare(void); |
60 | enum es_result sev_notify_savic_gpa(u64 gpa); | 89 | void __init snp_secure_tsc_init(void); |
61 | enum es_result sev_ghcb_msr_read(u64 msr, u64 *value); | 90 | enum es_result savic_register_gpa(u64 gpa); |
62 | +enum es_result sev_ghcb_msr_write(u64 msr, u64 value); | 91 | +void savic_ghcb_msr_write(u32 reg, u64 value); |
63 | 92 | ||
64 | #else /* !CONFIG_AMD_MEM_ENCRYPT */ | 93 | #else /* !CONFIG_AMD_MEM_ENCRYPT */ |
65 | 94 | ||
66 | @@ -XXX,XX +XXX,XX @@ static inline void sev_show_status(void) { } | 95 | @@ -XXX,XX +XXX,XX @@ static inline int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_ |
67 | static inline void snp_update_svsm_ca(void) { } | 96 | static inline void __init snp_secure_tsc_prepare(void) { } |
68 | static inline enum es_result sev_notify_savic_gpa(u64 gpa) { return ES_UNSUPPORTED; } | 97 | static inline void __init snp_secure_tsc_init(void) { } |
69 | static inline enum es_result sev_ghcb_msr_read(u64 msr, u64 *value) { return ES_UNSUPPORTED; } | 98 | static inline enum es_result savic_register_gpa(u64 gpa) { return ES_UNSUPPORTED; } |
70 | +static inline enum es_result sev_ghcb_msr_write(u64 msr, u64 value) { return ES_UNSUPPORTED; } | 99 | +static inline void savic_ghcb_msr_write(u32 reg, u64 value) { } |
71 | 100 | ||
72 | #endif /* CONFIG_AMD_MEM_ENCRYPT */ | 101 | #endif /* CONFIG_AMD_MEM_ENCRYPT */ |
73 | 102 | ||
74 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | 103 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c |
75 | index XXXXXXX..XXXXXXX 100644 | 104 | index XXXXXXX..XXXXXXX 100644 |
76 | --- a/arch/x86/kernel/apic/x2apic_savic.c | 105 | --- a/arch/x86/kernel/apic/x2apic_savic.c |
77 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | 106 | +++ b/arch/x86/kernel/apic/x2apic_savic.c |
78 | @@ -XXX,XX +XXX,XX @@ static u32 read_msr_from_hv(u32 reg) | 107 | @@ -XXX,XX +XXX,XX @@ static __always_inline void set_reg(unsigned int offset, u32 val) |
79 | return lower_32_bits(data); | 108 | |
80 | } | 109 | #define SAVIC_ALLOWED_IRR 0x204 |
81 | 110 | ||
82 | +static void write_msr_to_hv(u32 reg, u64 data) | 111 | +static inline void update_vector(unsigned int cpu, unsigned int offset, |
83 | +{ | 112 | + unsigned int vector, bool set) |
84 | + u64 msr; | 113 | +{ |
85 | + int ret; | 114 | + struct apic_page *ap = per_cpu_ptr(apic_page, cpu); |
86 | + | 115 | + unsigned long *reg = (unsigned long *) &ap->bytes[offset]; |
87 | + msr = APIC_BASE_MSR + (reg >> 4); | 116 | + unsigned int bit; |
88 | + ret = sev_ghcb_msr_write(msr, data); | 117 | + |
89 | + if (ret != ES_OK) { | 118 | + /* |
90 | + pr_err("Secure AVIC msr (%#llx) write returned error (%d)\n", msr, ret); | 119 | + * The registers are 32-bit wide and 16-byte aligned. |
91 | + /* MSR writes should never fail. Any failure is fatal error for SNP guest */ | 120 | + * Compensate for the resulting bit number spacing. |
92 | + snp_abort(); | 121 | + */ |
93 | + } | 122 | + bit = vector + 96 * (vector / 32); |
94 | +} | 123 | + |
95 | + | 124 | + if (set) |
96 | #define SAVIC_ALLOWED_IRR_OFFSET 0x204 | 125 | + set_bit(bit, reg); |
97 | 126 | + else | |
127 | + clear_bit(bit, reg); | ||
128 | +} | ||
129 | + | ||
98 | static u32 x2apic_savic_read(u32 reg) | 130 | static u32 x2apic_savic_read(u32 reg) |
131 | { | ||
132 | /* | ||
99 | @@ -XXX,XX +XXX,XX @@ static u32 x2apic_savic_read(u32 reg) | 133 | @@ -XXX,XX +XXX,XX @@ static u32 x2apic_savic_read(u32 reg) |
134 | |||
135 | #define SAVIC_NMI_REQ 0x278 | ||
136 | |||
137 | +static inline void self_ipi_reg_write(unsigned int vector) | ||
138 | +{ | ||
139 | + /* | ||
140 | + * Secure AVIC hardware accelerates guest's MSR write to SELF_IPI | ||
141 | + * register. It updates the IRR in the APIC backing page, evaluates | ||
142 | + * the new IRR for interrupt injection and continues with guest | ||
143 | + * code execution. | ||
144 | + */ | ||
145 | + native_apic_msr_write(APIC_SELF_IPI, vector); | ||
146 | +} | ||
147 | + | ||
100 | static void x2apic_savic_write(u32 reg, u32 data) | 148 | static void x2apic_savic_write(u32 reg, u32 data) |
101 | { | 149 | { |
102 | void *backing_page = this_cpu_read(apic_backing_page); | ||
103 | + unsigned int cfg; | ||
104 | |||
105 | switch (reg) { | 150 | switch (reg) { |
106 | case APIC_LVTT: | ||
107 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) | 151 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) |
108 | case APIC_LVT1: | 152 | case APIC_LVT1: |
109 | case APIC_TMICT: | 153 | case APIC_TMICT: |
110 | case APIC_TDCR: | 154 | case APIC_TDCR: |
111 | - case APIC_SELF_IPI: | 155 | - case APIC_SELF_IPI: |
112 | /* APIC_ID is writable and configured by guest for Secure AVIC */ | ||
113 | case APIC_ID: | ||
114 | case APIC_TASKPRI: | 156 | case APIC_TASKPRI: |
157 | case APIC_EOI: | ||
158 | case APIC_SPIV: | ||
115 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) | 159 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) |
116 | case APIC_EILVTn(0) ... APIC_EILVTn(3): | 160 | case APIC_EILVTn(0) ... APIC_EILVTn(3): |
117 | set_reg(backing_page, reg, data); | 161 | set_reg(reg, data); |
118 | break; | 162 | break; |
119 | + /* Self IPIs are accelerated by hardware, use wrmsr */ | ||
120 | + case APIC_SELF_IPI: | 163 | + case APIC_SELF_IPI: |
121 | + cfg = __prepare_ICR(APIC_DEST_SELF, data, 0); | 164 | + self_ipi_reg_write(data); |
122 | + native_x2apic_icr_write(cfg, 0); | ||
123 | + break; | 165 | + break; |
124 | /* ALLOWED_IRR offsets are writable */ | 166 | /* ALLOWED_IRR offsets are writable */ |
125 | case SAVIC_ALLOWED_IRR_OFFSET ... SAVIC_ALLOWED_IRR_OFFSET + 0x70: | 167 | case SAVIC_ALLOWED_IRR ... SAVIC_ALLOWED_IRR + 0x70: |
126 | if (IS_ALIGNED(reg - SAVIC_ALLOWED_IRR_OFFSET, 16)) { | 168 | if (IS_ALIGNED(reg - SAVIC_ALLOWED_IRR, 16)) { |
127 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) | 169 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) |
128 | } | 170 | } |
129 | } | 171 | } |
130 | 172 | ||
131 | +static void send_ipi(int cpu, int vector) | 173 | +static inline void send_ipi_dest(unsigned int cpu, unsigned int vector) |
132 | +{ | 174 | +{ |
133 | + void *backing_page; | 175 | + update_vector(cpu, APIC_IRR, vector, true); |
134 | + int reg_off; | 176 | +} |
135 | + | 177 | + |
136 | + backing_page = per_cpu(apic_backing_page, cpu); | 178 | +static void send_ipi_allbut(unsigned int vector) |
137 | + reg_off = APIC_IRR + REG_POS(vector); | 179 | +{ |
138 | + /* | 180 | + unsigned int cpu, src_cpu; |
139 | + * Use test_and_set_bit() to ensure that IRR updates are atomic w.r.t. other | 181 | + unsigned long flags; |
140 | + * IRR updates such as during VMRUN and during CPU interrupt handling flow. | 182 | + |
141 | + */ | 183 | + local_irq_save(flags); |
142 | + test_and_set_bit(VEC_POS(vector), (unsigned long *)((char *)backing_page + reg_off)); | 184 | + |
143 | +} | 185 | + src_cpu = raw_smp_processor_id(); |
144 | + | 186 | + |
145 | +static void send_ipi_dest(u64 icr_data) | 187 | + for_each_cpu(cpu, cpu_online_mask) { |
146 | +{ | 188 | + if (cpu == src_cpu) |
147 | + int vector, cpu; | 189 | + continue; |
148 | + | 190 | + send_ipi_dest(cpu, vector); |
149 | + vector = icr_data & APIC_VECTOR_MASK; | ||
150 | + cpu = icr_data >> 32; | ||
151 | + | ||
152 | + send_ipi(cpu, vector); | ||
153 | +} | ||
154 | + | ||
155 | +static void send_ipi_target(u64 icr_data) | ||
156 | +{ | ||
157 | + if (icr_data & APIC_DEST_LOGICAL) { | ||
158 | + pr_err("IPI target should be of PHYSICAL type\n"); | ||
159 | + return; | ||
160 | + } | 191 | + } |
161 | + | 192 | + |
162 | + send_ipi_dest(icr_data); | ||
163 | +} | ||
164 | + | ||
165 | +static void send_ipi_allbut(u64 icr_data) | ||
166 | +{ | ||
167 | + const struct cpumask *self_cpu_mask = get_cpu_mask(smp_processor_id()); | ||
168 | + unsigned long flags; | ||
169 | + int vector, cpu; | ||
170 | + | ||
171 | + vector = icr_data & APIC_VECTOR_MASK; | ||
172 | + local_irq_save(flags); | ||
173 | + for_each_cpu_andnot(cpu, cpu_present_mask, self_cpu_mask) | ||
174 | + send_ipi(cpu, vector); | ||
175 | + write_msr_to_hv(APIC_ICR, icr_data); | ||
176 | + local_irq_restore(flags); | 193 | + local_irq_restore(flags); |
177 | +} | 194 | +} |
178 | + | 195 | + |
179 | +static void send_ipi_allinc(u64 icr_data) | 196 | +static inline void self_ipi(unsigned int vector) |
180 | +{ | 197 | +{ |
181 | + int vector; | 198 | + u32 icr_low = APIC_SELF_IPI | vector; |
182 | + | 199 | + |
183 | + send_ipi_allbut(icr_data); | 200 | + native_x2apic_icr_write(icr_low, 0); |
184 | + vector = icr_data & APIC_VECTOR_MASK; | ||
185 | + native_x2apic_icr_write(APIC_DEST_SELF | vector, 0); | ||
186 | +} | 201 | +} |
187 | + | 202 | + |
188 | +static void x2apic_savic_icr_write(u32 icr_low, u32 icr_high) | 203 | +static void x2apic_savic_icr_write(u32 icr_low, u32 icr_high) |
189 | +{ | 204 | +{ |
190 | + int dsh, vector; | 205 | + unsigned int dsh, vector; |
191 | + u64 icr_data; | 206 | + u64 icr_data; |
192 | + | 207 | + |
193 | + icr_data = ((u64)icr_high) << 32 | icr_low; | ||
194 | + dsh = icr_low & APIC_DEST_ALLBUT; | 208 | + dsh = icr_low & APIC_DEST_ALLBUT; |
209 | + vector = icr_low & APIC_VECTOR_MASK; | ||
195 | + | 210 | + |
196 | + switch (dsh) { | 211 | + switch (dsh) { |
197 | + case APIC_DEST_SELF: | 212 | + case APIC_DEST_SELF: |
198 | + vector = icr_data & APIC_VECTOR_MASK; | 213 | + self_ipi(vector); |
199 | + x2apic_savic_write(APIC_SELF_IPI, vector); | ||
200 | + break; | 214 | + break; |
201 | + case APIC_DEST_ALLINC: | 215 | + case APIC_DEST_ALLINC: |
202 | + send_ipi_allinc(icr_data); | 216 | + self_ipi(vector); |
203 | + break; | 217 | + fallthrough; |
204 | + case APIC_DEST_ALLBUT: | 218 | + case APIC_DEST_ALLBUT: |
205 | + send_ipi_allbut(icr_data); | 219 | + send_ipi_allbut(vector); |
206 | + break; | 220 | + break; |
207 | + default: | 221 | + default: |
208 | + send_ipi_target(icr_data); | 222 | + send_ipi_dest(icr_high, vector); |
209 | + write_msr_to_hv(APIC_ICR, icr_data); | 223 | + break; |
210 | + } | 224 | + } |
211 | +} | 225 | + |
212 | + | 226 | + icr_data = ((u64)icr_high) << 32 | icr_low; |
213 | +static void __send_IPI_dest(unsigned int apicid, int vector, unsigned int dest) | 227 | + if (dsh != APIC_DEST_SELF) |
214 | +{ | 228 | + savic_ghcb_msr_write(APIC_ICR, icr_data); |
215 | + unsigned int cfg = __prepare_ICR(0, vector, dest); | 229 | +} |
216 | + | 230 | + |
217 | + x2apic_savic_icr_write(cfg, apicid); | 231 | +static void send_ipi(u32 dest, unsigned int vector, unsigned int dsh) |
218 | +} | 232 | +{ |
219 | + | 233 | + unsigned int icr_low; |
220 | static void x2apic_savic_send_IPI(int cpu, int vector) | 234 | + |
235 | + icr_low = __prepare_ICR(dsh, vector, APIC_DEST_PHYSICAL); | ||
236 | + x2apic_savic_icr_write(icr_low, dest); | ||
237 | +} | ||
238 | + | ||
239 | static void x2apic_savic_send_ipi(int cpu, int vector) | ||
221 | { | 240 | { |
222 | u32 dest = per_cpu(x86_cpu_to_apicid, cpu); | 241 | u32 dest = per_cpu(x86_cpu_to_apicid, cpu); |
223 | 242 | ||
224 | - /* x2apic MSRs are special and need a special fence: */ | 243 | - /* x2apic MSRs are special and need a special fence: */ |
225 | - weak_wrmsr_fence(); | 244 | - weak_wrmsr_fence(); |
226 | - __x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL); | 245 | - __x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL); |
227 | + __send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL); | 246 | + send_ipi(dest, vector, 0); |
228 | } | 247 | } |
229 | 248 | ||
230 | static void | 249 | -static void __send_ipi_mask(const struct cpumask *mask, int vector, bool excl_self) |
231 | @@ -XXX,XX +XXX,XX @@ __send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest) | 250 | +static void send_ipi_mask(const struct cpumask *mask, unsigned int vector, bool excl_self) |
232 | unsigned long this_cpu; | 251 | { |
252 | - unsigned long query_cpu; | ||
253 | - unsigned long this_cpu; | ||
254 | + unsigned int this_cpu; | ||
255 | + unsigned int cpu; | ||
233 | unsigned long flags; | 256 | unsigned long flags; |
234 | 257 | ||
235 | - /* x2apic MSRs are special and need a special fence: */ | 258 | - /* x2apic MSRs are special and need a special fence: */ |
236 | - weak_wrmsr_fence(); | 259 | - weak_wrmsr_fence(); |
237 | - | 260 | - |
238 | local_irq_save(flags); | 261 | local_irq_save(flags); |
239 | 262 | ||
240 | this_cpu = smp_processor_id(); | 263 | - this_cpu = smp_processor_id(); |
241 | for_each_cpu(query_cpu, mask) { | 264 | - for_each_cpu(query_cpu, mask) { |
242 | if (apic_dest == APIC_DEST_ALLBUT && this_cpu == query_cpu) | 265 | - if (excl_self && this_cpu == query_cpu) |
266 | + this_cpu = raw_smp_processor_id(); | ||
267 | + | ||
268 | + for_each_cpu(cpu, mask) { | ||
269 | + if (excl_self && cpu == this_cpu) | ||
243 | continue; | 270 | continue; |
244 | - __x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu), | 271 | - __x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu), |
245 | - vector, APIC_DEST_PHYSICAL); | 272 | - vector, APIC_DEST_PHYSICAL); |
246 | + __send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu), vector, | 273 | + send_ipi(per_cpu(x86_cpu_to_apicid, cpu), vector, 0); |
247 | + APIC_DEST_PHYSICAL); | ||
248 | } | 274 | } |
249 | + | 275 | + |
250 | local_irq_restore(flags); | 276 | local_irq_restore(flags); |
251 | } | 277 | } |
252 | 278 | ||
253 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_IPI_mask_allbutself(const struct cpumask *mask, in | 279 | static void x2apic_savic_send_ipi_mask(const struct cpumask *mask, int vector) |
254 | __send_IPI_mask(mask, vector, APIC_DEST_ALLBUT); | 280 | { |
255 | } | 281 | - __send_ipi_mask(mask, vector, false); |
256 | 282 | + send_ipi_mask(mask, vector, false); | |
257 | +static void __send_IPI_shorthand(int vector, u32 which) | 283 | } |
258 | +{ | 284 | |
259 | + unsigned int cfg = __prepare_ICR(which, vector, 0); | 285 | static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, int vector) |
260 | + | 286 | { |
261 | + x2apic_savic_icr_write(cfg, 0); | 287 | - __send_ipi_mask(mask, vector, true); |
262 | +} | 288 | + send_ipi_mask(mask, vector, true); |
263 | + | 289 | } |
264 | +static void x2apic_savic_send_IPI_allbutself(int vector) | 290 | |
265 | +{ | 291 | -static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set) |
266 | + __send_IPI_shorthand(vector, APIC_DEST_ALLBUT); | 292 | +static void x2apic_savic_send_ipi_allbutself(int vector) |
267 | +} | 293 | { |
268 | + | 294 | - struct apic_page *ap = per_cpu_ptr(apic_page, cpu); |
269 | +static void x2apic_savic_send_IPI_all(int vector) | 295 | - unsigned long *sirr = (unsigned long *) &ap->bytes[SAVIC_ALLOWED_IRR]; |
270 | +{ | 296 | - unsigned int bit; |
271 | + __send_IPI_shorthand(vector, APIC_DEST_ALLINC); | 297 | + send_ipi(0, vector, APIC_DEST_ALLBUT); |
272 | +} | 298 | +} |
273 | + | 299 | |
274 | +static void x2apic_savic_send_IPI_self(int vector) | 300 | - /* |
275 | +{ | 301 | - * The registers are 32-bit wide and 16-byte aligned. |
276 | + __send_IPI_shorthand(vector, APIC_DEST_SELF); | 302 | - * Compensate for the resulting bit number spacing. |
277 | +} | 303 | - */ |
278 | + | 304 | - bit = vector + 96 * (vector / 32); |
279 | static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set) | 305 | +static void x2apic_savic_send_ipi_all(int vector) |
280 | { | 306 | +{ |
281 | void *backing_page; | 307 | + send_ipi(0, vector, APIC_DEST_ALLINC); |
308 | +} | ||
309 | |||
310 | - if (set) | ||
311 | - set_bit(bit, sirr); | ||
312 | - else | ||
313 | - clear_bit(bit, sirr); | ||
314 | +static void x2apic_savic_send_ipi_self(int vector) | ||
315 | +{ | ||
316 | + self_ipi_reg_write(vector); | ||
317 | +} | ||
318 | + | ||
319 | +static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set) | ||
320 | +{ | ||
321 | + update_vector(cpu, SAVIC_ALLOWED_IRR, vector, set); | ||
322 | } | ||
323 | |||
324 | static void init_apic_page(void) | ||
282 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { | 325 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { |
283 | .send_IPI = x2apic_savic_send_IPI, | 326 | .send_IPI = x2apic_savic_send_ipi, |
284 | .send_IPI_mask = x2apic_savic_send_IPI_mask, | 327 | .send_IPI_mask = x2apic_savic_send_ipi_mask, |
285 | .send_IPI_mask_allbutself = x2apic_savic_send_IPI_mask_allbutself, | 328 | .send_IPI_mask_allbutself = x2apic_savic_send_ipi_mask_allbutself, |
286 | - .send_IPI_allbutself = x2apic_send_IPI_allbutself, | 329 | - .send_IPI_allbutself = x2apic_send_IPI_allbutself, |
287 | - .send_IPI_all = x2apic_send_IPI_all, | 330 | - .send_IPI_all = x2apic_send_IPI_all, |
288 | - .send_IPI_self = x2apic_send_IPI_self, | 331 | - .send_IPI_self = x2apic_send_IPI_self, |
289 | + .send_IPI_allbutself = x2apic_savic_send_IPI_allbutself, | 332 | + .send_IPI_allbutself = x2apic_savic_send_ipi_allbutself, |
290 | + .send_IPI_all = x2apic_savic_send_IPI_all, | 333 | + .send_IPI_all = x2apic_savic_send_ipi_all, |
291 | + .send_IPI_self = x2apic_savic_send_IPI_self, | 334 | + .send_IPI_self = x2apic_savic_send_ipi_self, |
292 | .nmi_to_offline_cpu = true, | 335 | .nmi_to_offline_cpu = true, |
293 | 336 | ||
294 | .read = x2apic_savic_read, | 337 | .read = x2apic_savic_read, |
295 | .write = x2apic_savic_write, | 338 | .write = x2apic_savic_write, |
296 | .eoi = native_apic_msr_eoi, | 339 | .eoi = native_apic_msr_eoi, |
... | ... |
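
Note: the ICR decode added above leans on the fact that APIC_DEST_ALLBUT (0xC0000) covers both destination-shorthand bits, so a single mask distinguishes SELF, ALLINC, ALLBUT and "no shorthand". A minimal standalone sketch, using the constant values from the kernel's apicdef.h:

#include <stdio.h>

#define APIC_DEST_SELF    0x40000
#define APIC_DEST_ALLINC  0x80000
#define APIC_DEST_ALLBUT  0xC0000
#define APIC_VECTOR_MASK  0x000FF

/* Same shorthand decode as x2apic_savic_icr_write() above. */
static const char *icr_shorthand(unsigned int icr_low)
{
        switch (icr_low & APIC_DEST_ALLBUT) {
        case APIC_DEST_SELF:
                return "self";
        case APIC_DEST_ALLINC:
                return "all including self";
        case APIC_DEST_ALLBUT:
                return "all but self";
        default:
                return "single destination (APIC ID in ICR high)";
        }
}

int main(void)
{
        unsigned int icr_low = APIC_DEST_ALLBUT | 0xf0; /* arbitrary vector */

        printf("vector 0x%02x -> %s\n",
               icr_low & APIC_VECTOR_MASK, icr_shorthand(icr_low));
        return 0;
}

For the non-SELF cases the driver sets the target's IRR directly and then forwards the full 64-bit ICR value via savic_ghcb_msr_write(APIC_ICR, ...) so that the hypervisor can notify the target vCPU.
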
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | 1 | Secure AVIC requires the LAPIC timer to be emulated by the hypervisor.
---|---|---|---|
2 | 2 | KVM already supports emulating LAPIC timer using hrtimers. In order | |
3 | Secure AVIC requires the LAPIC timer to be emulated by the hypervisor. KVM | 3 | to emulate LAPIC timer, APIC_LVTT, APIC_TMICT and APIC_TDCR register
4 | already supports emulating LAPIC timer using hrtimers. In order | ||
5 | to emulate LAPIC timer, APIC_LVTT, APIC_TMICT and APIC_TDCR register | 3 | to emulate LAPIC timer, APIC_LVTT, APIC_TMICT and APIC_TDCR register |
6 | values need to be propagated to the hypervisor for arming the timer. | 4 | values need to be propagated to the hypervisor for arming the timer. |
7 | APIC_TMCCT register value has to be read from the hypervisor, which | 5 | APIC_TMCCT register value has to be read from the hypervisor, which |
8 | is required for calibrating the APIC timer. So, read/write all APIC | 6 | is required for calibrating the APIC timer. So, read/write all APIC |
9 | timer registers from/to the hypervisor. | 7 | timer registers from/to the hypervisor. |
10 | 8 | ||
11 | In addition, configure APIC_ALLOWED_IRR for the hypervisor to inject | 9 | In addition, add a static call for apic's update_vector() callback, |
12 | timer interrupt using LOCAL_TIMER_VECTOR. | 10 | to configure ALLOWED_IRR for the hypervisor to inject timer interrupt |
13 | 11 | using LOCAL_TIMER_VECTOR. | |
12 | |||
13 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
14 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 14 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
15 | Co-developed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
16 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 15 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
17 | --- | 16 | --- |
18 | arch/x86/kernel/apic/apic.c | 2 ++ | 17 | Changes since v2: |
19 | arch/x86/kernel/apic/x2apic_savic.c | 7 +++++-- | 18 | |
20 | 2 files changed, 7 insertions(+), 2 deletions(-) | 19 | - Add static call for apic_update_vector() |
21 | 20 | ||
21 | arch/x86/coco/sev/core.c | 27 +++++++++++++++++++++++++++ | ||
22 | arch/x86/include/asm/apic.h | 8 ++++++++ | ||
23 | arch/x86/include/asm/sev.h | 2 ++ | ||
24 | arch/x86/kernel/apic/apic.c | 2 ++ | ||
25 | arch/x86/kernel/apic/init.c | 3 +++ | ||
26 | arch/x86/kernel/apic/vector.c | 6 ------ | ||
27 | arch/x86/kernel/apic/x2apic_savic.c | 7 +++++-- | ||
28 | 7 files changed, 47 insertions(+), 8 deletions(-) | ||
29 | |||
30 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c | ||
31 | index XXXXXXX..XXXXXXX 100644 | ||
32 | --- a/arch/x86/coco/sev/core.c | ||
33 | +++ b/arch/x86/coco/sev/core.c | ||
34 | @@ -XXX,XX +XXX,XX @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) | ||
35 | return __vc_handle_msr(ghcb, ctxt, ctxt->insn.opcode.bytes[1] == 0x30); | ||
36 | } | ||
37 | |||
38 | +u64 savic_ghcb_msr_read(u32 reg) | ||
39 | +{ | ||
40 | + u64 msr = APIC_BASE_MSR + (reg >> 4); | ||
41 | + struct pt_regs regs = { .cx = msr }; | ||
42 | + struct es_em_ctxt ctxt = { .regs = ®s }; | ||
43 | + struct ghcb_state state; | ||
44 | + unsigned long flags; | ||
45 | + enum es_result ret; | ||
46 | + struct ghcb *ghcb; | ||
47 | + | ||
48 | + local_irq_save(flags); | ||
49 | + ghcb = __sev_get_ghcb(&state); | ||
50 | + vc_ghcb_invalidate(ghcb); | ||
51 | + | ||
52 | + ret = __vc_handle_msr(ghcb, &ctxt, false); | ||
53 | + if (ret != ES_OK) { | ||
54 | + pr_err("Secure AVIC msr (0x%llx) read returned error (%d)\n", msr, ret); | ||
55 | + /* MSR read failures are treated as fatal errors */ | ||
56 | + snp_abort(); | ||
57 | + } | ||
58 | + | ||
59 | + __sev_put_ghcb(&state); | ||
60 | + local_irq_restore(flags); | ||
61 | + | ||
62 | + return regs.ax | regs.dx << 32; | ||
63 | +} | ||
64 | + | ||
65 | void savic_ghcb_msr_write(u32 reg, u64 value) | ||
66 | { | ||
67 | u64 msr = APIC_BASE_MSR + (reg >> 4); | ||
68 | diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h | ||
69 | index XXXXXXX..XXXXXXX 100644 | ||
70 | --- a/arch/x86/include/asm/apic.h | ||
71 | +++ b/arch/x86/include/asm/apic.h | ||
72 | @@ -XXX,XX +XXX,XX @@ struct apic_override { | ||
73 | void (*icr_write)(u32 low, u32 high); | ||
74 | int (*wakeup_secondary_cpu)(u32 apicid, unsigned long start_eip); | ||
75 | int (*wakeup_secondary_cpu_64)(u32 apicid, unsigned long start_eip); | ||
76 | + void (*update_vector)(unsigned int cpu, unsigned int vector, bool set); | ||
77 | }; | ||
78 | |||
79 | /* | ||
80 | @@ -XXX,XX +XXX,XX @@ DECLARE_APIC_CALL(wait_icr_idle); | ||
81 | DECLARE_APIC_CALL(wakeup_secondary_cpu); | ||
82 | DECLARE_APIC_CALL(wakeup_secondary_cpu_64); | ||
83 | DECLARE_APIC_CALL(write); | ||
84 | +DECLARE_APIC_CALL(update_vector); | ||
85 | |||
86 | static __always_inline u32 apic_read(u32 reg) | ||
87 | { | ||
88 | @@ -XXX,XX +XXX,XX @@ static __always_inline bool apic_id_valid(u32 apic_id) | ||
89 | return apic_id <= apic->max_apic_id; | ||
90 | } | ||
91 | |||
92 | +static __always_inline void apic_update_vector(unsigned int cpu, unsigned int vector, bool set) | ||
93 | +{ | ||
94 | + static_call(apic_call_update_vector)(cpu, vector, set); | ||
95 | +} | ||
96 | + | ||
97 | #else /* CONFIG_X86_LOCAL_APIC */ | ||
98 | |||
99 | static inline u32 apic_read(u32 reg) { return 0; } | ||
100 | @@ -XXX,XX +XXX,XX @@ static inline void apic_wait_icr_idle(void) { } | ||
101 | static inline u32 safe_apic_wait_icr_idle(void) { return 0; } | ||
102 | static inline void apic_native_eoi(void) { WARN_ON_ONCE(1); } | ||
103 | static inline void apic_setup_apic_calls(void) { } | ||
104 | +static inline void apic_update_vector(unsigned int cpu, unsigned int vector, bool set) { } | ||
105 | |||
106 | #define apic_update_callback(_callback, _fn) do { } while (0) | ||
107 | |||
108 | diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h | ||
109 | index XXXXXXX..XXXXXXX 100644 | ||
110 | --- a/arch/x86/include/asm/sev.h | ||
111 | +++ b/arch/x86/include/asm/sev.h | ||
112 | @@ -XXX,XX +XXX,XX @@ int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_req *req | ||
113 | void __init snp_secure_tsc_prepare(void); | ||
114 | void __init snp_secure_tsc_init(void); | ||
115 | enum es_result savic_register_gpa(u64 gpa); | ||
116 | +u64 savic_ghcb_msr_read(u32 reg); | ||
117 | void savic_ghcb_msr_write(u32 reg, u64 value); | ||
118 | |||
119 | #else /* !CONFIG_AMD_MEM_ENCRYPT */ | ||
120 | @@ -XXX,XX +XXX,XX @@ static inline void __init snp_secure_tsc_prepare(void) { } | ||
121 | static inline void __init snp_secure_tsc_init(void) { } | ||
122 | static inline enum es_result savic_register_gpa(u64 gpa) { return ES_UNSUPPORTED; } | ||
123 | static inline void savic_ghcb_msr_write(u32 reg, u64 value) { } | ||
124 | +static inline u64 savic_ghcb_msr_read(u32 reg) { return 0; } | ||
125 | |||
126 | #endif /* CONFIG_AMD_MEM_ENCRYPT */ | ||
127 | |||
22 | diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c | 128 | diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c |
23 | index XXXXXXX..XXXXXXX 100644 | 129 | index XXXXXXX..XXXXXXX 100644 |
24 | --- a/arch/x86/kernel/apic/apic.c | 130 | --- a/arch/x86/kernel/apic/apic.c |
25 | +++ b/arch/x86/kernel/apic/apic.c | 131 | +++ b/arch/x86/kernel/apic/apic.c |
26 | @@ -XXX,XX +XXX,XX @@ static void setup_APIC_timer(void) | 132 | @@ -XXX,XX +XXX,XX @@ static void setup_APIC_timer(void) |
27 | 0xF, ~0UL); | 133 | 0xF, ~0UL); |
28 | } else | 134 | } else |
29 | clockevents_register_device(levt); | 135 | clockevents_register_device(levt); |
30 | + | 136 | + |
31 | + apic->update_vector(smp_processor_id(), LOCAL_TIMER_VECTOR, true); | 137 | + apic_update_vector(smp_processor_id(), LOCAL_TIMER_VECTOR, true); |
32 | } | 138 | } |
33 | 139 | ||
34 | /* | 140 | /* |
141 | diff --git a/arch/x86/kernel/apic/init.c b/arch/x86/kernel/apic/init.c | ||
142 | index XXXXXXX..XXXXXXX 100644 | ||
143 | --- a/arch/x86/kernel/apic/init.c | ||
144 | +++ b/arch/x86/kernel/apic/init.c | ||
145 | @@ -XXX,XX +XXX,XX @@ DEFINE_APIC_CALL(wait_icr_idle); | ||
146 | DEFINE_APIC_CALL(wakeup_secondary_cpu); | ||
147 | DEFINE_APIC_CALL(wakeup_secondary_cpu_64); | ||
148 | DEFINE_APIC_CALL(write); | ||
149 | +DEFINE_APIC_CALL(update_vector); | ||
150 | |||
151 | EXPORT_STATIC_CALL_TRAMP_GPL(apic_call_send_IPI_mask); | ||
152 | EXPORT_STATIC_CALL_TRAMP_GPL(apic_call_send_IPI_self); | ||
153 | @@ -XXX,XX +XXX,XX @@ static __init void restore_override_callbacks(void) | ||
154 | apply_override(icr_write); | ||
155 | apply_override(wakeup_secondary_cpu); | ||
156 | apply_override(wakeup_secondary_cpu_64); | ||
157 | + apply_override(update_vector); | ||
158 | } | ||
159 | |||
160 | #define update_call(__cb) \ | ||
161 | @@ -XXX,XX +XXX,XX @@ static __init void update_static_calls(void) | ||
162 | update_call(wait_icr_idle); | ||
163 | update_call(wakeup_secondary_cpu); | ||
164 | update_call(wakeup_secondary_cpu_64); | ||
165 | + update_call(update_vector); | ||
166 | } | ||
167 | |||
168 | void __init apic_setup_apic_calls(void) | ||
169 | diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c | ||
170 | index XXXXXXX..XXXXXXX 100644 | ||
171 | --- a/arch/x86/kernel/apic/vector.c | ||
172 | +++ b/arch/x86/kernel/apic/vector.c | ||
173 | @@ -XXX,XX +XXX,XX @@ static void apic_update_irq_cfg(struct irq_data *irqd, unsigned int vector, | ||
174 | apicd->hw_irq_cfg.dest_apicid); | ||
175 | } | ||
176 | |||
177 | -static inline void apic_update_vector(unsigned int cpu, unsigned int vector, bool set) | ||
178 | -{ | ||
179 | - if (apic->update_vector) | ||
180 | - apic->update_vector(cpu, vector, set); | ||
181 | -} | ||
182 | - | ||
183 | static int irq_alloc_vector(const struct cpumask *dest, bool resvd, unsigned int *cpu) | ||
184 | { | ||
185 | int vector; | ||
35 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | 186 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c |
36 | index XXXXXXX..XXXXXXX 100644 | 187 | index XXXXXXX..XXXXXXX 100644 |
37 | --- a/arch/x86/kernel/apic/x2apic_savic.c | 188 | --- a/arch/x86/kernel/apic/x2apic_savic.c |
38 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | 189 | +++ b/arch/x86/kernel/apic/x2apic_savic.c |
39 | @@ -XXX,XX +XXX,XX @@ static u32 x2apic_savic_read(u32 reg) | 190 | @@ -XXX,XX +XXX,XX @@ static u32 x2apic_savic_read(u32 reg) |
40 | case APIC_TMICT: | 191 | case APIC_TMICT: |
41 | case APIC_TMCCT: | 192 | case APIC_TMCCT: |
42 | case APIC_TDCR: | 193 | case APIC_TDCR: |
43 | + return read_msr_from_hv(reg); | 194 | + return savic_ghcb_msr_read(reg); |
44 | case APIC_ID: | 195 | case APIC_ID: |
45 | case APIC_LVR: | 196 | case APIC_LVR: |
46 | case APIC_TASKPRI: | 197 | case APIC_TASKPRI: |
47 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) | 198 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) |
48 | 199 | { | |
49 | switch (reg) { | 200 | switch (reg) { |
50 | case APIC_LVTT: | 201 | case APIC_LVTT: |
51 | - case APIC_LVT0: | 202 | - case APIC_LVT0: |
52 | - case APIC_LVT1: | 203 | - case APIC_LVT1: |
53 | case APIC_TMICT: | 204 | case APIC_TMICT: |
54 | case APIC_TDCR: | 205 | case APIC_TDCR: |
55 | + write_msr_to_hv(reg, data); | 206 | + savic_ghcb_msr_write(reg, data); |
56 | + break; | 207 | + break; |
57 | + case APIC_LVT0: | 208 | + case APIC_LVT0: |
58 | + case APIC_LVT1: | 209 | + case APIC_LVT1: |
59 | /* APIC_ID is writable and configured by guest for Secure AVIC */ | ||
60 | case APIC_ID: | ||
61 | case APIC_TASKPRI: | 210 | case APIC_TASKPRI: |
211 | case APIC_EOI: | ||
212 | case APIC_SPIV: | ||
62 | -- | 213 | -- |
63 | 2.34.1 | 214 | 2.34.1 |
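
Note: both GHCB helpers above derive the x2APIC MSR index from the legacy MMIO register offset as APIC_BASE_MSR + (reg >> 4). A standalone sketch checking the four timer registers this patch forwards (constant values as in the kernel's headers; APIC_BASE_MSR is 0x800):

#include <assert.h>

#define APIC_BASE_MSR 0x800
#define APIC_LVTT     0x320
#define APIC_TMICT    0x380
#define APIC_TMCCT    0x390
#define APIC_TDCR     0x3E0

/* x2APIC MSR index = 0x800 + (MMIO offset >> 4). */
static unsigned int reg_to_msr(unsigned int reg)
{
        return APIC_BASE_MSR + (reg >> 4);
}

int main(void)
{
        assert(reg_to_msr(APIC_LVTT)  == 0x832); /* LVT timer     */
        assert(reg_to_msr(APIC_TMICT) == 0x838); /* initial count */
        assert(reg_to_msr(APIC_TMCCT) == 0x839); /* current count */
        assert(reg_to_msr(APIC_TDCR)  == 0x83e); /* divide config */
        return 0;
}
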
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | 1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> |
---|---|---|---|
2 | 2 | ||
3 | VINTR_CTRL in VMSA should be configured for Secure AVIC. Configure it | 3 | Secure AVIC requires VGIF to be configured in VMSA. Configure it
4 | for secondary vCPUs (the configuration for the boot CPU is done by | 4 | for secondary vCPUs (the configuration for the boot CPU is done by
5 | the hypervisor). | 5 | the hypervisor).
6 | 6 | ||
7 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 7 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
8 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 8 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
9 | --- | 9 | --- |
10 | Changes since v2: | ||
11 | - No change | ||
12 | |||
10 | arch/x86/coco/sev/core.c | 3 +++ | 13 | arch/x86/coco/sev/core.c | 3 +++ |
11 | 1 file changed, 3 insertions(+) | 14 | 1 file changed, 3 insertions(+) |
12 | 15 | ||
13 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c | 16 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c |
14 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
... | ... |
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
---|---|---|---|
2 | |||
3 | Secure AVIC has introduced a new field in the APIC backing page | 1 | Secure AVIC has introduced a new field in the APIC backing page |
4 | "NmiReq" that has to be set by the guest to request a NMI IPI. | 2 | "NmiReq" that has to be set by the guest to request a NMI IPI |
3 | through APIC_ICR write. | ||
5 | 4 | ||
6 | Add support to set NmiReq appropriately to send an NMI IPI. | 5 | Add support to set NmiReq appropriately to send an NMI IPI.
7 | 6 | ||
8 | This also requires the Virtual NMI feature to be enabled in the VINTR_CTRL | 7 | This also requires the Virtual NMI feature to be enabled in the VINTR_CTRL
9 | field in the VMSA. However, this is added by a later commit | 8 | field in the VMSA. However, this is added by a later commit
10 | after adding support for injecting NMI from the hypervisor. | 9 | after adding support for injecting NMI from the hypervisor. |
11 | 10 | ||
11 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
12 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 12 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
13 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 13 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
14 | --- | 14 | --- |
15 | arch/x86/kernel/apic/x2apic_savic.c | 12 +++++++++--- | 15 | Changes since v2: |
16 | 1 file changed, 9 insertions(+), 3 deletions(-) | 16 | - Updates to use per_cpu_ptr() on apic_page struct. |
17 | |||
18 | arch/x86/kernel/apic/x2apic_savic.c | 28 ++++++++++++++++++++-------- | ||
19 | 1 file changed, 20 insertions(+), 8 deletions(-) | ||
17 | 20 | ||
18 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | 21 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c |
19 | index XXXXXXX..XXXXXXX 100644 | 22 | index XXXXXXX..XXXXXXX 100644 |
20 | --- a/arch/x86/kernel/apic/x2apic_savic.c | 23 | --- a/arch/x86/kernel/apic/x2apic_savic.c |
21 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | 24 | +++ b/arch/x86/kernel/apic/x2apic_savic.c |
22 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) | 25 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) |
23 | } | 26 | } |
24 | } | 27 | } |
25 | 28 | ||
26 | -static void send_ipi(int cpu, int vector) | 29 | -static inline void send_ipi_dest(unsigned int cpu, unsigned int vector) |
27 | +static void send_ipi(int cpu, int vector, bool nmi) | 30 | +static void send_ipi_dest(unsigned int cpu, unsigned int vector, bool nmi) |
28 | { | 31 | { |
29 | void *backing_page; | 32 | + if (nmi) { |
30 | int reg_off; | 33 | + struct apic_page *ap = per_cpu_ptr(apic_page, cpu); |
31 | @@ -XXX,XX +XXX,XX @@ static void send_ipi(int cpu, int vector) | 34 | + |
32 | * IRR updates such as during VMRUN and during CPU interrupt handling flow. | 35 | + WRITE_ONCE(ap->regs[SAVIC_NMI_REQ >> 2], 1); |
33 | */ | 36 | + return; |
34 | test_and_set_bit(VEC_POS(vector), (unsigned long *)((char *)backing_page + reg_off)); | 37 | + } |
35 | + if (nmi) | 38 | + |
36 | + set_reg(backing_page, SAVIC_NMI_REQ_OFFSET, nmi); | 39 | update_vector(cpu, APIC_IRR, vector, true); |
37 | } | 40 | } |
38 | 41 | ||
39 | static void send_ipi_dest(u64 icr_data) | 42 | -static void send_ipi_allbut(unsigned int vector) |
43 | +static void send_ipi_allbut(unsigned int vector, bool nmi) | ||
40 | { | 44 | { |
41 | int vector, cpu; | 45 | unsigned int cpu, src_cpu; |
42 | + bool nmi; | ||
43 | |||
44 | vector = icr_data & APIC_VECTOR_MASK; | ||
45 | cpu = icr_data >> 32; | ||
46 | + nmi = ((icr_data & APIC_DM_FIXED_MASK) == APIC_DM_NMI); | ||
47 | |||
48 | - send_ipi(cpu, vector); | ||
49 | + send_ipi(cpu, vector, nmi); | ||
50 | } | ||
51 | |||
52 | static void send_ipi_target(u64 icr_data) | ||
53 | @@ -XXX,XX +XXX,XX @@ static void send_ipi_allbut(u64 icr_data) | ||
54 | const struct cpumask *self_cpu_mask = get_cpu_mask(smp_processor_id()); | ||
55 | unsigned long flags; | 46 | unsigned long flags; |
56 | int vector, cpu; | 47 | @@ -XXX,XX +XXX,XX @@ static void send_ipi_allbut(unsigned int vector) |
57 | + bool nmi; | 48 | for_each_cpu(cpu, cpu_online_mask) { |
58 | 49 | if (cpu == src_cpu) | |
59 | vector = icr_data & APIC_VECTOR_MASK; | 50 | continue; |
60 | + nmi = ((icr_data & APIC_DM_FIXED_MASK) == APIC_DM_NMI); | 51 | - send_ipi_dest(cpu, vector); |
61 | local_irq_save(flags); | 52 | + send_ipi_dest(cpu, vector, nmi); |
62 | for_each_cpu_andnot(cpu, cpu_present_mask, self_cpu_mask) | 53 | } |
63 | - send_ipi(cpu, vector); | 54 | |
64 | + send_ipi(cpu, vector, nmi); | ||
65 | write_msr_to_hv(APIC_ICR, icr_data); | ||
66 | local_irq_restore(flags); | 55 | local_irq_restore(flags); |
67 | } | 56 | } |
57 | |||
58 | -static inline void self_ipi(unsigned int vector) | ||
59 | +static inline void self_ipi(unsigned int vector, bool nmi) | ||
60 | { | ||
61 | u32 icr_low = APIC_SELF_IPI | vector; | ||
62 | |||
63 | + if (nmi) | ||
64 | + icr_low |= APIC_DM_NMI; | ||
65 | + | ||
66 | native_x2apic_icr_write(icr_low, 0); | ||
67 | } | ||
68 | |||
69 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_icr_write(u32 icr_low, u32 icr_high) | ||
70 | { | ||
71 | unsigned int dsh, vector; | ||
72 | u64 icr_data; | ||
73 | + bool nmi; | ||
74 | |||
75 | dsh = icr_low & APIC_DEST_ALLBUT; | ||
76 | vector = icr_low & APIC_VECTOR_MASK; | ||
77 | + nmi = ((icr_low & APIC_DM_FIXED_MASK) == APIC_DM_NMI); | ||
78 | |||
79 | switch (dsh) { | ||
80 | case APIC_DEST_SELF: | ||
81 | - self_ipi(vector); | ||
82 | + self_ipi(vector, nmi); | ||
83 | break; | ||
84 | case APIC_DEST_ALLINC: | ||
85 | - self_ipi(vector); | ||
86 | + self_ipi(vector, nmi); | ||
87 | fallthrough; | ||
88 | case APIC_DEST_ALLBUT: | ||
89 | - send_ipi_allbut(vector); | ||
90 | + send_ipi_allbut(vector, nmi); | ||
91 | break; | ||
92 | default: | ||
93 | - send_ipi_dest(icr_high, vector); | ||
94 | + send_ipi_dest(icr_high, vector, nmi); | ||
95 | break; | ||
96 | } | ||
97 | |||
68 | -- | 98 | -- |
69 | 2.34.1 | 99 | 2.34.1 |
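
Note: whether a request lands in the target's IRR or in NmiReq is decided purely by the ICR delivery-mode field, not by the vector number. A standalone sketch of the check added above, with the apicdef.h constant values:

#include <assert.h>
#include <stdbool.h>

#define APIC_DM_FIXED_MASK 0x00700
#define APIC_DM_FIXED      0x00000
#define APIC_DM_NMI        0x00400

/* Same delivery-mode test as in x2apic_savic_icr_write() above. */
static bool icr_is_nmi(unsigned int icr_low)
{
        return (icr_low & APIC_DM_FIXED_MASK) == APIC_DM_NMI;
}

int main(void)
{
        assert(icr_is_nmi(APIC_DM_NMI));
        assert(!icr_is_nmi(APIC_DM_FIXED | 0x30)); /* fixed, vector 0x30 */
        return 0;
}
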
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
---|---|---|---|
2 | |||
3 | Secure AVIC requires "AllowedNmi" bit in the Secure AVIC Control MSR | 1 | Secure AVIC requires "AllowedNmi" bit in the Secure AVIC Control MSR |
4 | to be set for NMI to be injected from hypervisor. | 2 | to be set for NMI to be injected from hypervisor. Set "AllowedNmi" |
5 | 3 | bit in Secure AVIC Control MSR to allow NMI interrupts to be injected | |
6 | Set "AllowedNmi" bit in Secure AVIC Control MSR here to allow NMI | 4 | from hypervisor. |
7 | interrupts to be injected from hypervisor. While at that, also propagate | ||
8 | APIC_LVT0 and APIC_LVT1 register values to the hypervisor required for | ||
9 | injecting NMI interrupts from hypervisor. | ||
10 | 5 | ||
11 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 6 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
12 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 7 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
13 | --- | 8 | --- |
14 | arch/x86/include/asm/msr-index.h | 5 +++++ | 9 | Changes since v2: |
15 | arch/x86/kernel/apic/x2apic_savic.c | 10 ++++++++-- | 10 | - Remove MSR_AMD64_SECURE_AVIC_EN macros from this patch. |
16 | 2 files changed, 13 insertions(+), 2 deletions(-) | 11 | |
12 | arch/x86/include/asm/msr-index.h | 3 +++ | ||
13 | arch/x86/kernel/apic/x2apic_savic.c | 6 ++++++ | ||
14 | 2 files changed, 9 insertions(+) | ||
17 | 15 | ||
18 | diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h | 16 | diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h |
19 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
20 | --- a/arch/x86/include/asm/msr-index.h | 18 | --- a/arch/x86/include/asm/msr-index.h |
21 | +++ b/arch/x86/include/asm/msr-index.h | 19 | +++ b/arch/x86/include/asm/msr-index.h |
22 | @@ -XXX,XX +XXX,XX @@ | 20 | @@ -XXX,XX +XXX,XX @@ |
23 | #define MSR_AMD64_SNP_SECURE_AVIC_ENABLED BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT) | 21 | #define MSR_AMD64_SNP_SECURE_AVIC BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT) |
24 | #define MSR_AMD64_SNP_RESV_BIT 19 | 22 | #define MSR_AMD64_SNP_RESV_BIT 19 |
25 | #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) | 23 | #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) |
26 | +#define MSR_AMD64_SECURE_AVIC_CONTROL 0xc0010138 | 24 | +#define MSR_AMD64_SECURE_AVIC_CONTROL 0xc0010138 |
27 | +#define MSR_AMD64_SECURE_AVIC_EN_BIT 0 | ||
28 | +#define MSR_AMD64_SECURE_AVIC_EN BIT_ULL(MSR_AMD64_SECURE_AVIC_EN_BIT) | ||
29 | +#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT 1 | 25 | +#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT 1 |
30 | +#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI BIT_ULL(MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT) | 26 | +#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI BIT_ULL(MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT) |
31 | 27 | #define MSR_AMD64_RMP_BASE 0xc0010132 | |
32 | #define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f | 28 | #define MSR_AMD64_RMP_END 0xc0010133 |
33 | 29 | #define MSR_AMD64_RMP_CFG 0xc0010136 | |
34 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | 30 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c |
35 | index XXXXXXX..XXXXXXX 100644 | 31 | index XXXXXXX..XXXXXXX 100644 |
36 | --- a/arch/x86/kernel/apic/x2apic_savic.c | 32 | --- a/arch/x86/kernel/apic/x2apic_savic.c |
37 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | 33 | +++ b/arch/x86/kernel/apic/x2apic_savic.c |
38 | @@ -XXX,XX +XXX,XX @@ enum lapic_lvt_entry { | 34 | @@ -XXX,XX +XXX,XX @@ struct apic_page { |
39 | 35 | ||
40 | #define APIC_LVTx(x) (APIC_LVTT + 0x10 * (x)) | 36 | static struct apic_page __percpu *apic_page __ro_after_init; |
41 | 37 | ||
42 | +static inline void savic_wr_control_msr(u64 val) | 38 | +static inline void savic_wr_control_msr(u64 val) |
43 | +{ | 39 | +{ |
44 | + native_wrmsr(MSR_AMD64_SECURE_AVIC_CONTROL, lower_32_bits(val), upper_32_bits(val)); | 40 | + native_wrmsr(MSR_AMD64_SECURE_AVIC_CONTROL, lower_32_bits(val), upper_32_bits(val)); |
45 | +} | 41 | +} |
46 | + | 42 | + |
47 | static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) | 43 | static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) |
48 | { | 44 | { |
49 | return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); | 45 | return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); |
50 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) | ||
51 | |||
52 | switch (reg) { | ||
53 | case APIC_LVTT: | ||
54 | + case APIC_LVT0: | ||
55 | + case APIC_LVT1: | ||
56 | case APIC_TMICT: | ||
57 | case APIC_TDCR: | ||
58 | write_msr_to_hv(reg, data); | ||
59 | break; | ||
60 | - case APIC_LVT0: | ||
61 | - case APIC_LVT1: | ||
62 | /* APIC_ID is writable and configured by guest for Secure AVIC */ | ||
63 | case APIC_ID: | ||
64 | case APIC_TASKPRI: | ||
65 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_setup(void) | 46 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_setup(void) |
66 | ret = sev_notify_savic_gpa(gpa); | 47 | ret = savic_register_gpa(gpa); |
67 | if (ret != ES_OK) | 48 | if (ret != ES_OK) |
68 | snp_abort(); | 49 | snp_abort(); |
69 | + savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI); | 50 | + savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI); |
70 | this_cpu_write(savic_setup_done, true); | ||
71 | } | 51 | } |
72 | 52 | ||
53 | static int x2apic_savic_probe(void) | ||
73 | -- | 54 | -- |
74 | 2.34.1 | 55 | 2.34.1 |
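
Note: the savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI) call above packs the page-aligned backing-page GPA and the AllowedNmi flag into one MSR value; since the GPA is 4K-aligned, its low 12 bits are free for flags. A standalone sketch of that composition (the helper name is illustrative, and the flags-in-low-bits layout is inferred from the OR above):

#include <assert.h>
#include <stdint.h>

#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI (1ULL << 1)

static uint64_t savic_control_val(uint64_t gpa)
{
        assert((gpa & 0xFFF) == 0); /* backing page is 4K-aligned */
        return gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI;
}

int main(void)
{
        assert(savic_control_val(0x100000) == 0x100002);
        return 0;
}
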
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | 1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> |
---|---|---|---|
2 | 2 | ||
3 | Now that support to send an NMI IPI and support to inject an NMI from | 3 | Now that support to send an NMI IPI and support to inject an NMI from |
4 | the hypervisor have been added, set V_NMI_ENABLE in the VINTR_CTRL field of | 4 | the hypervisor have been added, set V_NMI_ENABLE in the VINTR_CTRL |
5 | the VMSA to enable NMI. | 5 | field of the VMSA to enable NMI for Secure AVIC guests. |
6 | 6 | ||
7 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 7 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
8 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 8 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
9 | --- | 9 | --- |
10 | Changes since v2: | ||
11 | - No change. | ||
12 | |||
10 | arch/x86/coco/sev/core.c | 2 +- | 13 | arch/x86/coco/sev/core.c | 2 +- |
11 | 1 file changed, 1 insertion(+), 1 deletion(-) | 14 | 1 file changed, 1 insertion(+), 1 deletion(-) |
12 | 15 | ||
13 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c | 16 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c |
14 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
... | ... |
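
The one-line hunk itself is elided in this view. For orientation, the change described above plausibly reduces to the sketch below; struct sev_es_save_area's vintr_ctrl field and V_NMI_ENABLE_MASK are the mainline <asm/svm.h> names, and the VMSA-init call site is assumed rather than shown here:

    #include <asm/svm.h>

    /*
     * Sketch under stated assumptions: while building the initial VMSA
     * image of a vCPU, set V_NMI_ENABLE in the VINTR_CTRL field so the
     * hypervisor is allowed to deliver virtual NMIs to the guest.
     */
    static void savic_vmsa_enable_vnmi(struct sev_es_save_area *vmsa)
    {
            vmsa->vintr_ctrl |= V_NMI_ENABLE_MASK;
    }
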
New patch | |||
---|---|---|---|
1 | The hypervisor needs information about the current state of the LVT |
2 | registers for device emulation and NMI handling. So, forward reads and |
3 | writes of these registers to the hypervisor for Secure AVIC guests. |
1 | 4 | ||
5 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
6 | --- | ||
7 | Changes since v2: | ||
8 | - No change. | ||
9 | |||
10 | arch/x86/kernel/apic/x2apic_savic.c | 20 ++++++++++---------- | ||
11 | 1 file changed, 10 insertions(+), 10 deletions(-) | ||
12 | |||
13 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | ||
14 | index XXXXXXX..XXXXXXX 100644 | ||
15 | --- a/arch/x86/kernel/apic/x2apic_savic.c | ||
16 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | ||
17 | @@ -XXX,XX +XXX,XX @@ static u32 x2apic_savic_read(u32 reg) | ||
18 | case APIC_TMICT: | ||
19 | case APIC_TMCCT: | ||
20 | case APIC_TDCR: | ||
21 | + case APIC_LVTTHMR: | ||
22 | + case APIC_LVTPC: | ||
23 | + case APIC_LVT0: | ||
24 | + case APIC_LVT1: | ||
25 | + case APIC_LVTERR: | ||
26 | return savic_ghcb_msr_read(reg); | ||
27 | case APIC_ID: | ||
28 | case APIC_LVR: | ||
29 | @@ -XXX,XX +XXX,XX @@ static u32 x2apic_savic_read(u32 reg) | ||
30 | case APIC_SPIV: | ||
31 | case APIC_ESR: | ||
32 | case APIC_ICR: | ||
33 | - case APIC_LVTTHMR: | ||
34 | - case APIC_LVTPC: | ||
35 | - case APIC_LVT0: | ||
36 | - case APIC_LVT1: | ||
37 | - case APIC_LVTERR: | ||
38 | case APIC_EFEAT: | ||
39 | case APIC_ECTRL: | ||
40 | case APIC_SEOI: | ||
41 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data) | ||
42 | case APIC_LVTT: | ||
43 | case APIC_TMICT: | ||
44 | case APIC_TDCR: | ||
45 | - savic_ghcb_msr_write(reg, data); | ||
46 | - break; | ||
47 | case APIC_LVT0: | ||
48 | case APIC_LVT1: | ||
49 | + case APIC_LVTTHMR: | ||
50 | + case APIC_LVTPC: | ||
51 | + case APIC_LVTERR: | ||
52 | + savic_ghcb_msr_write(reg, data); | ||
53 | + break; | ||
54 | case APIC_TASKPRI: | ||
55 | case APIC_EOI: | ||
56 | case APIC_SPIV: | ||
57 | case SAVIC_NMI_REQ: | ||
58 | case APIC_ESR: | ||
59 | case APIC_ICR: | ||
60 | - case APIC_LVTTHMR: | ||
61 | - case APIC_LVTPC: | ||
62 | - case APIC_LVTERR: | ||
63 | case APIC_ECTRL: | ||
64 | case APIC_SEOI: | ||
65 | case APIC_IER: | ||
66 | -- | ||
67 | 2.34.1 |
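
A note on the savic_ghcb_msr_read()/savic_ghcb_msr_write() plumbing these hunks rely on: an APIC register offset maps onto its x2APIC MSR index as APIC_BASE_MSR + (reg >> 4), the same translation visible elsewhere in this series. A tiny sketch (helper name illustrative):

    #include <asm/apicdef.h>

    /*
     * Sketch: translate an APIC register offset into the x2APIC MSR
     * that the GHCB MSR-protocol event operates on. For example,
     * APIC_LVT0 (offset 0x350) maps to 0x800 + 0x35 = MSR 0x835.
     */
    static u32 savic_reg_to_msr(u32 reg)
    {
            return APIC_BASE_MSR + (reg >> 4);
    }
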
1 | Initialize the APIC ID in the APIC backing page with the | 1 | Secure AVIC accelerates the guest's EOI msr writes for edge-triggered |
---|---|---|---|
2 | CPUID function 0000_000bh_EDX (Extended Topology Enumeration), | 2 | interrupts. For level-triggered interrupts, EOI msr writes trigger |
3 | and ensure that the APIC ID msr read from the hypervisor is consistent | 3 | a VC exception with the SVM_EXIT_AVIC_UNACCELERATED_ACCESS error code. The |
4 | with the value read from CPUID. | 4 | VC handler would need to trigger a GHCB protocol MSR write event to |
5 | notify the hypervisor about the completion of the level-triggered |
6 | interrupt. This is required for cases like emulated IOAPIC. VC exception |
7 | handling adds extra performance overhead for APIC register writes. In |
8 | addition, some unaccelerated APIC register msr writes are trapped, |
9 | whereas others are faulted. This results in additional complexity in |
10 | VC exception handling for unaccelerated accesses. So, directly do a GHCB |
11 | protocol based EOI write from the apic->eoi() callback for level-triggered |
12 | interrupts. Use wrmsr for edge-triggered interrupts, so that hardware |
13 | re-evaluates any pending interrupt which can be delivered to the guest |
14 | vCPU. For level-triggered interrupts, re-evaluation happens on return |
15 | from the VMGEXIT corresponding to the GHCB event for the EOI msr write. |
5 | 16 | ||
6 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 17 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
7 | --- | 18 | --- |
8 | arch/x86/kernel/apic/x2apic_savic.c | 10 ++++++++++ | 19 | Changes since v2: |
9 | 1 file changed, 10 insertions(+) | 20 | - Reuse find_highest_vector() from kvm/lapic.c |
21 | - Misc cleanups. | ||
10 | 22 | ||
23 | arch/x86/include/asm/apic-emul.h | 28 +++++++++++++ | ||
24 | arch/x86/kernel/apic/x2apic_savic.c | 62 +++++++++++++++++++++++++---- | ||
25 | arch/x86/kvm/lapic.c | 23 ++--------- | ||
26 | 3 files changed, 85 insertions(+), 28 deletions(-) | ||
27 | create mode 100644 arch/x86/include/asm/apic-emul.h | ||
28 | |||
29 | diff --git a/arch/x86/include/asm/apic-emul.h b/arch/x86/include/asm/apic-emul.h | ||
30 | new file mode 100644 | ||
31 | index XXXXXXX..XXXXXXX | ||
32 | --- /dev/null | ||
33 | +++ b/arch/x86/include/asm/apic-emul.h | ||
34 | @@ -XXX,XX +XXX,XX @@ | ||
35 | +/* SPDX-License-Identifier: GPL-2.0-only */ | ||
36 | +#ifndef _ASM_X86_APIC_EMUL_H | ||
37 | +#define _ASM_X86_APIC_EMUL_H | ||
38 | + | ||
39 | +#define MAX_APIC_VECTOR 256 | ||
40 | +#define APIC_VECTORS_PER_REG 32 | ||
41 | + | ||
42 | +static inline int apic_find_highest_vector(void *bitmap) | ||
43 | +{ | ||
44 | + int regno; |
45 | + unsigned int vec; | ||
46 | + u32 *reg; | ||
47 | + | ||
48 | + /* | ||
49 | + * The registers in the bitmap are 32-bit wide and 16-byte |
50 | + * aligned. State of a vector is stored in a single bit. | ||
51 | + */ | ||
52 | + for (regno = MAX_APIC_VECTOR / APIC_VECTORS_PER_REG - 1; regno >= 0; regno--) { | ||
53 | + vec = regno * APIC_VECTORS_PER_REG; | ||
54 | + reg = bitmap + regno * 16; | ||
55 | + if (*reg) | ||
56 | + return __fls(*reg) + vec; | ||
57 | + } | ||
58 | + | ||
59 | + return -1; | ||
60 | +} | ||
61 | + | ||
62 | +#endif /* _ASM_X86_APIC_EMUL_H */ | ||
11 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | 63 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c |
12 | index XXXXXXX..XXXXXXX 100644 | 64 | index XXXXXXX..XXXXXXX 100644 |
13 | --- a/arch/x86/kernel/apic/x2apic_savic.c | 65 | --- a/arch/x86/kernel/apic/x2apic_savic.c |
14 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | 66 | +++ b/arch/x86/kernel/apic/x2apic_savic.c |
15 | @@ -XXX,XX +XXX,XX @@ | 67 | @@ -XXX,XX +XXX,XX @@ |
16 | #include <linux/sizes.h> | 68 | #include <linux/align.h> |
17 | 69 | ||
18 | #include <asm/apic.h> | 70 | #include <asm/apic.h> |
19 | +#include <asm/cpuid.h> | 71 | +#include <asm/apic-emul.h> |
20 | #include <asm/sev.h> | 72 | #include <asm/sev.h> |
21 | 73 | ||
22 | #include "local.h" | 74 | #include "local.h" |
23 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_IPI_mask_allbutself(const struct cpumask *mask, in | 75 | @@ -XXX,XX +XXX,XX @@ static __always_inline void set_reg(unsigned int offset, u32 val) |
24 | 76 | WRITE_ONCE(this_cpu_ptr(apic_page)->regs[offset >> 2], val); | |
25 | static void init_backing_page(void *backing_page) | 77 | } |
26 | { | 78 | |
27 | + u32 hv_apic_id; | 79 | -#define SAVIC_ALLOWED_IRR 0x204 |
28 | + u32 apic_id; | 80 | - |
29 | u32 val; | 81 | -static inline void update_vector(unsigned int cpu, unsigned int offset, |
30 | int i; | 82 | - unsigned int vector, bool set) |
31 | 83 | +static inline unsigned long *get_reg_bitmap(unsigned int cpu, unsigned int offset) | |
32 | @@ -XXX,XX +XXX,XX @@ static void init_backing_page(void *backing_page) | 84 | { |
33 | 85 | struct apic_page *ap = per_cpu_ptr(apic_page, cpu); | |
34 | val = read_msr_from_hv(APIC_LDR); | 86 | - unsigned long *reg = (unsigned long *) &ap->bytes[offset]; |
35 | set_reg(backing_page, APIC_LDR, val); | 87 | - unsigned int bit; |
36 | + | 88 | |
37 | + /* Read APIC ID from Extended Topology Enumeration CPUID */ | 89 | + return (unsigned long *) &ap->bytes[offset]; |
38 | + apic_id = cpuid_edx(0x0000000b); | 90 | +} |
39 | + hv_apic_id = read_msr_from_hv(APIC_ID); | 91 | + |
40 | + WARN_ONCE(hv_apic_id != apic_id, "Inconsistent APIC_ID values: %d (cpuid), %d (msr)", | 92 | +static inline unsigned int get_vec_bit(unsigned int vector) |
41 | + apic_id, hv_apic_id); | 93 | +{ |
42 | + set_reg(backing_page, APIC_ID, apic_id); | 94 | /* |
43 | } | 95 | * The registers are 32-bit wide and 16-byte aligned. |
44 | 96 | * Compensate for the resulting bit number spacing. | |
45 | static void x2apic_savic_setup(void) | 97 | */ |
98 | - bit = vector + 96 * (vector / 32); | ||
99 | + return vector + 96 * (vector / 32); | ||
100 | +} | ||
101 | + | ||
102 | +static inline void update_vector(unsigned int cpu, unsigned int offset, | ||
103 | + unsigned int vector, bool set) | ||
104 | +{ | ||
105 | + unsigned long *reg = get_reg_bitmap(cpu, offset); | ||
106 | + unsigned int bit = get_vec_bit(vector); | ||
107 | |||
108 | if (set) | ||
109 | set_bit(bit, reg); | ||
110 | @@ -XXX,XX +XXX,XX @@ static inline void update_vector(unsigned int cpu, unsigned int offset, | ||
111 | clear_bit(bit, reg); | ||
112 | } | ||
113 | |||
114 | +static inline bool test_vector(unsigned int cpu, unsigned int offset, unsigned int vector) | ||
115 | +{ | ||
116 | + unsigned long *reg = get_reg_bitmap(cpu, offset); | ||
117 | + unsigned int bit = get_vec_bit(vector); | ||
118 | + | ||
119 | + return test_bit(bit, reg); | ||
120 | +} | ||
121 | + | ||
122 | +#define SAVIC_ALLOWED_IRR 0x204 | ||
123 | + | ||
124 | static u32 x2apic_savic_read(u32 reg) | ||
125 | { | ||
126 | /* | ||
127 | @@ -XXX,XX +XXX,XX @@ static int x2apic_savic_probe(void) | ||
128 | return 1; | ||
129 | } | ||
130 | |||
131 | +static void x2apic_savic_eoi(void) | ||
132 | +{ | ||
133 | + unsigned int cpu; | ||
134 | + int vec; | ||
135 | + | ||
136 | + cpu = raw_smp_processor_id(); | ||
137 | + vec = apic_find_highest_vector(get_reg_bitmap(cpu, APIC_ISR)); | ||
138 | + if (WARN_ONCE(vec == -1, "EOI write while no active interrupt in APIC_ISR")) | ||
139 | + return; | ||
140 | + | ||
141 | + if (test_vector(cpu, APIC_TMR, vec)) { | ||
142 | + update_vector(cpu, APIC_ISR, vec, false); | ||
143 | + /* | ||
144 | + * Propagate the EOI write to the hypervisor for level-triggered |
145 | + * interrupts. The return to the guest from the GHCB protocol event |
146 | + * takes care of re-evaluating the interrupt state. |
147 | + */ | ||
148 | + savic_ghcb_msr_write(APIC_EOI, 0); | ||
149 | + } else { | ||
150 | + /* | ||
151 | + * Hardware clears APIC_ISR and re-evaluates the interrupt state | ||
152 | + * to determine if there is any pending interrupt which can be | ||
153 | + * delivered to CPU. | ||
154 | + */ | ||
155 | + native_apic_msr_eoi(); | ||
156 | + } | ||
157 | +} | ||
158 | + | ||
159 | static struct apic apic_x2apic_savic __ro_after_init = { | ||
160 | |||
161 | .name = "secure avic x2apic", | ||
162 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { | ||
163 | |||
164 | .read = x2apic_savic_read, | ||
165 | .write = x2apic_savic_write, | ||
166 | - .eoi = native_apic_msr_eoi, | ||
167 | + .eoi = x2apic_savic_eoi, | ||
168 | .icr_read = native_x2apic_icr_read, | ||
169 | .icr_write = x2apic_savic_icr_write, | ||
170 | |||
171 | diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c | ||
172 | index XXXXXXX..XXXXXXX 100644 | ||
173 | --- a/arch/x86/kvm/lapic.c | ||
174 | +++ b/arch/x86/kvm/lapic.c | ||
175 | @@ -XXX,XX +XXX,XX @@ | ||
176 | #include <linux/export.h> | ||
177 | #include <linux/math64.h> | ||
178 | #include <linux/slab.h> | ||
179 | +#include <asm/apic-emul.h> | ||
180 | #include <asm/processor.h> | ||
181 | #include <asm/mce.h> | ||
182 | #include <asm/msr.h> | ||
183 | @@ -XXX,XX +XXX,XX @@ | ||
184 | /* 14 is the version for Xeon and Pentium 8.4.8*/ | ||
185 | #define APIC_VERSION 0x14UL | ||
186 | #define LAPIC_MMIO_LENGTH (1 << 12) | ||
187 | -/* followed define is not in apicdef.h */ | ||
188 | -#define MAX_APIC_VECTOR 256 | ||
189 | -#define APIC_VECTORS_PER_REG 32 | ||
190 | |||
191 | /* | ||
192 | * Enable local APIC timer advancement (tscdeadline mode only) with adaptive | ||
193 | @@ -XXX,XX +XXX,XX @@ static const unsigned int apic_lvt_mask[KVM_APIC_MAX_NR_LVT_ENTRIES] = { | ||
194 | [LVT_CMCI] = LVT_MASK | APIC_MODE_MASK | ||
195 | }; | ||
196 | |||
197 | -static int find_highest_vector(void *bitmap) | ||
198 | -{ | ||
199 | - int vec; | ||
200 | - u32 *reg; | ||
201 | - | ||
202 | - for (vec = MAX_APIC_VECTOR - APIC_VECTORS_PER_REG; | ||
203 | - vec >= 0; vec -= APIC_VECTORS_PER_REG) { | ||
204 | - reg = bitmap + REG_POS(vec); | ||
205 | - if (*reg) | ||
206 | - return __fls(*reg) + vec; | ||
207 | - } | ||
208 | - | ||
209 | - return -1; | ||
210 | -} | ||
211 | - | ||
212 | static u8 count_vectors(void *bitmap) | ||
213 | { | ||
214 | int vec; | ||
215 | @@ -XXX,XX +XXX,XX @@ EXPORT_SYMBOL_GPL(kvm_apic_update_irr); | ||
216 | |||
217 | static inline int apic_search_irr(struct kvm_lapic *apic) | ||
218 | { | ||
219 | - return find_highest_vector(apic->regs + APIC_IRR); | ||
220 | + return apic_find_highest_vector(apic->regs + APIC_IRR); | ||
221 | } | ||
222 | |||
223 | static inline int apic_find_highest_irr(struct kvm_lapic *apic) | ||
224 | @@ -XXX,XX +XXX,XX @@ static inline int apic_find_highest_isr(struct kvm_lapic *apic) | ||
225 | if (likely(apic->highest_isr_cache != -1)) | ||
226 | return apic->highest_isr_cache; | ||
227 | |||
228 | - result = find_highest_vector(apic->regs + APIC_ISR); | ||
229 | + result = apic_find_highest_vector(apic->regs + APIC_ISR); | ||
230 | ASSERT(result == -1 || result >= 16); | ||
231 | |||
232 | return result; | ||
46 | -- | 233 | -- |
47 | 2.34.1 | 234 | 2.34.1 |
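
As a sanity check of the bit-spacing arithmetic in get_vec_bit() above: IRR/ISR/TMR are 32-bit registers placed at a 16-byte (128-bit) stride, so vector v sits at flat bit (v / 32) * 128 + (v % 32), which reduces to v + 96 * (v / 32). A small standalone check (userspace, illustrative only):

    #include <assert.h>

    /* Mirrors get_vec_bit(): 32 usable bits per 128-bit stride. */
    static unsigned int get_vec_bit(unsigned int vector)
    {
            return vector + 96 * (vector / 32);
    }

    int main(void)
    {
            /* Vector 0x31 (49): register 1, bit 17 -> 1 * 128 + 17 = 145. */
            assert(get_vec_bit(0x31) == 145);
            /* Vector 0x80 (128): register 4, bit 0 -> 4 * 128 = 512. */
            assert(get_vec_bit(0x80) == 512);
            return 0;
    }
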
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | 1 | Add an apic->teardown() callback to disable Secure AVIC before |
---|---|---|---|
2 | rebooting into the new kernel. This ensures that the new | ||
3 | kernel does not access the old APIC backing page which was | ||
4 | allocated by the previous kernel. Such accesses can happen | ||
5 | if the APIC is accessed during guest boot before the Secure |
6 | AVIC driver probe runs in the new kernel (as Secure |
7 | AVIC would have remained enabled in the Secure AVIC control | ||
8 | msr). | ||
2 | 9 | ||
3 | Secure AVIC lets the guest manage the APIC backing page (unlike emulated | 10 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
4 | x2APIC or x2AVIC where the hypervisor manages the APIC backing page). | ||
5 | |||
6 | However the introduced Secure AVIC Linux design still maintains the | ||
7 | APIC backing page in the hypervisor to shadow the APIC backing page | ||
8 | maintained by the guest (note that only a subset of the registers | 15 | |
9 | are shadowed for specific usecases and registers like APIC_IRR, | ||
10 | APIC_ISR are not shadowed). | ||
11 | |||
12 | Add sev_ghcb_msr_read() to invoke "SVM_EXIT_MSR" VMGEXIT to read | ||
13 | MSRs from hypervisor. Initialize the Secure AVIC's APIC backing | ||
14 | page by copying the initial state of shadow APIC backing page in | ||
15 | the hypervisor to the guest APIC backing page. Specifically copy | ||
16 | APIC_LVR, APIC_LDR, and APIC_LVT MSRs from the shadow APIC backing | ||
17 | page. | ||
18 | |||
19 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | ||
20 | Co-developed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
21 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 10 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
22 | --- | 11 | --- |
23 | arch/x86/coco/sev/core.c | 41 ++++++++++++++++----- | 12 | Changes since v2: |
13 | - Change savic_unregister_gpa() interface to allow GPA unregistration | ||
14 | only for local CPU. | ||
15 | |||
16 | arch/x86/coco/sev/core.c | 25 +++++++++++++++++++++++++ | ||
17 | arch/x86/include/asm/apic.h | 1 + | ||
24 | arch/x86/include/asm/sev.h | 2 ++ | 18 | arch/x86/include/asm/sev.h | 2 ++ |
25 | arch/x86/kernel/apic/x2apic_savic.c | 55 +++++++++++++++++++++++++++++ | 19 | arch/x86/kernel/apic/apic.c | 3 +++ |
26 | 3 files changed, 90 insertions(+), 8 deletions(-) | 20 | arch/x86/kernel/apic/x2apic_savic.c | 8 ++++++++ |
21 | 5 files changed, 39 insertions(+) | ||
27 | 22 | ||
28 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c | 23 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c |
29 | index XXXXXXX..XXXXXXX 100644 | 24 | index XXXXXXX..XXXXXXX 100644 |
30 | --- a/arch/x86/coco/sev/core.c | 25 | --- a/arch/x86/coco/sev/core.c |
31 | +++ b/arch/x86/coco/sev/core.c | 26 | +++ b/arch/x86/coco/sev/core.c |
32 | @@ -XXX,XX +XXX,XX @@ int __init sev_es_efi_map_ghcbs(pgd_t *pgd) | 27 | @@ -XXX,XX +XXX,XX @@ enum es_result savic_register_gpa(u64 gpa) |
33 | return 0; | 28 | return res; |
34 | } | 29 | } |
35 | 30 | ||
36 | -static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) | 31 | +enum es_result savic_unregister_gpa(u64 *gpa) |
37 | +static enum es_result __vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt, bool write) | ||
38 | { | ||
39 | struct pt_regs *regs = ctxt->regs; | ||
40 | + u64 exit_info_1 = write ? 1 : 0; | ||
41 | enum es_result ret; | ||
42 | - u64 exit_info_1; | ||
43 | - | ||
44 | - /* Is it a WRMSR? */ | ||
45 | - exit_info_1 = (ctxt->insn.opcode.bytes[1] == 0x30) ? 1 : 0; | ||
46 | |||
47 | if (regs->cx == MSR_SVSM_CAA) { | ||
48 | /* Writes to the SVSM CAA msr are ignored */ | ||
49 | - if (exit_info_1) | ||
50 | + if (write) | ||
51 | return ES_OK; | ||
52 | |||
53 | regs->ax = lower_32_bits(this_cpu_read(svsm_caa_pa)); | ||
54 | @@ -XXX,XX +XXX,XX @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) | ||
55 | } | ||
56 | |||
57 | ghcb_set_rcx(ghcb, regs->cx); | ||
58 | - if (exit_info_1) { | ||
59 | + if (write) { | ||
60 | ghcb_set_rax(ghcb, regs->ax); | ||
61 | ghcb_set_rdx(ghcb, regs->dx); | ||
62 | } | ||
63 | |||
64 | ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_MSR, exit_info_1, 0); | ||
65 | |||
66 | - if ((ret == ES_OK) && (!exit_info_1)) { | ||
67 | + if (ret == ES_OK && !write) { | ||
68 | regs->ax = ghcb->save.rax; | ||
69 | regs->dx = ghcb->save.rdx; | ||
70 | } | ||
71 | @@ -XXX,XX +XXX,XX @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) | ||
72 | return ret; | ||
73 | } | ||
74 | |||
75 | +static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) | ||
76 | +{ | 32 | +{ |
77 | + return __vc_handle_msr(ghcb, ctxt, ctxt->insn.opcode.bytes[1] == 0x30); | ||
78 | +} | ||
79 | + | ||
80 | +enum es_result sev_ghcb_msr_read(u64 msr, u64 *value) | ||
81 | +{ | ||
82 | + struct pt_regs regs = { .cx = msr }; | ||
83 | + struct es_em_ctxt ctxt = { .regs = ®s }; | ||
84 | + struct ghcb_state state; | 33 | + struct ghcb_state state; |
34 | + struct es_em_ctxt ctxt; | ||
85 | + unsigned long flags; | 35 | + unsigned long flags; |
86 | + enum es_result ret; | ||
87 | + struct ghcb *ghcb; | 36 | + struct ghcb *ghcb; |
37 | + int ret = 0; | ||
88 | + | 38 | + |
89 | + local_irq_save(flags); | 39 | + local_irq_save(flags); |
40 | + | ||
90 | + ghcb = __sev_get_ghcb(&state); | 41 | + ghcb = __sev_get_ghcb(&state); |
42 | + | ||
91 | + vc_ghcb_invalidate(ghcb); | 43 | + vc_ghcb_invalidate(ghcb); |
92 | + | 44 | + |
93 | + ret = __vc_handle_msr(ghcb, &ctxt, false); | 45 | + ghcb_set_rax(ghcb, -1ULL); |
94 | + if (ret == ES_OK) | 46 | + ret = sev_es_ghcb_hv_call(ghcb, &ctxt, SVM_VMGEXIT_SECURE_AVIC, |
95 | + *value = regs.ax | regs.dx << 32; | 47 | + SVM_VMGEXIT_SECURE_AVIC_UNREGISTER_GPA, 0); |
48 | + if (gpa && ret == ES_OK) | ||
49 | + *gpa = ghcb->save.rbx; | ||
50 | + __sev_put_ghcb(&state); | ||
96 | + | 51 | + |
97 | + __sev_put_ghcb(&state); | ||
98 | + local_irq_restore(flags); | 52 | + local_irq_restore(flags); |
99 | + | ||
100 | + return ret; | 53 | + return ret; |
101 | +} | 54 | +} |
102 | + | 55 | + |
103 | enum es_result sev_notify_savic_gpa(u64 gpa) | 56 | static void snp_register_per_cpu_ghcb(void) |
104 | { | 57 | { |
105 | struct ghcb_state state; | 58 | struct sev_es_runtime_data *data; |
59 | diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h | ||
60 | index XXXXXXX..XXXXXXX 100644 | ||
61 | --- a/arch/x86/include/asm/apic.h | ||
62 | +++ b/arch/x86/include/asm/apic.h | ||
63 | @@ -XXX,XX +XXX,XX @@ struct apic { | ||
64 | /* Probe, setup and smpboot functions */ | ||
65 | int (*probe)(void); | ||
66 | void (*setup)(void); | ||
67 | + void (*teardown)(void); | ||
68 | int (*acpi_madt_oem_check)(char *oem_id, char *oem_table_id); | ||
69 | |||
70 | void (*init_apic_ldr)(void); | ||
106 | diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h | 71 | diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h |
107 | index XXXXXXX..XXXXXXX 100644 | 72 | index XXXXXXX..XXXXXXX 100644 |
108 | --- a/arch/x86/include/asm/sev.h | 73 | --- a/arch/x86/include/asm/sev.h |
109 | +++ b/arch/x86/include/asm/sev.h | 74 | +++ b/arch/x86/include/asm/sev.h |
110 | @@ -XXX,XX +XXX,XX @@ u64 sev_get_status(void); | 75 | @@ -XXX,XX +XXX,XX @@ int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_req *req |
111 | void sev_show_status(void); | 76 | void __init snp_secure_tsc_prepare(void); |
112 | void snp_update_svsm_ca(void); | 77 | void __init snp_secure_tsc_init(void); |
113 | enum es_result sev_notify_savic_gpa(u64 gpa); | 78 | enum es_result savic_register_gpa(u64 gpa); |
114 | +enum es_result sev_ghcb_msr_read(u64 msr, u64 *value); | 79 | +enum es_result savic_unregister_gpa(u64 *gpa); |
115 | 80 | u64 savic_ghcb_msr_read(u32 reg); | |
116 | #else /* !CONFIG_AMD_MEM_ENCRYPT */ | 81 | void savic_ghcb_msr_write(u32 reg, u64 value); |
117 | 82 | ||
118 | @@ -XXX,XX +XXX,XX @@ static inline u64 sev_get_status(void) { return 0; } | 83 | @@ -XXX,XX +XXX,XX @@ static inline int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_ |
119 | static inline void sev_show_status(void) { } | 84 | static inline void __init snp_secure_tsc_prepare(void) { } |
120 | static inline void snp_update_svsm_ca(void) { } | 85 | static inline void __init snp_secure_tsc_init(void) { } |
121 | static inline enum es_result sev_notify_savic_gpa(u64 gpa) { return ES_UNSUPPORTED; } | 86 | static inline enum es_result savic_register_gpa(u64 gpa) { return ES_UNSUPPORTED; } |
122 | +static inline enum es_result sev_ghcb_msr_read(u64 msr, u64 *value) { return ES_UNSUPPORTED; } | 87 | +static inline enum es_result savic_unregister_gpa(u64 *gpa) { return ES_UNSUPPORTED; } |
123 | 88 | static inline void savic_ghcb_msr_write(u32 reg, u64 value) { } | |
124 | #endif /* CONFIG_AMD_MEM_ENCRYPT */ | 89 | static inline u64 savic_ghcb_msr_read(u32 reg) { return 0; } |
125 | 90 | ||
91 | diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c | ||
92 | index XXXXXXX..XXXXXXX 100644 | ||
93 | --- a/arch/x86/kernel/apic/apic.c | ||
94 | +++ b/arch/x86/kernel/apic/apic.c | ||
95 | @@ -XXX,XX +XXX,XX @@ void disable_local_APIC(void) | ||
96 | if (!apic_accessible()) | ||
97 | return; | ||
98 | |||
99 | + if (apic->teardown) | ||
100 | + apic->teardown(); | ||
101 | + | ||
102 | apic_soft_disable(); | ||
103 | |||
104 | #ifdef CONFIG_X86_32 | ||
126 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | 105 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c |
127 | index XXXXXXX..XXXXXXX 100644 | 106 | index XXXXXXX..XXXXXXX 100644 |
128 | --- a/arch/x86/kernel/apic/x2apic_savic.c | 107 | --- a/arch/x86/kernel/apic/x2apic_savic.c |
129 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | 108 | +++ b/arch/x86/kernel/apic/x2apic_savic.c |
130 | @@ -XXX,XX +XXX,XX @@ | 109 | @@ -XXX,XX +XXX,XX @@ static void init_apic_page(void) |
131 | #include <linux/cc_platform.h> | 110 | set_reg(APIC_ID, apic_id); |
132 | #include <linux/percpu-defs.h> | ||
133 | #include <linux/align.h> | ||
134 | +#include <linux/sizes.h> | ||
135 | |||
136 | #include <asm/apic.h> | ||
137 | #include <asm/sev.h> | ||
138 | @@ -XXX,XX +XXX,XX @@ | ||
139 | static DEFINE_PER_CPU(void *, apic_backing_page); | ||
140 | static DEFINE_PER_CPU(bool, savic_setup_done); | ||
141 | |||
142 | +enum lapic_lvt_entry { | ||
143 | + LVT_TIMER, | ||
144 | + LVT_THERMAL_MONITOR, | ||
145 | + LVT_PERFORMANCE_COUNTER, | ||
146 | + LVT_LINT0, | ||
147 | + LVT_LINT1, | ||
148 | + LVT_ERROR, | ||
149 | + | ||
150 | + APIC_MAX_NR_LVT_ENTRIES, | ||
151 | +}; | ||
152 | + | ||
153 | +#define APIC_LVTx(x) (APIC_LVTT + 0x10 * (x)) | ||
154 | + | ||
155 | static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id) | ||
156 | { | ||
157 | return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC); | ||
158 | @@ -XXX,XX +XXX,XX @@ static inline void set_reg(char *page, int reg_off, u32 val) | ||
159 | WRITE_ONCE(*((u32 *)(page + reg_off)), val); | ||
160 | } | 111 | } |
161 | 112 | ||
162 | +static u32 read_msr_from_hv(u32 reg) | 113 | +static void x2apic_savic_teardown(void) |
163 | +{ | 114 | +{ |
164 | + u64 data, msr; | 115 | + /* Disable Secure AVIC */ |
165 | + int ret; | 116 | + native_wrmsr(MSR_AMD64_SECURE_AVIC_CONTROL, 0, 0); |
166 | + | 117 | + savic_unregister_gpa(NULL); |
167 | + msr = APIC_BASE_MSR + (reg >> 4); | ||
168 | + ret = sev_ghcb_msr_read(msr, &data); | ||
169 | + if (ret != ES_OK) { | ||
170 | + pr_err("Secure AVIC msr (%#llx) read returned error (%d)\n", msr, ret); | ||
171 | + /* MSR read failures are treated as fatal errors */ | ||
172 | + snp_abort(); | ||
173 | + } | ||
174 | + | ||
175 | + return lower_32_bits(data); | ||
176 | +} | ||
177 | + | ||
178 | #define SAVIC_ALLOWED_IRR_OFFSET 0x204 | ||
179 | |||
180 | static u32 x2apic_savic_read(u32 reg) | ||
181 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_IPI_mask_allbutself(const struct cpumask *mask, in | ||
182 | __send_IPI_mask(mask, vector, APIC_DEST_ALLBUT); | ||
183 | } | ||
184 | |||
185 | +static void init_backing_page(void *backing_page) | ||
186 | +{ | ||
187 | + u32 val; | ||
188 | + int i; | ||
189 | + | ||
190 | + val = read_msr_from_hv(APIC_LVR); | ||
191 | + set_reg(backing_page, APIC_LVR, val); | ||
192 | + | ||
193 | + /* | ||
194 | + * Hypervisor is used for all timer related functions, | ||
195 | + * so don't copy those values. | ||
196 | + */ | ||
197 | + for (i = LVT_THERMAL_MONITOR; i < APIC_MAX_NR_LVT_ENTRIES; i++) { | ||
198 | + val = read_msr_from_hv(APIC_LVTx(i)); | ||
199 | + set_reg(backing_page, APIC_LVTx(i), val); | ||
200 | + } | ||
201 | + | ||
202 | + val = read_msr_from_hv(APIC_LVT0); | ||
203 | + set_reg(backing_page, APIC_LVT0, val); | ||
204 | + | ||
205 | + val = read_msr_from_hv(APIC_LDR); | ||
206 | + set_reg(backing_page, APIC_LDR, val); | ||
207 | +} | 118 | +} |
208 | + | 119 | + |
209 | static void x2apic_savic_setup(void) | 120 | static void x2apic_savic_setup(void) |
210 | { | 121 | { |
211 | void *backing_page; | 122 | void *backing_page; |
212 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_setup(void) | 123 | @@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = { |
213 | return; | 124 | .probe = x2apic_savic_probe, |
214 | 125 | .acpi_madt_oem_check = x2apic_savic_acpi_madt_oem_check, | |
215 | backing_page = this_cpu_read(apic_backing_page); | 126 | .setup = x2apic_savic_setup, |
216 | + init_backing_page(backing_page); | 127 | + .teardown = x2apic_savic_teardown, |
217 | gpa = __pa(backing_page); | 128 | |
218 | ret = sev_notify_savic_gpa(gpa); | 129 | .dest_mode_logical = false, |
219 | if (ret != ES_OK) | 130 | |
220 | -- | 131 | -- |
221 | 2.34.1 | 132 | 2.34.1 |
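
savic_unregister_gpa() above follows the file's usual GHCB request shape; condensed into a single illustrative helper below (a restatement of the hunk shown, with the rax/rbx roles unchanged; the helper name is made up):

    /*
     * Restatement of savic_unregister_gpa() above: take the per-CPU
     * GHCB with interrupts disabled, clear stale state, issue the
     * Secure AVIC VMGEXIT, and optionally hand back the old GPA that
     * the hypervisor reports in rbx.
     */
    static enum es_result savic_vmgexit(u64 request, u64 rax, u64 *rbx_out)
    {
            struct ghcb_state state;
            struct es_em_ctxt ctxt;
            unsigned long flags;
            enum es_result ret;
            struct ghcb *ghcb;

            local_irq_save(flags);
            ghcb = __sev_get_ghcb(&state);
            vc_ghcb_invalidate(ghcb);

            ghcb_set_rax(ghcb, rax);
            ret = sev_es_ghcb_hv_call(ghcb, &ctxt, SVM_VMGEXIT_SECURE_AVIC,
                                      request, 0);
            if (rbx_out && ret == ES_OK)
                    *rbx_out = ghcb->save.rbx;

            __sev_put_ghcb(&state);
            local_irq_restore(flags);

            return ret;
    }
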
1 | With all the pieces in place now, enable Secure AVIC in the Secure | 1 | With all the pieces in place now, enable Secure AVIC in the Secure |
---|---|---|---|
2 | AVIC Control MSR. All accesses to x2APIC MSRs are emulated by | 2 | AVIC Control MSR. All accesses to x2APIC MSRs are emulated by |
3 | the hypervisor before Secure AVIC is enabled in the Control MSR. | 3 | the hypervisor before Secure AVIC is enabled in the control MSR. |
4 | Post Secure AVIC enablement, all x2APIC MSR accesses (whether | 4 | Post Secure AVIC enablement, all x2APIC MSR accesses (whether |
5 | accelerated by AVIC hardware or trapped as a #VC exception) operate | 5 | accelerated by AVIC hardware or trapped as a VC exception) operate |
6 | on the guest APIC backing page. | 6 | on the vCPU's APIC backing page. |
7 | 7 | ||
8 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 8 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
9 | --- | 9 | --- |
10 | Changes since v2: | ||
11 | - Move MSR_AMD64_SECURE_AVIC_EN* macros to this patch. | ||
12 | |||
13 | arch/x86/include/asm/msr-index.h | 2 ++ | ||
10 | arch/x86/kernel/apic/x2apic_savic.c | 2 +- | 14 | arch/x86/kernel/apic/x2apic_savic.c | 2 +- |
11 | 1 file changed, 1 insertion(+), 1 deletion(-) | 15 | 2 files changed, 3 insertions(+), 1 deletion(-) |
12 | 16 | ||
17 | diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h | ||
18 | index XXXXXXX..XXXXXXX 100644 | ||
19 | --- a/arch/x86/include/asm/msr-index.h | ||
20 | +++ b/arch/x86/include/asm/msr-index.h | ||
21 | @@ -XXX,XX +XXX,XX @@ | ||
22 | #define MSR_AMD64_SNP_RESV_BIT 19 | ||
23 | #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) | ||
24 | #define MSR_AMD64_SECURE_AVIC_CONTROL 0xc0010138 | ||
25 | +#define MSR_AMD64_SECURE_AVIC_EN_BIT 0 | ||
26 | +#define MSR_AMD64_SECURE_AVIC_EN BIT_ULL(MSR_AMD64_SECURE_AVIC_EN_BIT) | ||
27 | #define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT 1 | ||
28 | #define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI BIT_ULL(MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT) | ||
29 | #define MSR_AMD64_RMP_BASE 0xc0010132 | ||
13 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c | 30 | diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c |
14 | index XXXXXXX..XXXXXXX 100644 | 31 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/arch/x86/kernel/apic/x2apic_savic.c | 32 | --- a/arch/x86/kernel/apic/x2apic_savic.c |
16 | +++ b/arch/x86/kernel/apic/x2apic_savic.c | 33 | +++ b/arch/x86/kernel/apic/x2apic_savic.c |
17 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_setup(void) | 34 | @@ -XXX,XX +XXX,XX @@ static void x2apic_savic_setup(void) |
18 | ret = sev_notify_savic_gpa(gpa); | 35 | ret = savic_register_gpa(gpa); |
19 | if (ret != ES_OK) | 36 | if (ret != ES_OK) |
20 | snp_abort(); | 37 | snp_abort(); |
21 | - savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI); | 38 | - savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI); |
22 | + savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_EN | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI); | 39 | + savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_EN | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI); |
23 | this_cpu_write(savic_setup_done, true); | ||
24 | } | 40 | } |
25 | 41 | ||
42 | static int x2apic_savic_probe(void) | ||
26 | -- | 43 | -- |
27 | 2.34.1 | 44 | 2.34.1 |
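
Taken together with the earlier AllowedNMI patch, the final setup sequence condenses to the following restatement of x2apic_savic_setup() as shown (no new logic, just the shown calls in order, wrapped for readability):

    /* Condensed restatement of the final x2apic_savic_setup() flow. */
    static void x2apic_savic_setup_sketch(void *backing_page)
    {
            u64 gpa = __pa(backing_page);          /* 4K-aligned guest PA */

            if (savic_register_gpa(gpa) != ES_OK)  /* announce to the hypervisor */
                    snp_abort();

            savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_EN |
                                 MSR_AMD64_SECURE_AVIC_ALLOWEDNMI);
    }
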
New patch | |||
---|---|---|---|
1 | The SECURE_AVIC_CONTROL MSR holds the GPA of the guest APIC backing | ||
2 | page and bitfields to control enablement of Secure AVIC and NMI by | ||
3 | guest vCPUs. This MSR is populated by the guest and the hypervisor | ||
4 | should not intercept it. A #VC exception will be generated otherwise. | ||
5 | If this occurs and Secure AVIC is enabled, terminate guest execution. | ||
1 | 6 | ||
7 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | ||
8 | --- | ||
9 | Changes since v2: | ||
10 | - No change | ||
11 | |||
12 | arch/x86/coco/sev/core.c | 9 +++++++++ | ||
13 | 1 file changed, 9 insertions(+) | ||
14 | |||
15 | diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c | ||
16 | index XXXXXXX..XXXXXXX 100644 | ||
17 | --- a/arch/x86/coco/sev/core.c | ||
18 | +++ b/arch/x86/coco/sev/core.c | ||
19 | @@ -XXX,XX +XXX,XX @@ static enum es_result __vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt | ||
20 | if (sev_status & MSR_AMD64_SNP_SECURE_TSC) | ||
21 | return __vc_handle_secure_tsc_msrs(regs, write); | ||
22 | break; | ||
23 | + case MSR_AMD64_SECURE_AVIC_CONTROL: | ||
24 | + /* | ||
25 | + * AMD64_SECURE_AVIC_CONTROL should not be intercepted when | ||
26 | + * Secure AVIC is enabled. Terminate the Secure AVIC guest | ||
27 | + * if the interception is enabled. | ||
28 | + */ | ||
29 | + if (cc_platform_has(CC_ATTR_SNP_SECURE_AVIC)) | ||
30 | + return ES_VMM_ERROR; | ||
31 | + fallthrough; | ||
32 | default: | ||
33 | break; | ||
34 | } | ||
35 | -- | ||
36 | 2.34.1 |
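
For context on what returning ES_VMM_ERROR buys here: in the #VC dispatch path that result is treated as a failure in communication with the VMM, and for a kernel-mode exception it ends in guest termination, matching the commit message above. A simplified schematic of that handling (not the exact #VC core code):

    /*
     * Simplified schematic (not the exact #VC core code) of how a
     * vc_handle_msr() result is acted upon for a kernel-mode #VC.
     */
    static void vc_result_schematic(enum es_result result)
    {
            switch (result) {
            case ES_OK:
                    break;
            case ES_VMM_ERROR:
                    /*
                     * Unrecoverable: request termination from the
                     * hypervisor, so the guest never keeps running with
                     * an intercepted Secure AVIC control MSR.
                     */
                    sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SEV_ES_GEN_REQ);
                    break;
            default:
                    /* exception forwarding, retries, etc. elided */
                    break;
            }
    }
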
1 | From: Kishon Vijay Abraham I <kvijayab@amd.com> | 1 | Now that Secure AVIC support is added in the guest, indicate that the |
---|---|---|---|
2 | SEV-SNP guest supports the Secure AVIC feature if CONFIG_AMD_SECURE_AVIC |
3 | is enabled. |
2 | 4 | ||
3 | Now that Secure AVIC support is added in the guest, indicate that the | 5 | Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
4 | SEV-SNP guest supports Secure AVIC. | 6 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
5 | |||
6 | Without this, the guest terminates booting with a Non-Automatic Exit (NAE) | 8 | --- |
7 | termination request event. | ||
8 | |||
9 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> | 6 | Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com> |
10 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> | 7 | Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com> |
11 | --- | 8 | --- |
12 | arch/x86/boot/compressed/sev.c | 2 +- | 9 | Changes since v2: |
13 | 1 file changed, 1 insertion(+), 1 deletion(-) | 10 | - Set SNP_FEATURE_SECURE_AVIC in SNP_FEATURES_PRESENT only when |
11 | CONFIG_AMD_SECURE_AVIC is enabled. | ||
12 | |||
13 | arch/x86/boot/compressed/sev.c | 9 ++++++++- | ||
14 | 1 file changed, 8 insertions(+), 1 deletion(-) | ||
14 | 15 | ||
15 | diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c | 16 | diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c |
16 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/arch/x86/boot/compressed/sev.c | 18 | --- a/arch/x86/boot/compressed/sev.c |
18 | +++ b/arch/x86/boot/compressed/sev.c | 19 | +++ b/arch/x86/boot/compressed/sev.c |
19 | @@ -XXX,XX +XXX,XX @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code) | 20 | @@ -XXX,XX +XXX,XX @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code) |
21 | MSR_AMD64_SNP_SECURE_AVIC | \ | ||
22 | MSR_AMD64_SNP_RESERVED_MASK) | ||
23 | |||
24 | +#ifdef CONFIG_AMD_SECURE_AVIC | ||
25 | +#define SNP_FEATURE_SECURE_AVIC MSR_AMD64_SNP_SECURE_AVIC | ||
26 | +#else | ||
27 | +#define SNP_FEATURE_SECURE_AVIC 0 | ||
28 | +#endif | ||
29 | + | ||
30 | /* | ||
31 | * SNP_FEATURES_PRESENT is the mask of SNP features that are implemented | ||
20 | * by the guest kernel. As and when a new feature is implemented in the | 32 | * by the guest kernel. As and when a new feature is implemented in the |
21 | * guest kernel, a corresponding bit should be added to the mask. | 33 | * guest kernel, a corresponding bit should be added to the mask. |
22 | */ | 34 | */ |
23 | -#define SNP_FEATURES_PRESENT MSR_AMD64_SNP_DEBUG_SWAP | 35 | #define SNP_FEATURES_PRESENT (MSR_AMD64_SNP_DEBUG_SWAP | \ |
24 | +#define SNP_FEATURES_PRESENT (MSR_AMD64_SNP_DEBUG_SWAP | MSR_AMD64_SNP_SECURE_AVIC_ENABLED) | 36 | - MSR_AMD64_SNP_SECURE_TSC) |
37 | + MSR_AMD64_SNP_SECURE_TSC | \ | ||
38 | + SNP_FEATURE_SECURE_AVIC) | ||
25 | 39 | ||
26 | u64 snp_get_unsupported_features(u64 status) | 40 | u64 snp_get_unsupported_features(u64 status) |
27 | { | 41 | { |
28 | -- | 42 | -- |
29 | 2.34.1 | 43 | 2.34.1 |
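
For context, SNP_FEATURES_PRESENT feeds snp_get_unsupported_features(); in its mainline shape (sketched below, not part of this hunk) any SEV_STATUS bit that the hypervisor requires the guest to implement, but that the kernel does not advertise, is reported as unsupported and boot is terminated:

    /* Sketch of the consumer, assuming the mainline helper shape. */
    u64 snp_get_unsupported_features(u64 status)
    {
            if (!(status & MSR_AMD64_SEV_SNP_ENABLED))
                    return 0;

            return status & SNP_FEATURES_IMPL_REQ & ~SNP_FEATURES_PRESENT;
    }
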