Introduction
------------

Secure AVIC is a new hardware feature in the AMD64 architecture to
allow SEV-SNP guests to prevent the hypervisor from generating
unexpected interrupts to a vCPU or otherwise violate architectural
assumptions around APIC behavior.

One of the significant differences from AVIC or emulated x2APIC is that
Secure AVIC uses a guest-owned and managed APIC backing page. It also
introduces additional fields in both the VMCB and the Secure AVIC backing
page to aid the guest in limiting which interrupt vectors can be injected
...
the guest APIC backing page which can be modified directly by the
guest:

a. ALLOWED_IRR

ALLOWED_IRR reg offset indicates the interrupt vectors which the guest
allows the hypervisor to send. The combination of host-controlled
REQUESTED_IRR vectors (part of VMCB) and ALLOWED_IRR is used by
hardware to update the IRR vectors of the Guest APIC backing page.
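
For illustration only, the gating described above can be sketched in
pseudo-C (a mental model of the hardware update on interrupt injection,
not code from this series; the names mirror the registers described
here):

    /* For each 32-bit chunk i of the 256-bit vector bitmaps: */
    u32 deliverable = requested_irr[i] & allowed_irr[i];

    irr[i] |= deliverable;    /* host request, gated by guest policy */

A vector which the guest has not set in ALLOWED_IRR is never merged into
the IRR of the backing page, so the hypervisor cannot inject it.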

#Offset #bits Description
...
b. NMI Request

#Offset #bits Description
278h    0     Set by Guest to request Virtual NMI

The guest needs to set the NMI Request register to allow the Hypervisor
to inject vNMI to it.
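
As a sketch, once the Secure AVIC driver's write callback is in place
(patch 3 of this series), the guest-side opt-in amounts to a single
register write (SAVIC_NMI_REQ is the 0x278 offset from the table above):

    #define SAVIC_NMI_REQ    0x278

    /* Opt in to hypervisor-injected vNMI for this vCPU. */
    apic_write(SAVIC_NMI_REQ, 1);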

LAPIC Timer Support
-------------------
The LAPIC timer is emulated by the hypervisor. So, the APIC_LVTT,
APIC_TMICT, APIC_TDCR and APIC_TMCCT APIC registers are not read/written
to the guest APIC backing page and are communicated to the hypervisor
using SVM_EXIT_MSR VMGEXIT.
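
For example, programming the timer's initial count ends up as a proxied
MSR access rather than a backing page update, roughly like this sketch
(savic_ghcb_msr_write() is an assumed name for the series' GHCB
MSR-protocol helper; the MSR number follows the x2APIC encoding
APIC_BASE_MSR + (offset >> 4)):

    /* Timer registers are handled by the hypervisor, not the backing page. */
    savic_ghcb_msr_write(APIC_BASE_MSR + (APIC_TMICT >> 4), initial_count);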

IPI Support
-----------
...

Kexec Support
-------------
A Secure AVIC enabled guest can kexec to another kernel which also has
Secure AVIC enabled, as the Hypervisor keeps the Secure AVIC feature bit
set in sev_status.
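
A sketch of what the kexec'd kernel can rely on
(MSR_AMD64_SNP_SECURE_AVIC is the sev_status bit defined in patch 1 of
this series):

    /* sev_status is re-read by the new kernel and still has the bit set: */
    if (sev_status & MSR_AMD64_SNP_SECURE_AVIC)
        pr_info("Secure AVIC enabled\n");    /* driver probe can proceed */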

Open Points
-----------

The Secure AVIC driver only supports physical destination mode. If
logical destination mode needs to be supported, a separate x2apic
driver would be required.
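
For reference, physical destination mode means every IPI resolves to a
single target APIC ID, which is exactly what the driver's send path
(patch 1 below) does:

    u32 dest = per_cpu(x86_cpu_to_apicid, cpu);

    __x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL);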

Testing
-------

This series is based on top of commit 535bd326c565 "Merge branch into
tip/master: 'x86/tdx'" of tip/tip master branch.

Host Secure AVIC support patch series is at [1].

Qemu support patch is at [2].

...
3) Verified long run SCF TORTURE IPI test.

[1] https://github.com/AMDESE/linux-kvm/tree/savic-host-latest
[2] https://github.com/AMDESE/qemu/tree/secure-avic

Changes since v2

- Removed RFC tag.
- Change config rule to not auto-select AMD_SECURE_AVIC config when
  AMD_MEM_ENCRYPT config is enabled.
- Fix broken backing page GFP_KERNEL allocation in setup_local_APIC().
  Use alloc_percpu() for APIC backing pages allocation during Secure
  AVIC driver probe.
- Remove code to check for duplicate APIC_ID returned by the
  Hypervisor. Topology evaluation code already does that during boot.
- Fix missing update_vector() callback invocation during vector
  cleanup paths. Invoke update_vector() during setup and tearing down
  of a vector.
- Reuse find_highest_vector() from kvm/lapic.c.
- Change savic_register_gpa()/savic_unregister_gpa() interface to be
  invoked only for the local CPU.
- Misc cleanups.

Kishon Vijay Abraham I (2):
  x86/sev: Initialize VGIF for secondary VCPUs for Secure AVIC
  x86/sev: Enable NMI support for Secure AVIC

Neeraj Upadhyay (15):
  x86/apic: Add new driver for Secure AVIC
  x86/apic: Initialize Secure AVIC APIC backing page
  x86/apic: Populate .read()/.write() callbacks of Secure AVIC driver
  x86/apic: Initialize APIC ID for Secure AVIC
  x86/apic: Add update_vector callback for Secure AVIC
  x86/apic: Add support to send IPI for Secure AVIC
  x86/apic: Support LAPIC timer for Secure AVIC
  x86/apic: Add support to send NMI IPI for Secure AVIC
  x86/apic: Allow NMI to be injected from hypervisor for Secure AVIC
  x86/apic: Read and write LVT* APIC registers from HV for SAVIC guests
  x86/apic: Handle EOI writes for SAVIC guests
  x86/apic: Add kexec support for Secure AVIC
  x86/apic: Enable Secure AVIC in Control MSR
  x86/sev: Prevent SECURE_AVIC_CONTROL MSR interception for Secure AVIC
    guests
  x86/sev: Indicate SEV-SNP guest supports Secure AVIC

 arch/x86/Kconfig                    |  13 +
 arch/x86/boot/compressed/sev.c      |  10 +-
 arch/x86/coco/core.c                |   3 +
 arch/x86/coco/sev/core.c            | 131 +++++++-
 arch/x86/include/asm/apic-emul.h    |  28 ++
 arch/x86/include/asm/apic.h         |  12 +
 arch/x86/include/asm/apicdef.h      |   2 +
 arch/x86/include/asm/msr-index.h    |   9 +-
 arch/x86/include/asm/sev.h          |   8 +
 arch/x86/include/uapi/asm/svm.h     |   3 +
 arch/x86/kernel/apic/Makefile       |   1 +
 arch/x86/kernel/apic/apic.c         |   7 +
 arch/x86/kernel/apic/init.c         |   3 +
 arch/x86/kernel/apic/vector.c       |  53 +++-
 arch/x86/kernel/apic/x2apic_savic.c | 467 ++++++++++++++++++++++++++++
 arch/x86/kvm/lapic.c                |  23 +-
 include/linux/cc_platform.h         |   8 +
 17 files changed, 742 insertions(+), 39 deletions(-)
 create mode 100644 arch/x86/include/asm/apic-emul.h
 create mode 100644 arch/x86/kernel/apic/x2apic_savic.c

base-commit: 535bd326c5657fe570f41b1f76941e449d9e2062
--
2.34.1
...
Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:

- Do not autoselect AMD_SECURE_AVIC config when AMD_MEM_ENCRYPT config
  is enabled. Make AMD_SECURE_AVIC depend on AMD_MEM_ENCRYPT.
- Misc cleanups.

 arch/x86/Kconfig                    |  13 ++++
 arch/x86/boot/compressed/sev.c      |   1 +
 arch/x86/coco/core.c                |   3 +
 arch/x86/include/asm/msr-index.h    |   4 +-
 arch/x86/kernel/apic/Makefile       |   1 +
 arch/x86/kernel/apic/x2apic_savic.c | 109 ++++++++++++++++++++++++++++
 include/linux/cc_platform.h         |   8 ++
 7 files changed, 138 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/apic/x2apic_savic.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -XXX,XX +XXX,XX @@ config X86_X2APIC

     If in doubt, say Y.

+config AMD_SECURE_AVIC
+    bool "AMD Secure AVIC"
+    depends on AMD_MEM_ENCRYPT && X86_X2APIC
+    help
+     Enable this to get AMD Secure AVIC support on guests that have this feature.
+
+     AMD Secure AVIC provides hardware acceleration for performance sensitive
+     APIC accesses and support for managing guest owned APIC state for SEV-SNP
+     guests. Secure AVIC does not support xapic mode. It has functional
+     dependency on x2apic being enabled in the guest.
+
+     If you don't know what to do here, say N.
+
config X86_POSTED_MSI
    bool "Enable MSI and MSI-x delivery by posted interrupts"
    depends on X86_64 && IRQ_REMAP
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -XXX,XX +XXX,XX @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
...
#define MSR_AMD64_SNP_VMSA_REG_PROT    BIT_ULL(MSR_AMD64_SNP_VMSA_REG_PROT_BIT)
#define MSR_AMD64_SNP_SMT_PROT_BIT    17
#define MSR_AMD64_SNP_SMT_PROT        BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT)
-#define MSR_AMD64_SNP_RESV_BIT        18
+#define MSR_AMD64_SNP_SECURE_AVIC_BIT    18
+#define MSR_AMD64_SNP_SECURE_AVIC    BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT)
+#define MSR_AMD64_SNP_RESV_BIT        19
#define MSR_AMD64_SNP_RESERVED_MASK    GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT)
#define MSR_AMD64_RMP_BASE        0xc0010132
#define MSR_AMD64_RMP_END        0xc0010133
diff --git a/arch/x86/kernel/apic/Makefile b/arch/x86/kernel/apic/Makefile
...
+static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+{
+    return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC);
+}
+
+static void x2apic_savic_send_ipi(int cpu, int vector)
+{
+    u32 dest = per_cpu(x86_cpu_to_apicid, cpu);
+
+    /* x2apic MSRs are special and need a special fence: */
+    weak_wrmsr_fence();
+    __x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL);
+}
+
+static void __send_ipi_mask(const struct cpumask *mask, int vector, bool excl_self)
+{
+    unsigned long query_cpu;
+    unsigned long this_cpu;
+    unsigned long flags;
+
...
+
+    local_irq_save(flags);
+
+    this_cpu = smp_processor_id();
+    for_each_cpu(query_cpu, mask) {
+        if (excl_self && this_cpu == query_cpu)
+            continue;
+        __x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu),
+                 vector, APIC_DEST_PHYSICAL);
+    }
+    local_irq_restore(flags);
+}
+
+static void x2apic_savic_send_ipi_mask(const struct cpumask *mask, int vector)
+{
+    __send_ipi_mask(mask, vector, false);
+}
+
+static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, int vector)
+{
+    __send_ipi_mask(mask, vector, true);
+}
+
+static int x2apic_savic_probe(void)
+{
+    if (!cc_platform_has(CC_ATTR_SNP_SECURE_AVIC))
...
+    if (!x2apic_mode) {
+        pr_err("Secure AVIC enabled in non x2APIC mode\n");
+        snp_abort();
+    }
+
+    return 1;
+}
+
+static struct apic apic_x2apic_savic __ro_after_init = {
+
...
+    .x2apic_set_max_apicid        = true,
+    .get_apic_id            = x2apic_get_apic_id,
+
+    .calc_dest_apicid        = apic_default_calc_apicid,
+
+    .send_IPI            = x2apic_savic_send_ipi,
+    .send_IPI_mask            = x2apic_savic_send_ipi_mask,
+    .send_IPI_mask_allbutself    = x2apic_savic_send_ipi_mask_allbutself,
+    .send_IPI_allbutself        = x2apic_send_IPI_allbutself,
+    .send_IPI_all            = x2apic_send_IPI_all,
+    .send_IPI_self            = x2apic_send_IPI_self,
+    .nmi_to_offline_cpu        = true,
+
...

With Secure AVIC, the APIC backing page is owned and managed by the
guest. Allocate and initialize the APIC backing page for all guest CPUs.

The NPT entry for a vCPU's APIC backing page must always be present
when the vCPU is running in order for Secure AVIC to function. A
VMEXIT_BUSY is returned on VMRUN and the vCPU cannot be resumed if
the NPT entry for the APIC backing page is not present. Notify GPA of
the vCPU's APIC backing page to the hypervisor by using the
SVM_VMGEXIT_SECURE_AVIC GHCB protocol event. Before executing VMRUN,
...

Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:

- Fix broken AP bringup due to GFP_KERNEL allocation in setup()
  callback.
- Define apic_page struct and allocate per-CPU APIC backing pages
  for all CPUs in Secure AVIC driver probe.
- Change savic_register_gpa() to only allow local CPU GPA
  registration.
- Misc cleanups.
- Remove savic_setup_done variable.
- Removed initialization of LVT* regs in backing page from Hv values.
  Reads/writes of these regs will be propagated to Hv in subsequent
  patches.
- Move savic_ghcb_msr_read() definition to a later patch where it will
  be first used.

 arch/x86/coco/sev/core.c            | 27 +++++++++++++++++++
 arch/x86/include/asm/apic.h         |  1 +
 arch/x86/include/asm/sev.h          |  2 ++
 arch/x86/include/uapi/asm/svm.h     |  3 +++
 arch/x86/kernel/apic/apic.c         |  2 ++
 arch/x86/kernel/apic/x2apic_savic.c | 42 +++++++++++++++++++++++++++++
 6 files changed, 77 insertions(+)

diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -XXX,XX +XXX,XX @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
    return ret;
}

+enum es_result savic_register_gpa(u64 gpa)
+{
+    struct ghcb_state state;
+    struct es_em_ctxt ctxt;
+    unsigned long flags;
+    enum es_result res;
+    struct ghcb *ghcb;
+
+    local_irq_save(flags);
+
+    ghcb = __sev_get_ghcb(&state);
+
+    vc_ghcb_invalidate(ghcb);
+
+    /* Register GPA for the local CPU */
+    ghcb_set_rax(ghcb, -1ULL);
+    ghcb_set_rbx(ghcb, gpa);
+    res = sev_es_ghcb_hv_call(ghcb, &ctxt, SVM_VMGEXIT_SECURE_AVIC,
+            SVM_VMGEXIT_SECURE_AVIC_REGISTER_GPA, 0);
+
+    __sev_put_ghcb(&state);
+
+    local_irq_restore(flags);
+
+    return res;
+}
+
static void snp_register_per_cpu_ghcb(void)
{
    struct sev_es_runtime_data *data;
...
+++ b/arch/x86/include/asm/sev.h
@@ -XXX,XX +XXX,XX @@ int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_req *req

void __init snp_secure_tsc_prepare(void);
void __init snp_secure_tsc_init(void);
+enum es_result savic_register_gpa(u64 gpa);

#else    /* !CONFIG_AMD_MEM_ENCRYPT */

@@ -XXX,XX +XXX,XX @@ static inline int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_
                     struct snp_guest_request_ioctl *rio) { return -ENODEV; }
static inline void __init snp_secure_tsc_prepare(void) { }
static inline void __init snp_secure_tsc_init(void) { }
+static inline enum es_result savic_register_gpa(u64 gpa) { return ES_UNSUPPORTED; }

#endif    /* CONFIG_AMD_MEM_ENCRYPT */

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index XXXXXXX..XXXXXXX 100644
...
#include <asm/apic.h>
#include <asm/sev.h>

#include "local.h"

+/* APIC_EILVTn(3) is the last defined APIC register. */
+#define NR_APIC_REGS    (APIC_EILVTn(4) >> 2)
+
+struct apic_page {
+    union {
+        u32    regs[NR_APIC_REGS];
+        u8    bytes[PAGE_SIZE];
+    };
+} __aligned(PAGE_SIZE);
+
+static struct apic_page __percpu *apic_page __ro_after_init;
+
static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
{
    return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC);
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, in
    __send_ipi_mask(mask, vector, true);
}

+static void x2apic_savic_setup(void)
+{
+    void *backing_page;
+    enum es_result ret;
+    unsigned long gpa;
+
+    backing_page = this_cpu_ptr(apic_page);
+    gpa = __pa(backing_page);
+
+    /*
+     * The NPT entry for a vCPU's APIC backing page must always be
+     * present when the vCPU is running in order for Secure AVIC to
+     * function. A VMEXIT_BUSY is returned on VMRUN and the vCPU cannot
+     * be resumed if the NPT entry for the APIC backing page is not
+     * present. Notify GPA of the vCPU's APIC backing page to the
+     * hypervisor by calling savic_register_gpa(). Before executing
+     * VMRUN, the hypervisor makes use of this information to make sure
+     * the APIC backing page is mapped in NPT.
+     */
+    ret = savic_register_gpa(gpa);
+    if (ret != ES_OK)
+        snp_abort();
+}
+
static int x2apic_savic_probe(void)
{
    if (!cc_platform_has(CC_ATTR_SNP_SECURE_AVIC))
@@ -XXX,XX +XXX,XX @@ static int x2apic_savic_probe(void)
        snp_abort();
    }

+    apic_page = alloc_percpu(struct apic_page);
+    if (!apic_page)
+        snp_abort();
+
    return 1;
}

@@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = {
    .name                = "secure avic x2apic",
    .probe                = x2apic_savic_probe,
    .acpi_madt_oem_check        = x2apic_savic_acpi_madt_oem_check,
+    .setup                = x2apic_savic_setup,

    .dest_mode_logical        = false,

--
2.34.1

Add read() and write() APIC callback functions to read and write x2APIC
registers directly from the guest APIC backing page of a vCPU.

The x2APIC registers are mapped at an offset within the guest APIC
backing page which is the same as their x2APIC MMIO offset. Secure AVIC
adds new registers such as ALLOWED_IRRs (which are at a 4-byte offset
within the IRR register offset range) and NMI_REQ to the APIC register
space.

When Secure AVIC is enabled, the guest's rdmsr/wrmsr of APIC registers
result in a VC exception (for non-accelerated register accesses) with
error code VMEXIT_AVIC_NOACCEL. The VC exception handler can read/write
the x2APIC register in the guest APIC backing page to complete the
rdmsr/wrmsr. Since doing this would increase the latency of accessing
x2APIC registers, instead of doing rdmsr/wrmsr based reg accesses
and handling reads/writes in the VC exception, directly read/write APIC
registers from/to the guest APIC backing page of the vCPU in the read()
and write() callbacks of the Secure AVIC APIC driver.
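
For instance, with this layout the TPR at x2APIC MMIO offset 0x80 (MSR
0x808) lives at regs[0x80 >> 2] of the per-CPU apic_page, so a raw read
through the callbacks below reduces to a sketch like (get_reg() is the
helper added in this patch):

    u32 tpr = get_reg(APIC_TASKPRI);    /* backing page read, no MSR access, no #VC */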

Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:

- Use this_cpu_ptr() instead of type casting in get_reg() and
  set_reg().

 arch/x86/include/asm/apicdef.h      |   2 +
 arch/x86/kernel/apic/x2apic_savic.c | 116 +++++++++++++++++++++++++++-
 2 files changed, 116 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/apicdef.h b/arch/x86/include/asm/apicdef.h
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/include/asm/apicdef.h
+++ b/arch/x86/include/asm/apicdef.h
...
#include <asm/sev.h>
@@ -XXX,XX +XXX,XX @@ static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
    return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC);
}

+static __always_inline u32 get_reg(unsigned int offset)
+{
+    return READ_ONCE(this_cpu_ptr(apic_page)->regs[offset >> 2]);
+}
+
+static __always_inline void set_reg(unsigned int offset, u32 val)
+{
+    WRITE_ONCE(this_cpu_ptr(apic_page)->regs[offset >> 2], val);
+}
+
+#define SAVIC_ALLOWED_IRR    0x204
+
+static u32 x2apic_savic_read(u32 reg)
+{
+    /*
+     * When Secure AVIC is enabled, rdmsr/wrmsr of APIC registers
+     * result in VC exception (for non-accelerated register accesses)
+     * with VMEXIT_AVIC_NOACCEL error code. The VC exception handler
+     * can read/write the x2APIC register in the guest APIC backing page.
+     * Since doing this would increase the latency of accessing x2APIC
+     * registers, instead of doing rdmsr/wrmsr based accesses and
+     * handling apic register reads/writes in VC exception, the read()
+     * and write() callbacks directly read/write APIC register from/to
+     * the vCPU APIC backing page.
+     */
+    switch (reg) {
+    case APIC_LVTT:
+    case APIC_TMICT:
+    case APIC_TMCCT:
...
+    case APIC_EFEAT:
+    case APIC_ECTRL:
+    case APIC_SEOI:
+    case APIC_IER:
+    case APIC_EILVTn(0) ... APIC_EILVTn(3):
+        return get_reg(reg);
+    case APIC_ISR ... APIC_ISR + 0x70:
+    case APIC_TMR ... APIC_TMR + 0x70:
+        if (WARN_ONCE(!IS_ALIGNED(reg, 16),
+             "APIC reg read offset 0x%x not aligned at 16 bytes", reg))
+            return 0;
+        return get_reg(reg);
+    /* IRR and ALLOWED_IRR offset range */
+    case APIC_IRR ... APIC_IRR + 0x74:
+        /*
+         * Either aligned at 16 bytes for valid IRR reg offset or a
+         * valid Secure AVIC ALLOWED_IRR offset.
+         */
+        if (WARN_ONCE(!(IS_ALIGNED(reg, 16) ||
+                IS_ALIGNED(reg - SAVIC_ALLOWED_IRR, 16)),
+             "Misaligned IRR/ALLOWED_IRR APIC reg read offset 0x%x", reg))
+            return 0;
+        return get_reg(reg);
+    default:
+        pr_err("Permission denied: read of Secure AVIC reg offset 0x%x\n", reg);
+        return 0;
+    }
+}
+
+#define SAVIC_NMI_REQ        0x278
+
+static void x2apic_savic_write(u32 reg, u32 data)
+{
+    switch (reg) {
+    case APIC_LVTT:
+    case APIC_LVT0:
+    case APIC_LVT1:
+    case APIC_TMICT:
+    case APIC_TDCR:
+    case APIC_SELF_IPI:
+    case APIC_TASKPRI:
+    case APIC_EOI:
+    case APIC_SPIV:
+    case SAVIC_NMI_REQ:
+    case APIC_ESR:
+    case APIC_ICR:
+    case APIC_LVTTHMR:
+    case APIC_LVTPC:
+    case APIC_LVTERR:
+    case APIC_ECTRL:
+    case APIC_SEOI:
+    case APIC_IER:
+    case APIC_EILVTn(0) ... APIC_EILVTn(3):
+        set_reg(reg, data);
+        break;
+    /* ALLOWED_IRR offsets are writable */
+    case SAVIC_ALLOWED_IRR ... SAVIC_ALLOWED_IRR + 0x70:
+        if (IS_ALIGNED(reg - SAVIC_ALLOWED_IRR, 16)) {
+            set_reg(reg, data);
+            break;
+        }
+        fallthrough;
+    default:
+        pr_err("Permission denied: write to Secure AVIC reg offset 0x%x\n", reg);
+    }
+}
+
static void x2apic_savic_send_ipi(int cpu, int vector)
{
    u32 dest = per_cpu(x86_cpu_to_apicid, cpu);
@@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = {
    .send_IPI_self            = x2apic_send_IPI_self,
    .nmi_to_offline_cpu        = true,
...

Initialize the APIC ID in the Secure AVIC APIC backing page with
the APIC_ID msr value read from the Hypervisor. CPU topology evaluation
later during boot would catch and report any duplicate APIC ID for
two CPUs.

Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:

- Drop duplicate APIC ID checks.

 arch/x86/kernel/apic/x2apic_savic.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, in
    __send_ipi_mask(mask, vector, true);
}

+static void init_apic_page(void)
+{
+    u32 apic_id;
+
+    /*
+     * Before Secure AVIC is enabled, APIC msr reads are intercepted.
+     * APIC_ID msr read returns the value from the Hypervisor.
+     */
+    apic_id = native_apic_msr_read(APIC_ID);
+    set_reg(APIC_ID, apic_id);
+}
+
static void x2apic_savic_setup(void)
{
    void *backing_page;
    enum es_result ret;
    unsigned long gpa;

+    init_apic_page();
    backing_page = this_cpu_ptr(apic_page);
    gpa = __pa(backing_page);

--
2.34.1

Add update_vector callback to set/clear ALLOWED_IRR field in
a vCPU's APIC backing page for external vectors. The ALLOWED_IRR
field indicates the interrupt vectors which the guest allows the
hypervisor to send (typically for emulated devices). Interrupt
vectors used exclusively by the guest itself and the vectors which
are not emulated by the hypervisor, such as IPI vectors, are part
of system vectors and are not set in the ALLOWED_IRR.

Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:

- Associate update_vector() invocation with vector allocation/free
  calls.
- Cleanup and simplify vector bitmap calculation for ALLOWED_IRR.

 arch/x86/include/asm/apic.h         |  2 +
 arch/x86/kernel/apic/vector.c       | 59 +++++++++++++++++++++++------
 arch/x86/kernel/apic/x2apic_savic.c | 20 ++++++++++
 3 files changed, 69 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
...

diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -XXX,XX +XXX,XX @@ static void apic_update_irq_cfg(struct irq_data *irqd, unsigned int vector,
             apicd->hw_irq_cfg.dest_apicid);
}

-static void apic_update_vector(struct irq_data *irqd, unsigned int newvec,
-             unsigned int newcpu)
+static inline void apic_update_vector(unsigned int cpu, unsigned int vector, bool set)
+{
+    if (apic->update_vector)
+        apic->update_vector(cpu, vector, set);
+}
+
+static int irq_alloc_vector(const struct cpumask *dest, bool resvd, unsigned int *cpu)
+{
+    int vector;
+
+    vector = irq_matrix_alloc(vector_matrix, dest, resvd, cpu);
+
+    if (vector >= 0)
+        apic_update_vector(*cpu, vector, true);
+
+    return vector;
+}
+
+static int irq_alloc_managed_vector(unsigned int *cpu)
+{
+    int vector;
+
+    vector = irq_matrix_alloc_managed(vector_matrix, vector_searchmask, cpu);
+
+    if (vector >= 0)
+        apic_update_vector(*cpu, vector, true);
+
+    return vector;
+}
+
+static void irq_free_vector(unsigned int cpu, unsigned int vector, bool managed)
+{
+    apic_update_vector(cpu, vector, false);
+    irq_matrix_free(vector_matrix, cpu, vector, managed);
+}
+
+static void apic_chipd_update_vector(struct irq_data *irqd, unsigned int newvec,
+                 unsigned int newcpu)
{
    struct apic_chip_data *apicd = apic_chip_data(irqd);
    struct irq_desc *desc = irq_data_to_desc(irqd);
@@ -XXX,XX +XXX,XX @@ static void apic_update_vector(struct irq_data *irqd, unsigned int newvec,
        apicd->prev_cpu = apicd->cpu;
        WARN_ON_ONCE(apicd->cpu == newcpu);
    } else {
-        irq_matrix_free(vector_matrix, apicd->cpu, apicd->vector,
-                managed);
+        irq_free_vector(apicd->cpu, apicd->vector, managed);
    }

setnew:
@@ -XXX,XX +XXX,XX @@ assign_vector_locked(struct irq_data *irqd, const struct cpumask *dest)
    if (apicd->move_in_progress || !hlist_unhashed(&apicd->clist))
        return -EBUSY;

-    vector = irq_matrix_alloc(vector_matrix, dest, resvd, &cpu);
+    vector = irq_alloc_vector(dest, resvd, &cpu);
    trace_vector_alloc(irqd->irq, vector, resvd, vector);
    if (vector < 0)
        return vector;
-    apic_update_vector(irqd, vector, cpu);
+    apic_chipd_update_vector(irqd, vector, cpu);
    apic_update_irq_cfg(irqd, vector, cpu);

    return 0;
@@ -XXX,XX +XXX,XX @@ assign_managed_vector(struct irq_data *irqd, const struct cpumask *dest)
    /* set_affinity might call here for nothing */
    if (apicd->vector && cpumask_test_cpu(apicd->cpu, vector_searchmask))
        return 0;
-    vector = irq_matrix_alloc_managed(vector_matrix, vector_searchmask,
-                     &cpu);
+    vector = irq_alloc_managed_vector(&cpu);
    trace_vector_alloc_managed(irqd->irq, vector, vector);
    if (vector < 0)
        return vector;
-    apic_update_vector(irqd, vector, cpu);
+    apic_chipd_update_vector(irqd, vector, cpu);
    apic_update_irq_cfg(irqd, vector, cpu);
    return 0;
}

@@ -XXX,XX +XXX,XX @@ static void clear_irq_vector(struct irq_data *irqd)
             apicd->prev_cpu);

    per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_SHUTDOWN;
-    irq_matrix_free(vector_matrix, apicd->cpu, vector, managed);
+    irq_free_vector(apicd->cpu, vector, managed);
    apicd->vector = 0;

    /* Clean up move in progress */
@@ -XXX,XX +XXX,XX @@ static void clear_irq_vector(struct irq_data *irqd)
        return;

    per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_SHUTDOWN;
-    irq_matrix_free(vector_matrix, apicd->prev_cpu, vector, managed);
+    irq_free_vector(apicd->prev_cpu, vector, managed);
    apicd->prev_vector = 0;
    apicd->move_in_progress = 0;
    hlist_del_init(&apicd->clist);
@@ -XXX,XX +XXX,XX @@ static bool vector_configure_legacy(unsigned int virq, struct irq_data *irqd,
    if (irqd_is_activated(irqd)) {
        trace_vector_setup(virq, true, 0);
        apic_update_irq_cfg(irqd, apicd->vector, apicd->cpu);
+        apic_update_vector(apicd->cpu, apicd->vector, true);
    } else {
        /* Release the vector */
        apicd->can_reserve = true;
        irqd_set_can_reserve(irqd);
@@ -XXX,XX +XXX,XX @@ static void free_moved_vector(struct apic_chip_data *apicd)
     * affinity mask comes online.
     */
    trace_vector_free_moved(apicd->irq, cpu, vector, managed);
-    irq_matrix_free(vector_matrix, cpu, vector, managed);
+    irq_free_vector(cpu, vector, managed);
    per_cpu(vector_irq, cpu)[vector] = VECTOR_UNUSED;
    hlist_del_init(&apicd->clist);
    apicd->prev_vector = 0;
diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, in
    __send_ipi_mask(mask, vector, true);
}

+static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set)
171
+static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set)
93
+{
172
+{
94
+    void *backing_page;
173
+    struct apic_page *ap = per_cpu_ptr(apic_page, cpu);
95
+    unsigned long *reg;
174
+    unsigned long *sirr = (unsigned long *) &ap->bytes[SAVIC_ALLOWED_IRR];
96
+    int reg_off;
175
+    unsigned int bit;
97
+
176
+
98
+    backing_page = per_cpu(apic_backing_page, cpu);
177
+    /*
99
+    reg_off = SAVIC_ALLOWED_IRR_OFFSET + REG_POS(vector);
178
+     * The registers are 32-bit wide and 16-byte aligned.
100
+    reg = (unsigned long *)((char *)backing_page + reg_off);
179
+     * Compensate for the resulting bit number spacing.
180
+     */
181
+    bit = vector + 96 * (vector / 32);
101
+
182
+
102
+    if (set)
183
+    if (set)
103
+        test_and_set_bit(VEC_POS(vector), reg);
184
+        set_bit(bit, sirr);
104
+    else
185
+    else
105
+        test_and_clear_bit(VEC_POS(vector), reg);
186
+        clear_bit(bit, sirr);
106
+}
187
+}
107
+
188
+
108
static void init_backing_page(void *backing_page)
189
static void init_apic_page(void)
109
{
190
{
110
    struct apic_id_node *next_node, *this_cpu_node;
191
    u32 apic_id;
111
@@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = {
192
@@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = {
112
    .eoi                = native_apic_msr_eoi,
193
    .eoi                = native_apic_msr_eoi,
113
    .icr_read            = native_x2apic_icr_read,
194
    .icr_read            = native_x2apic_icr_read,
114
    .icr_write            = native_x2apic_icr_write,
195
    .icr_write            = native_x2apic_icr_write,
115
+
196
+
116
+    .update_vector            = x2apic_savic_update_vector,
197
+    .update_vector            = x2apic_savic_update_vector,
117
};
198
};
118
199
119
apic_driver(apic_x2apic_savic);
200
apic_driver(apic_x2apic_savic);
120
--
201
--
121
2.34.1
202
2.34.1
With Secure AVIC, only Self-IPI is accelerated. To handle all the
other IPIs, add new callbacks for sending IPI, which write to the
IRR of the target guest vCPU's APIC backing page and then issue a
GHCB protocol MSR write event for the hypervisor to notify the
target vCPU.
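
A condensed sketch of the resulting flow for a single fixed-vector
IPI (savic_send_one_ipi() is illustrative only and not part of this
patch; update_vector() and savic_ghcb_msr_write() are the helpers it
introduces):

    /* Deliver 'vector' to vCPU 'cpu' under Secure AVIC. */
    static void savic_send_one_ipi(unsigned int cpu, unsigned int vector)
    {
        u32 dest = per_cpu(x86_cpu_to_apicid, cpu);
        u32 icr_low = __prepare_ICR(0, vector, APIC_DEST_PHYSICAL);

        /*
         * Step 1: mark the vector pending in the target vCPU's APIC
         * backing page IRR. The 32-bit IRR registers sit on 16-byte
         * boundaries, so vector V maps to bit V + 96 * (V / 32):
         * e.g. vector 33 -> bit 129, which is bit 1 of the register
         * at offset APIC_IRR + 0x10.
         */
        update_vector(cpu, APIC_IRR, vector, true);

        /*
         * Step 2: issue a GHCB protocol MSR write of APIC_ICR so that
         * the hypervisor notifies the target vCPU of the new IRR.
         */
        savic_ghcb_msr_write(APIC_ICR, ((u64)dest << 32) | icr_low);
    }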

Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Co-developed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- Simplify vector updates in bitmap.
- Cleanup icr_data parcelling and unparcelling.
- Misc cleanups.
- Fix warning reported by kernel test robot.

arch/x86/coco/sev/core.c | 40 ++++++-
arch/x86/include/asm/sev.h | 2 +
arch/x86/kernel/apic/x2apic_savic.c | 164 ++++++++++++++++++++++------
3 files changed, 167 insertions(+), 39 deletions(-)

diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
...
+
+    __sev_put_ghcb(&state);
+    local_irq_restore(flags);
+}
+
enum es_result savic_register_gpa(u64 gpa)
{
    struct ghcb_state state;
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -XXX,XX +XXX,XX @@ int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_req *req
void __init snp_secure_tsc_prepare(void);
void __init snp_secure_tsc_init(void);
enum es_result savic_register_gpa(u64 gpa);
+void savic_ghcb_msr_write(u32 reg, u64 value);

#else    /* !CONFIG_AMD_MEM_ENCRYPT */

@@ -XXX,XX +XXX,XX @@ static inline int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_
static inline void __init snp_secure_tsc_prepare(void) { }
static inline void __init snp_secure_tsc_init(void) { }
static inline enum es_result savic_register_gpa(u64 gpa) { return ES_UNSUPPORTED; }
+static inline void savic_ghcb_msr_write(u32 reg, u64 value) { }

#endif    /* CONFIG_AMD_MEM_ENCRYPT */

diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -XXX,XX +XXX,XX @@ static __always_inline void set_reg(unsigned int offset, u32 val)

#define SAVIC_ALLOWED_IRR    0x204

+static inline void update_vector(unsigned int cpu, unsigned int offset,
+                 unsigned int vector, bool set)
+{
+    struct apic_page *ap = per_cpu_ptr(apic_page, cpu);
+    unsigned long *reg = (unsigned long *) &ap->bytes[offset];
+    unsigned int bit;
+
+    /*
+     * The registers are 32-bit wide and 16-byte aligned.
+     * Compensate for the resulting bit number spacing.
+     */
+    bit = vector + 96 * (vector / 32);
+
+    if (set)
+        set_bit(bit, reg);
+    else
+        clear_bit(bit, reg);
+}
+
static u32 x2apic_savic_read(u32 reg)
{
    /*
@@ -XXX,XX +XXX,XX @@ static u32 x2apic_savic_read(u32 reg)

#define SAVIC_NMI_REQ        0x278

+static inline void self_ipi_reg_write(unsigned int vector)
+{
+    /*
+     * Secure AVIC hardware accelerates guest's MSR write to SELF_IPI
+     * register. It updates the IRR in the APIC backing page, evaluates
+     * the new IRR for interrupt injection and continues with guest
+     * code execution.
+     */
+    native_apic_msr_write(APIC_SELF_IPI, vector);
+}
+
static void x2apic_savic_write(u32 reg, u32 data)
{
    switch (reg) {
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data)
    case APIC_LVT1:
    case APIC_TMICT:
    case APIC_TDCR:
-    case APIC_SELF_IPI:
    case APIC_TASKPRI:
    case APIC_EOI:
    case APIC_SPIV:
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data)
    case APIC_EILVTn(0) ... APIC_EILVTn(3):
        set_reg(reg, data);
        break;
+    case APIC_SELF_IPI:
+        self_ipi_reg_write(data);
+        break;
    /* ALLOWED_IRR offsets are writable */
    case SAVIC_ALLOWED_IRR ... SAVIC_ALLOWED_IRR + 0x70:
        if (IS_ALIGNED(reg - SAVIC_ALLOWED_IRR, 16)) {
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data)
    }
}

+static inline void send_ipi_dest(unsigned int cpu, unsigned int vector)
+{
+    update_vector(cpu, APIC_IRR, vector, true);
+}
+
+static void send_ipi_allbut(unsigned int vector)
+{
+    unsigned int cpu, src_cpu;
+    unsigned long flags;
+
+    local_irq_save(flags);
+
+    src_cpu = raw_smp_processor_id();
+
+    for_each_cpu(cpu, cpu_online_mask) {
+        if (cpu == src_cpu)
+            continue;
+        send_ipi_dest(cpu, vector);
+    }
+
+    local_irq_restore(flags);
+}
+
+static inline void self_ipi(unsigned int vector)
+{
+    u32 icr_low = APIC_SELF_IPI | vector;
+
+    native_x2apic_icr_write(icr_low, 0);
+}
+
+static void x2apic_savic_icr_write(u32 icr_low, u32 icr_high)
+{
+    unsigned int dsh, vector;
+    u64 icr_data;
+
+    dsh = icr_low & APIC_DEST_ALLBUT;
+    vector = icr_low & APIC_VECTOR_MASK;
+
+    switch (dsh) {
+    case APIC_DEST_SELF:
+        self_ipi(vector);
+        break;
+    case APIC_DEST_ALLINC:
+        self_ipi(vector);
+        fallthrough;
+    case APIC_DEST_ALLBUT:
+        send_ipi_allbut(vector);
+        break;
+    default:
+        send_ipi_dest(icr_high, vector);
+        break;
+    }
+
+    icr_data = ((u64)icr_high) << 32 | icr_low;
+    if (dsh != APIC_DEST_SELF)
+        savic_ghcb_msr_write(APIC_ICR, icr_data);
+}
+
+static void send_ipi(u32 dest, unsigned int vector, unsigned int dsh)
+{
+    unsigned int icr_low;
+
+    icr_low = __prepare_ICR(dsh, vector, APIC_DEST_PHYSICAL);
+    x2apic_savic_icr_write(icr_low, dest);
+}
+
static void x2apic_savic_send_ipi(int cpu, int vector)
{
    u32 dest = per_cpu(x86_cpu_to_apicid, cpu);

-    /* x2apic MSRs are special and need a special fence: */
-    weak_wrmsr_fence();
-    __x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL);
+    send_ipi(dest, vector, 0);
}

-static void __send_ipi_mask(const struct cpumask *mask, int vector, bool excl_self)
+static void send_ipi_mask(const struct cpumask *mask, unsigned int vector, bool excl_self)
{
-    unsigned long query_cpu;
-    unsigned long this_cpu;
+    unsigned int this_cpu;
+    unsigned int cpu;
    unsigned long flags;

-    /* x2apic MSRs are special and need a special fence: */
-    weak_wrmsr_fence();
-
    local_irq_save(flags);

-    this_cpu = smp_processor_id();
-    for_each_cpu(query_cpu, mask) {
-        if (excl_self && this_cpu == query_cpu)
+    this_cpu = raw_smp_processor_id();
+
+    for_each_cpu(cpu, mask) {
+        if (excl_self && cpu == this_cpu)
            continue;
-        __x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu),
-                 vector, APIC_DEST_PHYSICAL);
+        send_ipi(per_cpu(x86_cpu_to_apicid, cpu), vector, 0);
    }
+
    local_irq_restore(flags);
}

static void x2apic_savic_send_ipi_mask(const struct cpumask *mask, int vector)
{
-    __send_ipi_mask(mask, vector, false);
+    send_ipi_mask(mask, vector, false);
}

static void x2apic_savic_send_ipi_mask_allbutself(const struct cpumask *mask, int vector)
{
-    __send_ipi_mask(mask, vector, true);
+    send_ipi_mask(mask, vector, true);
}

-static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set)
+static void x2apic_savic_send_ipi_allbutself(int vector)
{
-    struct apic_page *ap = per_cpu_ptr(apic_page, cpu);
-    unsigned long *sirr = (unsigned long *) &ap->bytes[SAVIC_ALLOWED_IRR];
-    unsigned int bit;
+    send_ipi(0, vector, APIC_DEST_ALLBUT);
+}

-    /*
-     * The registers are 32-bit wide and 16-byte aligned.
-     * Compensate for the resulting bit number spacing.
-     */
-    bit = vector + 96 * (vector / 32);
+static void x2apic_savic_send_ipi_all(int vector)
+{
+    send_ipi(0, vector, APIC_DEST_ALLINC);
+}

-    if (set)
-        set_bit(bit, sirr);
-    else
-        clear_bit(bit, sirr);
+static void x2apic_savic_send_ipi_self(int vector)
+{
+    self_ipi_reg_write(vector);
+}
+
+static void x2apic_savic_update_vector(unsigned int cpu, unsigned int vector, bool set)
+{
+    update_vector(cpu, SAVIC_ALLOWED_IRR, vector, set);
}

static void init_apic_page(void)
@@ -XXX,XX +XXX,XX @@ static struct apic apic_x2apic_savic __ro_after_init = {
    .send_IPI            = x2apic_savic_send_ipi,
    .send_IPI_mask            = x2apic_savic_send_ipi_mask,
    .send_IPI_mask_allbutself    = x2apic_savic_send_ipi_mask_allbutself,
-    .send_IPI_allbutself        = x2apic_send_IPI_allbutself,
-    .send_IPI_all            = x2apic_send_IPI_all,
-    .send_IPI_self            = x2apic_send_IPI_self,
+    .send_IPI_allbutself        = x2apic_savic_send_ipi_allbutself,
+    .send_IPI_all            = x2apic_savic_send_ipi_all,
+    .send_IPI_self            = x2apic_savic_send_ipi_self,
    .nmi_to_offline_cpu        = true,

    .read                = x2apic_savic_read,
    .write                = x2apic_savic_write,
    .eoi                = native_apic_msr_eoi,
...
Secure AVIC requires the LAPIC timer to be emulated by the hypervisor.
KVM already supports emulating the LAPIC timer using hrtimers. In order
to emulate the LAPIC timer, APIC_LVTT, APIC_TMICT and APIC_TDCR register
values need to be propagated to the hypervisor for arming the timer.
The APIC_TMCCT register value has to be read from the hypervisor, which
is required for calibrating the APIC timer. So, read/write all APIC
timer registers from/to the hypervisor.
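
In other words, the timer registers bypass the backing page entirely.
A minimal sketch of the routing, assuming the savic_ghcb_msr_read()/
savic_ghcb_msr_write() helpers from earlier in this series (the sketch
function names are illustrative, not the patch's):

    static u32 savic_timer_reg_read(u32 reg)
    {
        /* e.g. APIC_TMCCT: the current count is only known to the hypervisor */
        return savic_ghcb_msr_read(reg);
    }

    static void savic_timer_reg_write(u32 reg, u32 data)
    {
        /* e.g. APIC_LVTT/APIC_TMICT/APIC_TDCR: arm the emulated timer */
        savic_ghcb_msr_write(reg, data);
    }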

In addition, add a static call for the apic's update_vector() callback
to configure ALLOWED_IRR, so that the hypervisor can inject the timer
interrupt using LOCAL_TIMER_VECTOR.
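
A sketch of the static-call plumbing (mirroring the apic.h hunk below;
the wrapper compiles to a no-op until a driver installs an
update_vector() callback):

    static __always_inline void apic_update_vector(unsigned int cpu,
                               unsigned int vector, bool set)
    {
        static_call(apic_call_update_vector)(cpu, vector, set);
    }

    /* setup_APIC_timer() can then unconditionally do: */
    apic_update_vector(smp_processor_id(), LOCAL_TIMER_VECTOR, true);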

Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Co-developed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:

- Add static call for apic_update_vector()

arch/x86/coco/sev/core.c | 27 +++++++++++++++++++++++++++
arch/x86/include/asm/apic.h | 8 ++++++++
arch/x86/include/asm/sev.h | 2 ++
arch/x86/kernel/apic/apic.c | 2 ++
arch/x86/kernel/apic/init.c | 3 +++
arch/x86/kernel/apic/vector.c | 6 ------
arch/x86/kernel/apic/x2apic_savic.c | 7 +++++--
7 files changed, 47 insertions(+), 8 deletions(-)

diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
...
+}
+
void savic_ghcb_msr_write(u32 reg, u64 value)
{
    u64 msr = APIC_BASE_MSR + (reg >> 4);
diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -XXX,XX +XXX,XX @@ struct apic_override {
    void    (*icr_write)(u32 low, u32 high);
    int    (*wakeup_secondary_cpu)(u32 apicid, unsigned long start_eip);
    int    (*wakeup_secondary_cpu_64)(u32 apicid, unsigned long start_eip);
+    void    (*update_vector)(unsigned int cpu, unsigned int vector, bool set);
};

/*
@@ -XXX,XX +XXX,XX @@ DECLARE_APIC_CALL(wait_icr_idle);
DECLARE_APIC_CALL(wakeup_secondary_cpu);
DECLARE_APIC_CALL(wakeup_secondary_cpu_64);
DECLARE_APIC_CALL(write);
+DECLARE_APIC_CALL(update_vector);

static __always_inline u32 apic_read(u32 reg)
{
@@ -XXX,XX +XXX,XX @@ static __always_inline bool apic_id_valid(u32 apic_id)
    return apic_id <= apic->max_apic_id;
}

+static __always_inline void apic_update_vector(unsigned int cpu, unsigned int vector, bool set)
+{
+    static_call(apic_call_update_vector)(cpu, vector, set);
+}
+
#else /* CONFIG_X86_LOCAL_APIC */

static inline u32 apic_read(u32 reg) { return 0; }
@@ -XXX,XX +XXX,XX @@ static inline void apic_wait_icr_idle(void) { }
static inline u32 safe_apic_wait_icr_idle(void) { return 0; }
static inline void apic_native_eoi(void) { WARN_ON_ONCE(1); }
static inline void apic_setup_apic_calls(void) { }
+static inline void apic_update_vector(unsigned int cpu, unsigned int vector, bool set) { }

#define apic_update_callback(_callback, _fn) do { } while (0)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -XXX,XX +XXX,XX @@ int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_req *req
void __init snp_secure_tsc_prepare(void);
void __init snp_secure_tsc_init(void);
enum es_result savic_register_gpa(u64 gpa);
+u64 savic_ghcb_msr_read(u32 reg);
void savic_ghcb_msr_write(u32 reg, u64 value);

#else    /* !CONFIG_AMD_MEM_ENCRYPT */

@@ -XXX,XX +XXX,XX @@ static inline void __init snp_secure_tsc_prepare(void) { }
static inline void __init snp_secure_tsc_init(void) { }
static inline enum es_result savic_register_gpa(u64 gpa) { return ES_UNSUPPORTED; }
static inline void savic_ghcb_msr_write(u32 reg, u64 value) { }
+static inline u64 savic_ghcb_msr_read(u32 reg) { return 0; }

#endif    /* CONFIG_AMD_MEM_ENCRYPT */

diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -XXX,XX +XXX,XX @@ static void setup_APIC_timer(void)
                        0xF, ~0UL);
    } else
        clockevents_register_device(levt);
+
+    apic_update_vector(smp_processor_id(), LOCAL_TIMER_VECTOR, true);
}

/*
diff --git a/arch/x86/kernel/apic/init.c b/arch/x86/kernel/apic/init.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/init.c
+++ b/arch/x86/kernel/apic/init.c
@@ -XXX,XX +XXX,XX @@ DEFINE_APIC_CALL(wait_icr_idle);
DEFINE_APIC_CALL(wakeup_secondary_cpu);
DEFINE_APIC_CALL(wakeup_secondary_cpu_64);
DEFINE_APIC_CALL(write);
+DEFINE_APIC_CALL(update_vector);

EXPORT_STATIC_CALL_TRAMP_GPL(apic_call_send_IPI_mask);
EXPORT_STATIC_CALL_TRAMP_GPL(apic_call_send_IPI_self);
@@ -XXX,XX +XXX,XX @@ static __init void restore_override_callbacks(void)
    apply_override(icr_write);
    apply_override(wakeup_secondary_cpu);
    apply_override(wakeup_secondary_cpu_64);
+    apply_override(update_vector);
}

#define update_call(__cb)                    \
@@ -XXX,XX +XXX,XX @@ static __init void update_static_calls(void)
    update_call(wait_icr_idle);
    update_call(wakeup_secondary_cpu);
    update_call(wakeup_secondary_cpu_64);
+    update_call(update_vector);
}

void __init apic_setup_apic_calls(void)
diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -XXX,XX +XXX,XX @@ static void apic_update_irq_cfg(struct irq_data *irqd, unsigned int vector,
             apicd->hw_irq_cfg.dest_apicid);
}

-static inline void apic_update_vector(unsigned int cpu, unsigned int vector, bool set)
-{
-    if (apic->update_vector)
-        apic->update_vector(cpu, vector, set);
-}
-
static int irq_alloc_vector(const struct cpumask *dest, bool resvd, unsigned int *cpu)
{
    int vector;
diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -XXX,XX +XXX,XX @@ static u32 x2apic_savic_read(u32 reg)
...
+        return savic_ghcb_msr_read(reg);
    case APIC_ID:
    case APIC_LVR:
    case APIC_TASKPRI:
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data)
{
    switch (reg) {
    case APIC_LVTT:
-    case APIC_LVT0:
-    case APIC_LVT1:
    case APIC_TMICT:
...
From: Kishon Vijay Abraham I <kvijayab@amd.com>

Secure AVIC requires VGIF to be configured in the VMSA. Configure it
for secondary vCPUs (the configuration for the boot CPU is done by
the hypervisor).

Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- No change

arch/x86/coco/sev/core.c | 3 +++
1 file changed, 3 insertions(+)

...
Secure AVIC has introduced a new field in the APIC backing page,
"NmiReq", that has to be set by the guest to request an NMI IPI
through an APIC_ICR write.

Add support to set NmiReq appropriately to send NMI IPI.

This also requires the Virtual NMI feature to be enabled in the
VINTR_CTRL field of the VMSA. However, this would be added by a later
commit after adding support for injecting NMI from the hypervisor.
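
A minimal sketch of what an NMI request to one vCPU amounts to
(condensed from the send_ipi_dest() change below; the sketch name is
illustrative, and note that no APIC_IRR bit is set for an NMI):

    static void savic_request_nmi(unsigned int cpu)
    {
        struct apic_page *ap = per_cpu_ptr(apic_page, cpu);

        /* NMI Request is a single doorbell bit at offset 0x278 */
        WRITE_ONCE(ap->regs[SAVIC_NMI_REQ >> 2], 1);

        /* The caller's GHCB APIC_ICR write then notifies the hypervisor */
    }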

Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- Updates to use per_cpu_ptr() on apic_page struct.

arch/x86/kernel/apic/x2apic_savic.c | 28 ++++++++++++++++++++--------
1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_write(u32 reg, u32 data)
    }
}

-static inline void send_ipi_dest(unsigned int cpu, unsigned int vector)
+static void send_ipi_dest(unsigned int cpu, unsigned int vector, bool nmi)
{
+    if (nmi) {
+        struct apic_page *ap = per_cpu_ptr(apic_page, cpu);
+
+        WRITE_ONCE(ap->regs[SAVIC_NMI_REQ >> 2], 1);
+        return;
+    }
+
    update_vector(cpu, APIC_IRR, vector, true);
}

-static void send_ipi_allbut(unsigned int vector)
+static void send_ipi_allbut(unsigned int vector, bool nmi)
{
    unsigned int cpu, src_cpu;
    unsigned long flags;
@@ -XXX,XX +XXX,XX @@ static void send_ipi_allbut(unsigned int vector)
    for_each_cpu(cpu, cpu_online_mask) {
        if (cpu == src_cpu)
            continue;
-        send_ipi_dest(cpu, vector);
+        send_ipi_dest(cpu, vector, nmi);
    }

    local_irq_restore(flags);
}

-static inline void self_ipi(unsigned int vector)
+static inline void self_ipi(unsigned int vector, bool nmi)
{
    u32 icr_low = APIC_SELF_IPI | vector;

+    if (nmi)
+        icr_low |= APIC_DM_NMI;
+
    native_x2apic_icr_write(icr_low, 0);
}

@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_icr_write(u32 icr_low, u32 icr_high)
{
    unsigned int dsh, vector;
    u64 icr_data;
+    bool nmi;

    dsh = icr_low & APIC_DEST_ALLBUT;
    vector = icr_low & APIC_VECTOR_MASK;
+    nmi = ((icr_low & APIC_DM_FIXED_MASK) == APIC_DM_NMI);

    switch (dsh) {
    case APIC_DEST_SELF:
-        self_ipi(vector);
+        self_ipi(vector, nmi);
        break;
    case APIC_DEST_ALLINC:
-        self_ipi(vector);
+        self_ipi(vector, nmi);
        fallthrough;
    case APIC_DEST_ALLBUT:
-        send_ipi_allbut(vector);
+        send_ipi_allbut(vector, nmi);
        break;
    default:
-        send_ipi_dest(icr_high, vector);
+        send_ipi_dest(icr_high, vector, nmi);
        break;
    }

--
2.34.1
...
from hypervisor.
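
A sketch of the enablement write this results in (condensed from the
diff below; that the backing page GPA is page aligned, leaving the low
MSR bits free for control flags such as AllowedNmi, is an assumption
from the macro layout):

    static inline void savic_wr_control_msr(u64 val)
    {
        native_wrmsr(MSR_AMD64_SECURE_AVIC_CONTROL,
                 lower_32_bits(val), upper_32_bits(val));
    }

    /* In x2apic_savic_setup(), after registering the GPA: */
    savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI);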

Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- Remove MSR_AMD64_SECURE_AVIC_EN macros from this patch.

arch/x86/include/asm/msr-index.h | 3 +++
arch/x86/kernel/apic/x2apic_savic.c | 6 ++++++
2 files changed, 9 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -XXX,XX +XXX,XX @@
#define MSR_AMD64_SNP_SECURE_AVIC    BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT)
#define MSR_AMD64_SNP_RESV_BIT        19
#define MSR_AMD64_SNP_RESERVED_MASK    GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT)
+#define MSR_AMD64_SECURE_AVIC_CONTROL    0xc0010138
+#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT 1
+#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI BIT_ULL(MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT)
#define MSR_AMD64_RMP_BASE        0xc0010132
#define MSR_AMD64_RMP_END        0xc0010133
#define MSR_AMD64_RMP_CFG        0xc0010136
diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -XXX,XX +XXX,XX @@ struct apic_page {

static struct apic_page __percpu *apic_page __ro_after_init;

+static inline void savic_wr_control_msr(u64 val)
+{
+    native_wrmsr(MSR_AMD64_SECURE_AVIC_CONTROL, lower_32_bits(val), upper_32_bits(val));
+}
+
static int x2apic_savic_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
{
    return x2apic_enabled() && cc_platform_has(CC_ATTR_SNP_SECURE_AVIC);
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_setup(void)
    ret = savic_register_gpa(gpa);
    if (ret != ES_OK)
        snp_abort();
+    savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI);
}

static int x2apic_savic_probe(void)
--
2.34.1
From: Kishon Vijay Abraham I <kvijayab@amd.com>

Now that support to send NMI IPI and support to inject NMI from
the hypervisor have been added, set V_NMI_ENABLE in the VINTR_CTRL
field of the VMSA to enable NMI for Secure AVIC guests.

Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- No change.

arch/x86/coco/sev/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
...
The Hypervisor needs information about the current state of the LVT
registers for device emulation and NMI. So, forward reads and writes
of these registers to the Hypervisor for Secure AVIC guests.
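
A minimal sketch of the resulting write path (illustrative only; the
read path is symmetrical via savic_ghcb_msr_read(), and the full case
list follows from the diff below):

    static void savic_lvt_write(u32 reg, u32 data)
    {
        switch (reg) {
        case APIC_LVTTHMR:
        case APIC_LVTPC:
        case APIC_LVTERR:
            /* LVT state must be visible to the hypervisor */
            savic_ghcb_msr_write(reg, data);
            break;
        default:
            /* other registers keep their existing handling */
            break;
        }
    }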

Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- No change.

arch/x86/kernel/apic/x2apic_savic.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
...
+        savic_ghcb_msr_write(reg, data);
+        break;
    case APIC_TASKPRI:
    case APIC_EOI:
    case APIC_SPIV:
    case SAVIC_NMI_REQ:
    case APIC_ESR:
    case APIC_ICR:
-    case APIC_LVTTHMR:
-    case APIC_LVTPC:
-    case APIC_LVTERR:
    case APIC_ECTRL:
    case APIC_SEOI:
    case APIC_IER:
--
2.34.1
Secure AVIC accelerates the guest's EOI msr writes for edge-triggered
interrupts. For level-triggered interrupts, EOI msr writes trigger a
VC exception with the SVM_EXIT_AVIC_UNACCELERATED_ACCESS error code.
The VC handler would need to trigger a GHCB protocol MSR write event
to notify the Hypervisor about completion of the level-triggered
interrupt. This is required for cases like emulated IOAPIC. VC exception
handling adds extra performance overhead for APIC register writes. In
addition, some unaccelerated APIC register msr writes are trapped,
whereas others are faulted. This results in additional complexity in
VC exception handling for unaccelerated accesses. So, directly do a GHCB
protocol based EOI write from the apic->eoi() callback for level-triggered
interrupts. Use wrmsr for edge-triggered interrupts, so that hardware
re-evaluates any pending interrupt which can be delivered to the guest
vCPU. For level-triggered interrupts, re-evaluation happens on return
from the VMGEXIT corresponding to the GHCB event for the EOI msr write.
16
11
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
17
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
12
---
18
---
13
Changes since v1:
19
Changes since v2:
14
- New change.
20
- Reuse find_highest_vector() from kvm/lapic.c
21
- Misc cleanups.
15
22
16
arch/x86/kernel/apic/x2apic_savic.c | 53 ++++++++++++++++++++++++++++-
23
arch/x86/include/asm/apic-emul.h | 28 +++++++++++++
17
1 file changed, 52 insertions(+), 1 deletion(-)
24
arch/x86/kernel/apic/x2apic_savic.c | 62 +++++++++++++++++++++++++----
25
arch/x86/kvm/lapic.c | 23 ++---------
26
3 files changed, 85 insertions(+), 28 deletions(-)
27
create mode 100644 arch/x86/include/asm/apic-emul.h
18
28
29
diff --git a/arch/x86/include/asm/apic-emul.h b/arch/x86/include/asm/apic-emul.h
30
new file mode 100644
31
index XXXXXXX..XXXXXXX
32
--- /dev/null
33
+++ b/arch/x86/include/asm/apic-emul.h
34
@@ -XXX,XX +XXX,XX @@
35
+/* SPDX-License-Identifier: GPL-2.0-only */
36
+#ifndef _ASM_X86_APIC_EMUL_H
37
+#define _ASM_X86_APIC_EMUL_H
38
+
39
+#define MAX_APIC_VECTOR            256
40
+#define APIC_VECTORS_PER_REG        32
41
+
42
+static inline int apic_find_highest_vector(void *bitmap)
43
+{
44
+    unsigned int regno;
45
+    unsigned int vec;
46
+    u32 *reg;
47
+
48
+    /*
49
+     * The registers int the bitmap are 32-bit wide and 16-byte
50
+     * aligned. State of a vector is stored in a single bit.
51
+     */
52
+    for (regno = MAX_APIC_VECTOR / APIC_VECTORS_PER_REG - 1; regno >= 0; regno--) {
53
+        vec = regno * APIC_VECTORS_PER_REG;
54
+        reg = bitmap + regno * 16;
55
+        if (*reg)
56
+            return __fls(*reg) + vec;
57
+    }
58
+
59
+    return -1;
60
+}
61
+
62
+#endif /* _ASM_X86_APIC_EMUL_H */
19
diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
63
diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
20
index XXXXXXX..XXXXXXX 100644
64
index XXXXXXX..XXXXXXX 100644
21
--- a/arch/x86/kernel/apic/x2apic_savic.c
65
--- a/arch/x86/kernel/apic/x2apic_savic.c
22
+++ b/arch/x86/kernel/apic/x2apic_savic.c
66
+++ b/arch/x86/kernel/apic/x2apic_savic.c
67
@@ -XXX,XX +XXX,XX @@
68
#include <linux/align.h>
69
70
#include <asm/apic.h>
71
+#include <asm/apic-emul.h>
72
#include <asm/sev.h>
73
74
#include "local.h"
75
@@ -XXX,XX +XXX,XX @@ static __always_inline void set_reg(unsigned int offset, u32 val)
76
    WRITE_ONCE(this_cpu_ptr(apic_page)->regs[offset >> 2], val);
77
}
78
79
-#define SAVIC_ALLOWED_IRR    0x204
80
-
81
-static inline void update_vector(unsigned int cpu, unsigned int offset,
82
-                 unsigned int vector, bool set)
83
+static inline unsigned long *get_reg_bitmap(unsigned int cpu, unsigned int offset)
84
{
85
    struct apic_page *ap = per_cpu_ptr(apic_page, cpu);
86
-    unsigned long *reg = (unsigned long *) &ap->bytes[offset];
87
-    unsigned int bit;
88
89
+    return (unsigned long *) &ap->bytes[offset];
90
+}
91
+
92
+static inline unsigned int get_vec_bit(unsigned int vector)
93
+{
94
    /*
95
     * The registers are 32-bit wide and 16-byte aligned.
96
     * Compensate for the resulting bit number spacing.
97
     */
98
-    bit = vector + 96 * (vector / 32);
99
+    return vector + 96 * (vector / 32);
100
+}
101
+
102
+static inline void update_vector(unsigned int cpu, unsigned int offset,
103
+                 unsigned int vector, bool set)
104
+{
105
+    unsigned long *reg = get_reg_bitmap(cpu, offset);
106
+    unsigned int bit = get_vec_bit(vector);
107
108
    if (set)
109
        set_bit(bit, reg);
110
@@ -XXX,XX +XXX,XX @@ static inline void update_vector(unsigned int cpu, unsigned int offset,
111
        clear_bit(bit, reg);
112
}
113
114
+static inline bool test_vector(unsigned int cpu, unsigned int offset, unsigned int vector)
115
+{
116
+    unsigned long *reg = get_reg_bitmap(cpu, offset);
117
+    unsigned int bit = get_vec_bit(vector);
118
+
119
+    return test_bit(bit, reg);
120
+}
121
+
122
+#define SAVIC_ALLOWED_IRR    0x204
123
+
124
static u32 x2apic_savic_read(u32 reg)
125
{
126
    /*
23
@@ -XXX,XX +XXX,XX @@ static int x2apic_savic_probe(void)
127
@@ -XXX,XX +XXX,XX @@ static int x2apic_savic_probe(void)
24
    return 1;
128
    return 1;
25
}
129
}
26
130
27
+static int find_highest_isr(void *backing_page)
131
+static void x2apic_savic_eoi(void)
28
+{
132
+{
29
+    int vec_per_reg = 32;
133
+    unsigned int cpu;
30
+    int max_vec = 256;
31
+    u32 reg;
32
+    int vec;
134
+    int vec;
33
+
135
+
34
+    for (vec = max_vec - 32; vec >= 0; vec -= vec_per_reg) {
136
+    cpu = raw_smp_processor_id();
35
+        reg = get_reg(backing_page, APIC_ISR + REG_POS(vec));
137
+    vec = apic_find_highest_vector(get_reg_bitmap(cpu, APIC_ISR));
36
+        if (reg)
138
+    if (WARN_ONCE(vec == -1, "EOI write while no active interrupt in APIC_ISR"))
37
+            return __fls(reg) + vec;
38
+    }
39
+
40
+    return -1;
41
+}
42
+
43
+static void x2apic_savic_eoi(void)
44
+{
45
+    void *backing_page;
46
+    int reg_off;
47
+    int vec_pos;
48
+    u32 tmr;
49
+    int vec;
50
+
51
+    backing_page = this_cpu_read(apic_backing_page);
52
+
53
+    vec = find_highest_isr(backing_page);
54
+    if (WARN_ONCE(vec == -1, "EOI write without any active interrupt in APIC_ISR"))
55
+        return;
139
+        return;
56
+
140
+
57
+    reg_off = REG_POS(vec);
141
+    if (test_vector(cpu, APIC_TMR, vec)) {
58
+    vec_pos = VEC_POS(vec);
142
+        update_vector(cpu, APIC_ISR, vec, false);
59
+    tmr = get_reg(backing_page, APIC_TMR + reg_off);
60
+    if (tmr & BIT(vec_pos)) {
61
+        clear_bit(vec_pos, backing_page + APIC_ISR + reg_off);
62
+        /*
143
+        /*
63
+         * Propagate the EOI write to hv for level-triggered interrupts.
144
+         * Propagate the EOI write to hv for level-triggered interrupts.
64
+         * Return to guest from GHCB protocol event takes care of
145
+         * Return to guest from GHCB protocol event takes care of
65
+         * re-evaluating interrupt state.
146
+         * re-evaluating interrupt state.
66
+         */
147
+         */
...
...
85
-    .eoi                = native_apic_msr_eoi,
166
-    .eoi                = native_apic_msr_eoi,
86
+    .eoi                = x2apic_savic_eoi,
167
+    .eoi                = x2apic_savic_eoi,
87
    .icr_read            = native_x2apic_icr_read,
168
    .icr_read            = native_x2apic_icr_read,
88
    .icr_write            = x2apic_savic_icr_write,
169
    .icr_write            = x2apic_savic_icr_write,
89
170
171
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
172
index XXXXXXX..XXXXXXX 100644
173
--- a/arch/x86/kvm/lapic.c
174
+++ b/arch/x86/kvm/lapic.c
175
@@ -XXX,XX +XXX,XX @@
176
#include <linux/export.h>
177
#include <linux/math64.h>
178
#include <linux/slab.h>
179
+#include <asm/apic-emul.h>
180
#include <asm/processor.h>
181
#include <asm/mce.h>
182
#include <asm/msr.h>
183
@@ -XXX,XX +XXX,XX @@
184
/* 14 is the version for Xeon and Pentium 8.4.8*/
185
#define APIC_VERSION            0x14UL
186
#define LAPIC_MMIO_LENGTH        (1 << 12)
187
-/* followed define is not in apicdef.h */
188
-#define MAX_APIC_VECTOR            256
189
-#define APIC_VECTORS_PER_REG        32
190
191
/*
192
* Enable local APIC timer advancement (tscdeadline mode only) with adaptive
193
@@ -XXX,XX +XXX,XX @@ static const unsigned int apic_lvt_mask[KVM_APIC_MAX_NR_LVT_ENTRIES] = {
194
    [LVT_CMCI] = LVT_MASK | APIC_MODE_MASK
195
};
196
197
-static int find_highest_vector(void *bitmap)
198
-{
199
-    int vec;
200
-    u32 *reg;
201
-
202
-    for (vec = MAX_APIC_VECTOR - APIC_VECTORS_PER_REG;
203
-     vec >= 0; vec -= APIC_VECTORS_PER_REG) {
204
-        reg = bitmap + REG_POS(vec);
205
-        if (*reg)
206
-            return __fls(*reg) + vec;
207
-    }
208
-
209
-    return -1;
210
-}
211
-
212
static u8 count_vectors(void *bitmap)
213
{
214
    int vec;
215
@@ -XXX,XX +XXX,XX @@ EXPORT_SYMBOL_GPL(kvm_apic_update_irr);
216
217
static inline int apic_search_irr(struct kvm_lapic *apic)
218
{
219
-    return find_highest_vector(apic->regs + APIC_IRR);
220
+    return apic_find_highest_vector(apic->regs + APIC_IRR);
221
}
222
223
static inline int apic_find_highest_irr(struct kvm_lapic *apic)
224
@@ -XXX,XX +XXX,XX @@ static inline int apic_find_highest_isr(struct kvm_lapic *apic)
225
    if (likely(apic->highest_isr_cache != -1))
226
        return apic->highest_isr_cache;
227
228
-    result = find_highest_vector(apic->regs + APIC_ISR);
229
+    result = apic_find_highest_vector(apic->regs + APIC_ISR);
230
    ASSERT(result == -1 || result >= 16);
231
232
    return result;
90
--
233
--
91
2.34.1
234
2.34.1
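The bit arithmetic in get_vec_bit() above deserves a worked example:
each 32-vector APIC register (IRR, ISR, TMR) is 16-byte aligned in the
backing page, so consecutive registers sit 128 bits apart when the page
is treated as one flat bitmap, and 96 of every 128 bits are padding. A
minimal user-space sketch of the mapping (illustrative only, not kernel
code; the two vectors are arbitrary examples):

    #include <stdio.h>

    /* Same arithmetic as the kernel helper: skip 96 pad bits per 32-vector register */
    static unsigned int get_vec_bit(unsigned int vector)
    {
        return vector + 96 * (vector / 32);
    }

    int main(void)
    {
        /* vector 0x31 (49) -> register 1, bit 17 -> flat bit 128 + 17 = 145 */
        printf("vector 0x31 -> bit %u\n", get_vec_bit(0x31));
        /* vector 0x80 (128) -> register 4, bit 0 -> flat bit 4 * 128 = 512 */
        printf("vector 0x80 -> bit %u\n", get_vec_bit(0x80));
        return 0;
    }

With that layout, the shared apic_find_highest_vector() helper can scan
the eight 32-bit registers from the highest one downwards and return
__fls() of the first non-zero register plus its base vector, which is
what the EOI handler relies on to retire the highest-priority
in-service vector first.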
Add an apic->teardown() callback to disable Secure AVIC before
rebooting into the new kernel. This ensures that the new
kernel does not access the old APIC backing page which was
allocated by the previous kernel. Such accesses can happen
if there are any APIC accesses done during guest boot before
Secure AVIC driver probe is done by the new kernel (as Secure
AVIC would have remained enabled in the Secure AVIC control
MSR).

Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- Change savic_unregister_gpa() interface to allow GPA unregistration
  only for the local CPU.

arch/x86/coco/sev/core.c | 25 +++++++++++++++++++++++++
arch/x86/include/asm/apic.h | 1 +
arch/x86/include/asm/sev.h | 2 ++
arch/x86/kernel/apic/apic.c | 3 +++
arch/x86/kernel/apic/x2apic_savic.c | 8 ++++++++
5 files changed, 39 insertions(+)

diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -XXX,XX +XXX,XX @@ enum es_result savic_register_gpa(u64 gpa)
    return res;
}

+enum es_result savic_unregister_gpa(u64 *gpa)
+{
+    struct ghcb_state state;
+    struct es_em_ctxt ctxt;
+    unsigned long flags;
+    struct ghcb *ghcb;
...
+
+    ghcb = __sev_get_ghcb(&state);
+
+    vc_ghcb_invalidate(ghcb);
+
+    ghcb_set_rax(ghcb, -1ULL);
+    ret = sev_es_ghcb_hv_call(ghcb, &ctxt, SVM_VMGEXIT_SECURE_AVIC,
+            SVM_VMGEXIT_SECURE_AVIC_UNREGISTER_GPA, 0);
+    if (gpa && ret == ES_OK)
+        *gpa = ghcb->save.rbx;
+    __sev_put_ghcb(&state);
...
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -XXX,XX +XXX,XX @@ int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_req *req
void __init snp_secure_tsc_prepare(void);
void __init snp_secure_tsc_init(void);
enum es_result savic_register_gpa(u64 gpa);
+enum es_result savic_unregister_gpa(u64 *gpa);
u64 savic_ghcb_msr_read(u32 reg);
void savic_ghcb_msr_write(u32 reg, u64 value);

@@ -XXX,XX +XXX,XX @@ static inline int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_
static inline void __init snp_secure_tsc_prepare(void) { }
static inline void __init snp_secure_tsc_init(void) { }
static inline enum es_result savic_register_gpa(u64 gpa) { return ES_UNSUPPORTED; }
+static inline enum es_result savic_unregister_gpa(u64 *gpa) { return ES_UNSUPPORTED; }
static inline void savic_ghcb_msr_write(u32 reg, u64 value) { }
static inline u64 savic_ghcb_msr_read(u32 reg) { return 0; }

diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
...
#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -XXX,XX +XXX,XX @@ static void init_apic_page(void)
    set_reg(APIC_ID, apic_id);
}

+static void x2apic_savic_teardown(void)
+{
+    /* Disable Secure AVIC */
+    native_wrmsr(MSR_AMD64_SECURE_AVIC_CONTROL, 0, 0);
+    savic_unregister_gpa(NULL);
+}
+
static void x2apic_savic_setup(void)
{
    void *backing_page;
...
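The teardown path mirrors setup in reverse order: clear the control MSR
first so that neither the AVIC hardware nor the #VC path touches the
backing page any more, then ask the hypervisor to unregister the page's
GPA for this vCPU. A hedged sketch of a caller using the returned GPA
as a sanity check (the prev_gpa bookkeeping and the helper name are
illustrative, not part of the patch):

    /* Illustrative only; assumes the interfaces added above */
    static void savic_teardown_this_cpu(void)
    {
        u64 prev_gpa = 0;

        /* Stop all Secure AVIC processing for this vCPU */
        native_wrmsr(MSR_AMD64_SECURE_AVIC_CONTROL, 0, 0);

        /* Drop the hypervisor's record of the backing page GPA */
        if (savic_unregister_gpa(&prev_gpa) == ES_OK)
            pr_debug("unregistered Secure AVIC page at GPA 0x%llx\n", prev_gpa);
    }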
With all the pieces in place now, enable Secure AVIC in the Secure
AVIC Control MSR. Any access to x2APIC MSRs is emulated by
the hypervisor before Secure AVIC is enabled in the control MSR.
Post Secure AVIC enablement, all x2APIC MSR accesses (whether
accelerated by AVIC hardware or trapped as a #VC exception) operate
on the vCPU's APIC backing page.

Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- Move MSR_AMD64_SECURE_AVIC_EN* macros to this patch.

arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/kernel/apic/x2apic_savic.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -XXX,XX +XXX,XX @@
#define MSR_AMD64_SNP_RESV_BIT        19
#define MSR_AMD64_SNP_RESERVED_MASK    GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT)
#define MSR_AMD64_SECURE_AVIC_CONTROL    0xc0010138
+#define MSR_AMD64_SECURE_AVIC_EN_BIT    0
+#define MSR_AMD64_SECURE_AVIC_EN    BIT_ULL(MSR_AMD64_SECURE_AVIC_EN_BIT)
#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT 1
#define MSR_AMD64_SECURE_AVIC_ALLOWEDNMI BIT_ULL(MSR_AMD64_SECURE_AVIC_ALLOWEDNMI_BIT)
#define MSR_AMD64_RMP_BASE        0xc0010132
diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -XXX,XX +XXX,XX @@ static void x2apic_savic_setup(void)
    ret = savic_register_gpa(gpa);
    if (ret != ES_OK)
        snp_abort();
-    savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI);
+    savic_wr_control_msr(gpa | MSR_AMD64_SECURE_AVIC_EN | MSR_AMD64_SECURE_AVIC_ALLOWEDNMI);
}

static int x2apic_savic_probe(void)
--
2.34.1
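The control MSR write packs three pieces into one 64-bit value: the
page-aligned GPA of the backing page in the upper bits, the enable bit
in bit 0 and the NMI-allowed bit in bit 1, which is why the flags can
be OR-ed straight into the address. A small sketch of the composition
(the macro values mirror the msr-index.h additions above; the helper
name is illustrative):

    #define SECURE_AVIC_EN          (1ULL << 0)
    #define SECURE_AVIC_ALLOWEDNMI  (1ULL << 1)

    /* gpa must be 4K-aligned, so its low 12 bits are free for flags */
    static unsigned long long savic_control_value(unsigned long long gpa)
    {
        return gpa | SECURE_AVIC_EN | SECURE_AVIC_ALLOWEDNMI;
    }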
The SECURE_AVIC_CONTROL MSR holds the GPA of the guest APIC backing
page and bitfields to control enablement of Secure AVIC and NMI by
guest vCPUs. This MSR is populated by the guest and the hypervisor
should not intercept it; a #VC exception is generated otherwise. If
this occurs and Secure AVIC is enabled, terminate guest execution.

Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- No change

arch/x86/coco/sev/core.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -XXX,XX +XXX,XX @@ static enum es_result __vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt
        if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
            return __vc_handle_secure_tsc_msrs(regs, write);
        break;
+    case MSR_AMD64_SECURE_AVIC_CONTROL:
+        /*
+         * AMD64_SECURE_AVIC_CONTROL should not be intercepted when
+         * Secure AVIC is enabled. Terminate the Secure AVIC guest
+         * if the interception is enabled.
+         */
...
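The new case turns a misconfigured host into a hard failure instead of
silent misbehaviour: once Secure AVIC is enabled, a #VC exception for
this MSR can only mean the hypervisor is intercepting a register it was
told to leave alone. A user-space model of that policy (illustrative
only; exit() stands in for the real GHCB termination request):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A #VC for the Secure AVIC control MSR is only legal while the
     * feature is still disabled (early boot, hypervisor emulation). */
    static void vc_for_savic_control(bool savic_enabled)
    {
        if (savic_enabled) {
            fprintf(stderr, "unexpected SECURE_AVIC_CONTROL intercept, terminating\n");
            exit(1);
        }
        /* otherwise fall through to normal MSR emulation */
    }

    int main(void)
    {
        vc_for_savic_control(false);  /* tolerated before enablement */
        vc_for_savic_control(true);   /* terminates the guest */
        return 0;
    }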
Now that Secure AVIC support is added in the guest, indicate that the
SEV-SNP guest supports the Secure AVIC feature if CONFIG_AMD_SECURE_AVIC
is enabled.

Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
Changes since v2:
- Set SNP_FEATURE_SECURE_AVIC in SNP_FEATURES_PRESENT only when
  CONFIG_AMD_SECURE_AVIC is enabled.

arch/x86/boot/compressed/sev.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index XXXXXXX..XXXXXXX 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -XXX,XX +XXX,XX @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
                 MSR_AMD64_SNP_SECURE_AVIC |        \
                 MSR_AMD64_SNP_RESERVED_MASK)

+#ifdef CONFIG_AMD_SECURE_AVIC
+#define SNP_FEATURE_SECURE_AVIC        MSR_AMD64_SNP_SECURE_AVIC
+#else
+#define SNP_FEATURE_SECURE_AVIC        0
+#endif
+
/*
* SNP_FEATURES_PRESENT is the mask of SNP features that are implemented
* by the guest kernel. As and when a new feature is implemented in the
* guest kernel, a corresponding bit should be added to the mask.
*/
#define SNP_FEATURES_PRESENT    (MSR_AMD64_SNP_DEBUG_SWAP |    \
-                 MSR_AMD64_SNP_SECURE_TSC)
+                 MSR_AMD64_SNP_SECURE_TSC |    \
+                 SNP_FEATURE_SECURE_AVIC)

u64 snp_get_unsupported_features(u64 status)
{
--
2.34.1
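SNP_FEATURES_PRESENT is the guest half of a feature handshake: the
decompressor masks the hypervisor-advertised SEV_STATUS bits against
the set this kernel implements and refuses to boot if anything is left
over. A sketch of that negotiation with made-up bit positions (only the
masking idea comes from the patch; the real code also honours an
implementation-required mask):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical bit assignments, for illustration only */
    #define FEAT_DEBUG_SWAP   (UINT64_C(1) << 0)
    #define FEAT_SECURE_TSC   (UINT64_C(1) << 1)
    #define FEAT_SECURE_AVIC  (UINT64_C(1) << 2)

    #define FEATURES_PRESENT  (FEAT_DEBUG_SWAP | FEAT_SECURE_TSC | FEAT_SECURE_AVIC)

    /* Bits the hypervisor enabled that this kernel does not implement */
    static uint64_t unsupported_features(uint64_t sev_status)
    {
        return sev_status & ~FEATURES_PRESENT;
    }

    int main(void)
    {
        uint64_t status = FEAT_SECURE_AVIC | (UINT64_C(1) << 5); /* unknown bit */

        if (unsupported_features(status))
            fprintf(stderr, "unsupported SNP features active, refusing to boot\n");
        return 0;
    }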