Link to branch: https://github.com/mediouni-m/qemu hvf-irqchip-and-nested
(tag for this submission: hvf-irqchip-and-nested-v8)
This series adds support for nested virtualisation when using HVF on arm64 Macs.
It has two parts:
- Apple vGICv3 support and necessary infrastructure changes for it
- Nested virtualisation support. Note that the nested virtualisation implementation
shipping as of macOS 26.3 is nVHE only (but it _does_ use VNCR as shipped).
It's rebased on top of the WHPX arm64 series.
Known issues:
- This series doesn't contain EL2 physical timer emulation, which is
needed if not leveraging the Apple vGIC.
- When nested virt is enabled, there is no UI response within EDK2 and it waits
forever. Workaround: -boot menu=on,splash-time=0 (see the example invocation
after the edk2 diff below).
Apple Feedback Assistant item: FB21649319
When the VM is running at EL2 at the very moment the virtual timer fires:
- when not using the provided vGIC, HV_EXIT_REASON_VTIMER_ACTIVATED doesn’t fire
(using a GICv2 doesn’t require the EL2<->EL1 transition notifiers that
Hypervisor.framework doesn’t have…);
- when using the provided vGIC, the interrupt never gets delivered back to the guest.
Linux as a guest OS is fine with this… but the reference ArmVirtQemu edk2 build
always uses the virtual timer even when running EFI at EL2, so it gets broken
unless this patch is applied to edk2:
diff --git a/ArmVirtPkg/ArmVirt.dsc.inc b/ArmVirtPkg/ArmVirt.dsc.inc
index 643620371b..1bfe7b67fc 100644
--- a/ArmVirtPkg/ArmVirt.dsc.inc
+++ b/ArmVirtPkg/ArmVirt.dsc.inc
@@ -101,7 +101,7 @@
CpuExceptionHandlerLib|ArmPkg/Library/ArmExceptionLib/ArmExceptionLib.inf
ArmSmcLib|MdePkg/Library/ArmSmcLib/ArmSmcLib.inf
ArmHvcLib|ArmPkg/Library/ArmHvcLib/ArmHvcLib.inf
- ArmGenericTimerCounterLib|ArmPkg/Library/ArmGenericTimerVirtCounterLib/ArmGenericTimerVirtCounterLib.inf
+ ArmGenericTimerCounterLib|ArmPkg/Library/ArmGenericTimerPhyCounterLib/ArmGenericTimerPhyCounterLib.inf
PlatformPeiLib|ArmVirtPkg/Library/PlatformPeiLib/PlatformPeiLib.inf
MemoryInitPeiLib|ArmVirtPkg/Library/ArmVirtMemoryInitPeiLib/ArmVirtMemoryInitPeiLib.inf
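For reference, an illustrative invocation exercising nested virt with the platform
vGIC on an Apple Silicon host might look like the following (the firmware and disk
paths are placeholders; -boot menu=on,splash-time=0 is the EDK2 workaround
mentioned above):

  qemu-system-aarch64 -machine virt,virtualization=on,gic-version=3 \
      -accel hvf,kernel-irqchip=on -cpu host -smp 4 -m 4G \
      -bios edk2-aarch64-code.fd \
      -boot menu=on,splash-time=0 \
      -drive file=guest.img,if=virtio -nographic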
Changelog:
v1->v2:
Oops, I made a mistake when preparing my patches.
- Add hvf_arm_el2_enable(_) call to virt_set_virt
- Fix nested virt support check to add HVF
v2->v3:
- LORC_EL1 patch was merged separately, remove from this series.
- fix LPIs when kernel-irqchip disabled and using TCG
- remove spurious if case in vGIC supported version detection (inapplicable now)
- Add hvf_enabled() check in combination with hvf kernel-irqchip checks
- cleanly fail on attempt to use the platform vGIC together with ITS
v3->v4:
- GIC state save improvements, including saving the opaque Apple-specific state
- Saving HVF system register state when using the vGIC and/or EL2
v5:
- oops, fixed up save/restore to be functional
- misc changes otherwise
v6:
- Addressing review comments
v7:
- Address review comments, adapt around QEMU changes and bugfixes.
v8:
- Rebase, and misc fixes
Based-on: https://patchew.org/QEMU/20260121134114.9781-1-mohamed@unpredictable.fr/
Mohamed Mediouni (11):
hw/intc: Add hvf vGIC interrupt controller support
accel, hw/arm, include/system/hvf: infrastructure changes for HVF vGIC
hvf: save/restore Apple GIC state
hw/arm, target/arm: nested virtualisation on HVF
hvf: only call hvf_sync_vtimer() when running without the platform
vGIC
hvf: gate ARM_FEATURE_PMU register emulation behind not being at EL2
hvf: arm: allow exposing minimal PMU when running with nested virt on
target/arm: hvf: instantiate GIC early
target/arm: hvf: add asserts for code paths not leveraged when using
the vGIC
hvf: sync registers used at EL2
target/arm: hvf: pass through CNTHCTL_EL2 and MDCCINT_EL1
accel/hvf/hvf-all.c | 51 ++
accel/stubs/hvf-stub.c | 2 +
hw/arm/virt.c | 32 +-
hw/intc/arm_gicv3_common.c | 3 +
hw/intc/arm_gicv3_hvf.c | 742 +++++++++++++++++++++++++++++
hw/intc/meson.build | 1 +
include/hw/intc/arm_gicv3_common.h | 1 +
include/system/hvf.h | 8 +
system/vl.c | 2 +
target/arm/hvf/hvf.c | 197 +++++++-
target/arm/hvf/sysreg.c.inc | 35 ++
11 files changed, 1060 insertions(+), 14 deletions(-)
create mode 100644 hw/intc/arm_gicv3_hvf.c
--
2.50.1 (Apple Git-155)
> On 21. Jan 2026, at 14:58, Mohamed Mediouni <mohamed@unpredictable.fr> wrote:
> [...]

Oops, looks like patchew recognised this as a patch and failed to apply it… Hopefully other automation didn’t get broken too...
This opens up the door to nested virtualisation support.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
---
hw/intc/arm_gicv3_hvf.c | 742 +++++++++++++++++++++++++++++
hw/intc/meson.build | 1 +
include/hw/intc/arm_gicv3_common.h | 1 +
3 files changed, 744 insertions(+)
create mode 100644 hw/intc/arm_gicv3_hvf.c
diff --git a/hw/intc/arm_gicv3_hvf.c b/hw/intc/arm_gicv3_hvf.c
new file mode 100644
index 0000000000..b8b859abb7
--- /dev/null
+++ b/hw/intc/arm_gicv3_hvf.c
@@ -0,0 +1,742 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * ARM Generic Interrupt Controller using HVF platform support
+ *
+ * Copyright (c) 2025 Mohamed Mediouni
+ * Based on vGICv3 KVM code by Pavel Fedin
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "hw/intc/arm_gicv3_common.h"
+#include "qemu/error-report.h"
+#include "qemu/module.h"
+#include "system/runstate.h"
+#include "system/hvf.h"
+#include "system/hvf_int.h"
+#include "hvf_arm.h"
+#include "gicv3_internal.h"
+#include "vgic_common.h"
+#include "qom/object.h"
+#include "target/arm/cpregs.h"
+#include <Hypervisor/Hypervisor.h>
+
+struct HVFARMGICv3Class {
+ ARMGICv3CommonClass parent_class;
+ DeviceRealize parent_realize;
+ ResettablePhases parent_phases;
+};
+
+typedef struct HVFARMGICv3Class HVFARMGICv3Class;
+
+/* This is reusing the GICv3State typedef from ARM_GICV3_ITS_COMMON */
+DECLARE_OBJ_CHECKERS(GICv3State, HVFARMGICv3Class,
+ HVF_GICV3, TYPE_HVF_GICV3);
+
+/*
+ * Loop through each distributor IRQ related register; since bits
+ * corresponding to SPIs and PPIs are RAZ/WI when affinity routing
+ * is enabled, we skip those.
+ */
+#define for_each_dist_irq_reg(_irq, _max, _field_width) \
+ for (_irq = GIC_INTERNAL; _irq < _max; _irq += (32 / _field_width))
+
+/*
+ * Wrap calls to the vGIC APIs to assert_hvf_ok()
+ * as a macro to keep the code clean.
+ */
+#define hv_gic_get_distributor_reg(offset, reg) \
+ assert_hvf_ok(hv_gic_get_distributor_reg(offset, reg))
+
+#define hv_gic_set_distributor_reg(offset, reg) \
+ assert_hvf_ok(hv_gic_set_distributor_reg(offset, reg))
+
+#define hv_gic_get_redistributor_reg(vcpu, reg, value) \
+ assert_hvf_ok(hv_gic_get_redistributor_reg(vcpu, reg, value))
+
+#define hv_gic_set_redistributor_reg(vcpu, reg, value) \
+ assert_hvf_ok(hv_gic_set_redistributor_reg(vcpu, reg, value))
+
+#define hv_gic_get_icc_reg(vcpu, reg, value) \
+ assert_hvf_ok(hv_gic_get_icc_reg(vcpu, reg, value))
+
+#define hv_gic_set_icc_reg(vcpu, reg, value) \
+ assert_hvf_ok(hv_gic_set_icc_reg(vcpu, reg, value))
+
+#define hv_gic_get_ich_reg(vcpu, reg, value) \
+ assert_hvf_ok(hv_gic_get_ich_reg(vcpu, reg, value))
+
+#define hv_gic_set_ich_reg(vcpu, reg, value) \
+ assert_hvf_ok(hv_gic_set_ich_reg(vcpu, reg, value))
+
+static void hvf_dist_get_priority(GICv3State *s, hv_gic_distributor_reg_t offset
+ , uint8_t *bmp)
+{
+ uint64_t reg;
+ uint32_t *field;
+ int irq;
+ field = (uint32_t *)(bmp);
+
+ for_each_dist_irq_reg(irq, s->num_irq, 8) {
+ hv_gic_get_distributor_reg(offset, &reg);
+ *field = reg;
+ offset += 4;
+ field++;
+ }
+}
+
+static void hvf_dist_put_priority(GICv3State *s, hv_gic_distributor_reg_t offset
+ , uint8_t *bmp)
+{
+ uint32_t reg, *field;
+ int irq;
+ field = (uint32_t *)(bmp);
+
+ for_each_dist_irq_reg(irq, s->num_irq, 8) {
+ reg = *field;
+ hv_gic_set_distributor_reg(offset, reg);
+ offset += 4;
+ field++;
+ }
+}
+
+static void hvf_dist_get_edge_trigger(GICv3State *s, hv_gic_distributor_reg_t offset,
+ uint32_t *bmp)
+{
+ uint64_t reg;
+ int irq;
+
+ for_each_dist_irq_reg(irq, s->num_irq, 2) {
+ hv_gic_get_distributor_reg(offset, &reg);
+ reg = half_unshuffle32(reg >> 1);
+ if (irq % 32 != 0) {
+ reg = (reg << 16);
+ }
+ *gic_bmp_ptr32(bmp, irq) |= reg;
+ offset += 4;
+ }
+}
+
+static void hvf_dist_put_edge_trigger(GICv3State *s, hv_gic_distributor_reg_t offset,
+ uint32_t *bmp)
+{
+ uint32_t reg;
+ int irq;
+
+ for_each_dist_irq_reg(irq, s->num_irq, 2) {
+ reg = *gic_bmp_ptr32(bmp, irq);
+ if (irq % 32 != 0) {
+ reg = (reg & 0xffff0000) >> 16;
+ } else {
+ reg = reg & 0xffff;
+ }
+ reg = half_shuffle32(reg) << 1;
+ hv_gic_set_distributor_reg(offset, reg);
+ offset += 4;
+ }
+}
+
+/* Read a bitmap register group from the HVF vGIC. */
+static void hvf_dist_getbmp(GICv3State *s, hv_gic_distributor_reg_t offset, uint32_t *bmp)
+{
+ uint64_t reg;
+ int irq;
+
+ for_each_dist_irq_reg(irq, s->num_irq, 1) {
+
+ hv_gic_get_distributor_reg(offset, &reg);
+ *gic_bmp_ptr32(bmp, irq) = reg;
+ offset += 4;
+ }
+}
+
+static void hvf_dist_putbmp(GICv3State *s, hv_gic_distributor_reg_t offset,
+ hv_gic_distributor_reg_t clroffset, uint32_t *bmp)
+{
+ uint32_t reg;
+ int irq;
+
+ for_each_dist_irq_reg(irq, s->num_irq, 1) {
+ /*
+ * If this bitmap is a set/clear register pair, first write to the
+ * clear-reg to clear all bits before using the set-reg to write
+ * the 1 bits.
+ */
+ if (clroffset != 0) {
+ reg = 0;
+ hv_gic_set_distributor_reg(clroffset, reg);
+ clroffset += 4;
+ }
+ reg = *gic_bmp_ptr32(bmp, irq);
+ hv_gic_set_distributor_reg(offset, reg);
+ offset += 4;
+ }
+}
+
+static void hvf_gicv3_check(GICv3State *s)
+{
+ uint64_t reg;
+ uint32_t num_irq;
+
+ /* Sanity checking s->num_irq */
+ hv_gic_get_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_TYPER, &reg);
+ num_irq = ((reg & 0x1f) + 1) * 32;
+
+ if (num_irq < s->num_irq) {
+ error_report("Model requests %u IRQs, but HVF supports max %u",
+ s->num_irq, num_irq);
+ abort();
+ }
+}
+
+static void hvf_gicv3_put_cpu_el2(CPUState *cpu_state, run_on_cpu_data arg)
+{
+ int num_pri_bits;
+
+ /* Redistributor state */
+ GICv3CPUState *c = arg.host_ptr;
+ hv_vcpu_t vcpu = c->cpu->accel->fd;
+
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_VMCR_EL2, c->ich_vmcr_el2);
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_HCR_EL2, c->ich_hcr_el2);
+
+ for (int i = 0; i < GICV3_LR_MAX; i++) {
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2 + i, c->ich_lr_el2[i]);
+ }
+
+ num_pri_bits = c->vpribits;
+
+ switch (num_pri_bits) {
+ case 7:
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 3,
+ c->ich_apr[GICV3_G0][3]);
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 2,
+ c->ich_apr[GICV3_G0][2]);
+ /* fall through */
+ case 6:
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 1,
+ c->ich_apr[GICV3_G0][1]);
+ /* fall through */
+ default:
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2,
+ c->ich_apr[GICV3_G0][0]);
+ }
+
+ switch (num_pri_bits) {
+ case 7:
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 3,
+ c->ich_apr[GICV3_G1NS][3]);
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 2,
+ c->ich_apr[GICV3_G1NS][2]);
+ /* fall through */
+ case 6:
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 1,
+ c->ich_apr[GICV3_G1NS][1]);
+ /* fall through */
+ default:
+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2,
+ c->ich_apr[GICV3_G1NS][0]);
+ }
+}
+
+static void hvf_gicv3_put_cpu(CPUState *cpu_state, run_on_cpu_data arg)
+{
+ uint32_t reg;
+ uint64_t reg64;
+ int i, num_pri_bits;
+
+ /* Redistributor state */
+ GICv3CPUState *c = arg.host_ptr;
+ hv_vcpu_t vcpu = c->cpu->accel->fd;
+
+
+ reg = c->gicr_igroupr0;
+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0, reg);
+
+ reg = ~0;
+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICENABLER0, reg);
+ reg = c->gicr_ienabler0;
+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISENABLER0, reg);
+
+ /* Restore config before pending so we treat level/edge correctly */
+ reg = half_shuffle32(c->edge_trigger >> 16) << 1;
+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICFGR1, reg);
+
+ reg = ~0;
+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICPENDR0, reg);
+ reg = c->gicr_ipendr0;
+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISPENDR0, reg);
+
+ reg = ~0;
+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICACTIVER0, reg);
+ reg = c->gicr_iactiver0;
+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISACTIVER0, reg);
+
+ for (i = 0; i < GIC_INTERNAL; i += 4) {
+ reg = c->gicr_ipriorityr[i] |
+ (c->gicr_ipriorityr[i + 1] << 8) |
+ (c->gicr_ipriorityr[i + 2] << 16) |
+ (c->gicr_ipriorityr[i + 3] << 24);
+ hv_gic_set_redistributor_reg(vcpu,
+ HV_GIC_REDISTRIBUTOR_REG_GICR_IPRIORITYR0 + i, reg);
+ }
+
+ /* CPU interface state */
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_SRE_EL1, c->icc_sre_el1);
+
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_CTLR_EL1,
+ c->icc_ctlr_el1[GICV3_NS]);
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN0_EL1,
+ c->icc_igrpen[GICV3_G0]);
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN1_EL1,
+ c->icc_igrpen[GICV3_G1NS]);
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_PMR_EL1, c->icc_pmr_el1);
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_BPR0_EL1, c->icc_bpr[GICV3_G0]);
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_BPR1_EL1, c->icc_bpr[GICV3_G1NS]);
+
+ num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] &
+ ICC_CTLR_EL1_PRIBITS_MASK) >>
+ ICC_CTLR_EL1_PRIBITS_SHIFT) + 1;
+
+ switch (num_pri_bits) {
+ case 7:
+ reg64 = c->icc_apr[GICV3_G0][3];
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 3, reg64);
+ reg64 = c->icc_apr[GICV3_G0][2];
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 2, reg64);
+ /* fall through */
+ case 6:
+ reg64 = c->icc_apr[GICV3_G0][1];
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 1, reg64);
+ /* fall through */
+ default:
+ reg64 = c->icc_apr[GICV3_G0][0];
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1, reg64);
+ }
+
+ switch (num_pri_bits) {
+ case 7:
+ reg64 = c->icc_apr[GICV3_G1NS][3];
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 3, reg64);
+ reg64 = c->icc_apr[GICV3_G1NS][2];
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 2, reg64);
+ /* fall through */
+ case 6:
+ reg64 = c->icc_apr[GICV3_G1NS][1];
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 1, reg64);
+ /* fall through */
+ default:
+ reg64 = c->icc_apr[GICV3_G1NS][0];
+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1, reg64);
+ }
+
+ /* Registers beyond this point are only relevant with nested virt */
+ if (c->gic->maint_irq) {
+ hvf_gicv3_put_cpu_el2(cpu_state, arg);
+ }
+}
+
+static void hvf_gicv3_put(GICv3State *s)
+{
+ uint32_t reg;
+ int ncpu, i;
+
+ hvf_gicv3_check(s);
+
+ reg = s->gicd_ctlr;
+ hv_gic_set_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_CTLR, reg);
+
+ /* per-CPU state */
+
+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
+ run_on_cpu_data data;
+ data.host_ptr = &s->cpu[ncpu];
+ run_on_cpu(s->cpu[ncpu].cpu, hvf_gicv3_put_cpu, data);
+ }
+
+ /* s->enable bitmap -> GICD_ISENABLERn */
+ hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISENABLER0
+ , HV_GIC_DISTRIBUTOR_REG_GICD_ICENABLER0, s->enabled);
+
+ /* s->group bitmap -> GICD_IGROUPRn */
+ hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_IGROUPR0
+ , 0, s->group);
+
+ /*
+ * Restore targets before pending to ensure the pending state is set on
+ * the appropriate CPU interfaces in HVF.
+ */
+
+ /* s->gicd_irouter[irq] -> GICD_IROUTERn */
+ for (i = GIC_INTERNAL; i < s->num_irq; i++) {
+ uint32_t offset = HV_GIC_DISTRIBUTOR_REG_GICD_IROUTER32 + (8 * i)
+ - (8 * GIC_INTERNAL);
+ hv_gic_set_distributor_reg(offset, s->gicd_irouter[i]);
+ }
+
+ /*
+ * s->trigger bitmap -> GICD_ICFGRn
+ * (restore configuration registers before pending IRQs so we treat
+ * level/edge correctly)
+ */
+ hvf_dist_put_edge_trigger(s, HV_GIC_DISTRIBUTOR_REG_GICD_ICFGR0, s->edge_trigger);
+
+ /* s->pending bitmap -> GICD_ISPENDRn */
+ hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISPENDR0,
+ HV_GIC_DISTRIBUTOR_REG_GICD_ICPENDR0, s->pending);
+
+ /* s->active bitmap -> GICD_ISACTIVERn */
+ hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISACTIVER0,
+ HV_GIC_DISTRIBUTOR_REG_GICD_ICACTIVER0, s->active);
+
+ /* s->gicd_ipriority[] -> GICD_IPRIORITYRn */
+ hvf_dist_put_priority(s, HV_GIC_DISTRIBUTOR_REG_GICD_IPRIORITYR0, s->gicd_ipriority);
+}
+
+static void hvf_gicv3_get_cpu_el2(CPUState *cpu_state, run_on_cpu_data arg)
+{
+ int num_pri_bits;
+
+ /* Redistributor state */
+ GICv3CPUState *c = arg.host_ptr;
+ hv_vcpu_t vcpu = c->cpu->accel->fd;
+
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_VMCR_EL2, &c->ich_vmcr_el2);
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_HCR_EL2, &c->ich_hcr_el2);
+
+ for (int i = 0; i < GICV3_LR_MAX; i++) {
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2 + i, &c->ich_lr_el2[i]);
+ }
+
+ num_pri_bits = c->vpribits;
+
+ switch (num_pri_bits) {
+ case 7:
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 3,
+ &c->ich_apr[GICV3_G0][3]);
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 2,
+ &c->ich_apr[GICV3_G0][2]);
+ /* fall through */
+ case 6:
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 1,
+ &c->ich_apr[GICV3_G0][1]);
+ /* fall through */
+ default:
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2,
+ &c->ich_apr[GICV3_G0][0]);
+ }
+
+ switch (num_pri_bits) {
+ case 7:
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 3,
+ &c->ich_apr[GICV3_G1NS][3]);
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 2,
+ &c->ich_apr[GICV3_G1NS][2]);
+ /* fall through */
+ case 6:
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 1,
+ &c->ich_apr[GICV3_G1NS][1]);
+ /* fall through */
+ default:
+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2,
+ &c->ich_apr[GICV3_G1NS][0]);
+ }
+}
+
+static void hvf_gicv3_get_cpu(CPUState *cpu_state, run_on_cpu_data arg)
+{
+ uint64_t reg;
+ int i, num_pri_bits;
+
+ /* Redistributor state */
+ GICv3CPUState *c = arg.host_ptr;
+ hv_vcpu_t vcpu = c->cpu->accel->fd;
+
+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0,
+ &reg);
+ c->gicr_igroupr0 = reg;
+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISENABLER0,
+ &reg);
+ c->gicr_ienabler0 = reg;
+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICFGR1,
+ &reg);
+ c->edge_trigger = half_unshuffle32(reg >> 1) << 16;
+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISPENDR0,
+ &reg);
+ c->gicr_ipendr0 = reg;
+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISACTIVER0,
+ &reg);
+ c->gicr_iactiver0 = reg;
+
+ for (i = 0; i < GIC_INTERNAL; i += 4) {
+ hv_gic_get_redistributor_reg(
+ vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IPRIORITYR0 + i, &reg);
+ c->gicr_ipriorityr[i] = extract32(reg, 0, 8);
+ c->gicr_ipriorityr[i + 1] = extract32(reg, 8, 8);
+ c->gicr_ipriorityr[i + 2] = extract32(reg, 16, 8);
+ c->gicr_ipriorityr[i + 3] = extract32(reg, 24, 8);
+ }
+
+ /* CPU interface */
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_SRE_EL1, &c->icc_sre_el1);
+
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_CTLR_EL1,
+ &c->icc_ctlr_el1[GICV3_NS]);
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN0_EL1,
+ &c->icc_igrpen[GICV3_G0]);
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN1_EL1,
+ &c->icc_igrpen[GICV3_G1NS]);
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_PMR_EL1, &c->icc_pmr_el1);
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_BPR0_EL1, &c->icc_bpr[GICV3_G0]);
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_BPR1_EL1, &c->icc_bpr[GICV3_G1NS]);
+ num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] & ICC_CTLR_EL1_PRIBITS_MASK) >>
+ ICC_CTLR_EL1_PRIBITS_SHIFT) +
+ 1;
+
+ switch (num_pri_bits) {
+ case 7:
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 3,
+ &c->icc_apr[GICV3_G0][3]);
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 2,
+ &c->icc_apr[GICV3_G0][2]);
+ /* fall through */
+ case 6:
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 1,
+ &c->icc_apr[GICV3_G0][1]);
+ /* fall through */
+ default:
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1,
+ &c->icc_apr[GICV3_G0][0]);
+ }
+
+ switch (num_pri_bits) {
+ case 7:
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 3,
+ &c->icc_apr[GICV3_G1NS][3]);
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 2,
+ &c->icc_apr[GICV3_G1NS][2]);
+ /* fall through */
+ case 6:
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 1,
+ &c->icc_apr[GICV3_G1NS][1]);
+ /* fall through */
+ default:
+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1,
+ &c->icc_apr[GICV3_G1NS][0]);
+ }
+
+ /* Registers beyond this point are only relevant with nested virt */
+ if (c->gic->maint_irq) {
+ hvf_gicv3_get_cpu_el2(cpu_state, arg);
+ }
+}
+
+static void hvf_gicv3_get(GICv3State *s)
+{
+ uint64_t reg;
+ int ncpu, i;
+
+ hvf_gicv3_check(s);
+
+ hv_gic_get_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_CTLR, &reg);
+ s->gicd_ctlr = reg;
+
+ /* Redistributor state (one per CPU) */
+
+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
+ run_on_cpu_data data;
+ data.host_ptr = &s->cpu[ncpu];
+ run_on_cpu(s->cpu[ncpu].cpu, hvf_gicv3_get_cpu, data);
+ }
+
+ /* GICD_IGROUPRn -> s->group bitmap */
+ hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_IGROUPR0, s->group);
+
+ /* GICD_ISENABLERn -> s->enabled bitmap */
+ hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISENABLER0, s->enabled);
+
+ /* GICD_ISPENDRn -> s->pending bitmap */
+ hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISPENDR0, s->pending);
+
+ /* GICD_ISACTIVERn -> s->active bitmap */
+ hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISACTIVER0, s->active);
+
+ /* GICD_ICFGRn -> s->trigger bitmap */
+ hvf_dist_get_edge_trigger(s, HV_GIC_DISTRIBUTOR_REG_GICD_ICFGR0
+ , s->edge_trigger);
+
+ /* GICD_IPRIORITYRn -> s->gicd_ipriority[] */
+ hvf_dist_get_priority(s, HV_GIC_DISTRIBUTOR_REG_GICD_IPRIORITYR0
+ , s->gicd_ipriority);
+
+ /* GICD_IROUTERn -> s->gicd_irouter[irq] */
+ for (i = GIC_INTERNAL; i < s->num_irq; i++) {
+ uint32_t offset = HV_GIC_DISTRIBUTOR_REG_GICD_IROUTER32
+ + (8 * i) - (8 * GIC_INTERNAL);
+ hv_gic_get_distributor_reg(offset, &s->gicd_irouter[i]);
+ }
+}
+
+static void hvf_gicv3_set_irq(void *opaque, int irq, int level)
+{
+ GICv3State *s = opaque;
+ if (irq > s->num_irq) {
+ return;
+ }
+ hv_gic_set_spi(GIC_INTERNAL + irq, !!level);
+}
+
+static void hvf_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ GICv3State *s;
+ GICv3CPUState *c;
+
+ c = env->gicv3state;
+ s = c->gic;
+
+ c->icc_pmr_el1 = 0;
+ /*
+ * Architecturally the reset value of the ICC_BPR registers
+ * is UNKNOWN. We set them all to 0 here; when HVF uses these
+ * values to program the ICH_VMCR_EL2 fields that determine
+ * the guest-visible ICC_BPR register values, the hardware's
+ * "writing a value less than the minimum sets the field to
+ * the minimum value" behaviour will result in them effectively
+ * resetting to the correct minimum value for the host GIC.
+ */
+ c->icc_bpr[GICV3_G0] = 0;
+ c->icc_bpr[GICV3_G1] = 0;
+ c->icc_bpr[GICV3_G1NS] = 0;
+
+ c->icc_sre_el1 = 0x7;
+ memset(c->icc_apr, 0, sizeof(c->icc_apr));
+ memset(c->icc_igrpen, 0, sizeof(c->icc_igrpen));
+
+ if (s->migration_blocker) {
+ return;
+ }
+}
+
+static void hvf_gicv3_reset_hold(Object *obj, ResetType type)
+{
+ GICv3State *s = ARM_GICV3_COMMON(obj);
+ HVFARMGICv3Class *kgc = HVF_GICV3_GET_CLASS(s);
+
+ if (kgc->parent_phases.hold) {
+ kgc->parent_phases.hold(obj, type);
+ }
+
+ hvf_gicv3_put(s);
+}
+
+
+/*
+ * The GIC CPU interface registers need to be reset on CPU reset.
+ * To get hvf_gicv3_icc_reset() called on CPU reset, we register the
+ * ARMCPRegInfo below. As we reset the whole CPU interface from a single
+ * register reset hook, we define only one CPU interface register instead
+ * of defining all of them.
+ */
+static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
+ { .name = "ICC_CTLR_EL1", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 4,
+ /*
+ * If ARM_CP_NOP is used, resetfn is not called,
+ * So ARM_CP_NO_RAW is appropriate type.
+ */
+ .type = ARM_CP_NO_RAW,
+ .access = PL1_RW,
+ .readfn = arm_cp_read_zero,
+ .writefn = arm_cp_write_ignore,
+ /*
+ * We hang the whole cpu interface reset routine off here
+ * rather than parcelling it out into one little function
+ * per register
+ */
+ .resetfn = hvf_gicv3_icc_reset,
+ },
+};
+
+static void hvf_gicv3_realize(DeviceState *dev, Error **errp)
+{
+ ERRP_GUARD();
+ GICv3State *s = HVF_GICV3(dev);
+ HVFARMGICv3Class *kgc = HVF_GICV3_GET_CLASS(s);
+ int i;
+
+ kgc->parent_realize(dev, errp);
+ if (*errp) {
+ return;
+ }
+
+ if (s->revision != 3) {
+ error_setg(errp, "unsupported GIC revision %d for platform GIC",
+ s->revision);
+ return;
+ }
+
+ if (s->security_extn) {
+ error_setg(errp, "the platform vGICv3 does not implement the "
+ "security extensions");
+ return;
+ }
+
+ if (s->nmi_support) {
+ error_setg(errp, "NMI is not supported with the platform GIC");
+ return;
+ }
+
+ if (s->nb_redist_regions > 1) {
+ error_setg(errp, "Multiple VGICv3 redistributor regions are not "
+ "supported by HVF");
+ error_append_hint(errp, "A maximum of %d VCPUs can be used",
+ s->redist_region_count[0]);
+ return;
+ }
+
+ gicv3_init_irqs_and_mmio(s, hvf_gicv3_set_irq, NULL);
+
+ for (i = 0; i < s->num_cpu; i++) {
+ ARMCPU *cpu = ARM_CPU(qemu_get_cpu(i));
+
+ define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
+ }
+
+ if (s->maint_irq && s->maint_irq != HV_GIC_INT_MAINTENANCE) {
+ error_setg(errp, "vGIC maintenance IRQ mismatch with the hardcoded one in HVF.");
+ return;
+ }
+}
+
+static void hvf_gicv3_class_init(ObjectClass *klass, const void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
+ ARMGICv3CommonClass *agcc = ARM_GICV3_COMMON_CLASS(klass);
+ HVFARMGICv3Class *kgc = HVF_GICV3_CLASS(klass);
+
+ agcc->pre_save = hvf_gicv3_get;
+ agcc->post_load = hvf_gicv3_put;
+
+ device_class_set_parent_realize(dc, hvf_gicv3_realize,
+ &kgc->parent_realize);
+ resettable_class_set_parent_phases(rc, NULL, hvf_gicv3_reset_hold, NULL,
+ &kgc->parent_phases);
+}
+
+static const TypeInfo hvf_arm_gicv3_info = {
+ .name = TYPE_HVF_GICV3,
+ .parent = TYPE_ARM_GICV3_COMMON,
+ .instance_size = sizeof(GICv3State),
+ .class_init = hvf_gicv3_class_init,
+ .class_size = sizeof(HVFARMGICv3Class),
+};
+
+static void hvf_gicv3_register_types(void)
+{
+ type_register_static(&hvf_arm_gicv3_info);
+}
+
+type_init(hvf_gicv3_register_types)
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
index 96742df090..b7baf8a0f6 100644
--- a/hw/intc/meson.build
+++ b/hw/intc/meson.build
@@ -42,6 +42,7 @@ arm_common_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif_common
arm_common_ss.add(when: 'CONFIG_ARM_GICV3', if_true: files('arm_gicv3_cpuif.c'))
specific_ss.add(when: 'CONFIG_ARM_GIC_KVM', if_true: files('arm_gic_kvm.c'))
specific_ss.add(when: ['CONFIG_WHPX', 'TARGET_AARCH64'], if_true: files('arm_gicv3_whpx.c'))
+specific_ss.add(when: ['CONFIG_HVF', 'CONFIG_ARM_GICV3'], if_true: files('arm_gicv3_hvf.c'))
specific_ss.add(when: ['CONFIG_ARM_GIC_KVM', 'TARGET_AARCH64'], if_true: files('arm_gicv3_kvm.c', 'arm_gicv3_its_kvm.c'))
arm_common_ss.add(when: 'CONFIG_ARM_V7M', if_true: files('armv7m_nvic.c'))
specific_ss.add(when: 'CONFIG_GRLIB', if_true: files('grlib_irqmp.c'))
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
index c55cf18120..9adcab0a0c 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -315,6 +315,7 @@ DECLARE_OBJ_CHECKERS(GICv3State, ARMGICv3CommonClass,
/* Types for GICv3 kernel-irqchip */
#define TYPE_WHPX_GICV3 "whpx-arm-gicv3"
+#define TYPE_HVF_GICV3 "hvf-arm-gicv3"
struct ARMGICv3CommonClass {
/*< private >*/
--
2.50.1 (Apple Git-155)
Misc changes needed for HVF vGIC enablement.
Note: x86_64 macOS has exposed interrupt controller virtualisation since macOS 12.
An #ifdef is kept here in case we end up supporting that...
However, given that x86_64 macOS is on its way out, it will probably (?) never be supported in QEMU.
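As an illustrative example (the remaining options are left to the user), the new
accelerator property selects between the platform vGIC and QEMU's emulated GIC:

  # use the Hypervisor.framework vGIC (the default on arm64 hosts)
  qemu-system-aarch64 -machine virt,gic-version=3 -accel hvf,kernel-irqchip=on -cpu host ...

  # fall back to the emulated GIC (needed e.g. for gic-version=2 or the ITS)
  qemu-system-aarch64 -machine virt,gic-version=3 -accel hvf,kernel-irqchip=off -cpu host ...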
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
---
accel/hvf/hvf-all.c | 50 ++++++++++++++++++++++++++++++++++++++
accel/stubs/hvf-stub.c | 1 +
hw/arm/virt.c | 24 +++++++++++++++---
hw/intc/arm_gicv3_common.c | 3 +++
include/system/hvf.h | 3 +++
system/vl.c | 2 ++
6 files changed, 79 insertions(+), 4 deletions(-)
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
index 033c677b6f..929f53fd37 100644
--- a/accel/hvf/hvf-all.c
+++ b/accel/hvf/hvf-all.c
@@ -10,6 +10,8 @@
#include "qemu/osdep.h"
#include "qemu/error-report.h"
+#include "qapi/error.h"
+#include "qapi/qapi-visit-common.h"
#include "accel/accel-ops.h"
#include "exec/cpu-common.h"
#include "system/address-spaces.h"
@@ -22,6 +24,7 @@
#include "trace.h"
bool hvf_allowed;
+bool hvf_kernel_irqchip;
const char *hvf_return_string(hv_return_t ret)
{
@@ -217,6 +220,43 @@ static int hvf_gdbstub_sstep_flags(AccelState *as)
return SSTEP_ENABLE | SSTEP_NOIRQ;
}
+static void hvf_set_kernel_irqchip(Object *obj, Visitor *v,
+ const char *name, void *opaque,
+ Error **errp)
+{
+ OnOffSplit mode;
+ if (!visit_type_OnOffSplit(v, name, &mode, errp)) {
+ return;
+ }
+
+ switch (mode) {
+ case ON_OFF_SPLIT_ON:
+#ifdef HOST_X86_64
+ /* macOS 12 onwards exposes an HVF virtual APIC. */
+ error_setg(errp, "HVF: kernel irqchip is not currently implemented for x86.");
+ break;
+#else
+ hvf_kernel_irqchip = true;
+ break;
+#endif
+
+ case ON_OFF_SPLIT_OFF:
+ hvf_kernel_irqchip = false;
+ break;
+
+ case ON_OFF_SPLIT_SPLIT:
+ error_setg(errp, "HVF: split irqchip is not supported on HVF.");
+ break;
+
+ default:
+ /*
+ * The value was checked in visit_type_OnOffSplit() above. If
+ * we get here, then something is wrong in QEMU.
+ */
+ abort();
+ }
+}
+
static void hvf_accel_class_init(ObjectClass *oc, const void *data)
{
AccelClass *ac = ACCEL_CLASS(oc);
@@ -224,6 +264,16 @@ static void hvf_accel_class_init(ObjectClass *oc, const void *data)
ac->init_machine = hvf_accel_init;
ac->allowed = &hvf_allowed;
ac->gdbstub_supported_sstep_flags = hvf_gdbstub_sstep_flags;
+#ifdef HOST_X86_64
+ hvf_kernel_irqchip = false;
+#else
+ hvf_kernel_irqchip = true;
+#endif
+ object_class_property_add(oc, "kernel-irqchip", "on|off|split",
+ NULL, hvf_set_kernel_irqchip,
+ NULL, NULL);
+ object_class_property_set_description(oc, "kernel-irqchip",
+ "Configure HVF irqchip");
}
static const TypeInfo hvf_accel_type = {
diff --git a/accel/stubs/hvf-stub.c b/accel/stubs/hvf-stub.c
index 42eadc5ca9..6bd08759ba 100644
--- a/accel/stubs/hvf-stub.c
+++ b/accel/stubs/hvf-stub.c
@@ -10,3 +10,4 @@
#include "system/hvf.h"
bool hvf_allowed;
+bool hvf_kernel_irqchip;
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 55cdd67657..4b4f572f9f 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -838,7 +838,7 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
* interrupts; there are always 32 of the former (mandated by GIC spec).
*/
qdev_prop_set_uint32(vms->gic, "num-irq", NUM_IRQS + 32);
- if (!kvm_irqchip_in_kernel()) {
+ if (!kvm_irqchip_in_kernel() && !hvf_irqchip_in_kernel()) {
qdev_prop_set_bit(vms->gic, "has-security-extensions", vms->secure);
}
@@ -861,7 +861,8 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
qdev_prop_set_array(vms->gic, "redist-region-count",
redist_region_count);
- if (!kvm_irqchip_in_kernel()) {
+ if (!kvm_irqchip_in_kernel() &&
+ !(hvf_enabled() && hvf_irqchip_in_kernel())) {
if (vms->tcg_its) {
object_property_set_link(OBJECT(vms->gic), "sysmem",
OBJECT(mem), &error_fatal);
@@ -872,7 +873,7 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
ARCH_GIC_MAINT_IRQ);
}
} else {
- if (!kvm_irqchip_in_kernel()) {
+ if (!kvm_irqchip_in_kernel() && !hvf_irqchip_in_kernel()) {
qdev_prop_set_bit(vms->gic, "has-virtualization-extensions",
vms->virt);
}
@@ -2114,7 +2115,15 @@ static void finalize_gic_version(VirtMachineState *vms)
accel_name = "KVM with kernel-irqchip=off";
} else if (whpx_enabled()) {
gics_supported |= VIRT_GIC_VERSION_3_MASK;
- } else if (tcg_enabled() || hvf_enabled() || qtest_enabled()) {
+ } else if (hvf_enabled()) {
+ if (!hvf_irqchip_in_kernel()) {
+ gics_supported |= VIRT_GIC_VERSION_2_MASK;
+ }
+ /* Hypervisor.framework doesn't expose EL2<->1 transition notifiers */
+ if (!(!hvf_irqchip_in_kernel() && vms->virt)) {
+ gics_supported |= VIRT_GIC_VERSION_3_MASK;
+ }
+ } else if (tcg_enabled() || qtest_enabled()) {
gics_supported |= VIRT_GIC_VERSION_2_MASK;
if (module_object_class_by_name("arm-gicv3")) {
gics_supported |= VIRT_GIC_VERSION_3_MASK;
@@ -2150,6 +2159,9 @@ static void finalize_msi_controller(VirtMachineState *vms)
if (whpx_enabled() && whpx_irqchip_in_kernel()) {
vms->msi_controller = VIRT_MSI_CTRL_GICV2M;
}
+ if (hvf_enabled() && hvf_irqchip_in_kernel()) {
+ vms->msi_controller = VIRT_MSI_CTRL_GICV2M;
+ }
if (vms->gic_version == VIRT_GIC_VERSION_2) {
vms->msi_controller = VIRT_MSI_CTRL_GICV2M;
}
@@ -2168,6 +2180,10 @@ static void finalize_msi_controller(VirtMachineState *vms)
error_report("ITS not supported on WHPX.");
exit(1);
}
+ if (hvf_enabled() && hvf_irqchip_in_kernel()) {
+ error_report("ITS not supported on HVF when using the hardware vGIC.");
+ exit(1);
+ }
}
assert(vms->msi_controller != VIRT_MSI_CTRL_AUTO);
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
index 9054143ea7..d98f95be00 100644
--- a/hw/intc/arm_gicv3_common.c
+++ b/hw/intc/arm_gicv3_common.c
@@ -33,6 +33,7 @@
#include "hw/arm/linux-boot-if.h"
#include "system/kvm.h"
#include "system/whpx.h"
+#include "system/hvf.h"
static void gicv3_gicd_no_migration_shift_bug_post_load(GICv3State *cs)
@@ -666,6 +667,8 @@ const char *gicv3_class_name(void)
return "kvm-arm-gicv3";
} else if (whpx_enabled()) {
return TYPE_WHPX_GICV3;
+ } else if (hvf_enabled() && hvf_irqchip_in_kernel()) {
+ return TYPE_HVF_GICV3;
} else {
if (kvm_enabled()) {
error_report("Userspace GICv3 is not supported with KVM");
diff --git a/include/system/hvf.h b/include/system/hvf.h
index d3dcf088b3..dc8da85979 100644
--- a/include/system/hvf.h
+++ b/include/system/hvf.h
@@ -26,8 +26,11 @@
#ifdef CONFIG_HVF_IS_POSSIBLE
extern bool hvf_allowed;
#define hvf_enabled() (hvf_allowed)
+extern bool hvf_kernel_irqchip;
+#define hvf_irqchip_in_kernel() (hvf_kernel_irqchip)
#else /* !CONFIG_HVF_IS_POSSIBLE */
#define hvf_enabled() 0
+#define hvf_irqchip_in_kernel() 0
#endif /* !CONFIG_HVF_IS_POSSIBLE */
#define TYPE_HVF_ACCEL ACCEL_CLASS_NAME("hvf")
diff --git a/system/vl.c b/system/vl.c
index aa9a155041..d3f311eff8 100644
--- a/system/vl.c
+++ b/system/vl.c
@@ -1778,6 +1778,8 @@ static void qemu_apply_legacy_machine_options(QDict *qdict)
false);
object_register_sugar_prop(ACCEL_CLASS_NAME("whpx"), "kernel-irqchip", value,
false);
+ object_register_sugar_prop(ACCEL_CLASS_NAME("hvf"), "kernel-irqchip", value,
+ false);
qdict_del(qdict, "kernel-irqchip");
}
--
2.50.1 (Apple Git-155)
On HVF, some of the GIC state is in an opaque Apple-provided structure.
Save/restore that state to be able to save/restore VMs.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
---
target/arm/hvf/hvf.c | 73 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 73 insertions(+)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 74b6f5e7db..a220699077 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -22,6 +22,7 @@
#include "cpu-sysregs.h"
#include <mach/mach_time.h>
+#include <stdint.h>
#include "system/address-spaces.h"
#include "system/memory.h"
@@ -2069,15 +2070,83 @@ static const VMStateDescription vmstate_hvf_vtimer = {
},
};
+/* Apple specific opaque state for the vGIC */
+
+typedef struct HVGICState {
+ void *state;
+ uint32_t size;
+} HVGICState;
+
+static HVGICState gic;
+
+static int hvf_gic_opaque_state_get(void)
+{
+ hv_gic_state_t gic_state;
+ hv_return_t err;
+ size_t size;
+
+ gic_state = hv_gic_state_create();
+ if (gic_state == NULL) {
+ error_report("hvf: vgic: hv_gic_state_create() failed.");
+ return 1;
+ }
+ err = hv_gic_state_get_size(gic_state, &size);
+ if (err != HV_SUCCESS) {
+ error_report("hvf: vgic: failed to get GIC state size.");
+ os_release(gic_state);
+ return 1;
+ }
+ gic.size = size;
+ /* Free any snapshot from a previous pause before allocating a new one */
+ g_free(gic.state);
+ gic.state = g_malloc(gic.size);
+ err = hv_gic_state_get_data(gic_state, gic.state);
+ os_release(gic_state);
+ if (err != HV_SUCCESS) {
+ error_report("hvf: vgic: failed to get GIC state.");
+ return 1;
+ }
+ return 0;
+}
+
+static int hvf_gic_opaque_state_set(void)
+{
+ hv_return_t err;
+ if (!gic.size) {
+ return 0;
+ }
+ err = hv_gic_set_state(gic.state, gic.size);
+ if (err != HV_SUCCESS) {
+ error_report("hvf: vgic: failed to restore GIC state.");
+ return 1;
+ }
+ return 0;
+}
+
+static const VMStateDescription vmstate_hvf_gic = {
+ .name = "hvf-gic",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (const VMStateField[]) {
+ VMSTATE_UINT32(size, HVGICState),
+ VMSTATE_VBUFFER_UINT32(state,
+ HVGICState, 0, 0,
+ size),
+ VMSTATE_END_OF_LIST()
+ },
+};
+
static void hvf_vm_state_change(void *opaque, bool running, RunState state)
{
HVFVTimer *s = opaque;
if (running) {
+ if (hvf_irqchip_in_kernel()) {
+ hvf_gic_opaque_state_set();
+ }
/* Update vtimer offset on all CPUs */
hvf_state->vtimer_offset = mach_absolute_time() - s->vtimer_val;
cpu_synchronize_all_states();
} else {
+ if (hvf_irqchip_in_kernel()) {
+ hvf_gic_opaque_state_get();
+ }
/* Remember vtimer value on every pause */
s->vtimer_val = hvf_vtimer_val_raw();
}
@@ -2087,6 +2156,10 @@ int hvf_arch_init(void)
{
hvf_state->vtimer_offset = mach_absolute_time();
vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
+ if (hvf_irqchip_in_kernel()) {
+ gic.size = 0;
+ vmstate_register(NULL, 0, &vmstate_hvf_gic, &gic);
+ }
qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
hvf_arm_init_debug();
--
2.50.1 (Apple Git-155)
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
---
accel/hvf/hvf-all.c | 1 +
accel/stubs/hvf-stub.c | 1 +
hw/arm/virt.c | 8 +++++++-
include/system/hvf.h | 5 +++++
target/arm/hvf/hvf.c | 36 ++++++++++++++++++++++++++++++++++--
5 files changed, 48 insertions(+), 3 deletions(-)
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
index 929f53fd37..88f6d5accb 100644
--- a/accel/hvf/hvf-all.c
+++ b/accel/hvf/hvf-all.c
@@ -25,6 +25,7 @@
bool hvf_allowed;
bool hvf_kernel_irqchip;
+bool hvf_nested_virt;
const char *hvf_return_string(hv_return_t ret)
{
diff --git a/accel/stubs/hvf-stub.c b/accel/stubs/hvf-stub.c
index 6bd08759ba..cec1cbb056 100644
--- a/accel/stubs/hvf-stub.c
+++ b/accel/stubs/hvf-stub.c
@@ -11,3 +11,4 @@
bool hvf_allowed;
bool hvf_kernel_irqchip;
+bool hvf_nested_virt;
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 4b4f572f9f..d39e190076 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2383,7 +2383,8 @@ static void machvirt_init(MachineState *machine)
exit(1);
}
- if (vms->virt && !kvm_enabled() && !tcg_enabled() && !qtest_enabled()) {
+ if (vms->virt && !kvm_enabled() && !tcg_enabled()
+ && !hvf_enabled() && !qtest_enabled()) {
error_report("mach-virt: %s does not support providing "
"Virtualization extensions to the guest CPU",
current_accel_name());
@@ -2651,6 +2652,11 @@ static void virt_set_virt(Object *obj, bool value, Error **errp)
VirtMachineState *vms = VIRT_MACHINE(obj);
vms->virt = value;
+ /*
+ * At this point, HVF is not initialised yet.
+ * However, it needs to know if nested virt is enabled at init time.
+ */
+ hvf_nested_virt_enable(value);
}
static bool virt_get_highmem(Object *obj, Error **errp)
diff --git a/include/system/hvf.h b/include/system/hvf.h
index dc8da85979..0f0632f7ae 100644
--- a/include/system/hvf.h
+++ b/include/system/hvf.h
@@ -28,9 +28,14 @@ extern bool hvf_allowed;
#define hvf_enabled() (hvf_allowed)
extern bool hvf_kernel_irqchip;
#define hvf_irqchip_in_kernel() (hvf_kernel_irqchip)
+extern bool hvf_nested_virt;
+#define hvf_nested_virt_enabled() (hvf_nested_virt)
+#define hvf_nested_virt_enable(enable) hvf_nested_virt = enable
#else /* !CONFIG_HVF_IS_POSSIBLE */
#define hvf_enabled() 0
#define hvf_irqchip_in_kernel() 0
+#define hvf_nested_virt_enabled() 0
+#define hvf_nested_virt_enable(enable) 0
#endif /* !CONFIG_HVF_IS_POSSIBLE */
#define TYPE_HVF_ACCEL ACCEL_CLASS_NAME("hvf")
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index a220699077..fe9b63bc76 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -28,6 +28,7 @@
#include "system/memory.h"
#include "hw/core/boards.h"
#include "hw/core/irq.h"
+#include "hw/arm/virt.h"
#include "qemu/main-loop.h"
#include "system/cpus.h"
#include "arm-powerctl.h"
@@ -772,6 +773,10 @@ static bool hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
(1ULL << ARM_FEATURE_PMU) |
(1ULL << ARM_FEATURE_GENERIC_TIMER);
+ if (hvf_nested_virt_enabled()) {
+ ahcf->features |= 1ULL << ARM_FEATURE_EL2;
+ }
+
for (i = 0; i < ARRAY_SIZE(regs); i++) {
r |= hv_vcpu_config_get_feature_reg(config, regs[i].reg,
&host_isar.idregs[regs[i].index]);
@@ -879,6 +884,15 @@ void hvf_arch_vcpu_destroy(CPUState *cpu)
assert_hvf_ok(ret);
}
+static bool hvf_arm_el2_supported(void)
+{
+ bool is_nested_virt_supported;
+ hv_return_t ret = hv_vm_config_get_el2_supported(&is_nested_virt_supported);
+ assert_hvf_ok(ret);
+ return is_nested_virt_supported;
+}
+
+
hv_return_t hvf_arch_vm_create(MachineState *ms, uint32_t pa_range)
{
hv_return_t ret;
@@ -890,6 +904,18 @@ hv_return_t hvf_arch_vm_create(MachineState *ms, uint32_t pa_range)
}
chosen_ipa_bit_size = pa_range;
+ if (hvf_nested_virt_enabled()) {
+ if (!hvf_arm_el2_supported()) {
+ error_report("Nested virtualization not supported on this system.");
+ goto cleanup;
+ }
+ ret = hv_vm_config_set_el2_enabled(config, true);
+ if (ret != HV_SUCCESS) {
+ error_report("Failed to enable nested virtualization.");
+ goto cleanup;
+ }
+ }
+
ret = hv_vm_create(config);
cleanup:
@@ -1024,6 +1050,13 @@ static void hvf_psci_cpu_off(ARMCPU *arm_cpu)
assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
}
+static int hvf_psci_get_target_el(void)
+{
+ if (hvf_nested_virt_enabled()) {
+ return 2;
+ }
+ return 1;
+}
/*
* Handle a PSCI call.
*
@@ -1045,7 +1078,6 @@ static bool hvf_handle_psci_call(CPUState *cpu, int *excp_ret)
CPUState *target_cpu_state;
ARMCPU *target_cpu;
target_ulong entry;
- int target_el = 1;
int32_t ret = 0;
trace_arm_psci_call(param[0], param[1], param[2], param[3],
@@ -1099,7 +1131,7 @@ static bool hvf_handle_psci_call(CPUState *cpu, int *excp_ret)
entry = param[2];
context_id = param[3];
ret = arm_set_cpu_on(mpidr, entry, context_id,
- target_el, target_aarch64);
+ hvf_psci_get_target_el(), target_aarch64);
break;
case QEMU_PSCI_0_1_FN_CPU_OFF:
case QEMU_PSCI_0_2_FN_CPU_OFF:
--
2.50.1 (Apple Git-155)
When running with the Apple vGIC, the EL1 vtimer is handled by the platform.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
Reviewed-by: Mads Ynddal <mads@ynddal.dk>
---
target/arm/hvf/hvf.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index fe9b63bc76..f6313d9afd 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -2037,7 +2037,9 @@ static int hvf_handle_vmexit(CPUState *cpu, hv_vcpu_exit_t *exit)
switch (exit->reason) {
case HV_EXIT_REASON_EXCEPTION:
- hvf_sync_vtimer(cpu);
+ if (!hvf_irqchip_in_kernel()) {
+ hvf_sync_vtimer(cpu);
+ }
ret = hvf_handle_exception(cpu, &exit->exception);
break;
case HV_EXIT_REASON_VTIMER_ACTIVATED:
--
2.50.1 (Apple Git-155)
From Apple documentation:
> When EL2 is disabled, PMU register accesses trigger "Trapped MSR, MRS, or
> System Instruction" exceptions. When this happens, hv_vcpu_run() returns, and the
> hv_vcpu_exit_t object contains the information about this exception.
> When EL2 is enabled, the handling of PMU register accesses is determined by the PMUVer
> field of ID_AA64DFR0_EL1 register.
> If the PMUVer field value is zero or is invalid, PMU register accesses generate "Undefined"
> exceptions, which are sent to the guest.
> If the PMUVer field value is non-zero and valid, PMU register accesses are emulated by the framework.
> The ID_AA64DFR0_EL1 register can be modified via hv_vcpu_set_sys_reg API.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
target/arm/hvf/hvf.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index f6313d9afd..13836d6510 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -1239,7 +1239,7 @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint64_t *val)
ARMCPU *arm_cpu = ARM_CPU(cpu);
CPUARMState *env = &arm_cpu->env;
- if (arm_feature(env, ARM_FEATURE_PMU)) {
+ if (!hvf_nested_virt_enabled() && arm_feature(env, ARM_FEATURE_PMU)) {
switch (reg) {
case SYSREG_PMCR_EL0:
*val = env->cp15.c9_pmcr;
@@ -1531,7 +1531,7 @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
SYSREG_OP2(reg),
val);
- if (arm_feature(env, ARM_FEATURE_PMU)) {
+ if (!hvf_nested_virt_enabled() && arm_feature(env, ARM_FEATURE_PMU)) {
switch (reg) {
case SYSREG_PMCCNTR_EL0:
pmu_op_start(env);
--
2.50.1 (Apple Git-155)
When running with nested virt on, a minimal PMU is exposed by Hypervisor.framework
if a valid PMUVer field value is set in ID_AA64DFR0_EL1. That PMU isn't exposed otherwise.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
---
target/arm/hvf/hvf.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 13836d6510..0d46073fd2 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -809,6 +809,10 @@ static bool hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
SET_IDREG(&host_isar, ID_AA64PFR1,
GET_IDREG(&host_isar, ID_AA64PFR1) & ~R_ID_AA64PFR1_SME_MASK);
+ if (hvf_nested_virt_enabled()) {
+ FIELD_DP64_IDREG(&host_isar, ID_AA64DFR0, PMUVER, 0x1);
+ }
+
ahcf->isar = host_isar;
/*
--
2.50.1 (Apple Git-155)
While figuring out a better spot for it, put it in hvf_arch_vm_create().
Doing it after hv_vcpu_create() is documented as too late, and deferring
vCPU initialisation isn't enough either.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
---
target/arm/hvf/hvf.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 0d46073fd2..0e7b8f3431 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -921,6 +921,22 @@ hv_return_t hvf_arch_vm_create(MachineState *ms, uint32_t pa_range)
}
ret = hv_vm_create(config);
+ if (hvf_irqchip_in_kernel()) {
+ /*
+ * Instantiate GIC.
+ * This must be done prior to the creation of any vCPU
+ * but past hv_vm_create()
+ */
+ hv_gic_config_t cfg = hv_gic_config_create();
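+ /* The base addresses below match the virt board's GIC memory map. */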
+ hv_gic_config_set_distributor_base(cfg, 0x08000000);
+ hv_gic_config_set_redistributor_base(cfg, 0x080A0000);
+ hv_return_t err = hv_gic_create(cfg);
+ if (err != HV_SUCCESS) {
+ error_report("error creating platform VGIC");
+ goto cleanup;
+ }
+ os_release(cfg);
+ }
cleanup:
os_release(config);
--
2.50.1 (Apple Git-155)
When using the vGIC, timers are handled directly by the platform.
No such vmexits ought to happen in that case; assert that these code paths are not reached.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
Reviewed-by: Mads Ynddal <mads@ynddal.dk>
---
target/arm/hvf/hvf.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 0e7b8f3431..32662a35f0 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -1335,6 +1335,7 @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint64_t *val)
case SYSREG_ICC_SGI1R_EL1:
case SYSREG_ICC_SRE_EL1:
case SYSREG_ICC_CTLR_EL1:
+ assert(!hvf_irqchip_in_kernel());
/* Call the TCG sysreg handler. This is only safe for GICv3 regs. */
if (hvf_sysreg_read_cp(cpu, "GICv3", reg, val)) {
return 0;
@@ -1656,6 +1657,7 @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
case SYSREG_ICC_SGI0R_EL1:
case SYSREG_ICC_SGI1R_EL1:
case SYSREG_ICC_SRE_EL1:
+ assert(!hvf_irqchip_in_kernel());
/* Call the TCG sysreg handler. This is only safe for GICv3 regs. */
if (hvf_sysreg_write_cp(cpu, "GICv3", reg, val)) {
return 0;
@@ -2063,6 +2065,7 @@ static int hvf_handle_vmexit(CPUState *cpu, hv_vcpu_exit_t *exit)
ret = hvf_handle_exception(cpu, &exit->exception);
break;
case HV_EXIT_REASON_VTIMER_ACTIVATED:
+ assert(!hvf_irqchip_in_kernel());
qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
cpu->accel->vtimer_masked = true;
break;
--
2.50.1 (Apple Git-155)
When starting up the VM at EL2, more sysregs are available; sync the state of those.
In addition, sync the state of the EL1 physical timer registers when the vGIC is used,
even when running at EL1, although no OS running at EL1 is expected to use them.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
---
target/arm/hvf/hvf.c | 37 +++++++++++++++++++++++++++++++++----
target/arm/hvf/sysreg.c.inc | 35 +++++++++++++++++++++++++++++++++++
2 files changed, 68 insertions(+), 4 deletions(-)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 32662a35f0..e1031b6b43 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -413,13 +413,34 @@ static const struct hvf_reg_match hvf_fpreg_match[] = {
#define DEF_SYSREG(HVF_ID, ...) \
QEMU_BUILD_BUG_ON(HVF_ID != KVMID_TO_HVF(KVMID_AA64_SYS_REG64(__VA_ARGS__)));
+#define DEF_SYSREG_EL2(HVF_ID, ...) \
+ QEMU_BUILD_BUG_ON(HVF_ID != KVMID_TO_HVF(KVMID_AA64_SYS_REG64(__VA_ARGS__)));
+
+#define DEF_SYSREG_VGIC(HVF_ID, ...) \
+ QEMU_BUILD_BUG_ON(HVF_ID != KVMID_TO_HVF(KVMID_AA64_SYS_REG64(__VA_ARGS__)));
+
+#define DEF_SYSREG_VGIC_EL2(HVF_ID, ...) \
+ QEMU_BUILD_BUG_ON(HVF_ID != KVMID_TO_HVF(KVMID_AA64_SYS_REG64(__VA_ARGS__)));
+
#include "sysreg.c.inc"
#undef DEF_SYSREG
+#undef DEF_SYSREG_EL2
+#undef DEF_SYSREG_VGIC
+#undef DEF_SYSREG_VGIC_EL2
+
+#define DEF_SYSREG(HVF_ID, op0, op1, crn, crm, op2) {HVF_ID},
+#define DEF_SYSREG_EL2(HVF_ID, op0, op1, crn, crm, op2) {HVF_ID, .el2 = true},
+#define DEF_SYSREG_VGIC(HVF_ID, op0, op1, crn, crm, op2) {HVF_ID, .vgic = true},
+#define DEF_SYSREG_VGIC_EL2(HVF_ID, op0, op1, crn, crm, op2) {HVF_ID, true, true},
+
+struct hvf_sreg {
+ hv_sys_reg_t sreg;
+ bool vgic;
+ bool el2;
+};
-#define DEF_SYSREG(HVF_ID, op0, op1, crn, crm, op2) HVF_ID,
-
-static const hv_sys_reg_t hvf_sreg_list[] = {
+static struct hvf_sreg hvf_sreg_list[] = {
#include "sysreg.c.inc"
};
@@ -982,11 +1003,19 @@ int hvf_arch_init_vcpu(CPUState *cpu)
/* Populate cp list for all known sysregs */
for (i = 0; i < sregs_match_len; i++) {
- hv_sys_reg_t hvf_id = hvf_sreg_list[i];
+ hv_sys_reg_t hvf_id = hvf_sreg_list[i].sreg;
uint64_t kvm_id = HVF_TO_KVMID(hvf_id);
uint32_t key = kvm_to_cpreg_id(kvm_id);
const ARMCPRegInfo *ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
+ if (hvf_sreg_list[i].vgic && !hvf_irqchip_in_kernel()) {
+ continue;
+ }
+
+ if (hvf_sreg_list[i].el2 && !hvf_nested_virt_enabled()) {
+ continue;
+ }
+
if (ri) {
assert(!(ri->type & ARM_CP_NO_RAW));
arm_cpu->cpreg_indexes[sregs_cnt++] = kvm_id;
diff --git a/target/arm/hvf/sysreg.c.inc b/target/arm/hvf/sysreg.c.inc
index 067a8603fa..ce4a4fdc68 100644
--- a/target/arm/hvf/sysreg.c.inc
+++ b/target/arm/hvf/sysreg.c.inc
@@ -145,3 +145,38 @@ DEF_SYSREG(HV_SYS_REG_TPIDRRO_EL0, 3, 3, 13, 0, 3)
DEF_SYSREG(HV_SYS_REG_CNTV_CTL_EL0, 3, 3, 14, 3, 1)
DEF_SYSREG(HV_SYS_REG_CNTV_CVAL_EL0, 3, 3, 14, 3, 2)
DEF_SYSREG(HV_SYS_REG_SP_EL1, 3, 4, 4, 1, 0)
+
+DEF_SYSREG_VGIC(HV_SYS_REG_CNTP_CTL_EL0, 3, 3, 14, 2, 1)
+DEF_SYSREG_VGIC(HV_SYS_REG_CNTP_CVAL_EL0, 3, 3, 14, 2, 2)
+#ifdef SYNC_NO_RAW_REGS
+DEF_SYSREG_VGIC(HV_SYS_REG_CNTP_TVAL_EL0, 3, 3, 14, 2, 0)
+#endif
+
+DEF_SYSREG_VGIC_EL2(HV_SYS_REG_CNTHCTL_EL2, 3, 4, 14, 1, 0)
+DEF_SYSREG_VGIC_EL2(HV_SYS_REG_CNTHP_CVAL_EL2, 3, 4, 14, 2, 2)
+DEF_SYSREG_VGIC_EL2(HV_SYS_REG_CNTHP_CTL_EL2, 3, 4, 14, 2, 1)
+#ifdef SYNC_NO_RAW_REGS
+DEF_SYSREG_VGIC_EL2(HV_SYS_REG_CNTHP_TVAL_EL2, 3, 4, 14, 2, 0)
+#endif
+DEF_SYSREG_VGIC_EL2(HV_SYS_REG_CNTVOFF_EL2, 3, 4, 14, 0, 3)
+
+DEF_SYSREG_EL2(HV_SYS_REG_CPTR_EL2, 3, 4, 1, 1, 2)
+DEF_SYSREG_EL2(HV_SYS_REG_ELR_EL2, 3, 4, 4, 0, 1)
+DEF_SYSREG_EL2(HV_SYS_REG_ESR_EL2, 3, 4, 5, 2, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_FAR_EL2, 3, 4, 6, 0, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_HCR_EL2, 3, 4, 1, 1, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_HPFAR_EL2, 3, 4, 6, 0, 4)
+DEF_SYSREG_EL2(HV_SYS_REG_MAIR_EL2, 3, 4, 10, 2, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_MDCR_EL2, 3, 4, 1, 1, 1)
+DEF_SYSREG_EL2(HV_SYS_REG_SCTLR_EL2, 3, 4, 1, 0, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_SPSR_EL2, 3, 4, 4, 0, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_SP_EL2, 3, 6, 4, 1, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_TCR_EL2, 3, 4, 2, 0, 2)
+DEF_SYSREG_EL2(HV_SYS_REG_TPIDR_EL2, 3, 4, 13, 0, 2)
+DEF_SYSREG_EL2(HV_SYS_REG_TTBR0_EL2, 3, 4, 2, 0, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_TTBR1_EL2, 3, 4, 2, 0, 1)
+DEF_SYSREG_EL2(HV_SYS_REG_VBAR_EL2, 3, 4, 12, 0, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_VMPIDR_EL2, 3, 4, 0, 0, 5)
+DEF_SYSREG_EL2(HV_SYS_REG_VPIDR_EL2, 3, 4, 0, 0, 0)
+DEF_SYSREG_EL2(HV_SYS_REG_VTCR_EL2, 3, 4, 2, 1, 2)
+DEF_SYSREG_EL2(HV_SYS_REG_VTTBR_EL2, 3, 4, 2, 1, 0)
--
2.50.1 (Apple Git-155)
HVF traps accesses to CNTHCTL_EL2. For nested guests, HVF traps accesses to MDCCINT_EL1.
Pass through those accesses to the Hypervisor.framework library.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
target/arm/hvf/hvf.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index e1031b6b43..2568f24e75 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -297,6 +297,10 @@ void hvf_arm_init_debug(void)
#define SYSREG_DBGWVR15_EL1 SYSREG(2, 0, 0, 15, 6)
#define SYSREG_DBGWCR15_EL1 SYSREG(2, 0, 0, 15, 7)
+/* EL2 registers */
+#define SYSREG_CNTHCTL_EL2 SYSREG(3, 4, 14, 1, 0)
+#define SYSREG_MDCCINT_EL1 SYSREG(2, 0, 0, 2, 0)
+
#define WFX_IS_WFE (1 << 0)
#define TMR_CTL_ENABLE (1 << 0)
@@ -1338,6 +1342,14 @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint64_t *val)
case SYSREG_OSDLR_EL1:
/* Dummy register */
return 0;
+ case SYSREG_CNTHCTL_EL2:
+ if (__builtin_available(macOS 15.0, *)) {
+ assert_hvf_ok(hv_vcpu_get_sys_reg(cpu->accel->fd, HV_SYS_REG_CNTHCTL_EL2, val));
+ }
+ return 0;
+ case SYSREG_MDCCINT_EL1:
+ assert_hvf_ok(hv_vcpu_get_sys_reg(cpu->accel->fd, HV_SYS_REG_MDCCINT_EL1, val));
+ return 0;
case SYSREG_ICC_AP0R0_EL1:
case SYSREG_ICC_AP0R1_EL1:
case SYSREG_ICC_AP0R2_EL1:
@@ -1657,6 +1669,14 @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
case SYSREG_OSDLR_EL1:
/* Dummy register */
return 0;
+ case SYSREG_CNTHCTL_EL2:
+ if (__builtin_available(macOS 15.0, *)) {
+ assert_hvf_ok(hv_vcpu_set_sys_reg(cpu->accel->fd, HV_SYS_REG_CNTHCTL_EL2, val));
+ }
+ return 0;
+ case SYSREG_MDCCINT_EL1:
+ assert_hvf_ok(hv_vcpu_set_sys_reg(cpu->accel->fd, HV_SYS_REG_MDCCINT_EL1, val));
+ return 0;
case SYSREG_LORC_EL1:
/* Dummy register */
return 0;
--
2.50.1 (Apple Git-155)