This pullreq is (1) my GICv4 patches (2) most of the first third of RTH's
cleanup patchset (3) one patch fixing an smmuv3 bug...

thanks
-- PMM

The following changes since commit a74782936dc6e979ce371dabda4b1c05624ea87f:

  Merge tag 'pull-migration-20220421a' of https://gitlab.com/dagrh/qemu into staging (2022-04-21 18:48:18 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220422

for you to fetch changes up to 9792130613191c1e0c34109918c5e07b9f1429a5:

  hw/arm/smmuv3: Pass the actual perm to returned IOMMUTLBEntry in smmuv3_translate() (2022-04-22 10:19:15 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement GICv4 emulation
 * Some cleanup patches in target/arm
 * hw/arm/smmuv3: Pass the actual perm to returned IOMMUTLBEntry in smmuv3_translate()

----------------------------------------------------------------
Peter Maydell (41):
      hw/intc/arm_gicv3_its: Add missing blank line
      hw/intc/arm_gicv3: Sanity-check num-cpu property
      hw/intc/arm_gicv3: Insist that redist region capacity matches CPU count
      hw/intc/arm_gicv3: Report correct PIDR0 values for ID registers
      target/arm/cpu.c: ignore VIRQ and VFIQ if no EL2
      hw/intc/arm_gicv3_its: Factor out "is intid a valid LPI ID?"
      hw/intc/arm_gicv3_its: Implement GITS_BASER2 for GICv4
      hw/intc/arm_gicv3_its: Implement VMAPI and VMAPTI
      hw/intc/arm_gicv3_its: Implement VMAPP
      hw/intc/arm_gicv3_its: Distinguish success and error cases of CMD_CONTINUE
      hw/intc/arm_gicv3_its: Factor out "find ITE given devid, eventid"
      hw/intc/arm_gicv3_its: Factor out CTE lookup sequence
      hw/intc/arm_gicv3_its: Split out process_its_cmd() physical interrupt code
      hw/intc/arm_gicv3_its: Handle virtual interrupts in process_its_cmd()
      hw/intc/arm_gicv3: Keep pointers to every connected ITS
      hw/intc/arm_gicv3_its: Implement VMOVP
      hw/intc/arm_gicv3_its: Implement VSYNC
      hw/intc/arm_gicv3_its: Implement INV command properly
      hw/intc/arm_gicv3_its: Implement INV for virtual interrupts
      hw/intc/arm_gicv3_its: Implement VMOVI
      hw/intc/arm_gicv3_its: Implement VINVALL
      hw/intc/arm_gicv3: Implement GICv4's new redistributor frame
      hw/intc/arm_gicv3: Implement new GICv4 redistributor registers
      hw/intc/arm_gicv3_cpuif: Split "update vIRQ/vFIQ" from gicv3_cpuif_virt_update()
      hw/intc/arm_gicv3_cpuif: Support vLPIs
      hw/intc/arm_gicv3_cpuif: Don't recalculate maintenance irq unnecessarily
      hw/intc/arm_gicv3_redist: Factor out "update hpplpi for one LPI" logic
      hw/intc/arm_gicv3_redist: Factor out "update hpplpi for all LPIs" logic
      hw/intc/arm_gicv3_redist: Recalculate hppvlpi on VPENDBASER writes
      hw/intc/arm_gicv3_redist: Factor out "update bit in pending table" code
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_process_vlpi()
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_vlpi_pending()
      hw/intc/arm_gicv3_redist: Use set_pending_table_bit() in mov handling
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_mov_vlpi()
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_vinvall()
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_inv_vlpi()
      hw/intc/arm_gicv3: Update ID and feature registers for GICv4
      hw/intc/arm_gicv3: Allow 'revision' property to be set to 4
      hw/arm/virt: Use VIRT_GIC_VERSION_* enum values in create_gic()
      hw/arm/virt: Abstract out calculation of redistributor region capacity
      hw/arm/virt: Support TCG GICv4

Richard Henderson (19):
      target/arm: Update ISAR fields for ARMv8.8
      target/arm: Update SCR_EL3 bits to ARMv8.8
      target/arm: Update SCTLR bits to ARMv9.2
      target/arm: Change DisasContext.aarch64 to bool
      target/arm: Change CPUArchState.aarch64 to bool
      target/arm: Extend store_cpu_offset to take field size
      target/arm: Change DisasContext.thumb to bool
      target/arm: Change CPUArchState.thumb to bool
      target/arm: Remove fpexc32_access
      target/arm: Split out set_btype_raw
      target/arm: Split out gen_rebuild_hflags
      target/arm: Simplify GEN_SHIFT in translate.c
      target/arm: Simplify gen_sar
      target/arm: Simplify aa32 DISAS_WFI
      target/arm: Use tcg_constant in translate-m-nocp.c
      target/arm: Use tcg_constant in translate-neon.c
      target/arm: Use smin/smax for do_sat_addsub_32
      target/arm: Use tcg_constant in translate-vfp.c
      target/arm: Use tcg_constant_i32 in translate.h

Xiang Chen (1):
      hw/arm/smmuv3: Pass the actual perm to returned IOMMUTLBEntry in smmuv3_translate()

 docs/system/arm/virt.rst               |   5 +-
 hw/intc/gicv3_internal.h               | 231 ++++++++-
 include/hw/arm/virt.h                  |  19 +-
 include/hw/intc/arm_gicv3_common.h     |  13 +
 include/hw/intc/arm_gicv3_its_common.h |   1 +
 target/arm/cpu.h                       |  59 ++-
 target/arm/translate-a32.h             |  13 +-
 target/arm/translate.h                 |  17 +-
 hw/arm/smmuv3.c                        |   2 +-
 hw/arm/virt.c                          | 102 +++-
 hw/intc/arm_gicv3_common.c             |  54 +-
 hw/intc/arm_gicv3_cpuif.c              | 195 ++++--
 hw/intc/arm_gicv3_dist.c               |   7 +-
 hw/intc/arm_gicv3_its.c                | 876 +++++++++++++++++++++++++++------
 hw/intc/arm_gicv3_its_kvm.c            |   2 +
 hw/intc/arm_gicv3_kvm.c                |   5 +
 hw/intc/arm_gicv3_redist.c             | 480 +++++++++++++++---
 linux-user/arm/cpu_loop.c              |   2 +-
 target/arm/cpu.c                       |  16 +-
 target/arm/helper-a64.c                |   4 +-
 target/arm/helper.c                    |  19 +-
 target/arm/hvf/hvf.c                   |   2 +-
 target/arm/m_helper.c                  |   6 +-
 target/arm/op_helper.c                 |  13 -
 target/arm/translate-a64.c             |  50 +-
 target/arm/translate-m-nocp.c          |  12 +-
 target/arm/translate-neon.c            |  21 +-
 target/arm/translate-sve.c             |   9 +-
 target/arm/translate-vfp.c             |  76 +--
 target/arm/translate.c                 | 101 ++--
 hw/intc/trace-events                   |  18 +-
 31 files changed, 1890 insertions(+), 540 deletions(-)
hw/intc/arm_gicv3_its: Add missing blank line

In commit b6f96009acc we split do_process_its_cmd() from
process_its_cmd(), but forgot the usual blank line between function
definitions.  Add it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-2-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_its.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult do_process_its_cmd(GICv3ITSState *s, uint32_t devid,
     }
     return CMD_CONTINUE;
 }
+
 static ItsCmdResult process_its_cmd(GICv3ITSState *s, const uint64_t *cmdpkt,
                                     ItsCmdType cmd)
 {
-- 
2.25.1
hw/intc/arm_gicv3: Sanity-check num-cpu property

In the GICv3 code we implicitly rely on there being at least one CPU
and thus at least one redistributor and CPU interface.  Sanity-check
that the property the board code sets is not zero.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-3-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_common.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_common.c
+++ b/hw/intc/arm_gicv3_common.c
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp)
                    s->num_irq, GIC_INTERNAL);
         return;
     }
+    if (s->num_cpu == 0) {
+        error_setg(errp, "num-cpu must be at least 1");
+        return;
+    }

     /* ITLinesNumber is represented as (N / 32) - 1, so this is an
      * implementation imposed restriction, not an architectural one,
-- 
2.25.1
hw/intc/arm_gicv3: Insist that redist region capacity matches CPU count

Boards using the GICv3 need to configure it with both the total
number of CPUs and also the sizes of all the memory regions which
contain redistributors (one redistributor per CPU).  At the moment
the GICv3 checks that the number of CPUs specified is not too many to
fit in the defined redistributor regions, but in fact the code
assumes that the two match exactly.  For instance when we set the
GICR_TYPER.Last bit on the final redistributor in each region, we
assume that we don't need to consider the possibility of a region
being only half full of redistributors or even completely empty.  We
also assume in gicv3_redist_read() and gicv3_redist_write() that we
can calculate the CPU index from the offset within the MemoryRegion
and that this will always be in range.

Fortunately all the board code sets the redistributor region sizes to
exactly match the CPU count, so this isn't a visible bug.  We could
in theory make the GIC code handle non-full redistributor regions, or
have it automatically reduce the provided region sizes to match the
CPU count, but the simplest thing is just to strengthen the error
check and insist that the CPU count and redistributor region size
settings match exactly, since all the board code does that anyway.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-4-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_common.c
+++ b/hw/intc/arm_gicv3_common.c
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp)
     for (i = 0; i < s->nb_redist_regions; i++) {
         rdist_capacity += s->redist_region_count[i];
     }
-    if (rdist_capacity < s->num_cpu) {
+    if (rdist_capacity != s->num_cpu) {
         error_setg(errp, "Capacity of the redist regions(%d) "
-                   "is less than number of vcpus(%d)",
+                   "does not match the number of vcpus(%d)",
                    rdist_capacity, s->num_cpu);
         return;
     }
-- 
2.25.1
hw/intc/arm_gicv3: Report correct PIDR0 values for ID registers

We use the common function gicv3_idreg() to supply the CoreSight ID
register values for the GICv3 for the copies of these ID registers in
the distributor, redistributor and ITS register frames.  This isn't
quite correct, because while most of the register values are the
same, the PIDR0 value should vary to indicate which of these three
frames it is.  (You can see this and also the correct values of these
PIDR0 registers by looking at the GIC-600 or GIC-700 TRMs, for
example.)

Make gicv3_idreg() take an extra argument for the PIDR0 value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-5-peter.maydell@linaro.org
---
 hw/intc/gicv3_internal.h   | 15 +++++++++++++--
 hw/intc/arm_gicv3_dist.c   |  2 +-
 hw/intc/arm_gicv3_its.c    |  2 +-
 hw/intc/arm_gicv3_redist.c |  2 +-
 4 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gicv3_internal.h
+++ b/hw/intc/gicv3_internal.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t gicv3_iidr(void)
     return 0x43b;
 }

-static inline uint32_t gicv3_idreg(int regoffset)
+/* CoreSight PIDR0 values for ARM GICv3 implementations */
+#define GICV3_PIDR0_DIST 0x92
+#define GICV3_PIDR0_REDIST 0x93
+#define GICV3_PIDR0_ITS 0x94
+
+static inline uint32_t gicv3_idreg(int regoffset, uint8_t pidr0)
 {
     /* Return the value of the CoreSight ID register at the specified
      * offset from the first ID register (as found in the distributor
@@ -XXX,XX +XXX,XX @@ static inline uint32_t gicv3_idreg(int regoffset)
     static const uint8_t gicd_ids[] = {
         0x44, 0x00, 0x00, 0x00, 0x92, 0xB4, 0x3B, 0x00, 0x0D, 0xF0, 0x05, 0xB1
     };
-    return gicd_ids[regoffset / 4];
+
+    regoffset /= 4;
+
+    if (regoffset == 4) {
+        return pidr0;
+    }
+    return gicd_ids[regoffset];
 }

 /**
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_dist.c
+++ b/hw/intc/arm_gicv3_dist.c
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
     }
     case GICD_IDREGS ... GICD_IDREGS + 0x2f:
         /* ID registers */
-        *data = gicv3_idreg(offset - GICD_IDREGS);
+        *data = gicv3_idreg(offset - GICD_IDREGS, GICV3_PIDR0_DIST);
         return true;
     case GICD_SGIR:
         /* WO registers, return unknown value */
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ static bool its_readl(GICv3ITSState *s, hwaddr offset,
         break;
     case GITS_IDREGS ... GITS_IDREGS + 0x2f:
         /* ID registers */
-        *data = gicv3_idreg(offset - GITS_IDREGS);
+        *data = gicv3_idreg(offset - GITS_IDREGS, GICV3_PIDR0_ITS);
         break;
     case GITS_TYPER:
         *data = extract64(s->typer, 0, 32);
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_redist.c
+++ b/hw/intc/arm_gicv3_redist.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_readl(GICv3CPUState *cs, hwaddr offset,
         *data = cs->gicr_nsacr;
         return MEMTX_OK;
     case GICR_IDREGS ... GICR_IDREGS + 0x2f:
-        *data = gicv3_idreg(offset - GICR_IDREGS);
+        *data = gicv3_idreg(offset - GICR_IDREGS, GICV3_PIDR0_REDIST);
         return MEMTX_OK;
     default:
         return MEMTX_ERROR;
-- 
2.25.1
target/arm/cpu.c: ignore VIRQ and VFIQ if no EL2

In a GICv3, it is impossible for the GIC to deliver a VIRQ or VFIQ to
the CPU unless the CPU has EL2, because VIRQ and VFIQ are only
configurable via EL2-only system registers.  Moreover, in our
implementation we were only calculating and updating the state of the
VIRQ and VFIQ lines in gicv3_cpuif_virt_irq_fiq_update() when those
EL2 system registers changed.  We were therefore able to assert in
arm_cpu_set_irq() that we didn't see a VIRQ or VFIQ line update if
EL2 wasn't present.

This assumption no longer holds with GICv4:
 * even if the CPU does not have EL2 the guest is able to cause the
   GIC to deliver a virtual LPI by programming the ITS (which is a
   silly thing for it to do, but possible)
 * because we now need to recalculate the state of the VIRQ and VFIQ
   lines in more cases than just "some EL2 GIC sysreg was written",
   we will see calls to arm_cpu_set_irq() for "VIRQ is 0, VFIQ is 0"
   even if the guest is not using the virtual LPI parts of the ITS

Remove the assertions, and instead simply ignore the state of the
VIRQ and VFIQ lines if the CPU does not have EL2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-6-peter.maydell@linaro.org
---
 target/arm/cpu.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
         [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
     };

+    if (!arm_feature(env, ARM_FEATURE_EL2) &&
+        (irq == ARM_CPU_VIRQ || irq == ARM_CPU_VFIQ)) {
+        /*
+         * The GIC might tell us about VIRQ and VFIQ state, but if we don't
+         * have EL2 support we don't care. (Unless the guest is doing something
+         * silly this will only be calls saying "level is still 0".)
+         */
+        return;
+    }
+
     if (level) {
         env->irq_line_state |= mask[irq];
     } else {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)

     switch (irq) {
     case ARM_CPU_VIRQ:
-        assert(arm_feature(env, ARM_FEATURE_EL2));
         arm_cpu_update_virq(cpu);
         break;
     case ARM_CPU_VFIQ:
-        assert(arm_feature(env, ARM_FEATURE_EL2));
         arm_cpu_update_vfiq(cpu);
         break;
     case ARM_CPU_IRQ:
-- 
2.25.1
hw/intc/arm_gicv3_its: Factor out "is intid a valid LPI ID?"

In process_mapti() we check interrupt IDs to see whether they are
in the valid LPI range.  Factor this out into its own utility
function, as we're going to want it elsewhere too for GICv4.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-7-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_its.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ typedef enum ItsCmdResult {
     CMD_CONTINUE = 1,
 } ItsCmdResult;

+static inline bool intid_in_lpi_range(uint32_t id)
+{
+    return id >= GICV3_LPI_INTID_START &&
+        id < (1 << (GICD_TYPER_IDBITS + 1));
+}
+
 static uint64_t baser_base_addr(uint64_t value, uint32_t page_sz)
 {
     uint64_t result = 0;
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_mapti(GICv3ITSState *s, const uint64_t *cmdpkt,
     uint32_t devid, eventid;
     uint32_t pIntid = 0;
     uint64_t num_eventids;
-    uint32_t num_intids;
     uint16_t icid = 0;
     DTEntry dte;
     ITEntry ite;
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_mapti(GICv3ITSState *s, const uint64_t *cmdpkt,
         return CMD_STALL;
     }
     num_eventids = 1ULL << (dte.size + 1);
-    num_intids = 1ULL << (GICD_TYPER_IDBITS + 1);

     if (icid >= s->ct.num_entries) {
         qemu_log_mask(LOG_GUEST_ERROR,
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_mapti(GICv3ITSState *s, const uint64_t *cmdpkt,
         return CMD_CONTINUE;
     }

-    if (pIntid < GICV3_LPI_INTID_START || pIntid >= num_intids) {
+    if (!intid_in_lpi_range(pIntid)) {
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: invalid interrupt ID 0x%x\n", __func__, pIntid);
         return CMD_CONTINUE;
-- 
2.25.1
hw/intc/arm_gicv3_its: Implement GITS_BASER2 for GICv4

The GICv4 defines a new in-guest-memory table for the ITS: this is
the vPE table.  Implement the new GITS_BASER2 register which the
guest uses to tell the ITS where the vPE table is located, including
the decode of the register fields into the TableDesc structure which
we do for the GITS_BASER<n> when the guest enables the ITS.

We guard provision of the new register with the its_feature_virtual()
function, which does a check of the GITS_TYPER.Virtual bit which
indicates presence of ITS support for virtual LPIs.  Since this bit
is currently always zero, GICv4-specific features will not be
accessible to the guest yet.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-8-peter.maydell@linaro.org
---
 hw/intc/gicv3_internal.h               | 16 ++++++++++++++++
 include/hw/intc/arm_gicv3_its_common.h |  1 +
 hw/intc/arm_gicv3_its.c                | 25 +++++++++++++++++++++++++
 3 files changed, 42 insertions(+)

diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gicv3_internal.h
+++ b/hw/intc/gicv3_internal.h
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_CTLR, ENABLED, 0, 1)
 FIELD(GITS_CTLR, QUIESCENT, 31, 1)

 FIELD(GITS_TYPER, PHYSICAL, 0, 1)
+FIELD(GITS_TYPER, VIRTUAL, 1, 1)
 FIELD(GITS_TYPER, ITT_ENTRY_SIZE, 4, 4)
 FIELD(GITS_TYPER, IDBITS, 8, 5)
 FIELD(GITS_TYPER, DEVBITS, 13, 5)
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1)
 #define GITS_BASER_PAGESIZE_64K 2

 #define GITS_BASER_TYPE_DEVICE 1ULL
+#define GITS_BASER_TYPE_VPE 2ULL
 #define GITS_BASER_TYPE_COLLECTION 4ULL

 #define GITS_PAGE_SIZE_4K 0x1000
@@ -XXX,XX +XXX,XX @@ FIELD(DTE, ITTADDR, 6, 44)
 FIELD(CTE, VALID, 0, 1)
 FIELD(CTE, RDBASE, 1, RDBASE_PROCNUM_LENGTH)

+/*
+ * 8 bytes VPE table entry size:
+ * Valid = 1 bit, VPTsize = 5 bits, VPTaddr = 36 bits, RDbase = 16 bits
+ *
+ * Field sizes for Valid and size are mandated; field sizes for RDbase
+ * and VPT_addr are IMPDEF.
+ */
+#define GITS_VPE_SIZE 0x8ULL
+
+FIELD(VTE, VALID, 0, 1)
+FIELD(VTE, VPTSIZE, 1, 5)
+FIELD(VTE, VPTADDR, 6, 36)
+FIELD(VTE, RDBASE, 42, RDBASE_PROCNUM_LENGTH)
+
 /* Special interrupt IDs */
 #define INTID_SECURE 1020
 #define INTID_NONSECURE 1021
diff --git a/include/hw/intc/arm_gicv3_its_common.h b/include/hw/intc/arm_gicv3_its_common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/arm_gicv3_its_common.h
+++ b/include/hw/intc/arm_gicv3_its_common.h
@@ -XXX,XX +XXX,XX @@ struct GICv3ITSState {

     TableDesc dt;
     TableDesc ct;
+    TableDesc vpet;
     CmdQDesc cq;

     Error *migration_blocker;
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ typedef enum ItsCmdResult {
     CMD_CONTINUE = 1,
 } ItsCmdResult;

+/* True if the ITS supports the GICv4 virtual LPI feature */
+static bool its_feature_virtual(GICv3ITSState *s)
+{
+    return s->typer & R_GITS_TYPER_VIRTUAL_MASK;
+}
+
 static inline bool intid_in_lpi_range(uint32_t id)
 {
     return id >= GICV3_LPI_INTID_START &&
@@ -XXX,XX +XXX,XX @@ static void extract_table_params(GICv3ITSState *s)
             idbits = 16;
         }
         break;
+    case GITS_BASER_TYPE_VPE:
+        td = &s->vpet;
+        /*
+         * For QEMU vPEIDs are always 16 bits. (GICv4.1 allows an
+         * implementation to implement fewer bits and report this
+         * via GICD_TYPER2.)
+         */
+        idbits = 16;
+        break;
     default:
         /*
          * GITS_BASER<n>.TYPE is read-only, so GITS_BASER_RO_MASK
@@ -XXX,XX +XXX,XX @@ static void gicv3_its_reset(DeviceState *dev)
     /*
      * setting GITS_BASER0.Type = 0b001 (Device)
      *         GITS_BASER1.Type = 0b100 (Collection Table)
+     *         GITS_BASER2.Type = 0b010 (vPE) for GICv4 and later
      *         GITS_BASER<n>.Type,where n = 3 to 7 are 0b00 (Unimplemented)
      *         GITS_BASER<0,1>.Page_Size = 64KB
      * and default translation table entry size to 16 bytes
@@ -XXX,XX +XXX,XX @@ static void gicv3_its_reset(DeviceState *dev)
                               GITS_BASER_PAGESIZE_64K);
     s->baser[1] = FIELD_DP64(s->baser[1], GITS_BASER, ENTRYSIZE,
                              GITS_CTE_SIZE - 1);
+
+    if (its_feature_virtual(s)) {
+        s->baser[2] = FIELD_DP64(s->baser[2], GITS_BASER, TYPE,
+                                 GITS_BASER_TYPE_VPE);
+        s->baser[2] = FIELD_DP64(s->baser[2], GITS_BASER, PAGESIZE,
+                                 GITS_BASER_PAGESIZE_64K);
+        s->baser[2] = FIELD_DP64(s->baser[2], GITS_BASER, ENTRYSIZE,
+                                 GITS_VPE_SIZE - 1);
+    }
 }

 static void gicv3_its_post_load(GICv3ITSState *s)
-- 
2.25.1
hw/intc/arm_gicv3_its: Implement VMAPI and VMAPTI

Implement the GICv4 VMAPI and VMAPTI commands.  These write
an interrupt translation table entry that maps (DeviceID,EventID)
to (vPEID,vINTID,doorbell).  The only difference between VMAPI
and VMAPTI is that VMAPI assumes vINTID == EventID rather than
both being specified in the command packet.

(This code won't be reachable until we allow the GIC version to be
set to 4.  Support for reading this new virtual-interrupt DTE and
handling it correctly will be implemented in a later commit.)
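
(For orientation, the packet being decoded here is the usual
four-64-bit-word ITS command format.  A rough, illustrative sketch of
how a guest might encode a VMAPTI using the field positions defined in
this patch -- not code taken from QEMU or from any real guest, and the
variable names are placeholders -- would be:)

    uint64_t cmd[4] = { 0, 0, 0, 0 };
    cmd[0] = GITS_CMD_VMAPTI | ((uint64_t)devid << 32);  /* DeviceID in bits [63:32] */
    cmd[1] = eventid | ((uint64_t)vpeid << 32);          /* EventID, vPEID */
    cmd[2] = vintid | ((uint64_t)doorbell << 32);        /* vINTID, doorbell LPI */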

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-9-peter.maydell@linaro.org
---
 hw/intc/gicv3_internal.h |  9 ++++
 hw/intc/arm_gicv3_its.c  | 91 ++++++++++++++++++++++++++++++++++++++++
 hw/intc/trace-events     |  2 +
 3 files changed, 102 insertions(+)

diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gicv3_internal.h
+++ b/hw/intc/gicv3_internal.h
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1)
 #define GITS_CMD_INVALL 0x0D
 #define GITS_CMD_MOVALL 0x0E
 #define GITS_CMD_DISCARD 0x0F
+#define GITS_CMD_VMAPTI 0x2A
+#define GITS_CMD_VMAPI 0x2B

 /* MAPC command fields */
 #define ICID_LENGTH 16
@@ -XXX,XX +XXX,XX @@ FIELD(MOVI_0, DEVICEID, 32, 32)
 FIELD(MOVI_1, EVENTID, 0, 32)
 FIELD(MOVI_2, ICID, 0, 16)

+/* VMAPI, VMAPTI command fields */
+FIELD(VMAPTI_0, DEVICEID, 32, 32)
+FIELD(VMAPTI_1, EVENTID, 0, 32)
+FIELD(VMAPTI_1, VPEID, 32, 16)
+FIELD(VMAPTI_2, VINTID, 0, 32) /* VMAPTI only */
+FIELD(VMAPTI_2, DOORBELL, 32, 32)
+
 /*
  * 12 bytes Interrupt translation Table Entry size
  * as per Table 5.3 in GICv3 spec
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ static inline bool intid_in_lpi_range(uint32_t id)
         id < (1 << (GICD_TYPER_IDBITS + 1));
 }

+static inline bool valid_doorbell(uint32_t id)
+{
+    /* Doorbell fields may be an LPI, or 1023 to mean "no doorbell" */
+    return id == INTID_SPURIOUS || intid_in_lpi_range(id);
+}
+
 static uint64_t baser_base_addr(uint64_t value, uint32_t page_sz)
 {
     uint64_t result = 0;
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_mapti(GICv3ITSState *s, const uint64_t *cmdpkt,
     return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE : CMD_STALL;
 }

+static ItsCmdResult process_vmapti(GICv3ITSState *s, const uint64_t *cmdpkt,
+                                   bool ignore_vintid)
+{
+    uint32_t devid, eventid, vintid, doorbell, vpeid;
+    uint32_t num_eventids;
+    DTEntry dte;
+    ITEntry ite;
+
+    if (!its_feature_virtual(s)) {
+        return CMD_CONTINUE;
+    }
+
+    devid = FIELD_EX64(cmdpkt[0], VMAPTI_0, DEVICEID);
+    eventid = FIELD_EX64(cmdpkt[1], VMAPTI_1, EVENTID);
+    vpeid = FIELD_EX64(cmdpkt[1], VMAPTI_1, VPEID);
+    doorbell = FIELD_EX64(cmdpkt[2], VMAPTI_2, DOORBELL);
+    if (ignore_vintid) {
+        vintid = eventid;
+        trace_gicv3_its_cmd_vmapi(devid, eventid, vpeid, doorbell);
+    } else {
+        vintid = FIELD_EX64(cmdpkt[2], VMAPTI_2, VINTID);
+        trace_gicv3_its_cmd_vmapti(devid, eventid, vpeid, vintid, doorbell);
+    }
+
+    if (devid >= s->dt.num_entries) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: invalid DeviceID 0x%x (must be less than 0x%x)\n",
+                      __func__, devid, s->dt.num_entries);
+        return CMD_CONTINUE;
+    }
+
+    if (get_dte(s, devid, &dte) != MEMTX_OK) {
+        return CMD_STALL;
+    }
+
+    if (!dte.valid) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: no entry in device table for DeviceID 0x%x\n",
+                      __func__, devid);
+        return CMD_CONTINUE;
+    }
+
+    num_eventids = 1ULL << (dte.size + 1);
+
+    if (eventid >= num_eventids) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: EventID 0x%x too large for DeviceID 0x%x "
+                      "(must be less than 0x%x)\n",
+                      __func__, eventid, devid, num_eventids);
+        return CMD_CONTINUE;
+    }
+    if (!intid_in_lpi_range(vintid)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: VIntID 0x%x not a valid LPI\n",
+                      __func__, vintid);
+        return CMD_CONTINUE;
+    }
+    if (!valid_doorbell(doorbell)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Doorbell %d not 1023 and not a valid LPI\n",
+                      __func__, doorbell);
+        return CMD_CONTINUE;
+    }
+    if (vpeid >= s->vpet.num_entries) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: VPEID 0x%x out of range (must be less than 0x%x)\n",
+                      __func__, vpeid, s->vpet.num_entries);
+        return CMD_CONTINUE;
+    }
+    /* add ite entry to interrupt translation table */
+    ite.valid = true;
+    ite.inttype = ITE_INTTYPE_VIRTUAL;
+    ite.intid = vintid;
+    ite.icid = 0;
+    ite.doorbell = doorbell;
+    ite.vpeid = vpeid;
+    return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE : CMD_STALL;
+}
+
 /*
  * Update the Collection Table entry for @icid to @cte. Returns true
  * on success, false if there was a memory access error.
@@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s)
         case GITS_CMD_MOVALL:
             result = process_movall(s, cmdpkt);
             break;
+        case GITS_CMD_VMAPTI:
+            result = process_vmapti(s, cmdpkt, false);
+            break;
+        case GITS_CMD_VMAPI:
+            result = process_vmapti(s, cmdpkt, true);
+            break;
         default:
             trace_gicv3_its_cmd_unknown(cmd);
             break;
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -XXX,XX +XXX,XX @@ gicv3_its_cmd_mapti(uint32_t devid, uint32_t eventid, uint32_t icid, uint32_t in
 gicv3_its_cmd_inv(void) "GICv3 ITS: command INV or INVALL"
 gicv3_its_cmd_movall(uint64_t rd1, uint64_t rd2) "GICv3 ITS: command MOVALL RDbase1 0x%" PRIx64 " RDbase2 0x%" PRIx64
 gicv3_its_cmd_movi(uint32_t devid, uint32_t eventid, uint32_t icid) "GICv3 ITS: command MOVI DeviceID 0x%x EventID 0x%x ICID 0x%x"
+gicv3_its_cmd_vmapi(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t doorbell) "GICv3 ITS: command VMAPI DeviceID 0x%x EventID 0x%x vPEID 0x%x Dbell_pINTID 0x%x"
+gicv3_its_cmd_vmapti(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t vintid, uint32_t doorbell) "GICv3 ITS: command VMAPI DeviceID 0x%x EventID 0x%x vPEID 0x%x vINTID 0x%x Dbell_pINTID 0x%x"
 gicv3_its_cmd_unknown(unsigned cmd) "GICv3 ITS: unknown command 0x%x"
 gicv3_its_cte_read(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table read for ICID 0x%x: valid %d RDBase 0x%x"
 gicv3_its_cte_write(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table write for ICID 0x%x: valid %d RDBase 0x%x"
-- 
2.25.1
hw/intc/arm_gicv3_its: Implement VMAPP

Implement the GICv4 VMAPP command, which writes an entry to the vPE
table.

For GICv4.1 this command has extra fields in the command packet
and additional behaviour.  We define the 4.1-only fields with the
FIELD macro, but only implement the GICv4.0 version of the command.
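
(As with the previous commit, here is a hedged sketch of the GICv4.0
encoding a guest would use for this command, derived only from the
field definitions added in this patch; it is illustrative rather than
real guest code, and the variable names are placeholders:)

    uint64_t cmd[4] = { 0, 0, 0, 0 };
    cmd[0] = GITS_CMD_VMAPP;
    cmd[1] = (uint64_t)vpeid << 32;                   /* vPEID */
    cmd[2] = ((uint64_t)rdbase << 16) | (1ULL << 63); /* RDbase, V = 1 */
    cmd[3] = vpt_size | (vpt_addr_51_16 << 16);       /* VPT_size, VPT_addr */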

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-10-peter.maydell@linaro.org
---
 hw/intc/gicv3_internal.h | 12 ++++
 hw/intc/arm_gicv3_its.c  | 88 ++++++++++++++++++++++++++++
 hw/intc/trace-events     |  2 +
 3 files changed, 102 insertions(+)

diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gicv3_internal.h
+++ b/hw/intc/gicv3_internal.h
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1)
 #define GITS_CMD_INVALL 0x0D
 #define GITS_CMD_MOVALL 0x0E
 #define GITS_CMD_DISCARD 0x0F
+#define GITS_CMD_VMAPP 0x29
 #define GITS_CMD_VMAPTI 0x2A
 #define GITS_CMD_VMAPI 0x2B

@@ -XXX,XX +XXX,XX @@ FIELD(VMAPTI_1, VPEID, 32, 16)
 FIELD(VMAPTI_2, VINTID, 0, 32) /* VMAPTI only */
 FIELD(VMAPTI_2, DOORBELL, 32, 32)

+/* VMAPP command fields */
+FIELD(VMAPP_0, ALLOC, 8, 1) /* GICv4.1 only */
+FIELD(VMAPP_0, PTZ, 9, 1) /* GICv4.1 only */
+FIELD(VMAPP_0, VCONFADDR, 16, 36) /* GICv4.1 only */
+FIELD(VMAPP_1, DEFAULT_DOORBELL, 0, 32) /* GICv4.1 only */
+FIELD(VMAPP_1, VPEID, 32, 16)
+FIELD(VMAPP_2, RDBASE, 16, 36)
+FIELD(VMAPP_2, V, 63, 1)
+FIELD(VMAPP_3, VPTSIZE, 0, 8) /* For GICv4.0, bits [7:6] are RES0 */
+FIELD(VMAPP_3, VPTADDR, 16, 36)
+
 /*
  * 12 bytes Interrupt translation Table Entry size
  * as per Table 5.3 in GICv3 spec
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ typedef struct ITEntry {
     uint32_t vpeid;
 } ITEntry;

+typedef struct VTEntry {
+    bool valid;
+    unsigned vptsize;
+    uint32_t rdbase;
+    uint64_t vptaddr;
+} VTEntry;

 /*
  * The ITS spec permits a range of CONSTRAINED UNPREDICTABLE options
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_movi(GICv3ITSState *s, const uint64_t *cmdpkt)
     return update_ite(s, eventid, &dte, &old_ite) ? CMD_CONTINUE : CMD_STALL;
 }

+/*
+ * Update the vPE Table entry at index @vpeid with the entry @vte.
+ * Returns true on success, false if there was a memory access error.
+ */
+static bool update_vte(GICv3ITSState *s, uint32_t vpeid, const VTEntry *vte)
+{
+    AddressSpace *as = &s->gicv3->dma_as;
+    uint64_t entry_addr;
+    uint64_t vteval = 0;
+    MemTxResult res = MEMTX_OK;
+
+    trace_gicv3_its_vte_write(vpeid, vte->valid, vte->vptsize, vte->vptaddr,
+                              vte->rdbase);
+
+    if (vte->valid) {
+        vteval = FIELD_DP64(vteval, VTE, VALID, 1);
+        vteval = FIELD_DP64(vteval, VTE, VPTSIZE, vte->vptsize);
+        vteval = FIELD_DP64(vteval, VTE, VPTADDR, vte->vptaddr);
+        vteval = FIELD_DP64(vteval, VTE, RDBASE, vte->rdbase);
+    }
+
+    entry_addr = table_entry_addr(s, &s->vpet, vpeid, &res);
+    if (res != MEMTX_OK) {
+        return false;
+    }
+    if (entry_addr == -1) {
+        /* No L2 table for this index: discard write and continue */
+        return true;
+    }
+    address_space_stq_le(as, entry_addr, vteval, MEMTXATTRS_UNSPECIFIED, &res);
+    return res == MEMTX_OK;
+}
+
+static ItsCmdResult process_vmapp(GICv3ITSState *s, const uint64_t *cmdpkt)
+{
+    VTEntry vte;
+    uint32_t vpeid;
+
+    if (!its_feature_virtual(s)) {
+        return CMD_CONTINUE;
+    }
+
+    vpeid = FIELD_EX64(cmdpkt[1], VMAPP_1, VPEID);
+    vte.rdbase = FIELD_EX64(cmdpkt[2], VMAPP_2, RDBASE);
+    vte.valid = FIELD_EX64(cmdpkt[2], VMAPP_2, V);
+    vte.vptsize = FIELD_EX64(cmdpkt[3], VMAPP_3, VPTSIZE);
+    vte.vptaddr = FIELD_EX64(cmdpkt[3], VMAPP_3, VPTADDR);
+
+    trace_gicv3_its_cmd_vmapp(vpeid, vte.rdbase, vte.valid,
+                              vte.vptaddr, vte.vptsize);
+
+    /*
+     * For GICv4.0 the VPT_size field is only 5 bits, whereas we
+     * define our field macros to include the full GICv4.1 8 bits.
+     * The range check on VPT_size will catch the cases where
+     * the guest set the RES0-in-GICv4.0 bits [7:6].
+     */
+    if (vte.vptsize > FIELD_EX64(s->typer, GITS_TYPER, IDBITS)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: invalid VPT_size 0x%x\n", __func__, vte.vptsize);
+        return CMD_CONTINUE;
+    }
+
+    if (vte.valid && vte.rdbase >= s->gicv3->num_cpu) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: invalid rdbase 0x%x\n", __func__, vte.rdbase);
+        return CMD_CONTINUE;
+    }
+
+    if (vpeid >= s->vpet.num_entries) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: VPEID 0x%x out of range (must be less than 0x%x)\n",
+                      __func__, vpeid, s->vpet.num_entries);
+        return CMD_CONTINUE;
+    }
+
+    return update_vte(s, vpeid, &vte) ? CMD_CONTINUE : CMD_STALL;
+}
+
 /*
  * Current implementation blocks until all
  * commands are processed
@@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s)
         case GITS_CMD_VMAPI:
             result = process_vmapti(s, cmdpkt, true);
             break;
+        case GITS_CMD_VMAPP:
+            result = process_vmapp(s, cmdpkt);
+            break;
         default:
             trace_gicv3_its_cmd_unknown(cmd);
             break;
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -XXX,XX +XXX,XX @@ gicv3_its_cmd_movall(uint64_t rd1, uint64_t rd2) "GICv3 ITS: command MOVALL RDba
 gicv3_its_cmd_movi(uint32_t devid, uint32_t eventid, uint32_t icid) "GICv3 ITS: command MOVI DeviceID 0x%x EventID 0x%x ICID 0x%x"
 gicv3_its_cmd_vmapi(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t doorbell) "GICv3 ITS: command VMAPI DeviceID 0x%x EventID 0x%x vPEID 0x%x Dbell_pINTID 0x%x"
 gicv3_its_cmd_vmapti(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t vintid, uint32_t doorbell) "GICv3 ITS: command VMAPI DeviceID 0x%x EventID 0x%x vPEID 0x%x vINTID 0x%x Dbell_pINTID 0x%x"
+gicv3_its_cmd_vmapp(uint32_t vpeid, uint64_t rdbase, int valid, uint64_t vptaddr, uint32_t vptsize) "GICv3 ITS: command VMAPP vPEID 0x%x RDbase 0x%" PRIx64 " V %d VPT_addr 0x%" PRIx64 " VPT_size 0x%x"
 gicv3_its_cmd_unknown(unsigned cmd) "GICv3 ITS: unknown command 0x%x"
 gicv3_its_cte_read(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table read for ICID 0x%x: valid %d RDBase 0x%x"
 gicv3_its_cte_write(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table write for ICID 0x%x: valid %d RDBase 0x%x"
@@ -XXX,XX +XXX,XX @@ gicv3_its_ite_write(uint64_t ittaddr, uint32_t eventid, int valid, int inttype,
 gicv3_its_dte_read(uint32_t devid, int valid, uint32_t size, uint64_t ittaddr) "GICv3 ITS: Device Table read for DeviceID 0x%x: valid %d size 0x%x ITTaddr 0x%" PRIx64
 gicv3_its_dte_write(uint32_t devid, int valid, uint32_t size, uint64_t ittaddr) "GICv3 ITS: Device Table write for DeviceID 0x%x: valid %d size 0x%x ITTaddr 0x%" PRIx64
 gicv3_its_dte_read_fault(uint32_t devid) "GICv3 ITS: Device Table read for DeviceID 0x%x: faulted"
+gicv3_its_vte_write(uint32_t vpeid, int valid, uint32_t vptsize, uint64_t vptaddr, uint32_t rdbase) "GICv3 ITS: vPE Table write for vPEID 0x%x: valid %d VPTsize 0x%x VPTaddr 0x%" PRIx64 " RDbase 0x%x"

 # armv7m_nvic.c
 nvic_recompute_state(int vectpending, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d vectpending_prio %d exception_prio %d"
-- 
2.25.1
hw/intc/arm_gicv3_its: Distinguish success and error cases of CMD_CONTINUE

In the ItsCmdResult enum, we currently distinguish only CMD_STALL
(failure, stall processing of the command queue) and CMD_CONTINUE
(keep processing the queue), and we use the latter both for "there
was a parameter error, go on to the next command" and "the command
succeeded, go on to the next command".  Sometimes we would like to
distinguish those two cases, so add CMD_CONTINUE_OK to the enum to
represent the success situation, and use it in the relevant places.
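
(The point of the three-way split is that a caller can now propagate
errors without conflating them with success; roughly, and purely as an
illustrative sketch where "process_foo" is a placeholder name:)

    ItsCmdResult cmdres = process_foo(s, cmdpkt);
    if (cmdres != CMD_CONTINUE_OK) {
        return cmdres;   /* pass on CMD_STALL or CMD_CONTINUE unchanged */
    }
    /* only reached when the command actually succeeded */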

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-11-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_its.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ typedef struct VTEntry {
  * and continue processing.
  * The process_* functions which handle individual ITS commands all
  * return an ItsCmdResult which tells process_cmdq() whether it should
- * stall or keep going.
+ * stall, keep going because of an error, or keep going because the
+ * command was a success.
  */
 typedef enum ItsCmdResult {
     CMD_STALL = 0,
     CMD_CONTINUE = 1,
+    CMD_CONTINUE_OK = 2,
 } ItsCmdResult;

 /* True if the ITS supports the GICv4 virtual LPI feature */
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult do_process_its_cmd(GICv3ITSState *s, uint32_t devid,
         ITEntry ite = {};
         /* remove mapping from interrupt translation table */
         ite.valid = false;
-        return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE : CMD_STALL;
+        return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE_OK : CMD_STALL;
     }
-    return CMD_CONTINUE;
+    return CMD_CONTINUE_OK;
 }

 static ItsCmdResult process_its_cmd(GICv3ITSState *s, const uint64_t *cmdpkt,
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_mapti(GICv3ITSState *s, const uint64_t *cmdpkt,
     ite.icid = icid;
     ite.doorbell = INTID_SPURIOUS;
     ite.vpeid = 0;
-    return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE : CMD_STALL;
+    return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE_OK : CMD_STALL;
 }

 static ItsCmdResult process_vmapti(GICv3ITSState *s, const uint64_t *cmdpkt,
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vmapti(GICv3ITSState *s, const uint64_t *cmdpkt,
     ite.icid = 0;
     ite.doorbell = doorbell;
     ite.vpeid = vpeid;
-    return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE : CMD_STALL;
+    return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE_OK : CMD_STALL;
 }

 /*
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_mapc(GICv3ITSState *s, const uint64_t *cmdpkt)
         return CMD_CONTINUE;
     }

-    return update_cte(s, icid, &cte) ? CMD_CONTINUE : CMD_STALL;
+    return update_cte(s, icid, &cte) ? CMD_CONTINUE_OK : CMD_STALL;
 }

 /*
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_mapd(GICv3ITSState *s, const uint64_t *cmdpkt)
         return CMD_CONTINUE;
     }

-    return update_dte(s, devid, &dte) ? CMD_CONTINUE : CMD_STALL;
+    return update_dte(s, devid, &dte) ? CMD_CONTINUE_OK : CMD_STALL;
 }

 static ItsCmdResult process_movall(GICv3ITSState *s, const uint64_t *cmdpkt)
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_movall(GICv3ITSState *s, const uint64_t *cmdpkt)

     if (rd1 == rd2) {
         /* Move to same target must succeed as a no-op */
-        return CMD_CONTINUE;
+        return CMD_CONTINUE_OK;
     }

     /* Move all pending LPIs from redistributor 1 to redistributor 2 */
     gicv3_redist_movall_lpis(&s->gicv3->cpu[rd1], &s->gicv3->cpu[rd2]);

-    return CMD_CONTINUE;
+    return CMD_CONTINUE_OK;
 }

 static ItsCmdResult process_movi(GICv3ITSState *s, const uint64_t *cmdpkt)
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_movi(GICv3ITSState *s, const uint64_t *cmdpkt)

     /* Update the ICID field in the interrupt translation table entry */
     old_ite.icid = new_icid;
-    return update_ite(s, eventid, &dte, &old_ite) ? CMD_CONTINUE : CMD_STALL;
+    return update_ite(s, eventid, &dte, &old_ite) ? CMD_CONTINUE_OK : CMD_STALL;
 }

 /*
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vmapp(GICv3ITSState *s, const uint64_t *cmdpkt)
         return CMD_CONTINUE;
     }

-    return update_vte(s, vpeid, &vte) ? CMD_CONTINUE : CMD_STALL;
+    return update_vte(s, vpeid, &vte) ? CMD_CONTINUE_OK : CMD_STALL;
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s)
     }

     while (wr_offset != rd_offset) {
-        ItsCmdResult result = CMD_CONTINUE;
+        ItsCmdResult result = CMD_CONTINUE_OK;
         void *hostmem;
         hwaddr buflen;
         uint64_t cmdpkt[GITS_CMDQ_ENTRY_WORDS];
@@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s)
             trace_gicv3_its_cmd_unknown(cmd);
             break;
         }
-        if (result == CMD_CONTINUE) {
+        if (result != CMD_STALL) {
+            /* CMD_CONTINUE or CMD_CONTINUE_OK */
             rd_offset++;
             rd_offset %= s->cq.num_entries;
             s->creadr = FIELD_DP64(s->creadr, GITS_CREADR, OFFSET, rd_offset);
-- 
2.25.1
hw/intc/arm_gicv3_its: Factor out "find ITE given devid, eventid"

The operation of finding an interrupt table entry given a (DeviceID,
EventID) pair is necessary in multiple different ITS commands.  The
process requires first using the DeviceID as an index into the device
table to find the DTE, and then using the EventID as an index into
the interrupt table specified by that DTE to find the ITE.  We also
need to handle all the possible error cases: indexes out of range,
table memory not readable, table entries not valid.

Factor this out into a separate lookup_ite() function which we
can then call from the places where we were previously open-coding
this sequence.  We'll also need this for some of the new GICv4.0
commands.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-12-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_its.c | 124 +++++++++++++++++++++-------------------
 1 file changed, 64 insertions(+), 60 deletions(-)

diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ out:
     return res;
 }

+/*
+ * Given a (DeviceID, EventID), look up the corresponding ITE, including
+ * checking for the various invalid-value cases. If we find a valid ITE,
+ * fill in @ite and @dte and return CMD_CONTINUE_OK. Otherwise return
+ * CMD_STALL or CMD_CONTINUE as appropriate (and the contents of @ite
+ * should not be relied on).
+ *
+ * The string @who is purely for the LOG_GUEST_ERROR messages,
+ * and should indicate the name of the calling function or similar.
+ */
+static ItsCmdResult lookup_ite(GICv3ITSState *s, const char *who,
+                               uint32_t devid, uint32_t eventid, ITEntry *ite,
+                               DTEntry *dte)
+{
+    uint64_t num_eventids;
+
+    if (devid >= s->dt.num_entries) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: invalid command attributes: devid %d>=%d",
+                      who, devid, s->dt.num_entries);
+        return CMD_CONTINUE;
+    }
+
+    if (get_dte(s, devid, dte) != MEMTX_OK) {
+        return CMD_STALL;
+    }
+    if (!dte->valid) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: invalid command attributes: "
+                      "invalid dte for %d\n", who, devid);
+        return CMD_CONTINUE;
+    }
+
+    num_eventids = 1ULL << (dte->size + 1);
+    if (eventid >= num_eventids) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: invalid command attributes: eventid %d >= %"
+                      PRId64 "\n", who, eventid, num_eventids);
+        return CMD_CONTINUE;
+    }
+
+    if (get_ite(s, eventid, dte, ite) != MEMTX_OK) {
+        return CMD_STALL;
+    }
+
+    if (!ite->valid) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: invalid command attributes: invalid ITE\n", who);
+        return CMD_CONTINUE;
+    }
+
+    return CMD_CONTINUE_OK;
+}
+
 /*
  * This function handles the processing of following commands based on
  * the ItsCmdType parameter passed:-
@@ -XXX,XX +XXX,XX @@ out:
 static ItsCmdResult do_process_its_cmd(GICv3ITSState *s, uint32_t devid,
                                        uint32_t eventid, ItsCmdType cmd)
 {
-    uint64_t num_eventids;
     DTEntry dte;
     CTEntry cte;
     ITEntry ite;
+    ItsCmdResult cmdres;

-    if (devid >= s->dt.num_entries) {
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid command attributes: devid %d>=%d",
-                      __func__, devid, s->dt.num_entries);
-        return CMD_CONTINUE;
+    cmdres = lookup_ite(s, __func__, devid, eventid, &ite, &dte);
+    if (cmdres != CMD_CONTINUE_OK) {
+        return cmdres;
     }

-    if (get_dte(s, devid, &dte) != MEMTX_OK) {
-        return CMD_STALL;
-    }
-    if (!dte.valid) {
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid command attributes: "
112 | - "invalid dte for %d\n", __func__, devid); | ||
113 | - return CMD_CONTINUE; | ||
114 | - } | ||
115 | - | ||
116 | - num_eventids = 1ULL << (dte.size + 1); | ||
117 | - if (eventid >= num_eventids) { | ||
118 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
119 | - "%s: invalid command attributes: eventid %d >= %" | ||
120 | - PRId64 "\n", | ||
121 | - __func__, eventid, num_eventids); | ||
122 | - return CMD_CONTINUE; | ||
123 | - } | ||
124 | - | ||
125 | - if (get_ite(s, eventid, &dte, &ite) != MEMTX_OK) { | ||
126 | - return CMD_STALL; | ||
127 | - } | ||
128 | - | ||
129 | - if (!ite.valid || ite.inttype != ITE_INTTYPE_PHYSICAL) { | ||
130 | + if (ite.inttype != ITE_INTTYPE_PHYSICAL) { | ||
131 | qemu_log_mask(LOG_GUEST_ERROR, | ||
132 | "%s: invalid command attributes: invalid ITE\n", | ||
133 | __func__); | ||
134 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_movi(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
135 | { | ||
136 | uint32_t devid, eventid; | ||
137 | uint16_t new_icid; | ||
138 | - uint64_t num_eventids; | ||
139 | DTEntry dte; | ||
140 | CTEntry old_cte, new_cte; | ||
141 | ITEntry old_ite; | ||
142 | + ItsCmdResult cmdres; | ||
143 | |||
144 | devid = FIELD_EX64(cmdpkt[0], MOVI_0, DEVICEID); | ||
145 | eventid = FIELD_EX64(cmdpkt[1], MOVI_1, EVENTID); | ||
146 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_movi(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
147 | |||
148 | trace_gicv3_its_cmd_movi(devid, eventid, new_icid); | ||
149 | |||
150 | - if (devid >= s->dt.num_entries) { | ||
151 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
152 | - "%s: invalid command attributes: devid %d>=%d", | ||
153 | - __func__, devid, s->dt.num_entries); | ||
154 | - return CMD_CONTINUE; | ||
155 | - } | ||
156 | - if (get_dte(s, devid, &dte) != MEMTX_OK) { | ||
157 | - return CMD_STALL; | ||
158 | + cmdres = lookup_ite(s, __func__, devid, eventid, &old_ite, &dte); | ||
159 | + if (cmdres != CMD_CONTINUE_OK) { | ||
160 | + return cmdres; | ||
161 | } | ||
162 | |||
163 | - if (!dte.valid) { | ||
164 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
165 | - "%s: invalid command attributes: " | ||
166 | - "invalid dte for %d\n", __func__, devid); | ||
167 | - return CMD_CONTINUE; | ||
168 | - } | ||
169 | - | ||
170 | - num_eventids = 1ULL << (dte.size + 1); | ||
171 | - if (eventid >= num_eventids) { | ||
172 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
173 | - "%s: invalid command attributes: eventid %d >= %" | ||
174 | - PRId64 "\n", | ||
175 | - __func__, eventid, num_eventids); | ||
176 | - return CMD_CONTINUE; | ||
177 | - } | ||
178 | - | ||
179 | - if (get_ite(s, eventid, &dte, &old_ite) != MEMTX_OK) { | ||
180 | - return CMD_STALL; | ||
181 | - } | ||
182 | - | ||
183 | - if (!old_ite.valid || old_ite.inttype != ITE_INTTYPE_PHYSICAL) { | ||
184 | + if (old_ite.inttype != ITE_INTTYPE_PHYSICAL) { | ||
185 | qemu_log_mask(LOG_GUEST_ERROR, | ||
186 | "%s: invalid command attributes: invalid ITE\n", | ||
187 | __func__); | ||
188 | -- | ||
189 | 2.25.1
New patch | |||
---|---|---|---|
1 | Factor out the sequence of looking up a CTE from an ICID including | ||
2 | the validity and error checks. | ||
1 | 3 | ||
4 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
5 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
6 | Message-id: 20220408141550.1271295-13-peter.maydell@linaro.org | ||
7 | --- | ||
8 | hw/intc/arm_gicv3_its.c | 109 ++++++++++++++-------------------------- | ||
9 | 1 file changed, 39 insertions(+), 70 deletions(-) | ||
10 | |||
11 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
12 | index XXXXXXX..XXXXXXX 100644 | ||
13 | --- a/hw/intc/arm_gicv3_its.c | ||
14 | +++ b/hw/intc/arm_gicv3_its.c | ||
15 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult lookup_ite(GICv3ITSState *s, const char *who, | ||
16 | return CMD_CONTINUE_OK; | ||
17 | } | ||
18 | |||
19 | +/* | ||
20 | + * Given an ICID, look up the corresponding CTE, including checking for various | ||
21 | + * invalid-value cases. If we find a valid CTE, fill in @cte and return | ||
22 | + * CMD_CONTINUE_OK; otherwise return CMD_STALL or CMD_CONTINUE (and the | ||
23 | + * contents of @cte should not be relied on). | ||
24 | + * | ||
25 | + * The string @who is purely for the LOG_GUEST_ERROR messages, | ||
26 | + * and should indicate the name of the calling function or similar. | ||
27 | + */ | ||
28 | +static ItsCmdResult lookup_cte(GICv3ITSState *s, const char *who, | ||
29 | + uint32_t icid, CTEntry *cte) | ||
30 | +{ | ||
31 | + if (icid >= s->ct.num_entries) { | ||
32 | + qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid ICID 0x%x\n", who, icid); | ||
33 | + return CMD_CONTINUE; | ||
34 | + } | ||
35 | + if (get_cte(s, icid, cte) != MEMTX_OK) { | ||
36 | + return CMD_STALL; | ||
37 | + } | ||
38 | + if (!cte->valid) { | ||
39 | + qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid CTE\n", who); | ||
40 | + return CMD_CONTINUE; | ||
41 | + } | ||
42 | + if (cte->rdbase >= s->gicv3->num_cpu) { | ||
43 | + return CMD_CONTINUE; | ||
44 | + } | ||
45 | + return CMD_CONTINUE_OK; | ||
46 | +} | ||
47 | + | ||
48 | + | ||
49 | /* | ||
50 | * This function handles the processing of following commands based on | ||
51 | * the ItsCmdType parameter passed:- | ||
52 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult do_process_its_cmd(GICv3ITSState *s, uint32_t devid, | ||
53 | return CMD_CONTINUE; | ||
54 | } | ||
55 | |||
56 | - if (ite.icid >= s->ct.num_entries) { | ||
57 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
58 | - "%s: invalid ICID 0x%x in ITE (table corrupted?)\n", | ||
59 | - __func__, ite.icid); | ||
60 | - return CMD_CONTINUE; | ||
61 | - } | ||
62 | - | ||
63 | - if (get_cte(s, ite.icid, &cte) != MEMTX_OK) { | ||
64 | - return CMD_STALL; | ||
65 | - } | ||
66 | - if (!cte.valid) { | ||
67 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
68 | - "%s: invalid command attributes: invalid CTE\n", | ||
69 | - __func__); | ||
70 | - return CMD_CONTINUE; | ||
71 | - } | ||
72 | - | ||
73 | - /* | ||
74 | - * Current implementation only supports rdbase == procnum | ||
75 | - * Hence rdbase physical address is ignored | ||
76 | - */ | ||
77 | - if (cte.rdbase >= s->gicv3->num_cpu) { | ||
78 | - return CMD_CONTINUE; | ||
79 | + cmdres = lookup_cte(s, __func__, ite.icid, &cte); | ||
80 | + if (cmdres != CMD_CONTINUE_OK) { | ||
81 | + return cmdres; | ||
82 | } | ||
83 | |||
84 | if ((cmd == CLEAR) || (cmd == DISCARD)) { | ||
85 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_movi(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
86 | return CMD_CONTINUE; | ||
87 | } | ||
88 | |||
89 | - if (old_ite.icid >= s->ct.num_entries) { | ||
90 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
91 | - "%s: invalid ICID 0x%x in ITE (table corrupted?)\n", | ||
92 | - __func__, old_ite.icid); | ||
93 | - return CMD_CONTINUE; | ||
94 | + cmdres = lookup_cte(s, __func__, old_ite.icid, &old_cte); | ||
95 | + if (cmdres != CMD_CONTINUE_OK) { | ||
96 | + return cmdres; | ||
97 | } | ||
98 | - | ||
99 | - if (new_icid >= s->ct.num_entries) { | ||
100 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
101 | - "%s: invalid command attributes: ICID 0x%x\n", | ||
102 | - __func__, new_icid); | ||
103 | - return CMD_CONTINUE; | ||
104 | - } | ||
105 | - | ||
106 | - if (get_cte(s, old_ite.icid, &old_cte) != MEMTX_OK) { | ||
107 | - return CMD_STALL; | ||
108 | - } | ||
109 | - if (!old_cte.valid) { | ||
110 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
111 | - "%s: invalid command attributes: " | ||
112 | - "invalid CTE for old ICID 0x%x\n", | ||
113 | - __func__, old_ite.icid); | ||
114 | - return CMD_CONTINUE; | ||
115 | - } | ||
116 | - | ||
117 | - if (get_cte(s, new_icid, &new_cte) != MEMTX_OK) { | ||
118 | - return CMD_STALL; | ||
119 | - } | ||
120 | - if (!new_cte.valid) { | ||
121 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
122 | - "%s: invalid command attributes: " | ||
123 | - "invalid CTE for new ICID 0x%x\n", | ||
124 | - __func__, new_icid); | ||
125 | - return CMD_CONTINUE; | ||
126 | - } | ||
127 | - | ||
128 | - if (old_cte.rdbase >= s->gicv3->num_cpu) { | ||
129 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
130 | - "%s: CTE has invalid rdbase 0x%x\n", | ||
131 | - __func__, old_cte.rdbase); | ||
132 | - return CMD_CONTINUE; | ||
133 | - } | ||
134 | - | ||
135 | - if (new_cte.rdbase >= s->gicv3->num_cpu) { | ||
136 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
137 | - "%s: CTE has invalid rdbase 0x%x\n", | ||
138 | - __func__, new_cte.rdbase); | ||
139 | - return CMD_CONTINUE; | ||
140 | + cmdres = lookup_cte(s, __func__, new_icid, &new_cte); | ||
141 | + if (cmdres != CMD_CONTINUE_OK) { | ||
142 | + return cmdres; | ||
143 | } | ||
144 | |||
145 | if (old_cte.rdbase != new_cte.rdbase) { | ||
146 | -- | ||
147 | 2.25.1
New patch | |||
---|---|---|---|
1 | Split the part of process_its_cmd() which is specific to physical | ||
2 | interrupts into its own function. This is the part which starts by | ||
3 | taking the ICID and looking it up in the collection table. The | ||
4 | handling of virtual interrupts is significantly different (involving | ||
5 | a lookup in the vPE table) so structuring the code with one | ||
6 | sub-function for the physical interrupt case and one for the virtual | ||
7 | interrupt case will be clearer than putting both cases in one large | ||
8 | function. | ||
1 | 9 | ||
10 | The code for handling the "remove mapping from ITE" for the DISCARD | ||
11 | command remains in process_its_cmd() because it is common to both | ||
12 | virtual and physical interrupts. | ||
13 | |||
14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
15 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
16 | Message-id: 20220408141550.1271295-14-peter.maydell@linaro.org | ||
17 | --- | ||
18 | hw/intc/arm_gicv3_its.c | 51 ++++++++++++++++++++++++++--------------- | ||
19 | 1 file changed, 33 insertions(+), 18 deletions(-) | ||
20 | |||
21 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
22 | index XXXXXXX..XXXXXXX 100644 | ||
23 | --- a/hw/intc/arm_gicv3_its.c | ||
24 | +++ b/hw/intc/arm_gicv3_its.c | ||
25 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult lookup_cte(GICv3ITSState *s, const char *who, | ||
26 | return CMD_CONTINUE_OK; | ||
27 | } | ||
28 | |||
29 | +static ItsCmdResult process_its_cmd_phys(GICv3ITSState *s, const ITEntry *ite, | ||
30 | + int irqlevel) | ||
31 | +{ | ||
32 | + CTEntry cte; | ||
33 | + ItsCmdResult cmdres; | ||
34 | + | ||
35 | + cmdres = lookup_cte(s, __func__, ite->icid, &cte); | ||
36 | + if (cmdres != CMD_CONTINUE_OK) { | ||
37 | + return cmdres; | ||
38 | + } | ||
39 | + gicv3_redist_process_lpi(&s->gicv3->cpu[cte.rdbase], ite->intid, irqlevel); | ||
40 | + return CMD_CONTINUE_OK; | ||
41 | +} | ||
42 | |||
43 | /* | ||
44 | * This function handles the processing of following commands based on | ||
45 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult do_process_its_cmd(GICv3ITSState *s, uint32_t devid, | ||
46 | uint32_t eventid, ItsCmdType cmd) | ||
47 | { | ||
48 | DTEntry dte; | ||
49 | - CTEntry cte; | ||
50 | ITEntry ite; | ||
51 | ItsCmdResult cmdres; | ||
52 | + int irqlevel; | ||
53 | |||
54 | cmdres = lookup_ite(s, __func__, devid, eventid, &ite, &dte); | ||
55 | if (cmdres != CMD_CONTINUE_OK) { | ||
56 | return cmdres; | ||
57 | } | ||
58 | |||
59 | - if (ite.inttype != ITE_INTTYPE_PHYSICAL) { | ||
60 | - qemu_log_mask(LOG_GUEST_ERROR, | ||
61 | - "%s: invalid command attributes: invalid ITE\n", | ||
62 | - __func__); | ||
63 | - return CMD_CONTINUE; | ||
64 | + irqlevel = (cmd == CLEAR || cmd == DISCARD) ? 0 : 1; | ||
65 | + | ||
66 | + switch (ite.inttype) { | ||
67 | + case ITE_INTTYPE_PHYSICAL: | ||
68 | + cmdres = process_its_cmd_phys(s, &ite, irqlevel); | ||
69 | + break; | ||
70 | + case ITE_INTTYPE_VIRTUAL: | ||
71 | + if (!its_feature_virtual(s)) { | ||
72 | + /* Can't happen unless guest is illegally writing to table memory */ | ||
73 | + qemu_log_mask(LOG_GUEST_ERROR, | ||
74 | + "%s: invalid type %d in ITE (table corrupted?)\n", | ||
75 | + __func__, ite.inttype); | ||
76 | + return CMD_CONTINUE; | ||
77 | + } | ||
78 | + /* The GICv4 virtual interrupt handling will go here */ | ||
79 | + g_assert_not_reached(); | ||
80 | + default: | ||
81 | + g_assert_not_reached(); | ||
82 | } | ||
83 | |||
84 | - cmdres = lookup_cte(s, __func__, ite.icid, &cte); | ||
85 | - if (cmdres != CMD_CONTINUE_OK) { | ||
86 | - return cmdres; | ||
87 | - } | ||
88 | - | ||
89 | - if ((cmd == CLEAR) || (cmd == DISCARD)) { | ||
90 | - gicv3_redist_process_lpi(&s->gicv3->cpu[cte.rdbase], ite.intid, 0); | ||
91 | - } else { | ||
92 | - gicv3_redist_process_lpi(&s->gicv3->cpu[cte.rdbase], ite.intid, 1); | ||
93 | - } | ||
94 | - | ||
95 | - if (cmd == DISCARD) { | ||
96 | + if (cmdres == CMD_CONTINUE_OK && cmd == DISCARD) { | ||
97 | ITEntry ite = {}; | ||
98 | /* remove mapping from interrupt translation table */ | ||
99 | ite.valid = false; | ||
100 | -- | ||
101 | 2.25.1
New patch | |||
---|---|---|---|
1 | For GICv4, interrupt table entries read by process_its_cmd() may | ||
2 | indicate virtual LPIs which are to be directly injected into a VM. | ||
3 | Implement the ITS side of the code for handling this. This is | ||
4 | similar to the existing handling of physical LPIs, but instead of | ||
5 | looking up a collection ID in a collection table, we look up a vPEID | ||
6 | in a vPE table. As with the physical LPIs, we leave the rest of the | ||
7 | work to code in the redistributor device. | ||
1 | 8 | ||
9 | The redistributor half will be implemented in a later commit; | ||
10 | for now we just provide a stub function which does nothing. | ||
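
The contract with the redistributor is spelled out in the doc comment
added below: record the new pending state in the vLPI table at
@vptaddr, then either deliver the vLPI directly (if the target vCPU is
resident on that redistributor's CPU) or raise the doorbell pLPI. The
self-contained sketch below is illustrative only -- the real
redistributor half only lands in a later commit, and every name in it
(process_vlpi_sketch, vcpu_resident_here, ...) is invented, as is the
assumption that the doorbell only fires when the vLPI becomes pending.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NO_DOORBELL 1023    /* "no doorbell" encoding, per the doc comment */

    /* Invented stand-ins for the real redistributor operations. */
    static bool vcpu_resident_here(uint64_t vptaddr) { return false; }
    static void set_vlpi_pending(uint64_t vptaddr, int irq, int level) { printf("vLPI %d level %d\n", irq, level); }
    static void deliver_vlpi(int irq) { printf("deliver vLPI %d\n", irq); }
    static void raise_doorbell(int irq) { printf("doorbell pLPI %d\n", irq); }

    static void process_vlpi_sketch(int irq, uint64_t vptaddr, int doorbell, int level)
    {
        /* Record the new pending state in the vLPI pending table at vptaddr */
        set_vlpi_pending(vptaddr, irq, level);

        if (vcpu_resident_here(vptaddr)) {
            /* Target vCPU runs on this redistributor's CPU: inject directly */
            deliver_vlpi(irq);
        } else if (level && doorbell != NO_DOORBELL) {
            /* Not resident: poke the hypervisor via the doorbell pLPI */
            raise_doorbell(doorbell);
        }
    }

    int main(void)
    {
        process_vlpi_sketch(8192, 0x40000000, 8200, 1);
        return 0;
    }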
11 | |||
12 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
13 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
14 | Message-id: 20220408141550.1271295-15-peter.maydell@linaro.org | ||
15 | --- | ||
16 | hw/intc/gicv3_internal.h | 17 +++++++ | ||
17 | hw/intc/arm_gicv3_its.c | 99 +++++++++++++++++++++++++++++++++++++- | ||
18 | hw/intc/arm_gicv3_redist.c | 9 ++++ | ||
19 | hw/intc/trace-events | 2 + | ||
20 | 4 files changed, 125 insertions(+), 2 deletions(-) | ||
21 | |||
22 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
23 | index XXXXXXX..XXXXXXX 100644 | ||
24 | --- a/hw/intc/gicv3_internal.h | ||
25 | +++ b/hw/intc/gicv3_internal.h | ||
26 | @@ -XXX,XX +XXX,XX @@ MemTxResult gicv3_redist_write(void *opaque, hwaddr offset, uint64_t data, | ||
27 | void gicv3_dist_set_irq(GICv3State *s, int irq, int level); | ||
28 | void gicv3_redist_set_irq(GICv3CPUState *cs, int irq, int level); | ||
29 | void gicv3_redist_process_lpi(GICv3CPUState *cs, int irq, int level); | ||
30 | +/** | ||
31 | + * gicv3_redist_process_vlpi: | ||
32 | + * @cs: GICv3CPUState | ||
33 | + * @irq: (virtual) interrupt number | ||
34 | + * @vptaddr: (guest) address of VLPI table | ||
35 | + * @doorbell: doorbell (physical) interrupt number (1023 for "no doorbell") | ||
36 | + * @level: level to set @irq to | ||
37 | + * | ||
38 | + * Process a virtual LPI being directly injected by the ITS. This function | ||
39 | + * will update the VLPI table specified by @vptaddr. If the | ||
40 | + * vCPU corresponding to that VLPI table is currently running on | ||
41 | + * the CPU associated with this redistributor, directly inject the VLPI | ||
42 | + * @irq. If the vCPU is not running on this CPU, raise the doorbell | ||
43 | + * interrupt instead. | ||
44 | + */ | ||
45 | +void gicv3_redist_process_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr, | ||
46 | + int doorbell, int level); | ||
47 | void gicv3_redist_lpi_pending(GICv3CPUState *cs, int irq, int level); | ||
48 | /** | ||
49 | * gicv3_redist_update_lpi: | ||
50 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
51 | index XXXXXXX..XXXXXXX 100644 | ||
52 | --- a/hw/intc/arm_gicv3_its.c | ||
53 | +++ b/hw/intc/arm_gicv3_its.c | ||
54 | @@ -XXX,XX +XXX,XX @@ out: | ||
55 | return res; | ||
56 | } | ||
57 | |||
58 | +/* | ||
59 | + * Read the vPE Table entry at index @vpeid. On success (including | ||
60 | + * successfully determining that there is no valid entry for this index), | ||
61 | + * we return MEMTX_OK and populate the VTEntry struct accordingly. | ||
62 | + * If there is an error reading memory then we return the error code. | ||
63 | + */ | ||
64 | +static MemTxResult get_vte(GICv3ITSState *s, uint32_t vpeid, VTEntry *vte) | ||
65 | +{ | ||
66 | + MemTxResult res = MEMTX_OK; | ||
67 | + AddressSpace *as = &s->gicv3->dma_as; | ||
68 | + uint64_t entry_addr = table_entry_addr(s, &s->vpet, vpeid, &res); | ||
69 | + uint64_t vteval; | ||
70 | + | ||
71 | + if (entry_addr == -1) { | ||
72 | + /* No L2 table entry, i.e. no valid VTE, or a memory error */ | ||
73 | + vte->valid = false; | ||
74 | + goto out; | ||
75 | + } | ||
76 | + vteval = address_space_ldq_le(as, entry_addr, MEMTXATTRS_UNSPECIFIED, &res); | ||
77 | + if (res != MEMTX_OK) { | ||
78 | + goto out; | ||
79 | + } | ||
80 | + vte->valid = FIELD_EX64(vteval, VTE, VALID); | ||
81 | + vte->vptsize = FIELD_EX64(vteval, VTE, VPTSIZE); | ||
82 | + vte->vptaddr = FIELD_EX64(vteval, VTE, VPTADDR); | ||
83 | + vte->rdbase = FIELD_EX64(vteval, VTE, RDBASE); | ||
84 | +out: | ||
85 | + if (res != MEMTX_OK) { | ||
86 | + trace_gicv3_its_vte_read_fault(vpeid); | ||
87 | + } else { | ||
88 | + trace_gicv3_its_vte_read(vpeid, vte->valid, vte->vptsize, | ||
89 | + vte->vptaddr, vte->rdbase); | ||
90 | + } | ||
91 | + return res; | ||
92 | +} | ||
93 | + | ||
94 | /* | ||
95 | * Given a (DeviceID, EventID), look up the corresponding ITE, including | ||
96 | * checking for the various invalid-value cases. If we find a valid ITE, | ||
97 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult lookup_cte(GICv3ITSState *s, const char *who, | ||
98 | return CMD_CONTINUE_OK; | ||
99 | } | ||
100 | |||
101 | +/* | ||
102 | + * Given a VPEID, look up the corresponding VTE, including checking | ||
103 | + * for various invalid-value cases. If we find a valid VTE, fill in @vte | ||
104 | + * and return CMD_CONTINUE_OK; otherwise return CMD_STALL or CMD_CONTINUE | ||
105 | + * (and the contents of @vte should not be relied on). | ||
106 | + * | ||
107 | + * The string @who is purely for the LOG_GUEST_ERROR messages, | ||
108 | + * and should indicate the name of the calling function or similar. | ||
109 | + */ | ||
110 | +static ItsCmdResult lookup_vte(GICv3ITSState *s, const char *who, | ||
111 | + uint32_t vpeid, VTEntry *vte) | ||
112 | +{ | ||
113 | + if (vpeid >= s->vpet.num_entries) { | ||
114 | + qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid VPEID 0x%x\n", who, vpeid); | ||
115 | + return CMD_CONTINUE; | ||
116 | + } | ||
117 | + | ||
118 | + if (get_vte(s, vpeid, vte) != MEMTX_OK) { | ||
119 | + return CMD_STALL; | ||
120 | + } | ||
121 | + if (!vte->valid) { | ||
122 | + qemu_log_mask(LOG_GUEST_ERROR, | ||
123 | + "%s: invalid VTE for VPEID 0x%x\n", who, vpeid); | ||
124 | + return CMD_CONTINUE; | ||
125 | + } | ||
126 | + | ||
127 | + if (vte->rdbase >= s->gicv3->num_cpu) { | ||
128 | + return CMD_CONTINUE; | ||
129 | + } | ||
130 | + return CMD_CONTINUE_OK; | ||
131 | +} | ||
132 | + | ||
133 | static ItsCmdResult process_its_cmd_phys(GICv3ITSState *s, const ITEntry *ite, | ||
134 | int irqlevel) | ||
135 | { | ||
136 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_its_cmd_phys(GICv3ITSState *s, const ITEntry *ite, | ||
137 | return CMD_CONTINUE_OK; | ||
138 | } | ||
139 | |||
140 | +static ItsCmdResult process_its_cmd_virt(GICv3ITSState *s, const ITEntry *ite, | ||
141 | + int irqlevel) | ||
142 | +{ | ||
143 | + VTEntry vte; | ||
144 | + ItsCmdResult cmdres; | ||
145 | + | ||
146 | + cmdres = lookup_vte(s, __func__, ite->vpeid, &vte); | ||
147 | + if (cmdres != CMD_CONTINUE_OK) { | ||
148 | + return cmdres; | ||
149 | + } | ||
150 | + | ||
151 | + if (!intid_in_lpi_range(ite->intid) || | ||
152 | + ite->intid >= (1ULL << (vte.vptsize + 1))) { | ||
153 | + qemu_log_mask(LOG_GUEST_ERROR, "%s: intid 0x%x out of range\n", | ||
154 | + __func__, ite->intid); | ||
155 | + return CMD_CONTINUE; | ||
156 | + } | ||
157 | + | ||
158 | + /* | ||
159 | + * For QEMU the actual pending of the vLPI is handled in the | ||
160 | + * redistributor code | ||
161 | + */ | ||
162 | + gicv3_redist_process_vlpi(&s->gicv3->cpu[vte.rdbase], ite->intid, | ||
163 | + vte.vptaddr << 16, ite->doorbell, irqlevel); | ||
164 | + return CMD_CONTINUE_OK; | ||
165 | +} | ||
166 | + | ||
167 | /* | ||
168 | * This function handles the processing of following commands based on | ||
169 | * the ItsCmdType parameter passed:- | ||
170 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult do_process_its_cmd(GICv3ITSState *s, uint32_t devid, | ||
171 | __func__, ite.inttype); | ||
172 | return CMD_CONTINUE; | ||
173 | } | ||
174 | - /* The GICv4 virtual interrupt handling will go here */ | ||
175 | - g_assert_not_reached(); | ||
176 | + cmdres = process_its_cmd_virt(s, &ite, irqlevel); | ||
177 | + break; | ||
178 | default: | ||
179 | g_assert_not_reached(); | ||
180 | } | ||
181 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
182 | index XXXXXXX..XXXXXXX 100644 | ||
183 | --- a/hw/intc/arm_gicv3_redist.c | ||
184 | +++ b/hw/intc/arm_gicv3_redist.c | ||
185 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_movall_lpis(GICv3CPUState *src, GICv3CPUState *dest) | ||
186 | gicv3_redist_update_lpi(dest); | ||
187 | } | ||
188 | |||
189 | +void gicv3_redist_process_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr, | ||
190 | + int doorbell, int level) | ||
191 | +{ | ||
192 | + /* | ||
193 | + * The redistributor handling for being handed a VLPI by the ITS | ||
194 | + * will be added in a subsequent commit. | ||
195 | + */ | ||
196 | +} | ||
197 | + | ||
198 | void gicv3_redist_set_irq(GICv3CPUState *cs, int irq, int level) | ||
199 | { | ||
200 | /* Update redistributor state for a change in an external PPI input line */ | ||
201 | diff --git a/hw/intc/trace-events b/hw/intc/trace-events | ||
202 | index XXXXXXX..XXXXXXX 100644 | ||
203 | --- a/hw/intc/trace-events | ||
204 | +++ b/hw/intc/trace-events | ||
205 | @@ -XXX,XX +XXX,XX @@ gicv3_its_ite_write(uint64_t ittaddr, uint32_t eventid, int valid, int inttype, | ||
206 | gicv3_its_dte_read(uint32_t devid, int valid, uint32_t size, uint64_t ittaddr) "GICv3 ITS: Device Table read for DeviceID 0x%x: valid %d size 0x%x ITTaddr 0x%" PRIx64 | ||
207 | gicv3_its_dte_write(uint32_t devid, int valid, uint32_t size, uint64_t ittaddr) "GICv3 ITS: Device Table write for DeviceID 0x%x: valid %d size 0x%x ITTaddr 0x%" PRIx64 | ||
208 | gicv3_its_dte_read_fault(uint32_t devid) "GICv3 ITS: Device Table read for DeviceID 0x%x: faulted" | ||
209 | +gicv3_its_vte_read(uint32_t vpeid, int valid, uint32_t vptsize, uint64_t vptaddr, uint32_t rdbase) "GICv3 ITS: vPE Table read for vPEID 0x%x: valid %d VPTsize 0x%x VPTaddr 0x%" PRIx64 " RDbase 0x%x" | ||
210 | +gicv3_its_vte_read_fault(uint32_t vpeid) "GICv3 ITS: vPE Table read for vPEID 0x%x: faulted" | ||
211 | gicv3_its_vte_write(uint32_t vpeid, int valid, uint32_t vptsize, uint64_t vptaddr, uint32_t rdbase) "GICv3 ITS: vPE Table write for vPEID 0x%x: valid %d VPTsize 0x%x VPTaddr 0x%" PRIx64 " RDbase 0x%x" | ||
212 | |||
213 | # armv7m_nvic.c | ||
214 | -- | ||
215 | 2.25.1
New patch | |||
---|---|---|---|
1 | The GICv4 ITS VMOVP command's semantics require it to perform the | ||
2 | operation on every ITS connected to the same GIC as the ITS that | ||
3 | received the command. This means that the GIC object | ||
4 | needs to keep a pointer to every ITS that is connected to it | ||
5 | (previously it was sufficient for the ITS to have a pointer to its | ||
6 | GIC). | ||
1 | 7 | ||
8 | Add a glib ptrarray to the GICv3 object which holds pointers to every | ||
9 | connected ITS, and make the ITS add itself to the array for the GIC | ||
10 | it is connected to when it is realized. | ||
11 | |||
12 | Note that currently all QEMU machine types with an ITS have exactly | ||
13 | one ITS in the system, so typically the length of this ptrarray will | ||
14 | be 1. Multiple ITSes are typically used to improve performance on | ||
15 | real hardware, so we wouldn't need to have more than one unless we | ||
16 | were modelling a real machine type that had multiple ITSes. | ||
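
For reference, the GLib pointer-array calls this leans on are just
g_ptr_array_new()/g_ptr_array_add() here plus g_ptr_array_foreach() in
a following patch. A tiny standalone example of the same pattern
(nothing below is QEMU code; the strings stand in for ITS pointers):

    /* build with: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>
    #include <stdio.h>

    static void visit(gpointer item, gpointer user_data)
    {
        printf("%s sees ITS %s\n", (const char *)user_data, (const char *)item);
    }

    int main(void)
    {
        GPtrArray *itslist = g_ptr_array_new();      /* what the GIC does at realize */

        g_ptr_array_add(itslist, (gpointer)"its0");  /* what each ITS does at realize */
        g_ptr_array_add(itslist, (gpointer)"its1");

        /* what a broadcast operation such as VMOVP will do */
        g_ptr_array_foreach(itslist, visit, (gpointer)"gic");

        g_ptr_array_free(itslist, TRUE);
        return 0;
    }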
17 | |||
18 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
19 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
20 | Message-id: 20220408141550.1271295-16-peter.maydell@linaro.org | ||
21 | --- | ||
22 | hw/intc/gicv3_internal.h | 9 +++++++++ | ||
23 | include/hw/intc/arm_gicv3_common.h | 2 ++ | ||
24 | hw/intc/arm_gicv3_common.c | 2 ++ | ||
25 | hw/intc/arm_gicv3_its.c | 2 ++ | ||
26 | hw/intc/arm_gicv3_its_kvm.c | 2 ++ | ||
27 | 5 files changed, 17 insertions(+) | ||
28 | |||
29 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
30 | index XXXXXXX..XXXXXXX 100644 | ||
31 | --- a/hw/intc/gicv3_internal.h | ||
32 | +++ b/hw/intc/gicv3_internal.h | ||
33 | @@ -XXX,XX +XXX,XX @@ static inline void gicv3_cache_all_target_cpustates(GICv3State *s) | ||
34 | |||
35 | void gicv3_set_gicv3state(CPUState *cpu, GICv3CPUState *s); | ||
36 | |||
37 | +/* | ||
38 | + * The ITS should call this when it is realized to add itself | ||
39 | + * to its GIC's list of connected ITSes. | ||
40 | + */ | ||
41 | +static inline void gicv3_add_its(GICv3State *s, DeviceState *its) | ||
42 | +{ | ||
43 | + g_ptr_array_add(s->itslist, its); | ||
44 | +} | ||
45 | + | ||
46 | #endif /* QEMU_ARM_GICV3_INTERNAL_H */ | ||
47 | diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h | ||
48 | index XXXXXXX..XXXXXXX 100644 | ||
49 | --- a/include/hw/intc/arm_gicv3_common.h | ||
50 | +++ b/include/hw/intc/arm_gicv3_common.h | ||
51 | @@ -XXX,XX +XXX,XX @@ struct GICv3State { | ||
52 | uint32_t gicd_nsacr[DIV_ROUND_UP(GICV3_MAXIRQ, 16)]; | ||
53 | |||
54 | GICv3CPUState *cpu; | ||
55 | + /* List of all ITSes connected to this GIC */ | ||
56 | + GPtrArray *itslist; | ||
57 | }; | ||
58 | |||
59 | #define GICV3_BITMAP_ACCESSORS(BMP) \ | ||
60 | diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c | ||
61 | index XXXXXXX..XXXXXXX 100644 | ||
62 | --- a/hw/intc/arm_gicv3_common.c | ||
63 | +++ b/hw/intc/arm_gicv3_common.c | ||
64 | @@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp) | ||
65 | cpuidx += s->redist_region_count[i]; | ||
66 | s->cpu[cpuidx - 1].gicr_typer |= GICR_TYPER_LAST; | ||
67 | } | ||
68 | + | ||
69 | + s->itslist = g_ptr_array_new(); | ||
70 | } | ||
71 | |||
72 | static void arm_gicv3_finalize(Object *obj) | ||
73 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
74 | index XXXXXXX..XXXXXXX 100644 | ||
75 | --- a/hw/intc/arm_gicv3_its.c | ||
76 | +++ b/hw/intc/arm_gicv3_its.c | ||
77 | @@ -XXX,XX +XXX,XX @@ static void gicv3_arm_its_realize(DeviceState *dev, Error **errp) | ||
78 | } | ||
79 | } | ||
80 | |||
81 | + gicv3_add_its(s->gicv3, dev); | ||
82 | + | ||
83 | gicv3_its_init_mmio(s, &gicv3_its_control_ops, &gicv3_its_translation_ops); | ||
84 | |||
85 | /* set the ITS default features supported */ | ||
86 | diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c | ||
87 | index XXXXXXX..XXXXXXX 100644 | ||
88 | --- a/hw/intc/arm_gicv3_its_kvm.c | ||
89 | +++ b/hw/intc/arm_gicv3_its_kvm.c | ||
90 | @@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_realize(DeviceState *dev, Error **errp) | ||
91 | kvm_arm_register_device(&s->iomem_its_cntrl, -1, KVM_DEV_ARM_VGIC_GRP_ADDR, | ||
92 | KVM_VGIC_ITS_ADDR_TYPE, s->dev_fd, 0); | ||
93 | |||
94 | + gicv3_add_its(s->gicv3, dev); | ||
95 | + | ||
96 | gicv3_its_init_mmio(s, NULL, NULL); | ||
97 | |||
98 | if (!kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS, | ||
99 | -- | ||
100 | 2.25.1
New patch | |||
---|---|---|---|
1 | Implement the GICv4 VMOVP command, which updates an entry in the vPE | ||
2 | table to change its rdbase field. This command is unique in the ITS | ||
3 | command set because its effects must be propagated to all the other | ||
4 | ITSes connected to the same GIC as the ITS which executes the VMOVP | ||
5 | command. | ||
1 | 6 | ||
7 | The GICv4 spec allows two implementation choices for handling the | ||
8 | propagation to other ITSes: | ||
9 | * If GITS_TYPER.VMOVP is 1, the guest only needs to issue the command | ||
10 | on one ITS, and the implementation handles the propagation to | ||
11 | all ITSes | ||
12 | * If GITS_TYPER.VMOVP is 0, the guest must issue the command on | ||
13 | every ITS, and arrange for the ITSes to synchronize the updates | ||
14 | with each other by setting ITSList and Sequence Number fields | ||
15 | in the command packets | ||
16 | |||
17 | We choose the GITS_TYPER.VMOVP = 1 approach, and synchronously | ||
18 | execute the update on every ITS. | ||
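
Because the command must still report a single result even though it
visits every ITS, the callback added below folds the per-ITS outcomes
with a simple precedence: STALL beats CONTINUE beats CONTINUE_OK. A
standalone sketch of just that folding rule (fold_result() is an
invented name, not the patch's code):

    #include <stdio.h>

    typedef enum { CMD_STALL, CMD_CONTINUE, CMD_CONTINUE_OK } CmdResult;

    /* Fold one ITS's outcome into the overall result: a stall anywhere
     * wins, otherwise any error beats success. */
    static CmdResult fold_result(CmdResult overall, CmdResult this_its)
    {
        if (this_its == CMD_STALL || overall == CMD_STALL) {
            return CMD_STALL;
        }
        if (this_its == CMD_CONTINUE || overall == CMD_CONTINUE) {
            return CMD_CONTINUE;
        }
        return CMD_CONTINUE_OK;
    }

    int main(void)
    {
        CmdResult per_its[] = { CMD_CONTINUE_OK, CMD_CONTINUE, CMD_CONTINUE_OK };
        CmdResult overall = CMD_CONTINUE_OK;

        for (unsigned i = 0; i < sizeof(per_its) / sizeof(per_its[0]); i++) {
            overall = fold_result(overall, per_its[i]);
        }
        printf("overall %d\n", overall);   /* 1, i.e. CMD_CONTINUE */
        return 0;
    }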
19 | |||
20 | For GICv4.1 this command has extra fields in the command packet and | ||
21 | additional behaviour. We define the 4.1-only fields with the FIELD | ||
22 | macro, but only implement the GICv4.0 version of the command. | ||
23 | |||
24 | Note that we don't update the reported GITS_TYPER value here; | ||
25 | we'll do that later in a commit which updates all the reported | ||
26 | feature bit and ID register values for GICv4. | ||
27 | |||
28 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
29 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
30 | Message-id: 20220408141550.1271295-17-peter.maydell@linaro.org | ||
31 | --- | ||
32 | hw/intc/gicv3_internal.h | 18 ++++++++++ | ||
33 | hw/intc/arm_gicv3_its.c | 75 ++++++++++++++++++++++++++++++++++++++++ | ||
34 | hw/intc/trace-events | 1 + | ||
35 | 3 files changed, 94 insertions(+) | ||
36 | |||
37 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
38 | index XXXXXXX..XXXXXXX 100644 | ||
39 | --- a/hw/intc/gicv3_internal.h | ||
40 | +++ b/hw/intc/gicv3_internal.h | ||
41 | @@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1) | ||
42 | #define GITS_CMD_INVALL 0x0D | ||
43 | #define GITS_CMD_MOVALL 0x0E | ||
44 | #define GITS_CMD_DISCARD 0x0F | ||
45 | +#define GITS_CMD_VMOVP 0x22 | ||
46 | #define GITS_CMD_VMAPP 0x29 | ||
47 | #define GITS_CMD_VMAPTI 0x2A | ||
48 | #define GITS_CMD_VMAPI 0x2B | ||
49 | @@ -XXX,XX +XXX,XX @@ FIELD(VMAPP_2, V, 63, 1) | ||
50 | FIELD(VMAPP_3, VPTSIZE, 0, 8) /* For GICv4.0, bits [7:6] are RES0 */ | ||
51 | FIELD(VMAPP_3, VPTADDR, 16, 36) | ||
52 | |||
53 | +/* VMOVP command fields */ | ||
54 | +FIELD(VMOVP_0, SEQNUM, 32, 16) /* not used for GITS_TYPER.VMOVP == 1 */ | ||
55 | +FIELD(VMOVP_1, ITSLIST, 0, 16) /* not used for GITS_TYPER.VMOVP == 1 */ | ||
56 | +FIELD(VMOVP_1, VPEID, 32, 16) | ||
57 | +FIELD(VMOVP_2, RDBASE, 16, 36) | ||
58 | +FIELD(VMOVP_2, DB, 63, 1) /* GICv4.1 only */ | ||
59 | +FIELD(VMOVP_3, DEFAULT_DOORBELL, 0, 32) /* GICv4.1 only */ | ||
60 | + | ||
61 | /* | ||
62 | * 12 bytes Interrupt translation Table Entry size | ||
63 | * as per Table 5.3 in GICv3 spec | ||
64 | @@ -XXX,XX +XXX,XX @@ static inline void gicv3_add_its(GICv3State *s, DeviceState *its) | ||
65 | g_ptr_array_add(s->itslist, its); | ||
66 | } | ||
67 | |||
68 | +/* | ||
69 | + * The ITS can use this for operations that must be performed on | ||
70 | + * every ITS connected to the same GIC that it is | ||
71 | + */ | ||
72 | +static inline void gicv3_foreach_its(GICv3State *s, GFunc func, void *opaque) | ||
73 | +{ | ||
74 | + g_ptr_array_foreach(s->itslist, func, opaque); | ||
75 | +} | ||
76 | + | ||
77 | #endif /* QEMU_ARM_GICV3_INTERNAL_H */ | ||
78 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
79 | index XXXXXXX..XXXXXXX 100644 | ||
80 | --- a/hw/intc/arm_gicv3_its.c | ||
81 | +++ b/hw/intc/arm_gicv3_its.c | ||
82 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vmapp(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
83 | return update_vte(s, vpeid, &vte) ? CMD_CONTINUE_OK : CMD_STALL; | ||
84 | } | ||
85 | |||
86 | +typedef struct VmovpCallbackData { | ||
87 | + uint64_t rdbase; | ||
88 | + uint32_t vpeid; | ||
89 | + /* | ||
90 | + * Overall command result. If more than one callback finds an | ||
91 | + * error, STALL beats CONTINUE. | ||
92 | + */ | ||
93 | + ItsCmdResult result; | ||
94 | +} VmovpCallbackData; | ||
95 | + | ||
96 | +static void vmovp_callback(gpointer data, gpointer opaque) | ||
97 | +{ | ||
98 | + /* | ||
99 | + * This function is called to update the VPEID field in a VPE | ||
100 | + * table entry for this ITS. This might be because of a VMOVP | ||
101 | + * command executed on any ITS that is connected to the same GIC | ||
102 | + * as this ITS. We need to read the VPE table entry for the VPEID | ||
103 | + * and update its RDBASE field. | ||
104 | + */ | ||
105 | + GICv3ITSState *s = data; | ||
106 | + VmovpCallbackData *cbdata = opaque; | ||
107 | + VTEntry vte; | ||
108 | + ItsCmdResult cmdres; | ||
109 | + | ||
110 | + cmdres = lookup_vte(s, __func__, cbdata->vpeid, &vte); | ||
111 | + switch (cmdres) { | ||
112 | + case CMD_STALL: | ||
113 | + cbdata->result = CMD_STALL; | ||
114 | + return; | ||
115 | + case CMD_CONTINUE: | ||
116 | + if (cbdata->result != CMD_STALL) { | ||
117 | + cbdata->result = CMD_CONTINUE; | ||
118 | + } | ||
119 | + return; | ||
120 | + case CMD_CONTINUE_OK: | ||
121 | + break; | ||
122 | + } | ||
123 | + | ||
124 | + vte.rdbase = cbdata->rdbase; | ||
125 | + if (!update_vte(s, cbdata->vpeid, &vte)) { | ||
126 | + cbdata->result = CMD_STALL; | ||
127 | + } | ||
128 | +} | ||
129 | + | ||
130 | +static ItsCmdResult process_vmovp(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
131 | +{ | ||
132 | + VmovpCallbackData cbdata; | ||
133 | + | ||
134 | + if (!its_feature_virtual(s)) { | ||
135 | + return CMD_CONTINUE; | ||
136 | + } | ||
137 | + | ||
138 | + cbdata.vpeid = FIELD_EX64(cmdpkt[1], VMOVP_1, VPEID); | ||
139 | + cbdata.rdbase = FIELD_EX64(cmdpkt[2], VMOVP_2, RDBASE); | ||
140 | + | ||
141 | + trace_gicv3_its_cmd_vmovp(cbdata.vpeid, cbdata.rdbase); | ||
142 | + | ||
143 | + if (cbdata.rdbase >= s->gicv3->num_cpu) { | ||
144 | + return CMD_CONTINUE; | ||
145 | + } | ||
146 | + | ||
147 | + /* | ||
148 | + * Our ITS implementation reports GITS_TYPER.VMOVP == 1, which means | ||
149 | + * that when the VMOVP command is executed on an ITS to change the | ||
149 | + * RDBASE field in a VPE table entry the change must be propagated | ||
151 | + * to all the ITSes connected to the same GIC. | ||
152 | + */ | ||
153 | + cbdata.result = CMD_CONTINUE_OK; | ||
154 | + gicv3_foreach_its(s->gicv3, vmovp_callback, &cbdata); | ||
155 | + return cbdata.result; | ||
156 | +} | ||
157 | + | ||
158 | /* | ||
159 | * Current implementation blocks until all | ||
160 | * commands are processed | ||
161 | @@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s) | ||
162 | case GITS_CMD_VMAPP: | ||
163 | result = process_vmapp(s, cmdpkt); | ||
164 | break; | ||
165 | + case GITS_CMD_VMOVP: | ||
166 | + result = process_vmovp(s, cmdpkt); | ||
167 | + break; | ||
168 | default: | ||
169 | trace_gicv3_its_cmd_unknown(cmd); | ||
170 | break; | ||
171 | diff --git a/hw/intc/trace-events b/hw/intc/trace-events | ||
172 | index XXXXXXX..XXXXXXX 100644 | ||
173 | --- a/hw/intc/trace-events | ||
174 | +++ b/hw/intc/trace-events | ||
175 | @@ -XXX,XX +XXX,XX @@ gicv3_its_cmd_movi(uint32_t devid, uint32_t eventid, uint32_t icid) "GICv3 ITS: | ||
176 | gicv3_its_cmd_vmapi(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t doorbell) "GICv3 ITS: command VMAPI DeviceID 0x%x EventID 0x%x vPEID 0x%x Dbell_pINTID 0x%x" | ||
177 | gicv3_its_cmd_vmapti(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t vintid, uint32_t doorbell) "GICv3 ITS: command VMAPI DeviceID 0x%x EventID 0x%x vPEID 0x%x vINTID 0x%x Dbell_pINTID 0x%x" | ||
178 | gicv3_its_cmd_vmapp(uint32_t vpeid, uint64_t rdbase, int valid, uint64_t vptaddr, uint32_t vptsize) "GICv3 ITS: command VMAPP vPEID 0x%x RDbase 0x%" PRIx64 " V %d VPT_addr 0x%" PRIx64 " VPT_size 0x%x" | ||
179 | +gicv3_its_cmd_vmovp(uint32_t vpeid, uint64_t rdbase) "GICv3 ITS: command VMOVP vPEID 0x%x RDbase 0x%" PRIx64 | ||
180 | gicv3_its_cmd_unknown(unsigned cmd) "GICv3 ITS: unknown command 0x%x" | ||
181 | gicv3_its_cte_read(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table read for ICID 0x%x: valid %d RDBase 0x%x" | ||
182 | gicv3_its_cte_write(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table write for ICID 0x%x: valid %d RDBase 0x%x" | ||
183 | -- | ||
184 | 2.25.1
1 | From: Michael Davidsaver <mdavidsaver@gmail.com> | 1 | The VSYNC command forces the ITS to synchronize all outstanding ITS |
---|---|---|---|
2 | operations for the specified vPEID, so that subsequent writes to | ||
3 | GITS_TRANSLATER honour them. The QEMU implementation is always in | ||
4 | sync, so for us this is a nop, like the existing SYNC command. | ||
2 | 5 | ||
3 | General logic is that operations stopped by the MPU are MemManage, | 6 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
4 | and those which go through the MPU and are caught by the unassigned | 7 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
5 | handle are BusFault. Distinguish these by looking at the | 8 | Message-id: 20220408141550.1271295-18-peter.maydell@linaro.org |
6 | exception.fsr values, and set the CFSR bits and (if appropriate) | 9 | --- |
7 | fill in the BFAR or MMFAR with the exception address. | 10 | hw/intc/gicv3_internal.h | 1 + |
11 | hw/intc/arm_gicv3_its.c | 11 +++++++++++ | ||
12 | hw/intc/trace-events | 1 + | ||
13 | 3 files changed, 13 insertions(+) | ||
8 | 14 | ||
9 | Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> | 15 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h |
10 | Message-id: 1493122030-32191-12-git-send-email-peter.maydell@linaro.org | ||
11 | [PMM: i-side faults do not set BFAR/MMFAR, only d-side; | ||
12 | added some CPU_LOG_INT logging] | ||
13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
14 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
15 | --- | ||
16 | target/arm/helper.c | 45 ++++++++++++++++++++++++++++++++++++++++++--- | ||
17 | 1 file changed, 42 insertions(+), 3 deletions(-) | ||
18 | |||
19 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
20 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
21 | --- a/target/arm/helper.c | 17 | --- a/hw/intc/gicv3_internal.h |
22 | +++ b/target/arm/helper.c | 18 | +++ b/hw/intc/gicv3_internal.h |
23 | @@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) | 19 | @@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1) |
24 | break; | 20 | #define GITS_CMD_MOVALL 0x0E |
25 | case EXCP_PREFETCH_ABORT: | 21 | #define GITS_CMD_DISCARD 0x0F |
26 | case EXCP_DATA_ABORT: | 22 | #define GITS_CMD_VMOVP 0x22 |
27 | - /* TODO: if we implemented the MPU registers, this is where we | 23 | +#define GITS_CMD_VSYNC 0x25 |
28 | - * should set the MMFAR, etc from exception.fsr and exception.vaddress. | 24 | #define GITS_CMD_VMAPP 0x29 |
29 | + /* Note that for M profile we don't have a guest facing FSR, but | 25 | #define GITS_CMD_VMAPTI 0x2A |
30 | + * the env->exception.fsr will be populated by the code that | 26 | #define GITS_CMD_VMAPI 0x2B |
31 | + * raises the fault, in the A profile short-descriptor format. | 27 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c |
32 | */ | 28 | index XXXXXXX..XXXXXXX 100644 |
33 | - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM); | 29 | --- a/hw/intc/arm_gicv3_its.c |
34 | + switch (env->exception.fsr & 0xf) { | 30 | +++ b/hw/intc/arm_gicv3_its.c |
35 | + case 0x8: /* External Abort */ | 31 | @@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s) |
36 | + switch (cs->exception_index) { | 32 | */ |
37 | + case EXCP_PREFETCH_ABORT: | 33 | trace_gicv3_its_cmd_sync(); |
38 | + env->v7m.cfsr |= R_V7M_CFSR_PRECISERR_MASK; | 34 | break; |
39 | + qemu_log_mask(CPU_LOG_INT, "...with CFSR.PRECISERR\n"); | 35 | + case GITS_CMD_VSYNC: |
40 | + break; | 36 | + /* |
41 | + case EXCP_DATA_ABORT: | 37 | + * VSYNC also is a nop, because our implementation is always |
42 | + env->v7m.cfsr |= | 38 | + * in sync. |
43 | + (R_V7M_CFSR_IBUSERR_MASK | R_V7M_CFSR_BFARVALID_MASK); | 39 | + */ |
44 | + env->v7m.bfar = env->exception.vaddress; | 40 | + if (!its_feature_virtual(s)) { |
45 | + qemu_log_mask(CPU_LOG_INT, | 41 | + result = CMD_CONTINUE; |
46 | + "...with CFSR.IBUSERR and BFAR 0x%x\n", | ||
47 | + env->v7m.bfar); | ||
48 | + break; | 42 | + break; |
49 | + } | 43 | + } |
50 | + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS); | 44 | + trace_gicv3_its_cmd_vsync(); |
51 | + break; | 45 | + break; |
52 | + default: | 46 | case GITS_CMD_MAPD: |
53 | + /* All other FSR values are either MPU faults or "can't happen | 47 | result = process_mapd(s, cmdpkt); |
54 | + * for M profile" cases. | 48 | break; |
55 | + */ | 49 | diff --git a/hw/intc/trace-events b/hw/intc/trace-events |
56 | + switch (cs->exception_index) { | 50 | index XXXXXXX..XXXXXXX 100644 |
57 | + case EXCP_PREFETCH_ABORT: | 51 | --- a/hw/intc/trace-events |
58 | + env->v7m.cfsr |= R_V7M_CFSR_IACCVIOL_MASK; | 52 | +++ b/hw/intc/trace-events |
59 | + qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n"); | 53 | @@ -XXX,XX +XXX,XX @@ gicv3_its_cmd_vmapi(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t d |
60 | + break; | 54 | gicv3_its_cmd_vmapti(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t vintid, uint32_t doorbell) "GICv3 ITS: command VMAPI DeviceID 0x%x EventID 0x%x vPEID 0x%x vINTID 0x%x Dbell_pINTID 0x%x" |
61 | + case EXCP_DATA_ABORT: | 55 | gicv3_its_cmd_vmapp(uint32_t vpeid, uint64_t rdbase, int valid, uint64_t vptaddr, uint32_t vptsize) "GICv3 ITS: command VMAPP vPEID 0x%x RDbase 0x%" PRIx64 " V %d VPT_addr 0x%" PRIx64 " VPT_size 0x%x" |
62 | + env->v7m.cfsr |= | 56 | gicv3_its_cmd_vmovp(uint32_t vpeid, uint64_t rdbase) "GICv3 ITS: command VMOVP vPEID 0x%x RDbase 0x%" PRIx64 |
63 | + (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK); | 57 | +gicv3_its_cmd_vsync(void) "GICv3 ITS: command VSYNC" |
64 | + env->v7m.mmfar = env->exception.vaddress; | 58 | gicv3_its_cmd_unknown(unsigned cmd) "GICv3 ITS: unknown command 0x%x" |
65 | + qemu_log_mask(CPU_LOG_INT, | 59 | gicv3_its_cte_read(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table read for ICID 0x%x: valid %d RDBase 0x%x" |
66 | + "...with CFSR.DACCVIOL and MMFAR 0x%x\n", | 60 | gicv3_its_cte_write(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table write for ICID 0x%x: valid %d RDBase 0x%x" |
67 | + env->v7m.mmfar); | ||
68 | + break; | ||
69 | + } | ||
70 | + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM); | ||
71 | + break; | ||
72 | + } | ||
73 | break; | ||
74 | case EXCP_BKPT: | ||
75 | if (semihosting_enabled()) { | ||
76 | -- | 61 | -- |
77 | 2.7.4 | 62 | 2.25.1 |
78 | |||
New patch | |||
---|---|---|---|
1 | We were previously implementing INV (like INVALL) to just blow away | ||
2 | cached highest-priority-pending-LPI information on all connected | ||
3 | redistributors. For GICv4.0, this isn't going to be sufficient, | ||
4 | because the LPI we are invalidating cached information for might be | ||
5 | either physical or virtual, and the required action is different for | ||
6 | those two cases. So we need to do the full process of looking up the | ||
7 | ITE from the devid and eventid. This also means we can do the error | ||
8 | checks that the spec lists for this command. | ||
1 | 9 | ||
10 | Split out INV handling into a process_inv() function like our other | ||
11 | command-processing functions. For the moment, stick to handling only | ||
12 | physical LPIs; we will add the vLPI parts later. | ||
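
The new packet parsing is minimal: per the FIELD() definitions added
below, DEVICEID sits in bits [63:32] of doubleword 0 and EVENTID in
bits [31:0] of doubleword 1 of the 32-byte command. A standalone
sketch of the equivalent shift-and-mask extraction (inv_deviceid() and
inv_eventid() are invented helpers; FIELD_EX64 is QEMU's generic
register-field accessor):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Open-coded equivalent of FIELD_EX64(cmdpkt[n], INV_n, ...) for INV. */
    static uint32_t inv_deviceid(const uint64_t *cmdpkt)
    {
        return (uint32_t)(cmdpkt[0] >> 32);          /* INV_0.DEVICEID: bits [63:32] */
    }

    static uint32_t inv_eventid(const uint64_t *cmdpkt)
    {
        return (uint32_t)(cmdpkt[1] & 0xffffffffu);  /* INV_1.EVENTID: bits [31:0] */
    }

    int main(void)
    {
        /* 4-doubleword command packet; the INV opcode (0x0C) is in bits [7:0]
           of doubleword 0. */
        uint64_t cmdpkt[4] = { ((uint64_t)42 << 32) | 0x0C, 7, 0, 0 };

        printf("INV DeviceID %" PRIu32 " EventID %" PRIu32 "\n",
               inv_deviceid(cmdpkt), inv_eventid(cmdpkt));
        return 0;
    }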
13 | |||
14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
15 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
16 | Message-id: 20220408141550.1271295-19-peter.maydell@linaro.org | ||
17 | --- | ||
18 | hw/intc/gicv3_internal.h | 12 +++++++++ | ||
19 | hw/intc/arm_gicv3_its.c | 50 +++++++++++++++++++++++++++++++++++++- | ||
20 | hw/intc/arm_gicv3_redist.c | 11 +++++++++ | ||
21 | hw/intc/trace-events | 3 ++- | ||
22 | 4 files changed, 74 insertions(+), 2 deletions(-) | ||
23 | |||
24 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
25 | index XXXXXXX..XXXXXXX 100644 | ||
26 | --- a/hw/intc/gicv3_internal.h | ||
27 | +++ b/hw/intc/gicv3_internal.h | ||
28 | @@ -XXX,XX +XXX,XX @@ FIELD(MOVI_0, DEVICEID, 32, 32) | ||
29 | FIELD(MOVI_1, EVENTID, 0, 32) | ||
30 | FIELD(MOVI_2, ICID, 0, 16) | ||
31 | |||
32 | +/* INV command fields */ | ||
33 | +FIELD(INV_0, DEVICEID, 32, 32) | ||
34 | +FIELD(INV_1, EVENTID, 0, 32) | ||
35 | + | ||
36 | /* VMAPI, VMAPTI command fields */ | ||
37 | FIELD(VMAPTI_0, DEVICEID, 32, 32) | ||
38 | FIELD(VMAPTI_1, EVENTID, 0, 32) | ||
39 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_update_lpi(GICv3CPUState *cs); | ||
40 | * an incoming migration has loaded new state. | ||
41 | */ | ||
42 | void gicv3_redist_update_lpi_only(GICv3CPUState *cs); | ||
43 | +/** | ||
44 | + * gicv3_redist_inv_lpi: | ||
45 | + * @cs: GICv3CPUState | ||
46 | + * @irq: LPI to invalidate cached information for | ||
47 | + * | ||
48 | + * Forget or update any cached information associated with this LPI. | ||
49 | + */ | ||
50 | +void gicv3_redist_inv_lpi(GICv3CPUState *cs, int irq); | ||
51 | /** | ||
52 | * gicv3_redist_mov_lpi: | ||
53 | * @src: source redistributor | ||
54 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
55 | index XXXXXXX..XXXXXXX 100644 | ||
56 | --- a/hw/intc/arm_gicv3_its.c | ||
57 | +++ b/hw/intc/arm_gicv3_its.c | ||
58 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vmovp(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
59 | return cbdata.result; | ||
60 | } | ||
61 | |||
62 | +static ItsCmdResult process_inv(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
63 | +{ | ||
64 | + uint32_t devid, eventid; | ||
65 | + ITEntry ite; | ||
66 | + DTEntry dte; | ||
67 | + CTEntry cte; | ||
68 | + ItsCmdResult cmdres; | ||
69 | + | ||
70 | + devid = FIELD_EX64(cmdpkt[0], INV_0, DEVICEID); | ||
71 | + eventid = FIELD_EX64(cmdpkt[1], INV_1, EVENTID); | ||
72 | + | ||
73 | + trace_gicv3_its_cmd_inv(devid, eventid); | ||
74 | + | ||
75 | + cmdres = lookup_ite(s, __func__, devid, eventid, &ite, &dte); | ||
76 | + if (cmdres != CMD_CONTINUE_OK) { | ||
77 | + return cmdres; | ||
78 | + } | ||
79 | + | ||
80 | + switch (ite.inttype) { | ||
81 | + case ITE_INTTYPE_PHYSICAL: | ||
82 | + cmdres = lookup_cte(s, __func__, ite.icid, &cte); | ||
83 | + if (cmdres != CMD_CONTINUE_OK) { | ||
84 | + return cmdres; | ||
85 | + } | ||
86 | + gicv3_redist_inv_lpi(&s->gicv3->cpu[cte.rdbase], ite.intid); | ||
87 | + break; | ||
88 | + case ITE_INTTYPE_VIRTUAL: | ||
89 | + if (!its_feature_virtual(s)) { | ||
90 | + /* Can't happen unless guest is illegally writing to table memory */ | ||
91 | + qemu_log_mask(LOG_GUEST_ERROR, | ||
92 | + "%s: invalid type %d in ITE (table corrupted?)\n", | ||
93 | + __func__, ite.inttype); | ||
94 | + return CMD_CONTINUE; | ||
95 | + } | ||
96 | + /* We will implement the vLPI invalidation in a later commit */ | ||
97 | + g_assert_not_reached(); | ||
98 | + break; | ||
99 | + default: | ||
100 | + g_assert_not_reached(); | ||
101 | + } | ||
102 | + | ||
103 | + return CMD_CONTINUE_OK; | ||
104 | +} | ||
105 | + | ||
106 | /* | ||
107 | * Current implementation blocks until all | ||
108 | * commands are processed | ||
109 | @@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s) | ||
110 | result = process_its_cmd(s, cmdpkt, DISCARD); | ||
111 | break; | ||
112 | case GITS_CMD_INV: | ||
113 | + result = process_inv(s, cmdpkt); | ||
114 | + break; | ||
115 | case GITS_CMD_INVALL: | ||
116 | /* | ||
117 | * Current implementation doesn't cache any ITS tables, | ||
118 | * but the calculated lpi priority information. We only | ||
119 | * need to trigger lpi priority re-calculation to be in | ||
120 | * sync with LPI config table or pending table changes. | ||
121 | + * INVALL operates on a collection specified by ICID so | ||
122 | + * it only affects physical LPIs. | ||
123 | */ | ||
124 | - trace_gicv3_its_cmd_inv(); | ||
125 | + trace_gicv3_its_cmd_invall(); | ||
126 | for (i = 0; i < s->gicv3->num_cpu; i++) { | ||
127 | gicv3_redist_update_lpi(&s->gicv3->cpu[i]); | ||
128 | } | ||
129 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
130 | index XXXXXXX..XXXXXXX 100644 | ||
131 | --- a/hw/intc/arm_gicv3_redist.c | ||
132 | +++ b/hw/intc/arm_gicv3_redist.c | ||
133 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_process_lpi(GICv3CPUState *cs, int irq, int level) | ||
134 | gicv3_redist_lpi_pending(cs, irq, level); | ||
135 | } | ||
136 | |||
137 | +void gicv3_redist_inv_lpi(GICv3CPUState *cs, int irq) | ||
138 | +{ | ||
139 | + /* | ||
140 | + * The only cached information for LPIs we have is the HPPLPI. | ||
141 | + * We could be cleverer about identifying when we don't need | ||
142 | + * to do a full rescan of the pending table, but until we find | ||
143 | + * this is a performance issue, just always recalculate. | ||
144 | + */ | ||
145 | + gicv3_redist_update_lpi(cs); | ||
146 | +} | ||
147 | + | ||
148 | void gicv3_redist_mov_lpi(GICv3CPUState *src, GICv3CPUState *dest, int irq) | ||
149 | { | ||
150 | /* | ||
151 | diff --git a/hw/intc/trace-events b/hw/intc/trace-events | ||
152 | index XXXXXXX..XXXXXXX 100644 | ||
153 | --- a/hw/intc/trace-events | ||
154 | +++ b/hw/intc/trace-events | ||
155 | @@ -XXX,XX +XXX,XX @@ gicv3_its_cmd_mapd(uint32_t devid, uint32_t size, uint64_t ittaddr, int valid) " | ||
156 | gicv3_its_cmd_mapc(uint32_t icid, uint64_t rdbase, int valid) "GICv3 ITS: command MAPC ICID 0x%x RDbase 0x%" PRIx64 " V %d" | ||
157 | gicv3_its_cmd_mapi(uint32_t devid, uint32_t eventid, uint32_t icid) "GICv3 ITS: command MAPI DeviceID 0x%x EventID 0x%x ICID 0x%x" | ||
158 | gicv3_its_cmd_mapti(uint32_t devid, uint32_t eventid, uint32_t icid, uint32_t intid) "GICv3 ITS: command MAPTI DeviceID 0x%x EventID 0x%x ICID 0x%x pINTID 0x%x" | ||
159 | -gicv3_its_cmd_inv(void) "GICv3 ITS: command INV or INVALL" | ||
160 | +gicv3_its_cmd_inv(uint32_t devid, uint32_t eventid) "GICv3 ITS: command INV DeviceID 0x%x EventID 0x%x" | ||
161 | +gicv3_its_cmd_invall(void) "GICv3 ITS: command INVALL" | ||
162 | gicv3_its_cmd_movall(uint64_t rd1, uint64_t rd2) "GICv3 ITS: command MOVALL RDbase1 0x%" PRIx64 " RDbase2 0x%" PRIx64 | ||
163 | gicv3_its_cmd_movi(uint32_t devid, uint32_t eventid, uint32_t icid) "GICv3 ITS: command MOVI DeviceID 0x%x EventID 0x%x ICID 0x%x" | ||
164 | gicv3_its_cmd_vmapi(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t doorbell) "GICv3 ITS: command VMAPI DeviceID 0x%x EventID 0x%x vPEID 0x%x Dbell_pINTID 0x%x" | ||
165 | -- | ||
166 | 2.25.1
New patch | |||
---|---|---|---|
1 | Implement the ITS side of the handling of the INV command for | ||
2 | virtual interrupts; as usual this calls into a redistributor | ||
3 | function which we leave as a stub to fill in later. | ||
1 | 4 | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
6 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Message-id: 20220408141550.1271295-20-peter.maydell@linaro.org | ||
8 | --- | ||
9 | hw/intc/gicv3_internal.h | 9 +++++++++ | ||
10 | hw/intc/arm_gicv3_its.c | 16 ++++++++++++++-- | ||
11 | hw/intc/arm_gicv3_redist.c | 8 ++++++++ | ||
12 | 3 files changed, 31 insertions(+), 2 deletions(-) | ||
13 | |||
14 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
15 | index XXXXXXX..XXXXXXX 100644 | ||
16 | --- a/hw/intc/gicv3_internal.h | ||
17 | +++ b/hw/intc/gicv3_internal.h | ||
18 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_update_lpi_only(GICv3CPUState *cs); | ||
19 | * Forget or update any cached information associated with this LPI. | ||
20 | */ | ||
21 | void gicv3_redist_inv_lpi(GICv3CPUState *cs, int irq); | ||
22 | +/** | ||
23 | + * gicv3_redist_inv_vlpi: | ||
24 | + * @cs: GICv3CPUState | ||
25 | + * @irq: vLPI to invalidate cached information for | ||
26 | + * @vptaddr: (guest) address of vLPI table | ||
27 | + * | ||
28 | + * Forget or update any cached information associated with this vLPI. | ||
29 | + */ | ||
30 | +void gicv3_redist_inv_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr); | ||
31 | /** | ||
32 | * gicv3_redist_mov_lpi: | ||
33 | * @src: source redistributor | ||
34 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
35 | index XXXXXXX..XXXXXXX 100644 | ||
36 | --- a/hw/intc/arm_gicv3_its.c | ||
37 | +++ b/hw/intc/arm_gicv3_its.c | ||
38 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_inv(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
39 | ITEntry ite; | ||
40 | DTEntry dte; | ||
41 | CTEntry cte; | ||
42 | + VTEntry vte; | ||
43 | ItsCmdResult cmdres; | ||
44 | |||
45 | devid = FIELD_EX64(cmdpkt[0], INV_0, DEVICEID); | ||
46 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_inv(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
47 | __func__, ite.inttype); | ||
48 | return CMD_CONTINUE; | ||
49 | } | ||
50 | - /* We will implement the vLPI invalidation in a later commit */ | ||
51 | - g_assert_not_reached(); | ||
52 | + | ||
53 | + cmdres = lookup_vte(s, __func__, ite.vpeid, &vte); | ||
54 | + if (cmdres != CMD_CONTINUE_OK) { | ||
55 | + return cmdres; | ||
56 | + } | ||
57 | + if (!intid_in_lpi_range(ite.intid) || | ||
58 | + ite.intid >= (1ULL << (vte.vptsize + 1))) { | ||
59 | + qemu_log_mask(LOG_GUEST_ERROR, "%s: intid 0x%x out of range\n", | ||
60 | + __func__, ite.intid); | ||
61 | + return CMD_CONTINUE; | ||
62 | + } | ||
63 | + gicv3_redist_inv_vlpi(&s->gicv3->cpu[vte.rdbase], ite.intid, | ||
64 | + vte.vptaddr << 16); | ||
65 | break; | ||
66 | default: | ||
67 | g_assert_not_reached(); | ||
68 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
69 | index XXXXXXX..XXXXXXX 100644 | ||
70 | --- a/hw/intc/arm_gicv3_redist.c | ||
71 | +++ b/hw/intc/arm_gicv3_redist.c | ||
72 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_process_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr, | ||
73 | */ | ||
74 | } | ||
75 | |||
76 | +void gicv3_redist_inv_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr) | ||
77 | +{ | ||
78 | + /* | ||
79 | + * The redistributor handling for invalidating cached information | ||
80 | + * about a VLPI will be added in a subsequent commit. | ||
81 | + */ | ||
82 | +} | ||
83 | + | ||
84 | void gicv3_redist_set_irq(GICv3CPUState *cs, int irq, int level) | ||
85 | { | ||
86 | /* Update redistributor state for a change in an external PPI input line */ | ||
87 | -- | ||
88 | 2.25.1 | diff view generated by jsdifflib |
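To make the new range check in process_inv() above concrete, here is a small standalone C sketch, not QEMU code: the helper name vlpi_intid_valid and the LPI_INTID_START constant are invented for this example, and the real intid_in_lpi_range() helper may apply further bounds. The idea is that a vLPI must be in the LPI INTID range (8192 and up) and must fit in the vPE's table, which covers 2^(VPT_size + 1) interrupt IDs.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LPI_INTID_START 8192   /* GICV3_LPI_INTID_START in QEMU */

    /* Rough equivalent of the two checks process_inv() makes on ite.intid. */
    static bool vlpi_intid_valid(uint32_t intid, unsigned vptsize)
    {
        if (intid < LPI_INTID_START) {
            return false;                       /* not an LPI at all */
        }
        return intid < (1ULL << (vptsize + 1)); /* fits in the vLPI table */
    }

    int main(void)
    {
        /* A VPT_size of 13 covers INTIDs up to 16383. */
        printf("%d %d\n", vlpi_intid_valid(8192, 13), vlpi_intid_valid(20000, 13));
        return 0;
    }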
New patch | |||
---|---|---|---|
1 | Implement the GICv4 VMOVI command, which moves the pending state | ||
2 | of a virtual interrupt from one redistributor to another. As with | ||
3 | MOVI, we handle the "parse and validate command arguments and | ||
4 | table lookups" part in the ITS source file, and pass the final | ||
5 | results to a function in the redistributor which will do the | ||
6 | actual operation. As with the "make a VLPI pending" change, | ||
7 | for the moment we leave that redistributor function as a stub, | ||
8 | to be implemented in a later commit. | ||
1 | 9 | ||
10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
11 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
12 | Message-id: 20220408141550.1271295-21-peter.maydell@linaro.org | ||
13 | --- | ||
14 | hw/intc/gicv3_internal.h | 23 +++++++++++ | ||
15 | hw/intc/arm_gicv3_its.c | 82 ++++++++++++++++++++++++++++++++++++++ | ||
16 | hw/intc/arm_gicv3_redist.c | 10 +++++ | ||
17 | hw/intc/trace-events | 1 + | ||
18 | 4 files changed, 116 insertions(+) | ||
19 | |||
20 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
21 | index XXXXXXX..XXXXXXX 100644 | ||
22 | --- a/hw/intc/gicv3_internal.h | ||
23 | +++ b/hw/intc/gicv3_internal.h | ||
24 | @@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1) | ||
25 | #define GITS_CMD_INVALL 0x0D | ||
26 | #define GITS_CMD_MOVALL 0x0E | ||
27 | #define GITS_CMD_DISCARD 0x0F | ||
28 | +#define GITS_CMD_VMOVI 0x21 | ||
29 | #define GITS_CMD_VMOVP 0x22 | ||
30 | #define GITS_CMD_VSYNC 0x25 | ||
31 | #define GITS_CMD_VMAPP 0x29 | ||
32 | @@ -XXX,XX +XXX,XX @@ FIELD(VMOVP_2, RDBASE, 16, 36) | ||
33 | FIELD(VMOVP_2, DB, 63, 1) /* GICv4.1 only */ | ||
34 | FIELD(VMOVP_3, DEFAULT_DOORBELL, 0, 32) /* GICv4.1 only */ | ||
35 | |||
36 | +/* VMOVI command fields */ | ||
37 | +FIELD(VMOVI_0, DEVICEID, 32, 32) | ||
38 | +FIELD(VMOVI_1, EVENTID, 0, 32) | ||
39 | +FIELD(VMOVI_1, VPEID, 32, 16) | ||
40 | +FIELD(VMOVI_2, D, 0, 1) | ||
41 | +FIELD(VMOVI_2, DOORBELL, 32, 32) | ||
42 | + | ||
43 | /* | ||
44 | * 12 bytes Interrupt translation Table Entry size | ||
45 | * as per Table 5.3 in GICv3 spec | ||
46 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_mov_lpi(GICv3CPUState *src, GICv3CPUState *dest, int irq); | ||
47 | * by the ITS MOVALL command. | ||
48 | */ | ||
49 | void gicv3_redist_movall_lpis(GICv3CPUState *src, GICv3CPUState *dest); | ||
50 | +/** | ||
51 | + * gicv3_redist_mov_vlpi: | ||
52 | + * @src: source redistributor | ||
53 | + * @src_vptaddr: (guest) address of source VLPI table | ||
54 | + * @dest: destination redistributor | ||
55 | + * @dest_vptaddr: (guest) address of destination VLPI table | ||
56 | + * @irq: VLPI to update | ||
57 | + * @doorbell: doorbell for destination (1023 for "no doorbell") | ||
58 | + * | ||
59 | + * Move the pending state of the specified VLPI from @src to @dest, | ||
60 | + * as required by the ITS VMOVI command. | ||
61 | + */ | ||
62 | +void gicv3_redist_mov_vlpi(GICv3CPUState *src, uint64_t src_vptaddr, | ||
63 | + GICv3CPUState *dest, uint64_t dest_vptaddr, | ||
64 | + int irq, int doorbell); | ||
65 | |||
66 | void gicv3_redist_send_sgi(GICv3CPUState *cs, int grp, int irq, bool ns); | ||
67 | void gicv3_init_cpuif(GICv3State *s); | ||
68 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
69 | index XXXXXXX..XXXXXXX 100644 | ||
70 | --- a/hw/intc/arm_gicv3_its.c | ||
71 | +++ b/hw/intc/arm_gicv3_its.c | ||
72 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vmovp(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
73 | return cbdata.result; | ||
74 | } | ||
75 | |||
76 | +static ItsCmdResult process_vmovi(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
77 | +{ | ||
78 | + uint32_t devid, eventid, vpeid, doorbell; | ||
79 | + bool doorbell_valid; | ||
80 | + DTEntry dte; | ||
81 | + ITEntry ite; | ||
82 | + VTEntry old_vte, new_vte; | ||
83 | + ItsCmdResult cmdres; | ||
84 | + | ||
85 | + if (!its_feature_virtual(s)) { | ||
86 | + return CMD_CONTINUE; | ||
87 | + } | ||
88 | + | ||
89 | + devid = FIELD_EX64(cmdpkt[0], VMOVI_0, DEVICEID); | ||
90 | + eventid = FIELD_EX64(cmdpkt[1], VMOVI_1, EVENTID); | ||
91 | + vpeid = FIELD_EX64(cmdpkt[1], VMOVI_1, VPEID); | ||
92 | + doorbell_valid = FIELD_EX64(cmdpkt[2], VMOVI_2, D); | ||
93 | + doorbell = FIELD_EX64(cmdpkt[2], VMOVI_2, DOORBELL); | ||
94 | + | ||
95 | + trace_gicv3_its_cmd_vmovi(devid, eventid, vpeid, doorbell_valid, doorbell); | ||
96 | + | ||
97 | + if (doorbell_valid && !valid_doorbell(doorbell)) { | ||
98 | + qemu_log_mask(LOG_GUEST_ERROR, | ||
99 | + "%s: invalid doorbell 0x%x\n", __func__, doorbell); | ||
100 | + return CMD_CONTINUE; | ||
101 | + } | ||
102 | + | ||
103 | + cmdres = lookup_ite(s, __func__, devid, eventid, &ite, &dte); | ||
104 | + if (cmdres != CMD_CONTINUE_OK) { | ||
105 | + return cmdres; | ||
106 | + } | ||
107 | + | ||
108 | + if (ite.inttype != ITE_INTTYPE_VIRTUAL) { | ||
109 | + qemu_log_mask(LOG_GUEST_ERROR, "%s: ITE is not for virtual interrupt\n", | ||
110 | + __func__); | ||
111 | + return CMD_CONTINUE; | ||
112 | + } | ||
113 | + | ||
114 | + cmdres = lookup_vte(s, __func__, ite.vpeid, &old_vte); | ||
115 | + if (cmdres != CMD_CONTINUE_OK) { | ||
116 | + return cmdres; | ||
117 | + } | ||
118 | + cmdres = lookup_vte(s, __func__, vpeid, &new_vte); | ||
119 | + if (cmdres != CMD_CONTINUE_OK) { | ||
120 | + return cmdres; | ||
121 | + } | ||
122 | + | ||
123 | + if (!intid_in_lpi_range(ite.intid) || | ||
124 | + ite.intid >= (1ULL << (old_vte.vptsize + 1)) || | ||
125 | + ite.intid >= (1ULL << (new_vte.vptsize + 1))) { | ||
126 | + qemu_log_mask(LOG_GUEST_ERROR, | ||
127 | + "%s: ITE intid 0x%x out of range\n", | ||
128 | + __func__, ite.intid); | ||
129 | + return CMD_CONTINUE; | ||
130 | + } | ||
131 | + | ||
132 | + ite.vpeid = vpeid; | ||
133 | + if (doorbell_valid) { | ||
134 | + ite.doorbell = doorbell; | ||
135 | + } | ||
136 | + | ||
137 | + /* | ||
138 | + * Move the LPI from the old redistributor to the new one. We don't | ||
139 | + * need to do anything if the guest somehow specified the | ||
140 | + * same pending table for source and destination. | ||
141 | + */ | ||
142 | + if (old_vte.vptaddr != new_vte.vptaddr) { | ||
143 | + gicv3_redist_mov_vlpi(&s->gicv3->cpu[old_vte.rdbase], | ||
144 | + old_vte.vptaddr << 16, | ||
145 | + &s->gicv3->cpu[new_vte.rdbase], | ||
146 | + new_vte.vptaddr << 16, | ||
147 | + ite.intid, | ||
148 | + ite.doorbell); | ||
149 | + } | ||
150 | + | ||
151 | + /* Update the ITE to the new VPEID and possibly doorbell values */ | ||
152 | + return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE_OK : CMD_STALL; | ||
153 | +} | ||
154 | + | ||
155 | static ItsCmdResult process_inv(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
156 | { | ||
157 | uint32_t devid, eventid; | ||
158 | @@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s) | ||
159 | case GITS_CMD_VMOVP: | ||
160 | result = process_vmovp(s, cmdpkt); | ||
161 | break; | ||
162 | + case GITS_CMD_VMOVI: | ||
163 | + result = process_vmovi(s, cmdpkt); | ||
164 | + break; | ||
165 | default: | ||
166 | trace_gicv3_its_cmd_unknown(cmd); | ||
167 | break; | ||
168 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
169 | index XXXXXXX..XXXXXXX 100644 | ||
170 | --- a/hw/intc/arm_gicv3_redist.c | ||
171 | +++ b/hw/intc/arm_gicv3_redist.c | ||
172 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_process_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr, | ||
173 | */ | ||
174 | } | ||
175 | |||
176 | +void gicv3_redist_mov_vlpi(GICv3CPUState *src, uint64_t src_vptaddr, | ||
177 | + GICv3CPUState *dest, uint64_t dest_vptaddr, | ||
178 | + int irq, int doorbell) | ||
179 | +{ | ||
180 | + /* | ||
181 | + * The redistributor handling for moving a VLPI will be added | ||
182 | + * in a subsequent commit. | ||
183 | + */ | ||
184 | +} | ||
185 | + | ||
186 | void gicv3_redist_inv_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr) | ||
187 | { | ||
188 | /* | ||
189 | diff --git a/hw/intc/trace-events b/hw/intc/trace-events | ||
190 | index XXXXXXX..XXXXXXX 100644 | ||
191 | --- a/hw/intc/trace-events | ||
192 | +++ b/hw/intc/trace-events | ||
193 | @@ -XXX,XX +XXX,XX @@ gicv3_its_cmd_vmapti(uint32_t devid, uint32_t eventid, uint32_t vpeid, uint32_t | ||
194 | gicv3_its_cmd_vmapp(uint32_t vpeid, uint64_t rdbase, int valid, uint64_t vptaddr, uint32_t vptsize) "GICv3 ITS: command VMAPP vPEID 0x%x RDbase 0x%" PRIx64 " V %d VPT_addr 0x%" PRIx64 " VPT_size 0x%x" | ||
195 | gicv3_its_cmd_vmovp(uint32_t vpeid, uint64_t rdbase) "GICv3 ITS: command VMOVP vPEID 0x%x RDbase 0x%" PRIx64 | ||
196 | gicv3_its_cmd_vsync(void) "GICv3 ITS: command VSYNC" | ||
197 | +gicv3_its_cmd_vmovi(uint32_t devid, uint32_t eventid, uint32_t vpeid, int dbvalid, uint32_t doorbell) "GICv3 ITS: command VMOVI DeviceID 0x%x EventID 0x%x vPEID 0x%x D %d Dbell_pINTID 0x%x" | ||
198 | gicv3_its_cmd_unknown(unsigned cmd) "GICv3 ITS: unknown command 0x%x" | ||
199 | gicv3_its_cte_read(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table read for ICID 0x%x: valid %d RDBase 0x%x" | ||
200 | gicv3_its_cte_write(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table write for ICID 0x%x: valid %d RDBase 0x%x" | ||
201 | -- | ||
202 | 2.25.1 | diff view generated by jsdifflib |
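For reference, the VMOVI field layout added to gicv3_internal.h above can be read as a packing recipe: opcode 0x21 in DW0 bits [7:0], DeviceID in DW0[63:32], EventID in DW1[31:0], vPEID in DW1[47:32], D in DW2[0] and the doorbell pINTID in DW2[63:32]. The standalone sketch below shows how a hypothetical guest driver might assemble the four 64-bit doublewords of a VMOVI command; it is an illustration only, not code from QEMU or a real driver, and the example IDs are made up.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GITS_CMD_VMOVI 0x21

    /* Pack a VMOVI command following the FIELD() layout in the patch. */
    static void encode_vmovi(uint64_t cmd[4], uint32_t devid, uint32_t eventid,
                             uint16_t vpeid, int doorbell_valid, uint32_t doorbell)
    {
        cmd[0] = GITS_CMD_VMOVI | ((uint64_t)devid << 32);
        cmd[1] = (uint64_t)eventid | ((uint64_t)vpeid << 32);
        cmd[2] = (doorbell_valid ? 1 : 0) | ((uint64_t)doorbell << 32);
        cmd[3] = 0;
    }

    int main(void)
    {
        uint64_t cmd[4];

        encode_vmovi(cmd, 0x10, 0x3, 0x5, 1, 0x2040);
        printf("DW0 0x%" PRIx64 " DW1 0x%" PRIx64 " DW2 0x%" PRIx64 "\n",
               cmd[0], cmd[1], cmd[2]);
        return 0;
    }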
New patch | |||
---|---|---|---|
1 | The VINVALL command should cause any cached information in the | ||
2 | ITS or redistributor for the specified vCPU to be dropped or | ||
3 | otherwise made consistent with the in-memory LPI configuration | ||
4 | tables. | ||
1 | 5 | ||
6 | Here we implement the command and table parsing, leaving the | ||
7 | redistributor part as a stub for the moment, as usual. | ||
8 | |||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
11 | Message-id: 20220408141550.1271295-22-peter.maydell@linaro.org | ||
12 | --- | ||
13 | hw/intc/gicv3_internal.h | 13 +++++++++++++ | ||
14 | hw/intc/arm_gicv3_its.c | 26 ++++++++++++++++++++++++++ | ||
15 | hw/intc/arm_gicv3_redist.c | 5 +++++ | ||
16 | hw/intc/trace-events | 1 + | ||
17 | 4 files changed, 45 insertions(+) | ||
18 | |||
19 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
20 | index XXXXXXX..XXXXXXX 100644 | ||
21 | --- a/hw/intc/gicv3_internal.h | ||
22 | +++ b/hw/intc/gicv3_internal.h | ||
23 | @@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1) | ||
24 | #define GITS_CMD_VMAPP 0x29 | ||
25 | #define GITS_CMD_VMAPTI 0x2A | ||
26 | #define GITS_CMD_VMAPI 0x2B | ||
27 | +#define GITS_CMD_VINVALL 0x2D | ||
28 | |||
29 | /* MAPC command fields */ | ||
30 | #define ICID_LENGTH 16 | ||
31 | @@ -XXX,XX +XXX,XX @@ FIELD(VMOVI_1, VPEID, 32, 16) | ||
32 | FIELD(VMOVI_2, D, 0, 1) | ||
33 | FIELD(VMOVI_2, DOORBELL, 32, 32) | ||
34 | |||
35 | +/* VINVALL command fields */ | ||
36 | +FIELD(VINVALL_1, VPEID, 32, 16) | ||
37 | + | ||
38 | /* | ||
39 | * 12 bytes Interrupt translation Table Entry size | ||
40 | * as per Table 5.3 in GICv3 spec | ||
41 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_movall_lpis(GICv3CPUState *src, GICv3CPUState *dest); | ||
42 | void gicv3_redist_mov_vlpi(GICv3CPUState *src, uint64_t src_vptaddr, | ||
43 | GICv3CPUState *dest, uint64_t dest_vptaddr, | ||
44 | int irq, int doorbell); | ||
45 | +/** | ||
46 | + * gicv3_redist_vinvall: | ||
47 | + * @cs: GICv3CPUState | ||
48 | + * @vptaddr: address of VLPI pending table | ||
49 | + * | ||
50 | + * On redistributor @cs, invalidate all cached information associated | ||
51 | + * with the vCPU defined by @vptaddr. | ||
52 | + */ | ||
53 | +void gicv3_redist_vinvall(GICv3CPUState *cs, uint64_t vptaddr); | ||
54 | |||
55 | void gicv3_redist_send_sgi(GICv3CPUState *cs, int grp, int irq, bool ns); | ||
56 | void gicv3_init_cpuif(GICv3State *s); | ||
57 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
58 | index XXXXXXX..XXXXXXX 100644 | ||
59 | --- a/hw/intc/arm_gicv3_its.c | ||
60 | +++ b/hw/intc/arm_gicv3_its.c | ||
61 | @@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vmovi(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
62 | return update_ite(s, eventid, &dte, &ite) ? CMD_CONTINUE_OK : CMD_STALL; | ||
63 | } | ||
64 | |||
65 | +static ItsCmdResult process_vinvall(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
66 | +{ | ||
67 | + VTEntry vte; | ||
68 | + uint32_t vpeid; | ||
69 | + ItsCmdResult cmdres; | ||
70 | + | ||
71 | + if (!its_feature_virtual(s)) { | ||
72 | + return CMD_CONTINUE; | ||
73 | + } | ||
74 | + | ||
75 | + vpeid = FIELD_EX64(cmdpkt[1], VINVALL_1, VPEID); | ||
76 | + | ||
77 | + trace_gicv3_its_cmd_vinvall(vpeid); | ||
78 | + | ||
79 | + cmdres = lookup_vte(s, __func__, vpeid, &vte); | ||
80 | + if (cmdres != CMD_CONTINUE_OK) { | ||
81 | + return cmdres; | ||
82 | + } | ||
83 | + | ||
84 | + gicv3_redist_vinvall(&s->gicv3->cpu[vte.rdbase], vte.vptaddr << 16); | ||
85 | + return CMD_CONTINUE_OK; | ||
86 | +} | ||
87 | + | ||
88 | static ItsCmdResult process_inv(GICv3ITSState *s, const uint64_t *cmdpkt) | ||
89 | { | ||
90 | uint32_t devid, eventid; | ||
91 | @@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s) | ||
92 | case GITS_CMD_VMOVI: | ||
93 | result = process_vmovi(s, cmdpkt); | ||
94 | break; | ||
95 | + case GITS_CMD_VINVALL: | ||
96 | + result = process_vinvall(s, cmdpkt); | ||
97 | + break; | ||
98 | default: | ||
99 | trace_gicv3_its_cmd_unknown(cmd); | ||
100 | break; | ||
101 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
102 | index XXXXXXX..XXXXXXX 100644 | ||
103 | --- a/hw/intc/arm_gicv3_redist.c | ||
104 | +++ b/hw/intc/arm_gicv3_redist.c | ||
105 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_mov_vlpi(GICv3CPUState *src, uint64_t src_vptaddr, | ||
106 | */ | ||
107 | } | ||
108 | |||
109 | +void gicv3_redist_vinvall(GICv3CPUState *cs, uint64_t vptaddr) | ||
110 | +{ | ||
111 | + /* The redistributor handling will be added in a subsequent commit */ | ||
112 | +} | ||
113 | + | ||
114 | void gicv3_redist_inv_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr) | ||
115 | { | ||
116 | /* | ||
117 | diff --git a/hw/intc/trace-events b/hw/intc/trace-events | ||
118 | index XXXXXXX..XXXXXXX 100644 | ||
119 | --- a/hw/intc/trace-events | ||
120 | +++ b/hw/intc/trace-events | ||
121 | @@ -XXX,XX +XXX,XX @@ gicv3_its_cmd_vmapp(uint32_t vpeid, uint64_t rdbase, int valid, uint64_t vptaddr | ||
122 | gicv3_its_cmd_vmovp(uint32_t vpeid, uint64_t rdbase) "GICv3 ITS: command VMOVP vPEID 0x%x RDbase 0x%" PRIx64 | ||
123 | gicv3_its_cmd_vsync(void) "GICv3 ITS: command VSYNC" | ||
124 | gicv3_its_cmd_vmovi(uint32_t devid, uint32_t eventid, uint32_t vpeid, int dbvalid, uint32_t doorbell) "GICv3 ITS: command VMOVI DeviceID 0x%x EventID 0x%x vPEID 0x%x D %d Dbell_pINTID 0x%x" | ||
125 | +gicv3_its_cmd_vinvall(uint32_t vpeid) "GICv3 ITS: command VINVALL vPEID 0x%x" | ||
126 | gicv3_its_cmd_unknown(unsigned cmd) "GICv3 ITS: unknown command 0x%x" | ||
127 | gicv3_its_cte_read(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table read for ICID 0x%x: valid %d RDBase 0x%x" | ||
128 | gicv3_its_cte_write(uint32_t icid, int valid, uint32_t rdbase) "GICv3 ITS: Collection Table write for ICID 0x%x: valid %d RDBase 0x%x" | ||
129 | -- | ||
130 | 2.25.1 | diff view generated by jsdifflib |
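As with the other GICv4 commands, VINVALL carries very little payload: the opcode (0x2D, in bits [7:0] of doubleword 0) and the vPEID in bits [47:32] of doubleword 1. The standalone snippet below mirrors what the FIELD_EX64(cmdpkt[1], VINVALL_1, VPEID) extraction in process_vinvall() does; it is a sketch for illustration, not QEMU code, and the vPEID value is arbitrary.

    #include <stdint.h>
    #include <stdio.h>

    #define GITS_CMD_VINVALL 0x2D

    /* Pull the vPEID out of doubleword 1, bits [47:32]. */
    static uint16_t vinvall_vpeid(const uint64_t cmd[4])
    {
        return (cmd[1] >> 32) & 0xffff;
    }

    int main(void)
    {
        /* Example command packet; the vPEID value 0x42 is arbitrary. */
        uint64_t cmd[4] = { GITS_CMD_VINVALL, (uint64_t)0x42 << 32, 0, 0 };

        printf("vPEID 0x%x\n", (unsigned)vinvall_vpeid(cmd));
        return 0;
    }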
New patch | |||
---|---|---|---|
1 | The GICv4 extends the redistributor register map -- where GICv3 | ||
2 | had two 64KB frames per CPU, GICv4 has four frames. Add support | ||
3 | for the extra frame by using a new gicv3_redist_size() function | ||
4 | in the places in the GIC implementation which currently use | ||
5 | a fixed constant size for the redistributor register block. | ||
6 | (Until we implement the extra registers, they will be RAZ/WI.) | ||
1 | 7 | ||
8 | Any board that wants to use a GICv4 will need to also adjust | ||
9 | to handle the different sized redistributor register block; | ||
10 | that will be done separately. | ||
11 | |||
12 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
13 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
14 | Message-id: 20220408141550.1271295-23-peter.maydell@linaro.org | ||
15 | --- | ||
16 | hw/intc/gicv3_internal.h | 21 +++++++++++++++++++++ | ||
17 | include/hw/intc/arm_gicv3_common.h | 5 +++++ | ||
18 | hw/intc/arm_gicv3_common.c | 2 +- | ||
19 | hw/intc/arm_gicv3_redist.c | 8 ++++---- | ||
20 | 4 files changed, 31 insertions(+), 5 deletions(-) | ||
21 | |||
22 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
23 | index XXXXXXX..XXXXXXX 100644 | ||
24 | --- a/hw/intc/gicv3_internal.h | ||
25 | +++ b/hw/intc/gicv3_internal.h | ||
26 | @@ -XXX,XX +XXX,XX @@ FIELD(VTE, RDBASE, 42, RDBASE_PROCNUM_LENGTH) | ||
27 | |||
28 | /* Functions internal to the emulated GICv3 */ | ||
29 | |||
30 | +/** | ||
31 | + * gicv3_redist_size: | ||
32 | + * @s: GICv3State | ||
33 | + * | ||
34 | + * Return the size of the redistributor register frame in bytes | ||
35 | + * (which depends on what GIC version this is) | ||
36 | + */ | ||
37 | +static inline int gicv3_redist_size(GICv3State *s) | ||
38 | +{ | ||
39 | + /* | ||
40 | + * Redistributor size is controlled by the redistributor GICR_TYPER.VLPIS. | ||
41 | + * It's the same for every redistributor in the GIC, so arbitrarily | ||
42 | + * use the register field in the first one. | ||
43 | + */ | ||
44 | + if (s->cpu[0].gicr_typer & GICR_TYPER_VLPIS) { | ||
45 | + return GICV4_REDIST_SIZE; | ||
46 | + } else { | ||
47 | + return GICV3_REDIST_SIZE; | ||
48 | + } | ||
49 | +} | ||
50 | + | ||
51 | /** | ||
52 | * gicv3_intid_is_special: | ||
53 | * @intid: interrupt ID | ||
54 | diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h | ||
55 | index XXXXXXX..XXXXXXX 100644 | ||
56 | --- a/include/hw/intc/arm_gicv3_common.h | ||
57 | +++ b/include/hw/intc/arm_gicv3_common.h | ||
58 | @@ -XXX,XX +XXX,XX @@ | ||
59 | |||
60 | #define GICV3_LPI_INTID_START 8192 | ||
61 | |||
62 | +/* | ||
63 | + * The redistributor in GICv3 has two 64KB frames per CPU; in | ||
64 | + * GICv4 it has four 64KB frames per CPU. | ||
65 | + */ | ||
66 | #define GICV3_REDIST_SIZE 0x20000 | ||
67 | +#define GICV4_REDIST_SIZE 0x40000 | ||
68 | |||
69 | /* Number of SGI target-list bits */ | ||
70 | #define GICV3_TARGETLIST_BITS 16 | ||
71 | diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c | ||
72 | index XXXXXXX..XXXXXXX 100644 | ||
73 | --- a/hw/intc/arm_gicv3_common.c | ||
74 | +++ b/hw/intc/arm_gicv3_common.c | ||
75 | @@ -XXX,XX +XXX,XX @@ void gicv3_init_irqs_and_mmio(GICv3State *s, qemu_irq_handler handler, | ||
76 | |||
77 | memory_region_init_io(®ion->iomem, OBJECT(s), | ||
78 | ops ? &ops[1] : NULL, region, name, | ||
79 | - s->redist_region_count[i] * GICV3_REDIST_SIZE); | ||
80 | + s->redist_region_count[i] * gicv3_redist_size(s)); | ||
81 | sysbus_init_mmio(sbd, ®ion->iomem); | ||
82 | g_free(name); | ||
83 | } | ||
84 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
85 | index XXXXXXX..XXXXXXX 100644 | ||
86 | --- a/hw/intc/arm_gicv3_redist.c | ||
87 | +++ b/hw/intc/arm_gicv3_redist.c | ||
88 | @@ -XXX,XX +XXX,XX @@ MemTxResult gicv3_redist_read(void *opaque, hwaddr offset, uint64_t *data, | ||
89 | * in the memory map); if so then the GIC has multiple MemoryRegions | ||
90 | * for the redistributors. | ||
91 | */ | ||
92 | - cpuidx = region->cpuidx + offset / GICV3_REDIST_SIZE; | ||
93 | - offset %= GICV3_REDIST_SIZE; | ||
94 | + cpuidx = region->cpuidx + offset / gicv3_redist_size(s); | ||
95 | + offset %= gicv3_redist_size(s); | ||
96 | |||
97 | cs = &s->cpu[cpuidx]; | ||
98 | |||
99 | @@ -XXX,XX +XXX,XX @@ MemTxResult gicv3_redist_write(void *opaque, hwaddr offset, uint64_t data, | ||
100 | * in the memory map); if so then the GIC has multiple MemoryRegions | ||
101 | * for the redistributors. | ||
102 | */ | ||
103 | - cpuidx = region->cpuidx + offset / GICV3_REDIST_SIZE; | ||
104 | - offset %= GICV3_REDIST_SIZE; | ||
105 | + cpuidx = region->cpuidx + offset / gicv3_redist_size(s); | ||
106 | + offset %= gicv3_redist_size(s); | ||
107 | |||
108 | cs = &s->cpu[cpuidx]; | ||
109 | |||
110 | -- | ||
111 | 2.25.1 | diff view generated by jsdifflib |
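The effect of gicv3_redist_size() on the MMIO decode can be pictured with a small standalone sketch (not QEMU code; the function and variable names below are invented): with GICv4 each CPU's redistributor occupies four 64KB frames, 0x40000 bytes, instead of the two frames, 0x20000 bytes, of GICv3, so the divisor used to turn a region offset into a CPU index changes accordingly.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GICV3_REDIST_SIZE 0x20000   /* two 64KB frames per CPU */
    #define GICV4_REDIST_SIZE 0x40000   /* four 64KB frames per CPU */

    /* Split an offset within a redistributor region into (cpu, frame offset). */
    static void redist_decode(uint64_t offset, int gic_version,
                              unsigned *cpuidx, uint64_t *frame_offset)
    {
        uint64_t size = (gic_version >= 4) ? GICV4_REDIST_SIZE : GICV3_REDIST_SIZE;

        *cpuidx = offset / size;
        *frame_offset = offset % size;
    }

    int main(void)
    {
        unsigned cpu;
        uint64_t off;

        redist_decode(0x45000, 4, &cpu, &off);   /* GICv4: cpu 1, offset 0x5000 */
        printf("cpu %u offset 0x%" PRIx64 "\n", cpu, off);
        redist_decode(0x45000, 3, &cpu, &off);   /* GICv3: cpu 2, offset 0x5000 */
        printf("cpu %u offset 0x%" PRIx64 "\n", cpu, off);
        return 0;
    }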
New patch | |||
---|---|---|---|
1 | Implement the new GICv4 redistributor registers: GICR_VPROPBASER | ||
2 | and GICR_VPENDBASER; for the moment we implement these as simple | ||
3 | reads-as-written stubs, together with the necessary migration | ||
4 | and reset handling. | ||
1 | 5 | ||
6 | We don't put ID-register checks on the handling of these registers, | ||
7 | because they are all in the only-in-v4 extra register frames, so | ||
8 | they're not accessible in a GICv3. | ||
9 | |||
10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
11 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
12 | Message-id: 20220408141550.1271295-24-peter.maydell@linaro.org | ||
13 | --- | ||
14 | hw/intc/gicv3_internal.h | 21 +++++++++++ | ||
15 | include/hw/intc/arm_gicv3_common.h | 3 ++ | ||
16 | hw/intc/arm_gicv3_common.c | 22 ++++++++++++ | ||
17 | hw/intc/arm_gicv3_redist.c | 56 ++++++++++++++++++++++++++++++ | ||
18 | 4 files changed, 102 insertions(+) | ||
19 | |||
20 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
21 | index XXXXXXX..XXXXXXX 100644 | ||
22 | --- a/hw/intc/gicv3_internal.h | ||
23 | +++ b/hw/intc/gicv3_internal.h | ||
24 | @@ -XXX,XX +XXX,XX @@ | ||
25 | * Redistributor frame offsets from RD_base | ||
26 | */ | ||
27 | #define GICR_SGI_OFFSET 0x10000 | ||
28 | +#define GICR_VLPI_OFFSET 0x20000 | ||
29 | |||
30 | /* | ||
31 | * Redistributor registers, offsets from RD_base | ||
32 | @@ -XXX,XX +XXX,XX @@ | ||
33 | #define GICR_IGRPMODR0 (GICR_SGI_OFFSET + 0x0D00) | ||
34 | #define GICR_NSACR (GICR_SGI_OFFSET + 0x0E00) | ||
35 | |||
36 | +/* VLPI redistributor registers, offsets from VLPI_base */ | ||
37 | +#define GICR_VPROPBASER (GICR_VLPI_OFFSET + 0x70) | ||
38 | +#define GICR_VPENDBASER (GICR_VLPI_OFFSET + 0x78) | ||
39 | + | ||
40 | #define GICR_CTLR_ENABLE_LPIS (1U << 0) | ||
41 | #define GICR_CTLR_CES (1U << 1) | ||
42 | #define GICR_CTLR_RWP (1U << 3) | ||
43 | @@ -XXX,XX +XXX,XX @@ FIELD(GICR_PENDBASER, PTZ, 62, 1) | ||
44 | |||
45 | #define GICR_PROPBASER_IDBITS_THRESHOLD 0xd | ||
46 | |||
47 | +/* These are the GICv4 VPROPBASER and VPENDBASER layouts; v4.1 is different */ | ||
48 | +FIELD(GICR_VPROPBASER, IDBITS, 0, 5) | ||
49 | +FIELD(GICR_VPROPBASER, INNERCACHE, 7, 3) | ||
50 | +FIELD(GICR_VPROPBASER, SHAREABILITY, 10, 2) | ||
51 | +FIELD(GICR_VPROPBASER, PHYADDR, 12, 40) | ||
52 | +FIELD(GICR_VPROPBASER, OUTERCACHE, 56, 3) | ||
53 | + | ||
54 | +FIELD(GICR_VPENDBASER, INNERCACHE, 7, 3) | ||
55 | +FIELD(GICR_VPENDBASER, SHAREABILITY, 10, 2) | ||
56 | +FIELD(GICR_VPENDBASER, PHYADDR, 16, 36) | ||
57 | +FIELD(GICR_VPENDBASER, OUTERCACHE, 56, 3) | ||
58 | +FIELD(GICR_VPENDBASER, DIRTY, 60, 1) | ||
59 | +FIELD(GICR_VPENDBASER, PENDINGLAST, 61, 1) | ||
60 | +FIELD(GICR_VPENDBASER, IDAI, 62, 1) | ||
61 | +FIELD(GICR_VPENDBASER, VALID, 63, 1) | ||
62 | + | ||
63 | #define ICC_CTLR_EL1_CBPR (1U << 0) | ||
64 | #define ICC_CTLR_EL1_EOIMODE (1U << 1) | ||
65 | #define ICC_CTLR_EL1_PMHE (1U << 6) | ||
66 | diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h | ||
67 | index XXXXXXX..XXXXXXX 100644 | ||
68 | --- a/include/hw/intc/arm_gicv3_common.h | ||
69 | +++ b/include/hw/intc/arm_gicv3_common.h | ||
70 | @@ -XXX,XX +XXX,XX @@ struct GICv3CPUState { | ||
71 | uint32_t gicr_igrpmodr0; | ||
72 | uint32_t gicr_nsacr; | ||
73 | uint8_t gicr_ipriorityr[GIC_INTERNAL]; | ||
74 | + /* VLPI_base page registers */ | ||
75 | + uint64_t gicr_vpropbaser; | ||
76 | + uint64_t gicr_vpendbaser; | ||
77 | |||
78 | /* CPU interface */ | ||
79 | uint64_t icc_sre_el1; | ||
80 | diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c | ||
81 | index XXXXXXX..XXXXXXX 100644 | ||
82 | --- a/hw/intc/arm_gicv3_common.c | ||
83 | +++ b/hw/intc/arm_gicv3_common.c | ||
84 | @@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_gicv3_cpu_sre_el1 = { | ||
85 | } | ||
86 | }; | ||
87 | |||
88 | +static bool gicv4_needed(void *opaque) | ||
89 | +{ | ||
90 | + GICv3CPUState *cs = opaque; | ||
91 | + | ||
92 | + return cs->gic->revision > 3; | ||
93 | +} | ||
94 | + | ||
95 | +const VMStateDescription vmstate_gicv3_gicv4 = { | ||
96 | + .name = "arm_gicv3_cpu/gicv4", | ||
97 | + .version_id = 1, | ||
98 | + .minimum_version_id = 1, | ||
99 | + .needed = gicv4_needed, | ||
100 | + .fields = (VMStateField[]) { | ||
101 | + VMSTATE_UINT64(gicr_vpropbaser, GICv3CPUState), | ||
102 | + VMSTATE_UINT64(gicr_vpendbaser, GICv3CPUState), | ||
103 | + VMSTATE_END_OF_LIST() | ||
104 | + } | ||
105 | +}; | ||
106 | + | ||
107 | static const VMStateDescription vmstate_gicv3_cpu = { | ||
108 | .name = "arm_gicv3_cpu", | ||
109 | .version_id = 1, | ||
110 | @@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gicv3_cpu = { | ||
111 | .subsections = (const VMStateDescription * []) { | ||
112 | &vmstate_gicv3_cpu_virt, | ||
113 | &vmstate_gicv3_cpu_sre_el1, | ||
114 | + &vmstate_gicv3_gicv4, | ||
115 | NULL | ||
116 | } | ||
117 | }; | ||
118 | @@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_reset(DeviceState *dev) | ||
119 | cs->gicr_waker = GICR_WAKER_ProcessorSleep | GICR_WAKER_ChildrenAsleep; | ||
120 | cs->gicr_propbaser = 0; | ||
121 | cs->gicr_pendbaser = 0; | ||
122 | + cs->gicr_vpropbaser = 0; | ||
123 | + cs->gicr_vpendbaser = 0; | ||
124 | /* If we're resetting a TZ-aware GIC as if secure firmware | ||
125 | * had set it up ready to start a kernel in non-secure, we | ||
126 | * need to set interrupts to group 1 so the kernel can use them. | ||
127 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
128 | index XXXXXXX..XXXXXXX 100644 | ||
129 | --- a/hw/intc/arm_gicv3_redist.c | ||
130 | +++ b/hw/intc/arm_gicv3_redist.c | ||
131 | @@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_readl(GICv3CPUState *cs, hwaddr offset, | ||
132 | case GICR_IDREGS ... GICR_IDREGS + 0x2f: | ||
133 | *data = gicv3_idreg(offset - GICR_IDREGS, GICV3_PIDR0_REDIST); | ||
134 | return MEMTX_OK; | ||
135 | + /* | ||
136 | + * VLPI frame registers. We don't need a version check for | ||
137 | + * VPROPBASER and VPENDBASER because gicv3_redist_size() will | ||
138 | + * prevent pre-v4 GIC from passing us offsets this high. | ||
139 | + */ | ||
140 | + case GICR_VPROPBASER: | ||
141 | + *data = extract64(cs->gicr_vpropbaser, 0, 32); | ||
142 | + return MEMTX_OK; | ||
143 | + case GICR_VPROPBASER + 4: | ||
144 | + *data = extract64(cs->gicr_vpropbaser, 32, 32); | ||
145 | + return MEMTX_OK; | ||
146 | + case GICR_VPENDBASER: | ||
147 | + *data = extract64(cs->gicr_vpendbaser, 0, 32); | ||
148 | + return MEMTX_OK; | ||
149 | + case GICR_VPENDBASER + 4: | ||
150 | + *data = extract64(cs->gicr_vpendbaser, 32, 32); | ||
151 | + return MEMTX_OK; | ||
152 | default: | ||
153 | return MEMTX_ERROR; | ||
154 | } | ||
155 | @@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset, | ||
156 | "%s: invalid guest write to RO register at offset " | ||
157 | TARGET_FMT_plx "\n", __func__, offset); | ||
158 | return MEMTX_OK; | ||
159 | + /* | ||
160 | + * VLPI frame registers. We don't need a version check for | ||
161 | + * VPROPBASER and VPENDBASER because gicv3_redist_size() will | ||
162 | + * prevent pre-v4 GIC from passing us offsets this high. | ||
163 | + */ | ||
164 | + case GICR_VPROPBASER: | ||
165 | + cs->gicr_vpropbaser = deposit64(cs->gicr_vpropbaser, 0, 32, value); | ||
166 | + return MEMTX_OK; | ||
167 | + case GICR_VPROPBASER + 4: | ||
168 | + cs->gicr_vpropbaser = deposit64(cs->gicr_vpropbaser, 32, 32, value); | ||
169 | + return MEMTX_OK; | ||
170 | + case GICR_VPENDBASER: | ||
171 | + cs->gicr_vpendbaser = deposit64(cs->gicr_vpendbaser, 0, 32, value); | ||
172 | + return MEMTX_OK; | ||
173 | + case GICR_VPENDBASER + 4: | ||
174 | + cs->gicr_vpendbaser = deposit64(cs->gicr_vpendbaser, 32, 32, value); | ||
175 | + return MEMTX_OK; | ||
176 | default: | ||
177 | return MEMTX_ERROR; | ||
178 | } | ||
179 | @@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_readll(GICv3CPUState *cs, hwaddr offset, | ||
180 | case GICR_PENDBASER: | ||
181 | *data = cs->gicr_pendbaser; | ||
182 | return MEMTX_OK; | ||
183 | + /* | ||
184 | + * VLPI frame registers. We don't need a version check for | ||
185 | + * VPROPBASER and VPENDBASER because gicv3_redist_size() will | ||
186 | + * prevent pre-v4 GIC from passing us offsets this high. | ||
187 | + */ | ||
188 | + case GICR_VPROPBASER: | ||
189 | + *data = cs->gicr_vpropbaser; | ||
190 | + return MEMTX_OK; | ||
191 | + case GICR_VPENDBASER: | ||
192 | + *data = cs->gicr_vpendbaser; | ||
193 | + return MEMTX_OK; | ||
194 | default: | ||
195 | return MEMTX_ERROR; | ||
196 | } | ||
197 | @@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writell(GICv3CPUState *cs, hwaddr offset, | ||
198 | "%s: invalid guest write to RO register at offset " | ||
199 | TARGET_FMT_plx "\n", __func__, offset); | ||
200 | return MEMTX_OK; | ||
201 | + /* | ||
202 | + * VLPI frame registers. We don't need a version check for | ||
203 | + * VPROPBASER and VPENDBASER because gicv3_redist_size() will | ||
204 | + * prevent pre-v4 GIC from passing us offsets this high. | ||
205 | + */ | ||
206 | + case GICR_VPROPBASER: | ||
207 | + cs->gicr_vpropbaser = value; | ||
208 | + return MEMTX_OK; | ||
209 | + case GICR_VPENDBASER: | ||
210 | + cs->gicr_vpendbaser = value; | ||
211 | + return MEMTX_OK; | ||
212 | default: | ||
213 | return MEMTX_ERROR; | ||
214 | } | ||
215 | -- | ||
216 | 2.25.1 | diff view generated by jsdifflib |
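The 32-bit accessors above follow a common pattern for 64-bit registers that a guest may also access as two word-sized halves at offsets +0 and +4. The standalone sketch below shows the equivalent shift/mask arithmetic; it is an illustration only, not QEMU code (QEMU's extract64()/deposit64() utilities do this job there), and the register values are made up.

    #include <stdint.h>
    #include <stdio.h>

    /* Read one 32-bit half of a 64-bit register: half 0 is bits [31:0],
     * half 1 (offset +4) is bits [63:32]. */
    static uint32_t reg64_read_half(uint64_t reg, int high)
    {
        return (uint32_t)(reg >> (high ? 32 : 0));
    }

    /* Update one 32-bit half, leaving the other half untouched. */
    static uint64_t reg64_write_half(uint64_t reg, int high, uint32_t val)
    {
        if (high) {
            return (reg & 0x00000000ffffffffULL) | ((uint64_t)val << 32);
        } else {
            return (reg & 0xffffffff00000000ULL) | val;
        }
    }

    int main(void)
    {
        uint64_t vpropbaser = 0;   /* stand-in for cs->gicr_vpropbaser */

        vpropbaser = reg64_write_half(vpropbaser, 0, 0x12345678);
        vpropbaser = reg64_write_half(vpropbaser, 1, 0x9abcdef0);
        printf("0x%llx high 0x%x\n", (unsigned long long)vpropbaser,
               (unsigned)reg64_read_half(vpropbaser, 1));
        return 0;
    }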
1 | Fix the handling of QOM properties for PMSA CPUs with no MPU: | 1 | The function gicv3_cpuif_virt_update() currently sets all of vIRQ, |
---|---|---|---|
2 | 2 | vFIQ and the maintenance interrupt. This implies that it has to be | |
3 | Allow no-MPU to be specified by either: | 3 | used quite carefully -- as the comment notes, setting the maintenance |
4 | * has-mpu = false | 4 | interrupt will typically cause the GIC code to be re-entered |
5 | * pmsav7_dregion = 0 | 5 | recursively. For handling vLPIs, we need the redistributor to be |
6 | and make setting one imply the other. Don't clear the PMSA | 6 | able to tell the cpuif to update the vIRQ and vFIQ lines when the |
7 | feature bit in this situation. | 7 | highest priority pending vLPI changes. Since that change can't cause |
8 | the maintenance interrupt state to change, we can pull the "update | ||
9 | vIRQ/vFIQ" parts of gicv3_cpuif_virt_update() out into a separate | ||
10 | function, which the redistributor can then call without having to | ||
11 | worry about the reentrancy issue. | ||
8 | 12 | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
10 | Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> | 14 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
11 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 15 | Message-id: 20220408141550.1271295-25-peter.maydell@linaro.org |
12 | Message-id: 1493122030-32191-6-git-send-email-peter.maydell@linaro.org | ||
13 | --- | 16 | --- |
14 | target/arm/cpu.c | 8 +++++++- | 17 | hw/intc/gicv3_internal.h | 11 +++++++ |
15 | 1 file changed, 7 insertions(+), 1 deletion(-) | 18 | hw/intc/arm_gicv3_cpuif.c | 64 ++++++++++++++++++++++++--------------- |
19 | hw/intc/trace-events | 3 +- | ||
20 | 3 files changed, 53 insertions(+), 25 deletions(-) | ||
16 | 21 | ||
17 | diff --git a/target/arm/cpu.c b/target/arm/cpu.c | 22 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h |
18 | index XXXXXXX..XXXXXXX 100644 | 23 | index XXXXXXX..XXXXXXX 100644 |
19 | --- a/target/arm/cpu.c | 24 | --- a/hw/intc/gicv3_internal.h |
20 | +++ b/target/arm/cpu.c | 25 | +++ b/hw/intc/gicv3_internal.h |
21 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) | 26 | @@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s); |
22 | cpu->id_pfr1 &= ~0xf000; | 27 | */ |
28 | void gicv3_cpuif_update(GICv3CPUState *cs); | ||
29 | |||
30 | +/* | ||
31 | + * gicv3_cpuif_virt_irq_fiq_update: | ||
32 | + * @cs: GICv3CPUState for the CPU to update | ||
33 | + * | ||
34 | + * Recalculate whether to assert the virtual IRQ or FIQ lines after | ||
35 | + * a change to the current highest priority pending virtual interrupt. | ||
36 | + * Note that this does not recalculate and change the maintenance | ||
37 | + * interrupt status (for that, see gicv3_cpuif_virt_update()). | ||
38 | + */ | ||
39 | +void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs); | ||
40 | + | ||
41 | static inline uint32_t gicv3_iidr(void) | ||
42 | { | ||
43 | /* Return the Implementer Identification Register value | ||
44 | diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c | ||
45 | index XXXXXXX..XXXXXXX 100644 | ||
46 | --- a/hw/intc/arm_gicv3_cpuif.c | ||
47 | +++ b/hw/intc/arm_gicv3_cpuif.c | ||
48 | @@ -XXX,XX +XXX,XX @@ static uint32_t maintenance_interrupt_state(GICv3CPUState *cs) | ||
49 | return value; | ||
50 | } | ||
51 | |||
52 | -static void gicv3_cpuif_virt_update(GICv3CPUState *cs) | ||
53 | +void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs) | ||
54 | { | ||
55 | - /* Tell the CPU about any pending virtual interrupts or | ||
56 | - * maintenance interrupts, following a change to the state | ||
57 | - * of the CPU interface relevant to virtual interrupts. | ||
58 | - * | ||
59 | - * CAUTION: this function will call qemu_set_irq() on the | ||
60 | - * CPU maintenance IRQ line, which is typically wired up | ||
61 | - * to the GIC as a per-CPU interrupt. This means that it | ||
62 | - * will recursively call back into the GIC code via | ||
63 | - * gicv3_redist_set_irq() and thus into the CPU interface code's | ||
64 | - * gicv3_cpuif_update(). It is therefore important that this | ||
65 | - * function is only called as the final action of a CPU interface | ||
66 | - * register write implementation, after all the GIC state | ||
67 | - * fields have been updated. gicv3_cpuif_update() also must | ||
68 | - * not cause this function to be called, but that happens | ||
69 | - * naturally as a result of there being no architectural | ||
70 | - * linkage between the physical and virtual GIC logic. | ||
71 | + /* | ||
72 | + * Tell the CPU about any pending virtual interrupts. | ||
73 | + * This should only be called for changes that affect the | ||
74 | + * vIRQ and vFIQ status and do not change the maintenance | ||
75 | + * interrupt status. This means that unlike gicv3_cpuif_virt_update() | ||
76 | + * this function won't recursively call back into the GIC code. | ||
77 | + * The main use of this is when the redistributor has changed the | ||
78 | + * highest priority pending virtual LPI. | ||
79 | */ | ||
80 | int idx; | ||
81 | int irqlevel = 0; | ||
82 | int fiqlevel = 0; | ||
83 | - int maintlevel = 0; | ||
84 | - ARMCPU *cpu = ARM_CPU(cs->cpu); | ||
85 | |||
86 | idx = hppvi_index(cs); | ||
87 | trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx); | ||
88 | @@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs) | ||
89 | } | ||
23 | } | 90 | } |
24 | 91 | ||
25 | + /* MPU can be configured out of a PMSA CPU either by setting has-mpu | 92 | + trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel, irqlevel); |
26 | + * to false or by setting pmsav7-dregion to 0. | 93 | + qemu_set_irq(cs->parent_vfiq, fiqlevel); |
94 | + qemu_set_irq(cs->parent_virq, irqlevel); | ||
95 | +} | ||
96 | + | ||
97 | +static void gicv3_cpuif_virt_update(GICv3CPUState *cs) | ||
98 | +{ | ||
99 | + /* | ||
100 | + * Tell the CPU about any pending virtual interrupts or | ||
101 | + * maintenance interrupts, following a change to the state | ||
102 | + * of the CPU interface relevant to virtual interrupts. | ||
103 | + * | ||
104 | + * CAUTION: this function will call qemu_set_irq() on the | ||
105 | + * CPU maintenance IRQ line, which is typically wired up | ||
106 | + * to the GIC as a per-CPU interrupt. This means that it | ||
107 | + * will recursively call back into the GIC code via | ||
108 | + * gicv3_redist_set_irq() and thus into the CPU interface code's | ||
109 | + * gicv3_cpuif_update(). It is therefore important that this | ||
110 | + * function is only called as the final action of a CPU interface | ||
111 | + * register write implementation, after all the GIC state | ||
112 | + * fields have been updated. gicv3_cpuif_update() also must | ||
113 | + * not cause this function to be called, but that happens | ||
114 | + * naturally as a result of there being no architectural | ||
115 | + * linkage between the physical and virtual GIC logic. | ||
27 | + */ | 116 | + */ |
28 | if (!cpu->has_mpu) { | 117 | + ARMCPU *cpu = ARM_CPU(cs->cpu); |
29 | - unset_feature(env, ARM_FEATURE_PMSA); | 118 | + int maintlevel = 0; |
30 | + cpu->pmsav7_dregion = 0; | 119 | + |
31 | + } | 120 | + gicv3_cpuif_virt_irq_fiq_update(cs); |
32 | + if (cpu->pmsav7_dregion == 0) { | 121 | + |
33 | + cpu->has_mpu = false; | 122 | if ((cs->ich_hcr_el2 & ICH_HCR_EL2_EN) && |
123 | maintenance_interrupt_state(cs) != 0) { | ||
124 | maintlevel = 1; | ||
34 | } | 125 | } |
35 | 126 | ||
36 | if (arm_feature(env, ARM_FEATURE_PMSA) && | 127 | - trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel, |
128 | - irqlevel, maintlevel); | ||
129 | - | ||
130 | - qemu_set_irq(cs->parent_vfiq, fiqlevel); | ||
131 | - qemu_set_irq(cs->parent_virq, irqlevel); | ||
132 | + trace_gicv3_cpuif_virt_set_maint_irq(gicv3_redist_affid(cs), maintlevel); | ||
133 | qemu_set_irq(cpu->gicv3_maintenance_interrupt, maintlevel); | ||
134 | } | ||
135 | |||
136 | diff --git a/hw/intc/trace-events b/hw/intc/trace-events | ||
137 | index XXXXXXX..XXXXXXX 100644 | ||
138 | --- a/hw/intc/trace-events | ||
139 | +++ b/hw/intc/trace-events | ||
140 | @@ -XXX,XX +XXX,XX @@ gicv3_icv_dir_write(uint32_t cpu, uint64_t val) "GICv3 ICV_DIR write cpu 0x%x va | ||
141 | gicv3_icv_iar_read(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_IAR%d read cpu 0x%x value 0x%" PRIx64 | ||
142 | gicv3_icv_eoir_write(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_EOIR%d write cpu 0x%x value 0x%" PRIx64 | ||
143 | gicv3_cpuif_virt_update(uint32_t cpuid, int idx) "GICv3 CPU i/f 0x%x virt HPPI update LR index %d" | ||
144 | -gicv3_cpuif_virt_set_irqs(uint32_t cpuid, int fiqlevel, int irqlevel, int maintlevel) "GICv3 CPU i/f 0x%x virt HPPI update: setting FIQ %d IRQ %d maintenance-irq %d" | ||
145 | +gicv3_cpuif_virt_set_irqs(uint32_t cpuid, int fiqlevel, int irqlevel) "GICv3 CPU i/f 0x%x virt HPPI update: setting FIQ %d IRQ %d" | ||
146 | +gicv3_cpuif_virt_set_maint_irq(uint32_t cpuid, int maintlevel) "GICv3 CPU i/f 0x%x virt HPPI update: setting maintenance-irq %d" | ||
147 | |||
148 | # arm_gicv3_dist.c | ||
149 | gicv3_dist_read(uint64_t offset, uint64_t data, unsigned size, bool secure) "GICv3 distributor read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u secure %d" | ||
37 | -- | 150 | -- |
38 | 2.7.4 | 151 | 2.25.1 |
39 | |||
40 | diff view generated by jsdifflib |
1 | We were setting the VBPR1 field of VMCR_EL2 to icv_min_vbpr() | 1 | The CPU interface changes to support vLPIs are fairly minor: |
---|---|---|---|
2 | on reset, but this is not correct. The field should reset to | 2 | in the parts of the code that currently look at the list registers |
3 | the minimum value of ICV_BPR0_EL1 plus one. | 3 | to determine the highest priority pending virtual interrupt, we |
4 | must also look at the highest priority pending vLPI. To do this | ||
5 | we change hppvi_index() to check the vLPI and return a special-case | ||
6 | value if that is the right virtual interrupt to take. The callsites | ||
7 | (which handle HPPIR and IAR registers and the "raise vIRQ and vFIQ | ||
8 | lines" code) then have to handle this special-case value. | ||
9 | |||
10 | This commit includes two interfaces with the as-yet-unwritten | ||
11 | redistributor code: | ||
12 | * the new GICv3CPUState::hppvlpi will be set by the redistributor | ||
13 | (in the same way as the existing hpplpi does for physical LPIs) | ||
14 | * when the CPU interface acknowledges a vLPI it needs to set it | ||
15 | to non-pending; the new gicv3_redist_vlpi_pending() function | ||
16 | (which matches the existing gicv3_redist_lpi_pending() used | ||
17 | for physical LPIs) is a stub that will be filled in later | ||
4 | 18 | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 19 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
6 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 20 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Message-id: 1493226792-3237-2-git-send-email-peter.maydell@linaro.org | 21 | Message-id: 20220408141550.1271295-26-peter.maydell@linaro.org |
8 | --- | 22 | --- |
9 | hw/intc/arm_gicv3_cpuif.c | 2 +- | 23 | hw/intc/gicv3_internal.h | 13 ++++ |
10 | 1 file changed, 1 insertion(+), 1 deletion(-) | 24 | include/hw/intc/arm_gicv3_common.h | 3 + |
25 | hw/intc/arm_gicv3_common.c | 1 + | ||
26 | hw/intc/arm_gicv3_cpuif.c | 119 +++++++++++++++++++++++++++-- | ||
27 | hw/intc/arm_gicv3_redist.c | 8 ++ | ||
28 | hw/intc/trace-events | 2 +- | ||
29 | 6 files changed, 140 insertions(+), 6 deletions(-) | ||
11 | 30 | ||
31 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
32 | index XXXXXXX..XXXXXXX 100644 | ||
33 | --- a/hw/intc/gicv3_internal.h | ||
34 | +++ b/hw/intc/gicv3_internal.h | ||
35 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_process_lpi(GICv3CPUState *cs, int irq, int level); | ||
36 | */ | ||
37 | void gicv3_redist_process_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr, | ||
38 | int doorbell, int level); | ||
39 | +/** | ||
40 | + * gicv3_redist_vlpi_pending: | ||
41 | + * @cs: GICv3CPUState | ||
42 | + * @irq: (virtual) interrupt number | ||
43 | + * @level: level to set @irq to | ||
44 | + * | ||
45 | + * Set/clear the pending status of a virtual LPI in the vLPI table | ||
46 | + * that this redistributor is currently using. (The difference between | ||
47 | + * this and gicv3_redist_process_vlpi() is that this is called from | ||
48 | + * the cpuif and does not need to do the not-running-on-this-vcpu checks.) | ||
49 | + */ | ||
50 | +void gicv3_redist_vlpi_pending(GICv3CPUState *cs, int irq, int level); | ||
51 | + | ||
52 | void gicv3_redist_lpi_pending(GICv3CPUState *cs, int irq, int level); | ||
53 | /** | ||
54 | * gicv3_redist_update_lpi: | ||
55 | diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h | ||
56 | index XXXXXXX..XXXXXXX 100644 | ||
57 | --- a/include/hw/intc/arm_gicv3_common.h | ||
58 | +++ b/include/hw/intc/arm_gicv3_common.h | ||
59 | @@ -XXX,XX +XXX,XX @@ struct GICv3CPUState { | ||
60 | */ | ||
61 | PendingIrq hpplpi; | ||
62 | |||
63 | + /* Cached information recalculated from vLPI tables in guest memory */ | ||
64 | + PendingIrq hppvlpi; | ||
65 | + | ||
66 | /* This is temporary working state, to avoid a malloc in gicv3_update() */ | ||
67 | bool seenbetter; | ||
68 | }; | ||
69 | diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c | ||
70 | index XXXXXXX..XXXXXXX 100644 | ||
71 | --- a/hw/intc/arm_gicv3_common.c | ||
72 | +++ b/hw/intc/arm_gicv3_common.c | ||
73 | @@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_reset(DeviceState *dev) | ||
74 | |||
75 | cs->hppi.prio = 0xff; | ||
76 | cs->hpplpi.prio = 0xff; | ||
77 | + cs->hppvlpi.prio = 0xff; | ||
78 | |||
79 | /* State in the CPU interface must *not* be reset here, because it | ||
80 | * is part of the CPU's reset domain, not the GIC device's. | ||
12 | diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c | 81 | diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c |
13 | index XXXXXXX..XXXXXXX 100644 | 82 | index XXXXXXX..XXXXXXX 100644 |
14 | --- a/hw/intc/arm_gicv3_cpuif.c | 83 | --- a/hw/intc/arm_gicv3_cpuif.c |
15 | +++ b/hw/intc/arm_gicv3_cpuif.c | 84 | +++ b/hw/intc/arm_gicv3_cpuif.c |
16 | @@ -XXX,XX +XXX,XX @@ static void icc_reset(CPUARMState *env, const ARMCPRegInfo *ri) | 85 | @@ -XXX,XX +XXX,XX @@ |
17 | cs->ich_hcr_el2 = 0; | 86 | #include "hw/irq.h" |
18 | memset(cs->ich_lr_el2, 0, sizeof(cs->ich_lr_el2)); | 87 | #include "cpu.h" |
19 | cs->ich_vmcr_el2 = ICH_VMCR_EL2_VFIQEN | | 88 | |
20 | - (icv_min_vbpr(cs) << ICH_VMCR_EL2_VBPR1_SHIFT) | | 89 | +/* |
21 | + ((icv_min_vbpr(cs) + 1) << ICH_VMCR_EL2_VBPR1_SHIFT) | | 90 | + * Special case return value from hppvi_index(); must be larger than |
22 | (icv_min_vbpr(cs) << ICH_VMCR_EL2_VBPR0_SHIFT); | 91 | + * the architecturally maximum possible list register index (which is 15) |
23 | } | 92 | + */ |
93 | +#define HPPVI_INDEX_VLPI 16 | ||
94 | + | ||
95 | static GICv3CPUState *icc_cs_from_env(CPUARMState *env) | ||
96 | { | ||
97 | return env->gicv3state; | ||
98 | @@ -XXX,XX +XXX,XX @@ static int ich_highest_active_virt_prio(GICv3CPUState *cs) | ||
99 | |||
100 | static int hppvi_index(GICv3CPUState *cs) | ||
101 | { | ||
102 | - /* Return the list register index of the highest priority pending | ||
103 | + /* | ||
104 | + * Return the list register index of the highest priority pending | ||
105 | * virtual interrupt, as per the HighestPriorityVirtualInterrupt | ||
106 | * pseudocode. If no pending virtual interrupts, return -1. | ||
107 | + * If the highest priority pending virtual interrupt is a vLPI, | ||
108 | + * return HPPVI_INDEX_VLPI. | ||
109 | + * (The pseudocode handles checking whether the vLPI is higher | ||
110 | + * priority than the highest priority list register at every | ||
111 | + * callsite of HighestPriorityVirtualInterrupt; we check it here.) | ||
112 | */ | ||
113 | + ARMCPU *cpu = ARM_CPU(cs->cpu); | ||
114 | + CPUARMState *env = &cpu->env; | ||
115 | int idx = -1; | ||
116 | int i; | ||
117 | /* Note that a list register entry with a priority of 0xff will | ||
118 | @@ -XXX,XX +XXX,XX @@ static int hppvi_index(GICv3CPUState *cs) | ||
119 | } | ||
120 | } | ||
121 | |||
122 | + /* | ||
123 | + * "no pending vLPI" is indicated with prio = 0xff, which always | ||
124 | + * fails the priority check here. vLPIs are only considered | ||
125 | + * when we are in Non-Secure state. | ||
126 | + */ | ||
127 | + if (cs->hppvlpi.prio < prio && !arm_is_secure(env)) { | ||
128 | + if (cs->hppvlpi.grp == GICV3_G0) { | ||
129 | + if (cs->ich_vmcr_el2 & ICH_VMCR_EL2_VENG0) { | ||
130 | + return HPPVI_INDEX_VLPI; | ||
131 | + } | ||
132 | + } else { | ||
133 | + if (cs->ich_vmcr_el2 & ICH_VMCR_EL2_VENG1) { | ||
134 | + return HPPVI_INDEX_VLPI; | ||
135 | + } | ||
136 | + } | ||
137 | + } | ||
138 | + | ||
139 | return idx; | ||
140 | } | ||
141 | |||
142 | @@ -XXX,XX +XXX,XX @@ static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr) | ||
143 | return false; | ||
144 | } | ||
145 | |||
146 | +static bool icv_hppvlpi_can_preempt(GICv3CPUState *cs) | ||
147 | +{ | ||
148 | + /* | ||
149 | + * Return true if we can signal the highest priority pending vLPI. | ||
150 | + * We can assume we're Non-secure because hppvi_index() already | ||
151 | + * tested for that. | ||
152 | + */ | ||
153 | + uint32_t mask, rprio, vpmr; | ||
154 | + | ||
155 | + if (!(cs->ich_hcr_el2 & ICH_HCR_EL2_EN)) { | ||
156 | + /* Virtual interface disabled */ | ||
157 | + return false; | ||
158 | + } | ||
159 | + | ||
160 | + vpmr = extract64(cs->ich_vmcr_el2, ICH_VMCR_EL2_VPMR_SHIFT, | ||
161 | + ICH_VMCR_EL2_VPMR_LENGTH); | ||
162 | + | ||
163 | + if (cs->hppvlpi.prio >= vpmr) { | ||
164 | + /* Priority mask masks this interrupt */ | ||
165 | + return false; | ||
166 | + } | ||
167 | + | ||
168 | + rprio = ich_highest_active_virt_prio(cs); | ||
169 | + if (rprio == 0xff) { | ||
170 | + /* No running interrupt so we can preempt */ | ||
171 | + return true; | ||
172 | + } | ||
173 | + | ||
174 | + mask = icv_gprio_mask(cs, cs->hppvlpi.grp); | ||
175 | + | ||
176 | + /* | ||
177 | + * We only preempt a running interrupt if the pending interrupt's | ||
178 | + * group priority is sufficient (the subpriorities are not considered). | ||
179 | + */ | ||
180 | + if ((cs->hppvlpi.prio & mask) < (rprio & mask)) { | ||
181 | + return true; | ||
182 | + } | ||
183 | + | ||
184 | + return false; | ||
185 | +} | ||
186 | + | ||
187 | static uint32_t eoi_maintenance_interrupt_state(GICv3CPUState *cs, | ||
188 | uint32_t *misr) | ||
189 | { | ||
190 | @@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs) | ||
191 | int fiqlevel = 0; | ||
192 | |||
193 | idx = hppvi_index(cs); | ||
194 | - trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx); | ||
195 | - if (idx >= 0) { | ||
196 | + trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx, | ||
197 | + cs->hppvlpi.irq, cs->hppvlpi.grp, | ||
198 | + cs->hppvlpi.prio); | ||
199 | + if (idx == HPPVI_INDEX_VLPI) { | ||
200 | + if (icv_hppvlpi_can_preempt(cs)) { | ||
201 | + if (cs->hppvlpi.grp == GICV3_G0) { | ||
202 | + fiqlevel = 1; | ||
203 | + } else { | ||
204 | + irqlevel = 1; | ||
205 | + } | ||
206 | + } | ||
207 | + } else if (idx >= 0) { | ||
208 | uint64_t lr = cs->ich_lr_el2[idx]; | ||
209 | |||
210 | if (icv_hppi_can_preempt(cs, lr)) { | ||
211 | @@ -XXX,XX +XXX,XX @@ static uint64_t icv_hppir_read(CPUARMState *env, const ARMCPRegInfo *ri) | ||
212 | int idx = hppvi_index(cs); | ||
213 | uint64_t value = INTID_SPURIOUS; | ||
214 | |||
215 | - if (idx >= 0) { | ||
216 | + if (idx == HPPVI_INDEX_VLPI) { | ||
217 | + if (cs->hppvlpi.grp == grp) { | ||
218 | + value = cs->hppvlpi.irq; | ||
219 | + } | ||
220 | + } else if (idx >= 0) { | ||
221 | uint64_t lr = cs->ich_lr_el2[idx]; | ||
222 | int thisgrp = (lr & ICH_LR_EL2_GROUP) ? GICV3_G1NS : GICV3_G0; | ||
223 | |||
224 | @@ -XXX,XX +XXX,XX @@ static void icv_activate_irq(GICv3CPUState *cs, int idx, int grp) | ||
225 | cs->ich_apr[grp][regno] |= (1 << regbit); | ||
226 | } | ||
227 | |||
228 | +static void icv_activate_vlpi(GICv3CPUState *cs) | ||
229 | +{ | ||
230 | + uint32_t mask = icv_gprio_mask(cs, cs->hppvlpi.grp); | ||
231 | + int prio = cs->hppvlpi.prio & mask; | ||
232 | + int aprbit = prio >> (8 - cs->vprebits); | ||
233 | + int regno = aprbit / 32; | ||
234 | + int regbit = aprbit % 32; | ||
235 | + | ||
236 | + cs->ich_apr[cs->hppvlpi.grp][regno] |= (1 << regbit); | ||
237 | + gicv3_redist_vlpi_pending(cs, cs->hppvlpi.irq, 0); | ||
238 | +} | ||
239 | + | ||
240 | static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri) | ||
241 | { | ||
242 | GICv3CPUState *cs = icc_cs_from_env(env); | ||
243 | @@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri) | ||
244 | int idx = hppvi_index(cs); | ||
245 | uint64_t intid = INTID_SPURIOUS; | ||
246 | |||
247 | - if (idx >= 0) { | ||
248 | + if (idx == HPPVI_INDEX_VLPI) { | ||
249 | + if (cs->hppvlpi.grp == grp && icv_hppvlpi_can_preempt(cs)) { | ||
250 | + intid = cs->hppvlpi.irq; | ||
251 | + icv_activate_vlpi(cs); | ||
252 | + } | ||
253 | + } else if (idx >= 0) { | ||
254 | uint64_t lr = cs->ich_lr_el2[idx]; | ||
255 | int thisgrp = (lr & ICH_LR_EL2_GROUP) ? GICV3_G1NS : GICV3_G0; | ||
256 | |||
257 | @@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_el_change_hook(ARMCPU *cpu, void *opaque) | ||
258 | GICv3CPUState *cs = opaque; | ||
259 | |||
260 | gicv3_cpuif_update(cs); | ||
261 | + /* | ||
262 | + * Because vLPIs are only pending in NonSecure state, | ||
263 | + * an EL change can change the VIRQ/VFIQ status (but | ||
264 | + * cannot affect the maintenance interrupt state) | ||
265 | + */ | ||
266 | + gicv3_cpuif_virt_irq_fiq_update(cs); | ||
267 | } | ||
268 | |||
269 | void gicv3_init_cpuif(GICv3State *s) | ||
270 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
271 | index XXXXXXX..XXXXXXX 100644 | ||
272 | --- a/hw/intc/arm_gicv3_redist.c | ||
273 | +++ b/hw/intc/arm_gicv3_redist.c | ||
274 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_movall_lpis(GICv3CPUState *src, GICv3CPUState *dest) | ||
275 | gicv3_redist_update_lpi(dest); | ||
276 | } | ||
277 | |||
278 | +void gicv3_redist_vlpi_pending(GICv3CPUState *cs, int irq, int level) | ||
279 | +{ | ||
280 | + /* | ||
281 | + * The redistributor handling for changing the pending state | ||
282 | + * of a vLPI will be added in a subsequent commit. | ||
283 | + */ | ||
284 | +} | ||
285 | + | ||
286 | void gicv3_redist_process_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr, | ||
287 | int doorbell, int level) | ||
288 | { | ||
289 | diff --git a/hw/intc/trace-events b/hw/intc/trace-events | ||
290 | index XXXXXXX..XXXXXXX 100644 | ||
291 | --- a/hw/intc/trace-events | ||
292 | +++ b/hw/intc/trace-events | ||
293 | @@ -XXX,XX +XXX,XX @@ gicv3_icv_hppir_read(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_HPPIR%d rea | ||
294 | gicv3_icv_dir_write(uint32_t cpu, uint64_t val) "GICv3 ICV_DIR write cpu 0x%x value 0x%" PRIx64 | ||
295 | gicv3_icv_iar_read(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_IAR%d read cpu 0x%x value 0x%" PRIx64 | ||
296 | gicv3_icv_eoir_write(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_EOIR%d write cpu 0x%x value 0x%" PRIx64 | ||
297 | -gicv3_cpuif_virt_update(uint32_t cpuid, int idx) "GICv3 CPU i/f 0x%x virt HPPI update LR index %d" | ||
298 | +gicv3_cpuif_virt_update(uint32_t cpuid, int idx, int hppvlpi, int grp, int prio) "GICv3 CPU i/f 0x%x virt HPPI update LR index %d HPPVLPI %d grp %d prio %d" | ||
299 | gicv3_cpuif_virt_set_irqs(uint32_t cpuid, int fiqlevel, int irqlevel) "GICv3 CPU i/f 0x%x virt HPPI update: setting FIQ %d IRQ %d" | ||
300 | gicv3_cpuif_virt_set_maint_irq(uint32_t cpuid, int maintlevel) "GICv3 CPU i/f 0x%x virt HPPI update: setting maintenance-irq %d" | ||
24 | 301 | ||
25 | -- | 302 | -- |
26 | 2.7.4 | 303 | 2.25.1 |
27 | |||
28 | diff view generated by jsdifflib |
1 | When we calculate the mask to use to get the group priority from | 1 | The maintenance interrupt state depends only on: |
---|---|---|---|
2 | an interrupt priority, the way that NS BPR1 is handled differs | 2 | * ICH_HCR_EL2 |
3 | from how BPR0 and S BPR1 work -- a BPR1 value of 1 means | 3 | * ICH_LR<n>_EL2 |
4 | the group priority is in bits [7:1], whereas for BPR0 and S BPR1 | 4 | * ICH_VMCR_EL2 fields VENG0 and VENG1 |
5 | this is indicated by a 0 BPR value. | ||
6 | 5 | ||
7 | Subtract 1 from the BPR value before creating the mask if | 6 | Now we have a separate function that updates only the vIRQ and vFIQ |
8 | we're using the NS BPR value, for both hardware and virtual | 7 | lines, use that in places that only change state that affects vIRQ |
9 | interrupts, as the GICv3 pseudocode does, and fix the comments | 8 | and vFIQ but not the maintenance interrupt. |
10 | accordingly. | ||
11 | 9 | ||
12 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
13 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 11 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
14 | Message-id: 1493226792-3237-4-git-send-email-peter.maydell@linaro.org | 12 | Message-id: 20220408141550.1271295-27-peter.maydell@linaro.org |
15 | --- | 13 | --- |
16 | hw/intc/arm_gicv3_cpuif.c | 42 ++++++++++++++++++++++++++++++++++++++---- | 14 | hw/intc/arm_gicv3_cpuif.c | 10 +++++----- |
17 | 1 file changed, 38 insertions(+), 4 deletions(-) | 15 | 1 file changed, 5 insertions(+), 5 deletions(-) |
18 | 16 | ||
19 | diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c | 17 | diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c |
20 | index XXXXXXX..XXXXXXX 100644 | 18 | index XXXXXXX..XXXXXXX 100644 |
21 | --- a/hw/intc/arm_gicv3_cpuif.c | 19 | --- a/hw/intc/arm_gicv3_cpuif.c |
22 | +++ b/hw/intc/arm_gicv3_cpuif.c | 20 | +++ b/hw/intc/arm_gicv3_cpuif.c |
23 | @@ -XXX,XX +XXX,XX @@ static uint32_t icv_gprio_mask(GICv3CPUState *cs, int group) | 21 | @@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri, |
24 | { | 22 | |
25 | /* Return a mask word which clears the subpriority bits from | 23 | cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU; |
26 | * a priority value for a virtual interrupt in the specified group. | 24 | |
27 | - * This depends on the VBPR value: | 25 | - gicv3_cpuif_virt_update(cs); |
28 | + * This depends on the VBPR value. | 26 | + gicv3_cpuif_virt_irq_fiq_update(cs); |
29 | + * If using VBPR0 then: | 27 | return; |
30 | * a BPR of 0 means the group priority bits are [7:1]; | ||
31 | * a BPR of 1 means they are [7:2], and so on down to | ||
32 | * a BPR of 7 meaning no group priority bits at all. | ||
33 | + * If using VBPR1 then: | ||
34 | + * a BPR of 0 is impossible (the minimum value is 1) | ||
35 | + * a BPR of 1 means the group priority bits are [7:1]; | ||
36 | + * a BPR of 2 means they are [7:2], and so on down to | ||
37 | + * a BPR of 7 meaning the group priority is [7]. | ||
38 | + * | ||
39 | * Which BPR to use depends on the group of the interrupt and | ||
40 | * the current ICH_VMCR_EL2.VCBPR settings. | ||
41 | + * | ||
42 | + * This corresponds to the VGroupBits() pseudocode. | ||
43 | */ | ||
44 | + int bpr; | ||
45 | + | ||
46 | if (group == GICV3_G1NS && cs->ich_vmcr_el2 & ICH_VMCR_EL2_VCBPR) { | ||
47 | group = GICV3_G0; | ||
48 | } | ||
49 | |||
50 | - return ~0U << (read_vbpr(cs, group) + 1); | ||
51 | + bpr = read_vbpr(cs, group); | ||
52 | + if (group == GICV3_G1NS) { | ||
53 | + assert(bpr > 0); | ||
54 | + bpr--; | ||
55 | + } | ||
56 | + | ||
57 | + return ~0U << (bpr + 1); | ||
58 | } | 28 | } |
59 | 29 | ||
60 | static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr) | 30 | @@ -XXX,XX +XXX,XX @@ static void icv_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri, |
61 | @@ -XXX,XX +XXX,XX @@ static uint32_t icc_gprio_mask(GICv3CPUState *cs, int group) | 31 | |
62 | { | 32 | write_vbpr(cs, grp, value); |
63 | /* Return a mask word which clears the subpriority bits from | 33 | |
64 | * a priority value for an interrupt in the specified group. | 34 | - gicv3_cpuif_virt_update(cs); |
65 | - * This depends on the BPR value: | 35 | + gicv3_cpuif_virt_irq_fiq_update(cs); |
66 | + * This depends on the BPR value. For CBPR0 (S or NS): | ||
67 | * a BPR of 0 means the group priority bits are [7:1]; | ||
68 | * a BPR of 1 means they are [7:2], and so on down to | ||
69 | * a BPR of 7 meaning no group priority bits at all. | ||
70 | + * For CBPR1 NS: | ||
71 | + * a BPR of 0 is impossible (the minimum value is 1) | ||
72 | + * a BPR of 1 means the group priority bits are [7:1]; | ||
73 | + * a BPR of 2 means they are [7:2], and so on down to | ||
74 | + * a BPR of 7 meaning the group priority is [7]. | ||
75 | + * | ||
76 | * Which BPR to use depends on the group of the interrupt and | ||
77 | * the current ICC_CTLR.CBPR settings. | ||
78 | + * | ||
79 | + * This corresponds to the GroupBits() pseudocode. | ||
80 | */ | ||
81 | + int bpr; | ||
82 | + | ||
83 | if ((group == GICV3_G1 && cs->icc_ctlr_el1[GICV3_S] & ICC_CTLR_EL1_CBPR) || | ||
84 | (group == GICV3_G1NS && | ||
85 | cs->icc_ctlr_el1[GICV3_NS] & ICC_CTLR_EL1_CBPR)) { | ||
86 | group = GICV3_G0; | ||
87 | } | ||
88 | |||
89 | - return ~0U << ((cs->icc_bpr[group] & 7) + 1); | ||
90 | + bpr = cs->icc_bpr[group] & 7; | ||
91 | + | ||
92 | + if (group == GICV3_G1NS) { | ||
93 | + assert(bpr > 0); | ||
94 | + bpr--; | ||
95 | + } | ||
96 | + | ||
97 | + return ~0U << (bpr + 1); | ||
98 | } | 36 | } |
99 | 37 | ||
100 | static bool icc_no_enabled_hppi(GICv3CPUState *cs) | 38 | static uint64_t icv_pmr_read(CPUARMState *env, const ARMCPRegInfo *ri) |
39 | @@ -XXX,XX +XXX,XX @@ static void icv_pmr_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
40 | cs->ich_vmcr_el2 = deposit64(cs->ich_vmcr_el2, ICH_VMCR_EL2_VPMR_SHIFT, | ||
41 | ICH_VMCR_EL2_VPMR_LENGTH, value); | ||
42 | |||
43 | - gicv3_cpuif_virt_update(cs); | ||
44 | + gicv3_cpuif_virt_irq_fiq_update(cs); | ||
45 | } | ||
46 | |||
47 | static uint64_t icv_igrpen_read(CPUARMState *env, const ARMCPRegInfo *ri) | ||
48 | @@ -XXX,XX +XXX,XX @@ static void icv_ctlr_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
49 | cs->ich_vmcr_el2 = deposit64(cs->ich_vmcr_el2, ICH_VMCR_EL2_VEOIM_SHIFT, | ||
50 | 1, value & ICC_CTLR_EL1_EOIMODE ? 1 : 0); | ||
51 | |||
52 | - gicv3_cpuif_virt_update(cs); | ||
53 | + gicv3_cpuif_virt_irq_fiq_update(cs); | ||
54 | } | ||
55 | |||
56 | static uint64_t icv_rpr_read(CPUARMState *env, const ARMCPRegInfo *ri) | ||
57 | @@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
58 | trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value); | ||
59 | |||
60 | cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU; | ||
61 | - gicv3_cpuif_virt_update(cs); | ||
62 | + gicv3_cpuif_virt_irq_fiq_update(cs); | ||
63 | } | ||
64 | |||
65 | static uint64_t ich_hcr_read(CPUARMState *env, const ARMCPRegInfo *ri) | ||
101 | -- | 66 | -- |
102 | 2.7.4 | 67 | 2.25.1 |
103 | |||
104 | diff view generated by jsdifflib |
New patch | |||
---|---|---|---|
1 | Currently the functions which update the highest priority pending LPI | ||
2 | information by looking at the LPI Pending and Configuration tables | ||
3 | are hard-coded to use the physical LPI tables addressed by | ||
4 | GICR_PENDBASER and GICR_PROPBASER. To support virtual LPIs we will | ||
5 | need to do essentially the same job, but looking at the current | ||
6 | virtual LPI Pending and Configuration tables and updating cs->hppvlpi | ||
7 | instead of cs->hpplpi. | ||
1 | 8 | ||
9 | Factor out the common part of the gicv3_redist_check_lpi_priority() | ||
10 | function into a new update_for_one_lpi() function, which updates | ||
11 | a PendingIrq struct if the specified LPI is higher priority than | ||
12 | what is currently recorded there. | ||
13 | |||
14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
15 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
16 | Message-id: 20220408141550.1271295-28-peter.maydell@linaro.org | ||
17 | --- | ||
18 | hw/intc/arm_gicv3_redist.c | 74 ++++++++++++++++++++++++-------------- | ||
19 | 1 file changed, 47 insertions(+), 27 deletions(-) | ||
20 | |||
21 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
22 | index XXXXXXX..XXXXXXX 100644 | ||
23 | --- a/hw/intc/arm_gicv3_redist.c | ||
24 | +++ b/hw/intc/arm_gicv3_redist.c | ||
25 | @@ -XXX,XX +XXX,XX @@ static uint32_t gicr_read_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs, | ||
26 | return reg; | ||
27 | } | ||
28 | |||
29 | +/** | ||
30 | + * update_for_one_lpi: Update pending information if this LPI is better | ||
31 | + * | ||
32 | + * @cs: GICv3CPUState | ||
33 | + * @irq: interrupt to look up in the LPI Configuration table | ||
34 | + * @ctbase: physical address of the LPI Configuration table to use | ||
35 | + * @ds: true if priority value should not be shifted | ||
36 | + * @hpp: points to pending information to update | ||
37 | + * | ||
38 | + * Look up @irq in the Configuration table specified by @ctbase | ||
39 | + * to see if it is enabled and what its priority is. If it is an | ||
40 | + * enabled interrupt with a higher priority than that currently | ||
41 | + * recorded in @hpp, update @hpp. | ||
42 | + */ | ||
43 | +static void update_for_one_lpi(GICv3CPUState *cs, int irq, | ||
44 | + uint64_t ctbase, bool ds, PendingIrq *hpp) | ||
45 | +{ | ||
46 | + uint8_t lpite; | ||
47 | + uint8_t prio; | ||
48 | + | ||
49 | + address_space_read(&cs->gic->dma_as, | ||
50 | + ctbase + ((irq - GICV3_LPI_INTID_START) * sizeof(lpite)), | ||
51 | + MEMTXATTRS_UNSPECIFIED, &lpite, sizeof(lpite)); | ||
52 | + | ||
53 | + if (!(lpite & LPI_CTE_ENABLED)) { | ||
54 | + return; | ||
55 | + } | ||
56 | + | ||
57 | + if (ds) { | ||
58 | + prio = lpite & LPI_PRIORITY_MASK; | ||
59 | + } else { | ||
60 | + prio = ((lpite & LPI_PRIORITY_MASK) >> 1) | 0x80; | ||
61 | + } | ||
62 | + | ||
63 | + if ((prio < hpp->prio) || | ||
64 | + ((prio == hpp->prio) && (irq <= hpp->irq))) { | ||
65 | + hpp->irq = irq; | ||
66 | + hpp->prio = prio; | ||
67 | + /* LPIs and vLPIs are always non-secure Grp1 interrupts */ | ||
68 | + hpp->grp = GICV3_G1NS; | ||
69 | + } | ||
70 | +} | ||
71 | + | ||
72 | static uint8_t gicr_read_ipriorityr(GICv3CPUState *cs, MemTxAttrs attrs, | ||
73 | int irq) | ||
74 | { | ||
75 | @@ -XXX,XX +XXX,XX @@ MemTxResult gicv3_redist_write(void *opaque, hwaddr offset, uint64_t data, | ||
76 | |||
77 | static void gicv3_redist_check_lpi_priority(GICv3CPUState *cs, int irq) | ||
78 | { | ||
79 | - AddressSpace *as = &cs->gic->dma_as; | ||
80 | - uint64_t lpict_baddr; | ||
81 | - uint8_t lpite; | ||
82 | - uint8_t prio; | ||
83 | + uint64_t lpict_baddr = cs->gicr_propbaser & R_GICR_PROPBASER_PHYADDR_MASK; | ||
84 | |||
85 | - lpict_baddr = cs->gicr_propbaser & R_GICR_PROPBASER_PHYADDR_MASK; | ||
86 | - | ||
87 | - address_space_read(as, lpict_baddr + ((irq - GICV3_LPI_INTID_START) * | ||
88 | - sizeof(lpite)), MEMTXATTRS_UNSPECIFIED, &lpite, | ||
89 | - sizeof(lpite)); | ||
90 | - | ||
91 | - if (!(lpite & LPI_CTE_ENABLED)) { | ||
92 | - return; | ||
93 | - } | ||
94 | - | ||
95 | - if (cs->gic->gicd_ctlr & GICD_CTLR_DS) { | ||
96 | - prio = lpite & LPI_PRIORITY_MASK; | ||
97 | - } else { | ||
98 | - prio = ((lpite & LPI_PRIORITY_MASK) >> 1) | 0x80; | ||
99 | - } | ||
100 | - | ||
101 | - if ((prio < cs->hpplpi.prio) || | ||
102 | - ((prio == cs->hpplpi.prio) && (irq <= cs->hpplpi.irq))) { | ||
103 | - cs->hpplpi.irq = irq; | ||
104 | - cs->hpplpi.prio = prio; | ||
105 | - /* LPIs are always non-secure Grp1 interrupts */ | ||
106 | - cs->hpplpi.grp = GICV3_G1NS; | ||
107 | - } | ||
108 | + update_for_one_lpi(cs, irq, lpict_baddr, | ||
109 | + cs->gic->gicd_ctlr & GICD_CTLR_DS, | ||
110 | + &cs->hpplpi); | ||
111 | } | ||
112 | |||
113 | void gicv3_redist_update_lpi_only(GICv3CPUState *cs) | ||
114 | -- | ||
115 | 2.25.1 | diff view generated by jsdifflib |
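The LPI Configuration table that update_for_one_lpi() reads holds one byte per LPI, with (in the usual GICv3 layout assumed here) the enable bit in bit 0 and the priority in bits [7:2]; that is what LPI_CTE_ENABLED and LPI_PRIORITY_MASK select. A small worked example of the decode performed above, using 0xfc as the assumed priority mask:

    uint8_t lpite   = 0xa1;                          /* enabled, raw priority 0xa0 */
    bool    enabled = lpite & 0x01;                  /* true                       */
    uint8_t prio_ds = lpite & 0xfc;                  /* DS=1: prio = 0xa0          */
    uint8_t prio_ns = ((lpite & 0xfc) >> 1) | 0x80;  /* DS=0: prio = 0xd0          */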
1 | From: Cédric Le Goater <clg@kaod.org> | 1 | Factor out the common part of gicv3_redist_update_lpi_only() into |
---|---|---|---|
2 | a new function update_for_all_lpis(), which does a full rescan | ||
3 | of an LPI Pending table and sets the specified PendingIrq struct | ||
4 | with the highest priority pending enabled LPI it finds. | ||
2 | 5 | ||
3 | The Aspeed I2C controller maintains a state machine in the command | 6 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
4 | register, which is mostly used for debug. | 7 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
8 | Message-id: 20220408141550.1271295-29-peter.maydell@linaro.org | ||
9 | --- | ||
10 | hw/intc/arm_gicv3_redist.c | 66 ++++++++++++++++++++++++++------------ | ||
11 | 1 file changed, 46 insertions(+), 20 deletions(-) | ||
5 | 12 | ||
6 | Let's start adding a few states to handle abnormal STOP | 13 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c |
7 | commands. Today, the model uses the busy status of the bus as a | ||
8 | condition to do so but it is not precise enough. | ||
9 | |||
10 | Also remove the ABNORMAL bit for failing TX commands. This is | ||
11 | incorrect with respect to the specs. | ||
12 | |||
13 | Signed-off-by: Cédric Le Goater <clg@kaod.org> | ||
14 | Message-id: 1494827476-1487-4-git-send-email-clg@kaod.org | ||
15 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
16 | --- | ||
17 | hw/i2c/aspeed_i2c.c | 36 +++++++++++++++++++++++++++++++++--- | ||
18 | 1 file changed, 33 insertions(+), 3 deletions(-) | ||
19 | |||
20 | diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c | ||
21 | index XXXXXXX..XXXXXXX 100644 | 14 | index XXXXXXX..XXXXXXX 100644 |
22 | --- a/hw/i2c/aspeed_i2c.c | 15 | --- a/hw/intc/arm_gicv3_redist.c |
23 | +++ b/hw/i2c/aspeed_i2c.c | 16 | +++ b/hw/intc/arm_gicv3_redist.c |
24 | @@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_i2c_bus_read(void *opaque, hwaddr offset, | 17 | @@ -XXX,XX +XXX,XX @@ static void update_for_one_lpi(GICv3CPUState *cs, int irq, |
25 | } | 18 | } |
26 | } | 19 | } |
27 | 20 | ||
28 | +static void aspeed_i2c_set_state(AspeedI2CBus *bus, uint8_t state) | 21 | +/** |
22 | + * update_for_all_lpis: Fully scan LPI tables and find best pending LPI | ||
23 | + * | ||
24 | + * @cs: GICv3CPUState | ||
25 | + * @ptbase: physical address of LPI Pending table | ||
26 | + * @ctbase: physical address of LPI Configuration table | ||
27 | + * @ptsizebits: size of tables, specified as number of interrupt ID bits minus 1 | ||
28 | + * @ds: true if priority value should not be shifted | ||
29 | + * @hpp: points to pending information to set | ||
30 | + * | ||
31 | + * Recalculate the highest priority pending enabled LPI from scratch, | ||
32 | + * and set @hpp accordingly. | ||
33 | + * | ||
34 | + * We scan the LPI pending table @ptbase; for each pending LPI, we read the | ||
35 | + * corresponding entry in the LPI configuration table @ctbase to extract | ||
36 | + * the priority and enabled information. | ||
37 | + * | ||
38 | + * We take @ptsizebits in the form idbits-1 because this is the way that | ||
39 | + * LPI table sizes are architecturally specified in GICR_PROPBASER.IDBits | ||
40 | + * and in the VMAPP command's VPT_size field. | ||
41 | + */ | ||
42 | +static void update_for_all_lpis(GICv3CPUState *cs, uint64_t ptbase, | ||
43 | + uint64_t ctbase, unsigned ptsizebits, | ||
44 | + bool ds, PendingIrq *hpp) | ||
29 | +{ | 45 | +{ |
30 | + bus->cmd &= ~(I2CD_TX_STATE_MASK << I2CD_TX_STATE_SHIFT); | 46 | + AddressSpace *as = &cs->gic->dma_as; |
31 | + bus->cmd |= (state & I2CD_TX_STATE_MASK) << I2CD_TX_STATE_SHIFT; | 47 | + uint8_t pend; |
48 | + uint32_t pendt_size = (1ULL << (ptsizebits + 1)); | ||
49 | + int i, bit; | ||
50 | + | ||
51 | + hpp->prio = 0xff; | ||
52 | + | ||
53 | + for (i = GICV3_LPI_INTID_START / 8; i < pendt_size / 8; i++) { | ||
54 | + address_space_read(as, ptbase + i, MEMTXATTRS_UNSPECIFIED, &pend, 1); | ||
55 | + while (pend) { | ||
56 | + bit = ctz32(pend); | ||
57 | + update_for_one_lpi(cs, i * 8 + bit, ctbase, ds, hpp); | ||
58 | + pend &= ~(1 << bit); | ||
59 | + } | ||
60 | + } | ||
32 | +} | 61 | +} |
33 | + | 62 | + |
34 | +static uint8_t aspeed_i2c_get_state(AspeedI2CBus *bus) | 63 | static uint8_t gicr_read_ipriorityr(GICv3CPUState *cs, MemTxAttrs attrs, |
35 | +{ | 64 | int irq) |
36 | + return (bus->cmd >> I2CD_TX_STATE_SHIFT) & I2CD_TX_STATE_MASK; | ||
37 | +} | ||
38 | + | ||
39 | +/* | ||
40 | + * The state machine needs some refinement. It is only used to track | ||
41 | + * invalid STOP commands for the moment. | ||
42 | + */ | ||
43 | static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | ||
44 | { | 65 | { |
45 | bus->cmd &= ~0xFFFF; | 66 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_update_lpi_only(GICv3CPUState *cs) |
46 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | 67 | * priority is lower than the last computed high priority lpi interrupt. |
47 | bus->intr_status = 0; | 68 | * If yes, replace current LPI as the new high priority lpi interrupt. |
48 | 69 | */ | |
49 | if (bus->cmd & I2CD_M_START_CMD) { | 70 | - AddressSpace *as = &cs->gic->dma_as; |
50 | + uint8_t state = aspeed_i2c_get_state(bus) & I2CD_MACTIVE ? | 71 | - uint64_t lpipt_baddr; |
51 | + I2CD_MSTARTR : I2CD_MSTART; | 72 | - uint32_t pendt_size = 0; |
52 | + | 73 | - uint8_t pend; |
53 | + aspeed_i2c_set_state(bus, state); | 74 | - int i, bit; |
54 | + | 75 | + uint64_t lpipt_baddr, lpict_baddr; |
55 | if (i2c_start_transfer(bus->bus, extract32(bus->buf, 1, 7), | 76 | uint64_t idbits; |
56 | extract32(bus->buf, 0, 1))) { | 77 | |
57 | bus->intr_status |= I2CD_INTR_TX_NAK; | 78 | idbits = MIN(FIELD_EX64(cs->gicr_propbaser, GICR_PROPBASER, IDBITS), |
58 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | 79 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_update_lpi_only(GICv3CPUState *cs) |
59 | if (!i2c_bus_busy(bus->bus)) { | 80 | return; |
60 | return; | ||
61 | } | ||
62 | + aspeed_i2c_set_state(bus, I2CD_MACTIVE); | ||
63 | } | 81 | } |
64 | 82 | ||
65 | if (bus->cmd & I2CD_M_TX_CMD) { | 83 | - cs->hpplpi.prio = 0xff; |
66 | + aspeed_i2c_set_state(bus, I2CD_MTXD); | 84 | - |
67 | if (i2c_send(bus->bus, bus->buf)) { | 85 | lpipt_baddr = cs->gicr_pendbaser & R_GICR_PENDBASER_PHYADDR_MASK; |
68 | - bus->intr_status |= (I2CD_INTR_TX_NAK | I2CD_INTR_ABNORMAL); | 86 | + lpict_baddr = cs->gicr_propbaser & R_GICR_PROPBASER_PHYADDR_MASK; |
69 | + bus->intr_status |= (I2CD_INTR_TX_NAK); | 87 | |
70 | i2c_end_transfer(bus->bus); | 88 | - /* Determine the highest priority pending interrupt among LPIs */ |
71 | } else { | 89 | - pendt_size = (1ULL << (idbits + 1)); |
72 | bus->intr_status |= I2CD_INTR_TX_ACK; | 90 | - |
73 | } | 91 | - for (i = GICV3_LPI_INTID_START / 8; i < pendt_size / 8; i++) { |
74 | bus->cmd &= ~I2CD_M_TX_CMD; | 92 | - address_space_read(as, lpipt_baddr + i, MEMTXATTRS_UNSPECIFIED, &pend, |
75 | + aspeed_i2c_set_state(bus, I2CD_MACTIVE); | 93 | - sizeof(pend)); |
76 | } | 94 | - |
77 | 95 | - while (pend) { | |
78 | if (bus->cmd & (I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST)) { | 96 | - bit = ctz32(pend); |
79 | - int ret = i2c_recv(bus->bus); | 97 | - gicv3_redist_check_lpi_priority(cs, i * 8 + bit); |
80 | + int ret; | 98 | - pend &= ~(1 << bit); |
81 | + | 99 | - } |
82 | + aspeed_i2c_set_state(bus, I2CD_MRXD); | 100 | - } |
83 | + ret = i2c_recv(bus->bus); | 101 | + update_for_all_lpis(cs, lpipt_baddr, lpict_baddr, idbits, |
84 | if (ret < 0) { | 102 | + cs->gic->gicd_ctlr & GICD_CTLR_DS, &cs->hpplpi); |
85 | qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__); | ||
86 | ret = 0xff; | ||
87 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | ||
88 | i2c_nack(bus->bus); | ||
89 | } | ||
90 | bus->cmd &= ~(I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST); | ||
91 | + aspeed_i2c_set_state(bus, I2CD_MACTIVE); | ||
92 | } | ||
93 | |||
94 | if (bus->cmd & I2CD_M_STOP_CMD) { | ||
95 | - if (!i2c_bus_busy(bus->bus)) { | ||
96 | + if (!(aspeed_i2c_get_state(bus) & I2CD_MACTIVE)) { | ||
97 | + qemu_log_mask(LOG_GUEST_ERROR, "%s: abnormal stop\n", __func__); | ||
98 | bus->intr_status |= I2CD_INTR_ABNORMAL; | ||
99 | } else { | ||
100 | + aspeed_i2c_set_state(bus, I2CD_MSTOP); | ||
101 | i2c_end_transfer(bus->bus); | ||
102 | bus->intr_status |= I2CD_INTR_NORMAL_STOP; | ||
103 | } | ||
104 | bus->cmd &= ~I2CD_M_STOP_CMD; | ||
105 | + aspeed_i2c_set_state(bus, I2CD_IDLE); | ||
106 | } | ||
107 | } | 103 | } |
108 | 104 | ||
105 | void gicv3_redist_update_lpi(GICv3CPUState *cs) | ||
109 | -- | 106 | -- |
110 | 2.7.4 | 107 | 2.25.1 |
111 | |||
112 | diff view generated by jsdifflib |
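To make the @ptsizebits convention concrete: LPI INTIDs start at 8192 (GICV3_LPI_INTID_START), and passing the GICR_PROPBASER.IDBits field value directly gives a table covering 2^(IDBits+1) interrupt IDs. A rough worked example of the sizes the scan loop above ends up with (the numbers are illustrative, not taken from the diff):

    unsigned ptsizebits = 15;                     /* IDBits field value  */
    uint32_t pendt_size = 1u << (ptsizebits + 1); /* 65536 interrupt IDs */
    /*
     * One pending bit per ID gives a 65536 / 8 = 8192 byte table; the loop
     * starts at byte 8192 / 8 = 1024, so bytes 1024..8191 (LPIs 8192..65535)
     * are scanned, with one Configuration table lookup per pending bit found.
     */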
1 | From: Cédric Le Goater <clg@kaod.org> | 1 | The guest uses GICR_VPENDBASER to tell the redistributor when it is |
---|---|---|---|
2 | scheduling or descheduling a vCPU. When it writes and changes the | ||
3 | VALID bit from 0 to 1, it is scheduling a vCPU, and we must update | ||
4 | our view of the current highest priority pending vLPI from the new | ||
5 | Pending and Configuration tables. When it writes and changes the | ||
6 | VALID bit from 1 to 0, it is descheduling, which means that there is | ||
7 | no longer a highest priority pending vLPI. | ||
2 | 8 | ||
3 | Largely inspired by the TMP105 temperature sensor, here is a model for | 9 | The specification allows the implementation to use part of the vLPI |
4 | the TMP42{1,2,3} temperature sensors. | 10 | Pending table as an IMPDEF area where it can cache information when a |
11 | vCPU is descheduled, so that it can avoid having to do a full rescan | ||
12 | of the tables when the vCPU is scheduled again. For now, we don't | ||
13 | take advantage of this, and simply do a complete rescan. | ||
5 | 14 | ||
6 | Specs can be found here : | 15 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
16 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
17 | Message-id: 20220408141550.1271295-30-peter.maydell@linaro.org | ||
18 | --- | ||
19 | hw/intc/arm_gicv3_redist.c | 87 ++++++++++++++++++++++++++++++++++++-- | ||
20 | 1 file changed, 84 insertions(+), 3 deletions(-) | ||
7 | 21 | ||
8 | http://www.ti.com/lit/gpn/tmp421 | 22 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c |
9 | |||
10 | Signed-off-by: Cédric Le Goater <clg@kaod.org> | ||
11 | Message-id: 1494827476-1487-6-git-send-email-clg@kaod.org | ||
12 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
13 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
14 | --- | ||
15 | hw/misc/Makefile.objs | 1 + | ||
16 | hw/misc/tmp421.c | 401 ++++++++++++++++++++++++++++++++++++++++ | ||
17 | default-configs/arm-softmmu.mak | 1 + | ||
18 | 3 files changed, 403 insertions(+) | ||
19 | create mode 100644 hw/misc/tmp421.c | ||
20 | |||
21 | diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs | ||
22 | index XXXXXXX..XXXXXXX 100644 | 23 | index XXXXXXX..XXXXXXX 100644 |
23 | --- a/hw/misc/Makefile.objs | 24 | --- a/hw/intc/arm_gicv3_redist.c |
24 | +++ b/hw/misc/Makefile.objs | 25 | +++ b/hw/intc/arm_gicv3_redist.c |
25 | @@ -XXX,XX +XXX,XX @@ | 26 | @@ -XXX,XX +XXX,XX @@ static void gicr_write_ipriorityr(GICv3CPUState *cs, MemTxAttrs attrs, int irq, |
26 | common-obj-$(CONFIG_APPLESMC) += applesmc.o | 27 | cs->gicr_ipriorityr[irq] = value; |
27 | common-obj-$(CONFIG_MAX111X) += max111x.o | 28 | } |
28 | common-obj-$(CONFIG_TMP105) += tmp105.o | 29 | |
29 | +common-obj-$(CONFIG_TMP421) += tmp421.o | 30 | +static void gicv3_redist_update_vlpi_only(GICv3CPUState *cs) |
30 | common-obj-$(CONFIG_ISA_DEBUG) += debugexit.o | 31 | +{ |
31 | common-obj-$(CONFIG_SGA) += sga.o | 32 | + uint64_t ptbase, ctbase, idbits; |
32 | common-obj-$(CONFIG_ISA_TESTDEV) += pc-testdev.o | ||
33 | diff --git a/hw/misc/tmp421.c b/hw/misc/tmp421.c | ||
34 | new file mode 100644 | ||
35 | index XXXXXXX..XXXXXXX | ||
36 | --- /dev/null | ||
37 | +++ b/hw/misc/tmp421.c | ||
38 | @@ -XXX,XX +XXX,XX @@ | ||
39 | +/* | ||
40 | + * Texas Instruments TMP421 temperature sensor. | ||
41 | + * | ||
42 | + * Copyright (c) 2016 IBM Corporation. | ||
43 | + * | ||
44 | + * Largely inspired by : | ||
45 | + * | ||
46 | + * Texas Instruments TMP105 temperature sensor. | ||
47 | + * | ||
48 | + * Copyright (C) 2008 Nokia Corporation | ||
49 | + * Written by Andrzej Zaborowski <andrew@openedhand.com> | ||
50 | + * | ||
51 | + * This program is free software; you can redistribute it and/or | ||
52 | + * modify it under the terms of the GNU General Public License as | ||
53 | + * published by the Free Software Foundation; either version 2 or | ||
54 | + * (at your option) version 3 of the License. | ||
55 | + * | ||
56 | + * This program is distributed in the hope that it will be useful, | ||
57 | + * but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
58 | + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
59 | + * GNU General Public License for more details. | ||
60 | + * | ||
61 | + * You should have received a copy of the GNU General Public License along | ||
62 | + * with this program; if not, see <http://www.gnu.org/licenses/>. | ||
63 | + */ | ||
64 | + | 33 | + |
65 | +#include "qemu/osdep.h" | 34 | + if (!FIELD_EX64(cs->gicr_vpendbaser, GICR_VPENDBASER, VALID)) { |
66 | +#include "hw/hw.h" | 35 | + cs->hppvlpi.prio = 0xff; |
67 | +#include "hw/i2c/i2c.h" | ||
68 | +#include "qapi/error.h" | ||
69 | +#include "qapi/visitor.h" | ||
70 | + | ||
71 | +/* Manufacturer / Device ID's */ | ||
72 | +#define TMP421_MANUFACTURER_ID 0x55 | ||
73 | +#define TMP421_DEVICE_ID 0x21 | ||
74 | +#define TMP422_DEVICE_ID 0x22 | ||
75 | +#define TMP423_DEVICE_ID 0x23 | ||
76 | + | ||
77 | +typedef struct DeviceInfo { | ||
78 | + int model; | ||
79 | + const char *name; | ||
80 | +} DeviceInfo; | ||
81 | + | ||
82 | +static const DeviceInfo devices[] = { | ||
83 | + { TMP421_DEVICE_ID, "tmp421" }, | ||
84 | + { TMP422_DEVICE_ID, "tmp422" }, | ||
85 | + { TMP423_DEVICE_ID, "tmp423" }, | ||
86 | +}; | ||
87 | + | ||
88 | +typedef struct TMP421State { | ||
89 | + /*< private >*/ | ||
90 | + I2CSlave i2c; | ||
91 | + /*< public >*/ | ||
92 | + | ||
93 | + int16_t temperature[4]; | ||
94 | + | ||
95 | + uint8_t status; | ||
96 | + uint8_t config[2]; | ||
97 | + uint8_t rate; | ||
98 | + | ||
99 | + uint8_t len; | ||
100 | + uint8_t buf[2]; | ||
101 | + uint8_t pointer; | ||
102 | + | ||
103 | +} TMP421State; | ||
104 | + | ||
105 | +typedef struct TMP421Class { | ||
106 | + I2CSlaveClass parent_class; | ||
107 | + DeviceInfo *dev; | ||
108 | +} TMP421Class; | ||
109 | + | ||
110 | +#define TYPE_TMP421 "tmp421-generic" | ||
111 | +#define TMP421(obj) OBJECT_CHECK(TMP421State, (obj), TYPE_TMP421) | ||
112 | + | ||
113 | +#define TMP421_CLASS(klass) \ | ||
114 | + OBJECT_CLASS_CHECK(TMP421Class, (klass), TYPE_TMP421) | ||
115 | +#define TMP421_GET_CLASS(obj) \ | ||
116 | + OBJECT_GET_CLASS(TMP421Class, (obj), TYPE_TMP421) | ||
117 | + | ||
118 | +/* the TMP421 registers */ | ||
119 | +#define TMP421_STATUS_REG 0x08 | ||
120 | +#define TMP421_STATUS_BUSY (1 << 7) | ||
121 | +#define TMP421_CONFIG_REG_1 0x09 | ||
122 | +#define TMP421_CONFIG_RANGE (1 << 2) | ||
123 | +#define TMP421_CONFIG_SHUTDOWN (1 << 6) | ||
124 | +#define TMP421_CONFIG_REG_2 0x0A | ||
125 | +#define TMP421_CONFIG_RC (1 << 2) | ||
126 | +#define TMP421_CONFIG_LEN (1 << 3) | ||
127 | +#define TMP421_CONFIG_REN (1 << 4) | ||
128 | +#define TMP421_CONFIG_REN2 (1 << 5) | ||
129 | +#define TMP421_CONFIG_REN3 (1 << 6) | ||
130 | + | ||
131 | +#define TMP421_CONVERSION_RATE_REG 0x0B | ||
132 | +#define TMP421_ONE_SHOT 0x0F | ||
133 | + | ||
134 | +#define TMP421_RESET 0xFC | ||
135 | +#define TMP421_MANUFACTURER_ID_REG 0xFE | ||
136 | +#define TMP421_DEVICE_ID_REG 0xFF | ||
137 | + | ||
138 | +#define TMP421_TEMP_MSB0 0x00 | ||
139 | +#define TMP421_TEMP_MSB1 0x01 | ||
140 | +#define TMP421_TEMP_MSB2 0x02 | ||
141 | +#define TMP421_TEMP_MSB3 0x03 | ||
142 | +#define TMP421_TEMP_LSB0 0x10 | ||
143 | +#define TMP421_TEMP_LSB1 0x11 | ||
144 | +#define TMP421_TEMP_LSB2 0x12 | ||
145 | +#define TMP421_TEMP_LSB3 0x13 | ||
146 | + | ||
147 | +static const int32_t mins[2] = { -40000, -55000 }; | ||
148 | +static const int32_t maxs[2] = { 127000, 150000 }; | ||
149 | + | ||
150 | +static void tmp421_get_temperature(Object *obj, Visitor *v, const char *name, | ||
151 | + void *opaque, Error **errp) | ||
152 | +{ | ||
153 | + TMP421State *s = TMP421(obj); | ||
154 | + bool ext_range = (s->config[0] & TMP421_CONFIG_RANGE); | ||
155 | + int offset = ext_range * 64 * 256; | ||
156 | + int64_t value; | ||
157 | + int tempid; | ||
158 | + | ||
159 | + if (sscanf(name, "temperature%d", &tempid) != 1) { | ||
160 | + error_setg(errp, "error reading %s: %m", name); | ||
161 | + return; | 36 | + return; |
162 | + } | 37 | + } |
163 | + | 38 | + |
164 | + if (tempid >= 4 || tempid < 0) { | 39 | + ptbase = cs->gicr_vpendbaser & R_GICR_VPENDBASER_PHYADDR_MASK; |
165 | + error_setg(errp, "error reading %s", name); | 40 | + ctbase = cs->gicr_vpropbaser & R_GICR_VPROPBASER_PHYADDR_MASK; |
41 | + idbits = FIELD_EX64(cs->gicr_vpropbaser, GICR_VPROPBASER, IDBITS); | ||
42 | + | ||
43 | + update_for_all_lpis(cs, ptbase, ctbase, idbits, true, &cs->hppvlpi); | ||
44 | +} | ||
45 | + | ||
46 | +static void gicv3_redist_update_vlpi(GICv3CPUState *cs) | ||
47 | +{ | ||
48 | + gicv3_redist_update_vlpi_only(cs); | ||
49 | + gicv3_cpuif_virt_irq_fiq_update(cs); | ||
50 | +} | ||
51 | + | ||
52 | +static void gicr_write_vpendbaser(GICv3CPUState *cs, uint64_t newval) | ||
53 | +{ | ||
54 | + /* Write @newval to GICR_VPENDBASER, handling its effects */ | ||
55 | + bool oldvalid = FIELD_EX64(cs->gicr_vpendbaser, GICR_VPENDBASER, VALID); | ||
56 | + bool newvalid = FIELD_EX64(newval, GICR_VPENDBASER, VALID); | ||
57 | + bool pendinglast; | ||
58 | + | ||
59 | + /* | ||
60 | + * The DIRTY bit is read-only and for us is always zero; | ||
61 | + * other fields are writeable. | ||
62 | + */ | ||
63 | + newval &= R_GICR_VPENDBASER_INNERCACHE_MASK | | ||
64 | + R_GICR_VPENDBASER_SHAREABILITY_MASK | | ||
65 | + R_GICR_VPENDBASER_PHYADDR_MASK | | ||
66 | + R_GICR_VPENDBASER_OUTERCACHE_MASK | | ||
67 | + R_GICR_VPENDBASER_PENDINGLAST_MASK | | ||
68 | + R_GICR_VPENDBASER_IDAI_MASK | | ||
69 | + R_GICR_VPENDBASER_VALID_MASK; | ||
70 | + | ||
71 | + if (oldvalid && newvalid) { | ||
72 | + /* | ||
73 | + * Changing other fields while VALID is 1 is UNPREDICTABLE; | ||
74 | + * we choose to log and ignore the write. | ||
75 | + */ | ||
76 | + if (cs->gicr_vpendbaser ^ newval) { | ||
77 | + qemu_log_mask(LOG_GUEST_ERROR, | ||
78 | + "%s: Changing GICR_VPENDBASER when VALID=1 " | ||
79 | + "is UNPREDICTABLE\n", __func__); | ||
80 | + } | ||
81 | + return; | ||
82 | + } | ||
83 | + if (!oldvalid && !newvalid) { | ||
84 | + cs->gicr_vpendbaser = newval; | ||
166 | + return; | 85 | + return; |
167 | + } | 86 | + } |
168 | + | 87 | + |
169 | + value = ((s->temperature[tempid] - offset) * 1000 + 128) / 256; | 88 | + if (newvalid) { |
89 | + /* | ||
90 | + * Valid going from 0 to 1: update hppvlpi from tables. | ||
91 | + * If IDAI is 0 we are allowed to use the info we cached in | ||
92 | + * the IMPDEF area of the table. | ||
93 | + * PendingLast is RES1 when we make this transition. | ||
94 | + */ | ||
95 | + pendinglast = true; | ||
96 | + } else { | ||
97 | + /* | ||
98 | + * Valid going from 1 to 0: | ||
99 | + * Set PendingLast if there was a pending enabled interrupt | ||
100 | + * for the vPE that was just descheduled. | ||
101 | + * If we cache info in the IMPDEF area, write it out here. | ||
102 | + */ | ||
103 | + pendinglast = cs->hppvlpi.prio != 0xff; | ||
104 | + } | ||
170 | + | 105 | + |
171 | + visit_type_int(v, name, &value, errp); | 106 | + newval = FIELD_DP64(newval, GICR_VPENDBASER, PENDINGLAST, pendinglast); |
107 | + cs->gicr_vpendbaser = newval; | ||
108 | + gicv3_redist_update_vlpi(cs); | ||
172 | +} | 109 | +} |
173 | + | 110 | + |
174 | +/* Units are 0.001 centigrades relative to 0 C. s->temperature is 8.8 | 111 | static MemTxResult gicr_readb(GICv3CPUState *cs, hwaddr offset, |
175 | + * fixed point, so units are 1/256 centigrades. A simple ratio will do. | 112 | uint64_t *data, MemTxAttrs attrs) |
176 | + */ | 113 | { |
177 | +static void tmp421_set_temperature(Object *obj, Visitor *v, const char *name, | 114 | @@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset, |
178 | + void *opaque, Error **errp) | 115 | cs->gicr_vpropbaser = deposit64(cs->gicr_vpropbaser, 32, 32, value); |
179 | +{ | 116 | return MEMTX_OK; |
180 | + TMP421State *s = TMP421(obj); | 117 | case GICR_VPENDBASER: |
181 | + Error *local_err = NULL; | 118 | - cs->gicr_vpendbaser = deposit64(cs->gicr_vpendbaser, 0, 32, value); |
182 | + int64_t temp; | 119 | + gicr_write_vpendbaser(cs, deposit64(cs->gicr_vpendbaser, 0, 32, value)); |
183 | + bool ext_range = (s->config[0] & TMP421_CONFIG_RANGE); | 120 | return MEMTX_OK; |
184 | + int offset = ext_range * 64 * 256; | 121 | case GICR_VPENDBASER + 4: |
185 | + int tempid; | 122 | - cs->gicr_vpendbaser = deposit64(cs->gicr_vpendbaser, 32, 32, value); |
186 | + | 123 | + gicr_write_vpendbaser(cs, deposit64(cs->gicr_vpendbaser, 32, 32, value)); |
187 | + visit_type_int(v, name, &temp, &local_err); | 124 | return MEMTX_OK; |
188 | + if (local_err) { | 125 | default: |
189 | + error_propagate(errp, local_err); | 126 | return MEMTX_ERROR; |
190 | + return; | 127 | @@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writell(GICv3CPUState *cs, hwaddr offset, |
191 | + } | 128 | cs->gicr_vpropbaser = value; |
192 | + | 129 | return MEMTX_OK; |
193 | + if (temp >= maxs[ext_range] || temp < mins[ext_range]) { | 130 | case GICR_VPENDBASER: |
194 | + error_setg(errp, "value %" PRId64 ".%03" PRIu64 " °C is out of range", | 131 | - cs->gicr_vpendbaser = value; |
195 | + temp / 1000, temp % 1000); | 132 | + gicr_write_vpendbaser(cs, value); |
196 | + return; | 133 | return MEMTX_OK; |
197 | + } | 134 | default: |
198 | + | 135 | return MEMTX_ERROR; |
199 | + if (sscanf(name, "temperature%d", &tempid) != 1) { | ||
200 | + error_setg(errp, "error reading %s: %m", name); | ||
201 | + return; | ||
202 | + } | ||
203 | + | ||
204 | + if (tempid >= 4 || tempid < 0) { | ||
205 | + error_setg(errp, "error reading %s", name); | ||
206 | + return; | ||
207 | + } | ||
208 | + | ||
209 | + s->temperature[tempid] = (int16_t) ((temp * 256 - 128) / 1000) + offset; | ||
210 | +} | ||
211 | + | ||
212 | +static void tmp421_read(TMP421State *s) | ||
213 | +{ | ||
214 | + TMP421Class *sc = TMP421_GET_CLASS(s); | ||
215 | + | ||
216 | + s->len = 0; | ||
217 | + | ||
218 | + switch (s->pointer) { | ||
219 | + case TMP421_MANUFACTURER_ID_REG: | ||
220 | + s->buf[s->len++] = TMP421_MANUFACTURER_ID; | ||
221 | + break; | ||
222 | + case TMP421_DEVICE_ID_REG: | ||
223 | + s->buf[s->len++] = sc->dev->model; | ||
224 | + break; | ||
225 | + case TMP421_CONFIG_REG_1: | ||
226 | + s->buf[s->len++] = s->config[0]; | ||
227 | + break; | ||
228 | + case TMP421_CONFIG_REG_2: | ||
229 | + s->buf[s->len++] = s->config[1]; | ||
230 | + break; | ||
231 | + case TMP421_CONVERSION_RATE_REG: | ||
232 | + s->buf[s->len++] = s->rate; | ||
233 | + break; | ||
234 | + case TMP421_STATUS_REG: | ||
235 | + s->buf[s->len++] = s->status; | ||
236 | + break; | ||
237 | + | ||
238 | + /* FIXME: check for channel enablement in config registers */ | ||
239 | + case TMP421_TEMP_MSB0: | ||
240 | + s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 8); | ||
241 | + s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 0) & 0xf0; | ||
242 | + break; | ||
243 | + case TMP421_TEMP_MSB1: | ||
244 | + s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 8); | ||
245 | + s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 0) & 0xf0; | ||
246 | + break; | ||
247 | + case TMP421_TEMP_MSB2: | ||
248 | + s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 8); | ||
249 | + s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 0) & 0xf0; | ||
250 | + break; | ||
251 | + case TMP421_TEMP_MSB3: | ||
252 | + s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 8); | ||
253 | + s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 0) & 0xf0; | ||
254 | + break; | ||
255 | + case TMP421_TEMP_LSB0: | ||
256 | + s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 0) & 0xf0; | ||
257 | + break; | ||
258 | + case TMP421_TEMP_LSB1: | ||
259 | + s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 0) & 0xf0; | ||
260 | + break; | ||
261 | + case TMP421_TEMP_LSB2: | ||
262 | + s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 0) & 0xf0; | ||
263 | + break; | ||
264 | + case TMP421_TEMP_LSB3: | ||
265 | + s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 0) & 0xf0; | ||
266 | + break; | ||
267 | + } | ||
268 | +} | ||
269 | + | ||
270 | +static void tmp421_reset(I2CSlave *i2c); | ||
271 | + | ||
272 | +static void tmp421_write(TMP421State *s) | ||
273 | +{ | ||
274 | + switch (s->pointer) { | ||
275 | + case TMP421_CONVERSION_RATE_REG: | ||
276 | + s->rate = s->buf[0]; | ||
277 | + break; | ||
278 | + case TMP421_CONFIG_REG_1: | ||
279 | + s->config[0] = s->buf[0]; | ||
280 | + break; | ||
281 | + case TMP421_CONFIG_REG_2: | ||
282 | + s->config[1] = s->buf[0]; | ||
283 | + break; | ||
284 | + case TMP421_RESET: | ||
285 | + tmp421_reset(I2C_SLAVE(s)); | ||
286 | + break; | ||
287 | + } | ||
288 | +} | ||
289 | + | ||
290 | +static int tmp421_rx(I2CSlave *i2c) | ||
291 | +{ | ||
292 | + TMP421State *s = TMP421(i2c); | ||
293 | + | ||
294 | + if (s->len < 2) { | ||
295 | + return s->buf[s->len++]; | ||
296 | + } else { | ||
297 | + return 0xff; | ||
298 | + } | ||
299 | +} | ||
300 | + | ||
301 | +static int tmp421_tx(I2CSlave *i2c, uint8_t data) | ||
302 | +{ | ||
303 | + TMP421State *s = TMP421(i2c); | ||
304 | + | ||
305 | + if (s->len == 0) { | ||
306 | + /* first byte is the register pointer for a read or write | ||
307 | + * operation */ | ||
308 | + s->pointer = data; | ||
309 | + s->len++; | ||
310 | + } else if (s->len == 1) { | ||
311 | + /* second byte is the data to write. The device only supports | ||
312 | + * one byte writes */ | ||
313 | + s->buf[0] = data; | ||
314 | + tmp421_write(s); | ||
315 | + } | ||
316 | + | ||
317 | + return 0; | ||
318 | +} | ||
319 | + | ||
320 | +static int tmp421_event(I2CSlave *i2c, enum i2c_event event) | ||
321 | +{ | ||
322 | + TMP421State *s = TMP421(i2c); | ||
323 | + | ||
324 | + if (event == I2C_START_RECV) { | ||
325 | + tmp421_read(s); | ||
326 | + } | ||
327 | + | ||
328 | + s->len = 0; | ||
329 | + return 0; | ||
330 | +} | ||
331 | + | ||
332 | +static const VMStateDescription vmstate_tmp421 = { | ||
333 | + .name = "TMP421", | ||
334 | + .version_id = 0, | ||
335 | + .minimum_version_id = 0, | ||
336 | + .fields = (VMStateField[]) { | ||
337 | + VMSTATE_UINT8(len, TMP421State), | ||
338 | + VMSTATE_UINT8_ARRAY(buf, TMP421State, 2), | ||
339 | + VMSTATE_UINT8(pointer, TMP421State), | ||
340 | + VMSTATE_UINT8_ARRAY(config, TMP421State, 2), | ||
341 | + VMSTATE_UINT8(status, TMP421State), | ||
342 | + VMSTATE_UINT8(rate, TMP421State), | ||
343 | + VMSTATE_INT16_ARRAY(temperature, TMP421State, 4), | ||
344 | + VMSTATE_I2C_SLAVE(i2c, TMP421State), | ||
345 | + VMSTATE_END_OF_LIST() | ||
346 | + } | ||
347 | +}; | ||
348 | + | ||
349 | +static void tmp421_reset(I2CSlave *i2c) | ||
350 | +{ | ||
351 | + TMP421State *s = TMP421(i2c); | ||
352 | + TMP421Class *sc = TMP421_GET_CLASS(s); | ||
353 | + | ||
354 | + memset(s->temperature, 0, sizeof(s->temperature)); | ||
355 | + s->pointer = 0; | ||
356 | + | ||
357 | + s->config[0] = 0; /* TMP421_CONFIG_RANGE */ | ||
358 | + | ||
359 | + /* resistance correction and channel enablement */ | ||
360 | + switch (sc->dev->model) { | ||
361 | + case TMP421_DEVICE_ID: | ||
362 | + s->config[1] = 0x1c; | ||
363 | + break; | ||
364 | + case TMP422_DEVICE_ID: | ||
365 | + s->config[1] = 0x3c; | ||
366 | + break; | ||
367 | + case TMP423_DEVICE_ID: | ||
368 | + s->config[1] = 0x7c; | ||
369 | + break; | ||
370 | + } | ||
371 | + | ||
372 | + s->rate = 0x7; /* 8Hz */ | ||
373 | + s->status = 0; | ||
374 | +} | ||
375 | + | ||
376 | +static int tmp421_init(I2CSlave *i2c) | ||
377 | +{ | ||
378 | + TMP421State *s = TMP421(i2c); | ||
379 | + | ||
380 | + tmp421_reset(&s->i2c); | ||
381 | + | ||
382 | + return 0; | ||
383 | +} | ||
384 | + | ||
385 | +static void tmp421_initfn(Object *obj) | ||
386 | +{ | ||
387 | + object_property_add(obj, "temperature0", "int", | ||
388 | + tmp421_get_temperature, | ||
389 | + tmp421_set_temperature, NULL, NULL, NULL); | ||
390 | + object_property_add(obj, "temperature1", "int", | ||
391 | + tmp421_get_temperature, | ||
392 | + tmp421_set_temperature, NULL, NULL, NULL); | ||
393 | + object_property_add(obj, "temperature2", "int", | ||
394 | + tmp421_get_temperature, | ||
395 | + tmp421_set_temperature, NULL, NULL, NULL); | ||
396 | + object_property_add(obj, "temperature3", "int", | ||
397 | + tmp421_get_temperature, | ||
398 | + tmp421_set_temperature, NULL, NULL, NULL); | ||
399 | +} | ||
400 | + | ||
401 | +static void tmp421_class_init(ObjectClass *klass, void *data) | ||
402 | +{ | ||
403 | + DeviceClass *dc = DEVICE_CLASS(klass); | ||
404 | + I2CSlaveClass *k = I2C_SLAVE_CLASS(klass); | ||
405 | + TMP421Class *sc = TMP421_CLASS(klass); | ||
406 | + | ||
407 | + k->init = tmp421_init; | ||
408 | + k->event = tmp421_event; | ||
409 | + k->recv = tmp421_rx; | ||
410 | + k->send = tmp421_tx; | ||
411 | + dc->vmsd = &vmstate_tmp421; | ||
412 | + sc->dev = (DeviceInfo *) data; | ||
413 | +} | ||
414 | + | ||
415 | +static const TypeInfo tmp421_info = { | ||
416 | + .name = TYPE_TMP421, | ||
417 | + .parent = TYPE_I2C_SLAVE, | ||
418 | + .instance_size = sizeof(TMP421State), | ||
419 | + .instance_init = tmp421_initfn, | ||
420 | + .class_init = tmp421_class_init, | ||
421 | +}; | ||
422 | + | ||
423 | +static void tmp421_register_types(void) | ||
424 | +{ | ||
425 | + int i; | ||
426 | + | ||
427 | + type_register_static(&tmp421_info); | ||
428 | + for (i = 0; i < ARRAY_SIZE(devices); ++i) { | ||
429 | + TypeInfo ti = { | ||
430 | + .name = devices[i].name, | ||
431 | + .parent = TYPE_TMP421, | ||
432 | + .class_init = tmp421_class_init, | ||
433 | + .class_data = (void *) &devices[i], | ||
434 | + }; | ||
435 | + type_register(&ti); | ||
436 | + } | ||
437 | +} | ||
438 | + | ||
439 | +type_init(tmp421_register_types) | ||
440 | diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak | ||
441 | index XXXXXXX..XXXXXXX 100644 | ||
442 | --- a/default-configs/arm-softmmu.mak | ||
443 | +++ b/default-configs/arm-softmmu.mak | ||
444 | @@ -XXX,XX +XXX,XX @@ CONFIG_TWL92230=y | ||
445 | CONFIG_TSC2005=y | ||
446 | CONFIG_LM832X=y | ||
447 | CONFIG_TMP105=y | ||
448 | +CONFIG_TMP421=y | ||
449 | CONFIG_STELLARIS=y | ||
450 | CONFIG_STELLARIS_INPUT=y | ||
451 | CONFIG_STELLARIS_ENET=y | ||
452 | -- | 136 | -- |
453 | 2.7.4 | 137 | 2.25.1 |
454 | |||
455 | diff view generated by jsdifflib |
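For context on the VALID transitions handled above, a sketch of the register sequence a guest hypervisor might use when it switches which vPE is resident on a redistributor. The gicr_read64()/gicr_write64() accessors and the flag macros are illustrative stand-ins, not code from this series:

    static void vpe_switch(uint64_t next_prop_pa, uint64_t next_pend_pa,
                           unsigned idbits)
    {
        /* Deschedule the current vPE: clear VALID, then look at PendingLast */
        uint64_t v = gicr_read64(GICR_VPENDBASER);
        gicr_write64(GICR_VPENDBASER, v & ~VPENDBASER_VALID);
        /* real hardware also wants Dirty polled here; this model keeps it 0 */
        bool left_pending = gicr_read64(GICR_VPENDBASER) & VPENDBASER_PENDINGLAST;
        (void)left_pending;  /* hypervisor may use this to re-queue the vPE */

        /* Schedule the next vPE: point at its tables, then set VALID */
        gicr_write64(GICR_VPROPBASER, next_prop_pa | idbits);
        gicr_write64(GICR_VPENDBASER, next_pend_pa | VPENDBASER_VALID);
    }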
New patch | |||
---|---|---|---|
1 | Factor out the code which sets a single bit in an LPI pending table. | ||
2 | We're going to need this for handling vLPI tables, not just the | ||
3 | physical LPI table. | ||
1 | 4 | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
6 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Message-id: 20220408141550.1271295-31-peter.maydell@linaro.org | ||
8 | --- | ||
9 | hw/intc/arm_gicv3_redist.c | 49 +++++++++++++++++++++++--------------- | ||
10 | 1 file changed, 30 insertions(+), 19 deletions(-) | ||
11 | |||
12 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
13 | index XXXXXXX..XXXXXXX 100644 | ||
14 | --- a/hw/intc/arm_gicv3_redist.c | ||
15 | +++ b/hw/intc/arm_gicv3_redist.c | ||
16 | @@ -XXX,XX +XXX,XX @@ static void update_for_all_lpis(GICv3CPUState *cs, uint64_t ptbase, | ||
17 | } | ||
18 | } | ||
19 | |||
20 | +/** | ||
21 | + * set_pending_table_bit: Set or clear pending bit for an LPI | ||
22 | + * | ||
23 | + * @cs: GICv3CPUState | ||
24 | + * @ptbase: physical address of LPI Pending table | ||
25 | + * @irq: LPI to change pending state for | ||
26 | + * @level: false to clear pending state, true to set | ||
27 | + * | ||
28 | + * Returns true if we needed to do something, false if the pending bit | ||
29 | + * was already at @level. | ||
30 | + */ | ||
31 | +static bool set_pending_table_bit(GICv3CPUState *cs, uint64_t ptbase, | ||
32 | + int irq, bool level) | ||
33 | +{ | ||
34 | + AddressSpace *as = &cs->gic->dma_as; | ||
35 | + uint64_t addr = ptbase + irq / 8; | ||
36 | + uint8_t pend; | ||
37 | + | ||
38 | + address_space_read(as, addr, MEMTXATTRS_UNSPECIFIED, &pend, 1); | ||
39 | + if (extract32(pend, irq % 8, 1) == level) { | ||
40 | + /* Bit already at requested state, no action required */ | ||
41 | + return false; | ||
42 | + } | ||
43 | + pend = deposit32(pend, irq % 8, 1, level ? 1 : 0); | ||
44 | + address_space_write(as, addr, MEMTXATTRS_UNSPECIFIED, &pend, 1); | ||
45 | + return true; | ||
46 | +} | ||
47 | + | ||
48 | static uint8_t gicr_read_ipriorityr(GICv3CPUState *cs, MemTxAttrs attrs, | ||
49 | int irq) | ||
50 | { | ||
51 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_lpi_pending(GICv3CPUState *cs, int irq, int level) | ||
52 | * This function updates the pending bit in lpi pending table for | ||
53 | * the irq being activated or deactivated. | ||
54 | */ | ||
55 | - AddressSpace *as = &cs->gic->dma_as; | ||
56 | uint64_t lpipt_baddr; | ||
57 | - bool ispend = false; | ||
58 | - uint8_t pend; | ||
59 | |||
60 | - /* | ||
61 | - * get the bit value corresponding to this irq in the | ||
62 | - * lpi pending table | ||
63 | - */ | ||
64 | lpipt_baddr = cs->gicr_pendbaser & R_GICR_PENDBASER_PHYADDR_MASK; | ||
65 | - | ||
66 | - address_space_read(as, lpipt_baddr + ((irq / 8) * sizeof(pend)), | ||
67 | - MEMTXATTRS_UNSPECIFIED, &pend, sizeof(pend)); | ||
68 | - | ||
69 | - ispend = extract32(pend, irq % 8, 1); | ||
70 | - | ||
71 | - /* no change in the value of pending bit, return */ | ||
72 | - if (ispend == level) { | ||
73 | + if (!set_pending_table_bit(cs, lpipt_baddr, irq, level)) { | ||
74 | + /* no change in the value of pending bit, return */ | ||
75 | return; | ||
76 | } | ||
77 | - pend = deposit32(pend, irq % 8, 1, level ? 1 : 0); | ||
78 | - | ||
79 | - address_space_write(as, lpipt_baddr + ((irq / 8) * sizeof(pend)), | ||
80 | - MEMTXATTRS_UNSPECIFIED, &pend, sizeof(pend)); | ||
81 | |||
82 | /* | ||
83 | * check if this LPI is better than the current hpplpi, if yes | ||
84 | -- | ||
85 | 2.25.1 | diff view generated by jsdifflib |
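The byte/bit arithmetic in set_pending_table_bit() is the architected one-pending-bit-per-INTID layout; a worked example (numbers only, nothing here comes from the diff):

    int irq = 8197;               /* an LPI; LPI INTIDs start at 8192 */
    uint64_t byte_off = irq / 8;  /* 1024                             */
    int bit = irq % 8;            /* 5                                */
    /*
     * The function reads the byte at ptbase + 1024, flips bit 5 only if it
     * differs from @level, writes it back, and returns true iff it changed.
     */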
1 | From: Cédric Le Goater <clg@kaod.org> | 1 | Implement the function gicv3_redist_process_vlpi(), which was left as |
---|---|---|---|
2 | just a stub earlier. This function deals with being handed a VLPI by | ||
3 | the ITS. It must set the bit in the pending table. If the vCPU is | ||
4 | currently resident we must recalculate the highest priority pending | ||
5 | vLPI; otherwise we may need to ring a "doorbell" interrupt to let the | ||
6 | hypervisor know it might want to reschedule the vCPU. | ||
2 | 7 | ||
3 | Multiple I2C commands can be fired simultaneously and the controller | 8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
4 | execute the commands following these priorities: | 9 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
10 | Message-id: 20220408141550.1271295-32-peter.maydell@linaro.org | ||
11 | --- | ||
12 | hw/intc/arm_gicv3_redist.c | 48 ++++++++++++++++++++++++++++++++++---- | ||
13 | 1 file changed, 44 insertions(+), 4 deletions(-) | ||
5 | 14 | ||
6 | (1) Master Start Command | 15 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c |
7 | (2) Master Transmit Command | ||
8 | (3) Slave Transmit Command or Master Receive Command | ||
9 | (4) Master Stop Command | ||
10 | |||
11 | The current code is incorrect with respect to the above sequence and | ||
12 | needs to be reworked to handle each individual command. | ||
13 | |||
14 | Signed-off-by: Cédric Le Goater <clg@kaod.org> | ||
15 | Message-id: 1494827476-1487-2-git-send-email-clg@kaod.org | ||
16 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
17 | --- | ||
18 | hw/i2c/aspeed_i2c.c | 24 ++++++++++++++++++------ | ||
19 | 1 file changed, 18 insertions(+), 6 deletions(-) | ||
20 | |||
21 | diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c | ||
22 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
23 | --- a/hw/i2c/aspeed_i2c.c | 17 | --- a/hw/intc/arm_gicv3_redist.c |
24 | +++ b/hw/i2c/aspeed_i2c.c | 18 | +++ b/hw/intc/arm_gicv3_redist.c |
25 | @@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_i2c_bus_read(void *opaque, hwaddr offset, | 19 | @@ -XXX,XX +XXX,XX @@ static uint32_t gicr_read_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs, |
26 | 20 | return reg; | |
27 | static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | 21 | } |
22 | |||
23 | +static bool vcpu_resident(GICv3CPUState *cs, uint64_t vptaddr) | ||
24 | +{ | ||
25 | + /* | ||
26 | + * Return true if a vCPU is resident, which is defined by | ||
27 | + * whether the GICR_VPENDBASER register is marked VALID and | ||
28 | + * has the right virtual pending table address. | ||
29 | + */ | ||
30 | + if (!FIELD_EX64(cs->gicr_vpendbaser, GICR_VPENDBASER, VALID)) { | ||
31 | + return false; | ||
32 | + } | ||
33 | + return vptaddr == (cs->gicr_vpendbaser & R_GICR_VPENDBASER_PHYADDR_MASK); | ||
34 | +} | ||
35 | + | ||
36 | /** | ||
37 | * update_for_one_lpi: Update pending information if this LPI is better | ||
38 | * | ||
39 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_vlpi_pending(GICv3CPUState *cs, int irq, int level) | ||
40 | void gicv3_redist_process_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr, | ||
41 | int doorbell, int level) | ||
28 | { | 42 | { |
29 | + bus->cmd &= ~0xFFFF; | 43 | - /* |
30 | bus->cmd |= value & 0xFFFF; | 44 | - * The redistributor handling for being handed a VLPI by the ITS |
31 | bus->intr_status = 0; | 45 | - * will be added in a subsequent commit. |
32 | 46 | - */ | |
33 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | 47 | + bool bit_changed; |
34 | bus->intr_status |= I2CD_INTR_TX_ACK; | 48 | + bool resident = vcpu_resident(cs, vptaddr); |
35 | } | 49 | + uint64_t ctbase; |
36 | |||
37 | - } else if (bus->cmd & I2CD_M_TX_CMD) { | ||
38 | + /* START command is also a TX command, as the slave address is | ||
39 | + * sent on the bus */ | ||
40 | + bus->cmd &= ~(I2CD_M_START_CMD | I2CD_M_TX_CMD); | ||
41 | + | 50 | + |
42 | + /* No slave found */ | 51 | + if (resident) { |
43 | + if (!i2c_bus_busy(bus->bus)) { | 52 | + uint32_t idbits = FIELD_EX64(cs->gicr_vpropbaser, GICR_VPROPBASER, IDBITS); |
53 | + if (irq >= (1ULL << (idbits + 1))) { | ||
44 | + return; | 54 | + return; |
45 | + } | 55 | + } |
46 | + } | 56 | + } |
47 | + | 57 | + |
48 | + if (bus->cmd & I2CD_M_TX_CMD) { | 58 | + bit_changed = set_pending_table_bit(cs, vptaddr, irq, level); |
49 | if (i2c_send(bus->bus, bus->buf)) { | 59 | + if (resident && bit_changed) { |
50 | bus->intr_status |= (I2CD_INTR_TX_NAK | I2CD_INTR_ABNORMAL); | 60 | + if (level) { |
51 | i2c_end_transfer(bus->bus); | 61 | + /* Check whether this vLPI is now the best */ |
52 | } else { | 62 | + ctbase = cs->gicr_vpropbaser & R_GICR_VPROPBASER_PHYADDR_MASK; |
53 | bus->intr_status |= I2CD_INTR_TX_ACK; | 63 | + update_for_one_lpi(cs, irq, ctbase, true, &cs->hppvlpi); |
54 | } | 64 | + gicv3_cpuif_virt_irq_fiq_update(cs); |
55 | + bus->cmd &= ~I2CD_M_TX_CMD; | 65 | + } else { |
66 | + /* Only need to recalculate if this was previously the best vLPI */ | ||
67 | + if (irq == cs->hppvlpi.irq) { | ||
68 | + gicv3_redist_update_vlpi(cs); | ||
69 | + } | ||
70 | + } | ||
56 | + } | 71 | + } |
57 | 72 | + | |
58 | - } else if (bus->cmd & I2CD_M_RX_CMD) { | 73 | + if (!resident && level && doorbell != INTID_SPURIOUS && |
59 | + if (bus->cmd & I2CD_M_RX_CMD) { | 74 | + (cs->gicr_ctlr & GICR_CTLR_ENABLE_LPIS)) { |
60 | int ret = i2c_recv(bus->bus); | 75 | + /* vCPU is not currently resident: ring the doorbell */ |
61 | if (ret < 0) { | 76 | + gicv3_redist_process_lpi(cs, doorbell, 1); |
62 | qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__); | 77 | + } |
63 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | ||
64 | bus->intr_status |= I2CD_INTR_RX_DONE; | ||
65 | } | ||
66 | bus->buf = (ret & I2CD_BYTE_BUF_RX_MASK) << I2CD_BYTE_BUF_RX_SHIFT; | ||
67 | + bus->cmd &= ~I2CD_M_RX_CMD; | ||
68 | } | ||
69 | |||
70 | if (bus->cmd & (I2CD_M_STOP_CMD | I2CD_M_S_RX_CMD_LAST)) { | ||
71 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | ||
72 | i2c_end_transfer(bus->bus); | ||
73 | bus->intr_status |= I2CD_INTR_NORMAL_STOP; | ||
74 | } | ||
75 | + bus->cmd &= ~I2CD_M_STOP_CMD; | ||
76 | } | ||
77 | - | ||
78 | - /* command is handled, reset it and check for interrupts */ | ||
79 | - bus->cmd &= ~0xFFFF; | ||
80 | - aspeed_i2c_bus_raise_interrupt(bus); | ||
81 | } | 78 | } |
82 | 79 | ||
83 | static void aspeed_i2c_bus_write(void *opaque, hwaddr offset, | 80 | void gicv3_redist_mov_vlpi(GICv3CPUState *src, uint64_t src_vptaddr, |
84 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_write(void *opaque, hwaddr offset, | ||
85 | } | ||
86 | |||
87 | aspeed_i2c_bus_handle_cmd(bus, value); | ||
88 | + aspeed_i2c_bus_raise_interrupt(bus); | ||
89 | break; | ||
90 | |||
91 | default: | ||
92 | -- | 81 | -- |
93 | 2.7.4 | 82 | 2.25.1 |
94 | |||
95 | diff view generated by jsdifflib |
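A sketch of how the ITS side is expected to hand a vLPI over using the signature implemented above; the variable names are illustrative stand-ins for what the ITS looks up in its tables (the translated vINTID, the vPE's pending table address recorded at VMAPP time, and the doorbell pLPI recorded at VMAPTI time):

    /* in the ITS translation path, once the event has been resolved */
    gicv3_redist_process_vlpi(cs, vintid, vpt_addr, doorbell_intid, 1);

If the target vCPU is resident on cs, this sets the pending bit and refreshes the cached hppvlpi; if not, the doorbell pLPI (when it is not INTID_SPURIOUS) is delivered as an ordinary physical LPI instead.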
New patch | |||
---|---|---|---|
1 | Implement the function gicv3_redist_vlpi_pending(), which was | ||
2 | previously left as a stub. This is the function that is called by | ||
3 | the CPU interface when it changes the state of a vLPI. It's similar | ||
4 | to gicv3_redist_process_vlpi(), but we know that the vCPU is | ||
5 | definitely resident on the redistributor and the irq is in range, so | ||
6 | it is a bit simpler. | ||
1 | 7 | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
10 | Message-id: 20220408141550.1271295-33-peter.maydell@linaro.org | ||
11 | --- | ||
12 | hw/intc/arm_gicv3_redist.c | 23 +++++++++++++++++++++-- | ||
13 | 1 file changed, 21 insertions(+), 2 deletions(-) | ||
14 | |||
15 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
16 | index XXXXXXX..XXXXXXX 100644 | ||
17 | --- a/hw/intc/arm_gicv3_redist.c | ||
18 | +++ b/hw/intc/arm_gicv3_redist.c | ||
19 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_movall_lpis(GICv3CPUState *src, GICv3CPUState *dest) | ||
20 | void gicv3_redist_vlpi_pending(GICv3CPUState *cs, int irq, int level) | ||
21 | { | ||
22 | /* | ||
23 | - * The redistributor handling for changing the pending state | ||
24 | - * of a vLPI will be added in a subsequent commit. | ||
25 | + * Change the pending state of the specified vLPI. | ||
26 | + * Unlike gicv3_redist_process_vlpi(), we know here that the | ||
27 | + * vCPU is definitely resident on this redistributor, and that | ||
28 | + * the irq is in range. | ||
29 | */ | ||
30 | + uint64_t vptbase, ctbase; | ||
31 | + | ||
32 | + vptbase = FIELD_EX64(cs->gicr_vpendbaser, GICR_VPENDBASER, PHYADDR) << 16; | ||
33 | + | ||
34 | + if (set_pending_table_bit(cs, vptbase, irq, level)) { | ||
35 | + if (level) { | ||
36 | + /* Check whether this vLPI is now the best */ | ||
37 | + ctbase = cs->gicr_vpropbaser & R_GICR_VPROPBASER_PHYADDR_MASK; | ||
38 | + update_for_one_lpi(cs, irq, ctbase, true, &cs->hppvlpi); | ||
39 | + gicv3_cpuif_virt_irq_fiq_update(cs); | ||
40 | + } else { | ||
41 | + /* Only need to recalculate if this was previously the best vLPI */ | ||
42 | + if (irq == cs->hppvlpi.irq) { | ||
43 | + gicv3_redist_update_vlpi(cs); | ||
44 | + } | ||
45 | + } | ||
46 | + } | ||
47 | } | ||
48 | |||
49 | void gicv3_redist_process_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr, | ||
50 | -- | ||
51 | 2.25.1 | diff view generated by jsdifflib |
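For illustration, the kind of call the CPU interface makes with this helper: vLPIs, like physical LPIs, have no active state, so acknowledging the highest-priority pending vLPI clears its pending bit. The call site shown is illustrative, not taken from this hunk:

    /* in the ICV_IAR read path, after returning cs->hppvlpi.irq to the vCPU */
    gicv3_redist_vlpi_pending(cs, cs->hppvlpi.irq, 0);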
New patch | |||
---|---|---|---|
1 | We can use our new set_pending_table_bit() utility function | ||
2 | in gicv3_redist_mov_lpi() to clear the bit in the source | ||
3 | pending table, rather than doing the "load, clear bit, store" | ||
4 | ourselves. | ||
1 | 5 | ||
6 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
7 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 20220408141550.1271295-34-peter.maydell@linaro.org | ||
9 | --- | ||
10 | hw/intc/arm_gicv3_redist.c | 9 +-------- | ||
11 | 1 file changed, 1 insertion(+), 8 deletions(-) | ||
12 | |||
13 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
14 | index XXXXXXX..XXXXXXX 100644 | ||
15 | --- a/hw/intc/arm_gicv3_redist.c | ||
16 | +++ b/hw/intc/arm_gicv3_redist.c | ||
17 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_mov_lpi(GICv3CPUState *src, GICv3CPUState *dest, int irq) | ||
18 | * we choose to NOP. If LPIs are disabled on source there's nothing | ||
19 | * to be transferred anyway. | ||
20 | */ | ||
21 | - AddressSpace *as = &src->gic->dma_as; | ||
22 | uint64_t idbits; | ||
23 | uint32_t pendt_size; | ||
24 | uint64_t src_baddr; | ||
25 | - uint8_t src_pend; | ||
26 | |||
27 | if (!(src->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) || | ||
28 | !(dest->gicr_ctlr & GICR_CTLR_ENABLE_LPIS)) { | ||
29 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_mov_lpi(GICv3CPUState *src, GICv3CPUState *dest, int irq) | ||
30 | |||
31 | src_baddr = src->gicr_pendbaser & R_GICR_PENDBASER_PHYADDR_MASK; | ||
32 | |||
33 | - address_space_read(as, src_baddr + (irq / 8), | ||
34 | - MEMTXATTRS_UNSPECIFIED, &src_pend, sizeof(src_pend)); | ||
35 | - if (!extract32(src_pend, irq % 8, 1)) { | ||
36 | + if (!set_pending_table_bit(src, src_baddr, irq, 0)) { | ||
37 | /* Not pending on source, nothing to do */ | ||
38 | return; | ||
39 | } | ||
40 | - src_pend &= ~(1 << (irq % 8)); | ||
41 | - address_space_write(as, src_baddr + (irq / 8), | ||
42 | - MEMTXATTRS_UNSPECIFIED, &src_pend, sizeof(src_pend)); | ||
43 | if (irq == src->hpplpi.irq) { | ||
44 | /* | ||
45 | * We just made this LPI not-pending so only need to update | ||
46 | -- | ||
47 | 2.25.1
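For reference, here is a sketch of what a set_pending_table_bit() helper of this shape looks like, reconstructed from the open-coded load/clear/store sequence being removed above; treat the exact signature and return convention as an assumption rather than a quote of the upstream helper. It returns true only when the bit actually changed, which is why the caller above can use a false return to mean "not pending on source".

```c
/*
 * Sketch only: reconstructed from the removed open-coded sequence.
 * Set (level != 0) or clear (level == 0) the pending-table bit for @irq,
 * returning true if the bit changed and false if it already held the
 * requested value. Uses QEMU's internal DMA and bitfield helpers.
 */
static bool set_pending_table_bit(GICv3CPUState *cs, uint64_t ptbase,
                                  int irq, bool level)
{
    AddressSpace *as = &cs->gic->dma_as;
    uint64_t addr = ptbase + irq / 8;
    uint8_t pend;

    address_space_read(as, addr, MEMTXATTRS_UNSPECIFIED, &pend, sizeof(pend));
    if (extract32(pend, irq % 8, 1) == level) {
        return false;               /* already in the requested state */
    }
    pend = deposit32(pend, irq % 8, 1, level);
    address_space_write(as, addr, MEMTXATTRS_UNSPECIFIED, &pend, sizeof(pend));
    return true;
}
```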
New patch | |||
---|---|---|---|
1 | Implement the gicv3_redist_mov_vlpi() function (previously left as a | ||
2 | stub). This function handles the work of a VMOVI command: it marks | ||
3 | the vLPI not-pending on the source and pending on the destination. | ||
1 | 4 | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
6 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Message-id: 20220408141550.1271295-35-peter.maydell@linaro.org | ||
8 | --- | ||
9 | hw/intc/arm_gicv3_redist.c | 20 ++++++++++++++++++-- | ||
10 | 1 file changed, 18 insertions(+), 2 deletions(-) | ||
11 | |||
12 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
13 | index XXXXXXX..XXXXXXX 100644 | ||
14 | --- a/hw/intc/arm_gicv3_redist.c | ||
15 | +++ b/hw/intc/arm_gicv3_redist.c | ||
16 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_mov_vlpi(GICv3CPUState *src, uint64_t src_vptaddr, | ||
17 | int irq, int doorbell) | ||
18 | { | ||
19 | /* | ||
20 | - * The redistributor handling for moving a VLPI will be added | ||
21 | - * in a subsequent commit. | ||
22 | + * Move the specified vLPI's pending state from the source redistributor | ||
23 | + * to the destination. | ||
24 | */ | ||
25 | + if (!set_pending_table_bit(src, src_vptaddr, irq, 0)) { | ||
26 | + /* Not pending on source, nothing to do */ | ||
27 | + return; | ||
28 | + } | ||
29 | + if (vcpu_resident(src, src_vptaddr) && irq == src->hppvlpi.irq) { | ||
30 | + /* | ||
31 | + * Update src's cached highest-priority pending vLPI if we just made | ||
32 | + * it not-pending | ||
33 | + */ | ||
34 | + gicv3_redist_update_vlpi(src); | ||
35 | + } | ||
36 | + /* | ||
37 | + * Mark the vLPI pending on the destination (ringing the doorbell | ||
38 | + * if the vCPU isn't resident) | ||
39 | + */ | ||
40 | + gicv3_redist_process_vlpi(dest, irq, dest_vptaddr, doorbell, irq); | ||
41 | } | ||
42 | |||
43 | void gicv3_redist_vinvall(GICv3CPUState *cs, uint64_t vptaddr) | ||
44 | -- | ||
45 | 2.25.1
New patch | |||
---|---|---|---|
1 | Implement the gicv3_redist_vinvall() function (previously left as a | ||
2 | stub). This function handles the work of a VINVALL command: it must | ||
3 | invalidate any cached information associated with a specific vCPU. | ||
1 | 4 | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
6 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Message-id: 20220408141550.1271295-36-peter.maydell@linaro.org | ||
8 | --- | ||
9 | hw/intc/arm_gicv3_redist.c | 8 +++++++- | ||
10 | 1 file changed, 7 insertions(+), 1 deletion(-) | ||
11 | |||
12 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
13 | index XXXXXXX..XXXXXXX 100644 | ||
14 | --- a/hw/intc/arm_gicv3_redist.c | ||
15 | +++ b/hw/intc/arm_gicv3_redist.c | ||
16 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_mov_vlpi(GICv3CPUState *src, uint64_t src_vptaddr, | ||
17 | |||
18 | void gicv3_redist_vinvall(GICv3CPUState *cs, uint64_t vptaddr) | ||
19 | { | ||
20 | - /* The redistributor handling will be added in a subsequent commit */ | ||
21 | + if (!vcpu_resident(cs, vptaddr)) { | ||
22 | + /* We don't have anything cached if the vCPU isn't resident */ | ||
23 | + return; | ||
24 | + } | ||
25 | + | ||
26 | + /* Otherwise, our only cached information is the HPPVLPI info */ | ||
27 | + gicv3_redist_update_vlpi(cs); | ||
28 | } | ||
29 | |||
30 | void gicv3_redist_inv_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr) | ||
31 | -- | ||
32 | 2.25.1
New patch | |||
---|---|---|---|
1 | Implement the function gicv3_redist_inv_vlpi(), which was previously | ||
2 | left as a stub. This is the function that does the work of the INV | ||
3 | command for a virtual interrupt. | ||
1 | 4 | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
6 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Message-id: 20220408141550.1271295-37-peter.maydell@linaro.org | ||
8 | --- | ||
9 | hw/intc/arm_gicv3_redist.c | 7 +++++-- | ||
10 | 1 file changed, 5 insertions(+), 2 deletions(-) | ||
11 | |||
12 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
13 | index XXXXXXX..XXXXXXX 100644 | ||
14 | --- a/hw/intc/arm_gicv3_redist.c | ||
15 | +++ b/hw/intc/arm_gicv3_redist.c | ||
16 | @@ -XXX,XX +XXX,XX @@ void gicv3_redist_vinvall(GICv3CPUState *cs, uint64_t vptaddr) | ||
17 | void gicv3_redist_inv_vlpi(GICv3CPUState *cs, int irq, uint64_t vptaddr) | ||
18 | { | ||
19 | /* | ||
20 | - * The redistributor handling for invalidating cached information | ||
21 | - * about a VLPI will be added in a subsequent commit. | ||
22 | + * The only cached information for LPIs we have is the HPPLPI. | ||
23 | + * We could be cleverer about identifying when we don't need | ||
24 | + * to do a full rescan of the pending table, but until we find | ||
25 | + * this is a performance issue, just always recalculate. | ||
26 | */ | ||
27 | + gicv3_redist_vinvall(cs, vptaddr); | ||
28 | } | ||
29 | |||
30 | void gicv3_redist_set_irq(GICv3CPUState *cs, int irq, int level) | ||
31 | -- | ||
32 | 2.25.1
1 | icc_bpr_write() was not enforcing that writing a value below the | 1 | Update the various GIC ID and feature registers for GICv4: |
---|---|---|---|
2 | minimum for the BPR should behave as if the BPR was set to the | 2 | * PIDR2 [7:4] is the GIC architecture revision |
3 | minimum value. This doesn't make a difference for the secure | 3 | * GICD_TYPER.DVIS is 1 to indicate direct vLPI injection support |
4 | BPRs (since we define the minimum for the QEMU implementation | 4 | * GICR_TYPER.VLPIS is 1 to indicate redistributor support for vLPIs |
5 | as zero) but did mean we were allowing the NS BPR1 to be set to | 5 | * GITS_TYPER.VIRTUAL is 1 to indicate vLPI support |
6 | 0 when 1 should be the lowest value. | 6 | * GITS_TYPER.VMOVP is 1 to indicate that our VMOVP implementation |
7 | handles cross-ITS synchronization for the guest | ||
8 | * ICH_VTR_EL2.nV4 is 0 to indicate direct vLPI injection support | ||
7 | 9 | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 11 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
10 | Message-id: 1493226792-3237-3-git-send-email-peter.maydell@linaro.org | 12 | Message-id: 20220408141550.1271295-38-peter.maydell@linaro.org |
11 | --- | 13 | --- |
12 | hw/intc/arm_gicv3_cpuif.c | 6 ++++++ | 14 | hw/intc/gicv3_internal.h | 15 +++++++++++---- |
13 | 1 file changed, 6 insertions(+) | 15 | hw/intc/arm_gicv3_common.c | 7 +++++-- |
16 | hw/intc/arm_gicv3_cpuif.c | 6 +++++- | ||
17 | hw/intc/arm_gicv3_dist.c | 7 ++++--- | ||
18 | hw/intc/arm_gicv3_its.c | 7 ++++++- | ||
19 | hw/intc/arm_gicv3_redist.c | 2 +- | ||
20 | 6 files changed, 32 insertions(+), 12 deletions(-) | ||
14 | 21 | ||
22 | diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h | ||
23 | index XXXXXXX..XXXXXXX 100644 | ||
24 | --- a/hw/intc/gicv3_internal.h | ||
25 | +++ b/hw/intc/gicv3_internal.h | ||
26 | @@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, SEIS, 18, 1) | ||
27 | FIELD(GITS_TYPER, PTA, 19, 1) | ||
28 | FIELD(GITS_TYPER, CIDBITS, 32, 4) | ||
29 | FIELD(GITS_TYPER, CIL, 36, 1) | ||
30 | +FIELD(GITS_TYPER, VMOVP, 37, 1) | ||
31 | |||
32 | #define GITS_IDREGS 0xFFD0 | ||
33 | |||
34 | @@ -XXX,XX +XXX,XX @@ static inline uint32_t gicv3_iidr(void) | ||
35 | #define GICV3_PIDR0_REDIST 0x93 | ||
36 | #define GICV3_PIDR0_ITS 0x94 | ||
37 | |||
38 | -static inline uint32_t gicv3_idreg(int regoffset, uint8_t pidr0) | ||
39 | +static inline uint32_t gicv3_idreg(GICv3State *s, int regoffset, uint8_t pidr0) | ||
40 | { | ||
41 | /* Return the value of the CoreSight ID register at the specified | ||
42 | * offset from the first ID register (as found in the distributor | ||
43 | * and redistributor register banks). | ||
44 | - * These values indicate an ARM implementation of a GICv3. | ||
45 | + * These values indicate an ARM implementation of a GICv3 or v4. | ||
46 | */ | ||
47 | static const uint8_t gicd_ids[] = { | ||
48 | - 0x44, 0x00, 0x00, 0x00, 0x92, 0xB4, 0x3B, 0x00, 0x0D, 0xF0, 0x05, 0xB1 | ||
49 | + 0x44, 0x00, 0x00, 0x00, 0x92, 0xB4, 0x0B, 0x00, 0x0D, 0xF0, 0x05, 0xB1 | ||
50 | }; | ||
51 | + uint32_t id; | ||
52 | |||
53 | regoffset /= 4; | ||
54 | |||
55 | if (regoffset == 4) { | ||
56 | return pidr0; | ||
57 | } | ||
58 | - return gicd_ids[regoffset]; | ||
59 | + id = gicd_ids[regoffset]; | ||
60 | + if (regoffset == 6) { | ||
61 | + /* PIDR2 bits [7:4] are the GIC architecture revision */ | ||
62 | + id |= s->revision << 4; | ||
63 | + } | ||
64 | + return id; | ||
65 | } | ||
66 | |||
67 | /** | ||
68 | diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c | ||
69 | index XXXXXXX..XXXXXXX 100644 | ||
70 | --- a/hw/intc/arm_gicv3_common.c | ||
71 | +++ b/hw/intc/arm_gicv3_common.c | ||
72 | @@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp) | ||
73 | * Last == 1 if this is the last redistributor in a series of | ||
74 | * contiguous redistributor pages | ||
75 | * DirectLPI == 0 (direct injection of LPIs not supported) | ||
76 | - * VLPIS == 0 (virtual LPIs not supported) | ||
77 | - * PLPIS == 0 (physical LPIs not supported) | ||
78 | + * VLPIS == 1 if vLPIs supported (GICv4 and up) | ||
79 | + * PLPIS == 1 if LPIs supported | ||
80 | */ | ||
81 | cpu_affid = object_property_get_uint(OBJECT(cpu), "mp-affinity", NULL); | ||
82 | |||
83 | @@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp) | ||
84 | |||
85 | if (s->lpi_enable) { | ||
86 | s->cpu[i].gicr_typer |= GICR_TYPER_PLPIS; | ||
87 | + if (s->revision > 3) { | ||
88 | + s->cpu[i].gicr_typer |= GICR_TYPER_VLPIS; | ||
89 | + } | ||
90 | } | ||
91 | } | ||
92 | |||
15 | diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c | 93 | diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c |
16 | index XXXXXXX..XXXXXXX 100644 | 94 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/hw/intc/arm_gicv3_cpuif.c | 95 | --- a/hw/intc/arm_gicv3_cpuif.c |
18 | +++ b/hw/intc/arm_gicv3_cpuif.c | 96 | +++ b/hw/intc/arm_gicv3_cpuif.c |
19 | @@ -XXX,XX +XXX,XX @@ static void icc_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri, | 97 | @@ -XXX,XX +XXX,XX @@ static uint64_t ich_vtr_read(CPUARMState *env, const ARMCPRegInfo *ri) |
20 | { | 98 | uint64_t value; |
21 | GICv3CPUState *cs = icc_cs_from_env(env); | 99 | |
22 | int grp = (ri->crm == 8) ? GICV3_G0 : GICV3_G1; | 100 | value = ((cs->num_list_regs - 1) << ICH_VTR_EL2_LISTREGS_SHIFT) |
23 | + uint64_t minval; | 101 | - | ICH_VTR_EL2_TDS | ICH_VTR_EL2_NV4 | ICH_VTR_EL2_A3V |
24 | 102 | + | ICH_VTR_EL2_TDS | ICH_VTR_EL2_A3V | |
25 | if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) { | 103 | | (1 << ICH_VTR_EL2_IDBITS_SHIFT) |
26 | icv_bpr_write(env, ri, value); | 104 | | ((cs->vprebits - 1) << ICH_VTR_EL2_PREBITS_SHIFT) |
27 | @@ -XXX,XX +XXX,XX @@ static void icc_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri, | 105 | | ((cs->vpribits - 1) << ICH_VTR_EL2_PRIBITS_SHIFT); |
28 | return; | 106 | |
29 | } | 107 | + if (cs->gic->revision < 4) { |
30 | 108 | + value |= ICH_VTR_EL2_NV4; | |
31 | + minval = (grp == GICV3_G1NS) ? GIC_MIN_BPR_NS : GIC_MIN_BPR; | ||
32 | + if (value < minval) { | ||
33 | + value = minval; | ||
34 | + } | 109 | + } |
35 | + | 110 | + |
36 | cs->icc_bpr[grp] = value & 7; | 111 | trace_gicv3_ich_vtr_read(gicv3_redist_affid(cs), value); |
37 | gicv3_cpuif_update(cs); | 112 | return value; |
38 | } | 113 | } |
114 | diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c | ||
115 | index XXXXXXX..XXXXXXX 100644 | ||
116 | --- a/hw/intc/arm_gicv3_dist.c | ||
117 | +++ b/hw/intc/arm_gicv3_dist.c | ||
118 | @@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset, | ||
119 | * No1N == 1 (1-of-N SPI interrupts not supported) | ||
120 | * A3V == 1 (non-zero values of Affinity level 3 supported) | ||
121 | * IDbits == 0xf (we support 16-bit interrupt identifiers) | ||
122 | - * DVIS == 0 (Direct virtual LPI injection not supported) | ||
123 | + * DVIS == 1 (Direct virtual LPI injection supported) if GICv4 | ||
124 | * LPIS == 1 (LPIs are supported if affinity routing is enabled) | ||
125 | * num_LPIs == 0b00000 (bits [15:11],Number of LPIs as indicated | ||
126 | * by GICD_TYPER.IDbits) | ||
127 | @@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset, | ||
128 | * so we only need to check the DS bit. | ||
129 | */ | ||
130 | bool sec_extn = !(s->gicd_ctlr & GICD_CTLR_DS); | ||
131 | + bool dvis = s->revision >= 4; | ||
132 | |||
133 | - *data = (1 << 25) | (1 << 24) | (sec_extn << 10) | | ||
134 | + *data = (1 << 25) | (1 << 24) | (dvis << 18) | (sec_extn << 10) | | ||
135 | (s->lpi_enable << GICD_TYPER_LPIS_SHIFT) | | ||
136 | (0xf << 19) | itlinesnumber; | ||
137 | return true; | ||
138 | @@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset, | ||
139 | } | ||
140 | case GICD_IDREGS ... GICD_IDREGS + 0x2f: | ||
141 | /* ID registers */ | ||
142 | - *data = gicv3_idreg(offset - GICD_IDREGS, GICV3_PIDR0_DIST); | ||
143 | + *data = gicv3_idreg(s, offset - GICD_IDREGS, GICV3_PIDR0_DIST); | ||
144 | return true; | ||
145 | case GICD_SGIR: | ||
146 | /* WO registers, return unknown value */ | ||
147 | diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c | ||
148 | index XXXXXXX..XXXXXXX 100644 | ||
149 | --- a/hw/intc/arm_gicv3_its.c | ||
150 | +++ b/hw/intc/arm_gicv3_its.c | ||
151 | @@ -XXX,XX +XXX,XX @@ static bool its_readl(GICv3ITSState *s, hwaddr offset, | ||
152 | break; | ||
153 | case GITS_IDREGS ... GITS_IDREGS + 0x2f: | ||
154 | /* ID registers */ | ||
155 | - *data = gicv3_idreg(offset - GITS_IDREGS, GICV3_PIDR0_ITS); | ||
156 | + *data = gicv3_idreg(s->gicv3, offset - GITS_IDREGS, GICV3_PIDR0_ITS); | ||
157 | break; | ||
158 | case GITS_TYPER: | ||
159 | *data = extract64(s->typer, 0, 32); | ||
160 | @@ -XXX,XX +XXX,XX @@ static void gicv3_arm_its_realize(DeviceState *dev, Error **errp) | ||
161 | s->typer = FIELD_DP64(s->typer, GITS_TYPER, DEVBITS, ITS_DEVBITS); | ||
162 | s->typer = FIELD_DP64(s->typer, GITS_TYPER, CIL, 1); | ||
163 | s->typer = FIELD_DP64(s->typer, GITS_TYPER, CIDBITS, ITS_CIDBITS); | ||
164 | + if (s->gicv3->revision >= 4) { | ||
165 | + /* Our VMOVP handles cross-ITS synchronization itself */ | ||
166 | + s->typer = FIELD_DP64(s->typer, GITS_TYPER, VMOVP, 1); | ||
167 | + s->typer = FIELD_DP64(s->typer, GITS_TYPER, VIRTUAL, 1); | ||
168 | + } | ||
169 | } | ||
170 | |||
171 | static void gicv3_its_reset(DeviceState *dev) | ||
172 | diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c | ||
173 | index XXXXXXX..XXXXXXX 100644 | ||
174 | --- a/hw/intc/arm_gicv3_redist.c | ||
175 | +++ b/hw/intc/arm_gicv3_redist.c | ||
176 | @@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_readl(GICv3CPUState *cs, hwaddr offset, | ||
177 | *data = cs->gicr_nsacr; | ||
178 | return MEMTX_OK; | ||
179 | case GICR_IDREGS ... GICR_IDREGS + 0x2f: | ||
180 | - *data = gicv3_idreg(offset - GICR_IDREGS, GICV3_PIDR0_REDIST); | ||
181 | + *data = gicv3_idreg(cs->gic, offset - GICR_IDREGS, GICV3_PIDR0_REDIST); | ||
182 | return MEMTX_OK; | ||
183 | /* | ||
184 | * VLPI frame registers. We don't need a version check for | ||
39 | -- | 185 | -- |
40 | 2.7.4 | 186 | 2.25.1 |
41 | |||
42 |
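To see these ID-register changes from the other side, here is a hypothetical guest-style probe that decodes the two distributor-visible bits this patch touches: the architecture revision in PIDR2 bits [7:4] and GICD_TYPER.DVIS at bit 18. The MMIO accessor is a placeholder, not a real QEMU or kernel API; the register offsets are the architectural ones.

```c
#include <stdbool.h>
#include <stdint.h>

#define GICD_TYPER   0x0004
#define GICD_PIDR2   0xFFE8

/* Placeholder for a 32-bit MMIO read of the distributor frame. */
extern uint32_t gicd_read32(uint32_t offset);

/* Returns the GIC architecture revision (3 or 4) and whether direct
 * vLPI injection (DVIS) is advertised. */
static unsigned probe_gic_arch_rev(bool *dvis)
{
    uint32_t pidr2 = gicd_read32(GICD_PIDR2);
    uint32_t typer = gicd_read32(GICD_TYPER);

    *dvis = typer & (1u << 18);        /* GICD_TYPER.DVIS */
    return (pidr2 >> 4) & 0xf;         /* PIDR2 ArchRev: 3 = GICv3, 4 = GICv4 */
}
```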
1 | If the CPU is a PMSA config with no MPU implemented, then the | 1 | Now that we have implemented all the GICv4 requirements, relax the |
---|---|---|---|
2 | SCTLR.M bit should be RAZ/WI, so that the guest can never | 2 | error-checking on the GIC object's 'revision' property to allow a TCG |
3 | turn on the non-existent MPU. | 3 | GIC to be a GICv4, whilst still constraining the KVM GIC to GICv3. |
4 | |||
5 | Our 'revision' property doesn't consider the possibility of wanting | ||
6 | to specify the minor version of the GIC -- for instance there is a | ||
7 | GICv3.1 which adds support for extended SPI and PPI ranges, among | ||
8 | other things, and also GICv4.1. But since the QOM property is | ||
9 | internal to QEMU, not user-facing, we can cross that bridge when we | ||
10 | come to it. Within the GIC implementation itself code generally | ||
11 | checks against the appropriate ID register feature bits, and the | ||
12 | only use of s->revision is for setting those ID register bits. | ||
4 | 13 | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
6 | Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> | 15 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 16 | Message-id: 20220408141550.1271295-39-peter.maydell@linaro.org |
8 | Message-id: 1493122030-32191-7-git-send-email-peter.maydell@linaro.org | ||
9 | --- | 17 | --- |
10 | target/arm/helper.c | 5 +++++ | 18 | hw/intc/arm_gicv3_common.c | 12 +++++++----- |
11 | 1 file changed, 5 insertions(+) | 19 | hw/intc/arm_gicv3_kvm.c | 5 +++++ |
20 | 2 files changed, 12 insertions(+), 5 deletions(-) | ||
12 | 21 | ||
13 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 22 | diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c |
14 | index XXXXXXX..XXXXXXX 100644 | 23 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/target/arm/helper.c | 24 | --- a/hw/intc/arm_gicv3_common.c |
16 | +++ b/target/arm/helper.c | 25 | +++ b/hw/intc/arm_gicv3_common.c |
17 | @@ -XXX,XX +XXX,XX @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri, | 26 | @@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp) |
27 | GICv3State *s = ARM_GICV3_COMMON(dev); | ||
28 | int i, rdist_capacity, cpuidx; | ||
29 | |||
30 | - /* revision property is actually reserved and currently used only in order | ||
31 | - * to keep the interface compatible with GICv2 code, avoiding extra | ||
32 | - * conditions. However, in future it could be used, for example, if we | ||
33 | - * implement GICv4. | ||
34 | + /* | ||
35 | + * This GIC device supports only revisions 3 and 4. The GICv1/v2 | ||
36 | + * is a separate device. | ||
37 | + * Note that subclasses of this device may impose further restrictions | ||
38 | + * on the GIC revision: notably, the in-kernel KVM GIC doesn't | ||
39 | + * support GICv4. | ||
40 | */ | ||
41 | - if (s->revision != 3) { | ||
42 | + if (s->revision != 3 && s->revision != 4) { | ||
43 | error_setg(errp, "unsupported GIC revision %d", s->revision); | ||
18 | return; | 44 | return; |
19 | } | 45 | } |
20 | 46 | diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c | |
21 | + if (arm_feature(env, ARM_FEATURE_PMSA) && !cpu->has_mpu) { | 47 | index XXXXXXX..XXXXXXX 100644 |
22 | + /* M bit is RAZ/WI for PMSA with no MPU implemented */ | 48 | --- a/hw/intc/arm_gicv3_kvm.c |
23 | + value &= ~SCTLR_M; | 49 | +++ b/hw/intc/arm_gicv3_kvm.c |
50 | @@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp) | ||
51 | return; | ||
52 | } | ||
53 | |||
54 | + if (s->revision != 3) { | ||
55 | + error_setg(errp, "unsupported GIC revision %d for in-kernel GIC", | ||
56 | + s->revision); | ||
24 | + } | 57 | + } |
25 | + | 58 | + |
26 | raw_write(env, ri, value); | 59 | if (s->security_extn) { |
27 | /* ??? Lots of these bits are not implemented. */ | 60 | error_setg(errp, "the in-kernel VGICv3 does not implement the " |
28 | /* This may enable/disable the MMU, so do a TLB flush. */ | 61 | "security extensions"); |
29 | -- | 62 | -- |
30 | 2.7.4 | 63 | 2.25.1 |
31 | |||
32 |
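From a board model's point of view, the only change needed to ask for a GICv4 rather than a GICv3 is the value of the 'revision' property. Below is a minimal sketch of that pattern, following what hw/arm/virt.c does in the following patches of this series; the surrounding includes, remaining properties and interrupt wiring are assumed.

```c
/*
 * Sketch: instantiate the TCG GICv3/GICv4 device the way virt.c does.
 * Error handling, the remaining properties (num-irq,
 * redist-region-count[...], ...) and MMIO/IRQ wiring are omitted.
 */
static DeviceState *create_tcg_gic(int revision, int num_cpus)
{
    DeviceState *gic = qdev_new(gicv3_class_name());

    qdev_prop_set_uint32(gic, "revision", revision);   /* 3 or, now, 4 */
    qdev_prop_set_uint32(gic, "num-cpu", num_cpus);
    sysbus_realize_and_unref(SYS_BUS_DEVICE(gic), &error_fatal);
    return gic;
}
```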
1 | Now that we enforce both: | 1 | Everywhere we need to check which GIC version we're using, we look at |
---|---|---|---|
2 | * pmsav7_dregion == 0 implies has_mpu == false | 2 | vms->gic_version and use the VIRT_GIC_VERSION_* enum values, except |
3 | * PMSA with has_mpu == false means SCTLR.M cannot be set | 3 | in create_gic(), which copies vms->gic_version into a local 'int' |
4 | we can remove a check on pmsav7_dregion from get_phys_addr_pmsav7(), | 4 | variable and makes direct comparisons against values 2 and 3. |
5 | because we can only reach this code path if the MPU is enabled | 5 | |
6 | (and so region_translation_disabled() returned false). | 6 | For consistency, change this function to check the GIC version |
7 | the same way we do elsewhere. This includes not implicitly relying | ||
8 | on the enumeration type values happening to match the integer | ||
9 | 'revision' values the GIC device object wants. | ||
7 | 10 | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 12 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
10 | Message-id: 1493122030-32191-8-git-send-email-peter.maydell@linaro.org | 13 | Message-id: 20220408141550.1271295-40-peter.maydell@linaro.org |
11 | --- | 14 | --- |
12 | target/arm/helper.c | 3 +-- | 15 | hw/arm/virt.c | 31 +++++++++++++++++++++++-------- |
13 | 1 file changed, 1 insertion(+), 2 deletions(-) | 16 | 1 file changed, 23 insertions(+), 8 deletions(-) |
14 | 17 | ||
15 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 18 | diff --git a/hw/arm/virt.c b/hw/arm/virt.c |
16 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/target/arm/helper.c | 20 | --- a/hw/arm/virt.c |
18 | +++ b/target/arm/helper.c | 21 | +++ b/hw/arm/virt.c |
19 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | 22 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) |
23 | /* We create a standalone GIC */ | ||
24 | SysBusDevice *gicbusdev; | ||
25 | const char *gictype; | ||
26 | - int type = vms->gic_version, i; | ||
27 | + int i; | ||
28 | unsigned int smp_cpus = ms->smp.cpus; | ||
29 | uint32_t nb_redist_regions = 0; | ||
30 | + int revision; | ||
31 | |||
32 | - gictype = (type == 3) ? gicv3_class_name() : gic_class_name(); | ||
33 | + if (vms->gic_version == VIRT_GIC_VERSION_2) { | ||
34 | + gictype = gic_class_name(); | ||
35 | + } else { | ||
36 | + gictype = gicv3_class_name(); | ||
37 | + } | ||
38 | |||
39 | + switch (vms->gic_version) { | ||
40 | + case VIRT_GIC_VERSION_2: | ||
41 | + revision = 2; | ||
42 | + break; | ||
43 | + case VIRT_GIC_VERSION_3: | ||
44 | + revision = 3; | ||
45 | + break; | ||
46 | + default: | ||
47 | + g_assert_not_reached(); | ||
48 | + } | ||
49 | vms->gic = qdev_new(gictype); | ||
50 | - qdev_prop_set_uint32(vms->gic, "revision", type); | ||
51 | + qdev_prop_set_uint32(vms->gic, "revision", revision); | ||
52 | qdev_prop_set_uint32(vms->gic, "num-cpu", smp_cpus); | ||
53 | /* Note that the num-irq property counts both internal and external | ||
54 | * interrupts; there are always 32 of the former (mandated by GIC spec). | ||
55 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
56 | qdev_prop_set_bit(vms->gic, "has-security-extensions", vms->secure); | ||
57 | } | ||
58 | |||
59 | - if (type == 3) { | ||
60 | + if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
61 | uint32_t redist0_capacity = | ||
62 | vms->memmap[VIRT_GIC_REDIST].size / GICV3_REDIST_SIZE; | ||
63 | uint32_t redist0_count = MIN(smp_cpus, redist0_capacity); | ||
64 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
65 | gicbusdev = SYS_BUS_DEVICE(vms->gic); | ||
66 | sysbus_realize_and_unref(gicbusdev, &error_fatal); | ||
67 | sysbus_mmio_map(gicbusdev, 0, vms->memmap[VIRT_GIC_DIST].base); | ||
68 | - if (type == 3) { | ||
69 | + if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
70 | sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_REDIST].base); | ||
71 | if (nb_redist_regions == 2) { | ||
72 | sysbus_mmio_map(gicbusdev, 2, | ||
73 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
74 | ppibase + timer_irq[irq])); | ||
20 | } | 75 | } |
21 | 76 | ||
22 | if (n == -1) { /* no hits */ | 77 | - if (type == 3) { |
23 | - if (cpu->pmsav7_dregion && | 78 | + if (vms->gic_version == VIRT_GIC_VERSION_3) { |
24 | - (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR))) { | 79 | qemu_irq irq = qdev_get_gpio_in(vms->gic, |
25 | + if (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR)) { | 80 | ppibase + ARCH_GIC_MAINT_IRQ); |
26 | /* background fault */ | 81 | qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt", |
27 | *fsr = 0; | 82 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) |
28 | return true; | 83 | |
84 | fdt_add_gic_node(vms); | ||
85 | |||
86 | - if (type == 3 && vms->its) { | ||
87 | + if (vms->gic_version == VIRT_GIC_VERSION_3 && vms->its) { | ||
88 | create_its(vms); | ||
89 | - } else if (type == 2) { | ||
90 | + } else if (vms->gic_version == VIRT_GIC_VERSION_2) { | ||
91 | create_v2m(vms); | ||
92 | } | ||
93 | } | ||
29 | -- | 94 | -- |
30 | 2.7.4 | 95 | 2.25.1 |
31 | |||
32 | diff view generated by jsdifflib |
New patch | |||
---|---|---|---|
1 | In several places in virt.c we calculate the number of redistributors that | ||
2 | fit in a region of our memory map, which is the size of the region | ||
3 | divided by the size of a single redistributor frame. For GICv4, the | ||
4 | redistributor frame is a different size from that for GICv3. Abstract | ||
5 | out the calculation of redistributor region capacity so that we have | ||
6 | one place we need to change to handle GICv4 rather than several. | ||
1 | 7 | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
10 | Message-id: 20220408141550.1271295-41-peter.maydell@linaro.org | ||
11 | --- | ||
12 | include/hw/arm/virt.h | 9 +++++++-- | ||
13 | hw/arm/virt.c | 11 ++++------- | ||
14 | 2 files changed, 11 insertions(+), 9 deletions(-) | ||
15 | |||
16 | diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h | ||
17 | index XXXXXXX..XXXXXXX 100644 | ||
18 | --- a/include/hw/arm/virt.h | ||
19 | +++ b/include/hw/arm/virt.h | ||
20 | @@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_TYPE(VirtMachineState, VirtMachineClass, VIRT_MACHINE) | ||
21 | void virt_acpi_setup(VirtMachineState *vms); | ||
22 | bool virt_is_acpi_enabled(VirtMachineState *vms); | ||
23 | |||
24 | +/* Return number of redistributors that fit in the specified region */ | ||
25 | +static uint32_t virt_redist_capacity(VirtMachineState *vms, int region) | ||
26 | +{ | ||
27 | + return vms->memmap[region].size / GICV3_REDIST_SIZE; | ||
28 | +} | ||
29 | + | ||
30 | /* Return the number of used redistributor regions */ | ||
31 | static inline int virt_gicv3_redist_region_count(VirtMachineState *vms) | ||
32 | { | ||
33 | - uint32_t redist0_capacity = | ||
34 | - vms->memmap[VIRT_GIC_REDIST].size / GICV3_REDIST_SIZE; | ||
35 | + uint32_t redist0_capacity = virt_redist_capacity(vms, VIRT_GIC_REDIST); | ||
36 | |||
37 | assert(vms->gic_version == VIRT_GIC_VERSION_3); | ||
38 | |||
39 | diff --git a/hw/arm/virt.c b/hw/arm/virt.c | ||
40 | index XXXXXXX..XXXXXXX 100644 | ||
41 | --- a/hw/arm/virt.c | ||
42 | +++ b/hw/arm/virt.c | ||
43 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
44 | } | ||
45 | |||
46 | if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
47 | - uint32_t redist0_capacity = | ||
48 | - vms->memmap[VIRT_GIC_REDIST].size / GICV3_REDIST_SIZE; | ||
49 | + uint32_t redist0_capacity = virt_redist_capacity(vms, VIRT_GIC_REDIST); | ||
50 | uint32_t redist0_count = MIN(smp_cpus, redist0_capacity); | ||
51 | |||
52 | nb_redist_regions = virt_gicv3_redist_region_count(vms); | ||
53 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
54 | |||
55 | if (nb_redist_regions == 2) { | ||
56 | uint32_t redist1_capacity = | ||
57 | - vms->memmap[VIRT_HIGH_GIC_REDIST2].size / GICV3_REDIST_SIZE; | ||
58 | + virt_redist_capacity(vms, VIRT_HIGH_GIC_REDIST2); | ||
59 | |||
60 | qdev_prop_set_uint32(vms->gic, "redist-region-count[1]", | ||
61 | MIN(smp_cpus - redist0_count, redist1_capacity)); | ||
62 | @@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine) | ||
63 | * many redistributors we can fit into the memory map. | ||
64 | */ | ||
65 | if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
66 | - virt_max_cpus = | ||
67 | - vms->memmap[VIRT_GIC_REDIST].size / GICV3_REDIST_SIZE; | ||
68 | - virt_max_cpus += | ||
69 | - vms->memmap[VIRT_HIGH_GIC_REDIST2].size / GICV3_REDIST_SIZE; | ||
70 | + virt_max_cpus = virt_redist_capacity(vms, VIRT_GIC_REDIST) + | ||
71 | + virt_redist_capacity(vms, VIRT_HIGH_GIC_REDIST2); | ||
72 | } else { | ||
73 | virt_max_cpus = GIC_NCPU; | ||
74 | } | ||
75 | -- | ||
76 | 2.25.1
1 | From: Andrew Jones <drjones@redhat.com> | 1 | Add support for the TCG GICv4 to the virt board. For the board, |
---|---|---|---|
2 | the GICv4 is very similar to the GICv3, with the only difference | ||
3 | being the size of the redistributor frame. The changes here are thus: | ||
4 | * calculating virt_redist_capacity correctly for GICv4 | ||
5 | * changing various places which were "if GICv3" to be "if not GICv2" | ||
6 | * the commandline option handling | ||
2 | 7 | ||
3 | This is based on patch Shannon Zhao originally posted. | 8 | Note that using GICv4 reduces the maximum possible number of CPUs on |
9 | the virt board from 512 to 317, because we can now only fit half as | ||
10 | many redistributors into the redistributor regions we have defined. | ||
4 | 11 | ||
5 | Cc: Shannon Zhao <zhaoshenglong@huawei.com> | ||
6 | Signed-off-by: Andrew Jones <drjones@redhat.com> | ||
7 | Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org> | ||
8 | Message-id: 20170529173751.3443-3-drjones@redhat.com | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 12 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
13 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
14 | Message-id: 20220408141550.1271295-42-peter.maydell@linaro.org | ||
10 | --- | 15 | --- |
11 | hw/arm/virt.c | 21 +++++++++++++++++++++ | 16 | docs/system/arm/virt.rst | 5 ++- |
12 | 1 file changed, 21 insertions(+) | 17 | include/hw/arm/virt.h | 12 +++++-- |
18 | hw/arm/virt.c | 70 ++++++++++++++++++++++++++++++---------- | ||
19 | 3 files changed, 67 insertions(+), 20 deletions(-) | ||
13 | 20 | ||
21 | diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst | ||
22 | index XXXXXXX..XXXXXXX 100644 | ||
23 | --- a/docs/system/arm/virt.rst | ||
24 | +++ b/docs/system/arm/virt.rst | ||
25 | @@ -XXX,XX +XXX,XX @@ gic-version | ||
26 | GICv2. Note that this limits the number of CPUs to 8. | ||
27 | ``3`` | ||
28 | GICv3. This allows up to 512 CPUs. | ||
29 | + ``4`` | ||
30 | + GICv4. Requires ``virtualization`` to be ``on``; allows up to 317 CPUs. | ||
31 | ``host`` | ||
32 | Use the same GIC version the host provides, when using KVM | ||
33 | ``max`` | ||
34 | Use the best GIC version possible (same as host when using KVM; | ||
35 | - currently same as ``3``` for TCG, but this may change in future) | ||
36 | + with TCG this is currently ``3`` if ``virtualization`` is ``off`` and | ||
37 | + ``4`` if ``virtualization`` is ``on``, but this may change in future) | ||
38 | |||
39 | its | ||
40 | Set ``on``/``off`` to enable/disable ITS instantiation. The default is ``on`` | ||
41 | diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h | ||
42 | index XXXXXXX..XXXXXXX 100644 | ||
43 | --- a/include/hw/arm/virt.h | ||
44 | +++ b/include/hw/arm/virt.h | ||
45 | @@ -XXX,XX +XXX,XX @@ typedef enum VirtGICType { | ||
46 | VIRT_GIC_VERSION_HOST, | ||
47 | VIRT_GIC_VERSION_2, | ||
48 | VIRT_GIC_VERSION_3, | ||
49 | + VIRT_GIC_VERSION_4, | ||
50 | VIRT_GIC_VERSION_NOSEL, | ||
51 | } VirtGICType; | ||
52 | |||
53 | @@ -XXX,XX +XXX,XX @@ bool virt_is_acpi_enabled(VirtMachineState *vms); | ||
54 | /* Return number of redistributors that fit in the specified region */ | ||
55 | static uint32_t virt_redist_capacity(VirtMachineState *vms, int region) | ||
56 | { | ||
57 | - return vms->memmap[region].size / GICV3_REDIST_SIZE; | ||
58 | + uint32_t redist_size; | ||
59 | + | ||
60 | + if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
61 | + redist_size = GICV3_REDIST_SIZE; | ||
62 | + } else { | ||
63 | + redist_size = GICV4_REDIST_SIZE; | ||
64 | + } | ||
65 | + return vms->memmap[region].size / redist_size; | ||
66 | } | ||
67 | |||
68 | /* Return the number of used redistributor regions */ | ||
69 | @@ -XXX,XX +XXX,XX @@ static inline int virt_gicv3_redist_region_count(VirtMachineState *vms) | ||
70 | { | ||
71 | uint32_t redist0_capacity = virt_redist_capacity(vms, VIRT_GIC_REDIST); | ||
72 | |||
73 | - assert(vms->gic_version == VIRT_GIC_VERSION_3); | ||
74 | + assert(vms->gic_version != VIRT_GIC_VERSION_2); | ||
75 | |||
76 | return (MACHINE(vms)->smp.cpus > redist0_capacity && | ||
77 | vms->highmem_redists) ? 2 : 1; | ||
14 | diff --git a/hw/arm/virt.c b/hw/arm/virt.c | 78 | diff --git a/hw/arm/virt.c b/hw/arm/virt.c |
15 | index XXXXXXX..XXXXXXX 100644 | 79 | index XXXXXXX..XXXXXXX 100644 |
16 | --- a/hw/arm/virt.c | 80 | --- a/hw/arm/virt.c |
17 | +++ b/hw/arm/virt.c | 81 | +++ b/hw/arm/virt.c |
18 | @@ -XXX,XX +XXX,XX @@ static void create_fdt(VirtMachineState *vms) | 82 | @@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms) |
19 | "clk24mhz"); | 83 | qemu_fdt_setprop_cell(ms->fdt, nodename, "#address-cells", 0x2); |
20 | qemu_fdt_setprop_cell(fdt, "/apb-pclk", "phandle", vms->clock_phandle); | 84 | qemu_fdt_setprop_cell(ms->fdt, nodename, "#size-cells", 0x2); |
21 | 85 | qemu_fdt_setprop(ms->fdt, nodename, "ranges", NULL, 0); | |
22 | + if (have_numa_distance) { | 86 | - if (vms->gic_version == VIRT_GIC_VERSION_3) { |
23 | + int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t); | 87 | + if (vms->gic_version != VIRT_GIC_VERSION_2) { |
24 | + uint32_t *matrix = g_malloc0(size); | 88 | int nb_redist_regions = virt_gicv3_redist_region_count(vms); |
25 | + int idx, i, j; | 89 | |
26 | + | 90 | qemu_fdt_setprop_string(ms->fdt, nodename, "compatible", |
27 | + for (i = 0; i < nb_numa_nodes; i++) { | 91 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) |
28 | + for (j = 0; j < nb_numa_nodes; j++) { | 92 | case VIRT_GIC_VERSION_3: |
29 | + idx = (i * nb_numa_nodes + j) * 3; | 93 | revision = 3; |
30 | + matrix[idx + 0] = cpu_to_be32(i); | 94 | break; |
31 | + matrix[idx + 1] = cpu_to_be32(j); | 95 | + case VIRT_GIC_VERSION_4: |
32 | + matrix[idx + 2] = cpu_to_be32(numa_info[i].distance[j]); | 96 | + revision = 4; |
97 | + break; | ||
98 | default: | ||
99 | g_assert_not_reached(); | ||
100 | } | ||
101 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
102 | qdev_prop_set_bit(vms->gic, "has-security-extensions", vms->secure); | ||
103 | } | ||
104 | |||
105 | - if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
106 | + if (vms->gic_version != VIRT_GIC_VERSION_2) { | ||
107 | uint32_t redist0_capacity = virt_redist_capacity(vms, VIRT_GIC_REDIST); | ||
108 | uint32_t redist0_count = MIN(smp_cpus, redist0_capacity); | ||
109 | |||
110 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
111 | gicbusdev = SYS_BUS_DEVICE(vms->gic); | ||
112 | sysbus_realize_and_unref(gicbusdev, &error_fatal); | ||
113 | sysbus_mmio_map(gicbusdev, 0, vms->memmap[VIRT_GIC_DIST].base); | ||
114 | - if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
115 | + if (vms->gic_version != VIRT_GIC_VERSION_2) { | ||
116 | sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_REDIST].base); | ||
117 | if (nb_redist_regions == 2) { | ||
118 | sysbus_mmio_map(gicbusdev, 2, | ||
119 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
120 | ppibase + timer_irq[irq])); | ||
121 | } | ||
122 | |||
123 | - if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
124 | + if (vms->gic_version != VIRT_GIC_VERSION_2) { | ||
125 | qemu_irq irq = qdev_get_gpio_in(vms->gic, | ||
126 | ppibase + ARCH_GIC_MAINT_IRQ); | ||
127 | qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt", | ||
128 | @@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem) | ||
129 | |||
130 | fdt_add_gic_node(vms); | ||
131 | |||
132 | - if (vms->gic_version == VIRT_GIC_VERSION_3 && vms->its) { | ||
133 | + if (vms->gic_version != VIRT_GIC_VERSION_2 && vms->its) { | ||
134 | create_its(vms); | ||
135 | } else if (vms->gic_version == VIRT_GIC_VERSION_2) { | ||
136 | create_v2m(vms); | ||
137 | @@ -XXX,XX +XXX,XX @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx) | ||
138 | * purposes are to make TCG consistent (with 64-bit KVM hosts) | ||
139 | * and to improve SGI efficiency. | ||
140 | */ | ||
141 | - if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
142 | - clustersz = GICV3_TARGETLIST_BITS; | ||
143 | - } else { | ||
144 | + if (vms->gic_version == VIRT_GIC_VERSION_2) { | ||
145 | clustersz = GIC_TARGETLIST_BITS; | ||
146 | + } else { | ||
147 | + clustersz = GICV3_TARGETLIST_BITS; | ||
148 | } | ||
149 | } | ||
150 | return arm_cpu_mp_affinity(idx, clustersz); | ||
151 | @@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms) | ||
152 | error_report( | ||
153 | "gic-version=3 is not supported with kernel-irqchip=off"); | ||
154 | exit(1); | ||
155 | + case VIRT_GIC_VERSION_4: | ||
156 | + error_report( | ||
157 | + "gic-version=4 is not supported with kernel-irqchip=off"); | ||
158 | + exit(1); | ||
159 | } | ||
160 | } | ||
161 | |||
162 | @@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms) | ||
163 | case VIRT_GIC_VERSION_2: | ||
164 | case VIRT_GIC_VERSION_3: | ||
165 | break; | ||
166 | + case VIRT_GIC_VERSION_4: | ||
167 | + error_report("gic-version=4 is not supported with KVM"); | ||
168 | + exit(1); | ||
169 | } | ||
170 | |||
171 | /* Check chosen version is effectively supported by the host */ | ||
172 | @@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms) | ||
173 | case VIRT_GIC_VERSION_MAX: | ||
174 | if (module_object_class_by_name("arm-gicv3")) { | ||
175 | /* CONFIG_ARM_GICV3_TCG was set */ | ||
176 | - vms->gic_version = VIRT_GIC_VERSION_3; | ||
177 | + if (vms->virt) { | ||
178 | + /* GICv4 only makes sense if CPU has EL2 */ | ||
179 | + vms->gic_version = VIRT_GIC_VERSION_4; | ||
180 | + } else { | ||
181 | + vms->gic_version = VIRT_GIC_VERSION_3; | ||
33 | + } | 182 | + } |
183 | } else { | ||
184 | vms->gic_version = VIRT_GIC_VERSION_2; | ||
185 | } | ||
186 | @@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms) | ||
187 | case VIRT_GIC_VERSION_HOST: | ||
188 | error_report("gic-version=host requires KVM"); | ||
189 | exit(1); | ||
190 | + case VIRT_GIC_VERSION_4: | ||
191 | + if (!vms->virt) { | ||
192 | + error_report("gic-version=4 requires virtualization enabled"); | ||
193 | + exit(1); | ||
34 | + } | 194 | + } |
35 | + | 195 | + break; |
36 | + qemu_fdt_add_subnode(fdt, "/distance-map"); | 196 | case VIRT_GIC_VERSION_2: |
37 | + qemu_fdt_setprop_string(fdt, "/distance-map", "compatible", | 197 | case VIRT_GIC_VERSION_3: |
38 | + "numa-distance-map-v1"); | 198 | break; |
39 | + qemu_fdt_setprop(fdt, "/distance-map", "distance-matrix", | 199 | @@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine) |
40 | + matrix, size); | 200 | vms->psci_conduit = QEMU_PSCI_CONDUIT_HVC; |
41 | + g_free(matrix); | 201 | } |
202 | |||
203 | - /* The maximum number of CPUs depends on the GIC version, or on how | ||
204 | - * many redistributors we can fit into the memory map. | ||
205 | + /* | ||
206 | + * The maximum number of CPUs depends on the GIC version, or on how | ||
207 | + * many redistributors we can fit into the memory map (which in turn | ||
208 | + * depends on whether this is a GICv3 or v4). | ||
209 | */ | ||
210 | - if (vms->gic_version == VIRT_GIC_VERSION_3) { | ||
211 | + if (vms->gic_version == VIRT_GIC_VERSION_2) { | ||
212 | + virt_max_cpus = GIC_NCPU; | ||
213 | + } else { | ||
214 | virt_max_cpus = virt_redist_capacity(vms, VIRT_GIC_REDIST) + | ||
215 | virt_redist_capacity(vms, VIRT_HIGH_GIC_REDIST2); | ||
216 | - } else { | ||
217 | - virt_max_cpus = GIC_NCPU; | ||
218 | } | ||
219 | |||
220 | if (max_cpus > virt_max_cpus) { | ||
221 | @@ -XXX,XX +XXX,XX @@ static void virt_set_mte(Object *obj, bool value, Error **errp) | ||
222 | static char *virt_get_gic_version(Object *obj, Error **errp) | ||
223 | { | ||
224 | VirtMachineState *vms = VIRT_MACHINE(obj); | ||
225 | - const char *val = vms->gic_version == VIRT_GIC_VERSION_3 ? "3" : "2"; | ||
226 | + const char *val; | ||
227 | |||
228 | + switch (vms->gic_version) { | ||
229 | + case VIRT_GIC_VERSION_4: | ||
230 | + val = "4"; | ||
231 | + break; | ||
232 | + case VIRT_GIC_VERSION_3: | ||
233 | + val = "3"; | ||
234 | + break; | ||
235 | + default: | ||
236 | + val = "2"; | ||
237 | + break; | ||
42 | + } | 238 | + } |
239 | return g_strdup(val); | ||
43 | } | 240 | } |
44 | 241 | ||
45 | static void fdt_add_psci_node(const VirtMachineState *vms) | 242 | @@ -XXX,XX +XXX,XX @@ static void virt_set_gic_version(Object *obj, const char *value, Error **errp) |
243 | { | ||
244 | VirtMachineState *vms = VIRT_MACHINE(obj); | ||
245 | |||
246 | - if (!strcmp(value, "3")) { | ||
247 | + if (!strcmp(value, "4")) { | ||
248 | + vms->gic_version = VIRT_GIC_VERSION_4; | ||
249 | + } else if (!strcmp(value, "3")) { | ||
250 | vms->gic_version = VIRT_GIC_VERSION_3; | ||
251 | } else if (!strcmp(value, "2")) { | ||
252 | vms->gic_version = VIRT_GIC_VERSION_2; | ||
253 | @@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data) | ||
254 | virt_set_gic_version); | ||
255 | object_class_property_set_description(oc, "gic-version", | ||
256 | "Set GIC version. " | ||
257 | - "Valid values are 2, 3, host and max"); | ||
258 | + "Valid values are 2, 3, 4, host and max"); | ||
259 | |||
260 | object_class_property_add_str(oc, "iommu", virt_get_iommu, virt_set_iommu); | ||
261 | object_class_property_set_description(oc, "iommu", | ||
46 | -- | 262 | -- |
47 | 2.7.4 | 263 | 2.25.1 |
48 | |||
49 |
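To make the "512 to 317" figure above concrete, here is a standalone check of the capacity arithmetic. The region and per-CPU frame sizes are assumptions taken from the virt memory map and the GIC headers (0xF60000 and 64 MiB for the two redistributor regions, 0x20000 vs 0x40000 for a GICv3 vs GICv4 redistributor frame); they are not part of this diff.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Assumed values (see lead-in); not quoted from this patch. */
    const uint64_t redist0  = 0xF60000;   /* VIRT_GIC_REDIST region size */
    const uint64_t redist2  = 0x4000000;  /* VIRT_HIGH_GIC_REDIST2 (64 MiB) */
    const uint64_t v3_frame = 0x20000;    /* GICv3: 2 x 64KB frames per CPU */
    const uint64_t v4_frame = 0x40000;    /* GICv4: 4 x 64KB frames per CPU */

    /* GICv3: 123 + 512 = 635 (the 512 limit for GICv3 is imposed
     * elsewhere on the board); GICv4: 61 + 256 = 317, the new limit. */
    printf("GICv3 capacity: %" PRIu64 "\n",
           redist0 / v3_frame + redist2 / v3_frame);
    printf("GICv4 capacity: %" PRIu64 "\n",
           redist0 / v4_frame + redist2 / v4_frame);
    return 0;
}
```

With this in place, a GICv4 guest is selected with e.g. -machine virt,virtualization=on,gic-version=4 (virtualization must be enabled, as the docs change above notes), or left to gic-version=max, which now picks GICv4 when EL2 is enabled.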
1 | From: Andrew Jones <drjones@redhat.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Cc: Shannon Zhao <zhaoshenglong@huawei.com> | 3 | Update isar fields per ARM DDI0487 H.a. |
4 | Signed-off-by: Andrew Jones <drjones@redhat.com> | 4 | |
5 | Reviewed-by: Igor Mammedov <imammedo@redhat.com> | 5 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
6 | Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org> | 6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Message-id: 20170529173751.3443-2-drjones@redhat.com | 7 | Reviewed-by: Alex Bennée <alex.bennee@linaro.org> |
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | --- | 9 | --- |
10 | hw/arm/virt-acpi-build.c | 4 ++++ | 10 | target/arm/cpu.h | 24 ++++++++++++++++++++++++ |
11 | 1 file changed, 4 insertions(+) | 11 | 1 file changed, 24 insertions(+) |
12 | 12 | ||
13 | diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c | 13 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h |
14 | index XXXXXXX..XXXXXXX 100644 | 14 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/hw/arm/virt-acpi-build.c | 15 | --- a/target/arm/cpu.h |
16 | +++ b/hw/arm/virt-acpi-build.c | 16 | +++ b/target/arm/cpu.h |
17 | @@ -XXX,XX +XXX,XX @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables) | 17 | @@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, CCIDX, 24, 4) |
18 | if (nb_numa_nodes > 0) { | 18 | FIELD(ID_MMFR4, EVT, 28, 4) |
19 | acpi_add_table(table_offsets, tables_blob); | 19 | |
20 | build_srat(tables_blob, tables->linker, vms); | 20 | FIELD(ID_MMFR5, ETS, 0, 4) |
21 | + if (have_numa_distance) { | 21 | +FIELD(ID_MMFR5, NTLBPA, 4, 4) |
22 | + acpi_add_table(table_offsets, tables_blob); | 22 | |
23 | + build_slit(tables_blob, tables->linker); | 23 | FIELD(ID_PFR0, STATE0, 0, 4) |
24 | + } | 24 | FIELD(ID_PFR0, STATE1, 4, 4) |
25 | } | 25 | @@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, SPECRES, 40, 4) |
26 | 26 | FIELD(ID_AA64ISAR1, BF16, 44, 4) | |
27 | if (its_class_name() && !vmc->no_its) { | 27 | FIELD(ID_AA64ISAR1, DGH, 48, 4) |
28 | FIELD(ID_AA64ISAR1, I8MM, 52, 4) | ||
29 | +FIELD(ID_AA64ISAR1, XS, 56, 4) | ||
30 | +FIELD(ID_AA64ISAR1, LS64, 60, 4) | ||
31 | + | ||
32 | +FIELD(ID_AA64ISAR2, WFXT, 0, 4) | ||
33 | +FIELD(ID_AA64ISAR2, RPRES, 4, 4) | ||
34 | +FIELD(ID_AA64ISAR2, GPA3, 8, 4) | ||
35 | +FIELD(ID_AA64ISAR2, APA3, 12, 4) | ||
36 | +FIELD(ID_AA64ISAR2, MOPS, 16, 4) | ||
37 | +FIELD(ID_AA64ISAR2, BC, 20, 4) | ||
38 | +FIELD(ID_AA64ISAR2, PAC_FRAC, 24, 4) | ||
39 | |||
40 | FIELD(ID_AA64PFR0, EL0, 0, 4) | ||
41 | FIELD(ID_AA64PFR0, EL1, 4, 4) | ||
42 | @@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64PFR1, SSBS, 4, 4) | ||
43 | FIELD(ID_AA64PFR1, MTE, 8, 4) | ||
44 | FIELD(ID_AA64PFR1, RAS_FRAC, 12, 4) | ||
45 | FIELD(ID_AA64PFR1, MPAM_FRAC, 16, 4) | ||
46 | +FIELD(ID_AA64PFR1, SME, 24, 4) | ||
47 | +FIELD(ID_AA64PFR1, RNDR_TRAP, 28, 4) | ||
48 | +FIELD(ID_AA64PFR1, CSV2_FRAC, 32, 4) | ||
49 | +FIELD(ID_AA64PFR1, NMI, 36, 4) | ||
50 | |||
51 | FIELD(ID_AA64MMFR0, PARANGE, 0, 4) | ||
52 | FIELD(ID_AA64MMFR0, ASIDBITS, 4, 4) | ||
53 | @@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64MMFR1, SPECSEI, 24, 4) | ||
54 | FIELD(ID_AA64MMFR1, XNX, 28, 4) | ||
55 | FIELD(ID_AA64MMFR1, TWED, 32, 4) | ||
56 | FIELD(ID_AA64MMFR1, ETS, 36, 4) | ||
57 | +FIELD(ID_AA64MMFR1, HCX, 40, 4) | ||
58 | +FIELD(ID_AA64MMFR1, AFP, 44, 4) | ||
59 | +FIELD(ID_AA64MMFR1, NTLBPA, 48, 4) | ||
60 | +FIELD(ID_AA64MMFR1, TIDCP1, 52, 4) | ||
61 | +FIELD(ID_AA64MMFR1, CMOW, 56, 4) | ||
62 | |||
63 | FIELD(ID_AA64MMFR2, CNP, 0, 4) | ||
64 | FIELD(ID_AA64MMFR2, UAO, 4, 4) | ||
65 | @@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64DFR0, CTX_CMPS, 28, 4) | ||
66 | FIELD(ID_AA64DFR0, PMSVER, 32, 4) | ||
67 | FIELD(ID_AA64DFR0, DOUBLELOCK, 36, 4) | ||
68 | FIELD(ID_AA64DFR0, TRACEFILT, 40, 4) | ||
69 | +FIELD(ID_AA64DFR0, TRACEBUFFER, 44, 4) | ||
70 | FIELD(ID_AA64DFR0, MTPMU, 48, 4) | ||
71 | +FIELD(ID_AA64DFR0, BRBE, 52, 4) | ||
72 | +FIELD(ID_AA64DFR0, HPMN0, 60, 4) | ||
73 | |||
74 | FIELD(ID_AA64ZFR0, SVEVER, 0, 4) | ||
75 | FIELD(ID_AA64ZFR0, AES, 4, 4) | ||
76 | @@ -XXX,XX +XXX,XX @@ FIELD(ID_DFR0, PERFMON, 24, 4) | ||
77 | FIELD(ID_DFR0, TRACEFILT, 28, 4) | ||
78 | |||
79 | FIELD(ID_DFR1, MTPMU, 0, 4) | ||
80 | +FIELD(ID_DFR1, HPMN0, 4, 4) | ||
81 | |||
82 | FIELD(DBGDIDR, SE_IMP, 12, 1) | ||
83 | FIELD(DBGDIDR, NSUHD_IMP, 14, 1) | ||
28 | -- | 84 | -- |
29 | 2.7.4 | 85 | 2.25.1 |
30 | 86 | ||
31 | 87 |
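These FIELD() definitions only become useful once a feature-test helper reads them; the sketch below shows the usual style of such a helper in target/arm/cpu.h, using the ID_AA64MMFR1.HCX field added above. The specific helper name and the "nonzero means present" test are illustrative assumptions rather than a quote of existing code.

```c
/*
 * Sketch: a feature-test predicate in the style used throughout
 * target/arm/cpu.h, consuming the ID_AA64MMFR1.HCX field added above.
 * Name and threshold are illustrative assumptions.
 */
static inline bool isar_feature_aa64_hcx(const ARMISARegisters *id)
{
    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HCX) != 0;
}
```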
1 | Implement HFNMIENA support for the M profile MPU. This bit controls | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | whether the MPU is treated as enabled when executing at execution | ||
3 | priorities of less than zero (in NMI, HardFault or with the FAULTMASK | ||
4 | bit set). | ||
5 | 2 | ||
6 | Doing this requires us to use a different MMU index for "running | 3 | Update SCR_EL3 fields per ARM DDI0487 H.a. |
7 | at execution priority < 0", because we will have different | ||
8 | access permissions for that case versus the normal case. | ||
9 | 4 | ||
5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
6 | Reviewed-by: Alex Bennée <alex.bennee@linaro.org> | ||
10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 7 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
11 | Message-id: 1493122030-32191-14-git-send-email-peter.maydell@linaro.org | ||
12 | --- | 8 | --- |
13 | target/arm/cpu.h | 24 +++++++++++++++++++++++- | 9 | target/arm/cpu.h | 12 ++++++++++++ |
14 | target/arm/helper.c | 18 +++++++++++++++++- | 10 | 1 file changed, 12 insertions(+) |
15 | target/arm/translate.c | 1 + | ||
16 | 3 files changed, 41 insertions(+), 2 deletions(-) | ||
17 | 11 | ||
18 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | 12 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h |
19 | index XXXXXXX..XXXXXXX 100644 | 13 | index XXXXXXX..XXXXXXX 100644 |
20 | --- a/target/arm/cpu.h | 14 | --- a/target/arm/cpu.h |
21 | +++ b/target/arm/cpu.h | 15 | +++ b/target/arm/cpu.h |
22 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx, | 16 | @@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask) |
23 | * for the accesses done as part of a stage 1 page table walk, rather than | 17 | #define SCR_FIEN (1U << 21) |
24 | * having to walk the stage 2 page table over and over.) | 18 | #define SCR_ENSCXT (1U << 25) |
25 | * | 19 | #define SCR_ATA (1U << 26) |
26 | + * R profile CPUs have an MPU, but can use the same set of MMU indexes | 20 | +#define SCR_FGTEN (1U << 27) |
27 | + * as A profile. They only need to distinguish NS EL0 and NS EL1 (and | 21 | +#define SCR_ECVEN (1U << 28) |
28 | + * NS EL2 if we ever model a Cortex-R52). | 22 | +#define SCR_TWEDEN (1U << 29) |
29 | + * | 23 | +#define SCR_TWEDEL MAKE_64BIT_MASK(30, 4) |
30 | + * M profile CPUs are rather different as they do not have a true MMU. | 24 | +#define SCR_TME (1ULL << 34) |
31 | + * They have the following different MMU indexes: | 25 | +#define SCR_AMVOFFEN (1ULL << 35) |
32 | + * User | 26 | +#define SCR_ENAS0 (1ULL << 36) |
33 | + * Privileged | 27 | +#define SCR_ADEN (1ULL << 37) |
34 | + * Execution priority negative (this is like privileged, but the | 28 | +#define SCR_HXEN (1ULL << 38) |
35 | + * MPU HFNMIENA bit means that it may have different access permission | 29 | +#define SCR_TRNDR (1ULL << 40) |
36 | + * check results to normal privileged code, so can't share a TLB). | 30 | +#define SCR_ENTP2 (1ULL << 41) |
37 | + * | 31 | +#define SCR_GPF (1ULL << 48) |
38 | * The ARMMMUIdx and the mmu index value used by the core QEMU TLB code | 32 | |
39 | * are not quite the same -- different CPU types (most notably M profile | 33 | #define HSTR_TTEE (1 << 16) |
40 | * vs A/R profile) would like to use MMU indexes with different semantics, | 34 | #define HSTR_TJDBX (1 << 17) |
41 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx { | ||
42 | ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A, | ||
43 | ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M, | ||
44 | ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M, | ||
45 | + ARMMMUIdx_MNegPri = 2 | ARM_MMU_IDX_M, | ||
46 | /* Indexes below here don't have TLBs and are used only for AT system | ||
47 | * instructions or for the first stage of an S12 page table walk. | ||
48 | */ | ||
49 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit { | ||
50 | ARMMMUIdxBit_S2NS = 1 << 6, | ||
51 | ARMMMUIdxBit_MUser = 1 << 0, | ||
52 | ARMMMUIdxBit_MPriv = 1 << 1, | ||
53 | + ARMMMUIdxBit_MNegPri = 1 << 2, | ||
54 | } ARMMMUIdxBit; | ||
55 | |||
56 | #define MMU_USER_IDX 0 | ||
57 | @@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) | ||
58 | case ARM_MMU_IDX_A: | ||
59 | return mmu_idx & 3; | ||
60 | case ARM_MMU_IDX_M: | ||
61 | - return mmu_idx & 1; | ||
62 | + return mmu_idx == ARMMMUIdx_MUser ? 0 : 1; | ||
63 | default: | ||
64 | g_assert_not_reached(); | ||
65 | } | ||
66 | @@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch) | ||
67 | if (arm_feature(env, ARM_FEATURE_M)) { | ||
68 | ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv; | ||
69 | |||
70 | + /* Execution priority is negative if FAULTMASK is set or | ||
71 | + * we're in a HardFault or NMI handler. | ||
72 | + */ | ||
73 | + if ((env->v7m.exception > 0 && env->v7m.exception <= 3) | ||
74 | + || env->daif & PSTATE_F) { | ||
75 | + return arm_to_core_mmu_idx(ARMMMUIdx_MNegPri); | ||
76 | + } | ||
77 | + | ||
78 | return arm_to_core_mmu_idx(mmu_idx); | ||
79 | } | ||
80 | |||
81 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
82 | index XXXXXXX..XXXXXXX 100644 | ||
83 | --- a/target/arm/helper.c | ||
84 | +++ b/target/arm/helper.c | ||
85 | @@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
86 | case ARMMMUIdx_S1NSE0: | ||
87 | case ARMMMUIdx_S1NSE1: | ||
88 | case ARMMMUIdx_MPriv: | ||
89 | + case ARMMMUIdx_MNegPri: | ||
90 | case ARMMMUIdx_MUser: | ||
91 | return 1; | ||
92 | default: | ||
93 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
94 | case ARMMMUIdx_S1E2: | ||
95 | case ARMMMUIdx_S2NS: | ||
96 | case ARMMMUIdx_MPriv: | ||
97 | + case ARMMMUIdx_MNegPri: | ||
98 | case ARMMMUIdx_MUser: | ||
99 | return false; | ||
100 | case ARMMMUIdx_S1E3: | ||
101 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env, | ||
102 | ARMMMUIdx mmu_idx) | ||
103 | { | ||
104 | if (arm_feature(env, ARM_FEATURE_M)) { | ||
105 | - return !(env->v7m.mpu_ctrl & R_V7M_MPU_CTRL_ENABLE_MASK); | ||
106 | + switch (env->v7m.mpu_ctrl & | ||
107 | + (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) { | ||
108 | + case R_V7M_MPU_CTRL_ENABLE_MASK: | ||
109 | + /* Enabled, but not for HardFault and NMI */ | ||
110 | + return mmu_idx == ARMMMUIdx_MNegPri; | ||
111 | + case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK: | ||
112 | + /* Enabled for all cases */ | ||
113 | + return false; | ||
114 | + case 0: | ||
115 | + default: | ||
116 | + /* HFNMIENA set and ENABLE clear is UNPREDICTABLE, but | ||
117 | + * we warned about that in armv7m_nvic.c when the guest set it. | ||
118 | + */ | ||
119 | + return true; | ||
120 | + } | ||
121 | } | ||
122 | |||
123 | if (mmu_idx == ARMMMUIdx_S2NS) { | ||
124 | diff --git a/target/arm/translate.c b/target/arm/translate.c | ||
125 | index XXXXXXX..XXXXXXX 100644 | ||
126 | --- a/target/arm/translate.c | ||
127 | +++ b/target/arm/translate.c | ||
128 | @@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s) | ||
129 | return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0); | ||
130 | case ARMMMUIdx_MUser: | ||
131 | case ARMMMUIdx_MPriv: | ||
132 | + case ARMMMUIdx_MNegPri: | ||
133 | return arm_to_core_mmu_idx(ARMMMUIdx_MUser); | ||
134 | case ARMMMUIdx_S2NS: | ||
135 | default: | ||
136 | -- | 35 | -- |
137 | 2.7.4 | 36 | 2.25.1 |
138 | 37 | ||
139 | 38 |
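Defining the SCR_EL3 bits does not by itself make them guest-writable: scr_write() only lets a bit through when the matching CPU feature is present. A sketch of that gating pattern follows; the feature predicate named here is an illustrative assumption, not something added by this patch.

```c
/*
 * Sketch of the valid_mask gating in scr_write(); the feature check
 * shown is an illustrative assumption.
 */
static uint64_t scr_valid_mask_example(ARMCPU *cpu, uint64_t valid_mask)
{
    if (cpu_isar_feature(aa64_hcx, cpu)) {
        valid_mask |= SCR_HXEN;      /* FEAT_HCX: allow SCR_EL3.HXEn */
    }
    return valid_mask;
}
```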
1 | Make M profile use completely separate ARMMMUIdx values from | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | those that A profile CPUs use. This is a prelude to adding | ||
3 | support for the MPU and for v8M, which together will require | ||
4 | 6 MMU indexes which don't map cleanly onto the A profile | ||
5 | uses: | ||
6 | non secure User | ||
7 | non secure Privileged | ||
8 | non secure Privileged, execution priority < 0 | ||
9 | secure User | ||
10 | secure Privileged | ||
11 | secure Privileged, execution priority < 0 | ||
12 | 2 | ||
3 | Update SCTLR_ELx fields per ARM DDI0487 H.a. | ||
4 | |||
5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
6 | Reviewed-by: Alex Bennée <alex.bennee@linaro.org> | ||
13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 7 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
14 | Message-id: 1493122030-32191-4-git-send-email-peter.maydell@linaro.org | ||
15 | --- | 8 | --- |
16 | target/arm/cpu.h | 21 +++++++++++++++++++-- | 9 | target/arm/cpu.h | 14 ++++++++++++++ |
17 | target/arm/helper.c | 5 +++++ | 10 | 1 file changed, 14 insertions(+) |
18 | target/arm/translate.c | 3 +++ | ||
19 | 3 files changed, 27 insertions(+), 2 deletions(-) | ||
20 | 11 | ||
21 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | 12 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h |
22 | index XXXXXXX..XXXXXXX 100644 | 13 | index XXXXXXX..XXXXXXX 100644 |
23 | --- a/target/arm/cpu.h | 14 | --- a/target/arm/cpu.h |
24 | +++ b/target/arm/cpu.h | 15 | +++ b/target/arm/cpu.h |
25 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx, | 16 | @@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu); |
26 | * of the AT/ATS operations. | 17 | #define SCTLR_ATA0 (1ULL << 42) /* v8.5-MemTag */ |
27 | * The values used are carefully arranged to make mmu_idx => EL lookup easy. | 18 | #define SCTLR_ATA (1ULL << 43) /* v8.5-MemTag */ |
28 | */ | 19 | #define SCTLR_DSSBS_64 (1ULL << 44) /* v8.5, AArch64 only */ |
29 | -#define ARM_MMU_IDX_A 0x10 /* A profile (and M profile, for the moment) */ | 20 | +#define SCTLR_TWEDEn (1ULL << 45) /* FEAT_TWED */ |
30 | +#define ARM_MMU_IDX_A 0x10 /* A profile */ | 21 | +#define SCTLR_TWEDEL MAKE_64_MASK(46, 4) /* FEAT_TWED */ |
31 | #define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */ | 22 | +#define SCTLR_TWEDEL MAKE_64BIT_MASK(46, 4) /* FEAT_TWED */ |
32 | +#define ARM_MMU_IDX_M 0x40 /* M profile */ | 23 | +#define SCTLR_TMT (1ULL << 51) /* FEAT_TME */ |
33 | 24 | +#define SCTLR_TME0 (1ULL << 52) /* FEAT_TME */ | |
34 | #define ARM_MMU_IDX_TYPE_MASK (~0x7) | 25 | +#define SCTLR_TME (1ULL << 53) /* FEAT_TME */ |
35 | #define ARM_MMU_IDX_COREIDX_MASK 0x7 | 26 | +#define SCTLR_EnASR (1ULL << 54) /* FEAT_LS64_V */ |
36 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx { | 27 | +#define SCTLR_EnAS0 (1ULL << 55) /* FEAT_LS64_ACCDATA */ |
37 | ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A, | 28 | +#define SCTLR_EnALS (1ULL << 56) /* FEAT_LS64 */ |
38 | ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A, | 29 | +#define SCTLR_EPAN (1ULL << 57) /* FEAT_PAN3 */ |
39 | ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A, | 30 | +#define SCTLR_EnTP2 (1ULL << 60) /* FEAT_SME */ |
40 | + ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M, | 31 | +#define SCTLR_NMI (1ULL << 61) /* FEAT_NMI */ |
41 | + ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M, | 32 | +#define SCTLR_SPINTMASK (1ULL << 62) /* FEAT_NMI */ |
42 | /* Indexes below here don't have TLBs and are used only for AT system | 33 | +#define SCTLR_TIDCP (1ULL << 63) /* FEAT_TIDCP1 */ |
43 | * instructions or for the first stage of an S12 page table walk. | 34 | |
44 | */ | 35 | #define CPTR_TCPAC (1U << 31) |
45 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit { | 36 | #define CPTR_TTA (1U << 20) |
46 | ARMMMUIdxBit_S1SE0 = 1 << 4, | ||
47 | ARMMMUIdxBit_S1SE1 = 1 << 5, | ||
48 | ARMMMUIdxBit_S2NS = 1 << 6, | ||
49 | + ARMMMUIdxBit_MUser = 1 << 0, | ||
50 | + ARMMMUIdxBit_MPriv = 1 << 1, | ||
51 | } ARMMMUIdxBit; | ||
52 | |||
53 | #define MMU_USER_IDX 0 | ||
54 | @@ -XXX,XX +XXX,XX @@ static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx) | ||
55 | |||
56 | static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx) | ||
57 | { | ||
58 | - return mmu_idx | ARM_MMU_IDX_A; | ||
59 | + if (arm_feature(env, ARM_FEATURE_M)) { | ||
60 | + return mmu_idx | ARM_MMU_IDX_M; | ||
61 | + } else { | ||
62 | + return mmu_idx | ARM_MMU_IDX_A; | ||
63 | + } | ||
64 | } | ||
65 | |||
66 | /* Return the exception level we're running at if this is our mmu_idx */ | ||
67 | @@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) | ||
68 | switch (mmu_idx & ARM_MMU_IDX_TYPE_MASK) { | ||
69 | case ARM_MMU_IDX_A: | ||
70 | return mmu_idx & 3; | ||
71 | + case ARM_MMU_IDX_M: | ||
72 | + return mmu_idx & 1; | ||
73 | default: | ||
74 | g_assert_not_reached(); | ||
75 | } | ||
76 | @@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch) | ||
77 | { | ||
78 | int el = arm_current_el(env); | ||
79 | |||
80 | + if (arm_feature(env, ARM_FEATURE_M)) { | ||
81 | + ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv; | ||
82 | + | ||
83 | + return arm_to_core_mmu_idx(mmu_idx); | ||
84 | + } | ||
85 | + | ||
86 | if (el < 2 && arm_is_secure_below_el3(env)) { | ||
87 | return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0 + el); | ||
88 | } | ||
89 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
90 | index XXXXXXX..XXXXXXX 100644 | ||
91 | --- a/target/arm/helper.c | ||
92 | +++ b/target/arm/helper.c | ||
93 | @@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
94 | case ARMMMUIdx_S1SE1: | ||
95 | case ARMMMUIdx_S1NSE0: | ||
96 | case ARMMMUIdx_S1NSE1: | ||
97 | + case ARMMMUIdx_MPriv: | ||
98 | + case ARMMMUIdx_MUser: | ||
99 | return 1; | ||
100 | default: | ||
101 | g_assert_not_reached(); | ||
102 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
103 | case ARMMMUIdx_S1NSE1: | ||
104 | case ARMMMUIdx_S1E2: | ||
105 | case ARMMMUIdx_S2NS: | ||
106 | + case ARMMMUIdx_MPriv: | ||
107 | + case ARMMMUIdx_MUser: | ||
108 | return false; | ||
109 | case ARMMMUIdx_S1E3: | ||
110 | case ARMMMUIdx_S1SE0: | ||
111 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
112 | switch (mmu_idx) { | ||
113 | case ARMMMUIdx_S1SE0: | ||
114 | case ARMMMUIdx_S1NSE0: | ||
115 | + case ARMMMUIdx_MUser: | ||
116 | return true; | ||
117 | default: | ||
118 | return false; | ||
119 | diff --git a/target/arm/translate.c b/target/arm/translate.c | ||
120 | index XXXXXXX..XXXXXXX 100644 | ||
121 | --- a/target/arm/translate.c | ||
122 | +++ b/target/arm/translate.c | ||
123 | @@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s) | ||
124 | case ARMMMUIdx_S1SE0: | ||
125 | case ARMMMUIdx_S1SE1: | ||
126 | return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0); | ||
127 | + case ARMMMUIdx_MUser: | ||
128 | + case ARMMMUIdx_MPriv: | ||
129 | + return arm_to_core_mmu_idx(ARMMMUIdx_MUser); | ||
130 | case ARMMMUIdx_S2NS: | ||
131 | default: | ||
132 | g_assert_not_reached(); | ||
133 | -- | 37 | -- |
134 | 2.7.4 | 38 | 2.25.1 |
135 | 39 | ||
1 | The M profile CPU's MPU has an awkward corner case which we | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | would like to implement with a different MMU index. | ||
3 | 2 | ||
4 | We can avoid having to bump the number of MMU modes ARM | 3 | Bool is a more appropriate type for this value. |
5 | uses, because some of our existing MMU indexes are only | 4 | Move the member down in the struct to keep the |
6 | used by non-M-profile CPUs, so we can borrow one. | 5 | bool type members together and remove a hole. |
7 | To avoid that getting too confusing, clean up the code | ||
8 | to try to keep the two meanings of the index separate. | ||
9 | 6 | ||
10 | Instead of ARMMMUIdx enum values being identical to core QEMU | 7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
11 | MMU index values, they are now the core index values with some | 8 | Reviewed-by: Alex Bennée <alex.bennee@linaro.org> |
12 | high bits set. Any particular CPU always uses the same high | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
13 | bits (so eventually A profile cores and M profile cores will | 10 | --- |
14 | use different bits). New functions arm_to_core_mmu_idx() | 11 | target/arm/translate.h | 2 +- |
15 | and core_to_arm_mmu_idx() convert between the two. | 12 | target/arm/translate-a64.c | 2 +- |
13 | target/arm/translate.c | 2 +- | ||
14 | 3 files changed, 3 insertions(+), 3 deletions(-) | ||
16 | 15 | ||
17 | In general core index values are stored in 'int' types, and | ||
18 | ARM values are stored in ARMMMUIdx types. | ||
19 | |||
20 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
21 | Message-id: 1493122030-32191-3-git-send-email-peter.maydell@linaro.org | ||
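
The encoding is easiest to see with concrete numbers. The snippet below restates the masks and helpers added to cpu.h by this patch as a standalone program, so it can be compiled and run outside QEMU; note the in-tree core_to_arm_mmu_idx() also takes the CPU state, so that other profiles can later use a different set of high bits.

```c
#include <stdio.h>

#define ARM_MMU_IDX_A            0x10
#define ARM_MMU_IDX_COREIDX_MASK 0x7

typedef enum { ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A } ARMMMUIdx;

static int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
{
    return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
}

static ARMMMUIdx core_to_arm_mmu_idx(int core_idx)
{
    return (ARMMMUIdx)(core_idx | ARM_MMU_IDX_A);
}

int main(void)
{
    ARMMMUIdx idx = ARMMMUIdx_S1SE0;            /* 4 | 0x10 == 0x14 */
    int core = arm_to_core_mmu_idx(idx);        /* 0x14 & 0x7 == 4  */
    ARMMMUIdx back = core_to_arm_mmu_idx(core); /* 4 | 0x10 == 0x14 */

    printf("full 0x%x -> core %d -> full 0x%x\n", idx, core, back);
    return 0;
}
```
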
22 | --- | ||
23 | target/arm/cpu.h | 71 ++++++++++++++++----- | ||
24 | target/arm/translate.h | 2 +- | ||
25 | target/arm/helper.c | 151 ++++++++++++++++++++++++--------------------- | ||
26 | target/arm/op_helper.c | 3 +- | ||
27 | target/arm/translate-a64.c | 18 ++++-- | ||
28 | target/arm/translate.c | 10 +-- | ||
29 | 6 files changed, 156 insertions(+), 99 deletions(-) | ||
30 | |||
31 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | ||
32 | index XXXXXXX..XXXXXXX 100644 | ||
33 | --- a/target/arm/cpu.h | ||
34 | +++ b/target/arm/cpu.h | ||
35 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx, | ||
36 | * for the accesses done as part of a stage 1 page table walk, rather than | ||
37 | * having to walk the stage 2 page table over and over.) | ||
38 | * | ||
39 | + * The ARMMMUIdx and the mmu index value used by the core QEMU TLB code | ||
40 | + * are not quite the same -- different CPU types (most notably M profile | ||
41 | + * vs A/R profile) would like to use MMU indexes with different semantics, | ||
42 | + * but since we don't ever need to use all of those in a single CPU we | ||
43 | + * can avoid setting NB_MMU_MODES to more than 8. The lower bits of | ||
44 | + * ARMMMUIdx are the core TLB mmu index, and the higher bits are always | ||
45 | + * the same for any particular CPU. | ||
46 | + * Variables of type ARMMMUIdx are always full values, and the core | ||
47 | + * index values are in variables of type 'int'. | ||
48 | + * | ||
49 | * Our enumeration includes at the end some entries which are not "true" | ||
50 | * mmu_idx values in that they don't have corresponding TLBs and are only | ||
51 | * valid for doing slow path page table walks. | ||
52 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx, | ||
53 | * of the AT/ATS operations. | ||
54 | * The values used are carefully arranged to make mmu_idx => EL lookup easy. | ||
55 | */ | ||
56 | +#define ARM_MMU_IDX_A 0x10 /* A profile (and M profile, for the moment) */ | ||
57 | +#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */ | ||
58 | + | ||
59 | +#define ARM_MMU_IDX_TYPE_MASK (~0x7) | ||
60 | +#define ARM_MMU_IDX_COREIDX_MASK 0x7 | ||
61 | + | ||
62 | typedef enum ARMMMUIdx { | ||
63 | - ARMMMUIdx_S12NSE0 = 0, | ||
64 | - ARMMMUIdx_S12NSE1 = 1, | ||
65 | - ARMMMUIdx_S1E2 = 2, | ||
66 | - ARMMMUIdx_S1E3 = 3, | ||
67 | - ARMMMUIdx_S1SE0 = 4, | ||
68 | - ARMMMUIdx_S1SE1 = 5, | ||
69 | - ARMMMUIdx_S2NS = 6, | ||
70 | + ARMMMUIdx_S12NSE0 = 0 | ARM_MMU_IDX_A, | ||
71 | + ARMMMUIdx_S12NSE1 = 1 | ARM_MMU_IDX_A, | ||
72 | + ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A, | ||
73 | + ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A, | ||
74 | + ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A, | ||
75 | + ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A, | ||
76 | + ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A, | ||
77 | /* Indexes below here don't have TLBs and are used only for AT system | ||
78 | * instructions or for the first stage of an S12 page table walk. | ||
79 | */ | ||
80 | - ARMMMUIdx_S1NSE0 = 7, | ||
81 | - ARMMMUIdx_S1NSE1 = 8, | ||
82 | + ARMMMUIdx_S1NSE0 = 0 | ARM_MMU_IDX_NOTLB, | ||
83 | + ARMMMUIdx_S1NSE1 = 1 | ARM_MMU_IDX_NOTLB, | ||
84 | } ARMMMUIdx; | ||
85 | |||
86 | +/* Bit macros for the core-mmu-index values for each index, | ||
87 | + * for use when calling tlb_flush_by_mmuidx() and friends. | ||
88 | + */ | ||
89 | +typedef enum ARMMMUIdxBit { | ||
90 | + ARMMMUIdxBit_S12NSE0 = 1 << 0, | ||
91 | + ARMMMUIdxBit_S12NSE1 = 1 << 1, | ||
92 | + ARMMMUIdxBit_S1E2 = 1 << 2, | ||
93 | + ARMMMUIdxBit_S1E3 = 1 << 3, | ||
94 | + ARMMMUIdxBit_S1SE0 = 1 << 4, | ||
95 | + ARMMMUIdxBit_S1SE1 = 1 << 5, | ||
96 | + ARMMMUIdxBit_S2NS = 1 << 6, | ||
97 | +} ARMMMUIdxBit; | ||
98 | + | ||
99 | #define MMU_USER_IDX 0 | ||
100 | |||
101 | +static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx) | ||
102 | +{ | ||
103 | + return mmu_idx & ARM_MMU_IDX_COREIDX_MASK; | ||
104 | +} | ||
105 | + | ||
106 | +static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx) | ||
107 | +{ | ||
108 | + return mmu_idx | ARM_MMU_IDX_A; | ||
109 | +} | ||
110 | + | ||
111 | /* Return the exception level we're running at if this is our mmu_idx */ | ||
112 | static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) | ||
113 | { | ||
114 | - assert(mmu_idx < ARMMMUIdx_S2NS); | ||
115 | - return mmu_idx & 3; | ||
116 | + switch (mmu_idx & ARM_MMU_IDX_TYPE_MASK) { | ||
117 | + case ARM_MMU_IDX_A: | ||
118 | + return mmu_idx & 3; | ||
119 | + default: | ||
120 | + g_assert_not_reached(); | ||
121 | + } | ||
122 | } | ||
123 | |||
124 | /* Determine the current mmu_idx to use for normal loads/stores */ | ||
125 | @@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch) | ||
126 | int el = arm_current_el(env); | ||
127 | |||
128 | if (el < 2 && arm_is_secure_below_el3(env)) { | ||
129 | - return ARMMMUIdx_S1SE0 + el; | ||
130 | + return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0 + el); | ||
131 | } | ||
132 | return el; | ||
133 | } | ||
134 | @@ -XXX,XX +XXX,XX @@ static inline uint32_t arm_regime_tbi1(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
135 | static inline void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc, | ||
136 | target_ulong *cs_base, uint32_t *flags) | ||
137 | { | ||
138 | - ARMMMUIdx mmu_idx = cpu_mmu_index(env, false); | ||
139 | + ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false)); | ||
140 | if (is_a64(env)) { | ||
141 | *pc = env->pc; | ||
142 | *flags = ARM_TBFLAG_AARCH64_STATE_MASK; | ||
143 | @@ -XXX,XX +XXX,XX @@ static inline void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc, | ||
144 | << ARM_TBFLAG_XSCALE_CPAR_SHIFT); | ||
145 | } | ||
146 | |||
147 | - *flags |= (mmu_idx << ARM_TBFLAG_MMUIDX_SHIFT); | ||
148 | + *flags |= (arm_to_core_mmu_idx(mmu_idx) << ARM_TBFLAG_MMUIDX_SHIFT); | ||
149 | |||
150 | /* The SS_ACTIVE and PSTATE_SS bits correspond to the state machine | ||
151 | * states defined in the ARM ARM for software singlestep: | ||
152 | diff --git a/target/arm/translate.h b/target/arm/translate.h | 16 | diff --git a/target/arm/translate.h b/target/arm/translate.h |
153 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
154 | --- a/target/arm/translate.h | 18 | --- a/target/arm/translate.h |
155 | +++ b/target/arm/translate.h | 19 | +++ b/target/arm/translate.h |
156 | @@ -XXX,XX +XXX,XX @@ static inline int arm_dc_feature(DisasContext *dc, int feature) | 20 | @@ -XXX,XX +XXX,XX @@ typedef struct DisasContext { |
157 | 21 | * so that top level loop can generate correct syndrome information. | |
158 | static inline int get_mem_index(DisasContext *s) | ||
159 | { | ||
160 | - return s->mmu_idx; | ||
161 | + return arm_to_core_mmu_idx(s->mmu_idx); | ||
162 | } | ||
163 | |||
164 | /* Function used to determine the target exception EL when otherwise not known | ||
165 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
166 | index XXXXXXX..XXXXXXX 100644 | ||
167 | --- a/target/arm/helper.c | ||
168 | +++ b/target/arm/helper.c | ||
169 | @@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
170 | CPUState *cs = ENV_GET_CPU(env); | ||
171 | |||
172 | tlb_flush_by_mmuidx(cs, | ||
173 | - (1 << ARMMMUIdx_S12NSE1) | | ||
174 | - (1 << ARMMMUIdx_S12NSE0) | | ||
175 | - (1 << ARMMMUIdx_S2NS)); | ||
176 | + ARMMMUIdxBit_S12NSE1 | | ||
177 | + ARMMMUIdxBit_S12NSE0 | | ||
178 | + ARMMMUIdxBit_S2NS); | ||
179 | } | ||
180 | |||
181 | static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
182 | @@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
183 | CPUState *cs = ENV_GET_CPU(env); | ||
184 | |||
185 | tlb_flush_by_mmuidx_all_cpus_synced(cs, | ||
186 | - (1 << ARMMMUIdx_S12NSE1) | | ||
187 | - (1 << ARMMMUIdx_S12NSE0) | | ||
188 | - (1 << ARMMMUIdx_S2NS)); | ||
189 | + ARMMMUIdxBit_S12NSE1 | | ||
190 | + ARMMMUIdxBit_S12NSE0 | | ||
191 | + ARMMMUIdxBit_S2NS); | ||
192 | } | ||
193 | |||
194 | static void tlbiipas2_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
195 | @@ -XXX,XX +XXX,XX @@ static void tlbiipas2_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
196 | |||
197 | pageaddr = sextract64(value << 12, 0, 40); | ||
198 | |||
199 | - tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S2NS)); | ||
200 | + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S2NS); | ||
201 | } | ||
202 | |||
203 | static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
204 | @@ -XXX,XX +XXX,XX @@ static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
205 | pageaddr = sextract64(value << 12, 0, 40); | ||
206 | |||
207 | tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
208 | - (1 << ARMMMUIdx_S2NS)); | ||
209 | + ARMMMUIdxBit_S2NS); | ||
210 | } | ||
211 | |||
212 | static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
213 | @@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
214 | { | ||
215 | CPUState *cs = ENV_GET_CPU(env); | ||
216 | |||
217 | - tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E2)); | ||
218 | + tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E2); | ||
219 | } | ||
220 | |||
221 | static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
222 | @@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
223 | { | ||
224 | CPUState *cs = ENV_GET_CPU(env); | ||
225 | |||
226 | - tlb_flush_by_mmuidx_all_cpus_synced(cs, (1 << ARMMMUIdx_S1E2)); | ||
227 | + tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E2); | ||
228 | } | ||
229 | |||
230 | static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
231 | @@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
232 | CPUState *cs = ENV_GET_CPU(env); | ||
233 | uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12); | ||
234 | |||
235 | - tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E2)); | ||
236 | + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E2); | ||
237 | } | ||
238 | |||
239 | static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
240 | @@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
241 | uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12); | ||
242 | |||
243 | tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
244 | - (1 << ARMMMUIdx_S1E2)); | ||
245 | + ARMMMUIdxBit_S1E2); | ||
246 | } | ||
247 | |||
248 | static const ARMCPRegInfo cp_reginfo[] = { | ||
249 | @@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
250 | /* Accesses to VTTBR may change the VMID so we must flush the TLB. */ | ||
251 | if (raw_read(env, ri) != value) { | ||
252 | tlb_flush_by_mmuidx(cs, | ||
253 | - (1 << ARMMMUIdx_S12NSE1) | | ||
254 | - (1 << ARMMMUIdx_S12NSE0) | | ||
255 | - (1 << ARMMMUIdx_S2NS)); | ||
256 | + ARMMMUIdxBit_S12NSE1 | | ||
257 | + ARMMMUIdxBit_S12NSE0 | | ||
258 | + ARMMMUIdxBit_S2NS); | ||
259 | raw_write(env, ri, value); | ||
260 | } | ||
261 | } | ||
262 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
263 | |||
264 | if (arm_is_secure_below_el3(env)) { | ||
265 | tlb_flush_by_mmuidx(cs, | ||
266 | - (1 << ARMMMUIdx_S1SE1) | | ||
267 | - (1 << ARMMMUIdx_S1SE0)); | ||
268 | + ARMMMUIdxBit_S1SE1 | | ||
269 | + ARMMMUIdxBit_S1SE0); | ||
270 | } else { | ||
271 | tlb_flush_by_mmuidx(cs, | ||
272 | - (1 << ARMMMUIdx_S12NSE1) | | ||
273 | - (1 << ARMMMUIdx_S12NSE0)); | ||
274 | + ARMMMUIdxBit_S12NSE1 | | ||
275 | + ARMMMUIdxBit_S12NSE0); | ||
276 | } | ||
277 | } | ||
278 | |||
279 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
280 | |||
281 | if (sec) { | ||
282 | tlb_flush_by_mmuidx_all_cpus_synced(cs, | ||
283 | - (1 << ARMMMUIdx_S1SE1) | | ||
284 | - (1 << ARMMMUIdx_S1SE0)); | ||
285 | + ARMMMUIdxBit_S1SE1 | | ||
286 | + ARMMMUIdxBit_S1SE0); | ||
287 | } else { | ||
288 | tlb_flush_by_mmuidx_all_cpus_synced(cs, | ||
289 | - (1 << ARMMMUIdx_S12NSE1) | | ||
290 | - (1 << ARMMMUIdx_S12NSE0)); | ||
291 | + ARMMMUIdxBit_S12NSE1 | | ||
292 | + ARMMMUIdxBit_S12NSE0); | ||
293 | } | ||
294 | } | ||
295 | |||
296 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
297 | |||
298 | if (arm_is_secure_below_el3(env)) { | ||
299 | tlb_flush_by_mmuidx(cs, | ||
300 | - (1 << ARMMMUIdx_S1SE1) | | ||
301 | - (1 << ARMMMUIdx_S1SE0)); | ||
302 | + ARMMMUIdxBit_S1SE1 | | ||
303 | + ARMMMUIdxBit_S1SE0); | ||
304 | } else { | ||
305 | if (arm_feature(env, ARM_FEATURE_EL2)) { | ||
306 | tlb_flush_by_mmuidx(cs, | ||
307 | - (1 << ARMMMUIdx_S12NSE1) | | ||
308 | - (1 << ARMMMUIdx_S12NSE0) | | ||
309 | - (1 << ARMMMUIdx_S2NS)); | ||
310 | + ARMMMUIdxBit_S12NSE1 | | ||
311 | + ARMMMUIdxBit_S12NSE0 | | ||
312 | + ARMMMUIdxBit_S2NS); | ||
313 | } else { | ||
314 | tlb_flush_by_mmuidx(cs, | ||
315 | - (1 << ARMMMUIdx_S12NSE1) | | ||
316 | - (1 << ARMMMUIdx_S12NSE0)); | ||
317 | + ARMMMUIdxBit_S12NSE1 | | ||
318 | + ARMMMUIdxBit_S12NSE0); | ||
319 | } | ||
320 | } | ||
321 | } | ||
322 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
323 | ARMCPU *cpu = arm_env_get_cpu(env); | ||
324 | CPUState *cs = CPU(cpu); | ||
325 | |||
326 | - tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E2)); | ||
327 | + tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E2); | ||
328 | } | ||
329 | |||
330 | static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
331 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
332 | ARMCPU *cpu = arm_env_get_cpu(env); | ||
333 | CPUState *cs = CPU(cpu); | ||
334 | |||
335 | - tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E3)); | ||
336 | + tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E3); | ||
337 | } | ||
338 | |||
339 | static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
340 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
341 | |||
342 | if (sec) { | ||
343 | tlb_flush_by_mmuidx_all_cpus_synced(cs, | ||
344 | - (1 << ARMMMUIdx_S1SE1) | | ||
345 | - (1 << ARMMMUIdx_S1SE0)); | ||
346 | + ARMMMUIdxBit_S1SE1 | | ||
347 | + ARMMMUIdxBit_S1SE0); | ||
348 | } else if (has_el2) { | ||
349 | tlb_flush_by_mmuidx_all_cpus_synced(cs, | ||
350 | - (1 << ARMMMUIdx_S12NSE1) | | ||
351 | - (1 << ARMMMUIdx_S12NSE0) | | ||
352 | - (1 << ARMMMUIdx_S2NS)); | ||
353 | + ARMMMUIdxBit_S12NSE1 | | ||
354 | + ARMMMUIdxBit_S12NSE0 | | ||
355 | + ARMMMUIdxBit_S2NS); | ||
356 | } else { | ||
357 | tlb_flush_by_mmuidx_all_cpus_synced(cs, | ||
358 | - (1 << ARMMMUIdx_S12NSE1) | | ||
359 | - (1 << ARMMMUIdx_S12NSE0)); | ||
360 | + ARMMMUIdxBit_S12NSE1 | | ||
361 | + ARMMMUIdxBit_S12NSE0); | ||
362 | } | ||
363 | } | ||
364 | |||
365 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
366 | { | ||
367 | CPUState *cs = ENV_GET_CPU(env); | ||
368 | |||
369 | - tlb_flush_by_mmuidx_all_cpus_synced(cs, (1 << ARMMMUIdx_S1E2)); | ||
370 | + tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E2); | ||
371 | } | ||
372 | |||
373 | static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
374 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
375 | { | ||
376 | CPUState *cs = ENV_GET_CPU(env); | ||
377 | |||
378 | - tlb_flush_by_mmuidx_all_cpus_synced(cs, (1 << ARMMMUIdx_S1E3)); | ||
379 | + tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3); | ||
380 | } | ||
381 | |||
382 | static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
383 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
384 | |||
385 | if (arm_is_secure_below_el3(env)) { | ||
386 | tlb_flush_page_by_mmuidx(cs, pageaddr, | ||
387 | - (1 << ARMMMUIdx_S1SE1) | | ||
388 | - (1 << ARMMMUIdx_S1SE0)); | ||
389 | + ARMMMUIdxBit_S1SE1 | | ||
390 | + ARMMMUIdxBit_S1SE0); | ||
391 | } else { | ||
392 | tlb_flush_page_by_mmuidx(cs, pageaddr, | ||
393 | - (1 << ARMMMUIdx_S12NSE1) | | ||
394 | - (1 << ARMMMUIdx_S12NSE0)); | ||
395 | + ARMMMUIdxBit_S12NSE1 | | ||
396 | + ARMMMUIdxBit_S12NSE0); | ||
397 | } | ||
398 | } | ||
399 | |||
400 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
401 | CPUState *cs = CPU(cpu); | ||
402 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
403 | |||
404 | - tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E2)); | ||
405 | + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E2); | ||
406 | } | ||
407 | |||
408 | static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
409 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
410 | CPUState *cs = CPU(cpu); | ||
411 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
412 | |||
413 | - tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E3)); | ||
414 | + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E3); | ||
415 | } | ||
416 | |||
417 | static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
418 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
419 | |||
420 | if (sec) { | ||
421 | tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
422 | - (1 << ARMMMUIdx_S1SE1) | | ||
423 | - (1 << ARMMMUIdx_S1SE0)); | ||
424 | + ARMMMUIdxBit_S1SE1 | | ||
425 | + ARMMMUIdxBit_S1SE0); | ||
426 | } else { | ||
427 | tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
428 | - (1 << ARMMMUIdx_S12NSE1) | | ||
429 | - (1 << ARMMMUIdx_S12NSE0)); | ||
430 | + ARMMMUIdxBit_S12NSE1 | | ||
431 | + ARMMMUIdxBit_S12NSE0); | ||
432 | } | ||
433 | } | ||
434 | |||
435 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
436 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
437 | |||
438 | tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
439 | - (1 << ARMMMUIdx_S1E2)); | ||
440 | + ARMMMUIdxBit_S1E2); | ||
441 | } | ||
442 | |||
443 | static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
444 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
445 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
446 | |||
447 | tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
448 | - (1 << ARMMMUIdx_S1E3)); | ||
449 | + ARMMMUIdxBit_S1E3); | ||
450 | } | ||
451 | |||
452 | static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
453 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
454 | |||
455 | pageaddr = sextract64(value << 12, 0, 48); | ||
456 | |||
457 | - tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S2NS)); | ||
458 | + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S2NS); | ||
459 | } | ||
460 | |||
461 | static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
462 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
463 | pageaddr = sextract64(value << 12, 0, 48); | ||
464 | |||
465 | tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
466 | - (1 << ARMMMUIdx_S2NS)); | ||
467 | + ARMMMUIdxBit_S2NS); | ||
468 | } | ||
469 | |||
470 | static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri, | ||
471 | @@ -XXX,XX +XXX,XX @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
472 | return &env->cp15.tcr_el[regime_el(env, mmu_idx)]; | ||
473 | } | ||
474 | |||
475 | +/* Convert a possible stage1+2 MMU index into the appropriate | ||
476 | + * stage 1 MMU index | ||
477 | + */ | ||
478 | +static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx) | ||
479 | +{ | ||
480 | + if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) { | ||
481 | + mmu_idx += (ARMMMUIdx_S1NSE0 - ARMMMUIdx_S12NSE0); | ||
482 | + } | ||
483 | + return mmu_idx; | ||
484 | +} | ||
485 | + | ||
486 | /* Returns TBI0 value for current regime el */ | ||
487 | uint32_t arm_regime_tbi0(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
488 | { | ||
489 | @@ -XXX,XX +XXX,XX @@ uint32_t arm_regime_tbi0(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
490 | uint32_t el; | ||
491 | |||
492 | /* For EL0 and EL1, TBI is controlled by stage 1's TCR, so convert | ||
493 | - * a stage 1+2 mmu index into the appropriate stage 1 mmu index. | ||
494 | - */ | ||
495 | - if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) { | ||
496 | - mmu_idx += ARMMMUIdx_S1NSE0; | ||
497 | - } | ||
498 | + * a stage 1+2 mmu index into the appropriate stage 1 mmu index. | ||
499 | + */ | ||
500 | + mmu_idx = stage_1_mmu_idx(mmu_idx); | ||
501 | |||
502 | tcr = regime_tcr(env, mmu_idx); | ||
503 | el = regime_el(env, mmu_idx); | ||
504 | @@ -XXX,XX +XXX,XX @@ uint32_t arm_regime_tbi1(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
505 | uint32_t el; | ||
506 | |||
507 | /* For EL0 and EL1, TBI is controlled by stage 1's TCR, so convert | ||
508 | - * a stage 1+2 mmu index into the appropriate stage 1 mmu index. | ||
509 | - */ | ||
510 | - if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) { | ||
511 | - mmu_idx += ARMMMUIdx_S1NSE0; | ||
512 | - } | ||
513 | + * a stage 1+2 mmu index into the appropriate stage 1 mmu index. | ||
514 | + */ | ||
515 | + mmu_idx = stage_1_mmu_idx(mmu_idx); | ||
516 | |||
517 | tcr = regime_tcr(env, mmu_idx); | ||
518 | el = regime_el(env, mmu_idx); | ||
519 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_using_lpae_format(CPUARMState *env, | ||
520 | * on whether the long or short descriptor format is in use. */ | ||
521 | bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
522 | { | ||
523 | - if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) { | ||
524 | - mmu_idx += ARMMMUIdx_S1NSE0; | ||
525 | - } | ||
526 | + mmu_idx = stage_1_mmu_idx(mmu_idx); | ||
527 | |||
528 | return regime_using_lpae_format(env, mmu_idx); | ||
529 | } | ||
530 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
531 | int ret; | ||
532 | |||
533 | ret = get_phys_addr(env, address, access_type, | ||
534 | - mmu_idx + ARMMMUIdx_S1NSE0, &ipa, attrs, | ||
535 | + stage_1_mmu_idx(mmu_idx), &ipa, attrs, | ||
536 | prot, page_size, fsr, fi); | ||
537 | |||
538 | /* If S1 fails or S2 is disabled, return early. */ | ||
539 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
540 | /* | ||
541 | * For non-EL2 CPUs a stage1+stage2 translation is just stage 1. | ||
542 | */ | ||
543 | - mmu_idx += ARMMMUIdx_S1NSE0; | ||
544 | + mmu_idx = stage_1_mmu_idx(mmu_idx); | ||
545 | } | ||
546 | } | ||
547 | |||
548 | @@ -XXX,XX +XXX,XX @@ bool arm_tlb_fill(CPUState *cs, vaddr address, | ||
549 | int ret; | ||
550 | MemTxAttrs attrs = {}; | ||
551 | |||
552 | - ret = get_phys_addr(env, address, access_type, mmu_idx, &phys_addr, | ||
553 | + ret = get_phys_addr(env, address, access_type, | ||
554 | + core_to_arm_mmu_idx(env, mmu_idx), &phys_addr, | ||
555 | &attrs, &prot, &page_size, fsr, fi); | ||
556 | if (!ret) { | ||
557 | /* Map a single [sub]page. */ | ||
558 | @@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, | ||
559 | bool ret; | ||
560 | uint32_t fsr; | ||
561 | ARMMMUFaultInfo fi = {}; | ||
562 | + ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false)); | ||
563 | |||
564 | *attrs = (MemTxAttrs) {}; | ||
565 | |||
566 | - ret = get_phys_addr(env, addr, 0, cpu_mmu_index(env, false), &phys_addr, | ||
567 | + ret = get_phys_addr(env, addr, 0, mmu_idx, &phys_addr, | ||
568 | attrs, &prot, &page_size, &fsr, &fi); | ||
569 | |||
570 | if (ret) { | ||
571 | diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c | ||
572 | index XXXXXXX..XXXXXXX 100644 | ||
573 | --- a/target/arm/op_helper.c | ||
574 | +++ b/target/arm/op_helper.c | ||
575 | @@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr, | ||
576 | int target_el; | ||
577 | bool same_el; | ||
578 | uint32_t syn; | ||
579 | + ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx); | ||
580 | |||
581 | if (retaddr) { | ||
582 | /* now we have a real cpu fault */ | ||
583 | @@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr, | ||
584 | /* the DFSR for an alignment fault depends on whether we're using | ||
585 | * the LPAE long descriptor format, or the short descriptor format | ||
586 | */ | 22 | */ |
587 | - if (arm_s1_regime_using_lpae_format(env, mmu_idx)) { | 23 | uint32_t svc_imm; |
588 | + if (arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) { | 24 | - int aarch64; |
589 | env->exception.fsr = (1 << 9) | 0x21; | 25 | int current_el; |
590 | } else { | 26 | /* Debug target exception level for single-step exceptions */ |
591 | env->exception.fsr = 0x1; | 27 | int debug_target_el; |
28 | GHashTable *cp_regs; | ||
29 | uint64_t features; /* CPU features bits */ | ||
30 | + bool aarch64; | ||
31 | /* Because unallocated encodings generate different exception syndrome | ||
32 | * information from traps due to FP being disabled, we can't do a single | ||
33 | * "is fp access disabled" check at a high level in the decode tree. | ||
592 | diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c | 34 | diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c |
593 | index XXXXXXX..XXXXXXX 100644 | 35 | index XXXXXXX..XXXXXXX 100644 |
594 | --- a/target/arm/translate-a64.c | 36 | --- a/target/arm/translate-a64.c |
595 | +++ b/target/arm/translate-a64.c | 37 | +++ b/target/arm/translate-a64.c |
596 | @@ -XXX,XX +XXX,XX @@ void a64_translate_init(void) | 38 | @@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase, |
597 | offsetof(CPUARMState, exclusive_high), "exclusive_high"); | 39 | dc->isar = &arm_cpu->isar; |
598 | } | 40 | dc->condjmp = 0; |
599 | 41 | ||
600 | -static inline ARMMMUIdx get_a64_user_mem_index(DisasContext *s) | 42 | - dc->aarch64 = 1; |
601 | +static inline int get_a64_user_mem_index(DisasContext *s) | 43 | + dc->aarch64 = true; |
602 | { | 44 | /* If we are coming from secure EL0 in a system with a 32-bit EL3, then |
603 | - /* Return the mmu_idx to use for A64 "unprivileged load/store" insns: | 45 | * there is no secure EL1, so we route exceptions to EL3. |
604 | + /* Return the core mmu_idx to use for A64 "unprivileged load/store" insns: | ||
605 | * if EL1, access as if EL0; otherwise access at current EL | ||
606 | */ | 46 | */ |
607 | + ARMMMUIdx useridx; | ||
608 | + | ||
609 | switch (s->mmu_idx) { | ||
610 | case ARMMMUIdx_S12NSE1: | ||
611 | - return ARMMMUIdx_S12NSE0; | ||
612 | + useridx = ARMMMUIdx_S12NSE0; | ||
613 | + break; | ||
614 | case ARMMMUIdx_S1SE1: | ||
615 | - return ARMMMUIdx_S1SE0; | ||
616 | + useridx = ARMMMUIdx_S1SE0; | ||
617 | + break; | ||
618 | case ARMMMUIdx_S2NS: | ||
619 | g_assert_not_reached(); | ||
620 | default: | ||
621 | - return s->mmu_idx; | ||
622 | + useridx = s->mmu_idx; | ||
623 | + break; | ||
624 | } | ||
625 | + return arm_to_core_mmu_idx(useridx); | ||
626 | } | ||
627 | |||
628 | void aarch64_cpu_dump_state(CPUState *cs, FILE *f, | ||
629 | @@ -XXX,XX +XXX,XX @@ void gen_intermediate_code_a64(ARMCPU *cpu, TranslationBlock *tb) | ||
630 | dc->be_data = ARM_TBFLAG_BE_DATA(tb->flags) ? MO_BE : MO_LE; | ||
631 | dc->condexec_mask = 0; | ||
632 | dc->condexec_cond = 0; | ||
633 | - dc->mmu_idx = ARM_TBFLAG_MMUIDX(tb->flags); | ||
634 | + dc->mmu_idx = core_to_arm_mmu_idx(env, ARM_TBFLAG_MMUIDX(tb->flags)); | ||
635 | dc->tbi0 = ARM_TBFLAG_TBI0(tb->flags); | ||
636 | dc->tbi1 = ARM_TBFLAG_TBI1(tb->flags); | ||
637 | dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx); | ||
638 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 47 | diff --git a/target/arm/translate.c b/target/arm/translate.c |
639 | index XXXXXXX..XXXXXXX 100644 | 48 | index XXXXXXX..XXXXXXX 100644 |
640 | --- a/target/arm/translate.c | 49 | --- a/target/arm/translate.c |
641 | +++ b/target/arm/translate.c | 50 | +++ b/target/arm/translate.c |
642 | @@ -XXX,XX +XXX,XX @@ static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo) | 51 | @@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs) |
643 | disas_set_insn_syndrome(s, syn); | 52 | dc->isar = &cpu->isar; |
644 | } | 53 | dc->condjmp = 0; |
645 | 54 | ||
646 | -static inline ARMMMUIdx get_a32_user_mem_index(DisasContext *s) | 55 | - dc->aarch64 = 0; |
647 | +static inline int get_a32_user_mem_index(DisasContext *s) | 56 | + dc->aarch64 = false; |
648 | { | 57 | /* If we are coming from secure EL0 in a system with a 32-bit EL3, then |
649 | - /* Return the mmu_idx to use for A32/T32 "unprivileged load/store" | 58 | * there is no secure EL1, so we route exceptions to EL3. |
650 | + /* Return the core mmu_idx to use for A32/T32 "unprivileged load/store" | 59 | */ |
651 | * insns: | ||
652 | * if PL2, UNPREDICTABLE (we choose to implement as if PL0) | ||
653 | * otherwise, access as if at PL0. | ||
654 | @@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx get_a32_user_mem_index(DisasContext *s) | ||
655 | case ARMMMUIdx_S1E2: /* this one is UNPREDICTABLE */ | ||
656 | case ARMMMUIdx_S12NSE0: | ||
657 | case ARMMMUIdx_S12NSE1: | ||
658 | - return ARMMMUIdx_S12NSE0; | ||
659 | + return arm_to_core_mmu_idx(ARMMMUIdx_S12NSE0); | ||
660 | case ARMMMUIdx_S1E3: | ||
661 | case ARMMMUIdx_S1SE0: | ||
662 | case ARMMMUIdx_S1SE1: | ||
663 | - return ARMMMUIdx_S1SE0; | ||
664 | + return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0); | ||
665 | case ARMMMUIdx_S2NS: | ||
666 | default: | ||
667 | g_assert_not_reached(); | ||
668 | @@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUARMState *env, TranslationBlock *tb) | ||
669 | dc->be_data = ARM_TBFLAG_BE_DATA(tb->flags) ? MO_BE : MO_LE; | ||
670 | dc->condexec_mask = (ARM_TBFLAG_CONDEXEC(tb->flags) & 0xf) << 1; | ||
671 | dc->condexec_cond = ARM_TBFLAG_CONDEXEC(tb->flags) >> 4; | ||
672 | - dc->mmu_idx = ARM_TBFLAG_MMUIDX(tb->flags); | ||
673 | + dc->mmu_idx = core_to_arm_mmu_idx(env, ARM_TBFLAG_MMUIDX(tb->flags)); | ||
674 | dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx); | ||
675 | #if !defined(CONFIG_USER_ONLY) | ||
676 | dc->user = (dc->current_el == 0); | ||
677 | -- | 60 | -- |
678 | 2.7.4 | 61 | 2.25.1 |
679 | 62 | ||
1 | ARM CPUs come in two flavours: | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | * proper MMU ("VMSA") | ||
3 | * only an MPU ("PMSA") | ||
4 | For PMSA, the MPU may be implemented, or not (in which case there | ||
5 | is default "always acts the same" behaviour, but it isn't guest | ||
6 | programmable). | ||
7 | 2 | ||
8 | QEMU is a bit confused about how we indicate this: we have an | 3 | Bool is a more appropriate type for this value. |
9 | ARM_FEATURE_MPU, but it's not clear whether this indicates | 4 | Adjust the assignments to use true/false. |
10 | "PMSA, not VMSA" or "PMSA and MPU present" , and sometimes we | ||
11 | use it for one purpose and sometimes the other. | ||
12 | 5 | ||
13 | Currently trying to implement a PMSA-without-MPU core won't | 6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
14 | work correctly because we turn off the ARM_FEATURE_MPU bit | 7 | Reviewed-by: Alex Bennée <alex.bennee@linaro.org> |
15 | and then a lot of things which should still exist get | ||
16 | turned off too. | ||
17 | |||
18 | As the first step in cleaning this up, rename the feature | ||
19 | bit to ARM_FEATURE_PMSA, which indicates a PMSA CPU (with | ||
20 | or without MPU). | ||
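
As a standalone illustration of the three configurations being distinguished (this is not how the QEMU code is structured, and as the hunks below show this patch still clears the feature bit when no MPU is present; fully separating "PMSA" from "MPU present" is left to the follow-on cleanup):

```c
#include <stdbool.h>
#include <stdio.h>

struct mem_model {
    bool pmsa;     /* true: PMSA (no MMU); false: VMSA (full MMU) */
    bool has_mpu;  /* only meaningful for PMSA: is an MPU implemented? */
};

static const char *describe(struct mem_model m)
{
    if (!m.pmsa) {
        return "VMSA: full MMU";
    }
    return m.has_mpu ? "PMSA with MPU" : "PMSA, no MPU (fixed default behaviour)";
}

int main(void)
{
    struct mem_model configs[] = {
        { .pmsa = false, .has_mpu = false },
        { .pmsa = true,  .has_mpu = true  },
        { .pmsa = true,  .has_mpu = false },
    };
    for (unsigned i = 0; i < 3; i++) {
        printf("%s\n", describe(configs[i]));
    }
    return 0;
}
```
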
21 | |||
22 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
23 | Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> | ||
24 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
25 | Message-id: 1493122030-32191-5-git-send-email-peter.maydell@linaro.org | ||
26 | --- | 9 | --- |
27 | target/arm/cpu.h | 2 +- | 10 | target/arm/cpu.h | 2 +- |
28 | target/arm/cpu.c | 12 ++++++------ | 11 | target/arm/cpu.c | 2 +- |
29 | target/arm/helper.c | 12 ++++++------ | 12 | target/arm/helper-a64.c | 4 ++-- |
30 | target/arm/machine.c | 2 +- | 13 | target/arm/helper.c | 2 +- |
31 | 4 files changed, 14 insertions(+), 14 deletions(-) | 14 | target/arm/hvf/hvf.c | 2 +- |
15 | 5 files changed, 6 insertions(+), 6 deletions(-) | ||
32 | 16 | ||
33 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | 17 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h |
34 | index XXXXXXX..XXXXXXX 100644 | 18 | index XXXXXXX..XXXXXXX 100644 |
35 | --- a/target/arm/cpu.h | 19 | --- a/target/arm/cpu.h |
36 | +++ b/target/arm/cpu.h | 20 | +++ b/target/arm/cpu.h |
37 | @@ -XXX,XX +XXX,XX @@ enum arm_features { | 21 | @@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState { |
38 | ARM_FEATURE_V6K, | 22 | * all other bits are stored in their correct places in env->pstate |
39 | ARM_FEATURE_V7, | 23 | */ |
40 | ARM_FEATURE_THUMB2, | 24 | uint32_t pstate; |
41 | - ARM_FEATURE_MPU, /* Only has Memory Protection Unit, not full MMU. */ | 25 | - uint32_t aarch64; /* 1 if CPU is in aarch64 state; inverse of PSTATE.nRW */ |
42 | + ARM_FEATURE_PMSA, /* no MMU; may have Memory Protection Unit */ | 26 | + bool aarch64; /* True if CPU is in aarch64 state; inverse of PSTATE.nRW */ |
43 | ARM_FEATURE_VFP3, | 27 | |
44 | ARM_FEATURE_VFP_FP16, | 28 | /* Cached TBFLAGS state. See below for which bits are included. */ |
45 | ARM_FEATURE_NEON, | 29 | CPUARMTBFlags hflags; |
46 | diff --git a/target/arm/cpu.c b/target/arm/cpu.c | 30 | diff --git a/target/arm/cpu.c b/target/arm/cpu.c |
47 | index XXXXXXX..XXXXXXX 100644 | 31 | index XXXXXXX..XXXXXXX 100644 |
48 | --- a/target/arm/cpu.c | 32 | --- a/target/arm/cpu.c |
49 | +++ b/target/arm/cpu.c | 33 | +++ b/target/arm/cpu.c |
50 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj) | 34 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev) |
51 | &error_abort); | 35 | |
52 | } | 36 | if (arm_feature(env, ARM_FEATURE_AARCH64)) { |
53 | 37 | /* 64 bit CPUs always start in 64 bit mode */ | |
54 | - if (arm_feature(&cpu->env, ARM_FEATURE_MPU)) { | 38 | - env->aarch64 = 1; |
55 | + if (arm_feature(&cpu->env, ARM_FEATURE_PMSA)) { | 39 | + env->aarch64 = true; |
56 | qdev_property_add_static(DEVICE(obj), &arm_cpu_has_mpu_property, | 40 | #if defined(CONFIG_USER_ONLY) |
57 | &error_abort); | 41 | env->pstate = PSTATE_MODE_EL0t; |
58 | if (arm_feature(&cpu->env, ARM_FEATURE_V7)) { | 42 | /* Userspace expects access to DC ZVA, CTL_EL0 and the cache ops */ |
59 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) | 43 | diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c |
60 | 44 | index XXXXXXX..XXXXXXX 100644 | |
61 | if (arm_feature(env, ARM_FEATURE_V7) && | 45 | --- a/target/arm/helper-a64.c |
62 | !arm_feature(env, ARM_FEATURE_M) && | 46 | +++ b/target/arm/helper-a64.c |
63 | - !arm_feature(env, ARM_FEATURE_MPU)) { | 47 | @@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc) |
64 | + !arm_feature(env, ARM_FEATURE_PMSA)) { | 48 | qemu_mutex_unlock_iothread(); |
65 | /* v7VMSA drops support for the old ARMv5 tiny pages, so we | 49 | |
66 | * can use 4K pages. | 50 | if (!return_to_aa64) { |
67 | */ | 51 | - env->aarch64 = 0; |
68 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) | 52 | + env->aarch64 = false; |
69 | } | 53 | /* We do a raw CPSR write because aarch64_sync_64_to_32() |
70 | 54 | * will sort the register banks out for us, and we've already | |
71 | if (!cpu->has_mpu) { | 55 | * caught all the bad-mode cases in el_from_spsr(). |
72 | - unset_feature(env, ARM_FEATURE_MPU); | 56 | @@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc) |
73 | + unset_feature(env, ARM_FEATURE_PMSA); | 57 | } else { |
74 | } | 58 | int tbii; |
75 | 59 | ||
76 | - if (arm_feature(env, ARM_FEATURE_MPU) && | 60 | - env->aarch64 = 1; |
77 | + if (arm_feature(env, ARM_FEATURE_PMSA) && | 61 | + env->aarch64 = true; |
78 | arm_feature(env, ARM_FEATURE_V7)) { | 62 | spsr &= aarch64_pstate_valid_mask(&env_archcpu(env)->isar); |
79 | uint32_t nr = cpu->pmsav7_dregion; | 63 | pstate_write(env, spsr); |
80 | 64 | if (!arm_singlestep_active(env)) { | |
81 | @@ -XXX,XX +XXX,XX @@ static void arm946_initfn(Object *obj) | ||
82 | |||
83 | cpu->dtb_compatible = "arm,arm946"; | ||
84 | set_feature(&cpu->env, ARM_FEATURE_V5); | ||
85 | - set_feature(&cpu->env, ARM_FEATURE_MPU); | ||
86 | + set_feature(&cpu->env, ARM_FEATURE_PMSA); | ||
87 | set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS); | ||
88 | cpu->midr = 0x41059461; | ||
89 | cpu->ctr = 0x0f004006; | ||
90 | @@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj) | ||
91 | set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV); | ||
92 | set_feature(&cpu->env, ARM_FEATURE_ARM_DIV); | ||
93 | set_feature(&cpu->env, ARM_FEATURE_V7MP); | ||
94 | - set_feature(&cpu->env, ARM_FEATURE_MPU); | ||
95 | + set_feature(&cpu->env, ARM_FEATURE_PMSA); | ||
96 | cpu->midr = 0x411fc153; /* r1p3 */ | ||
97 | cpu->id_pfr0 = 0x0131; | ||
98 | cpu->id_pfr1 = 0x001; | ||
99 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 65 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
100 | index XXXXXXX..XXXXXXX 100644 | 66 | index XXXXXXX..XXXXXXX 100644 |
101 | --- a/target/arm/helper.c | 67 | --- a/target/arm/helper.c |
102 | +++ b/target/arm/helper.c | 68 | +++ b/target/arm/helper.c |
103 | @@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri, | 69 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs) |
104 | { | ||
105 | ARMCPU *cpu = arm_env_get_cpu(env); | ||
106 | |||
107 | - if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_MPU) | ||
108 | + if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA) | ||
109 | && !extended_addresses_enabled(env)) { | ||
110 | /* For VMSA (when not using the LPAE long descriptor page table | ||
111 | * format) this register includes the ASID, so do a TLB flush. | ||
112 | @@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu) | ||
113 | define_arm_cp_regs(cpu, v6k_cp_reginfo); | ||
114 | } | 70 | } |
115 | if (arm_feature(env, ARM_FEATURE_V7MP) && | 71 | |
116 | - !arm_feature(env, ARM_FEATURE_MPU)) { | 72 | pstate_write(env, PSTATE_DAIF | new_mode); |
117 | + !arm_feature(env, ARM_FEATURE_PMSA)) { | 73 | - env->aarch64 = 1; |
118 | define_arm_cp_regs(cpu, v7mp_cp_reginfo); | 74 | + env->aarch64 = true; |
119 | } | 75 | aarch64_restore_sp(env, new_el); |
120 | if (arm_feature(env, ARM_FEATURE_V7)) { | 76 | helper_rebuild_hflags_a64(env, new_el); |
121 | @@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu) | 77 | |
122 | } | 78 | diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c |
123 | } | ||
124 | |||
125 | - if (arm_feature(env, ARM_FEATURE_MPU)) { | ||
126 | + if (arm_feature(env, ARM_FEATURE_PMSA)) { | ||
127 | if (arm_feature(env, ARM_FEATURE_V6)) { | ||
128 | /* PMSAv6 not implemented */ | ||
129 | assert(arm_feature(env, ARM_FEATURE_V7)); | ||
130 | @@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu) | ||
131 | define_arm_cp_regs(cpu, id_pre_v8_midr_cp_reginfo); | ||
132 | } | ||
133 | define_arm_cp_regs(cpu, id_cp_reginfo); | ||
134 | - if (!arm_feature(env, ARM_FEATURE_MPU)) { | ||
135 | + if (!arm_feature(env, ARM_FEATURE_PMSA)) { | ||
136 | define_one_arm_cp_reg(cpu, &id_tlbtr_reginfo); | ||
137 | } else if (arm_feature(env, ARM_FEATURE_V7)) { | ||
138 | define_one_arm_cp_reg(cpu, &id_mpuir_reginfo); | ||
139 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
140 | /* pmsav7 has special handling for when MPU is disabled so call it before | ||
141 | * the common MMU/MPU disabled check below. | ||
142 | */ | ||
143 | - if (arm_feature(env, ARM_FEATURE_MPU) && | ||
144 | + if (arm_feature(env, ARM_FEATURE_PMSA) && | ||
145 | arm_feature(env, ARM_FEATURE_V7)) { | ||
146 | *page_size = TARGET_PAGE_SIZE; | ||
147 | return get_phys_addr_pmsav7(env, address, access_type, mmu_idx, | ||
148 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
149 | return 0; | ||
150 | } | ||
151 | |||
152 | - if (arm_feature(env, ARM_FEATURE_MPU)) { | ||
153 | + if (arm_feature(env, ARM_FEATURE_PMSA)) { | ||
154 | /* Pre-v7 MPU */ | ||
155 | *page_size = TARGET_PAGE_SIZE; | ||
156 | return get_phys_addr_pmsav5(env, address, access_type, mmu_idx, | ||
157 | diff --git a/target/arm/machine.c b/target/arm/machine.c | ||
158 | index XXXXXXX..XXXXXXX 100644 | 79 | index XXXXXXX..XXXXXXX 100644 |
159 | --- a/target/arm/machine.c | 80 | --- a/target/arm/hvf/hvf.c |
160 | +++ b/target/arm/machine.c | 81 | +++ b/target/arm/hvf/hvf.c |
161 | @@ -XXX,XX +XXX,XX @@ static bool pmsav7_needed(void *opaque) | 82 | @@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu) |
162 | ARMCPU *cpu = opaque; | 83 | hv_return_t ret; |
163 | CPUARMState *env = &cpu->env; | 84 | int i; |
164 | 85 | ||
165 | - return arm_feature(env, ARM_FEATURE_MPU) && | 86 | - env->aarch64 = 1; |
166 | + return arm_feature(env, ARM_FEATURE_PMSA) && | 87 | + env->aarch64 = true; |
167 | arm_feature(env, ARM_FEATURE_V7); | 88 | asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz)); |
168 | } | 89 | |
169 | 90 | /* Allocate enough space for our sysreg sync */ | |
170 | -- | 91 | -- |
171 | 2.7.4 | 92 | 2.25.1 |
172 | 93 | ||
1 | From: Michael Davidsaver <mdavidsaver@gmail.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Improve the "-d mmu" tracing for the PMSAv7 MPU translation | 3 | Currently we assume all fields are 32-bit. |
4 | process as an aid in debugging guest MPU configurations: | 4 | Prepare for fields of a single byte, using sizeof_field(). |
5 | * fix a missing newline for a guest-error log | ||
6 | * report the region number with guest-error or unimp | ||
7 | logs of bad region register values | ||
8 | * add a log message for the overall result of the lookup | ||
9 | * print "0x" prefix for hex values | ||
10 | 5 | ||
11 | Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> | 6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
12 | Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> | 7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
13 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 8 | [PMM: use sizeof_field() instead of raw sizeof()] |
14 | Message-id: 1493122030-32191-9-git-send-email-peter.maydell@linaro.org | ||
15 | [PMM: a little tidyup, report region number in all messages | ||
16 | rather than just one] | ||
17 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
18 | --- | 10 | --- |
19 | target/arm/helper.c | 39 +++++++++++++++++++++++++++------------ | 11 | target/arm/translate-a32.h | 13 +++++-------- |
20 | 1 file changed, 27 insertions(+), 12 deletions(-) | 12 | target/arm/translate.c | 21 ++++++++++++++++++++- |
13 | 2 files changed, 25 insertions(+), 9 deletions(-) | ||
21 | 14 | ||
22 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 15 | diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h |
23 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
24 | --- a/target/arm/helper.c | 17 | --- a/target/arm/translate-a32.h |
25 | +++ b/target/arm/helper.c | 18 | +++ b/target/arm/translate-a32.h |
26 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | 19 | @@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 load_cpu_offset(int offset) |
27 | } | 20 | |
28 | 21 | #define load_cpu_field(name) load_cpu_offset(offsetof(CPUARMState, name)) | |
29 | if (!rsize) { | 22 | |
30 | - qemu_log_mask(LOG_GUEST_ERROR, "DRSR.Rsize field can not be 0"); | 23 | -static inline void store_cpu_offset(TCGv_i32 var, int offset) |
31 | + qemu_log_mask(LOG_GUEST_ERROR, | 24 | -{ |
32 | + "DRSR[%d]: Rsize field cannot be 0\n", n); | 25 | - tcg_gen_st_i32(var, cpu_env, offset); |
33 | continue; | 26 | - tcg_temp_free_i32(var); |
34 | } | 27 | -} |
35 | rsize++; | 28 | +void store_cpu_offset(TCGv_i32 var, int offset, int size); |
36 | rmask = (1ull << rsize) - 1; | 29 | |
37 | 30 | -#define store_cpu_field(var, name) \ | |
38 | if (base & rmask) { | 31 | - store_cpu_offset(var, offsetof(CPUARMState, name)) |
39 | - qemu_log_mask(LOG_GUEST_ERROR, "DRBAR %" PRIx32 " misaligned " | 32 | +#define store_cpu_field(var, name) \ |
40 | - "to DRSR region size, mask = %" PRIx32, | 33 | + store_cpu_offset(var, offsetof(CPUARMState, name), \ |
41 | - base, rmask); | 34 | + sizeof_field(CPUARMState, name)) |
42 | + qemu_log_mask(LOG_GUEST_ERROR, | 35 | |
43 | + "DRBAR[%d]: 0x%" PRIx32 " misaligned " | 36 | #define store_cpu_field_constant(val, name) \ |
44 | + "to DRSR region size, mask = 0x%" PRIx32 "\n", | 37 | - tcg_gen_st_i32(tcg_constant_i32(val), cpu_env, offsetof(CPUARMState, name)) |
45 | + n, base, rmask); | 38 | + store_cpu_field(tcg_constant_i32(val), name) |
46 | continue; | 39 | |
47 | } | 40 | /* Create a new temporary and set it to the value of a CPU register. */ |
48 | 41 | static inline TCGv_i32 load_reg(DisasContext *s, int reg) | |
49 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | 42 | diff --git a/target/arm/translate.c b/target/arm/translate.c |
43 | index XXXXXXX..XXXXXXX 100644 | ||
44 | --- a/target/arm/translate.c | ||
45 | +++ b/target/arm/translate.c | ||
46 | @@ -XXX,XX +XXX,XX @@ typedef enum ISSInfo { | ||
47 | ISSIs16Bit = (1 << 8), | ||
48 | } ISSInfo; | ||
49 | |||
50 | +/* | ||
51 | + * Store var into env + offset to a member with size bytes. | ||
52 | + * Free var after use. | ||
53 | + */ | ||
54 | +void store_cpu_offset(TCGv_i32 var, int offset, int size) | ||
55 | +{ | ||
56 | + switch (size) { | ||
57 | + case 1: | ||
58 | + tcg_gen_st8_i32(var, cpu_env, offset); | ||
59 | + break; | ||
60 | + case 4: | ||
61 | + tcg_gen_st_i32(var, cpu_env, offset); | ||
62 | + break; | ||
63 | + default: | ||
64 | + g_assert_not_reached(); | ||
65 | + } | ||
66 | + tcg_temp_free_i32(var); | ||
67 | +} | ||
68 | + | ||
69 | /* Save the syndrome information for a Data Abort */ | ||
70 | static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo) | ||
71 | { | ||
72 | @@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64, | ||
73 | tcg_temp_free_i32(tmp); | ||
74 | } else { | ||
75 | TCGv_i32 tmp = load_reg(s, rt); | ||
76 | - store_cpu_offset(tmp, ri->fieldoffset); | ||
77 | + store_cpu_offset(tmp, ri->fieldoffset, 4); | ||
50 | } | 78 | } |
51 | } | 79 | } |
52 | if (rsize < TARGET_PAGE_BITS) { | 80 | } |
53 | - qemu_log_mask(LOG_UNIMP, "No support for MPU (sub)region" | ||
54 | + qemu_log_mask(LOG_UNIMP, | ||
55 | + "DRSR[%d]: No support for MPU (sub)region " | ||
56 | "alignment of %" PRIu32 " bits. Minimum is %d\n", | ||
57 | - rsize, TARGET_PAGE_BITS); | ||
58 | + n, rsize, TARGET_PAGE_BITS); | ||
59 | continue; | ||
60 | } | ||
61 | if (srdis) { | ||
62 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
63 | break; | ||
64 | default: | ||
65 | qemu_log_mask(LOG_GUEST_ERROR, | ||
66 | - "Bad value for AP bits in DRACR %" | ||
67 | - PRIx32 "\n", ap); | ||
68 | + "DRACR[%d]: Bad value for AP bits: 0x%" | ||
69 | + PRIx32 "\n", n, ap); | ||
70 | } | ||
71 | } else { /* Priv. mode AP bits decoding */ | ||
72 | switch (ap) { | ||
73 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
74 | break; | ||
75 | default: | ||
76 | qemu_log_mask(LOG_GUEST_ERROR, | ||
77 | - "Bad value for AP bits in DRACR %" | ||
78 | - PRIx32 "\n", ap); | ||
79 | + "DRACR[%d]: Bad value for AP bits: 0x%" | ||
80 | + PRIx32 "\n", n, ap); | ||
81 | } | ||
82 | } | ||
83 | |||
84 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
85 | */ | ||
86 | if (arm_feature(env, ARM_FEATURE_PMSA) && | ||
87 | arm_feature(env, ARM_FEATURE_V7)) { | ||
88 | + bool ret; | ||
89 | *page_size = TARGET_PAGE_SIZE; | ||
90 | - return get_phys_addr_pmsav7(env, address, access_type, mmu_idx, | ||
91 | - phys_ptr, prot, fsr); | ||
92 | + ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx, | ||
93 | + phys_ptr, prot, fsr); | ||
94 | + qemu_log_mask(CPU_LOG_MMU, "PMSAv7 MPU lookup for %s at 0x%08" PRIx32 | ||
95 | + " mmu_idx %u -> %s (prot %c%c%c)\n", | ||
96 | + access_type == 1 ? "reading" : | ||
97 | + (access_type == 2 ? "writing" : "execute"), | ||
98 | + (uint32_t)address, mmu_idx, | ||
99 | + ret ? "Miss" : "Hit", | ||
100 | + *prot & PAGE_READ ? 'r' : '-', | ||
101 | + *prot & PAGE_WRITE ? 'w' : '-', | ||
102 | + *prot & PAGE_EXEC ? 'x' : '-'); | ||
103 | + | ||
104 | + return ret; | ||
105 | } | ||
106 | |||
107 | if (regime_translation_disabled(env, mmu_idx)) { | ||
108 | -- | 81 | -- |
109 | 2.7.4 | 82 | 2.25.1 |
110 | |||
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | Bool is a more appropriate type for this value. | ||
4 | Move the member down in the struct to keep the | ||
5 | bool type members together and remove a hole. | ||
6 | |||
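A rough illustration of the packing point (not QEMU code; sizes assume a typical ABI with a 4-byte int, a 1-byte bool and natural alignment):

    #include <stdbool.h>
    #include <stdio.h>

    /* bools separated by ints: 1 + 3 pad + 4 + 1 + 3 pad + 4 = 16 bytes */
    struct scattered { bool a; int x; bool b; int y; };
    /* bools grouped together: 1 + 1 + 2 pad + 4 + 4 = 12 bytes */
    struct grouped   { bool a; bool b; int x; int y; };

    int main(void)
    {
        printf("%zu %zu\n", sizeof(struct scattered), sizeof(struct grouped));
        return 0;
    }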
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | --- | ||
11 | target/arm/translate.h | 2 +- | ||
12 | target/arm/translate-a64.c | 2 +- | ||
13 | 2 files changed, 2 insertions(+), 2 deletions(-) | ||
14 | |||
15 | diff --git a/target/arm/translate.h b/target/arm/translate.h | ||
16 | index XXXXXXX..XXXXXXX 100644 | ||
17 | --- a/target/arm/translate.h | ||
18 | +++ b/target/arm/translate.h | ||
19 | @@ -XXX,XX +XXX,XX @@ typedef struct DisasContext { | ||
20 | bool eci_handled; | ||
21 | /* TCG op to rewind to if this turns out to be an invalid ECI state */ | ||
22 | TCGOp *insn_eci_rewind; | ||
23 | - int thumb; | ||
24 | int sctlr_b; | ||
25 | MemOp be_data; | ||
26 | #if !defined(CONFIG_USER_ONLY) | ||
27 | @@ -XXX,XX +XXX,XX @@ typedef struct DisasContext { | ||
28 | GHashTable *cp_regs; | ||
29 | uint64_t features; /* CPU features bits */ | ||
30 | bool aarch64; | ||
31 | + bool thumb; | ||
32 | /* Because unallocated encodings generate different exception syndrome | ||
33 | * information from traps due to FP being disabled, we can't do a single | ||
34 | * "is fp access disabled" check at a high level in the decode tree. | ||
35 | diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c | ||
36 | index XXXXXXX..XXXXXXX 100644 | ||
37 | --- a/target/arm/translate-a64.c | ||
38 | +++ b/target/arm/translate-a64.c | ||
39 | @@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase, | ||
40 | */ | ||
41 | dc->secure_routed_to_el3 = arm_feature(env, ARM_FEATURE_EL3) && | ||
42 | !arm_el_is_aa64(env, 3); | ||
43 | - dc->thumb = 0; | ||
44 | + dc->thumb = false; | ||
45 | dc->sctlr_b = 0; | ||
46 | dc->be_data = EX_TBFLAG_ANY(tb_flags, BE_DATA) ? MO_BE : MO_LE; | ||
47 | dc->condexec_mask = 0; | ||
48 | -- | ||
49 | 2.25.1 |
1 | From: Wei Huang <wei@redhat.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | The PMUv3 driver of linux kernel (in arch/arm64/kernel/perf_event.c) | 3 | Bool is a more appropriate type for this value. |
4 | relies on the PMUVER field of id_aa64dfr0_el1 to decide if PMU support | 4 | Adjust the assignments to use true/false. |
5 | is present or not. This patch clears the PMUVER field under TCG mode | ||
6 | when vPMU=off. Without it, PMUv3 will init insider guest VMs even | ||
7 | with vPMU=off. This patch also removes a redundant line inside the | ||
8 | if-statement. | ||
9 | 5 | ||
10 | Signed-off-by: Wei Huang <wei@redhat.com> | 6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
11 | Message-id: 1495123889-32301-1-git-send-email-wei@redhat.com | ||
12 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | 7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
14 | --- | 9 | --- |
15 | target/arm/cpu.c | 2 +- | 10 | target/arm/cpu.h | 2 +- |
16 | 1 file changed, 1 insertion(+), 1 deletion(-) | 11 | linux-user/arm/cpu_loop.c | 2 +- |
12 | target/arm/cpu.c | 2 +- | ||
13 | target/arm/m_helper.c | 6 +++--- | ||
14 | 4 files changed, 6 insertions(+), 6 deletions(-) | ||
17 | 15 | ||
16 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | ||
17 | index XXXXXXX..XXXXXXX 100644 | ||
18 | --- a/target/arm/cpu.h | ||
19 | +++ b/target/arm/cpu.h | ||
20 | @@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState { | ||
21 | */ | ||
22 | uint32_t pstate; | ||
23 | bool aarch64; /* True if CPU is in aarch64 state; inverse of PSTATE.nRW */ | ||
24 | + bool thumb; /* True if CPU is in thumb mode; cpsr[5] */ | ||
25 | |||
26 | /* Cached TBFLAGS state. See below for which bits are included. */ | ||
27 | CPUARMTBFlags hflags; | ||
28 | @@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState { | ||
29 | uint32_t ZF; /* Z set if zero. */ | ||
30 | uint32_t QF; /* 0 or 1 */ | ||
31 | uint32_t GE; /* cpsr[19:16] */ | ||
32 | - uint32_t thumb; /* cpsr[5]. 0 = arm mode, 1 = thumb mode. */ | ||
33 | uint32_t condexec_bits; /* IT bits. cpsr[15:10,26:25]. */ | ||
34 | uint32_t btype; /* BTI branch type. spsr[11:10]. */ | ||
35 | uint64_t daif; /* exception masks, in the bits they are in PSTATE */ | ||
36 | diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c | ||
37 | index XXXXXXX..XXXXXXX 100644 | ||
38 | --- a/linux-user/arm/cpu_loop.c | ||
39 | +++ b/linux-user/arm/cpu_loop.c | ||
40 | @@ -XXX,XX +XXX,XX @@ do_kernel_trap(CPUARMState *env) | ||
41 | /* Jump back to the caller. */ | ||
42 | addr = env->regs[14]; | ||
43 | if (addr & 1) { | ||
44 | - env->thumb = 1; | ||
45 | + env->thumb = true; | ||
46 | addr &= ~1; | ||
47 | } | ||
48 | env->regs[15] = addr; | ||
18 | diff --git a/target/arm/cpu.c b/target/arm/cpu.c | 49 | diff --git a/target/arm/cpu.c b/target/arm/cpu.c |
19 | index XXXXXXX..XXXXXXX 100644 | 50 | index XXXXXXX..XXXXXXX 100644 |
20 | --- a/target/arm/cpu.c | 51 | --- a/target/arm/cpu.c |
21 | +++ b/target/arm/cpu.c | 52 | +++ b/target/arm/cpu.c |
22 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) | 53 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_pc(CPUState *cs, vaddr value) |
54 | |||
55 | if (is_a64(env)) { | ||
56 | env->pc = value; | ||
57 | - env->thumb = 0; | ||
58 | + env->thumb = false; | ||
59 | } else { | ||
60 | env->regs[15] = value & ~1; | ||
61 | env->thumb = value & 1; | ||
62 | diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c | ||
63 | index XXXXXXX..XXXXXXX 100644 | ||
64 | --- a/target/arm/m_helper.c | ||
65 | +++ b/target/arm/m_helper.c | ||
66 | @@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) | ||
67 | env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; | ||
23 | } | 68 | } |
24 | 69 | switch_v7m_security_state(env, dest & 1); | |
25 | if (!cpu->has_pmu) { | 70 | - env->thumb = 1; |
26 | - cpu->has_pmu = false; | 71 | + env->thumb = true; |
27 | unset_feature(env, ARM_FEATURE_PMU); | 72 | env->regs[15] = dest & ~1; |
28 | + cpu->id_aa64dfr0 &= ~0xf00; | 73 | arm_rebuild_hflags(env); |
74 | } | ||
75 | @@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) | ||
76 | * except that the low bit doesn't indicate Thumb/not. | ||
77 | */ | ||
78 | env->regs[14] = nextinst; | ||
79 | - env->thumb = 1; | ||
80 | + env->thumb = true; | ||
81 | env->regs[15] = dest & ~1; | ||
82 | return; | ||
29 | } | 83 | } |
30 | 84 | @@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) | |
31 | if (!arm_feature(env, ARM_FEATURE_EL2)) { | 85 | } |
86 | env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; | ||
87 | switch_v7m_security_state(env, 0); | ||
88 | - env->thumb = 1; | ||
89 | + env->thumb = true; | ||
90 | env->regs[15] = dest; | ||
91 | arm_rebuild_hflags(env); | ||
92 | } | ||
32 | -- | 93 | -- |
33 | 2.7.4 | 94 | 2.25.1 |
34 | |||
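For reference on the id_aa64dfr0 change in the patch above: PMUVer is bits [11:8] of ID_AA64DFR0_EL1, so masking out 0xf00 advertises "PMU not implemented" to the guest. A hedged sketch of the kind of check a guest driver then makes (illustrative only, not the Linux code):

    #include <stdint.h>

    /* PMUVer == 0 means no PMU; 0xf means IMPLEMENTATION DEFINED,
     * which an architectural PMUv3 driver also treats as absent. */
    static int pmuv3_present(uint64_t id_aa64dfr0)
    {
        unsigned pmuver = (id_aa64dfr0 >> 8) & 0xf;
        return pmuver != 0 && pmuver != 0xf;
    }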
1 | From: Michael Davidsaver <mdavidsaver@gmail.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | The M series MPU is almost the same as the already implemented R | 3 | This function is incorrect in that it does not properly consider |
4 | profile MPU (v7 PMSA). So all we need to implement here is the MPU | 4 | CPTR_EL2.FPEN. We've already got another mechanism for raising |
5 | register interface in the system register space. | 5 | an FPU access trap: ARM_CP_FPU, so use that instead. |
6 | 6 | ||
7 | This implementation has the same restriction as the R profile MPU | 7 | Remove CP_ACCESS_TRAP_FP_EL{2,3}, which becomes unused. |
8 | that it doesn't permit regions to be sized down smaller than 1K. | ||
9 | 8 | ||
10 | We also do not yet implement support for MPU_CTRL.HFNMIENA; this | 9 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
11 | bit should if zero disable use of the MPU when running HardFault, | 10 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
12 | NMI or with FAULTMASK set to 1 (ie at an execution priority of | ||
13 | less than zero) -- if the MPU is enabled we don't treat these | ||
14 | cases any differently. | ||
15 | |||
16 | Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> | ||
17 | Message-id: 1493122030-32191-13-git-send-email-peter.maydell@linaro.org | ||
18 | [PMM: Keep all the bits in mpu_ctrl field, rather than | ||
19 | using SCTLR bits for them; drop broken HFNMIENA support; | ||
20 | various cleanup] | ||
21 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
22 | --- | 12 | --- |
23 | target/arm/cpu.h | 6 +++ | 13 | target/arm/cpu.h | 5 ----- |
24 | hw/intc/armv7m_nvic.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++++++ | 14 | target/arm/helper.c | 17 ++--------------- |
25 | target/arm/helper.c | 25 +++++++++++- | 15 | target/arm/op_helper.c | 13 ------------- |
26 | target/arm/machine.c | 5 ++- | 16 | 3 files changed, 2 insertions(+), 33 deletions(-) |
27 | 4 files changed, 137 insertions(+), 3 deletions(-) | ||
28 | 17 | ||
29 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | 18 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h |
30 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
31 | --- a/target/arm/cpu.h | 20 | --- a/target/arm/cpu.h |
32 | +++ b/target/arm/cpu.h | 21 | +++ b/target/arm/cpu.h |
33 | @@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState { | 22 | @@ -XXX,XX +XXX,XX @@ typedef enum CPAccessResult { |
34 | uint32_t dfsr; /* Debug Fault Status Register */ | 23 | /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */ |
35 | uint32_t mmfar; /* MemManage Fault Address */ | 24 | CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5, |
36 | uint32_t bfar; /* BusFault Address */ | 25 | CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6, |
37 | + unsigned mpu_ctrl; /* MPU_CTRL (some bits kept in sctlr_el[1]) */ | 26 | - /* Access fails and results in an exception syndrome for an FP access, |
38 | int exception; | 27 | - * trapped directly to EL2 or EL3 |
39 | } v7m; | 28 | - */ |
40 | 29 | - CP_ACCESS_TRAP_FP_EL2 = 7, | |
41 | @@ -XXX,XX +XXX,XX @@ FIELD(V7M_DFSR, DWTTRAP, 2, 1) | 30 | - CP_ACCESS_TRAP_FP_EL3 = 8, |
42 | FIELD(V7M_DFSR, VCATCH, 3, 1) | 31 | } CPAccessResult; |
43 | FIELD(V7M_DFSR, EXTERNAL, 4, 1) | 32 | |
44 | 33 | /* Access functions for coprocessor registers. These cannot fail and | |
45 | +/* v7M MPU_CTRL bits */ | ||
46 | +FIELD(V7M_MPU_CTRL, ENABLE, 0, 1) | ||
47 | +FIELD(V7M_MPU_CTRL, HFNMIENA, 1, 1) | ||
48 | +FIELD(V7M_MPU_CTRL, PRIVDEFENA, 2, 1) | ||
49 | + | ||
50 | /* If adding a feature bit which corresponds to a Linux ELF | ||
51 | * HWCAP bit, remember to update the feature-bit-to-hwcap | ||
52 | * mapping in linux-user/elfload.c:get_elf_hwcap(). | ||
53 | diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c | ||
54 | index XXXXXXX..XXXXXXX 100644 | ||
55 | --- a/hw/intc/armv7m_nvic.c | ||
56 | +++ b/hw/intc/armv7m_nvic.c | ||
57 | @@ -XXX,XX +XXX,XX @@ | ||
58 | #include "hw/arm/arm.h" | ||
59 | #include "hw/arm/armv7m_nvic.h" | ||
60 | #include "target/arm/cpu.h" | ||
61 | +#include "exec/exec-all.h" | ||
62 | #include "qemu/log.h" | ||
63 | #include "trace.h" | ||
64 | |||
65 | @@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset) | ||
66 | case 0xd70: /* ISAR4. */ | ||
67 | return 0x01310102; | ||
68 | /* TODO: Implement debug registers. */ | ||
69 | + case 0xd90: /* MPU_TYPE */ | ||
70 | + /* Unified MPU; if the MPU is not present this value is zero */ | ||
71 | + return cpu->pmsav7_dregion << 8; | ||
72 | + break; | ||
73 | + case 0xd94: /* MPU_CTRL */ | ||
74 | + return cpu->env.v7m.mpu_ctrl; | ||
75 | + case 0xd98: /* MPU_RNR */ | ||
76 | + return cpu->env.cp15.c6_rgnr; | ||
77 | + case 0xd9c: /* MPU_RBAR */ | ||
78 | + case 0xda4: /* MPU_RBAR_A1 */ | ||
79 | + case 0xdac: /* MPU_RBAR_A2 */ | ||
80 | + case 0xdb4: /* MPU_RBAR_A3 */ | ||
81 | + { | ||
82 | + int region = cpu->env.cp15.c6_rgnr; | ||
83 | + | ||
84 | + if (region >= cpu->pmsav7_dregion) { | ||
85 | + return 0; | ||
86 | + } | ||
87 | + return (cpu->env.pmsav7.drbar[region] & 0x1f) | (region & 0xf); | ||
88 | + } | ||
89 | + case 0xda0: /* MPU_RASR */ | ||
90 | + case 0xda8: /* MPU_RASR_A1 */ | ||
91 | + case 0xdb0: /* MPU_RASR_A2 */ | ||
92 | + case 0xdb8: /* MPU_RASR_A3 */ | ||
93 | + { | ||
94 | + int region = cpu->env.cp15.c6_rgnr; | ||
95 | + | ||
96 | + if (region >= cpu->pmsav7_dregion) { | ||
97 | + return 0; | ||
98 | + } | ||
99 | + return ((cpu->env.pmsav7.dracr[region] & 0xffff) << 16) | | ||
100 | + (cpu->env.pmsav7.drsr[region] & 0xffff); | ||
101 | + } | ||
102 | default: | ||
103 | qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset); | ||
104 | return 0; | ||
105 | @@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value) | ||
106 | qemu_log_mask(LOG_UNIMP, | ||
107 | "NVIC: Aux fault status registers unimplemented\n"); | ||
108 | break; | ||
109 | + case 0xd90: /* MPU_TYPE */ | ||
110 | + return; /* RO */ | ||
111 | + case 0xd94: /* MPU_CTRL */ | ||
112 | + if ((value & | ||
113 | + (R_V7M_MPU_CTRL_HFNMIENA_MASK | R_V7M_MPU_CTRL_ENABLE_MASK)) | ||
114 | + == R_V7M_MPU_CTRL_HFNMIENA_MASK) { | ||
115 | + qemu_log_mask(LOG_GUEST_ERROR, "MPU_CTRL: HFNMIENA and !ENABLE is " | ||
116 | + "UNPREDICTABLE\n"); | ||
117 | + } | ||
118 | + cpu->env.v7m.mpu_ctrl = value & (R_V7M_MPU_CTRL_ENABLE_MASK | | ||
119 | + R_V7M_MPU_CTRL_HFNMIENA_MASK | | ||
120 | + R_V7M_MPU_CTRL_PRIVDEFENA_MASK); | ||
121 | + tlb_flush(CPU(cpu)); | ||
122 | + break; | ||
123 | + case 0xd98: /* MPU_RNR */ | ||
124 | + if (value >= cpu->pmsav7_dregion) { | ||
125 | + qemu_log_mask(LOG_GUEST_ERROR, "MPU region out of range %" | ||
126 | + PRIu32 "/%" PRIu32 "\n", | ||
127 | + value, cpu->pmsav7_dregion); | ||
128 | + } else { | ||
129 | + cpu->env.cp15.c6_rgnr = value; | ||
130 | + } | ||
131 | + break; | ||
132 | + case 0xd9c: /* MPU_RBAR */ | ||
133 | + case 0xda4: /* MPU_RBAR_A1 */ | ||
134 | + case 0xdac: /* MPU_RBAR_A2 */ | ||
135 | + case 0xdb4: /* MPU_RBAR_A3 */ | ||
136 | + { | ||
137 | + int region; | ||
138 | + | ||
139 | + if (value & (1 << 4)) { | ||
140 | + /* VALID bit means use the region number specified in this | ||
141 | + * value and also update MPU_RNR.REGION with that value. | ||
142 | + */ | ||
143 | + region = extract32(value, 0, 4); | ||
144 | + if (region >= cpu->pmsav7_dregion) { | ||
145 | + qemu_log_mask(LOG_GUEST_ERROR, | ||
146 | + "MPU region out of range %u/%" PRIu32 "\n", | ||
147 | + region, cpu->pmsav7_dregion); | ||
148 | + return; | ||
149 | + } | ||
150 | + cpu->env.cp15.c6_rgnr = region; | ||
151 | + } else { | ||
152 | + region = cpu->env.cp15.c6_rgnr; | ||
153 | + } | ||
154 | + | ||
155 | + if (region >= cpu->pmsav7_dregion) { | ||
156 | + return; | ||
157 | + } | ||
158 | + | ||
159 | + cpu->env.pmsav7.drbar[region] = value & ~0x1f; | ||
160 | + tlb_flush(CPU(cpu)); | ||
161 | + break; | ||
162 | + } | ||
163 | + case 0xda0: /* MPU_RASR */ | ||
164 | + case 0xda8: /* MPU_RASR_A1 */ | ||
165 | + case 0xdb0: /* MPU_RASR_A2 */ | ||
166 | + case 0xdb8: /* MPU_RASR_A3 */ | ||
167 | + { | ||
168 | + int region = cpu->env.cp15.c6_rgnr; | ||
169 | + | ||
170 | + if (region >= cpu->pmsav7_dregion) { | ||
171 | + return; | ||
172 | + } | ||
173 | + | ||
174 | + cpu->env.pmsav7.drsr[region] = value & 0xff3f; | ||
175 | + cpu->env.pmsav7.dracr[region] = (value >> 16) & 0x173f; | ||
176 | + tlb_flush(CPU(cpu)); | ||
177 | + break; | ||
178 | + } | ||
179 | case 0xf00: /* Software Triggered Interrupt Register */ | ||
180 | { | ||
181 | /* user mode can only write to STIR if CCR.USERSETMPEND permits it */ | ||
182 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 34 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
183 | index XXXXXXX..XXXXXXX 100644 | 35 | index XXXXXXX..XXXXXXX 100644 |
184 | --- a/target/arm/helper.c | 36 | --- a/target/arm/helper.c |
185 | +++ b/target/arm/helper.c | 37 | +++ b/target/arm/helper.c |
186 | @@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx) | 38 | @@ -XXX,XX +XXX,XX @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri, |
187 | static inline bool regime_translation_disabled(CPUARMState *env, | ||
188 | ARMMMUIdx mmu_idx) | ||
189 | { | ||
190 | + if (arm_feature(env, ARM_FEATURE_M)) { | ||
191 | + return !(env->v7m.mpu_ctrl & R_V7M_MPU_CTRL_ENABLE_MASK); | ||
192 | + } | ||
193 | + | ||
194 | if (mmu_idx == ARMMMUIdx_S2NS) { | ||
195 | return (env->cp15.hcr_el2 & HCR_VM) == 0; | ||
196 | } | ||
197 | @@ -XXX,XX +XXX,XX @@ static inline void get_phys_addr_pmsav7_default(CPUARMState *env, | ||
198 | } | 39 | } |
199 | } | 40 | } |
200 | 41 | ||
201 | +static bool pmsav7_use_background_region(ARMCPU *cpu, | 42 | -static CPAccessResult fpexc32_access(CPUARMState *env, const ARMCPRegInfo *ri, |
202 | + ARMMMUIdx mmu_idx, bool is_user) | 43 | - bool isread) |
203 | +{ | 44 | -{ |
204 | + /* Return true if we should use the default memory map as a | 45 | - if ((env->cp15.cptr_el[2] & CPTR_TFP) && arm_current_el(env) == 2) { |
205 | + * "background" region if there are no hits against any MPU regions. | 46 | - return CP_ACCESS_TRAP_FP_EL2; |
206 | + */ | 47 | - } |
207 | + CPUARMState *env = &cpu->env; | 48 | - if (env->cp15.cptr_el[3] & CPTR_TFP) { |
208 | + | 49 | - return CP_ACCESS_TRAP_FP_EL3; |
209 | + if (is_user) { | 50 | - } |
210 | + return false; | 51 | - return CP_ACCESS_OK; |
211 | + } | 52 | -} |
212 | + | 53 | - |
213 | + if (arm_feature(env, ARM_FEATURE_M)) { | 54 | static void sdcr_write(CPUARMState *env, const ARMCPRegInfo *ri, |
214 | + return env->v7m.mpu_ctrl & R_V7M_MPU_CTRL_PRIVDEFENA_MASK; | 55 | uint64_t value) |
215 | + } else { | 56 | { |
216 | + return regime_sctlr(env, mmu_idx) & SCTLR_BR; | 57 | @@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = { |
217 | + } | 58 | .access = PL1_RW, .readfn = spsel_read, .writefn = spsel_write }, |
218 | +} | 59 | { .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64, |
219 | + | 60 | .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0, |
220 | static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | 61 | - .type = ARM_CP_ALIAS, |
221 | int access_type, ARMMMUIdx mmu_idx, | 62 | - .fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]), |
222 | hwaddr *phys_ptr, int *prot, uint32_t *fsr) | 63 | - .access = PL2_RW, .accessfn = fpexc32_access }, |
223 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | 64 | + .access = PL2_RW, .type = ARM_CP_ALIAS | ARM_CP_FPU, |
224 | } | 65 | + .fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]) }, |
225 | 66 | { .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64, | |
226 | if (n == -1) { /* no hits */ | 67 | .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0, |
227 | - if (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR)) { | 68 | .access = PL2_RW, .resetvalue = 0, |
228 | + if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) { | 69 | diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c |
229 | /* background fault */ | ||
230 | *fsr = 0; | ||
231 | return true; | ||
232 | diff --git a/target/arm/machine.c b/target/arm/machine.c | ||
233 | index XXXXXXX..XXXXXXX 100644 | 70 | index XXXXXXX..XXXXXXX 100644 |
234 | --- a/target/arm/machine.c | 71 | --- a/target/arm/op_helper.c |
235 | +++ b/target/arm/machine.c | 72 | +++ b/target/arm/op_helper.c |
236 | @@ -XXX,XX +XXX,XX @@ static bool m_needed(void *opaque) | 73 | @@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome, |
237 | 74 | target_el = 3; | |
238 | static const VMStateDescription vmstate_m = { | 75 | syndrome = syn_uncategorized(); |
239 | .name = "cpu/m", | 76 | break; |
240 | - .version_id = 3, | 77 | - case CP_ACCESS_TRAP_FP_EL2: |
241 | - .minimum_version_id = 3, | 78 | - target_el = 2; |
242 | + .version_id = 4, | 79 | - /* Since we are an implementation that takes exceptions on a trapped |
243 | + .minimum_version_id = 4, | 80 | - * conditional insn only if the insn has passed its condition code |
244 | .needed = m_needed, | 81 | - * check, we take the IMPDEF choice to always report CV=1 COND=0xe |
245 | .fields = (VMStateField[]) { | 82 | - * (which is also the required value for AArch64 traps). |
246 | VMSTATE_UINT32(env.v7m.vecbase, ARMCPU), | 83 | - */ |
247 | @@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = { | 84 | - syndrome = syn_fp_access_trap(1, 0xe, false); |
248 | VMSTATE_UINT32(env.v7m.dfsr, ARMCPU), | 85 | - break; |
249 | VMSTATE_UINT32(env.v7m.mmfar, ARMCPU), | 86 | - case CP_ACCESS_TRAP_FP_EL3: |
250 | VMSTATE_UINT32(env.v7m.bfar, ARMCPU), | 87 | - target_el = 3; |
251 | + VMSTATE_UINT32(env.v7m.mpu_ctrl, ARMCPU), | 88 | - syndrome = syn_fp_access_trap(1, 0xe, false); |
252 | VMSTATE_INT32(env.v7m.exception, ARMCPU), | 89 | - break; |
253 | VMSTATE_END_OF_LIST() | 90 | default: |
91 | g_assert_not_reached(); | ||
254 | } | 92 | } |
255 | -- | 93 | -- |
256 | 2.7.4 | 94 | 2.25.1 |
257 | |||
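As background for the register interface added in the MPU patch above, here is a rough guest-side sketch of programming one region through it; the addresses are the standard ARMv7-M SCS locations corresponding to the 0xd90..0xda0 offsets handled in nvic_readl()/nvic_writel(). Illustrative only, not taken from QEMU or any firmware:

    #include <stdint.h>

    #define MPU_TYPE (*(volatile uint32_t *)0xE000ED90)
    #define MPU_CTRL (*(volatile uint32_t *)0xE000ED94)
    #define MPU_RNR  (*(volatile uint32_t *)0xE000ED98)
    #define MPU_RBAR (*(volatile uint32_t *)0xE000ED9C)
    #define MPU_RASR (*(volatile uint32_t *)0xE000EDA0)

    static void mpu_setup_region0(void)
    {
        if (((MPU_TYPE >> 8) & 0xff) == 0) {
            return;                    /* DREGION == 0: no MPU present */
        }
        MPU_RNR  = 0;                  /* select region 0 */
        MPU_RBAR = 0x20000000;         /* base address; VALID=0, so RNR is used */
        MPU_RASR = (3u << 24)          /* AP = 0b011: full access */
                 | (17u << 1)          /* SIZE = 17 -> 2^(17+1) = 256KB */
                 | 1u;                 /* region enable */
        MPU_CTRL = (1u << 2) | 1u;     /* PRIVDEFENA | ENABLE */
        /* a DSB + ISB would normally follow before relying on the new map */
    }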
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | Factor out the code common to reset_btype() and set_btype(). | ||
4 | Use tcg_constant_i32. | ||
5 | |||
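Many hunks in this part of the series make the same mechanical change, so a hedged sketch of the general before/after pattern may help; the fragment assumes the usual translator context (cpu_env, a DisasContext) and the variable names are illustrative:

    /* before: tcg_const_i32() allocates a temporary the caller must free */
    TCGv_i32 tmp = tcg_const_i32(val);
    tcg_gen_st_i32(tmp, cpu_env, offsetof(CPUARMState, btype));
    tcg_temp_free_i32(tmp);

    /* after: tcg_constant_i32() returns a cached, read-only value that
     * must not be freed, so the explicit free disappears */
    tcg_gen_st_i32(tcg_constant_i32(val), cpu_env, offsetof(CPUARMState, btype));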
6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | --- | ||
10 | target/arm/translate-a64.c | 25 ++++++++++++------------- | ||
11 | 1 file changed, 12 insertions(+), 13 deletions(-) | ||
12 | |||
13 | diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c | ||
14 | index XXXXXXX..XXXXXXX 100644 | ||
15 | --- a/target/arm/translate-a64.c | ||
16 | +++ b/target/arm/translate-a64.c | ||
17 | @@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s) | ||
18 | return arm_to_core_mmu_idx(useridx); | ||
19 | } | ||
20 | |||
21 | -static void reset_btype(DisasContext *s) | ||
22 | +static void set_btype_raw(int val) | ||
23 | { | ||
24 | - if (s->btype != 0) { | ||
25 | - TCGv_i32 zero = tcg_const_i32(0); | ||
26 | - tcg_gen_st_i32(zero, cpu_env, offsetof(CPUARMState, btype)); | ||
27 | - tcg_temp_free_i32(zero); | ||
28 | - s->btype = 0; | ||
29 | - } | ||
30 | + tcg_gen_st_i32(tcg_constant_i32(val), cpu_env, | ||
31 | + offsetof(CPUARMState, btype)); | ||
32 | } | ||
33 | |||
34 | static void set_btype(DisasContext *s, int val) | ||
35 | { | ||
36 | - TCGv_i32 tcg_val; | ||
37 | - | ||
38 | /* BTYPE is a 2-bit field, and 0 should be done with reset_btype. */ | ||
39 | tcg_debug_assert(val >= 1 && val <= 3); | ||
40 | - | ||
41 | - tcg_val = tcg_const_i32(val); | ||
42 | - tcg_gen_st_i32(tcg_val, cpu_env, offsetof(CPUARMState, btype)); | ||
43 | - tcg_temp_free_i32(tcg_val); | ||
44 | + set_btype_raw(val); | ||
45 | s->btype = -1; | ||
46 | } | ||
47 | |||
48 | +static void reset_btype(DisasContext *s) | ||
49 | +{ | ||
50 | + if (s->btype != 0) { | ||
51 | + set_btype_raw(0); | ||
52 | + s->btype = 0; | ||
53 | + } | ||
54 | +} | ||
55 | + | ||
56 | void gen_a64_set_pc_im(uint64_t val) | ||
57 | { | ||
58 | tcg_gen_movi_i64(cpu_pc, val); | ||
59 | -- | ||
60 | 2.25.1 |
1 | From: Cédric Le Goater <clg@kaod.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Let's add an RTC to the palmetto BMC and a LM75 temperature sensor to | 3 | For aa32, the function has a parameter to use the new el. |
4 | the AST2500 EVB to start with. | 4 | For aa64, that never happens. |
5 | Use tcg_constant_i32 while we're at it. | ||
5 | 6 | ||
6 | Signed-off-by: Cédric Le Goater <clg@kaod.org> | 7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Message-id: 1494827476-1487-5-git-send-email-clg@kaod.org | 8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | --- | 10 | --- |
11 | hw/arm/aspeed.c | 27 +++++++++++++++++++++++++++ | 11 | target/arm/translate-a64.c | 21 +++++++++----------- |
12 | 1 file changed, 27 insertions(+) | 12 | target/arm/translate.c | 40 +++++++++++++++++++++++--------------- |
13 | 2 files changed, 33 insertions(+), 28 deletions(-) | ||
13 | 14 | ||
14 | diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c | 15 | diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c |
15 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
16 | --- a/hw/arm/aspeed.c | 17 | --- a/target/arm/translate-a64.c |
17 | +++ b/hw/arm/aspeed.c | 18 | +++ b/target/arm/translate-a64.c |
18 | @@ -XXX,XX +XXX,XX @@ typedef struct AspeedBoardConfig { | 19 | @@ -XXX,XX +XXX,XX @@ static void a64_free_cc(DisasCompare64 *c64) |
19 | const char *fmc_model; | 20 | tcg_temp_free_i64(c64->value); |
20 | const char *spi_model; | ||
21 | uint32_t num_cs; | ||
22 | + void (*i2c_init)(AspeedBoardState *bmc); | ||
23 | } AspeedBoardConfig; | ||
24 | |||
25 | enum { | ||
26 | @@ -XXX,XX +XXX,XX @@ enum { | ||
27 | SCU_AST2500_HW_STRAP_ACPI_ENABLE | \ | ||
28 | SCU_HW_STRAP_SPI_MODE(SCU_HW_STRAP_SPI_MASTER)) | ||
29 | |||
30 | +static void palmetto_bmc_i2c_init(AspeedBoardState *bmc); | ||
31 | +static void ast2500_evb_i2c_init(AspeedBoardState *bmc); | ||
32 | + | ||
33 | static const AspeedBoardConfig aspeed_boards[] = { | ||
34 | [PALMETTO_BMC] = { | ||
35 | .soc_name = "ast2400-a1", | ||
36 | @@ -XXX,XX +XXX,XX @@ static const AspeedBoardConfig aspeed_boards[] = { | ||
37 | .fmc_model = "n25q256a", | ||
38 | .spi_model = "mx25l25635e", | ||
39 | .num_cs = 1, | ||
40 | + .i2c_init = palmetto_bmc_i2c_init, | ||
41 | }, | ||
42 | [AST2500_EVB] = { | ||
43 | .soc_name = "ast2500-a1", | ||
44 | @@ -XXX,XX +XXX,XX @@ static const AspeedBoardConfig aspeed_boards[] = { | ||
45 | .fmc_model = "n25q256a", | ||
46 | .spi_model = "mx25l25635e", | ||
47 | .num_cs = 1, | ||
48 | + .i2c_init = ast2500_evb_i2c_init, | ||
49 | }, | ||
50 | [ROMULUS_BMC] = { | ||
51 | .soc_name = "ast2500-a1", | ||
52 | @@ -XXX,XX +XXX,XX @@ static void aspeed_board_init(MachineState *machine, | ||
53 | aspeed_board_binfo.ram_size = ram_size; | ||
54 | aspeed_board_binfo.loader_start = sc->info->sdram_base; | ||
55 | |||
56 | + if (cfg->i2c_init) { | ||
57 | + cfg->i2c_init(bmc); | ||
58 | + } | ||
59 | + | ||
60 | arm_load_kernel(ARM_CPU(first_cpu), &aspeed_board_binfo); | ||
61 | } | 21 | } |
62 | 22 | ||
63 | +static void palmetto_bmc_i2c_init(AspeedBoardState *bmc) | 23 | +static void gen_rebuild_hflags(DisasContext *s) |
64 | +{ | 24 | +{ |
65 | + AspeedSoCState *soc = &bmc->soc; | 25 | + gen_helper_rebuild_hflags_a64(cpu_env, tcg_constant_i32(s->current_el)); |
66 | + | ||
67 | + /* The palmetto platform expects a ds3231 RTC but a ds1338 is | ||
68 | + * enough to provide basic RTC features. Alarms will be missing */ | ||
69 | + i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 0), "ds1338", 0x68); | ||
70 | +} | 26 | +} |
71 | + | 27 | + |
72 | static void palmetto_bmc_init(MachineState *machine) | 28 | static void gen_exception_internal(int excp) |
73 | { | 29 | { |
74 | aspeed_board_init(machine, &aspeed_boards[PALMETTO_BMC]); | 30 | TCGv_i32 tcg_excp = tcg_const_i32(excp); |
75 | @@ -XXX,XX +XXX,XX @@ static const TypeInfo palmetto_bmc_type = { | 31 | @@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn, |
76 | .class_init = palmetto_bmc_class_init, | 32 | } else { |
77 | }; | 33 | clear_pstate_bits(PSTATE_UAO); |
78 | 34 | } | |
79 | +static void ast2500_evb_i2c_init(AspeedBoardState *bmc) | 35 | - t1 = tcg_const_i32(s->current_el); |
36 | - gen_helper_rebuild_hflags_a64(cpu_env, t1); | ||
37 | - tcg_temp_free_i32(t1); | ||
38 | + gen_rebuild_hflags(s); | ||
39 | break; | ||
40 | |||
41 | case 0x04: /* PAN */ | ||
42 | @@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn, | ||
43 | } else { | ||
44 | clear_pstate_bits(PSTATE_PAN); | ||
45 | } | ||
46 | - t1 = tcg_const_i32(s->current_el); | ||
47 | - gen_helper_rebuild_hflags_a64(cpu_env, t1); | ||
48 | - tcg_temp_free_i32(t1); | ||
49 | + gen_rebuild_hflags(s); | ||
50 | break; | ||
51 | |||
52 | case 0x05: /* SPSel */ | ||
53 | @@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn, | ||
54 | } else { | ||
55 | clear_pstate_bits(PSTATE_TCO); | ||
56 | } | ||
57 | - t1 = tcg_const_i32(s->current_el); | ||
58 | - gen_helper_rebuild_hflags_a64(cpu_env, t1); | ||
59 | - tcg_temp_free_i32(t1); | ||
60 | + gen_rebuild_hflags(s); | ||
61 | /* Many factors, including TCO, go into MTE_ACTIVE. */ | ||
62 | s->base.is_jmp = DISAS_UPDATE_NOCHAIN; | ||
63 | } else if (dc_isar_feature(aa64_mte_insn_reg, s)) { | ||
64 | @@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread, | ||
65 | * A write to any coprocessor regiser that ends a TB | ||
66 | * must rebuild the hflags for the next TB. | ||
67 | */ | ||
68 | - TCGv_i32 tcg_el = tcg_const_i32(s->current_el); | ||
69 | - gen_helper_rebuild_hflags_a64(cpu_env, tcg_el); | ||
70 | - tcg_temp_free_i32(tcg_el); | ||
71 | + gen_rebuild_hflags(s); | ||
72 | /* | ||
73 | * We default to ending the TB on a coprocessor register write, | ||
74 | * but allow this to be suppressed by the register definition | ||
75 | diff --git a/target/arm/translate.c b/target/arm/translate.c | ||
76 | index XXXXXXX..XXXXXXX 100644 | ||
77 | --- a/target/arm/translate.c | ||
78 | +++ b/target/arm/translate.c | ||
79 | @@ -XXX,XX +XXX,XX @@ void gen_set_cpsr(TCGv_i32 var, uint32_t mask) | ||
80 | tcg_temp_free_i32(tmp_mask); | ||
81 | } | ||
82 | |||
83 | +static void gen_rebuild_hflags(DisasContext *s, bool new_el) | ||
80 | +{ | 84 | +{ |
81 | + AspeedSoCState *soc = &bmc->soc; | 85 | + bool m_profile = arm_dc_feature(s, ARM_FEATURE_M); |
82 | + | 86 | + |
83 | + /* The AST2500 EVB expects a LM75 but a TMP105 is compatible */ | 87 | + if (new_el) { |
84 | + i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7), "tmp105", 0x4d); | 88 | + if (m_profile) { |
89 | + gen_helper_rebuild_hflags_m32_newel(cpu_env); | ||
90 | + } else { | ||
91 | + gen_helper_rebuild_hflags_a32_newel(cpu_env); | ||
92 | + } | ||
93 | + } else { | ||
94 | + TCGv_i32 tcg_el = tcg_constant_i32(s->current_el); | ||
95 | + if (m_profile) { | ||
96 | + gen_helper_rebuild_hflags_m32(cpu_env, tcg_el); | ||
97 | + } else { | ||
98 | + gen_helper_rebuild_hflags_a32(cpu_env, tcg_el); | ||
99 | + } | ||
100 | + } | ||
85 | +} | 101 | +} |
86 | + | 102 | + |
87 | static void ast2500_evb_init(MachineState *machine) | 103 | static void gen_exception_internal(int excp) |
88 | { | 104 | { |
89 | aspeed_board_init(machine, &aspeed_boards[AST2500_EVB]); | 105 | TCGv_i32 tcg_excp = tcg_const_i32(excp); |
106 | @@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64, | ||
107 | * A write to any coprocessor register that ends a TB | ||
108 | * must rebuild the hflags for the next TB. | ||
109 | */ | ||
110 | - TCGv_i32 tcg_el = tcg_const_i32(s->current_el); | ||
111 | - if (arm_dc_feature(s, ARM_FEATURE_M)) { | ||
112 | - gen_helper_rebuild_hflags_m32(cpu_env, tcg_el); | ||
113 | - } else { | ||
114 | - if (ri->type & ARM_CP_NEWEL) { | ||
115 | - gen_helper_rebuild_hflags_a32_newel(cpu_env); | ||
116 | - } else { | ||
117 | - gen_helper_rebuild_hflags_a32(cpu_env, tcg_el); | ||
118 | - } | ||
119 | - } | ||
120 | - tcg_temp_free_i32(tcg_el); | ||
121 | + gen_rebuild_hflags(s, ri->type & ARM_CP_NEWEL); | ||
122 | /* | ||
123 | * We default to ending the TB on a coprocessor register write, | ||
124 | * but allow this to be suppressed by the register definition | ||
125 | @@ -XXX,XX +XXX,XX @@ static bool trans_MSR_v7m(DisasContext *s, arg_MSR_v7m *a) | ||
126 | tcg_temp_free_i32(addr); | ||
127 | tcg_temp_free_i32(reg); | ||
128 | /* If we wrote to CONTROL, the EL might have changed */ | ||
129 | - gen_helper_rebuild_hflags_m32_newel(cpu_env); | ||
130 | + gen_rebuild_hflags(s, true); | ||
131 | gen_lookup_tb(s); | ||
132 | return true; | ||
133 | } | ||
134 | @@ -XXX,XX +XXX,XX @@ static bool trans_CPS(DisasContext *s, arg_CPS *a) | ||
135 | |||
136 | static bool trans_CPS_v7m(DisasContext *s, arg_CPS_v7m *a) | ||
137 | { | ||
138 | - TCGv_i32 tmp, addr, el; | ||
139 | + TCGv_i32 tmp, addr; | ||
140 | |||
141 | if (!arm_dc_feature(s, ARM_FEATURE_M)) { | ||
142 | return false; | ||
143 | @@ -XXX,XX +XXX,XX @@ static bool trans_CPS_v7m(DisasContext *s, arg_CPS_v7m *a) | ||
144 | gen_helper_v7m_msr(cpu_env, addr, tmp); | ||
145 | tcg_temp_free_i32(addr); | ||
146 | } | ||
147 | - el = tcg_const_i32(s->current_el); | ||
148 | - gen_helper_rebuild_hflags_m32(cpu_env, el); | ||
149 | - tcg_temp_free_i32(el); | ||
150 | + gen_rebuild_hflags(s, false); | ||
151 | tcg_temp_free_i32(tmp); | ||
152 | gen_lookup_tb(s); | ||
153 | return true; | ||
90 | -- | 154 | -- |
91 | 2.7.4 | 155 | 2.25.1 |
92 | |||
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | Instead of computing | ||
4 | |||
5 | tmp1 = shift & 0xff; | ||
6 | dest = (tmp1 > 0x1f ? 0 : value) << (tmp1 & 0x1f) | ||
7 | |||
8 | use | ||
9 | |||
10 | tmpd = value << (shift & 0x1f); | ||
11 | dest = shift & 0xe0 ? 0 : tmpd; | ||
12 | |||
13 | which has a flatter dependency tree. | ||
14 | Use tcg_constant_i32 while we're at it. | ||
15 | |||
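A quick self-contained check (not part of the patch) that the two formulations agree for a register-specified LSL, where only the low byte of the shift register is significant:

    #include <assert.h>
    #include <stdint.h>

    static uint32_t lsl_old(uint32_t value, uint32_t shift)
    {
        uint32_t tmp1 = shift & 0xff;
        return (tmp1 > 0x1f ? 0 : value) << (tmp1 & 0x1f);
    }

    static uint32_t lsl_new(uint32_t value, uint32_t shift)
    {
        uint32_t tmpd = value << (shift & 0x1f);
        return (shift & 0xe0) ? 0 : tmpd;
    }

    int main(void)
    {
        for (uint32_t s = 0; s < 256; s++) {
            assert(lsl_old(0xdeadbeef, s) == lsl_new(0xdeadbeef, s));
        }
        return 0;
    }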
16 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
17 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
18 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
19 | --- | ||
20 | target/arm/translate.c | 18 ++++++++---------- | ||
21 | 1 file changed, 8 insertions(+), 10 deletions(-) | ||
22 | |||
23 | diff --git a/target/arm/translate.c b/target/arm/translate.c | ||
24 | index XXXXXXX..XXXXXXX 100644 | ||
25 | --- a/target/arm/translate.c | ||
26 | +++ b/target/arm/translate.c | ||
27 | @@ -XXX,XX +XXX,XX @@ static void gen_sbc_CC(TCGv_i32 dest, TCGv_i32 t0, TCGv_i32 t1) | ||
28 | #define GEN_SHIFT(name) \ | ||
29 | static void gen_##name(TCGv_i32 dest, TCGv_i32 t0, TCGv_i32 t1) \ | ||
30 | { \ | ||
31 | - TCGv_i32 tmp1, tmp2, tmp3; \ | ||
32 | - tmp1 = tcg_temp_new_i32(); \ | ||
33 | - tcg_gen_andi_i32(tmp1, t1, 0xff); \ | ||
34 | - tmp2 = tcg_const_i32(0); \ | ||
35 | - tmp3 = tcg_const_i32(0x1f); \ | ||
36 | - tcg_gen_movcond_i32(TCG_COND_GTU, tmp2, tmp1, tmp3, tmp2, t0); \ | ||
37 | - tcg_temp_free_i32(tmp3); \ | ||
38 | - tcg_gen_andi_i32(tmp1, tmp1, 0x1f); \ | ||
39 | - tcg_gen_##name##_i32(dest, tmp2, tmp1); \ | ||
40 | - tcg_temp_free_i32(tmp2); \ | ||
41 | + TCGv_i32 tmpd = tcg_temp_new_i32(); \ | ||
42 | + TCGv_i32 tmp1 = tcg_temp_new_i32(); \ | ||
43 | + TCGv_i32 zero = tcg_constant_i32(0); \ | ||
44 | + tcg_gen_andi_i32(tmp1, t1, 0x1f); \ | ||
45 | + tcg_gen_##name##_i32(tmpd, t0, tmp1); \ | ||
46 | + tcg_gen_andi_i32(tmp1, t1, 0xe0); \ | ||
47 | + tcg_gen_movcond_i32(TCG_COND_NE, dest, tmp1, zero, zero, tmpd); \ | ||
48 | + tcg_temp_free_i32(tmpd); \ | ||
49 | tcg_temp_free_i32(tmp1); \ | ||
50 | } | ||
51 | GEN_SHIFT(shl) | ||
52 | -- | ||
53 | 2.25.1 |
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | Use tcg_gen_umin_i32 instead of tcg_gen_movcond_i32. | ||
4 | Use tcg_constant_i32 while we're at it. | ||
5 | |||
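Illustrative note (not from the patch): the movcond being removed just clamps the shift count, and for ASR clamping to 31 is exact because an arithmetic right shift of a 32-bit value by 32 or more gives the same result as a shift by 31:

    #include <stdint.h>

    static int32_t asr_by_reg(int32_t value, uint32_t shift)
    {
        uint32_t s = shift & 0xff;     /* only the low byte of Rs matters */
        if (s > 31) {
            s = 31;                    /* the umin(s, 31) the patch now emits */
        }
        return value >> s;             /* assumes arithmetic >> for int32_t */
    }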
6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | --- | ||
10 | target/arm/translate.c | 8 +++----- | ||
11 | 1 file changed, 3 insertions(+), 5 deletions(-) | ||
12 | |||
13 | diff --git a/target/arm/translate.c b/target/arm/translate.c | ||
14 | index XXXXXXX..XXXXXXX 100644 | ||
15 | --- a/target/arm/translate.c | ||
16 | +++ b/target/arm/translate.c | ||
17 | @@ -XXX,XX +XXX,XX @@ GEN_SHIFT(shr) | ||
18 | |||
19 | static void gen_sar(TCGv_i32 dest, TCGv_i32 t0, TCGv_i32 t1) | ||
20 | { | ||
21 | - TCGv_i32 tmp1, tmp2; | ||
22 | - tmp1 = tcg_temp_new_i32(); | ||
23 | + TCGv_i32 tmp1 = tcg_temp_new_i32(); | ||
24 | + | ||
25 | tcg_gen_andi_i32(tmp1, t1, 0xff); | ||
26 | - tmp2 = tcg_const_i32(0x1f); | ||
27 | - tcg_gen_movcond_i32(TCG_COND_GTU, tmp1, tmp1, tmp2, tmp2, tmp1); | ||
28 | - tcg_temp_free_i32(tmp2); | ||
29 | + tcg_gen_umin_i32(tmp1, tmp1, tcg_constant_i32(31)); | ||
30 | tcg_gen_sar_i32(dest, t0, tmp1); | ||
31 | tcg_temp_free_i32(tmp1); | ||
32 | } | ||
33 | -- | ||
34 | 2.25.1 |
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | The length of the previous insn may be computed from | ||
4 | the difference of start and end addresses. | ||
5 | Use tcg_constant_i32 while we're at it. | ||
6 | |||
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | --- | ||
11 | target/arm/translate.c | 12 ++++-------- | ||
12 | 1 file changed, 4 insertions(+), 8 deletions(-) | ||
13 | |||
14 | diff --git a/target/arm/translate.c b/target/arm/translate.c | ||
15 | index XXXXXXX..XXXXXXX 100644 | ||
16 | --- a/target/arm/translate.c | ||
17 | +++ b/target/arm/translate.c | ||
18 | @@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu) | ||
19 | /* nothing more to generate */ | ||
20 | break; | ||
21 | case DISAS_WFI: | ||
22 | - { | ||
23 | - TCGv_i32 tmp = tcg_const_i32((dc->thumb && | ||
24 | - !(dc->insn & (1U << 31))) ? 2 : 4); | ||
25 | - | ||
26 | - gen_helper_wfi(cpu_env, tmp); | ||
27 | - tcg_temp_free_i32(tmp); | ||
28 | - /* The helper doesn't necessarily throw an exception, but we | ||
29 | + gen_helper_wfi(cpu_env, | ||
30 | + tcg_constant_i32(dc->base.pc_next - dc->pc_curr)); | ||
31 | + /* | ||
32 | + * The helper doesn't necessarily throw an exception, but we | ||
33 | * must go back to the main loop to check for interrupts anyway. | ||
34 | */ | ||
35 | tcg_gen_exit_tb(NULL, 0); | ||
36 | break; | ||
37 | - } | ||
38 | case DISAS_WFE: | ||
39 | gen_helper_wfe(cpu_env); | ||
40 | break; | ||
41 | -- | ||
42 | 2.25.1 |
1 | From: Cédric Le Goater <clg@kaod.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Today, the LAST command is handled with the STOP command but this is | 3 | Use tcg_constant_{i32,i64} as appropriate throughout. |
4 | incorrect. Also nack the I2C bus when a LAST is issued. | 4 | This fixes a bug in trans_VSCCLRM() where we were leaking a TCGv. |
5 | 5 | ||
6 | Signed-off-by: Cédric Le Goater <clg@kaod.org> | 6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Message-id: 1494827476-1487-3-git-send-email-clg@kaod.org | 7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | --- | 9 | --- |
10 | hw/i2c/aspeed_i2c.c | 9 ++++++--- | 10 | target/arm/translate-m-nocp.c | 12 +++++------- |
11 | 1 file changed, 6 insertions(+), 3 deletions(-) | 11 | 1 file changed, 5 insertions(+), 7 deletions(-) |
12 | 12 | ||
13 | diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c | 13 | diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c |
14 | index XXXXXXX..XXXXXXX 100644 | 14 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/hw/i2c/aspeed_i2c.c | 15 | --- a/target/arm/translate-m-nocp.c |
16 | +++ b/hw/i2c/aspeed_i2c.c | 16 | +++ b/target/arm/translate-m-nocp.c |
17 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | 17 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a) |
18 | bus->cmd &= ~I2CD_M_TX_CMD; | ||
19 | } | 18 | } |
20 | 19 | ||
21 | - if (bus->cmd & I2CD_M_RX_CMD) { | 20 | /* Zero the Sregs from btmreg to topreg inclusive. */ |
22 | + if (bus->cmd & (I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST)) { | 21 | - zero = tcg_const_i64(0); |
23 | int ret = i2c_recv(bus->bus); | 22 | + zero = tcg_constant_i64(0); |
24 | if (ret < 0) { | 23 | if (btmreg & 1) { |
25 | qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__); | 24 | write_neon_element64(zero, btmreg >> 1, 1, MO_32); |
26 | @@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value) | 25 | btmreg++; |
27 | bus->intr_status |= I2CD_INTR_RX_DONE; | 26 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a) |
28 | } | ||
29 | bus->buf = (ret & I2CD_BYTE_BUF_RX_MASK) << I2CD_BYTE_BUF_RX_SHIFT; | ||
30 | - bus->cmd &= ~I2CD_M_RX_CMD; | ||
31 | + if (bus->cmd & I2CD_M_S_RX_CMD_LAST) { | ||
32 | + i2c_nack(bus->bus); | ||
33 | + } | ||
34 | + bus->cmd &= ~(I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST); | ||
35 | } | 27 | } |
36 | 28 | assert(btmreg == topreg + 1); | |
37 | - if (bus->cmd & (I2CD_M_STOP_CMD | I2CD_M_S_RX_CMD_LAST)) { | 29 | if (dc_isar_feature(aa32_mve, s)) { |
38 | + if (bus->cmd & I2CD_M_STOP_CMD) { | 30 | - TCGv_i32 z32 = tcg_const_i32(0); |
39 | if (!i2c_bus_busy(bus->bus)) { | 31 | - store_cpu_field(z32, v7m.vpr); |
40 | bus->intr_status |= I2CD_INTR_ABNORMAL; | 32 | + store_cpu_field(tcg_constant_i32(0), v7m.vpr); |
41 | } else { | 33 | } |
34 | |||
35 | clear_eci_state(s); | ||
36 | @@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno, | ||
37 | } | ||
38 | case ARM_VFP_FPCXT_NS: | ||
39 | { | ||
40 | - TCGv_i32 control, sfpa, fpscr, fpdscr, zero; | ||
41 | + TCGv_i32 control, sfpa, fpscr, fpdscr; | ||
42 | TCGLabel *lab_active = gen_new_label(); | ||
43 | |||
44 | lookup_tb = true; | ||
45 | @@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno, | ||
46 | storefn(s, opaque, tmp, true); | ||
47 | /* If SFPA is zero then set FPSCR from FPDSCR_NS */ | ||
48 | fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]); | ||
49 | - zero = tcg_const_i32(0); | ||
50 | - tcg_gen_movcond_i32(TCG_COND_EQ, fpscr, sfpa, zero, fpdscr, fpscr); | ||
51 | + tcg_gen_movcond_i32(TCG_COND_EQ, fpscr, sfpa, tcg_constant_i32(0), | ||
52 | + fpdscr, fpscr); | ||
53 | gen_helper_vfp_set_fpscr(cpu_env, fpscr); | ||
54 | - tcg_temp_free_i32(zero); | ||
55 | tcg_temp_free_i32(sfpa); | ||
56 | tcg_temp_free_i32(fpdscr); | ||
57 | tcg_temp_free_i32(fpscr); | ||
42 | -- | 58 | -- |
43 | 2.7.4 | 59 | 2.25.1 |
44 | |||
1 | From: Andrew Jones <drjones@redhat.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Don't allow load_uboot_image() to proceed when less bytes than | 3 | Use tcg_constant_{i32,i64} as appropriate throughout. |
4 | header-size was read. | ||
5 | 4 | ||
6 | Signed-off-by: Andrew Jones <drjones@redhat.com> | 5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Message-id: 20170524091315.20284-1-drjones@redhat.com | ||
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | 6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 7 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
10 | --- | 8 | --- |
11 | hw/core/loader.c | 3 ++- | 9 | target/arm/translate-neon.c | 21 +++++++-------------- |
12 | 1 file changed, 2 insertions(+), 1 deletion(-) | 10 | 1 file changed, 7 insertions(+), 14 deletions(-) |
13 | 11 | ||
14 | diff --git a/hw/core/loader.c b/hw/core/loader.c | 12 | diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c |
15 | index XXXXXXX..XXXXXXX 100644 | 13 | index XXXXXXX..XXXXXXX 100644 |
16 | --- a/hw/core/loader.c | 14 | --- a/target/arm/translate-neon.c |
17 | +++ b/hw/core/loader.c | 15 | +++ b/target/arm/translate-neon.c |
18 | @@ -XXX,XX +XXX,XX @@ static int load_uboot_image(const char *filename, hwaddr *ep, hwaddr *loadaddr, | 16 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a) |
19 | return -1; | 17 | int mmu_idx = get_mem_index(s); |
20 | 18 | int size = a->size; | |
21 | size = read(fd, hdr, sizeof(uboot_image_header_t)); | 19 | TCGv_i64 tmp64; |
22 | - if (size < 0) | 20 | - TCGv_i32 addr, tmp; |
23 | + if (size < sizeof(uboot_image_header_t)) { | 21 | + TCGv_i32 addr; |
24 | goto out; | 22 | |
25 | + } | 23 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { |
26 | 24 | return false; | |
27 | bswap_uboot_header(hdr); | 25 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a) |
26 | |||
27 | tmp64 = tcg_temp_new_i64(); | ||
28 | addr = tcg_temp_new_i32(); | ||
29 | - tmp = tcg_const_i32(1 << size); | ||
30 | load_reg_var(s, addr, a->rn); | ||
31 | |||
32 | mop = endian | size | align; | ||
33 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a) | ||
34 | neon_load_element64(tmp64, tt, n, size); | ||
35 | gen_aa32_st_internal_i64(s, tmp64, addr, mmu_idx, mop); | ||
36 | } | ||
37 | - tcg_gen_add_i32(addr, addr, tmp); | ||
38 | + tcg_gen_addi_i32(addr, addr, 1 << size); | ||
39 | |||
40 | /* Subsequent memory operations inherit alignment */ | ||
41 | mop &= ~MO_AMASK; | ||
42 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a) | ||
43 | } | ||
44 | } | ||
45 | tcg_temp_free_i32(addr); | ||
46 | - tcg_temp_free_i32(tmp); | ||
47 | tcg_temp_free_i64(tmp64); | ||
48 | |||
49 | gen_neon_ldst_base_update(s, a->rm, a->rn, nregs * interleave * 8); | ||
50 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a, | ||
51 | * To avoid excessive duplication of ops we implement shift | ||
52 | * by immediate using the variable shift operations. | ||
53 | */ | ||
54 | - constimm = tcg_const_i64(dup_const(a->size, a->shift)); | ||
55 | + constimm = tcg_constant_i64(dup_const(a->size, a->shift)); | ||
56 | |||
57 | for (pass = 0; pass < a->q + 1; pass++) { | ||
58 | TCGv_i64 tmp = tcg_temp_new_i64(); | ||
59 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a, | ||
60 | write_neon_element64(tmp, a->vd, pass, MO_64); | ||
61 | tcg_temp_free_i64(tmp); | ||
62 | } | ||
63 | - tcg_temp_free_i64(constimm); | ||
64 | return true; | ||
65 | } | ||
66 | |||
67 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a, | ||
68 | * To avoid excessive duplication of ops we implement shift | ||
69 | * by immediate using the variable shift operations. | ||
70 | */ | ||
71 | - constimm = tcg_const_i32(dup_const(a->size, a->shift)); | ||
72 | + constimm = tcg_constant_i32(dup_const(a->size, a->shift)); | ||
73 | tmp = tcg_temp_new_i32(); | ||
74 | |||
75 | for (pass = 0; pass < (a->q ? 4 : 2); pass++) { | ||
76 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a, | ||
77 | write_neon_element32(tmp, a->vd, pass, MO_32); | ||
78 | } | ||
79 | tcg_temp_free_i32(tmp); | ||
80 | - tcg_temp_free_i32(constimm); | ||
81 | return true; | ||
82 | } | ||
83 | |||
84 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a, | ||
85 | * This is always a right shift, and the shiftfn is always a | ||
86 | * left-shift helper, which thus needs the negated shift count. | ||
87 | */ | ||
88 | - constimm = tcg_const_i64(-a->shift); | ||
89 | + constimm = tcg_constant_i64(-a->shift); | ||
90 | rm1 = tcg_temp_new_i64(); | ||
91 | rm2 = tcg_temp_new_i64(); | ||
92 | rd = tcg_temp_new_i32(); | ||
93 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a, | ||
94 | tcg_temp_free_i32(rd); | ||
95 | tcg_temp_free_i64(rm1); | ||
96 | tcg_temp_free_i64(rm2); | ||
97 | - tcg_temp_free_i64(constimm); | ||
98 | |||
99 | return true; | ||
100 | } | ||
101 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a, | ||
102 | /* size == 2 */ | ||
103 | imm = -a->shift; | ||
104 | } | ||
105 | - constimm = tcg_const_i32(imm); | ||
106 | + constimm = tcg_constant_i32(imm); | ||
107 | |||
108 | /* Load all inputs first to avoid potential overwrite */ | ||
109 | rm1 = tcg_temp_new_i32(); | ||
110 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a, | ||
111 | |||
112 | shiftfn(rm3, rm3, constimm); | ||
113 | shiftfn(rm4, rm4, constimm); | ||
114 | - tcg_temp_free_i32(constimm); | ||
115 | |||
116 | tcg_gen_concat_i32_i64(rtmp, rm3, rm4); | ||
117 | tcg_temp_free_i32(rm4); | ||
118 | @@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a) | ||
119 | return true; | ||
120 | } | ||
121 | |||
122 | - desc = tcg_const_i32((a->vn << 2) | a->len); | ||
123 | + desc = tcg_constant_i32((a->vn << 2) | a->len); | ||
124 | def = tcg_temp_new_i64(); | ||
125 | if (a->op) { | ||
126 | read_neon_element64(def, a->vd, 0, MO_64); | ||
127 | @@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a) | ||
128 | |||
129 | tcg_temp_free_i64(def); | ||
130 | tcg_temp_free_i64(val); | ||
131 | - tcg_temp_free_i32(desc); | ||
132 | return true; | ||
133 | } | ||
28 | 134 | ||
29 | -- | 135 | -- |
30 | 2.7.4 | 136 | 2.25.1 |
31 | |||
1 | From: Cédric Le Goater <clg@kaod.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Temperatures can be changed from the monitor with : | 3 | The operation we're performing with the movcond |
4 | is either min/max depending on cond -- simplify. | ||
5 | Use tcg_constant_i64 while we're at it. | ||
4 | 6 | ||
5 | (qemu) qom-set /machine/unattached/device[2] temperature0 12000 | 7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | --- | ||
11 | target/arm/translate-sve.c | 9 ++------- | ||
12 | 1 file changed, 2 insertions(+), 7 deletions(-) | ||
6 | 13 | ||
7 | Signed-off-by: Cédric Le Goater <clg@kaod.org> | 14 | diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c |
8 | Message-id: 1494827476-1487-7-git-send-email-clg@kaod.org | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
11 | --- | ||
12 | hw/arm/aspeed.c | 9 +++++++++ | ||
13 | 1 file changed, 9 insertions(+) | ||
14 | |||
15 | diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c | ||
16 | index XXXXXXX..XXXXXXX 100644 | 15 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/hw/arm/aspeed.c | 16 | --- a/target/arm/translate-sve.c |
18 | +++ b/hw/arm/aspeed.c | 17 | +++ b/target/arm/translate-sve.c |
19 | @@ -XXX,XX +XXX,XX @@ static void aspeed_board_init(MachineState *machine, | 18 | @@ -XXX,XX +XXX,XX @@ static bool trans_PNEXT(DisasContext *s, arg_rr_esz *a) |
20 | static void palmetto_bmc_i2c_init(AspeedBoardState *bmc) | 19 | static void do_sat_addsub_32(TCGv_i64 reg, TCGv_i64 val, bool u, bool d) |
21 | { | 20 | { |
22 | AspeedSoCState *soc = &bmc->soc; | 21 | int64_t ibound; |
23 | + DeviceState *dev; | 22 | - TCGv_i64 bound; |
24 | 23 | - TCGCond cond; | |
25 | /* The palmetto platform expects a ds3231 RTC but a ds1338 is | 24 | |
26 | * enough to provide basic RTC features. Alarms will be missing */ | 25 | /* Use normal 64-bit arithmetic to detect 32-bit overflow. */ |
27 | i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 0), "ds1338", 0x68); | 26 | if (u) { |
28 | + | 27 | @@ -XXX,XX +XXX,XX @@ static void do_sat_addsub_32(TCGv_i64 reg, TCGv_i64 val, bool u, bool d) |
29 | + /* add a TMP423 temperature sensor */ | 28 | if (d) { |
30 | + dev = i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 2), | 29 | tcg_gen_sub_i64(reg, reg, val); |
31 | + "tmp423", 0x4c); | 30 | ibound = (u ? 0 : INT32_MIN); |
32 | + object_property_set_int(OBJECT(dev), 31000, "temperature0", &error_abort); | 31 | - cond = TCG_COND_LT; |
33 | + object_property_set_int(OBJECT(dev), 28000, "temperature1", &error_abort); | 32 | + tcg_gen_smax_i64(reg, reg, tcg_constant_i64(ibound)); |
34 | + object_property_set_int(OBJECT(dev), 20000, "temperature2", &error_abort); | 33 | } else { |
35 | + object_property_set_int(OBJECT(dev), 110000, "temperature3", &error_abort); | 34 | tcg_gen_add_i64(reg, reg, val); |
35 | ibound = (u ? UINT32_MAX : INT32_MAX); | ||
36 | - cond = TCG_COND_GT; | ||
37 | + tcg_gen_smin_i64(reg, reg, tcg_constant_i64(ibound)); | ||
38 | } | ||
39 | - bound = tcg_const_i64(ibound); | ||
40 | - tcg_gen_movcond_i64(cond, reg, reg, bound, bound, reg); | ||
41 | - tcg_temp_free_i64(bound); | ||
36 | } | 42 | } |
37 | 43 | ||
38 | static void palmetto_bmc_init(MachineState *machine) | 44 | /* Similarly with 64-bit values. */ |
39 | -- | 45 | -- |
40 | 2.7.4 | 46 | 2.25.1 |
41 | |||
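For the do_sat_addsub_32() hunk above, a plain-C sketch of the saturation being rewritten (illustrative only, not TCG): the 32-bit add or subtract is done in 64-bit arithmetic and the result is clamped to the 32-bit range, which is simply a signed max against the lower bound or a signed min against the upper bound:

    #include <stdint.h>

    static int64_t sat_addsub_32(int64_t reg, int64_t val, int u, int d)
    {
        int64_t bound;

        if (d) {
            reg -= val;
            bound = u ? 0 : INT32_MIN;
            return reg > bound ? reg : bound;   /* smax(reg, bound) */
        } else {
            reg += val;
            bound = u ? (int64_t)UINT32_MAX : INT32_MAX;
            return reg < bound ? reg : bound;   /* smin(reg, bound) */
        }
    }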
1 | From: Kamil Rytarowski <n54@gmx.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Ensure that C99 macros are defined regardless of the inclusion order of | 3 | Use tcg_constant_{i32,i64} as appropriate throughout. |
4 | headers in vixl. This is required at least on NetBSD. | ||
5 | 4 | ||
6 | The vixl/globals.h headers defines __STDC_CONSTANT_MACROS and must be | 5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
7 | included before other system headers. | ||
8 | |||
9 | This file defines unconditionally the following macros, without altering | ||
10 | the original sources: | ||
11 | - __STDC_CONSTANT_MACROS | ||
12 | - __STDC_LIMIT_MACROS | ||
13 | - __STDC_FORMAT_MACROS | ||
14 | |||
15 | Signed-off-by: Kamil Rytarowski <n54@gmx.com> | ||
16 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
17 | Message-id: 20170514051820.15985-1-n54@gmx.com | ||
18 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | 6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
19 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 7 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
20 | --- | 8 | --- |
21 | disas/libvixl/Makefile.objs | 3 +++ | 9 | target/arm/translate-vfp.c | 76 ++++++++++++-------------------------- |
22 | 1 file changed, 3 insertions(+) | 10 | 1 file changed, 23 insertions(+), 53 deletions(-) |
23 | 11 | ||
24 | diff --git a/disas/libvixl/Makefile.objs b/disas/libvixl/Makefile.objs | 12 | diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c |
25 | index XXXXXXX..XXXXXXX 100644 | 13 | index XXXXXXX..XXXXXXX 100644 |
26 | --- a/disas/libvixl/Makefile.objs | 14 | --- a/target/arm/translate-vfp.c |
27 | +++ b/disas/libvixl/Makefile.objs | 15 | +++ b/target/arm/translate-vfp.c |
28 | @@ -XXX,XX +XXX,XX @@ libvixl_OBJS = vixl/utils.o \ | 16 | @@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s) |
29 | # The -Wno-sign-compare is needed only for gcc 4.6, which complains about | 17 | gen_helper_vfp_set_fpscr(cpu_env, fpscr); |
30 | # some signed-unsigned equality comparisons which later gcc versions do not. | 18 | tcg_temp_free_i32(fpscr); |
31 | $(addprefix $(obj)/,$(libvixl_OBJS)): QEMU_CFLAGS := -I$(SRC_PATH)/disas/libvixl $(QEMU_CFLAGS) -Wno-sign-compare | 19 | if (dc_isar_feature(aa32_mve, s)) { |
32 | +# Ensure that C99 macros are defined regardless of the inclusion order of | 20 | - TCGv_i32 z32 = tcg_const_i32(0); |
33 | +# headers in vixl. This is required at least on NetBSD. | 21 | - store_cpu_field(z32, v7m.vpr); |
34 | +$(addprefix $(obj)/,$(libvixl_OBJS)): QEMU_CFLAGS += -D__STDC_CONSTANT_MACROS -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS | 22 | + store_cpu_field(tcg_constant_i32(0), v7m.vpr); |
35 | 23 | } | |
36 | common-obj-$(CONFIG_ARM_A64_DIS) += $(libvixl_OBJS) | 24 | /* |
25 | * We just updated the FPSCR and VPR. Some of this state is cached | ||
26 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a) | ||
27 | TCGv_i64 frn, frm, dest; | ||
28 | TCGv_i64 tmp, zero, zf, nf, vf; | ||
29 | |||
30 | - zero = tcg_const_i64(0); | ||
31 | + zero = tcg_constant_i64(0); | ||
32 | |||
33 | frn = tcg_temp_new_i64(); | ||
34 | frm = tcg_temp_new_i64(); | ||
35 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a) | ||
36 | vfp_load_reg64(frm, rm); | ||
37 | switch (a->cc) { | ||
38 | case 0: /* eq: Z */ | ||
39 | - tcg_gen_movcond_i64(TCG_COND_EQ, dest, zf, zero, | ||
40 | - frn, frm); | ||
41 | + tcg_gen_movcond_i64(TCG_COND_EQ, dest, zf, zero, frn, frm); | ||
42 | break; | ||
43 | case 1: /* vs: V */ | ||
44 | - tcg_gen_movcond_i64(TCG_COND_LT, dest, vf, zero, | ||
45 | - frn, frm); | ||
46 | + tcg_gen_movcond_i64(TCG_COND_LT, dest, vf, zero, frn, frm); | ||
47 | break; | ||
48 | case 2: /* ge: N == V -> N ^ V == 0 */ | ||
49 | tmp = tcg_temp_new_i64(); | ||
50 | tcg_gen_xor_i64(tmp, vf, nf); | ||
51 | - tcg_gen_movcond_i64(TCG_COND_GE, dest, tmp, zero, | ||
52 | - frn, frm); | ||
53 | + tcg_gen_movcond_i64(TCG_COND_GE, dest, tmp, zero, frn, frm); | ||
54 | tcg_temp_free_i64(tmp); | ||
55 | break; | ||
56 | case 3: /* gt: !Z && N == V */ | ||
57 | - tcg_gen_movcond_i64(TCG_COND_NE, dest, zf, zero, | ||
58 | - frn, frm); | ||
59 | + tcg_gen_movcond_i64(TCG_COND_NE, dest, zf, zero, frn, frm); | ||
60 | tmp = tcg_temp_new_i64(); | ||
61 | tcg_gen_xor_i64(tmp, vf, nf); | ||
62 | - tcg_gen_movcond_i64(TCG_COND_GE, dest, tmp, zero, | ||
63 | - dest, frm); | ||
64 | + tcg_gen_movcond_i64(TCG_COND_GE, dest, tmp, zero, dest, frm); | ||
65 | tcg_temp_free_i64(tmp); | ||
66 | break; | ||
67 | } | ||
68 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a) | ||
69 | tcg_temp_free_i64(zf); | ||
70 | tcg_temp_free_i64(nf); | ||
71 | tcg_temp_free_i64(vf); | ||
72 | - | ||
73 | - tcg_temp_free_i64(zero); | ||
74 | } else { | ||
75 | TCGv_i32 frn, frm, dest; | ||
76 | TCGv_i32 tmp, zero; | ||
77 | |||
78 | - zero = tcg_const_i32(0); | ||
79 | + zero = tcg_constant_i32(0); | ||
80 | |||
81 | frn = tcg_temp_new_i32(); | ||
82 | frm = tcg_temp_new_i32(); | ||
83 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a) | ||
84 | vfp_load_reg32(frm, rm); | ||
85 | switch (a->cc) { | ||
86 | case 0: /* eq: Z */ | ||
87 | - tcg_gen_movcond_i32(TCG_COND_EQ, dest, cpu_ZF, zero, | ||
88 | - frn, frm); | ||
89 | + tcg_gen_movcond_i32(TCG_COND_EQ, dest, cpu_ZF, zero, frn, frm); | ||
90 | break; | ||
91 | case 1: /* vs: V */ | ||
92 | - tcg_gen_movcond_i32(TCG_COND_LT, dest, cpu_VF, zero, | ||
93 | - frn, frm); | ||
94 | + tcg_gen_movcond_i32(TCG_COND_LT, dest, cpu_VF, zero, frn, frm); | ||
95 | break; | ||
96 | case 2: /* ge: N == V -> N ^ V == 0 */ | ||
97 | tmp = tcg_temp_new_i32(); | ||
98 | tcg_gen_xor_i32(tmp, cpu_VF, cpu_NF); | ||
99 | - tcg_gen_movcond_i32(TCG_COND_GE, dest, tmp, zero, | ||
100 | - frn, frm); | ||
101 | + tcg_gen_movcond_i32(TCG_COND_GE, dest, tmp, zero, frn, frm); | ||
102 | tcg_temp_free_i32(tmp); | ||
103 | break; | ||
104 | case 3: /* gt: !Z && N == V */ | ||
105 | - tcg_gen_movcond_i32(TCG_COND_NE, dest, cpu_ZF, zero, | ||
106 | - frn, frm); | ||
107 | + tcg_gen_movcond_i32(TCG_COND_NE, dest, cpu_ZF, zero, frn, frm); | ||
108 | tmp = tcg_temp_new_i32(); | ||
109 | tcg_gen_xor_i32(tmp, cpu_VF, cpu_NF); | ||
110 | - tcg_gen_movcond_i32(TCG_COND_GE, dest, tmp, zero, | ||
111 | - dest, frm); | ||
112 | + tcg_gen_movcond_i32(TCG_COND_GE, dest, tmp, zero, dest, frm); | ||
113 | tcg_temp_free_i32(tmp); | ||
114 | break; | ||
115 | } | ||
116 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a) | ||
117 | tcg_temp_free_i32(frn); | ||
118 | tcg_temp_free_i32(frm); | ||
119 | tcg_temp_free_i32(dest); | ||
120 | - | ||
121 | - tcg_temp_free_i32(zero); | ||
122 | } | ||
123 | |||
124 | return true; | ||
125 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a) | ||
126 | fpst = fpstatus_ptr(FPST_FPCR); | ||
127 | } | ||
128 | |||
129 | - tcg_shift = tcg_const_i32(0); | ||
130 | + tcg_shift = tcg_constant_i32(0); | ||
131 | |||
132 | tcg_rmode = tcg_const_i32(arm_rmode_to_sf(rounding)); | ||
133 | gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst); | ||
134 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a) | ||
135 | gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst); | ||
136 | tcg_temp_free_i32(tcg_rmode); | ||
137 | |||
138 | - tcg_temp_free_i32(tcg_shift); | ||
139 | - | ||
140 | tcg_temp_free_ptr(fpst); | ||
141 | |||
142 | return true; | ||
143 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a) | ||
144 | case ARM_VFP_MVFR2: | ||
145 | case ARM_VFP_FPSID: | ||
146 | if (s->current_el == 1) { | ||
147 | - TCGv_i32 tcg_reg, tcg_rt; | ||
148 | - | ||
149 | gen_set_condexec(s); | ||
150 | gen_set_pc_im(s, s->pc_curr); | ||
151 | - tcg_reg = tcg_const_i32(a->reg); | ||
152 | - tcg_rt = tcg_const_i32(a->rt); | ||
153 | - gen_helper_check_hcr_el2_trap(cpu_env, tcg_rt, tcg_reg); | ||
154 | - tcg_temp_free_i32(tcg_reg); | ||
155 | - tcg_temp_free_i32(tcg_rt); | ||
156 | + gen_helper_check_hcr_el2_trap(cpu_env, | ||
157 | + tcg_constant_i32(a->rt), | ||
158 | + tcg_constant_i32(a->reg)); | ||
159 | } | ||
160 | /* fall through */ | ||
161 | case ARM_VFP_FPEXC: | ||
162 | @@ -XXX,XX +XXX,XX @@ MAKE_VFM_TRANS_FNS(dp) | ||
163 | |||
164 | static bool trans_VMOV_imm_hp(DisasContext *s, arg_VMOV_imm_sp *a) | ||
165 | { | ||
166 | - TCGv_i32 fd; | ||
167 | - | ||
168 | if (!dc_isar_feature(aa32_fp16_arith, s)) { | ||
169 | return false; | ||
170 | } | ||
171 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_hp(DisasContext *s, arg_VMOV_imm_sp *a) | ||
172 | return true; | ||
173 | } | ||
174 | |||
175 | - fd = tcg_const_i32(vfp_expand_imm(MO_16, a->imm)); | ||
176 | - vfp_store_reg32(fd, a->vd); | ||
177 | - tcg_temp_free_i32(fd); | ||
178 | + vfp_store_reg32(tcg_constant_i32(vfp_expand_imm(MO_16, a->imm)), a->vd); | ||
179 | return true; | ||
180 | } | ||
181 | |||
182 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a) | ||
183 | } | ||
184 | } | ||
185 | |||
186 | - fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm)); | ||
187 | + fd = tcg_constant_i32(vfp_expand_imm(MO_32, a->imm)); | ||
188 | |||
189 | for (;;) { | ||
190 | vfp_store_reg32(fd, vd); | ||
191 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a) | ||
192 | vd = vfp_advance_sreg(vd, delta_d); | ||
193 | } | ||
194 | |||
195 | - tcg_temp_free_i32(fd); | ||
196 | return true; | ||
197 | } | ||
198 | |||
199 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a) | ||
200 | } | ||
201 | } | ||
202 | |||
203 | - fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm)); | ||
204 | + fd = tcg_constant_i64(vfp_expand_imm(MO_64, a->imm)); | ||
205 | |||
206 | for (;;) { | ||
207 | vfp_store_reg64(fd, vd); | ||
208 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a) | ||
209 | vd = vfp_advance_dreg(vd, delta_d); | ||
210 | } | ||
211 | |||
212 | - tcg_temp_free_i64(fd); | ||
213 | return true; | ||
214 | } | ||
215 | |||
216 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a) | ||
217 | vfp_load_reg32(vd, a->vd); | ||
218 | |||
219 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
220 | - shift = tcg_const_i32(frac_bits); | ||
221 | + shift = tcg_constant_i32(frac_bits); | ||
222 | |||
223 | /* Switch on op:U:sx bits */ | ||
224 | switch (a->opc) { | ||
225 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a) | ||
226 | |||
227 | vfp_store_reg32(vd, a->vd); | ||
228 | tcg_temp_free_i32(vd); | ||
229 | - tcg_temp_free_i32(shift); | ||
230 | tcg_temp_free_ptr(fpst); | ||
231 | return true; | ||
232 | } | ||
233 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a) | ||
234 | vfp_load_reg32(vd, a->vd); | ||
235 | |||
236 | fpst = fpstatus_ptr(FPST_FPCR); | ||
237 | - shift = tcg_const_i32(frac_bits); | ||
238 | + shift = tcg_constant_i32(frac_bits); | ||
239 | |||
240 | /* Switch on op:U:sx bits */ | ||
241 | switch (a->opc) { | ||
242 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a) | ||
243 | |||
244 | vfp_store_reg32(vd, a->vd); | ||
245 | tcg_temp_free_i32(vd); | ||
246 | - tcg_temp_free_i32(shift); | ||
247 | tcg_temp_free_ptr(fpst); | ||
248 | return true; | ||
249 | } | ||
250 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a) | ||
251 | vfp_load_reg64(vd, a->vd); | ||
252 | |||
253 | fpst = fpstatus_ptr(FPST_FPCR); | ||
254 | - shift = tcg_const_i32(frac_bits); | ||
255 | + shift = tcg_constant_i32(frac_bits); | ||
256 | |||
257 | /* Switch on op:U:sx bits */ | ||
258 | switch (a->opc) { | ||
259 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a) | ||
260 | |||
261 | vfp_store_reg64(vd, a->vd); | ||
262 | tcg_temp_free_i64(vd); | ||
263 | - tcg_temp_free_i32(shift); | ||
264 | tcg_temp_free_ptr(fpst); | ||
265 | return true; | ||
266 | } | ||
37 | -- | 267 | -- |
38 | 2.7.4 | 268 | 2.25.1 |
39 | |||
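The right-hand series above converts tcg_const_* users to tcg_constant_* throughout translate-vfp.c. As a rough illustration of why that conversion also lets the patch drop so many tcg_temp_free_*() calls (the "zero" and "shift" frees above, for instance), here is a minimal sketch. It assumes the usual QEMU translator environment ("tcg/tcg-op.h") rather than being a standalone program, and the helper names are invented for the example: tcg_const_i32() hands back a writable temporary that the caller owns and must free, while tcg_constant_i32() hands back a cached, read-only value that must be neither written to nor freed.

```c
#include "tcg/tcg-op.h"   /* assumed translator context, not a standalone program */

/* Old pattern: tcg_const_i32() allocates a fresh temporary holding 'imm',
 * and the caller has to remember to free it once it has been consumed. */
static void gen_add_imm_old(TCGv_i32 dst, TCGv_i32 src, int32_t imm)
{
    TCGv_i32 t = tcg_const_i32(imm);
    tcg_gen_add_i32(dst, src, t);
    tcg_temp_free_i32(t);            /* forgetting this leaks a temporary */
}

/* New pattern: tcg_constant_i32() returns a cached constant that is
 * read-only and must not be freed, so the bookkeeping disappears. */
static void gen_add_imm_new(TCGv_i32 dst, TCGv_i32 src, int32_t imm)
{
    tcg_gen_add_i32(dst, src, tcg_constant_i32(imm));
}
```

The computation being generated is untouched; only the ownership rule for the constant changes, which is why the hunks above are one-for-one substitutions plus straight deletions of free calls.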
1 | From: Michael Davidsaver <mdavidsaver@gmail.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Add support for the M profile default memory map which is used | 3 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
4 | if the MPU is not present or disabled. | 4 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
5 | |||
6 | The main difference in behaviour from implementing this | ||
7 | correctly is that we set the PAGE_EXEC attribute on | ||
8 | the right regions of memory, so that device regions | ||
9 | are not executable. | ||
10 | |||
11 | Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> | ||
12 | Message-id: 1493122030-32191-10-git-send-email-peter.maydell@linaro.org | ||
13 | [PMM: rephrased comment and commit message; don't mark | ||
14 | the flash memory region as not-writable; list all | ||
15 | the cases in the default map explicitly rather than | ||
16 | using a 'default' case for the non-executable regions] | ||
17 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
18 | --- | 6 | --- |
19 | target/arm/helper.c | 41 ++++++++++++++++++++++++++++++++--------- | 7 | target/arm/translate.h | 13 +++---------- |
20 | 1 file changed, 32 insertions(+), 9 deletions(-) | 8 | 1 file changed, 3 insertions(+), 10 deletions(-) |
21 | 9 | ||
22 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 10 | diff --git a/target/arm/translate.h b/target/arm/translate.h |
23 | index XXXXXXX..XXXXXXX 100644 | 11 | index XXXXXXX..XXXXXXX 100644 |
24 | --- a/target/arm/helper.c | 12 | --- a/target/arm/translate.h |
25 | +++ b/target/arm/helper.c | 13 | +++ b/target/arm/translate.h |
26 | @@ -XXX,XX +XXX,XX @@ static inline void get_phys_addr_pmsav7_default(CPUARMState *env, | 14 | @@ -XXX,XX +XXX,XX @@ static inline void gen_ss_advance(DisasContext *s) |
27 | ARMMMUIdx mmu_idx, | 15 | static inline void gen_exception(int excp, uint32_t syndrome, |
28 | int32_t address, int *prot) | 16 | uint32_t target_el) |
29 | { | 17 | { |
30 | - *prot = PAGE_READ | PAGE_WRITE; | 18 | - TCGv_i32 tcg_excp = tcg_const_i32(excp); |
31 | - switch (address) { | 19 | - TCGv_i32 tcg_syn = tcg_const_i32(syndrome); |
32 | - case 0xF0000000 ... 0xFFFFFFFF: | 20 | - TCGv_i32 tcg_el = tcg_const_i32(target_el); |
33 | - if (regime_sctlr(env, mmu_idx) & SCTLR_V) { /* hivecs execing is ok */ | ||
34 | + if (!arm_feature(env, ARM_FEATURE_M)) { | ||
35 | + *prot = PAGE_READ | PAGE_WRITE; | ||
36 | + switch (address) { | ||
37 | + case 0xF0000000 ... 0xFFFFFFFF: | ||
38 | + if (regime_sctlr(env, mmu_idx) & SCTLR_V) { | ||
39 | + /* hivecs execing is ok */ | ||
40 | + *prot |= PAGE_EXEC; | ||
41 | + } | ||
42 | + break; | ||
43 | + case 0x00000000 ... 0x7FFFFFFF: | ||
44 | *prot |= PAGE_EXEC; | ||
45 | + break; | ||
46 | + } | ||
47 | + } else { | ||
48 | + /* Default system address map for M profile cores. | ||
49 | + * The architecture specifies which regions are execute-never; | ||
50 | + * at the MPU level no other checks are defined. | ||
51 | + */ | ||
52 | + switch (address) { | ||
53 | + case 0x00000000 ... 0x1fffffff: /* ROM */ | ||
54 | + case 0x20000000 ... 0x3fffffff: /* SRAM */ | ||
55 | + case 0x60000000 ... 0x7fffffff: /* RAM */ | ||
56 | + case 0x80000000 ... 0x9fffffff: /* RAM */ | ||
57 | + *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
58 | + break; | ||
59 | + case 0x40000000 ... 0x5fffffff: /* Peripheral */ | ||
60 | + case 0xa0000000 ... 0xbfffffff: /* Device */ | ||
61 | + case 0xc0000000 ... 0xdfffffff: /* Device */ | ||
62 | + case 0xe0000000 ... 0xffffffff: /* System */ | ||
63 | + *prot = PAGE_READ | PAGE_WRITE; | ||
64 | + break; | ||
65 | + default: | ||
66 | + g_assert_not_reached(); | ||
67 | } | ||
68 | - break; | ||
69 | - case 0x00000000 ... 0x7FFFFFFF: | ||
70 | - *prot |= PAGE_EXEC; | ||
71 | - break; | ||
72 | } | ||
73 | - | 21 | - |
22 | - gen_helper_exception_with_syndrome(cpu_env, tcg_excp, | ||
23 | - tcg_syn, tcg_el); | ||
24 | - | ||
25 | - tcg_temp_free_i32(tcg_el); | ||
26 | - tcg_temp_free_i32(tcg_syn); | ||
27 | - tcg_temp_free_i32(tcg_excp); | ||
28 | + gen_helper_exception_with_syndrome(cpu_env, tcg_constant_i32(excp), | ||
29 | + tcg_constant_i32(syndrome), | ||
30 | + tcg_constant_i32(target_el)); | ||
74 | } | 31 | } |
75 | 32 | ||
76 | static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | 33 | /* Generate an architectural singlestep exception */ |
77 | -- | 34 | -- |
78 | 2.7.4 | 35 | 2.25.1 |
79 | |||
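The left-hand patch above spells out the M-profile default memory map case by case. Purely as a reader's aid, the execute-never rule it encodes can be condensed into one predicate; this is an illustrative sketch derived from the address ranges in the hunk (the function name is invented, and the real code in get_phys_addr_pmsav7_default() also fills in the read/write permissions):

```c
#include <stdbool.h>
#include <stdint.h>

/* Condensed view of the M-profile default map added above: ROM, SRAM and
 * the two RAM windows are executable; Peripheral, Device and System space
 * is execute-never. Case ranges are the GCC extension QEMU already uses. */
static bool m_default_map_is_executable(uint32_t address)
{
    switch (address) {
    case 0x00000000 ... 0x1fffffff:   /* ROM */
    case 0x20000000 ... 0x3fffffff:   /* SRAM */
    case 0x60000000 ... 0x7fffffff:   /* RAM */
    case 0x80000000 ... 0x9fffffff:   /* RAM */
        return true;
    default:                          /* Peripheral, Device, System */
        return false;
    }
}
```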
1 | When identifying the DFSR format for an alignment fault, use | 1 | From: Xiang Chen <chenxiang66@hisilicon.com> |
---|---|---|---|
2 | the mmu index that we are passed, rather than calling cpu_mmu_index() | ||
3 | to get the mmu index for the current CPU state. This doesn't actually | ||
4 | make any difference since the only cases where the current MMU index | ||
5 | differs from the index used for the load are the "unprivileged | ||
6 | load/store" instructions, and in that case the mmu index may | ||
7 | differ but the translation regime is the same (apart from the | ||
8 | "use from Hyp mode" case which is UNPREDICTABLE). | ||
9 | However it's the more logical thing to do. | ||
10 | 2 | ||
3 | It always calls the IOMMU MR translate() callback with flag=IOMMU_NONE in | ||
4 | memory_region_iommu_replay(). Currently, smmuv3_translate() return an | ||
5 | IOMMUTLBEntry with perm set to IOMMU_NONE even if the translation success, | ||
6 | whereas it is expected to return the actual permission set in the table | ||
7 | entry. | ||
8 | So pass the actual perm to returned IOMMUTLBEntry in the table entry. | ||
9 | |||
10 | Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com> | ||
11 | Reviewed-by: Eric Auger <eric.auger@redhat.com> | ||
12 | Message-id: 1650094695-121918-1-git-send-email-chenxiang66@hisilicon.com | ||
11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
12 | Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> | ||
13 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
14 | Message-id: 1493122030-32191-2-git-send-email-peter.maydell@linaro.org | ||
15 | --- | 14 | --- |
16 | target/arm/op_helper.c | 2 +- | 15 | hw/arm/smmuv3.c | 2 +- |
17 | 1 file changed, 1 insertion(+), 1 deletion(-) | 16 | 1 file changed, 1 insertion(+), 1 deletion(-) |
18 | 17 | ||
19 | diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c | 18 | diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c |
20 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
21 | --- a/target/arm/op_helper.c | 20 | --- a/hw/arm/smmuv3.c |
22 | +++ b/target/arm/op_helper.c | 21 | +++ b/hw/arm/smmuv3.c |
23 | @@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr, | 22 | @@ -XXX,XX +XXX,XX @@ epilogue: |
24 | /* the DFSR for an alignment fault depends on whether we're using | 23 | qemu_mutex_unlock(&s->mutex); |
25 | * the LPAE long descriptor format, or the short descriptor format | 24 | switch (status) { |
26 | */ | 25 | case SMMU_TRANS_SUCCESS: |
27 | - if (arm_s1_regime_using_lpae_format(env, cpu_mmu_index(env, false))) { | 26 | - entry.perm = flag; |
28 | + if (arm_s1_regime_using_lpae_format(env, mmu_idx)) { | 27 | + entry.perm = cached_entry->entry.perm; |
29 | env->exception.fsr = (1 << 9) | 0x21; | 28 | entry.translated_addr = cached_entry->entry.translated_addr + |
30 | } else { | 29 | (addr & cached_entry->entry.addr_mask); |
31 | env->exception.fsr = 0x1; | 30 | entry.addr_mask = cached_entry->entry.addr_mask; |
32 | -- | 31 | -- |
33 | 2.7.4 | 32 | 2.25.1 |
34 | |||
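To make the smmuv3 fix concrete: memory_region_iommu_replay() probes the translate() callback with flag=IOMMU_NONE, so if the callback simply echoes the requested flag back in .perm, every replayed mapping appears to grant no access. The toy model below is not QEMU code; the enum and struct are cut-down stand-ins for QEMU's IOMMUAccessFlags and IOMMUTLBEntry, and the function is invented purely to contrast the buggy and fixed behaviour.

```c
#include <stdint.h>

/* Cut-down stand-ins for QEMU's IOMMUAccessFlags / IOMMUTLBEntry. */
typedef enum {
    IOMMU_NONE = 0,
    IOMMU_RO   = 1,
    IOMMU_WO   = 2,
    IOMMU_RW   = 3,
} IOMMUAccessFlags;

typedef struct {
    uint64_t iova;
    uint64_t translated_addr;
    uint64_t addr_mask;
    IOMMUAccessFlags perm;
} IOMMUTLBEntry;

/* 'cached_perm' stands for the permission recorded in the cached table
 * entry that the real smmuv3_translate() looks up. */
static IOMMUTLBEntry toy_translate(uint64_t addr, IOMMUAccessFlags flag,
                                   IOMMUAccessFlags cached_perm)
{
    IOMMUTLBEntry entry = {
        .iova            = addr & ~0xfffULL,
        .translated_addr = addr & ~0xfffULL,   /* identity map for the toy */
        .addr_mask       = 0xfff,
    };

    (void)flag;  /* the real code checks the request against the entry;
                  * the toy only illustrates the returned permission */

    /* Buggy version: entry.perm = flag;  -- during replay flag is
     * IOMMU_NONE, so the mapping would be installed with no access. */
    entry.perm = cached_perm;   /* fixed: report what the table entry grants */

    return entry;
}
```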