target-arm queue. The big thing here is the landing of the 3-phase
reset patches...

-- PMM

The following changes since commit 204aa60b37c23a89e690d418f49787d274303ca7:

  Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-jan-29-2020' into staging (2020-01-30 14:18:45 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200130

for you to fetch changes up to dea101a1ae9968c9fec6ab0291489dad7c49f36f:

  target/arm/cpu: Add the kvm-no-adjvtime CPU property (2020-01-30 16:02:06 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/core/or-irq: Fix incorrect assert forbidding num-lines == MAX_OR_LINES
 * target/arm/arm-semi: Don't let the guest close stdin/stdout/stderr
 * aspeed: some minor bugfixes
 * aspeed: add eMMC controller model for AST2600 SoC
 * hw/arm/raspi: Remove obsolete use of -smp to set the soc 'enabled-cpus'
 * New 3-phase reset API for device models
 * hw/intc/arm_gicv3_kvm: Stop wrongly programming GICR_PENDBASER.PTZ bit
 * Arm KVM: stop/restart the guest counter when the VM is stopped and started

----------------------------------------------------------------
Andrew Jeffery (2):
      hw/sd: Configure number of slots exposed by the ASPEED SDHCI model
      hw/arm: ast2600: Wire up the eMMC controller

Andrew Jones (6):
      target/arm/kvm: trivial: Clean up header documentation
      hw/arm/virt: Add missing 5.0 options call to 4.2 options
      target/arm/kvm64: kvm64 cpus have timer registers
      tests/arm-cpu-features: Check feature default values
      target/arm/kvm: Implement virtual time adjustment
      target/arm/cpu: Add the kvm-no-adjvtime CPU property

Cédric Le Goater (2):
      ftgmac100: check RX and TX buffer alignment
      hw/arm/aspeed: add a 'execute-in-place' property to boot directly from CE0

Damien Hedde (11):
      add device_legacy_reset function to prepare for reset api change
      hw/core/qdev: add trace events to help with resettable transition
      hw/core: create Resettable QOM interface
      hw/core: add Resettable support to BusClass and DeviceClass
      hw/core/resettable: add support for changing parent
      hw/core/qdev: handle parent bus change regarding resettable
      hw/core/qdev: update hotplug reset regarding resettable
      hw/core: deprecate old reset functions and introduce new ones
      docs/devel/reset.rst: add doc about Resettable interface
      vl: replace deprecated qbus_reset_all registration
      hw/s390x/ipl: replace deprecated qdev_reset_all registration

Joel Stanley (1):
      misc/pca9552: Add qom set and get

Peter Maydell (2):
      hw/core/or-irq: Fix incorrect assert forbidding num-lines == MAX_OR_LINES
      target/arm/arm-semi: Don't let the guest close stdin/stdout/stderr

Philippe Mathieu-Daudé (1):
      hw/arm/raspi: Remove obsolete use of -smp to set the soc 'enabled-cpus'

Zenghui Yu (1):
      hw/intc/arm_gicv3_kvm: Stop wrongly programming GICR_PENDBASER.PTZ bit

 hw/core/Makefile.objs | 1 +
 tests/Makefile.include | 1 +
 include/hw/arm/aspeed.h | 2 +
 include/hw/arm/aspeed_soc.h | 2 +
 include/hw/arm/virt.h | 1 +
 include/hw/qdev-core.h | 58 +++++++-
 include/hw/resettable.h | 247 +++++++++++++++++++++++++++++++++
 include/hw/sd/aspeed_sdhci.h | 1 +
 target/arm/cpu.h | 7 +
 target/arm/kvm_arm.h | 95 ++++++++++---
 hw/arm/aspeed.c | 72 ++++++++--
 hw/arm/aspeed_ast2600.c | 31 ++++-
 hw/arm/aspeed_soc.c | 2 +
 hw/arm/raspi.c | 2 -
 hw/arm/virt.c | 9 ++
 hw/audio/intel-hda.c | 2 +-
 hw/core/bus.c | 102 ++++++++++++++
 hw/core/or-irq.c | 2 +-
 hw/core/qdev.c | 160 ++++++++++++++++++++--
 hw/core/resettable.c | 301 +++++++++++++++++++++++++++++++++++++++++
 hw/hyperv/hyperv.c | 2 +-
 hw/i386/microvm.c | 2 +-
 hw/i386/pc.c | 2 +-
 hw/ide/microdrive.c | 8 +-
 hw/intc/arm_gicv3_kvm.c | 11 +-
 hw/intc/spapr_xive.c | 2 +-
 hw/misc/pca9552.c | 90 ++++++++++++
 hw/net/ftgmac100.c | 13 ++
 hw/ppc/pnv_psi.c | 4 +-
 hw/ppc/spapr_pci.c | 2 +-
 hw/ppc/spapr_vio.c | 2 +-
 hw/s390x/ipl.c | 10 +-
 hw/s390x/s390-pci-inst.c | 2 +-
 hw/scsi/vmw_pvscsi.c | 2 +-
 hw/sd/aspeed_sdhci.c | 11 +-
 hw/sd/omap_mmc.c | 2 +-
 hw/sd/pl181.c | 2 +-
 target/arm/arm-semi.c | 9 ++
 target/arm/cpu.c | 2 +
 target/arm/cpu64.c | 1 +
 target/arm/kvm.c | 120 ++++++++++++++++
 target/arm/kvm32.c | 3 +
 target/arm/kvm64.c | 4 +
 target/arm/machine.c | 7 +
 target/arm/monitor.c | 1 +
 tests/qtest/arm-cpu-features.c | 41 ++++--
 vl.c | 10 +-
 docs/arm-cpu-features.rst | 37 ++++-
 docs/devel/index.rst | 1 +
 docs/devel/reset.rst | 289 +++++++++++++++++++++++++++++++++
 hw/core/trace-events | 27 ++++
 51 files changed, 1727 insertions(+), 90 deletions(-)
 create mode 100644 include/hw/resettable.h
 create mode 100644 hw/core/resettable.c
 create mode 100644 docs/devel/reset.rst


Small pile of bug fixes for rc1. I've included my patches to get
our docs building with Sphinx 3, just for convenience...

-- PMM

The following changes since commit b149dea55cce97cb226683d06af61984a1c11e96:

  Merge remote-tracking branch 'remotes/cschoenebeck/tags/pull-9p-20201102' into staging (2020-11-02 10:57:48 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201102

for you to fetch changes up to ffb4fbf90a2f63c9cb33e4bb9f854c79bf04ca4a:

  tests/qtest/npcm7xx_rng-test: Disable randomness tests (2020-11-02 16:52:18 +0000)

----------------------------------------------------------------
target-arm queue:
 * target/arm: Fix Neon emulation bugs on big-endian hosts
 * target/arm: fix handling of HCR.FB
 * target/arm: fix LORID_EL1 access check
 * disas/capstone: Fix monitor disassembly of >32 bytes
 * hw/arm/smmuv3: Fix potential integer overflow (CID 1432363)
 * hw/arm/boot: fix SVE for EL3 direct kernel boot
 * hw/display/omap_lcdc: Fix potential NULL pointer dereference
 * hw/display/exynos4210_fimd: Fix potential NULL pointer dereference
 * target/arm: Get correct MMU index for other-security-state
 * configure: Test that gio libs from pkg-config work
 * hw/intc/arm_gicv3_cpuif: Make GIC maintenance interrupts work
 * docs: Fix building with Sphinx 3
 * tests/qtest/npcm7xx_rng-test: Disable randomness tests

----------------------------------------------------------------
AlexChen (2):
      hw/display/omap_lcdc: Fix potential NULL pointer dereference
      hw/display/exynos4210_fimd: Fix potential NULL pointer dereference

Peter Maydell (9):
      target/arm: Fix float16 pairwise Neon ops on big-endian hosts
      target/arm: Fix VUDOT/VSDOT (scalar) on big-endian hosts
      disas/capstone: Fix monitor disassembly of >32 bytes
      target/arm: Get correct MMU index for other-security-state
      configure: Test that gio libs from pkg-config work
      hw/intc/arm_gicv3_cpuif: Make GIC maintenance interrupts work
      scripts/kerneldoc: For Sphinx 3 use c:macro for macros with arguments
      qemu-option-trace.rst.inc: Don't use option:: markup
      tests/qtest/npcm7xx_rng-test: Disable randomness tests

Philippe Mathieu-Daudé (1):
      hw/arm/smmuv3: Fix potential integer overflow (CID 1432363)

Richard Henderson (11):
      target/arm: Introduce neon_full_reg_offset
      target/arm: Move neon_element_offset to translate.c
      target/arm: Use neon_element_offset in neon_load/store_reg
      target/arm: Use neon_element_offset in vfp_reg_offset
      target/arm: Add read/write_neon_element32
      target/arm: Expand read/write_neon_element32 to all MemOp
      target/arm: Rename neon_load_reg32 to vfp_load_reg32
      target/arm: Add read/write_neon_element64
      target/arm: Rename neon_load_reg64 to vfp_load_reg64
      target/arm: Simplify do_long_3d and do_2scalar_long
      target/arm: Improve do_prewiden_3d

Rémi Denis-Courmont (3):
      target/arm: fix handling of HCR.FB
      target/arm: fix LORID_EL1 access check
      hw/arm/boot: fix SVE for EL3 direct kernel boot

 docs/qemu-option-trace.rst.inc | 6 +-
 configure | 10 +-
 include/hw/intc/arm_gicv3_common.h | 1 -
 disas/capstone.c | 2 +-
 hw/arm/boot.c | 3 +
 hw/arm/smmuv3.c | 3 +-
 hw/display/exynos4210_fimd.c | 4 +-
 hw/display/omap_lcdc.c | 10 +-
 hw/intc/arm_gicv3_cpuif.c | 5 +-
 target/arm/helper.c | 24 +-
 target/arm/m_helper.c | 3 +-
 target/arm/translate.c | 153 +++++++++---
 target/arm/vec_helper.c | 12 +-
 tests/qtest/npcm7xx_rng-test.c | 14 +-
 scripts/kernel-doc | 18 +-
 target/arm/translate-neon.c.inc | 472 ++++++++++++++++++++-----------------
 target/arm/translate-vfp.c.inc | 341 +++++++++++----------------
 17 files changed, 588 insertions(+), 493 deletions(-)

From: Joel Stanley <joel@jms.id.au>

Following the pattern of the work recently done with the ASPEED GPIO
model, this adds support for inspecting and modifying the PCA9552 LEDs
from the monitor.

(qemu) qom-set /machine/unattached/device[17] led0 on
(qemu) qom-set /machine/unattached/device[17] led0 off
(qemu) qom-set /machine/unattached/device[17] led0 pwm0
(qemu) qom-set /machine/unattached/device[17] led0 pwm1

Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20200114103433.30534-6-clg@kaod.org
[clg: - removed the "qom-get" examples from the commit log
      - merged memory leak fixes from Joel ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/pca9552.c | 90 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

This function makes it clear that we're talking about the whole
register, and not the 32-bit piece at index 0. This fixes a bug
when running on a big-endian host.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 8 ++++++
 target/arm/translate-neon.c.inc | 44 ++++++++++++++++-----------------
 target/arm/translate-vfp.c.inc | 2 +-
 3 files changed, 31 insertions(+), 23 deletions(-)

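To see why the distinction matters on a big-endian host, here is a minimal
standalone sketch (illustrative only, not code from this patch): within each
64-bit Dreg the 32-bit pieces are stored in the opposite order, so "offset of
element 0" is no longer the same thing as "offset of the whole register". The
XOR adjustment mirrors what neon_element_offset does later in this series.

    /* Illustrative sketch only, not QEMU code. */
    #include <stdio.h>

    static long full_reg_offset(unsigned reg)
    {
        return reg * 8;                 /* whole Dreg: endian-independent */
    }

    static long element_offset(unsigned reg, int element, int element_size)
    {
        int ofs = element * element_size;
    #ifdef HOST_WORDS_BIGENDIAN         /* QEMU's host-endianness macro */
        if (element_size < 8) {
            ofs ^= 8 - element_size;    /* swap within the 8-byte unit */
        }
    #endif
        return full_reg_offset(reg) + ofs;
    }

    int main(void)
    {
        /* Element 0 of D1 sits at offset 8 on a little-endian host but at
         * offset 12 on a big-endian one; the whole register is always at 8. */
        printf("%ld\n", element_offset(1, 0, 4));
        return 0;
    }
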
diff --git a/hw/misc/pca9552.c b/hw/misc/pca9552.c
17
diff --git a/target/arm/translate.c b/target/arm/translate.c
25
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/misc/pca9552.c
19
--- a/target/arm/translate.c
27
+++ b/hw/misc/pca9552.c
20
+++ b/target/arm/translate.c
28
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
29
#include "hw/misc/pca9552.h"
22
unallocated_encoding(s);
30
#include "hw/misc/pca9552_regs.h"
31
#include "migration/vmstate.h"
32
+#include "qapi/error.h"
33
+#include "qapi/visitor.h"
34
35
#define PCA9552_LED_ON 0x0
36
#define PCA9552_LED_OFF 0x1
37
#define PCA9552_LED_PWM0 0x2
38
#define PCA9552_LED_PWM1 0x3
39
40
+static const char *led_state[] = {"on", "off", "pwm0", "pwm1"};
41
+
42
static uint8_t pca9552_pin_get_config(PCA9552State *s, int pin)
43
{
44
uint8_t reg = PCA9552_LS0 + (pin / 4);
45
@@ -XXX,XX +XXX,XX @@ static int pca9552_event(I2CSlave *i2c, enum i2c_event event)
46
return 0;
47
}
23
}
48
24
49
+static void pca9552_get_led(Object *obj, Visitor *v, const char *name,
25
+/*
50
+ void *opaque, Error **errp)
26
+ * Return the offset of a "full" NEON Dreg.
27
+ */
28
+static long neon_full_reg_offset(unsigned reg)
51
+{
29
+{
52
+ PCA9552State *s = PCA9552(obj);
30
+ return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
53
+ int led, rc, reg;
54
+ uint8_t state;
55
+
56
+ rc = sscanf(name, "led%2d", &led);
57
+ if (rc != 1) {
58
+ error_setg(errp, "%s: error reading %s", __func__, name);
59
+ return;
60
+ }
61
+ if (led < 0 || led > s->nr_leds) {
62
+ error_setg(errp, "%s invalid led %s", __func__, name);
63
+ return;
64
+ }
65
+ /*
66
+ * Get the LSx register as the qom interface should expose the device
67
+ * state, not the modeled 'input line' behaviour which would come from
68
+ * reading the INPUTx reg
69
+ */
70
+ reg = PCA9552_LS0 + led / 4;
71
+ state = (pca9552_read(s, reg) >> (led % 8)) & 0x3;
72
+ visit_type_str(v, name, (char **)&led_state[state], errp);
73
+}
31
+}
74
+
32
+
75
+/*
33
static inline long vfp_reg_offset(bool dp, unsigned reg)
76
+ * Return an LED selector register value based on an existing one, with
77
+ * the appropriate 2-bit state value set for the given LED number (0-3).
78
+ */
79
+static inline uint8_t pca955x_ledsel(uint8_t oldval, int led_num, int state)
80
+{
81
+ return (oldval & (~(0x3 << (led_num << 1)))) |
82
+ ((state & 0x3) << (led_num << 1));
83
+}
84
+
85
+static void pca9552_set_led(Object *obj, Visitor *v, const char *name,
86
+ void *opaque, Error **errp)
87
+{
88
+ PCA9552State *s = PCA9552(obj);
89
+ Error *local_err = NULL;
90
+ int led, rc, reg, val;
91
+ uint8_t state;
92
+ char *state_str;
93
+
94
+ visit_type_str(v, name, &state_str, &local_err);
95
+ if (local_err) {
96
+ error_propagate(errp, local_err);
97
+ return;
98
+ }
99
+ rc = sscanf(name, "led%2d", &led);
100
+ if (rc != 1) {
101
+ error_setg(errp, "%s: error reading %s", __func__, name);
102
+ return;
103
+ }
104
+ if (led < 0 || led > s->nr_leds) {
105
+ error_setg(errp, "%s invalid led %s", __func__, name);
106
+ return;
107
+ }
108
+
109
+ for (state = 0; state < ARRAY_SIZE(led_state); state++) {
110
+ if (!strcmp(state_str, led_state[state])) {
111
+ break;
112
+ }
113
+ }
114
+ if (state >= ARRAY_SIZE(led_state)) {
115
+ error_setg(errp, "%s invalid led state %s", __func__, state_str);
116
+ return;
117
+ }
118
+
119
+ reg = PCA9552_LS0 + led / 4;
120
+ val = pca9552_read(s, reg);
121
+ val = pca955x_ledsel(val, led % 4, state);
122
+ pca9552_write(s, reg, val);
123
+}
124
+
125
static const VMStateDescription pca9552_vmstate = {
126
.name = "PCA9552",
127
.version_id = 0,
128
@@ -XXX,XX +XXX,XX @@ static void pca9552_reset(DeviceState *dev)
129
static void pca9552_initfn(Object *obj)
130
{
34
{
131
PCA9552State *s = PCA9552(obj);
35
if (dp) {
132
+ int led;
36
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
133
37
index XXXXXXX..XXXXXXX 100644
134
/* If support for the other PCA955X devices are implemented, these
38
--- a/target/arm/translate-neon.c.inc
135
* constant values might be part of class structure describing the
39
+++ b/target/arm/translate-neon.c.inc
136
@@ -XXX,XX +XXX,XX @@ static void pca9552_initfn(Object *obj)
40
@@ -XXX,XX +XXX,XX @@ neon_element_offset(int reg, int element, MemOp size)
137
*/
41
ofs ^= 8 - element_size;
138
s->max_reg = PCA9552_LS3;
42
}
139
s->nr_leds = 16;
43
#endif
140
+
44
- return neon_reg_offset(reg, 0) + ofs;
141
+ for (led = 0; led < s->nr_leds; led++) {
45
+ return neon_full_reg_offset(reg) + ofs;
142
+ char *name;
143
+
144
+ name = g_strdup_printf("led%d", led);
145
+ object_property_add(obj, name, "bool", pca9552_get_led, pca9552_set_led,
146
+ NULL, NULL, NULL);
147
+ g_free(name);
148
+ }
149
}
46
}
150
47
151
static void pca9552_class_init(ObjectClass *klass, void *data)
48
static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
49
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
50
* We cannot write 16 bytes at once because the
51
* destination is unaligned.
52
*/
53
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
54
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
55
8, 8, tmp);
56
- tcg_gen_gvec_mov(0, neon_reg_offset(vd + 1, 0),
57
- neon_reg_offset(vd, 0), 8, 8);
58
+ tcg_gen_gvec_mov(0, neon_full_reg_offset(vd + 1),
59
+ neon_full_reg_offset(vd), 8, 8);
60
} else {
61
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
62
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
63
vec_size, vec_size, tmp);
64
}
65
tcg_gen_addi_i32(addr, addr, 1 << size);
66
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
67
static bool do_3same(DisasContext *s, arg_3same *a, GVecGen3Fn fn)
68
{
69
int vec_size = a->q ? 16 : 8;
70
- int rd_ofs = neon_reg_offset(a->vd, 0);
71
- int rn_ofs = neon_reg_offset(a->vn, 0);
72
- int rm_ofs = neon_reg_offset(a->vm, 0);
73
+ int rd_ofs = neon_full_reg_offset(a->vd);
74
+ int rn_ofs = neon_full_reg_offset(a->vn);
75
+ int rm_ofs = neon_full_reg_offset(a->vm);
76
77
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
78
return false;
79
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
80
{
81
/* Handle a 2-reg-shift insn which can be vectorized. */
82
int vec_size = a->q ? 16 : 8;
83
- int rd_ofs = neon_reg_offset(a->vd, 0);
84
- int rm_ofs = neon_reg_offset(a->vm, 0);
85
+ int rd_ofs = neon_full_reg_offset(a->vd);
86
+ int rm_ofs = neon_full_reg_offset(a->vm);
87
88
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
89
return false;
90
@@ -XXX,XX +XXX,XX @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
91
{
92
/* FP operations in 2-reg-and-shift group */
93
int vec_size = a->q ? 16 : 8;
94
- int rd_ofs = neon_reg_offset(a->vd, 0);
95
- int rm_ofs = neon_reg_offset(a->vm, 0);
96
+ int rd_ofs = neon_full_reg_offset(a->vd);
97
+ int rm_ofs = neon_full_reg_offset(a->vm);
98
TCGv_ptr fpst;
99
100
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
101
@@ -XXX,XX +XXX,XX @@ static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
102
return true;
103
}
104
105
- reg_ofs = neon_reg_offset(a->vd, 0);
106
+ reg_ofs = neon_full_reg_offset(a->vd);
107
vec_size = a->q ? 16 : 8;
108
imm = asimd_imm_const(a->imm, a->cmode, a->op);
109
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_P_3d(DisasContext *s, arg_3diff *a)
111
return true;
112
}
113
114
- tcg_gen_gvec_3_ool(neon_reg_offset(a->vd, 0),
115
- neon_reg_offset(a->vn, 0),
116
- neon_reg_offset(a->vm, 0),
117
+ tcg_gen_gvec_3_ool(neon_full_reg_offset(a->vd),
118
+ neon_full_reg_offset(a->vn),
119
+ neon_full_reg_offset(a->vm),
120
16, 16, 0, fn_gvec);
121
return true;
122
}
123
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
124
{
125
/* Two registers and a scalar, using gvec */
126
int vec_size = a->q ? 16 : 8;
127
- int rd_ofs = neon_reg_offset(a->vd, 0);
128
- int rn_ofs = neon_reg_offset(a->vn, 0);
129
+ int rd_ofs = neon_full_reg_offset(a->vd);
130
+ int rn_ofs = neon_full_reg_offset(a->vn);
131
int rm_ofs;
132
int idx;
133
TCGv_ptr fpstatus;
134
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
135
/* a->vm is M:Vm, which encodes both register and index */
136
idx = extract32(a->vm, a->size + 2, 2);
137
a->vm = extract32(a->vm, 0, a->size + 2);
138
- rm_ofs = neon_reg_offset(a->vm, 0);
139
+ rm_ofs = neon_full_reg_offset(a->vm);
140
141
fpstatus = fpstatus_ptr(a->size == 1 ? FPST_STD_F16 : FPST_STD);
142
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, fpstatus,
143
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
144
return true;
145
}
146
147
- tcg_gen_gvec_dup_mem(a->size, neon_reg_offset(a->vd, 0),
148
+ tcg_gen_gvec_dup_mem(a->size, neon_full_reg_offset(a->vd),
149
neon_element_offset(a->vm, a->index, a->size),
150
a->q ? 16 : 8, a->q ? 16 : 8);
151
return true;
152
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
153
static bool do_2misc_vec(DisasContext *s, arg_2misc *a, GVecGen2Fn *fn)
154
{
155
int vec_size = a->q ? 16 : 8;
156
- int rd_ofs = neon_reg_offset(a->vd, 0);
157
- int rm_ofs = neon_reg_offset(a->vm, 0);
158
+ int rd_ofs = neon_full_reg_offset(a->vd);
159
+ int rm_ofs = neon_full_reg_offset(a->vm);
160
161
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
162
return false;
163
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
164
index XXXXXXX..XXXXXXX 100644
165
--- a/target/arm/translate-vfp.c.inc
166
+++ b/target/arm/translate-vfp.c.inc
167
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
168
}
169
170
tmp = load_reg(s, a->rt);
171
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(a->vn, 0),
172
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(a->vn),
173
vec_size, vec_size, tmp);
174
tcg_temp_free_i32(tmp);
175
152
--
176
--
153
2.20.1
177
2.20.1
From: Cédric Le Goater <clg@kaod.org>

The overhead for the OpenBMC firmware images using a custom U-Boot
is around 2 seconds, which is fine, but with a U-Boot from mainline,
it takes an extra 50 seconds or so to reach Linux. A quick survey on
the number of reads performed on the flash memory region gives the
following figures:

  OpenBMC U-Boot     922478 (~ 3.5 MBytes)
  Mainline U-Boot  20569977 (~ 80 MBytes)

QEMU must be trashing the TCG TBs and reloading text very often. Some
addresses are read more than 250,000 times. Until we find a solution
to improve boot time, execution from MMIO is not activated by default.

Setting this option also breaks migration compatibility.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200114103433.30534-5-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/aspeed.h | 2 ++
 hw/arm/aspeed.c | 44 ++++++++++++++++++++++++++++++-----
 2 files changed, 41 insertions(+), 5 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This will shortly have users outside of translate-neon.c.inc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 20 ++++++++++++++++++++
 target/arm/translate-neon.c.inc | 19 -------------------
 2 files changed, 20 insertions(+), 19 deletions(-)

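Since 'execute-in-place' is exposed as an ordinary boolean machine property, it
can be switched on from the command line like any other machine option. The
invocation below is only an assumed example; the board name and image path are
not taken from this patch:

    qemu-system-arm -M palmetto-bmc,execute-in-place=true \
        -drive file=flash.img,format=raw,if=mtd
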
diff --git a/include/hw/arm/aspeed.h b/include/hw/arm/aspeed.h
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
29
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
30
--- a/include/hw/arm/aspeed.h
16
--- a/target/arm/translate.c
31
+++ b/include/hw/arm/aspeed.h
17
+++ b/target/arm/translate.c
32
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedBoardState AspeedBoardState;
18
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
33
19
return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
34
typedef struct AspeedMachine {
35
MachineState parent_obj;
36
+
37
+ bool mmio_exec;
38
} AspeedMachine;
39
40
#define ASPEED_MACHINE_CLASS(klass) \
41
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/hw/arm/aspeed.c
44
+++ b/hw/arm/aspeed.c
45
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_init(MachineState *machine)
46
* SoC and 128MB for the AST2500 SoC, which is twice as big as
47
* needed by the flash modules of the Aspeed machines.
48
*/
49
- memory_region_init_rom(boot_rom, OBJECT(bmc), "aspeed.boot_rom",
50
- fl->size, &error_abort);
51
- memory_region_add_subregion(get_system_memory(), FIRMWARE_ADDR,
52
- boot_rom);
53
- write_boot_rom(drive0, FIRMWARE_ADDR, fl->size, &error_abort);
54
+ if (ASPEED_MACHINE(machine)->mmio_exec) {
55
+ memory_region_init_alias(boot_rom, OBJECT(bmc), "aspeed.boot_rom",
56
+ &fl->mmio, 0, fl->size);
57
+ memory_region_add_subregion(get_system_memory(), FIRMWARE_ADDR,
58
+ boot_rom);
59
+ } else {
60
+ memory_region_init_rom(boot_rom, OBJECT(bmc), "aspeed.boot_rom",
61
+ fl->size, &error_abort);
62
+ memory_region_add_subregion(get_system_memory(), FIRMWARE_ADDR,
63
+ boot_rom);
64
+ write_boot_rom(drive0, FIRMWARE_ADDR, fl->size, &error_abort);
65
+ }
66
}
67
68
aspeed_board_binfo.ram_size = ram_size;
69
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
70
/* Bus 11: TODO ucd90160@64 */
71
}
20
}
72
21
73
+static bool aspeed_get_mmio_exec(Object *obj, Error **errp)
22
+/*
23
+ * Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
24
+ * where 0 is the least significant end of the register.
25
+ */
26
+static long neon_element_offset(int reg, int element, MemOp size)
74
+{
27
+{
75
+ return ASPEED_MACHINE(obj)->mmio_exec;
28
+ int element_size = 1 << size;
29
+ int ofs = element * element_size;
30
+#ifdef HOST_WORDS_BIGENDIAN
31
+ /*
32
+ * Calculate the offset assuming fully little-endian,
33
+ * then XOR to account for the order of the 8-byte units.
34
+ */
35
+ if (element_size < 8) {
36
+ ofs ^= 8 - element_size;
37
+ }
38
+#endif
39
+ return neon_full_reg_offset(reg) + ofs;
76
+}
40
+}
77
+
41
+
78
+static void aspeed_set_mmio_exec(Object *obj, bool value, Error **errp)
42
static inline long vfp_reg_offset(bool dp, unsigned reg)
79
+{
80
+ ASPEED_MACHINE(obj)->mmio_exec = value;
81
+}
82
+
83
+static void aspeed_machine_instance_init(Object *obj)
84
+{
85
+ ASPEED_MACHINE(obj)->mmio_exec = false;
86
+}
87
+
88
+static void aspeed_machine_class_props_init(ObjectClass *oc)
89
+{
90
+ object_class_property_add_bool(oc, "execute-in-place",
91
+ aspeed_get_mmio_exec,
92
+ aspeed_set_mmio_exec, &error_abort);
93
+ object_class_property_set_description(oc, "execute-in-place",
94
+ "boot directly from CE0 flash device", &error_abort);
95
+}
96
+
97
static void aspeed_machine_class_init(ObjectClass *oc, void *data)
98
{
43
{
99
MachineClass *mc = MACHINE_CLASS(oc);
44
if (dp) {
100
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_class_init(ObjectClass *oc, void *data)
45
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
101
mc->no_floppy = 1;
46
index XXXXXXX..XXXXXXX 100644
102
mc->no_cdrom = 1;
47
--- a/target/arm/translate-neon.c.inc
103
mc->no_parallel = 1;
48
+++ b/target/arm/translate-neon.c.inc
104
+
49
@@ -XXX,XX +XXX,XX @@ static inline int neon_3same_fp_size(DisasContext *s, int x)
105
+ aspeed_machine_class_props_init(oc);
50
#include "decode-neon-ls.c.inc"
106
}
51
#include "decode-neon-shared.c.inc"
107
52
108
static void aspeed_machine_palmetto_class_init(ObjectClass *oc, void *data)
53
-/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
109
@@ -XXX,XX +XXX,XX @@ static const TypeInfo aspeed_machine_types[] = {
54
- * where 0 is the least significant end of the register.
110
.name = TYPE_ASPEED_MACHINE,
55
- */
111
.parent = TYPE_MACHINE,
56
-static inline long
112
.instance_size = sizeof(AspeedMachine),
57
-neon_element_offset(int reg, int element, MemOp size)
113
+ .instance_init = aspeed_machine_instance_init,
58
-{
114
.class_size = sizeof(AspeedMachineClass),
59
- int element_size = 1 << size;
115
.class_init = aspeed_machine_class_init,
60
- int ofs = element * element_size;
116
.abstract = true,
61
-#ifdef HOST_WORDS_BIGENDIAN
62
- /* Calculate the offset assuming fully little-endian,
63
- * then XOR to account for the order of the 8-byte units.
64
- */
65
- if (element_size < 8) {
66
- ofs ^= 8 - element_size;
67
- }
68
-#endif
69
- return neon_full_reg_offset(reg) + ofs;
70
-}
71
-
72
static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
73
{
74
long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
117
--
75
--
118
2.20.1
76
2.20.1
From: Andrew Jeffery <andrew@aj.id.au>

Initialise another SDHCI model instance for the AST2600's eMMC
controller and use the SDHCI's num_slots value introduced previously to
determine whether we should create an SD card instance for the new slot.

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20200114103433.30534-3-clg@kaod.org
[clg: - removed ternary operator from sdhci_attach_drive()
      - renamed SDHCI objects with a '-controller' prefix ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/aspeed_soc.h | 2 ++
 hw/arm/aspeed.c | 26 +++++++++++++++++---------
 hw/arm/aspeed_ast2600.c | 29 ++++++++++++++++++++++++++---
 3 files changed, 45 insertions(+), 12 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

These are the only users of neon_reg_offset, so remove that.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

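As a usage sketch (assumed, not part of the patch): the eMMC slot picks up the
next IF_SD drive after the two SD-card slots, since drives are handed out in
the order the -drive options appear. On an AST2600 board a backing image for
the eMMC would therefore be the third if=sd drive, e.g.:

    qemu-system-arm -M ast2600-evb \
        -drive file=sd0.img,if=sd,format=raw \
        -drive file=sd1.img,if=sd,format=raw \
        -drive file=emmc.img,if=sd,format=raw
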
diff --git a/include/hw/arm/aspeed_soc.h b/include/hw/arm/aspeed_soc.h
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
23
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
24
--- a/include/hw/arm/aspeed_soc.h
15
--- a/target/arm/translate.c
25
+++ b/include/hw/arm/aspeed_soc.h
16
+++ b/target/arm/translate.c
26
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedSoCState {
17
@@ -XXX,XX +XXX,XX @@ static inline long vfp_reg_offset(bool dp, unsigned reg)
27
AspeedGPIOState gpio;
28
AspeedGPIOState gpio_1_8v;
29
AspeedSDHCIState sdhci;
30
+ AspeedSDHCIState emmc;
31
} AspeedSoCState;
32
33
#define TYPE_ASPEED_SOC "aspeed-soc"
34
@@ -XXX,XX +XXX,XX @@ enum {
35
ASPEED_MII4,
36
ASPEED_SDRAM,
37
ASPEED_XDMA,
38
+ ASPEED_EMMC,
39
};
40
41
#endif /* ASPEED_SOC_H */
42
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/arm/aspeed.c
45
+++ b/hw/arm/aspeed.c
46
@@ -XXX,XX +XXX,XX @@ static void aspeed_board_init_flashes(AspeedSMCState *s, const char *flashtype,
47
}
18
}
48
}
19
}
49
20
50
+static void sdhci_attach_drive(SDHCIState *sdhci, DriveInfo *dinfo)
21
-/* Return the offset of a 32-bit piece of a NEON register.
51
+{
22
- zero is the least significant end of the register. */
52
+ DeviceState *card;
23
-static inline long
53
+
24
-neon_reg_offset (int reg, int n)
54
+ card = qdev_create(qdev_get_child_bus(DEVICE(sdhci), "sd-bus"),
25
-{
55
+ TYPE_SD_CARD);
26
- int sreg;
56
+ if (dinfo) {
27
- sreg = reg * 2 + n;
57
+ qdev_prop_set_drive(card, "drive", blk_by_legacy_dinfo(dinfo),
28
- return vfp_reg_offset(0, sreg);
58
+ &error_fatal);
29
-}
59
+ }
30
-
60
+ object_property_set_bool(OBJECT(card), true, "realized", &error_fatal);
31
static TCGv_i32 neon_load_reg(int reg, int pass)
61
+}
62
+
63
static void aspeed_machine_init(MachineState *machine)
64
{
32
{
65
AspeedBoardState *bmc;
33
TCGv_i32 tmp = tcg_temp_new_i32();
66
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_init(MachineState *machine)
34
- tcg_gen_ld_i32(tmp, cpu_env, neon_reg_offset(reg, pass));
67
}
35
+ tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
68
36
return tmp;
69
for (i = 0; i < bmc->soc.sdhci.num_slots; i++) {
70
- SDHCIState *sdhci = &bmc->soc.sdhci.slots[i];
71
- DriveInfo *dinfo = drive_get_next(IF_SD);
72
- BlockBackend *blk;
73
- DeviceState *card;
74
+ sdhci_attach_drive(&bmc->soc.sdhci.slots[i], drive_get_next(IF_SD));
75
+ }
76
77
- blk = dinfo ? blk_by_legacy_dinfo(dinfo) : NULL;
78
- card = qdev_create(qdev_get_child_bus(DEVICE(sdhci), "sd-bus"),
79
- TYPE_SD_CARD);
80
- qdev_prop_set_drive(card, "drive", blk, &error_fatal);
81
- object_property_set_bool(OBJECT(card), true, "realized", &error_fatal);
82
+ if (bmc->soc.emmc.num_slots) {
83
+ sdhci_attach_drive(&bmc->soc.emmc.slots[0], drive_get_next(IF_SD));
84
}
85
86
arm_load_kernel(ARM_CPU(first_cpu), machine, &aspeed_board_binfo);
87
diff --git a/hw/arm/aspeed_ast2600.c b/hw/arm/aspeed_ast2600.c
88
index XXXXXXX..XXXXXXX 100644
89
--- a/hw/arm/aspeed_ast2600.c
90
+++ b/hw/arm/aspeed_ast2600.c
91
@@ -XXX,XX +XXX,XX @@ static const hwaddr aspeed_soc_ast2600_memmap[] = {
92
[ASPEED_ADC] = 0x1E6E9000,
93
[ASPEED_VIDEO] = 0x1E700000,
94
[ASPEED_SDHCI] = 0x1E740000,
95
+ [ASPEED_EMMC] = 0x1E750000,
96
[ASPEED_GPIO] = 0x1E780000,
97
[ASPEED_GPIO_1_8V] = 0x1E780800,
98
[ASPEED_RTC] = 0x1E781000,
99
@@ -XXX,XX +XXX,XX @@ static const hwaddr aspeed_soc_ast2600_memmap[] = {
100
101
#define ASPEED_SOC_AST2600_MAX_IRQ 128
102
103
+/* Shared Peripheral Interrupt values below are offset by -32 from datasheet */
104
static const int aspeed_soc_ast2600_irqmap[] = {
105
[ASPEED_UART1] = 47,
106
[ASPEED_UART2] = 48,
107
@@ -XXX,XX +XXX,XX @@ static const int aspeed_soc_ast2600_irqmap[] = {
108
[ASPEED_ADC] = 78,
109
[ASPEED_XDMA] = 6,
110
[ASPEED_SDHCI] = 43,
111
+ [ASPEED_EMMC] = 15,
112
[ASPEED_GPIO] = 40,
113
[ASPEED_GPIO_1_8V] = 11,
114
[ASPEED_RTC] = 13,
115
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_init(Object *obj)
116
sysbus_init_child_obj(obj, "gpio_1_8v", OBJECT(&s->gpio_1_8v),
117
sizeof(s->gpio_1_8v), typename);
118
119
- sysbus_init_child_obj(obj, "sdc", OBJECT(&s->sdhci), sizeof(s->sdhci),
120
- TYPE_ASPEED_SDHCI);
121
+ sysbus_init_child_obj(obj, "sd-controller", OBJECT(&s->sdhci),
122
+ sizeof(s->sdhci), TYPE_ASPEED_SDHCI);
123
124
object_property_set_int(OBJECT(&s->sdhci), 2, "num-slots", &error_abort);
125
126
/* Init sd card slot class here so that they're under the correct parent */
127
for (i = 0; i < ASPEED_SDHCI_NUM_SLOTS; ++i) {
128
- sysbus_init_child_obj(obj, "sdhci[*]", OBJECT(&s->sdhci.slots[i]),
129
+ sysbus_init_child_obj(obj, "sd-controller.sdhci[*]",
130
+ OBJECT(&s->sdhci.slots[i]),
131
sizeof(s->sdhci.slots[i]), TYPE_SYSBUS_SDHCI);
132
}
133
+
134
+ sysbus_init_child_obj(obj, "emmc-controller", OBJECT(&s->emmc),
135
+ sizeof(s->emmc), TYPE_ASPEED_SDHCI);
136
+
137
+ object_property_set_int(OBJECT(&s->emmc), 1, "num-slots", &error_abort);
138
+
139
+ sysbus_init_child_obj(obj, "emmc-controller.sdhci",
140
+ OBJECT(&s->emmc.slots[0]), sizeof(s->emmc.slots[0]),
141
+ TYPE_SYSBUS_SDHCI);
142
}
37
}
143
38
144
/*
39
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
145
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_realize(DeviceState *dev, Error **errp)
40
{
146
sc->memmap[ASPEED_SDHCI]);
41
- tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
147
sysbus_connect_irq(SYS_BUS_DEVICE(&s->sdhci), 0,
42
+ tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
148
aspeed_soc_get_irq(s, ASPEED_SDHCI));
43
tcg_temp_free_i32(var);
149
+
150
+ /* eMMC */
151
+ object_property_set_bool(OBJECT(&s->emmc), true, "realized", &err);
152
+ if (err) {
153
+ error_propagate(errp, err);
154
+ return;
155
+ }
156
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->emmc), 0, sc->memmap[ASPEED_EMMC]);
157
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->emmc), 0,
158
+ aspeed_soc_get_irq(s, ASPEED_EMMC));
159
}
44
}
160
45
161
static void aspeed_soc_ast2600_class_init(ObjectClass *oc, void *data)
162
--
46
--
163
2.20.1
47
2.20.1
From: Damien Hedde <damien.hedde@greensocs.com>

Deprecate device_legacy_reset(), qdev_reset_all() and
qbus_reset_all(), to be replaced by the new functions
device_cold_reset() and bus_cold_reset(), which use the resettable API.

Also introduce resettable_cold_reset_fn(), which may be used as a
replacement for qdev_reset_all_fn and qbus_reset_all_fn().

Follow-up patches will be needed to look at the legacy reset call sites
and switch them to the resettable API. The legacy functions will be
removed when unused.

Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200123132823.1117486-9-damien.hedde@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/qdev-core.h | 27 +++++++++++++++++++++++++++
 include/hw/resettable.h | 9 +++++++++
 hw/core/bus.c | 5 +++++
 hw/core/qdev.c | 5 +++++
 hw/core/resettable.c | 5 +++++
 5 files changed, 51 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

This seems a bit more readable than using offsetof CPU_DoubleU.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

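A rough sketch of the intended conversion at a call site (the call site itself
is hypothetical; the functions are the ones named above):

    /* Before: deprecated legacy helpers. */
    qdev_reset_all(dev);
    qbus_reset_all(bus);
    qemu_register_reset(qbus_reset_all_fn, bus);

    /* After: Resettable-based equivalents introduced here. */
    device_cold_reset(dev);
    bus_cold_reset(bus);
    qemu_register_reset(resettable_cold_reset_fn, OBJECT(bus));
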
diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
30
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
31
--- a/include/hw/qdev-core.h
15
--- a/target/arm/translate.c
32
+++ b/include/hw/qdev-core.h
16
+++ b/target/arm/translate.c
33
@@ -XXX,XX +XXX,XX @@ int qdev_walk_children(DeviceState *dev,
17
@@ -XXX,XX +XXX,XX @@ static long neon_element_offset(int reg, int element, MemOp size)
34
qdev_walkerfn *post_devfn, qbus_walkerfn *post_busfn,
18
return neon_full_reg_offset(reg) + ofs;
35
void *opaque);
36
37
+/**
38
+ * @qdev_reset_all:
39
+ * Reset @dev. See @qbus_reset_all() for more details.
40
+ *
41
+ * Note: This function is deprecated and will be removed when it becomes unused.
42
+ * Please use device_cold_reset() now.
43
+ */
44
void qdev_reset_all(DeviceState *dev);
45
void qdev_reset_all_fn(void *opaque);
46
47
@@ -XXX,XX +XXX,XX @@ void qdev_reset_all_fn(void *opaque);
48
* hard reset means that qbus_reset_all will reset all state of the device.
49
* For PCI devices, for example, this will include the base address registers
50
* or configuration space.
51
+ *
52
+ * Note: This function is deprecated and will be removed when it becomes unused.
53
+ * Please use bus_cold_reset() now.
54
*/
55
void qbus_reset_all(BusState *bus);
56
void qbus_reset_all_fn(void *opaque);
57
58
+/**
59
+ * device_cold_reset:
60
+ * Reset device @dev and perform a recursive processing using the resettable
61
+ * interface. It triggers a RESET_TYPE_COLD.
62
+ */
63
+void device_cold_reset(DeviceState *dev);
64
+
65
+/**
66
+ * bus_cold_reset:
67
+ *
68
+ * Reset bus @bus and perform a recursive processing using the resettable
69
+ * interface. It triggers a RESET_TYPE_COLD.
70
+ */
71
+void bus_cold_reset(BusState *bus);
72
+
73
/**
74
* device_is_in_reset:
75
* Return true if the device @dev is currently being reset.
76
@@ -XXX,XX +XXX,XX @@ void qdev_machine_init(void);
77
* device_legacy_reset:
78
*
79
* Reset a single device (by calling the reset method).
80
+ * Note: This function is deprecated and will be removed when it becomes unused.
81
+ * Please use device_cold_reset() now.
82
*/
83
void device_legacy_reset(DeviceState *dev);
84
85
diff --git a/include/hw/resettable.h b/include/hw/resettable.h
86
index XXXXXXX..XXXXXXX 100644
87
--- a/include/hw/resettable.h
88
+++ b/include/hw/resettable.h
89
@@ -XXX,XX +XXX,XX @@ bool resettable_is_in_reset(Object *obj);
90
*/
91
void resettable_change_parent(Object *obj, Object *newp, Object *oldp);
92
93
+/**
94
+ * resettable_cold_reset_fn:
95
+ * Helper to call resettable_reset((Object *) opaque, RESET_TYPE_COLD).
96
+ *
97
+ * This function is typically useful to register a reset handler with
98
+ * qemu_register_reset.
99
+ */
100
+void resettable_cold_reset_fn(void *opaque);
101
+
102
/**
103
* resettable_class_set_parent_phases:
104
*
105
diff --git a/hw/core/bus.c b/hw/core/bus.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/hw/core/bus.c
108
+++ b/hw/core/bus.c
109
@@ -XXX,XX +XXX,XX @@ int qbus_walk_children(BusState *bus,
110
return 0;
111
}
19
}
112
20
113
+void bus_cold_reset(BusState *bus)
21
-static inline long vfp_reg_offset(bool dp, unsigned reg)
114
+{
22
+/* Return the offset of a VFP Dreg (dp = true) or VFP Sreg (dp = false). */
115
+ resettable_reset(OBJECT(bus), RESET_TYPE_COLD);
23
+static long vfp_reg_offset(bool dp, unsigned reg)
116
+}
117
+
118
bool bus_is_in_reset(BusState *bus)
119
{
24
{
120
return resettable_is_in_reset(OBJECT(bus));
25
if (dp) {
121
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
26
- return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
122
index XXXXXXX..XXXXXXX 100644
27
+ return neon_element_offset(reg, 0, MO_64);
123
--- a/hw/core/qdev.c
28
} else {
124
+++ b/hw/core/qdev.c
29
- long ofs = offsetof(CPUARMState, vfp.zregs[reg >> 2].d[(reg >> 1) & 1]);
125
@@ -XXX,XX +XXX,XX @@ void qbus_reset_all_fn(void *opaque)
30
- if (reg & 1) {
126
qbus_reset_all(bus);
31
- ofs += offsetof(CPU_DoubleU, l.upper);
127
}
32
- } else {
128
33
- ofs += offsetof(CPU_DoubleU, l.lower);
129
+void device_cold_reset(DeviceState *dev)
34
- }
130
+{
35
- return ofs;
131
+ resettable_reset(OBJECT(dev), RESET_TYPE_COLD);
36
+ return neon_element_offset(reg >> 1, reg & 1, MO_32);
132
+}
133
+
134
bool device_is_in_reset(DeviceState *dev)
135
{
136
return resettable_is_in_reset(OBJECT(dev));
137
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
138
index XXXXXXX..XXXXXXX 100644
139
--- a/hw/core/resettable.c
140
+++ b/hw/core/resettable.c
141
@@ -XXX,XX +XXX,XX @@ void resettable_change_parent(Object *obj, Object *newp, Object *oldp)
142
}
37
}
143
}
38
}
144
39
145
+void resettable_cold_reset_fn(void *opaque)
146
+{
147
+ resettable_reset((Object *) opaque, RESET_TYPE_COLD);
148
+}
149
+
150
void resettable_class_set_parent_phases(ResettableClass *rc,
151
ResettableEnterPhase enter,
152
ResettableHoldPhase hold,
153
--
40
--
154
2.20.1
41
2.20.1
From: Damien Hedde <damien.hedde@greensocs.com>

This commit defines an interface allowing multi-phase reset. It aims
to solve a problem with the current single-phase reset (built into
DeviceClass and BusClass): reset behavior depends on the order
in which the reset handlers are called. In particular, performing
external side-effects (like setting a qemu_irq) is problematic because
the receiving object may not have been reset yet.

The Resettable interface divides the reset into 3 well-defined phases.
To reset an object tree, all 1st phases are executed, then all 2nd, then
all 3rd. See the comments in include/hw/resettable.h for a more complete
description. The interface defines 3 phases to allow for the future
possibility of holding an object in reset for some time.

The qdev/qbus reset in DeviceClass and BusClass will be modified in
following commits to use this interface. A mechanism is provided
to allow executing a transitional reset handler in place of the 2nd
phase, which is executed in children-then-parent order inside a tree.
This will allow devices and buses to be transitioned smoothly while
keeping the exact current qdev/qbus reset behavior for now.

Documentation will be added in a following commit.

Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200123132823.1117486-4-damien.hedde@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/core/Makefile.objs | 1 +
 include/hw/resettable.h | 211 +++++++++++++++++++++++++++++++++++
 hw/core/resettable.c | 238 ++++++++++++++++++++++++++++++++++++++++
 hw/core/trace-events | 17 +++
 4 files changed, 467 insertions(+)
 create mode 100644 include/hw/resettable.h
 create mode 100644 hw/core/resettable.c

From: Richard Henderson <richard.henderson@linaro.org>

Model these off the aa64 read/write_vec_element functions.
Use them within translate-neon.c.inc. The new functions do
not allocate or free temps, so this rearranges the calling
code a bit.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 26 ++++
 target/arm/translate-neon.c.inc | 256 ++++++++++++++++++++------------
 2 files changed, 183 insertions(+), 99 deletions(-)

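To make the expected class-init pattern concrete, here is a rough sketch of a
hypothetical device model hooking into the interface (MYDEV, MyDevState and
MyDevClass are invented for illustration; the resettable_* calls and phase
signatures are the ones this patch adds):

    static void mydev_reset_enter(Object *obj, ResetType type)
    {
        /* 'enter': touch only local state, no side effects on other objects */
        MyDevState *s = MYDEV(obj);
        s->ctrl = 0;
    }

    static void mydev_reset_hold(Object *obj)
    {
        /* 'hold': side effects such as IRQ lines are allowed from here on */
        qemu_irq_lower(MYDEV(obj)->irq);
    }

    static void mydev_class_init(ObjectClass *klass, void *data)
    {
        ResettableClass *rc = RESETTABLE_CLASS(klass);
        MyDevClass *mdc = MYDEV_CLASS(klass);

        resettable_class_set_parent_phases(rc, mydev_reset_enter,
                                           mydev_reset_hold, NULL,
                                           &mdc->parent_phases);
    }
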
diff --git a/hw/core/Makefile.objs b/hw/core/Makefile.objs
17
diff --git a/target/arm/translate.c b/target/arm/translate.c
41
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/core/Makefile.objs
19
--- a/target/arm/translate.c
43
+++ b/hw/core/Makefile.objs
20
+++ b/target/arm/translate.c
44
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
45
common-obj-y += qdev.o qdev-properties.o
22
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
46
common-obj-y += bus.o
23
}
47
common-obj-y += cpu.o
24
48
+common-obj-y += resettable.o
25
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
49
common-obj-y += hotplug.o
50
common-obj-y += vmstate-if.o
51
# irq.o needed for qdev GPIO handling:
52
diff --git a/include/hw/resettable.h b/include/hw/resettable.h
53
new file mode 100644
54
index XXXXXXX..XXXXXXX
55
--- /dev/null
56
+++ b/include/hw/resettable.h
57
@@ -XXX,XX +XXX,XX @@
58
+/*
59
+ * Resettable interface header.
60
+ *
61
+ * Copyright (c) 2019 GreenSocs SAS
62
+ *
63
+ * Authors:
64
+ * Damien Hedde
65
+ *
66
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
67
+ * See the COPYING file in the top-level directory.
68
+ */
69
+
70
+#ifndef HW_RESETTABLE_H
71
+#define HW_RESETTABLE_H
72
+
73
+#include "qom/object.h"
74
+
75
+#define TYPE_RESETTABLE_INTERFACE "resettable"
76
+
77
+#define RESETTABLE_CLASS(class) \
78
+ OBJECT_CLASS_CHECK(ResettableClass, (class), TYPE_RESETTABLE_INTERFACE)
79
+
80
+#define RESETTABLE_GET_CLASS(obj) \
81
+ OBJECT_GET_CLASS(ResettableClass, (obj), TYPE_RESETTABLE_INTERFACE)
82
+
83
+typedef struct ResettableState ResettableState;
84
+
85
+/**
86
+ * ResetType:
87
+ * Types of reset.
88
+ *
89
+ * + Cold: reset resulting from a power cycle of the object.
90
+ *
91
+ * TODO: Support has to be added to handle more types. In particular,
92
+ * ResettableState structure needs to be expanded.
93
+ */
94
+typedef enum ResetType {
95
+ RESET_TYPE_COLD,
96
+} ResetType;
97
+
98
+/*
99
+ * ResettableClass:
100
+ * Interface for resettable objects.
101
+ *
102
+ * See docs/devel/reset.rst for more detailed information about how QEMU models
103
+ * reset. This whole API must only be used when holding the iothread mutex.
104
+ *
105
+ * All objects which can be reset must implement this interface;
106
+ * it is usually provided by a base class such as DeviceClass or BusClass.
107
+ * Every Resettable object must maintain some state tracking the
108
+ * progress of a reset operation by providing a ResettableState structure.
109
+ * The functions defined in this module take care of updating the
110
+ * state of the reset.
111
+ * The base class implementation of the interface provides this
112
+ * state and implements the associated method: get_state.
113
+ *
114
+ * Concrete object implementations (typically specific devices
115
+ * such as a UART model) should provide the functions
116
+ * for the phases.enter, phases.hold and phases.exit methods, which
117
+ * they can set in their class init function, either directly or
118
+ * by calling resettable_class_set_parent_phases().
119
+ * The phase methods are guaranteed to only ever be called once
120
+ * for any reset event, in the order 'enter', 'hold', 'exit'.
121
+ * An object will always move quickly from 'enter' to 'hold'
122
+ * but might remain in 'hold' for an arbitrary period of time
123
+ * before eventually reset is deasserted and the 'exit' phase is called.
124
+ * Object implementations should be prepared for functions handling
125
+ * inbound connections from other devices (such as qemu_irq handler
126
+ * functions) to be called at any point during reset after their
127
+ * 'enter' method has been called.
128
+ *
129
+ * Users of a resettable object should not call these methods
130
+ * directly, but instead use the function resettable_reset().
131
+ *
132
+ * @phases.enter: This phase is called when the object enters reset. It
133
+ * should reset local state of the object, but it must not do anything that
134
+ * has a side-effect on other objects, such as raising or lowering a qemu_irq
135
+ * line or reading or writing guest memory. It takes the reset's type as
136
+ * argument.
137
+ *
138
+ * @phases.hold: This phase is called for entry into reset, once every object
139
+ * in the system which is being reset has had its @phases.enter method called.
140
+ * At this point devices can do actions that affect other objects.
141
+ *
142
+ * @phases.exit: This phase is called when the object leaves the reset state.
143
+ * Actions affecting other objects are permitted.
144
+ *
145
+ * @get_state: Mandatory method which must return a pointer to a
146
+ * ResettableState.
147
+ *
148
+ * @get_transitional_function: transitional method to handle Resettable objects
149
+ * not yet fully moved to this interface. It will be removed as soon as it is
150
+ * not needed anymore. This method is optional and may return a pointer to a
151
+ * function to be used instead of the phases. If the method exists and returns
152
+ * a non-NULL function pointer then that function is executed as a replacement
153
+ * of the 'hold' phase method taking the object as argument. The two other phase
154
+ * methods are not executed.
155
+ *
156
+ * @child_foreach: Executes a given callback on every Resettable child. Child
157
+ * in this context means a child in the qbus tree, so the children of a qbus
158
+ * are the devices on it, and the children of a device are all the buses it
159
+ * owns. This is not the same as the QOM object hierarchy. The function takes
160
+ * additional opaque and ResetType arguments which must be passed unmodified to
161
+ * the callback.
162
+ */
163
+typedef void (*ResettableEnterPhase)(Object *obj, ResetType type);
164
+typedef void (*ResettableHoldPhase)(Object *obj);
165
+typedef void (*ResettableExitPhase)(Object *obj);
166
+typedef ResettableState * (*ResettableGetState)(Object *obj);
167
+typedef void (*ResettableTrFunction)(Object *obj);
168
+typedef ResettableTrFunction (*ResettableGetTrFunction)(Object *obj);
169
+typedef void (*ResettableChildCallback)(Object *, void *opaque,
170
+ ResetType type);
171
+typedef void (*ResettableChildForeach)(Object *obj,
172
+ ResettableChildCallback cb,
173
+ void *opaque, ResetType type);
174
+typedef struct ResettablePhases {
175
+ ResettableEnterPhase enter;
176
+ ResettableHoldPhase hold;
177
+ ResettableExitPhase exit;
178
+} ResettablePhases;
179
+typedef struct ResettableClass {
180
+ InterfaceClass parent_class;
181
+
182
+ /* Phase methods */
183
+ ResettablePhases phases;
184
+
185
+ /* State access method */
186
+ ResettableGetState get_state;
187
+
188
+ /* Transitional method for legacy reset compatibility */
189
+ ResettableGetTrFunction get_transitional_function;
190
+
191
+ /* Hierarchy handling method */
192
+ ResettableChildForeach child_foreach;
193
+} ResettableClass;
194
+
195
+/**
196
+ * ResettableState:
197
+ * Structure holding reset related state. The fields should not be accessed
198
+ * directly; the definition is here to allow further inclusion into other
199
+ * objects.
200
+ *
201
+ * @count: Number of reset level the object is into. It is incremented when
202
+ * the reset operation starts and decremented when it finishes.
203
+ * @hold_phase_pending: flag which indicates that we need to invoke the 'hold'
204
+ * phase handler for this object.
205
+ * @exit_phase_in_progress: true if we are currently in the exit phase
206
+ */
207
+struct ResettableState {
208
+ unsigned count;
209
+ bool hold_phase_pending;
210
+ bool exit_phase_in_progress;
211
+};
212
+
213
+/**
214
+ * resettable_reset:
215
+ * Trigger a reset on an object @obj of type @type. @obj must implement
216
+ * Resettable interface.
217
+ *
218
+ * Calling this function is equivalent to calling @resettable_assert_reset()
219
+ * then @resettable_release_reset().
220
+ */
221
+void resettable_reset(Object *obj, ResetType type);
222
+
223
+/**
224
+ * resettable_assert_reset:
225
+ * Put an object @obj into reset. @obj must implement Resettable interface.
226
+ *
227
+ * @resettable_release_reset() must eventually be called after this call.
228
+ * There must be one call to @resettable_release_reset() per call of
229
+ * @resettable_assert_reset(), with the same type argument.
230
+ *
231
+ * NOTE: Until support for migration is added, the @resettable_release_reset()
232
+ * must not be delayed. It must occur just after @resettable_assert_reset() so
233
+ * that migration cannot be triggered in between. Prefer using
234
+ * @resettable_reset() for now.
235
+ */
236
+void resettable_assert_reset(Object *obj, ResetType type);
237
+
238
+/**
239
+ * resettable_release_reset:
240
+ * Release the object @obj from reset. @obj must implement Resettable interface.
241
+ *
242
+ * See @resettable_assert_reset() description for details.
243
+ */
244
+void resettable_release_reset(Object *obj, ResetType type);
245
+
246
+/**
247
+ * resettable_is_in_reset:
248
+ * Return true if @obj is under reset.
249
+ *
250
+ * @obj must implement Resettable interface.
251
+ */
252
+bool resettable_is_in_reset(Object *obj);
253
+
254
+/**
255
+ * resettable_class_set_parent_phases:
256
+ *
257
+ * Save @rc current reset phases into @parent_phases and override @rc phases
258
+ * by the given new methods (@enter, @hold and @exit).
259
+ * Each phase is overridden only if the new one is not NULL allowing to
260
+ * override a subset of phases.
261
+ */
262
+void resettable_class_set_parent_phases(ResettableClass *rc,
263
+ ResettableEnterPhase enter,
264
+ ResettableHoldPhase hold,
265
+ ResettableExitPhase exit,
266
+ ResettablePhases *parent_phases);
267
+
268
+#endif
269
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
270
new file mode 100644
271
index XXXXXXX..XXXXXXX
272
--- /dev/null
273
+++ b/hw/core/resettable.c
274
@@ -XXX,XX +XXX,XX @@
275
+/*
276
+ * Resettable interface.
277
+ *
278
+ * Copyright (c) 2019 GreenSocs SAS
279
+ *
280
+ * Authors:
281
+ * Damien Hedde
282
+ *
283
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
284
+ * See the COPYING file in the top-level directory.
285
+ */
286
+
287
+#include "qemu/osdep.h"
288
+#include "qemu/module.h"
289
+#include "hw/resettable.h"
290
+#include "trace.h"
291
+
292
+/**
293
+ * resettable_phase_enter/hold/exit:
294
+ * Function executing a phase recursively in a resettable object and its
295
+ * children.
296
+ */
297
+static void resettable_phase_enter(Object *obj, void *opaque, ResetType type);
298
+static void resettable_phase_hold(Object *obj, void *opaque, ResetType type);
299
+static void resettable_phase_exit(Object *obj, void *opaque, ResetType type);
300
+
301
+/**
302
+ * enter_phase_in_progress:
303
+ * True if we are currently in reset enter phase.
304
+ *
305
+ * Note: This flag is only used to guarantee (using asserts) that the reset
306
+ * API is used correctly. We can use a global variable because we rely on the
307
+ * iothread mutex to ensure only one reset operation is in a progress at a
308
+ * given time.
309
+ */
310
+static bool enter_phase_in_progress;
311
+
312
+void resettable_reset(Object *obj, ResetType type)
313
+{
26
+{
314
+ trace_resettable_reset(obj, type);
27
+ long off = neon_element_offset(reg, ele, size);
315
+ resettable_assert_reset(obj, type);
28
+
316
+ resettable_release_reset(obj, type);
29
+ switch (size) {
317
+}
30
+ case MO_32:
318
+
31
+ tcg_gen_ld_i32(dest, cpu_env, off);
319
+void resettable_assert_reset(Object *obj, ResetType type)
32
+ break;
320
+{
33
+ default:
321
+ /* TODO: change this assert when adding support for other reset types */
34
+ g_assert_not_reached();
322
+ assert(type == RESET_TYPE_COLD);
323
+ trace_resettable_reset_assert_begin(obj, type);
324
+ assert(!enter_phase_in_progress);
325
+
326
+ enter_phase_in_progress = true;
327
+ resettable_phase_enter(obj, NULL, type);
328
+ enter_phase_in_progress = false;
329
+
330
+ resettable_phase_hold(obj, NULL, type);
331
+
332
+ trace_resettable_reset_assert_end(obj);
333
+}
334
+
335
+void resettable_release_reset(Object *obj, ResetType type)
336
+{
337
+ /* TODO: change this assert when adding support for other reset types */
338
+ assert(type == RESET_TYPE_COLD);
339
+ trace_resettable_reset_release_begin(obj, type);
340
+ assert(!enter_phase_in_progress);
341
+
342
+ resettable_phase_exit(obj, NULL, type);
343
+
344
+ trace_resettable_reset_release_end(obj);
345
+}
346
+
347
+bool resettable_is_in_reset(Object *obj)
348
+{
349
+ ResettableClass *rc = RESETTABLE_GET_CLASS(obj);
350
+ ResettableState *s = rc->get_state(obj);
351
+
352
+ return s->count > 0;
353
+}
354
+
355
+/**
356
+ * resettable_child_foreach:
357
+ * helper to avoid checking the existence of the method.
358
+ */
359
+static void resettable_child_foreach(ResettableClass *rc, Object *obj,
360
+ ResettableChildCallback cb,
361
+ void *opaque, ResetType type)
362
+{
363
+ if (rc->child_foreach) {
364
+ rc->child_foreach(obj, cb, opaque, type);
365
+ }
35
+ }
366
+}
36
+}
367
+
37
+
368
+/**
38
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
369
+ * resettable_get_tr_func:
370
+ * helper to fetch transitional reset callback if any.
371
+ */
372
+static ResettableTrFunction resettable_get_tr_func(ResettableClass *rc,
373
+ Object *obj)
374
+{
39
+{
375
+ ResettableTrFunction tr_func = NULL;
40
+ long off = neon_element_offset(reg, ele, size);
376
+ if (rc->get_transitional_function) {
41
+
377
+ tr_func = rc->get_transitional_function(obj);
42
+ switch (size) {
378
+ }
43
+ case MO_32:
379
+ return tr_func;
44
+ tcg_gen_st_i32(src, cpu_env, off);
380
+}
45
+ break;
381
+
46
+ default:
382
+static void resettable_phase_enter(Object *obj, void *opaque, ResetType type)
47
+ g_assert_not_reached();
383
+{
384
+ ResettableClass *rc = RESETTABLE_GET_CLASS(obj);
385
+ ResettableState *s = rc->get_state(obj);
386
+ const char *obj_typename = object_get_typename(obj);
387
+ bool action_needed = false;
388
+
389
+ /* exit phase has to finish properly before entering back in reset */
390
+ assert(!s->exit_phase_in_progress);
391
+
392
+ trace_resettable_phase_enter_begin(obj, obj_typename, s->count, type);
393
+
394
+ /* Only take action if we really enter reset for the 1st time. */
395
+ /*
396
+ * TODO: if adding more ResetType support, some additional checks
397
+ * are probably needed here.
398
+ */
399
+ if (s->count++ == 0) {
400
+ action_needed = true;
401
+ }
402
+ /*
403
+ * We limit the count to an arbitrary "big" value. The value is big
404
+ * enough not to be triggered normally.
405
+ * The assert will stop an infinite loop if there is a cycle in the
406
+ * reset tree. The loop goes through resettable_foreach_child below
407
+ * which at some point will call us again.
408
+ */
409
+ assert(s->count <= 50);
410
+
411
+ /*
412
+ * handle the children even if action_needed is false so that
413
+ * child counts are incremented too
414
+ */
415
+ resettable_child_foreach(rc, obj, resettable_phase_enter, NULL, type);
416
+
417
+ /* execute enter phase for the object if needed */
418
+ if (action_needed) {
419
+ trace_resettable_phase_enter_exec(obj, obj_typename, type,
420
+ !!rc->phases.enter);
421
+ if (rc->phases.enter && !resettable_get_tr_func(rc, obj)) {
422
+ rc->phases.enter(obj, type);
423
+ }
424
+ s->hold_phase_pending = true;
425
+ }
426
+ trace_resettable_phase_enter_end(obj, obj_typename, s->count);
427
+}
428
+
429
+static void resettable_phase_hold(Object *obj, void *opaque, ResetType type)
430
+{
431
+ ResettableClass *rc = RESETTABLE_GET_CLASS(obj);
432
+ ResettableState *s = rc->get_state(obj);
433
+ const char *obj_typename = object_get_typename(obj);
434
+
435
+ /* exit phase has to finish properly before entering back in reset */
436
+ assert(!s->exit_phase_in_progress);
437
+
438
+ trace_resettable_phase_hold_begin(obj, obj_typename, s->count, type);
439
+
440
+ /* handle children first */
441
+ resettable_child_foreach(rc, obj, resettable_phase_hold, NULL, type);
442
+
443
+ /* exec hold phase */
444
+ if (s->hold_phase_pending) {
445
+ s->hold_phase_pending = false;
446
+ ResettableTrFunction tr_func = resettable_get_tr_func(rc, obj);
447
+ trace_resettable_phase_hold_exec(obj, obj_typename, !!rc->phases.hold);
448
+ if (tr_func) {
449
+ trace_resettable_transitional_function(obj, obj_typename);
450
+ tr_func(obj);
451
+ } else if (rc->phases.hold) {
452
+ rc->phases.hold(obj);
453
+ }
454
+ }
455
+ trace_resettable_phase_hold_end(obj, obj_typename, s->count);
456
+}
457
+
458
+static void resettable_phase_exit(Object *obj, void *opaque, ResetType type)
459
+{
460
+ ResettableClass *rc = RESETTABLE_GET_CLASS(obj);
461
+ ResettableState *s = rc->get_state(obj);
462
+ const char *obj_typename = object_get_typename(obj);
463
+
464
+ assert(!s->exit_phase_in_progress);
465
+ trace_resettable_phase_exit_begin(obj, obj_typename, s->count, type);
466
+
467
+ /* exit_phase_in_progress ensures this phase is 'atomic' */
468
+ s->exit_phase_in_progress = true;
469
+ resettable_child_foreach(rc, obj, resettable_phase_exit, NULL, type);
470
+
471
+ assert(s->count > 0);
472
+ if (s->count == 1) {
473
+ trace_resettable_phase_exit_exec(obj, obj_typename, !!rc->phases.exit);
474
+ if (rc->phases.exit && !resettable_get_tr_func(rc, obj)) {
475
+ rc->phases.exit(obj);
476
+ }
477
+ s->count = 0;
478
+ }
479
+ s->exit_phase_in_progress = false;
480
+ trace_resettable_phase_exit_end(obj, obj_typename, s->count);
481
+}
482
+
483
+void resettable_class_set_parent_phases(ResettableClass *rc,
484
+ ResettableEnterPhase enter,
485
+ ResettableHoldPhase hold,
486
+ ResettableExitPhase exit,
487
+ ResettablePhases *parent_phases)
488
+{
489
+ *parent_phases = rc->phases;
490
+ if (enter) {
491
+ rc->phases.enter = enter;
492
+ }
493
+ if (hold) {
494
+ rc->phases.hold = hold;
495
+ }
496
+ if (exit) {
497
+ rc->phases.exit = exit;
498
+ }
48
+ }
499
+}
49
+}
500
+
50
+
501
+static const TypeInfo resettable_interface_info = {
51
static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
502
+ .name = TYPE_RESETTABLE_INTERFACE,
52
{
503
+ .parent = TYPE_INTERFACE,
53
TCGv_ptr ret = tcg_temp_new_ptr();
504
+ .class_size = sizeof(ResettableClass),
54
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
505
+};
506
+
507
+static void reset_register_types(void)
508
+{
509
+ type_register_static(&resettable_interface_info);
510
+}
511
+
512
+type_init(reset_register_types)
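
(Usage sketch, not part of the patch; `dev` stands for some hypothetical DeviceState pointer. A one-shot cold reset goes through resettable_reset(), while a caller modelling an external reset line pairs the assert/release calls declared above.)

/* One-shot cold reset of the subtree rooted at OBJECT(dev). */
resettable_reset(OBJECT(dev), RESET_TYPE_COLD);

/* Holding the object in reset while, e.g., an external reset line is asserted. */
resettable_assert_reset(OBJECT(dev), RESET_TYPE_COLD);
assert(resettable_is_in_reset(OBJECT(dev)));
/* ... later, when the line is deasserted ... */
resettable_release_reset(OBJECT(dev), RESET_TYPE_COLD);
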
513
diff --git a/hw/core/trace-events b/hw/core/trace-events
514
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
515
--- a/hw/core/trace-events
56
--- a/target/arm/translate-neon.c.inc
516
+++ b/hw/core/trace-events
57
+++ b/target/arm/translate-neon.c.inc
517
@@ -XXX,XX +XXX,XX @@ qbus_reset(void *obj, const char *objtype) "obj=%p(%s)"
58
@@ -XXX,XX +XXX,XX @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
518
qbus_reset_all(void *obj, const char *objtype) "obj=%p(%s)"
59
* early. Since Q is 0 there are always just two passes, so instead
519
qbus_reset_tree(void *obj, const char *objtype) "obj=%p(%s)"
60
* of a complicated loop over each pass we just unroll.
520
qdev_update_parent_bus(void *obj, const char *objtype, void *oldp, const char *oldptype, void *newp, const char *newptype) "obj=%p(%s) old_parent=%p(%s) new_parent=%p(%s)"
61
*/
521
+
62
- tmp = neon_load_reg(a->vn, 0);
522
+# resettable.c
63
- tmp2 = neon_load_reg(a->vn, 1);
523
+resettable_reset(void *obj, int cold) "obj=%p cold=%d"
64
+ tmp = tcg_temp_new_i32();
524
+resettable_reset_assert_begin(void *obj, int cold) "obj=%p cold=%d"
65
+ tmp2 = tcg_temp_new_i32();
525
+resettable_reset_assert_end(void *obj) "obj=%p"
66
+ tmp3 = tcg_temp_new_i32();
526
+resettable_reset_release_begin(void *obj, int cold) "obj=%p cold=%d"
67
+
527
+resettable_reset_release_end(void *obj) "obj=%p"
68
+ read_neon_element32(tmp, a->vn, 0, MO_32);
528
+resettable_phase_enter_begin(void *obj, const char *objtype, unsigned count, int type) "obj=%p(%s) count=%d type=%d"
69
+ read_neon_element32(tmp2, a->vn, 1, MO_32);
529
+resettable_phase_enter_exec(void *obj, const char *objtype, int type, int has_method) "obj=%p(%s) type=%d method=%d"
70
fn(tmp, tmp, tmp2);
530
+resettable_phase_enter_end(void *obj, const char *objtype, unsigned count) "obj=%p(%s) count=%d"
71
- tcg_temp_free_i32(tmp2);
531
+resettable_phase_hold_begin(void *obj, const char *objtype, unsigned count, int type) "obj=%p(%s) count=%d type=%d"
72
532
+resettable_phase_hold_exec(void *obj, const char *objtype, int has_method) "obj=%p(%s) method=%d"
73
- tmp3 = neon_load_reg(a->vm, 0);
533
+resettable_phase_hold_end(void *obj, const char *objtype, unsigned count) "obj=%p(%s) count=%d"
74
- tmp2 = neon_load_reg(a->vm, 1);
534
+resettable_phase_exit_begin(void *obj, const char *objtype, unsigned count, int type) "obj=%p(%s) count=%d type=%d"
75
+ read_neon_element32(tmp3, a->vm, 0, MO_32);
535
+resettable_phase_exit_exec(void *obj, const char *objtype, int has_method) "obj=%p(%s) method=%d"
76
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
536
+resettable_phase_exit_end(void *obj, const char *objtype, unsigned count) "obj=%p(%s) count=%d"
77
fn(tmp3, tmp3, tmp2);
537
+resettable_transitional_function(void *obj, const char *objtype) "obj=%p(%s)"
78
- tcg_temp_free_i32(tmp2);
79
80
- neon_store_reg(a->vd, 0, tmp);
81
- neon_store_reg(a->vd, 1, tmp3);
82
+ write_neon_element32(tmp, a->vd, 0, MO_32);
83
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
84
+
85
+ tcg_temp_free_i32(tmp);
86
+ tcg_temp_free_i32(tmp2);
87
+ tcg_temp_free_i32(tmp3);
88
return true;
89
}
90
91
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
92
* 2-reg-and-shift operations, size < 3 case, where the
93
* helper needs to be passed cpu_env.
94
*/
95
- TCGv_i32 constimm;
96
+ TCGv_i32 constimm, tmp;
97
int pass;
98
99
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
100
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
101
* by immediate using the variable shift operations.
102
*/
103
constimm = tcg_const_i32(dup_const(a->size, a->shift));
104
+ tmp = tcg_temp_new_i32();
105
106
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
107
- TCGv_i32 tmp = neon_load_reg(a->vm, pass);
108
+ read_neon_element32(tmp, a->vm, pass, MO_32);
109
fn(tmp, cpu_env, tmp, constimm);
110
- neon_store_reg(a->vd, pass, tmp);
111
+ write_neon_element32(tmp, a->vd, pass, MO_32);
112
}
113
+ tcg_temp_free_i32(tmp);
114
tcg_temp_free_i32(constimm);
115
return true;
116
}
117
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
118
constimm = tcg_const_i64(-a->shift);
119
rm1 = tcg_temp_new_i64();
120
rm2 = tcg_temp_new_i64();
121
+ rd = tcg_temp_new_i32();
122
123
/* Load both inputs first to avoid potential overwrite if rm == rd */
124
neon_load_reg64(rm1, a->vm);
125
neon_load_reg64(rm2, a->vm + 1);
126
127
shiftfn(rm1, rm1, constimm);
128
- rd = tcg_temp_new_i32();
129
narrowfn(rd, cpu_env, rm1);
130
- neon_store_reg(a->vd, 0, rd);
131
+ write_neon_element32(rd, a->vd, 0, MO_32);
132
133
shiftfn(rm2, rm2, constimm);
134
- rd = tcg_temp_new_i32();
135
narrowfn(rd, cpu_env, rm2);
136
- neon_store_reg(a->vd, 1, rd);
137
+ write_neon_element32(rd, a->vd, 1, MO_32);
138
139
+ tcg_temp_free_i32(rd);
140
tcg_temp_free_i64(rm1);
141
tcg_temp_free_i64(rm2);
142
tcg_temp_free_i64(constimm);
143
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
144
constimm = tcg_const_i32(imm);
145
146
/* Load all inputs first to avoid potential overwrite */
147
- rm1 = neon_load_reg(a->vm, 0);
148
- rm2 = neon_load_reg(a->vm, 1);
149
- rm3 = neon_load_reg(a->vm + 1, 0);
150
- rm4 = neon_load_reg(a->vm + 1, 1);
151
+ rm1 = tcg_temp_new_i32();
152
+ rm2 = tcg_temp_new_i32();
153
+ rm3 = tcg_temp_new_i32();
154
+ rm4 = tcg_temp_new_i32();
155
+ read_neon_element32(rm1, a->vm, 0, MO_32);
156
+ read_neon_element32(rm2, a->vm, 1, MO_32);
157
+ read_neon_element32(rm3, a->vm, 2, MO_32);
158
+ read_neon_element32(rm4, a->vm, 3, MO_32);
159
rtmp = tcg_temp_new_i64();
160
161
shiftfn(rm1, rm1, constimm);
162
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
163
tcg_temp_free_i32(rm2);
164
165
narrowfn(rm1, cpu_env, rtmp);
166
- neon_store_reg(a->vd, 0, rm1);
167
+ write_neon_element32(rm1, a->vd, 0, MO_32);
168
+ tcg_temp_free_i32(rm1);
169
170
shiftfn(rm3, rm3, constimm);
171
shiftfn(rm4, rm4, constimm);
172
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
173
174
narrowfn(rm3, cpu_env, rtmp);
175
tcg_temp_free_i64(rtmp);
176
- neon_store_reg(a->vd, 1, rm3);
177
+ write_neon_element32(rm3, a->vd, 1, MO_32);
178
+ tcg_temp_free_i32(rm3);
179
return true;
180
}
181
182
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
183
widen_mask = dup_const(a->size + 1, widen_mask);
184
}
185
186
- rm0 = neon_load_reg(a->vm, 0);
187
- rm1 = neon_load_reg(a->vm, 1);
188
+ rm0 = tcg_temp_new_i32();
189
+ rm1 = tcg_temp_new_i32();
190
+ read_neon_element32(rm0, a->vm, 0, MO_32);
191
+ read_neon_element32(rm1, a->vm, 1, MO_32);
192
tmp = tcg_temp_new_i64();
193
194
widenfn(tmp, rm0);
195
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
196
if (src1_wide) {
197
neon_load_reg64(rn0_64, a->vn);
198
} else {
199
- TCGv_i32 tmp = neon_load_reg(a->vn, 0);
200
+ TCGv_i32 tmp = tcg_temp_new_i32();
201
+ read_neon_element32(tmp, a->vn, 0, MO_32);
202
widenfn(rn0_64, tmp);
203
tcg_temp_free_i32(tmp);
204
}
205
- rm = neon_load_reg(a->vm, 0);
206
+ rm = tcg_temp_new_i32();
207
+ read_neon_element32(rm, a->vm, 0, MO_32);
208
209
widenfn(rm_64, rm);
210
tcg_temp_free_i32(rm);
211
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
212
if (src1_wide) {
213
neon_load_reg64(rn1_64, a->vn + 1);
214
} else {
215
- TCGv_i32 tmp = neon_load_reg(a->vn, 1);
216
+ TCGv_i32 tmp = tcg_temp_new_i32();
217
+ read_neon_element32(tmp, a->vn, 1, MO_32);
218
widenfn(rn1_64, tmp);
219
tcg_temp_free_i32(tmp);
220
}
221
- rm = neon_load_reg(a->vm, 1);
222
+ rm = tcg_temp_new_i32();
223
+ read_neon_element32(rm, a->vm, 1, MO_32);
224
225
neon_store_reg64(rn0_64, a->vd);
226
227
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
228
229
narrowfn(rd1, rn_64);
230
231
- neon_store_reg(a->vd, 0, rd0);
232
- neon_store_reg(a->vd, 1, rd1);
233
+ write_neon_element32(rd0, a->vd, 0, MO_32);
234
+ write_neon_element32(rd1, a->vd, 1, MO_32);
235
236
+ tcg_temp_free_i32(rd0);
237
+ tcg_temp_free_i32(rd1);
238
tcg_temp_free_i64(rn_64);
239
tcg_temp_free_i64(rm_64);
240
241
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
242
rd0 = tcg_temp_new_i64();
243
rd1 = tcg_temp_new_i64();
244
245
- rn = neon_load_reg(a->vn, 0);
246
- rm = neon_load_reg(a->vm, 0);
247
+ rn = tcg_temp_new_i32();
248
+ rm = tcg_temp_new_i32();
249
+ read_neon_element32(rn, a->vn, 0, MO_32);
250
+ read_neon_element32(rm, a->vm, 0, MO_32);
251
opfn(rd0, rn, rm);
252
- tcg_temp_free_i32(rn);
253
- tcg_temp_free_i32(rm);
254
255
- rn = neon_load_reg(a->vn, 1);
256
- rm = neon_load_reg(a->vm, 1);
257
+ read_neon_element32(rn, a->vn, 1, MO_32);
258
+ read_neon_element32(rm, a->vm, 1, MO_32);
259
opfn(rd1, rn, rm);
260
tcg_temp_free_i32(rn);
261
tcg_temp_free_i32(rm);
262
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
263
264
static inline TCGv_i32 neon_get_scalar(int size, int reg)
265
{
266
- TCGv_i32 tmp;
267
- if (size == 1) {
268
- tmp = neon_load_reg(reg & 7, reg >> 4);
269
+ TCGv_i32 tmp = tcg_temp_new_i32();
270
+ if (size == MO_16) {
271
+ read_neon_element32(tmp, reg & 7, reg >> 4, MO_32);
272
if (reg & 8) {
273
gen_neon_dup_high16(tmp);
274
} else {
275
gen_neon_dup_low16(tmp);
276
}
277
} else {
278
- tmp = neon_load_reg(reg & 15, reg >> 4);
279
+ read_neon_element32(tmp, reg & 15, reg >> 4, MO_32);
280
}
281
return tmp;
282
}
283
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
284
* perform an accumulation operation of that result into the
285
* destination.
286
*/
287
- TCGv_i32 scalar;
288
+ TCGv_i32 scalar, tmp;
289
int pass;
290
291
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
292
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
293
}
294
295
scalar = neon_get_scalar(a->size, a->vm);
296
+ tmp = tcg_temp_new_i32();
297
298
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
299
- TCGv_i32 tmp = neon_load_reg(a->vn, pass);
300
+ read_neon_element32(tmp, a->vn, pass, MO_32);
301
opfn(tmp, tmp, scalar);
302
if (accfn) {
303
- TCGv_i32 rd = neon_load_reg(a->vd, pass);
304
+ TCGv_i32 rd = tcg_temp_new_i32();
305
+ read_neon_element32(rd, a->vd, pass, MO_32);
306
accfn(tmp, rd, tmp);
307
tcg_temp_free_i32(rd);
308
}
309
- neon_store_reg(a->vd, pass, tmp);
310
+ write_neon_element32(tmp, a->vd, pass, MO_32);
311
}
312
+ tcg_temp_free_i32(tmp);
313
tcg_temp_free_i32(scalar);
314
return true;
315
}
316
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
317
* performs a kind of fused op-then-accumulate using a helper
318
* function that takes all of rd, rn and the scalar at once.
319
*/
320
- TCGv_i32 scalar;
321
+ TCGv_i32 scalar, rn, rd;
322
int pass;
323
324
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
325
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
326
}
327
328
scalar = neon_get_scalar(a->size, a->vm);
329
+ rn = tcg_temp_new_i32();
330
+ rd = tcg_temp_new_i32();
331
332
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
333
- TCGv_i32 rn = neon_load_reg(a->vn, pass);
334
- TCGv_i32 rd = neon_load_reg(a->vd, pass);
335
+ read_neon_element32(rn, a->vn, pass, MO_32);
336
+ read_neon_element32(rd, a->vd, pass, MO_32);
337
opfn(rd, cpu_env, rn, scalar, rd);
338
- tcg_temp_free_i32(rn);
339
- neon_store_reg(a->vd, pass, rd);
340
+ write_neon_element32(rd, a->vd, pass, MO_32);
341
}
342
+ tcg_temp_free_i32(rn);
343
+ tcg_temp_free_i32(rd);
344
tcg_temp_free_i32(scalar);
345
346
return true;
347
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
348
scalar = neon_get_scalar(a->size, a->vm);
349
350
/* Load all inputs before writing any outputs, in case of overlap */
351
- rn = neon_load_reg(a->vn, 0);
352
+ rn = tcg_temp_new_i32();
353
+ read_neon_element32(rn, a->vn, 0, MO_32);
354
rn0_64 = tcg_temp_new_i64();
355
opfn(rn0_64, rn, scalar);
356
- tcg_temp_free_i32(rn);
357
358
- rn = neon_load_reg(a->vn, 1);
359
+ read_neon_element32(rn, a->vn, 1, MO_32);
360
rn1_64 = tcg_temp_new_i64();
361
opfn(rn1_64, rn, scalar);
362
tcg_temp_free_i32(rn);
363
@@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a)
364
return false;
365
}
366
n <<= 3;
367
+ tmp = tcg_temp_new_i32();
368
if (a->op) {
369
- tmp = neon_load_reg(a->vd, 0);
370
+ read_neon_element32(tmp, a->vd, 0, MO_32);
371
} else {
372
- tmp = tcg_temp_new_i32();
373
tcg_gen_movi_i32(tmp, 0);
374
}
375
- tmp2 = neon_load_reg(a->vm, 0);
376
+ tmp2 = tcg_temp_new_i32();
377
+ read_neon_element32(tmp2, a->vm, 0, MO_32);
378
ptr1 = vfp_reg_ptr(true, a->vn);
379
tmp4 = tcg_const_i32(n);
380
gen_helper_neon_tbl(tmp2, tmp2, tmp, ptr1, tmp4);
381
- tcg_temp_free_i32(tmp);
382
+
383
if (a->op) {
384
- tmp = neon_load_reg(a->vd, 1);
385
+ read_neon_element32(tmp, a->vd, 1, MO_32);
386
} else {
387
- tmp = tcg_temp_new_i32();
388
tcg_gen_movi_i32(tmp, 0);
389
}
390
- tmp3 = neon_load_reg(a->vm, 1);
391
+ tmp3 = tcg_temp_new_i32();
392
+ read_neon_element32(tmp3, a->vm, 1, MO_32);
393
gen_helper_neon_tbl(tmp3, tmp3, tmp, ptr1, tmp4);
394
+ tcg_temp_free_i32(tmp);
395
tcg_temp_free_i32(tmp4);
396
tcg_temp_free_ptr(ptr1);
397
- neon_store_reg(a->vd, 0, tmp2);
398
- neon_store_reg(a->vd, 1, tmp3);
399
- tcg_temp_free_i32(tmp);
400
+
401
+ write_neon_element32(tmp2, a->vd, 0, MO_32);
402
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
403
+ tcg_temp_free_i32(tmp2);
404
+ tcg_temp_free_i32(tmp3);
405
return true;
406
}
407
408
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
409
static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
410
{
411
int pass, half;
412
+ TCGv_i32 tmp[2];
413
414
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
415
return false;
416
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
417
return true;
418
}
419
420
- for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
421
- TCGv_i32 tmp[2];
422
+ tmp[0] = tcg_temp_new_i32();
423
+ tmp[1] = tcg_temp_new_i32();
424
425
+ for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
426
for (half = 0; half < 2; half++) {
427
- tmp[half] = neon_load_reg(a->vm, pass * 2 + half);
428
+ read_neon_element32(tmp[half], a->vm, pass * 2 + half, MO_32);
429
switch (a->size) {
430
case 0:
431
tcg_gen_bswap32_i32(tmp[half], tmp[half]);
432
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
433
g_assert_not_reached();
434
}
435
}
436
- neon_store_reg(a->vd, pass * 2, tmp[1]);
437
- neon_store_reg(a->vd, pass * 2 + 1, tmp[0]);
438
+ write_neon_element32(tmp[1], a->vd, pass * 2, MO_32);
439
+ write_neon_element32(tmp[0], a->vd, pass * 2 + 1, MO_32);
440
}
441
+
442
+ tcg_temp_free_i32(tmp[0]);
443
+ tcg_temp_free_i32(tmp[1]);
444
return true;
445
}
446
447
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
448
rm0_64 = tcg_temp_new_i64();
449
rm1_64 = tcg_temp_new_i64();
450
rd_64 = tcg_temp_new_i64();
451
- tmp = neon_load_reg(a->vm, pass * 2);
452
+
453
+ tmp = tcg_temp_new_i32();
454
+ read_neon_element32(tmp, a->vm, pass * 2, MO_32);
455
widenfn(rm0_64, tmp);
456
- tcg_temp_free_i32(tmp);
457
- tmp = neon_load_reg(a->vm, pass * 2 + 1);
458
+ read_neon_element32(tmp, a->vm, pass * 2 + 1, MO_32);
459
widenfn(rm1_64, tmp);
460
tcg_temp_free_i32(tmp);
461
+
462
opfn(rd_64, rm0_64, rm1_64);
463
tcg_temp_free_i64(rm0_64);
464
tcg_temp_free_i64(rm1_64);
465
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
466
narrowfn(rd0, cpu_env, rm);
467
neon_load_reg64(rm, a->vm + 1);
468
narrowfn(rd1, cpu_env, rm);
469
- neon_store_reg(a->vd, 0, rd0);
470
- neon_store_reg(a->vd, 1, rd1);
471
+ write_neon_element32(rd0, a->vd, 0, MO_32);
472
+ write_neon_element32(rd1, a->vd, 1, MO_32);
473
+ tcg_temp_free_i32(rd0);
474
+ tcg_temp_free_i32(rd1);
475
tcg_temp_free_i64(rm);
476
return true;
477
}
478
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
479
}
480
481
rd = tcg_temp_new_i64();
482
+ rm0 = tcg_temp_new_i32();
483
+ rm1 = tcg_temp_new_i32();
484
485
- rm0 = neon_load_reg(a->vm, 0);
486
- rm1 = neon_load_reg(a->vm, 1);
487
+ read_neon_element32(rm0, a->vm, 0, MO_32);
488
+ read_neon_element32(rm1, a->vm, 1, MO_32);
489
490
widenfn(rd, rm0);
491
tcg_gen_shli_i64(rd, rd, 8 << a->size);
492
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)
493
494
fpst = fpstatus_ptr(FPST_STD);
495
ahp = get_ahp_flag();
496
- tmp = neon_load_reg(a->vm, 0);
497
+ tmp = tcg_temp_new_i32();
498
+ read_neon_element32(tmp, a->vm, 0, MO_32);
499
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
500
- tmp2 = neon_load_reg(a->vm, 1);
501
+ tmp2 = tcg_temp_new_i32();
502
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
503
gen_helper_vfp_fcvt_f32_to_f16(tmp2, tmp2, fpst, ahp);
504
tcg_gen_shli_i32(tmp2, tmp2, 16);
505
tcg_gen_or_i32(tmp2, tmp2, tmp);
506
- tcg_temp_free_i32(tmp);
507
- tmp = neon_load_reg(a->vm, 2);
508
+ read_neon_element32(tmp, a->vm, 2, MO_32);
509
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
510
- tmp3 = neon_load_reg(a->vm, 3);
511
- neon_store_reg(a->vd, 0, tmp2);
512
+ tmp3 = tcg_temp_new_i32();
513
+ read_neon_element32(tmp3, a->vm, 3, MO_32);
514
+ write_neon_element32(tmp2, a->vd, 0, MO_32);
515
+ tcg_temp_free_i32(tmp2);
516
gen_helper_vfp_fcvt_f32_to_f16(tmp3, tmp3, fpst, ahp);
517
tcg_gen_shli_i32(tmp3, tmp3, 16);
518
tcg_gen_or_i32(tmp3, tmp3, tmp);
519
- neon_store_reg(a->vd, 1, tmp3);
520
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
521
+ tcg_temp_free_i32(tmp3);
522
tcg_temp_free_i32(tmp);
523
tcg_temp_free_i32(ahp);
524
tcg_temp_free_ptr(fpst);
525
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
526
fpst = fpstatus_ptr(FPST_STD);
527
ahp = get_ahp_flag();
528
tmp3 = tcg_temp_new_i32();
529
- tmp = neon_load_reg(a->vm, 0);
530
- tmp2 = neon_load_reg(a->vm, 1);
531
+ tmp2 = tcg_temp_new_i32();
532
+ tmp = tcg_temp_new_i32();
533
+ read_neon_element32(tmp, a->vm, 0, MO_32);
534
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
535
tcg_gen_ext16u_i32(tmp3, tmp);
536
gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
537
- neon_store_reg(a->vd, 0, tmp3);
538
+ write_neon_element32(tmp3, a->vd, 0, MO_32);
539
tcg_gen_shri_i32(tmp, tmp, 16);
540
gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp);
541
- neon_store_reg(a->vd, 1, tmp);
542
- tmp3 = tcg_temp_new_i32();
543
+ write_neon_element32(tmp, a->vd, 1, MO_32);
544
+ tcg_temp_free_i32(tmp);
545
tcg_gen_ext16u_i32(tmp3, tmp2);
546
gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
547
- neon_store_reg(a->vd, 2, tmp3);
548
+ write_neon_element32(tmp3, a->vd, 2, MO_32);
549
+ tcg_temp_free_i32(tmp3);
550
tcg_gen_shri_i32(tmp2, tmp2, 16);
551
gen_helper_vfp_fcvt_f16_to_f32(tmp2, tmp2, fpst, ahp);
552
- neon_store_reg(a->vd, 3, tmp2);
553
+ write_neon_element32(tmp2, a->vd, 3, MO_32);
554
+ tcg_temp_free_i32(tmp2);
555
tcg_temp_free_i32(ahp);
556
tcg_temp_free_ptr(fpst);
557
558
@@ -XXX,XX +XXX,XX @@ DO_2M_CRYPTO(SHA256SU0, aa32_sha2, 2)
559
560
static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
561
{
562
+ TCGv_i32 tmp;
563
int pass;
564
565
/* Handle a 2-reg-misc operation by iterating 32 bits at a time */
566
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
567
return true;
568
}
569
570
+ tmp = tcg_temp_new_i32();
571
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
572
- TCGv_i32 tmp = neon_load_reg(a->vm, pass);
573
+ read_neon_element32(tmp, a->vm, pass, MO_32);
574
fn(tmp, tmp);
575
- neon_store_reg(a->vd, pass, tmp);
576
+ write_neon_element32(tmp, a->vd, pass, MO_32);
577
}
578
+ tcg_temp_free_i32(tmp);
579
580
return true;
581
}
582
@@ -XXX,XX +XXX,XX @@ static bool trans_VTRN(DisasContext *s, arg_2misc *a)
583
return true;
584
}
585
586
- if (a->size == 2) {
587
+ tmp = tcg_temp_new_i32();
588
+ tmp2 = tcg_temp_new_i32();
589
+ if (a->size == MO_32) {
590
for (pass = 0; pass < (a->q ? 4 : 2); pass += 2) {
591
- tmp = neon_load_reg(a->vm, pass);
592
- tmp2 = neon_load_reg(a->vd, pass + 1);
593
- neon_store_reg(a->vm, pass, tmp2);
594
- neon_store_reg(a->vd, pass + 1, tmp);
595
+ read_neon_element32(tmp, a->vm, pass, MO_32);
596
+ read_neon_element32(tmp2, a->vd, pass + 1, MO_32);
597
+ write_neon_element32(tmp2, a->vm, pass, MO_32);
598
+ write_neon_element32(tmp, a->vd, pass + 1, MO_32);
599
}
600
} else {
601
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
602
- tmp = neon_load_reg(a->vm, pass);
603
- tmp2 = neon_load_reg(a->vd, pass);
604
- if (a->size == 0) {
605
+ read_neon_element32(tmp, a->vm, pass, MO_32);
606
+ read_neon_element32(tmp2, a->vd, pass, MO_32);
607
+ if (a->size == MO_8) {
608
gen_neon_trn_u8(tmp, tmp2);
609
} else {
610
gen_neon_trn_u16(tmp, tmp2);
611
}
612
- neon_store_reg(a->vm, pass, tmp2);
613
- neon_store_reg(a->vd, pass, tmp);
614
+ write_neon_element32(tmp2, a->vm, pass, MO_32);
615
+ write_neon_element32(tmp, a->vd, pass, MO_32);
616
}
617
}
618
+ tcg_temp_free_i32(tmp);
619
+ tcg_temp_free_i32(tmp2);
620
return true;
621
}
--
2.20.1

1
From: Andrew Jones <drjones@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Signed-off-by: Andrew Jones <drjones@redhat.com>
3
We can then use this to improve VMOV (scalar to gp) and
4
Message-id: 20200120101023.16030-3-drjones@redhat.com
4
VMOV (gp to scalar) so that we simply perform the memory
5
operation that we wanted, rather than inserting or
6
extracting from a 32-bit quantity.
7
8
These were the last uses of neon_load/store_reg, so remove them.
9
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20201030022618.785675-7-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
14
---
8
hw/arm/virt.c | 1 +
15
target/arm/translate.c | 50 +++++++++++++-----------
9
1 file changed, 1 insertion(+)
16
target/arm/translate-vfp.c.inc | 71 +++++-----------------------------
10
17
2 files changed, 37 insertions(+), 84 deletions(-)
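
(Purely illustrative, mirroring the hunks below rather than adding anything new: with the MemOp-typed accessors, a sign- or zero-extending element access is expressed directly in the memop instead of a 32-bit load followed by shifts and extends. The element index and surrounding code here are hypothetical.)

/* Hypothetical caller: read a sign-extended byte element, then store a byte. */
TCGv_i32 tmp = tcg_temp_new_i32();

read_neon_element32(tmp, a->vn, 3, MO_SB);   /* ld8s: no shift/extend sequence needed */
/* ... operate on tmp ... */
write_neon_element32(tmp, a->vd, 3, MO_8);   /* st8: leaves the other bytes untouched */
tcg_temp_free_i32(tmp);
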
11
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
18
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/arm/virt.c
21
--- a/target/arm/translate.c
14
+++ b/hw/arm/virt.c
22
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ DEFINE_VIRT_MACHINE_AS_LATEST(5, 0)
23
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
16
24
* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
17
static void virt_machine_4_2_options(MachineClass *mc)
25
* where 0 is the least significant end of the register.
18
{
26
*/
19
+ virt_machine_5_0_options(mc);
27
-static long neon_element_offset(int reg, int element, MemOp size)
20
compat_props_add(mc->compat_props, hw_compat_4_2, hw_compat_4_2_len);
28
+static long neon_element_offset(int reg, int element, MemOp memop)
21
}
29
{
22
DEFINE_VIRT_MACHINE(4, 2)
30
- int element_size = 1 << size;
31
+ int element_size = 1 << (memop & MO_SIZE);
32
int ofs = element * element_size;
33
#ifdef HOST_WORDS_BIGENDIAN
34
/*
35
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
36
}
37
}
38
39
-static TCGv_i32 neon_load_reg(int reg, int pass)
40
-{
41
- TCGv_i32 tmp = tcg_temp_new_i32();
42
- tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
43
- return tmp;
44
-}
45
-
46
-static void neon_store_reg(int reg, int pass, TCGv_i32 var)
47
-{
48
- tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
49
- tcg_temp_free_i32(var);
50
-}
51
-
52
static inline void neon_load_reg64(TCGv_i64 var, int reg)
53
{
54
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
55
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
56
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
57
}
58
59
-static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
60
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
61
{
62
- long off = neon_element_offset(reg, ele, size);
63
+ long off = neon_element_offset(reg, ele, memop);
64
65
- switch (size) {
66
- case MO_32:
67
+ switch (memop) {
68
+ case MO_SB:
69
+ tcg_gen_ld8s_i32(dest, cpu_env, off);
70
+ break;
71
+ case MO_UB:
72
+ tcg_gen_ld8u_i32(dest, cpu_env, off);
73
+ break;
74
+ case MO_SW:
75
+ tcg_gen_ld16s_i32(dest, cpu_env, off);
76
+ break;
77
+ case MO_UW:
78
+ tcg_gen_ld16u_i32(dest, cpu_env, off);
79
+ break;
80
+ case MO_UL:
81
+ case MO_SL:
82
tcg_gen_ld_i32(dest, cpu_env, off);
83
break;
84
default:
85
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
86
}
87
}
88
89
-static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
90
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
91
{
92
- long off = neon_element_offset(reg, ele, size);
93
+ long off = neon_element_offset(reg, ele, memop);
94
95
- switch (size) {
96
+ switch (memop) {
97
+ case MO_8:
98
+ tcg_gen_st8_i32(src, cpu_env, off);
99
+ break;
100
+ case MO_16:
101
+ tcg_gen_st16_i32(src, cpu_env, off);
102
+ break;
103
case MO_32:
104
tcg_gen_st_i32(src, cpu_env, off);
105
break;
106
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/translate-vfp.c.inc
109
+++ b/target/arm/translate-vfp.c.inc
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
111
{
112
/* VMOV scalar to general purpose register */
113
TCGv_i32 tmp;
114
- int pass;
115
- uint32_t offset;
116
117
- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
118
- if (a->size == 2
119
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
120
+ if (a->size == MO_32
121
? !dc_isar_feature(aa32_fpsp_v2, s)
122
: !arm_dc_feature(s, ARM_FEATURE_NEON)) {
123
return false;
124
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
125
return false;
126
}
127
128
- offset = a->index << a->size;
129
- pass = extract32(offset, 2, 1);
130
- offset = extract32(offset, 0, 2) * 8;
131
-
132
if (!vfp_access_check(s)) {
133
return true;
134
}
135
136
- tmp = neon_load_reg(a->vn, pass);
137
- switch (a->size) {
138
- case 0:
139
- if (offset) {
140
- tcg_gen_shri_i32(tmp, tmp, offset);
141
- }
142
- if (a->u) {
143
- gen_uxtb(tmp);
144
- } else {
145
- gen_sxtb(tmp);
146
- }
147
- break;
148
- case 1:
149
- if (a->u) {
150
- if (offset) {
151
- tcg_gen_shri_i32(tmp, tmp, 16);
152
- } else {
153
- gen_uxth(tmp);
154
- }
155
- } else {
156
- if (offset) {
157
- tcg_gen_sari_i32(tmp, tmp, 16);
158
- } else {
159
- gen_sxth(tmp);
160
- }
161
- }
162
- break;
163
- case 2:
164
- break;
165
- }
166
+ tmp = tcg_temp_new_i32();
167
+ read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN));
168
store_reg(s, a->rt, tmp);
169
170
return true;
171
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
172
static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
173
{
174
/* VMOV general purpose register to scalar */
175
- TCGv_i32 tmp, tmp2;
176
- int pass;
177
- uint32_t offset;
178
+ TCGv_i32 tmp;
179
180
- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
181
- if (a->size == 2
182
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
183
+ if (a->size == MO_32
184
? !dc_isar_feature(aa32_fpsp_v2, s)
185
: !arm_dc_feature(s, ARM_FEATURE_NEON)) {
186
return false;
187
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
188
return false;
189
}
190
191
- offset = a->index << a->size;
192
- pass = extract32(offset, 2, 1);
193
- offset = extract32(offset, 0, 2) * 8;
194
-
195
if (!vfp_access_check(s)) {
196
return true;
197
}
198
199
tmp = load_reg(s, a->rt);
200
- switch (a->size) {
201
- case 0:
202
- tmp2 = neon_load_reg(a->vn, pass);
203
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 8);
204
- tcg_temp_free_i32(tmp2);
205
- break;
206
- case 1:
207
- tmp2 = neon_load_reg(a->vn, pass);
208
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 16);
209
- tcg_temp_free_i32(tmp2);
210
- break;
211
- case 2:
212
- break;
213
- }
214
- neon_store_reg(a->vn, pass, tmp);
215
+ write_neon_element32(tmp, a->vn, a->index, a->size);
216
+ tcg_temp_free_i32(tmp);
217
218
return true;
219
}
23
--
2.20.1

1
From: Damien Hedde <damien.hedde@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Replace deprecated qdev_reset_all by resettable_cold_reset_fn for
3
The only uses of this function are for loading VFP
4
the ipl registration in the main reset handlers.
4
single-precision values, and nothing to do with NEON.
5
5
6
This does not impact the behavior for the following reasons:
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
+ at this point resettable just call the old reset methods of devices
7
Message-id: 20201030022618.785675-8-richard.henderson@linaro.org
8
and buses in the same order than qdev/qbus.
9
+ resettable handlers registered with qemu_register_reset are
10
serialized; there is no interleaving.
11
+ eventual explicit calls to legacy reset API (device_reset or
12
qdev/qbus_reset) inside this reset handler will not be masked out
13
by resettable mechanism; they do not go through resettable api.
14
15
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
16
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20200123132823.1117486-12-damien.hedde@greensocs.com
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
---
10
---
23
hw/s390x/ipl.c | 10 +++++++++-
11
target/arm/translate.c | 4 +-
24
1 file changed, 9 insertions(+), 1 deletion(-)
12
target/arm/translate-vfp.c.inc | 184 ++++++++++++++++-----------------
13
2 files changed, 94 insertions(+), 94 deletions(-)
25
14
26
diff --git a/hw/s390x/ipl.c b/hw/s390x/ipl.c
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
27
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
28
--- a/hw/s390x/ipl.c
17
--- a/target/arm/translate.c
29
+++ b/hw/s390x/ipl.c
18
+++ b/target/arm/translate.c
30
@@ -XXX,XX +XXX,XX @@ static void s390_ipl_realize(DeviceState *dev, Error **errp)
19
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg64(TCGv_i64 var, int reg)
31
*/
20
tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
32
ipl->compat_start_addr = ipl->start_addr;
21
}
33
ipl->compat_bios_start_addr = ipl->bios_start_addr;
22
34
- qemu_register_reset(qdev_reset_all_fn, dev);
23
-static inline void neon_load_reg32(TCGv_i32 var, int reg)
35
+ /*
24
+static inline void vfp_load_reg32(TCGv_i32 var, int reg)
36
+ * Because this Device is not on any bus in the qbus tree (it is
25
{
37
+ * not a sysbus device and it's not on some other bus like a PCI
26
tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
38
+ * bus) it will not be automatically reset by the 'reset the
27
}
39
+ * sysbus' hook registered by vl.c like most devices. So we must
28
40
+ * manually register a reset hook for it.
29
-static inline void neon_store_reg32(TCGv_i32 var, int reg)
41
+ * TODO: there should be a better way to do this.
30
+static inline void vfp_store_reg32(TCGv_i32 var, int reg)
42
+ */
31
{
43
+ qemu_register_reset(resettable_cold_reset_fn, dev);
32
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
44
error:
33
}
45
error_propagate(errp, err);
34
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-vfp.c.inc
37
+++ b/target/arm/translate-vfp.c.inc
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
39
frn = tcg_temp_new_i32();
40
frm = tcg_temp_new_i32();
41
dest = tcg_temp_new_i32();
42
- neon_load_reg32(frn, rn);
43
- neon_load_reg32(frm, rm);
44
+ vfp_load_reg32(frn, rn);
45
+ vfp_load_reg32(frm, rm);
46
switch (a->cc) {
47
case 0: /* eq: Z */
48
tcg_gen_movcond_i32(TCG_COND_EQ, dest, cpu_ZF, zero,
49
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
50
if (sz == 1) {
51
tcg_gen_andi_i32(dest, dest, 0xffff);
52
}
53
- neon_store_reg32(dest, rd);
54
+ vfp_store_reg32(dest, rd);
55
tcg_temp_free_i32(frn);
56
tcg_temp_free_i32(frm);
57
tcg_temp_free_i32(dest);
58
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
59
TCGv_i32 tcg_res;
60
tcg_op = tcg_temp_new_i32();
61
tcg_res = tcg_temp_new_i32();
62
- neon_load_reg32(tcg_op, rm);
63
+ vfp_load_reg32(tcg_op, rm);
64
if (sz == 1) {
65
gen_helper_rinth(tcg_res, tcg_op, fpst);
66
} else {
67
gen_helper_rints(tcg_res, tcg_op, fpst);
68
}
69
- neon_store_reg32(tcg_res, rd);
70
+ vfp_store_reg32(tcg_res, rd);
71
tcg_temp_free_i32(tcg_op);
72
tcg_temp_free_i32(tcg_res);
73
}
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
75
gen_helper_vfp_tould(tcg_res, tcg_double, tcg_shift, fpst);
76
}
77
tcg_gen_extrl_i64_i32(tcg_tmp, tcg_res);
78
- neon_store_reg32(tcg_tmp, rd);
79
+ vfp_store_reg32(tcg_tmp, rd);
80
tcg_temp_free_i32(tcg_tmp);
81
tcg_temp_free_i64(tcg_res);
82
tcg_temp_free_i64(tcg_double);
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
84
TCGv_i32 tcg_single, tcg_res;
85
tcg_single = tcg_temp_new_i32();
86
tcg_res = tcg_temp_new_i32();
87
- neon_load_reg32(tcg_single, rm);
88
+ vfp_load_reg32(tcg_single, rm);
89
if (sz == 1) {
90
if (is_signed) {
91
gen_helper_vfp_toslh(tcg_res, tcg_single, tcg_shift, fpst);
92
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
93
gen_helper_vfp_touls(tcg_res, tcg_single, tcg_shift, fpst);
94
}
95
}
96
- neon_store_reg32(tcg_res, rd);
97
+ vfp_store_reg32(tcg_res, rd);
98
tcg_temp_free_i32(tcg_res);
99
tcg_temp_free_i32(tcg_single);
100
}
101
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
102
if (a->l) {
103
/* VFP to general purpose register */
104
tmp = tcg_temp_new_i32();
105
- neon_load_reg32(tmp, a->vn);
106
+ vfp_load_reg32(tmp, a->vn);
107
tcg_gen_andi_i32(tmp, tmp, 0xffff);
108
store_reg(s, a->rt, tmp);
109
} else {
110
/* general purpose register to VFP */
111
tmp = load_reg(s, a->rt);
112
tcg_gen_andi_i32(tmp, tmp, 0xffff);
113
- neon_store_reg32(tmp, a->vn);
114
+ vfp_store_reg32(tmp, a->vn);
115
tcg_temp_free_i32(tmp);
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
119
if (a->l) {
120
/* VFP to general purpose register */
121
tmp = tcg_temp_new_i32();
122
- neon_load_reg32(tmp, a->vn);
123
+ vfp_load_reg32(tmp, a->vn);
124
if (a->rt == 15) {
125
/* Set the 4 flag bits in the CPSR. */
126
gen_set_nzcv(tmp);
127
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
128
} else {
129
/* general purpose register to VFP */
130
tmp = load_reg(s, a->rt);
131
- neon_store_reg32(tmp, a->vn);
132
+ vfp_store_reg32(tmp, a->vn);
133
tcg_temp_free_i32(tmp);
134
}
135
136
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
137
if (a->op) {
138
/* fpreg to gpreg */
139
tmp = tcg_temp_new_i32();
140
- neon_load_reg32(tmp, a->vm);
141
+ vfp_load_reg32(tmp, a->vm);
142
store_reg(s, a->rt, tmp);
143
tmp = tcg_temp_new_i32();
144
- neon_load_reg32(tmp, a->vm + 1);
145
+ vfp_load_reg32(tmp, a->vm + 1);
146
store_reg(s, a->rt2, tmp);
147
} else {
148
/* gpreg to fpreg */
149
tmp = load_reg(s, a->rt);
150
- neon_store_reg32(tmp, a->vm);
151
+ vfp_store_reg32(tmp, a->vm);
152
tcg_temp_free_i32(tmp);
153
tmp = load_reg(s, a->rt2);
154
- neon_store_reg32(tmp, a->vm + 1);
155
+ vfp_store_reg32(tmp, a->vm + 1);
156
tcg_temp_free_i32(tmp);
157
}
158
159
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
160
if (a->op) {
161
/* fpreg to gpreg */
162
tmp = tcg_temp_new_i32();
163
- neon_load_reg32(tmp, a->vm * 2);
164
+ vfp_load_reg32(tmp, a->vm * 2);
165
store_reg(s, a->rt, tmp);
166
tmp = tcg_temp_new_i32();
167
- neon_load_reg32(tmp, a->vm * 2 + 1);
168
+ vfp_load_reg32(tmp, a->vm * 2 + 1);
169
store_reg(s, a->rt2, tmp);
170
} else {
171
/* gpreg to fpreg */
172
tmp = load_reg(s, a->rt);
173
- neon_store_reg32(tmp, a->vm * 2);
174
+ vfp_store_reg32(tmp, a->vm * 2);
175
tcg_temp_free_i32(tmp);
176
tmp = load_reg(s, a->rt2);
177
- neon_store_reg32(tmp, a->vm * 2 + 1);
178
+ vfp_store_reg32(tmp, a->vm * 2 + 1);
179
tcg_temp_free_i32(tmp);
180
}
181
182
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
183
tmp = tcg_temp_new_i32();
184
if (a->l) {
185
gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
186
- neon_store_reg32(tmp, a->vd);
187
+ vfp_store_reg32(tmp, a->vd);
188
} else {
189
- neon_load_reg32(tmp, a->vd);
190
+ vfp_load_reg32(tmp, a->vd);
191
gen_aa32_st16(s, tmp, addr, get_mem_index(s));
192
}
193
tcg_temp_free_i32(tmp);
194
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
195
tmp = tcg_temp_new_i32();
196
if (a->l) {
197
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
198
- neon_store_reg32(tmp, a->vd);
199
+ vfp_store_reg32(tmp, a->vd);
200
} else {
201
- neon_load_reg32(tmp, a->vd);
202
+ vfp_load_reg32(tmp, a->vd);
203
gen_aa32_st32(s, tmp, addr, get_mem_index(s));
204
}
205
tcg_temp_free_i32(tmp);
206
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
207
if (a->l) {
208
/* load */
209
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
210
- neon_store_reg32(tmp, a->vd + i);
211
+ vfp_store_reg32(tmp, a->vd + i);
212
} else {
213
/* store */
214
- neon_load_reg32(tmp, a->vd + i);
215
+ vfp_load_reg32(tmp, a->vd + i);
216
gen_aa32_st32(s, tmp, addr, get_mem_index(s));
217
}
218
tcg_gen_addi_i32(addr, addr, offset);
219
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
220
fd = tcg_temp_new_i32();
221
fpst = fpstatus_ptr(FPST_FPCR);
222
223
- neon_load_reg32(f0, vn);
224
- neon_load_reg32(f1, vm);
225
+ vfp_load_reg32(f0, vn);
226
+ vfp_load_reg32(f1, vm);
227
228
for (;;) {
229
if (reads_vd) {
230
- neon_load_reg32(fd, vd);
231
+ vfp_load_reg32(fd, vd);
232
}
233
fn(fd, f0, f1, fpst);
234
- neon_store_reg32(fd, vd);
235
+ vfp_store_reg32(fd, vd);
236
237
if (veclen == 0) {
238
break;
239
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
240
veclen--;
241
vd = vfp_advance_sreg(vd, delta_d);
242
vn = vfp_advance_sreg(vn, delta_d);
243
- neon_load_reg32(f0, vn);
244
+ vfp_load_reg32(f0, vn);
245
if (delta_m) {
246
vm = vfp_advance_sreg(vm, delta_m);
247
- neon_load_reg32(f1, vm);
248
+ vfp_load_reg32(f1, vm);
249
}
250
}
251
252
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_hp(DisasContext *s, VFPGen3OpSPFn *fn,
253
fd = tcg_temp_new_i32();
254
fpst = fpstatus_ptr(FPST_FPCR_F16);
255
256
- neon_load_reg32(f0, vn);
257
- neon_load_reg32(f1, vm);
258
+ vfp_load_reg32(f0, vn);
259
+ vfp_load_reg32(f1, vm);
260
261
if (reads_vd) {
262
- neon_load_reg32(fd, vd);
263
+ vfp_load_reg32(fd, vd);
264
}
265
fn(fd, f0, f1, fpst);
266
- neon_store_reg32(fd, vd);
267
+ vfp_store_reg32(fd, vd);
268
269
tcg_temp_free_i32(f0);
270
tcg_temp_free_i32(f1);
271
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
272
f0 = tcg_temp_new_i32();
273
fd = tcg_temp_new_i32();
274
275
- neon_load_reg32(f0, vm);
276
+ vfp_load_reg32(f0, vm);
277
278
for (;;) {
279
fn(fd, f0);
280
- neon_store_reg32(fd, vd);
281
+ vfp_store_reg32(fd, vd);
282
283
if (veclen == 0) {
284
break;
285
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
286
/* single source one-many */
287
while (veclen--) {
288
vd = vfp_advance_sreg(vd, delta_d);
289
- neon_store_reg32(fd, vd);
290
+ vfp_store_reg32(fd, vd);
291
}
292
break;
293
}
294
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
295
veclen--;
296
vd = vfp_advance_sreg(vd, delta_d);
297
vm = vfp_advance_sreg(vm, delta_m);
298
- neon_load_reg32(f0, vm);
299
+ vfp_load_reg32(f0, vm);
300
}
301
302
tcg_temp_free_i32(f0);
303
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
304
}
305
306
f0 = tcg_temp_new_i32();
307
- neon_load_reg32(f0, vm);
308
+ vfp_load_reg32(f0, vm);
309
fn(f0, f0);
310
- neon_store_reg32(f0, vd);
311
+ vfp_store_reg32(f0, vd);
312
tcg_temp_free_i32(f0);
313
314
return true;
315
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
316
vm = tcg_temp_new_i32();
317
vd = tcg_temp_new_i32();
318
319
- neon_load_reg32(vn, a->vn);
320
- neon_load_reg32(vm, a->vm);
321
+ vfp_load_reg32(vn, a->vn);
322
+ vfp_load_reg32(vm, a->vm);
323
if (neg_n) {
324
/* VFNMS, VFMS */
325
gen_helper_vfp_negh(vn, vn);
326
}
327
- neon_load_reg32(vd, a->vd);
328
+ vfp_load_reg32(vd, a->vd);
329
if (neg_d) {
330
/* VFNMA, VFNMS */
331
gen_helper_vfp_negh(vd, vd);
332
}
333
fpst = fpstatus_ptr(FPST_FPCR_F16);
334
gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst);
335
- neon_store_reg32(vd, a->vd);
336
+ vfp_store_reg32(vd, a->vd);
337
338
tcg_temp_free_ptr(fpst);
339
tcg_temp_free_i32(vn);
340
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
341
vm = tcg_temp_new_i32();
342
vd = tcg_temp_new_i32();
343
344
- neon_load_reg32(vn, a->vn);
345
- neon_load_reg32(vm, a->vm);
346
+ vfp_load_reg32(vn, a->vn);
347
+ vfp_load_reg32(vm, a->vm);
348
if (neg_n) {
349
/* VFNMS, VFMS */
350
gen_helper_vfp_negs(vn, vn);
351
}
352
- neon_load_reg32(vd, a->vd);
353
+ vfp_load_reg32(vd, a->vd);
354
if (neg_d) {
355
/* VFNMA, VFNMS */
356
gen_helper_vfp_negs(vd, vd);
357
}
358
fpst = fpstatus_ptr(FPST_FPCR);
359
gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
360
- neon_store_reg32(vd, a->vd);
361
+ vfp_store_reg32(vd, a->vd);
362
363
tcg_temp_free_ptr(fpst);
364
tcg_temp_free_i32(vn);
365
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_hp(DisasContext *s, arg_VMOV_imm_sp *a)
366
}
367
368
fd = tcg_const_i32(vfp_expand_imm(MO_16, a->imm));
369
- neon_store_reg32(fd, a->vd);
370
+ vfp_store_reg32(fd, a->vd);
371
tcg_temp_free_i32(fd);
372
return true;
373
}
374
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
375
fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
376
377
for (;;) {
378
- neon_store_reg32(fd, vd);
379
+ vfp_store_reg32(fd, vd);
380
381
if (veclen == 0) {
382
break;
383
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
384
vd = tcg_temp_new_i32();
385
vm = tcg_temp_new_i32();
386
387
- neon_load_reg32(vd, a->vd);
388
+ vfp_load_reg32(vd, a->vd);
389
if (a->z) {
390
tcg_gen_movi_i32(vm, 0);
391
} else {
392
- neon_load_reg32(vm, a->vm);
393
+ vfp_load_reg32(vm, a->vm);
394
}
395
396
if (a->e) {
397
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_sp(DisasContext *s, arg_VCMP_sp *a)
398
vd = tcg_temp_new_i32();
399
vm = tcg_temp_new_i32();
400
401
- neon_load_reg32(vd, a->vd);
402
+ vfp_load_reg32(vd, a->vd);
403
if (a->z) {
404
tcg_gen_movi_i32(vm, 0);
405
} else {
406
- neon_load_reg32(vm, a->vm);
407
+ vfp_load_reg32(vm, a->vm);
408
}
409
410
if (a->e) {
411
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f32_f16(DisasContext *s, arg_VCVT_f32_f16 *a)
412
/* The T bit tells us if we want the low or high 16 bits of Vm */
413
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
414
gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp_mode);
415
- neon_store_reg32(tmp, a->vd);
416
+ vfp_store_reg32(tmp, a->vd);
417
tcg_temp_free_i32(ahp_mode);
418
tcg_temp_free_ptr(fpst);
419
tcg_temp_free_i32(tmp);
420
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
421
ahp_mode = get_ahp_flag();
422
tmp = tcg_temp_new_i32();
423
424
- neon_load_reg32(tmp, a->vm);
425
+ vfp_load_reg32(tmp, a->vm);
426
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp_mode);
427
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
428
tcg_temp_free_i32(ahp_mode);
429
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_hp(DisasContext *s, arg_VRINTR_sp *a)
430
}
431
432
tmp = tcg_temp_new_i32();
433
- neon_load_reg32(tmp, a->vm);
434
+ vfp_load_reg32(tmp, a->vm);
435
fpst = fpstatus_ptr(FPST_FPCR_F16);
436
gen_helper_rinth(tmp, tmp, fpst);
437
- neon_store_reg32(tmp, a->vd);
438
+ vfp_store_reg32(tmp, a->vd);
439
tcg_temp_free_ptr(fpst);
440
tcg_temp_free_i32(tmp);
441
return true;
442
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_sp(DisasContext *s, arg_VRINTR_sp *a)
443
}
444
445
tmp = tcg_temp_new_i32();
446
- neon_load_reg32(tmp, a->vm);
447
+ vfp_load_reg32(tmp, a->vm);
448
fpst = fpstatus_ptr(FPST_FPCR);
449
gen_helper_rints(tmp, tmp, fpst);
450
- neon_store_reg32(tmp, a->vd);
451
+ vfp_store_reg32(tmp, a->vd);
452
tcg_temp_free_ptr(fpst);
453
tcg_temp_free_i32(tmp);
454
return true;
455
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_hp(DisasContext *s, arg_VRINTZ_sp *a)
456
}
457
458
tmp = tcg_temp_new_i32();
459
- neon_load_reg32(tmp, a->vm);
460
+ vfp_load_reg32(tmp, a->vm);
461
fpst = fpstatus_ptr(FPST_FPCR_F16);
462
tcg_rmode = tcg_const_i32(float_round_to_zero);
463
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
464
gen_helper_rinth(tmp, tmp, fpst);
465
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
466
- neon_store_reg32(tmp, a->vd);
467
+ vfp_store_reg32(tmp, a->vd);
468
tcg_temp_free_ptr(fpst);
469
tcg_temp_free_i32(tcg_rmode);
470
tcg_temp_free_i32(tmp);
471
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_sp(DisasContext *s, arg_VRINTZ_sp *a)
472
}
473
474
tmp = tcg_temp_new_i32();
475
- neon_load_reg32(tmp, a->vm);
476
+ vfp_load_reg32(tmp, a->vm);
477
fpst = fpstatus_ptr(FPST_FPCR);
478
tcg_rmode = tcg_const_i32(float_round_to_zero);
479
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
480
gen_helper_rints(tmp, tmp, fpst);
481
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
482
- neon_store_reg32(tmp, a->vd);
483
+ vfp_store_reg32(tmp, a->vd);
484
tcg_temp_free_ptr(fpst);
485
tcg_temp_free_i32(tcg_rmode);
486
tcg_temp_free_i32(tmp);
487
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_hp(DisasContext *s, arg_VRINTX_sp *a)
488
}
489
490
tmp = tcg_temp_new_i32();
491
- neon_load_reg32(tmp, a->vm);
492
+ vfp_load_reg32(tmp, a->vm);
493
fpst = fpstatus_ptr(FPST_FPCR_F16);
494
gen_helper_rinth_exact(tmp, tmp, fpst);
495
- neon_store_reg32(tmp, a->vd);
496
+ vfp_store_reg32(tmp, a->vd);
497
tcg_temp_free_ptr(fpst);
498
tcg_temp_free_i32(tmp);
499
return true;
500
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_sp(DisasContext *s, arg_VRINTX_sp *a)
501
}
502
503
tmp = tcg_temp_new_i32();
504
- neon_load_reg32(tmp, a->vm);
505
+ vfp_load_reg32(tmp, a->vm);
506
fpst = fpstatus_ptr(FPST_FPCR);
507
gen_helper_rints_exact(tmp, tmp, fpst);
508
- neon_store_reg32(tmp, a->vd);
509
+ vfp_store_reg32(tmp, a->vd);
510
tcg_temp_free_ptr(fpst);
511
tcg_temp_free_i32(tmp);
512
return true;
513
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
514
515
vm = tcg_temp_new_i32();
516
vd = tcg_temp_new_i64();
517
- neon_load_reg32(vm, a->vm);
518
+ vfp_load_reg32(vm, a->vm);
519
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
520
neon_store_reg64(vd, a->vd);
521
tcg_temp_free_i32(vm);
522
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
523
vm = tcg_temp_new_i64();
524
neon_load_reg64(vm, a->vm);
525
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
526
- neon_store_reg32(vd, a->vd);
527
+ vfp_store_reg32(vd, a->vd);
528
tcg_temp_free_i32(vd);
529
tcg_temp_free_i64(vm);
530
return true;
531
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
532
}
533
534
vm = tcg_temp_new_i32();
535
- neon_load_reg32(vm, a->vm);
536
+ vfp_load_reg32(vm, a->vm);
537
fpst = fpstatus_ptr(FPST_FPCR_F16);
538
if (a->s) {
539
/* i32 -> f16 */
540
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
541
/* u32 -> f16 */
542
gen_helper_vfp_uitoh(vm, vm, fpst);
543
}
544
- neon_store_reg32(vm, a->vd);
545
+ vfp_store_reg32(vm, a->vd);
546
tcg_temp_free_i32(vm);
547
tcg_temp_free_ptr(fpst);
548
return true;
549
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
550
}
551
552
vm = tcg_temp_new_i32();
553
- neon_load_reg32(vm, a->vm);
554
+ vfp_load_reg32(vm, a->vm);
555
fpst = fpstatus_ptr(FPST_FPCR);
556
if (a->s) {
557
/* i32 -> f32 */
558
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
559
/* u32 -> f32 */
560
gen_helper_vfp_uitos(vm, vm, fpst);
561
}
562
- neon_store_reg32(vm, a->vd);
563
+ vfp_store_reg32(vm, a->vd);
564
tcg_temp_free_i32(vm);
565
tcg_temp_free_ptr(fpst);
566
return true;
567
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
568
569
vm = tcg_temp_new_i32();
570
vd = tcg_temp_new_i64();
571
- neon_load_reg32(vm, a->vm);
572
+ vfp_load_reg32(vm, a->vm);
573
fpst = fpstatus_ptr(FPST_FPCR);
574
if (a->s) {
575
/* i32 -> f64 */
576
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
577
vd = tcg_temp_new_i32();
578
neon_load_reg64(vm, a->vm);
579
gen_helper_vjcvt(vd, vm, cpu_env);
580
- neon_store_reg32(vd, a->vd);
581
+ vfp_store_reg32(vd, a->vd);
582
tcg_temp_free_i64(vm);
583
tcg_temp_free_i32(vd);
584
return true;
585
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
586
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
587
588
vd = tcg_temp_new_i32();
589
- neon_load_reg32(vd, a->vd);
590
+ vfp_load_reg32(vd, a->vd);
591
592
fpst = fpstatus_ptr(FPST_FPCR_F16);
593
shift = tcg_const_i32(frac_bits);
594
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
595
g_assert_not_reached();
596
}
597
598
- neon_store_reg32(vd, a->vd);
599
+ vfp_store_reg32(vd, a->vd);
600
tcg_temp_free_i32(vd);
601
tcg_temp_free_i32(shift);
602
tcg_temp_free_ptr(fpst);
603
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
604
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
605
606
vd = tcg_temp_new_i32();
607
- neon_load_reg32(vd, a->vd);
608
+ vfp_load_reg32(vd, a->vd);
609
610
fpst = fpstatus_ptr(FPST_FPCR);
611
shift = tcg_const_i32(frac_bits);
612
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
613
g_assert_not_reached();
614
}
615
616
- neon_store_reg32(vd, a->vd);
617
+ vfp_store_reg32(vd, a->vd);
618
tcg_temp_free_i32(vd);
619
tcg_temp_free_i32(shift);
620
tcg_temp_free_ptr(fpst);
621
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
622
623
fpst = fpstatus_ptr(FPST_FPCR_F16);
624
vm = tcg_temp_new_i32();
625
- neon_load_reg32(vm, a->vm);
626
+ vfp_load_reg32(vm, a->vm);
627
628
if (a->s) {
629
if (a->rz) {
630
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
631
gen_helper_vfp_touih(vm, vm, fpst);
632
}
633
}
634
- neon_store_reg32(vm, a->vd);
635
+ vfp_store_reg32(vm, a->vd);
636
tcg_temp_free_i32(vm);
637
tcg_temp_free_ptr(fpst);
638
return true;
639
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
640
641
fpst = fpstatus_ptr(FPST_FPCR);
642
vm = tcg_temp_new_i32();
643
- neon_load_reg32(vm, a->vm);
644
+ vfp_load_reg32(vm, a->vm);
645
646
if (a->s) {
647
if (a->rz) {
648
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
649
gen_helper_vfp_touis(vm, vm, fpst);
650
}
651
}
652
- neon_store_reg32(vm, a->vd);
653
+ vfp_store_reg32(vm, a->vd);
654
tcg_temp_free_i32(vm);
655
tcg_temp_free_ptr(fpst);
656
return true;
657
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
658
gen_helper_vfp_touid(vd, vm, fpst);
659
}
660
}
661
- neon_store_reg32(vd, a->vd);
662
+ vfp_store_reg32(vd, a->vd);
663
tcg_temp_free_i32(vd);
664
tcg_temp_free_i64(vm);
665
tcg_temp_free_ptr(fpst);
666
@@ -XXX,XX +XXX,XX @@ static bool trans_VINS(DisasContext *s, arg_VINS *a)
667
/* Insert low half of Vm into high half of Vd */
668
rm = tcg_temp_new_i32();
669
rd = tcg_temp_new_i32();
670
- neon_load_reg32(rm, a->vm);
671
- neon_load_reg32(rd, a->vd);
672
+ vfp_load_reg32(rm, a->vm);
673
+ vfp_load_reg32(rd, a->vd);
674
tcg_gen_deposit_i32(rd, rd, rm, 16, 16);
675
- neon_store_reg32(rd, a->vd);
676
+ vfp_store_reg32(rd, a->vd);
677
tcg_temp_free_i32(rm);
678
tcg_temp_free_i32(rd);
679
return true;
680
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOVX(DisasContext *s, arg_VINS *a)
681
682
/* Set Vd to high half of Vm */
683
rm = tcg_temp_new_i32();
684
- neon_load_reg32(rm, a->vm);
685
+ vfp_load_reg32(rm, a->vm);
686
tcg_gen_shri_i32(rm, rm, 16);
687
- neon_store_reg32(rm, a->vd);
688
+ vfp_store_reg32(rm, a->vd);
689
tcg_temp_free_i32(rm);
690
return true;
46
}
691
}
47
--
692
--
48
2.20.1
693
2.20.1
49
694
50
695
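A quick worked example for the VINS/VMOVX hunks in the patch above (the register values are made up for illustration and are not taken from the patch):

    /* VINS inserts the low half of Vm into the high half of Vd:        */
    /*   Vd = 0xddddeeee, Vm = 0xaaaabbbb                               */
    /*   tcg_gen_deposit_i32(rd, rd, rm, 16, 16)  ->  Vd = 0xbbbbeeee   */
    /* VMOVX sets Vd to the high half of Vm:                            */
    /*   tcg_gen_shri_i32(rm, rm, 16)             ->  Vd = 0x0000aaaa   */

Both sequences only ever touch 32-bit single-precision registers, which is presumably why they now go through vfp_load_reg32()/vfp_store_reg32() instead of the old neon_*_reg32() names.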
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This commit adds support for the Resettable interface to buses and devices:
3
Replace all uses of neon_load/store_reg64 within translate-neon.c.inc.
4
+ a ResettableState structure is added to the Bus/Device state
4
5
+ Resettable methods are implemented.
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
+ device_is_in_reset()/bus_is_in_reset() helper functions are defined
6
Message-id: 20201030022618.785675-9-richard.henderson@linaro.org
7
8
This commit allows objects to be transitioned to the new
9
multi-phase interface without changing the reset behavior at all.
10
An object's single reset method can be split into the 3 different phases
11
but the 3 phases are still executed in a row for a given object.
12
From the qdev/qbus reset API point of view, nothing is changed.
13
qdev_reset_all() and qbus_reset_all() are not modified, and neither is
14
device_legacy_reset().
15
16
Transition of an object must be done from parent class to child class.
17
Care has been taken to allow the transition of a parent class
18
without requiring the child classes to be transitioned at the same
19
time. Note that the SysBus and SysBusDevice classes do not need any transition
20
because they do not override the legacy reset method.
21
22
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
26
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
27
Message-id: 20200123132823.1117486-5-damien.hedde@greensocs.com
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
29
---
9
---
30
tests/Makefile.include | 1 +
10
target/arm/translate.c | 26 +++++++++
31
include/hw/qdev-core.h | 27 ++++++++++++
11
target/arm/translate-neon.c.inc | 94 ++++++++++++++++-----------------
32
hw/core/bus.c | 97 ++++++++++++++++++++++++++++++++++++++++++
12
2 files changed, 73 insertions(+), 47 deletions(-)
33
hw/core/qdev.c | 93 ++++++++++++++++++++++++++++++++++++++++
13
34
4 files changed, 218 insertions(+)
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
35
36
diff --git a/tests/Makefile.include b/tests/Makefile.include
37
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
38
--- a/tests/Makefile.include
16
--- a/target/arm/translate.c
39
+++ b/tests/Makefile.include
17
+++ b/target/arm/translate.c
40
@@ -XXX,XX +XXX,XX @@ tests/fp/%:
18
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
41
tests/test-qdev-global-props$(EXESUF): tests/test-qdev-global-props.o \
19
}
42
    hw/core/qdev.o hw/core/qdev-properties.o hw/core/hotplug.o\
20
}
43
    hw/core/bus.o \
21
44
+    hw/core/resettable.o \
22
+static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
45
    hw/core/irq.o \
23
+{
46
    hw/core/fw-path-provider.o \
24
+ long off = neon_element_offset(reg, ele, memop);
47
    hw/core/reset.o \
48
diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
49
index XXXXXXX..XXXXXXX 100644
50
--- a/include/hw/qdev-core.h
51
+++ b/include/hw/qdev-core.h
52
@@ -XXX,XX +XXX,XX @@
53
#include "qemu/bitmap.h"
54
#include "qom/object.h"
55
#include "hw/hotplug.h"
56
+#include "hw/resettable.h"
57
58
enum {
59
DEV_NVECTORS_UNSPECIFIED = -1,
60
@@ -XXX,XX +XXX,XX @@ typedef struct DeviceClass {
61
bool hotpluggable;
62
63
/* callbacks */
64
+ /*
65
+ * Reset method here is deprecated and replaced by methods in the
66
+ * resettable class interface to implement a multi-phase reset.
67
+ * TODO: remove once every reset callback is unused
68
+ */
69
DeviceReset reset;
70
DeviceRealize realize;
71
DeviceUnrealize unrealize;
72
@@ -XXX,XX +XXX,XX @@ struct NamedGPIOList {
73
/**
74
* DeviceState:
75
* @realized: Indicates whether the device has been fully constructed.
76
+ * @reset: ResettableState for the device; handled by Resettable interface.
77
*
78
* This structure should not be accessed directly. We declare it here
79
* so that it can be embedded in individual device state structures.
80
@@ -XXX,XX +XXX,XX @@ struct DeviceState {
81
int num_child_bus;
82
int instance_id_alias;
83
int alias_required_for_version;
84
+ ResettableState reset;
85
};
86
87
struct DeviceListener {
88
@@ -XXX,XX +XXX,XX @@ typedef struct BusChild {
89
/**
90
* BusState:
91
* @hotplug_handler: link to a hotplug handler associated with bus.
92
+ * @reset: ResettableState for the bus; handled by Resettable interface.
93
*/
94
struct BusState {
95
Object obj;
96
@@ -XXX,XX +XXX,XX @@ struct BusState {
97
int num_children;
98
QTAILQ_HEAD(, BusChild) children;
99
QLIST_ENTRY(BusState) sibling;
100
+ ResettableState reset;
101
};
102
103
/**
104
@@ -XXX,XX +XXX,XX @@ void qdev_reset_all_fn(void *opaque);
105
void qbus_reset_all(BusState *bus);
106
void qbus_reset_all_fn(void *opaque);
107
108
+/**
109
+ * device_is_in_reset:
110
+ * Return true if the device @dev is currently being reset.
111
+ */
112
+bool device_is_in_reset(DeviceState *dev);
113
+
25
+
114
+/**
26
+ switch (memop) {
115
+ * bus_is_in_reset:
27
+ case MO_Q:
116
+ * Return true if the bus @bus is currently being reset.
28
+ tcg_gen_ld_i64(dest, cpu_env, off);
117
+ */
29
+ break;
118
+bool bus_is_in_reset(BusState *bus);
30
+ default:
119
+
31
+ g_assert_not_reached();
120
/* This should go away once we get rid of the NULL bus hack */
121
BusState *sysbus_get_default(void);
122
123
@@ -XXX,XX +XXX,XX @@ void device_legacy_reset(DeviceState *dev);
124
125
void device_class_set_props(DeviceClass *dc, Property *props);
126
127
+/**
128
+ * device_class_set_parent_reset:
129
+ * TODO: remove the function when DeviceClass's reset method
130
+ * is not used anymore.
131
+ */
132
void device_class_set_parent_reset(DeviceClass *dc,
133
DeviceReset dev_reset,
134
DeviceReset *parent_reset);
135
diff --git a/hw/core/bus.c b/hw/core/bus.c
136
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/core/bus.c
138
+++ b/hw/core/bus.c
139
@@ -XXX,XX +XXX,XX @@ int qbus_walk_children(BusState *bus,
140
return 0;
141
}
142
143
+bool bus_is_in_reset(BusState *bus)
144
+{
145
+ return resettable_is_in_reset(OBJECT(bus));
146
+}
147
+
148
+static ResettableState *bus_get_reset_state(Object *obj)
149
+{
150
+ BusState *bus = BUS(obj);
151
+ return &bus->reset;
152
+}
153
+
154
+static void bus_reset_child_foreach(Object *obj, ResettableChildCallback cb,
155
+ void *opaque, ResetType type)
156
+{
157
+ BusState *bus = BUS(obj);
158
+ BusChild *kid;
159
+
160
+ QTAILQ_FOREACH(kid, &bus->children, sibling) {
161
+ cb(OBJECT(kid->child), opaque, type);
162
+ }
32
+ }
163
+}
33
+}
164
+
34
+
165
static void qbus_realize(BusState *bus, DeviceState *parent, const char *name)
35
static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
166
{
36
{
167
const char *typename = object_get_typename(OBJECT(bus));
37
long off = neon_element_offset(reg, ele, memop);
168
@@ -XXX,XX +XXX,XX @@ static char *default_bus_get_fw_dev_path(DeviceState *dev)
38
@@ -XXX,XX +XXX,XX @@ static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
169
return g_strdup(object_get_typename(OBJECT(dev)));
39
}
170
}
40
}
171
41
172
+/**
42
+static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
173
+ * bus_phases_reset:
174
+ * Transition reset method for buses to allow moving
175
+ * smoothly from legacy reset method to multi-phases
176
+ */
177
+static void bus_phases_reset(BusState *bus)
178
+{
43
+{
179
+ ResettableClass *rc = RESETTABLE_GET_CLASS(bus);
44
+ long off = neon_element_offset(reg, ele, memop);
180
+
45
+
181
+ if (rc->phases.enter) {
46
+ switch (memop) {
182
+ rc->phases.enter(OBJECT(bus), RESET_TYPE_COLD);
47
+ case MO_64:
183
+ }
48
+ tcg_gen_st_i64(src, cpu_env, off);
184
+ if (rc->phases.hold) {
49
+ break;
185
+ rc->phases.hold(OBJECT(bus));
50
+ default:
186
+ }
51
+ g_assert_not_reached();
187
+ if (rc->phases.exit) {
188
+ rc->phases.exit(OBJECT(bus));
189
+ }
52
+ }
190
+}
53
+}
191
+
54
+
192
+static void bus_transitional_reset(Object *obj)
55
static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
193
+{
194
+ BusClass *bc = BUS_GET_CLASS(obj);
195
+
196
+ /*
197
+ * This will call either @bus_phases_reset (for multi-phases transitioned
198
+ * buses) or a bus's specific method for not-yet transitioned buses.
199
+ * In both case, it does not reset children.
200
+ */
201
+ if (bc->reset) {
202
+ bc->reset(BUS(obj));
203
+ }
204
+}
205
+
206
+/**
207
+ * bus_get_transitional_reset:
208
+ * check if the bus's class is ready for multi-phase
209
+ */
210
+static ResettableTrFunction bus_get_transitional_reset(Object *obj)
211
+{
212
+ BusClass *dc = BUS_GET_CLASS(obj);
213
+ if (dc->reset != bus_phases_reset) {
214
+ /*
215
+ * dc->reset has been overridden by a subclass,
216
+ * the bus is not ready for multi phase yet.
217
+ */
218
+ return bus_transitional_reset;
219
+ }
220
+ return NULL;
221
+}
222
+
223
static void bus_class_init(ObjectClass *class, void *data)
224
{
56
{
225
BusClass *bc = BUS_CLASS(class);
57
TCGv_ptr ret = tcg_temp_new_ptr();
226
+ ResettableClass *rc = RESETTABLE_CLASS(class);
58
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
227
59
index XXXXXXX..XXXXXXX 100644
228
class->unparent = bus_unparent;
60
--- a/target/arm/translate-neon.c.inc
229
bc->get_fw_dev_path = default_bus_get_fw_dev_path;
61
+++ b/target/arm/translate-neon.c.inc
230
+
62
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
231
+ rc->get_state = bus_get_reset_state;
63
for (pass = 0; pass < a->q + 1; pass++) {
232
+ rc->child_foreach = bus_reset_child_foreach;
64
TCGv_i64 tmp = tcg_temp_new_i64();
233
+
65
234
+ /*
66
- neon_load_reg64(tmp, a->vm + pass);
235
+ * @bus_phases_reset is put as the default reset method below, allowing
67
+ read_neon_element64(tmp, a->vm, pass, MO_64);
236
+ * to do the multi-phase transition from base classes to leaf classes. It
68
fn(tmp, cpu_env, tmp, constimm);
237
+ * allows a legacy-reset Bus class to extend a multi-phases-reset
69
- neon_store_reg64(tmp, a->vd + pass);
238
+ * Bus class for the following reason:
70
+ write_neon_element64(tmp, a->vd, pass, MO_64);
239
+ * + If a base class B has been moved to multi-phase, then it does not
71
tcg_temp_free_i64(tmp);
240
+ * override this default reset method and may have defined phase methods.
72
}
241
+ * + A child class C (extending class B) which uses
73
tcg_temp_free_i64(constimm);
242
+ * bus_class_set_parent_reset() (or similar means) to override the
74
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
243
+ * reset method will still work as expected. @bus_phases_reset function
75
rd = tcg_temp_new_i32();
244
+ * will be registered as the parent reset method and effectively call
76
245
+ * parent reset phases.
77
/* Load both inputs first to avoid potential overwrite if rm == rd */
246
+ */
78
- neon_load_reg64(rm1, a->vm);
247
+ bc->reset = bus_phases_reset;
79
- neon_load_reg64(rm2, a->vm + 1);
248
+ rc->get_transitional_function = bus_get_transitional_reset;
80
+ read_neon_element64(rm1, a->vm, 0, MO_64);
81
+ read_neon_element64(rm2, a->vm, 1, MO_64);
82
83
shiftfn(rm1, rm1, constimm);
84
narrowfn(rd, cpu_env, rm1);
85
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
86
tcg_gen_shli_i64(tmp, tmp, a->shift);
87
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
88
}
89
- neon_store_reg64(tmp, a->vd);
90
+ write_neon_element64(tmp, a->vd, 0, MO_64);
91
92
widenfn(tmp, rm1);
93
tcg_temp_free_i32(rm1);
94
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
95
tcg_gen_shli_i64(tmp, tmp, a->shift);
96
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
97
}
98
- neon_store_reg64(tmp, a->vd + 1);
99
+ write_neon_element64(tmp, a->vd, 1, MO_64);
100
tcg_temp_free_i64(tmp);
101
return true;
249
}
102
}
250
103
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
251
static void qbus_finalize(Object *obj)
104
rm_64 = tcg_temp_new_i64();
252
@@ -XXX,XX +XXX,XX @@ static const TypeInfo bus_info = {
105
253
.instance_init = qbus_initfn,
106
if (src1_wide) {
254
.instance_finalize = qbus_finalize,
107
- neon_load_reg64(rn0_64, a->vn);
255
.class_init = bus_class_init,
108
+ read_neon_element64(rn0_64, a->vn, 0, MO_64);
256
+ .interfaces = (InterfaceInfo[]) {
109
} else {
257
+ { TYPE_RESETTABLE_INTERFACE },
110
TCGv_i32 tmp = tcg_temp_new_i32();
258
+ { }
111
read_neon_element32(tmp, a->vn, 0, MO_32);
259
+ },
112
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
260
};
113
* avoid incorrect results if a narrow input overlaps with the result.
261
114
*/
262
static void bus_register_types(void)
115
if (src1_wide) {
263
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
116
- neon_load_reg64(rn1_64, a->vn + 1);
264
index XXXXXXX..XXXXXXX 100644
117
+ read_neon_element64(rn1_64, a->vn, 1, MO_64);
265
--- a/hw/core/qdev.c
118
} else {
266
+++ b/hw/core/qdev.c
119
TCGv_i32 tmp = tcg_temp_new_i32();
267
@@ -XXX,XX +XXX,XX @@ void qbus_reset_all_fn(void *opaque)
120
read_neon_element32(tmp, a->vn, 1, MO_32);
268
qbus_reset_all(bus);
121
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
269
}
122
rm = tcg_temp_new_i32();
270
123
read_neon_element32(rm, a->vm, 1, MO_32);
271
+bool device_is_in_reset(DeviceState *dev)
124
272
+{
125
- neon_store_reg64(rn0_64, a->vd);
273
+ return resettable_is_in_reset(OBJECT(dev));
126
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
274
+}
127
275
+
128
widenfn(rm_64, rm);
276
+static ResettableState *device_get_reset_state(Object *obj)
129
tcg_temp_free_i32(rm);
277
+{
130
opfn(rn1_64, rn1_64, rm_64);
278
+ DeviceState *dev = DEVICE(obj);
131
- neon_store_reg64(rn1_64, a->vd + 1);
279
+ return &dev->reset;
132
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
280
+}
133
281
+
134
tcg_temp_free_i64(rn0_64);
282
+static void device_reset_child_foreach(Object *obj, ResettableChildCallback cb,
135
tcg_temp_free_i64(rn1_64);
283
+ void *opaque, ResetType type)
136
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
284
+{
137
rd0 = tcg_temp_new_i32();
285
+ DeviceState *dev = DEVICE(obj);
138
rd1 = tcg_temp_new_i32();
286
+ BusState *bus;
139
287
+
140
- neon_load_reg64(rn_64, a->vn);
288
+ QLIST_FOREACH(bus, &dev->child_bus, sibling) {
141
- neon_load_reg64(rm_64, a->vm);
289
+ cb(OBJECT(bus), opaque, type);
142
+ read_neon_element64(rn_64, a->vn, 0, MO_64);
290
+ }
143
+ read_neon_element64(rm_64, a->vm, 0, MO_64);
291
+}
144
292
+
145
opfn(rn_64, rn_64, rm_64);
293
/* can be used as ->unplug() callback for the simple cases */
146
294
void qdev_simple_device_unplug_cb(HotplugHandler *hotplug_dev,
147
narrowfn(rd0, rn_64);
295
DeviceState *dev, Error **errp)
148
296
@@ -XXX,XX +XXX,XX @@ device_vmstate_if_get_id(VMStateIf *obj)
149
- neon_load_reg64(rn_64, a->vn + 1);
297
return qdev_get_dev_path(dev);
150
- neon_load_reg64(rm_64, a->vm + 1);
298
}
151
+ read_neon_element64(rn_64, a->vn, 1, MO_64);
299
152
+ read_neon_element64(rm_64, a->vm, 1, MO_64);
300
+/**
153
301
+ * device_phases_reset:
154
opfn(rn_64, rn_64, rm_64);
302
+ * Transition reset method for devices to allow moving
155
303
+ * smoothly from legacy reset method to multi-phases
156
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
304
+ */
157
/* Don't store results until after all loads: they might overlap */
305
+static void device_phases_reset(DeviceState *dev)
158
if (accfn) {
306
+{
159
tmp = tcg_temp_new_i64();
307
+ ResettableClass *rc = RESETTABLE_GET_CLASS(dev);
160
- neon_load_reg64(tmp, a->vd);
308
+
161
+ read_neon_element64(tmp, a->vd, 0, MO_64);
309
+ if (rc->phases.enter) {
162
accfn(tmp, tmp, rd0);
310
+ rc->phases.enter(OBJECT(dev), RESET_TYPE_COLD);
163
- neon_store_reg64(tmp, a->vd);
311
+ }
164
- neon_load_reg64(tmp, a->vd + 1);
312
+ if (rc->phases.hold) {
165
+ write_neon_element64(tmp, a->vd, 0, MO_64);
313
+ rc->phases.hold(OBJECT(dev));
166
+ read_neon_element64(tmp, a->vd, 1, MO_64);
314
+ }
167
accfn(tmp, tmp, rd1);
315
+ if (rc->phases.exit) {
168
- neon_store_reg64(tmp, a->vd + 1);
316
+ rc->phases.exit(OBJECT(dev));
169
+ write_neon_element64(tmp, a->vd, 1, MO_64);
317
+ }
170
tcg_temp_free_i64(tmp);
318
+}
171
} else {
319
+
172
- neon_store_reg64(rd0, a->vd);
320
+static void device_transitional_reset(Object *obj)
173
- neon_store_reg64(rd1, a->vd + 1);
321
+{
174
+ write_neon_element64(rd0, a->vd, 0, MO_64);
322
+ DeviceClass *dc = DEVICE_GET_CLASS(obj);
175
+ write_neon_element64(rd1, a->vd, 1, MO_64);
323
+
176
}
324
+ /*
177
325
+ * This will call either @device_phases_reset (for multi-phases transitioned
178
tcg_temp_free_i64(rd0);
326
+ * devices) or a device's specific method for not-yet transitioned devices.
179
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
327
+ * In both case, it does not reset children.
180
328
+ */
181
if (accfn) {
329
+ if (dc->reset) {
182
TCGv_i64 t64 = tcg_temp_new_i64();
330
+ dc->reset(DEVICE(obj));
183
- neon_load_reg64(t64, a->vd);
331
+ }
184
+ read_neon_element64(t64, a->vd, 0, MO_64);
332
+}
185
accfn(t64, t64, rn0_64);
333
+
186
- neon_store_reg64(t64, a->vd);
334
+/**
187
- neon_load_reg64(t64, a->vd + 1);
335
+ * device_get_transitional_reset:
188
+ write_neon_element64(t64, a->vd, 0, MO_64);
336
+ * check if the device's class is ready for multi-phase
189
+ read_neon_element64(t64, a->vd, 1, MO_64);
337
+ */
190
accfn(t64, t64, rn1_64);
338
+static ResettableTrFunction device_get_transitional_reset(Object *obj)
191
- neon_store_reg64(t64, a->vd + 1);
339
+{
192
+ write_neon_element64(t64, a->vd, 1, MO_64);
340
+ DeviceClass *dc = DEVICE_GET_CLASS(obj);
193
tcg_temp_free_i64(t64);
341
+ if (dc->reset != device_phases_reset) {
194
} else {
342
+ /*
195
- neon_store_reg64(rn0_64, a->vd);
343
+ * dc->reset has been overridden by a subclass,
196
- neon_store_reg64(rn1_64, a->vd + 1);
344
+ * the device is not ready for multi phase yet.
197
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
345
+ */
198
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
346
+ return device_transitional_reset;
199
}
347
+ }
200
tcg_temp_free_i64(rn0_64);
348
+ return NULL;
201
tcg_temp_free_i64(rn1_64);
349
+}
202
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
350
+
203
right = tcg_temp_new_i64();
351
static void device_class_init(ObjectClass *class, void *data)
204
dest = tcg_temp_new_i64();
352
{
205
353
DeviceClass *dc = DEVICE_CLASS(class);
206
- neon_load_reg64(right, a->vn);
354
VMStateIfClass *vc = VMSTATE_IF_CLASS(class);
207
- neon_load_reg64(left, a->vm);
355
+ ResettableClass *rc = RESETTABLE_CLASS(class);
208
+ read_neon_element64(right, a->vn, 0, MO_64);
356
209
+ read_neon_element64(left, a->vm, 0, MO_64);
357
class->unparent = device_unparent;
210
tcg_gen_extract2_i64(dest, right, left, a->imm * 8);
358
211
- neon_store_reg64(dest, a->vd);
359
@@ -XXX,XX +XXX,XX @@ static void device_class_init(ObjectClass *class, void *data)
212
+ write_neon_element64(dest, a->vd, 0, MO_64);
360
dc->hotpluggable = true;
213
361
dc->user_creatable = true;
214
tcg_temp_free_i64(left);
362
vc->get_id = device_vmstate_if_get_id;
215
tcg_temp_free_i64(right);
363
+ rc->get_state = device_get_reset_state;
216
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
364
+ rc->child_foreach = device_reset_child_foreach;
217
destright = tcg_temp_new_i64();
365
+
218
366
+ /*
219
if (a->imm < 8) {
367
+ * @device_phases_reset is put as the default reset method below, allowing
220
- neon_load_reg64(right, a->vn);
368
+ * to do the multi-phase transition from base classes to leaf classes. It
221
- neon_load_reg64(middle, a->vn + 1);
369
+ * allows a legacy-reset Device class to extend a multi-phases-reset
222
+ read_neon_element64(right, a->vn, 0, MO_64);
370
+ * Device class for the following reason:
223
+ read_neon_element64(middle, a->vn, 1, MO_64);
371
+ * + If a base class B has been moved to multi-phase, then it does not
224
tcg_gen_extract2_i64(destright, right, middle, a->imm * 8);
372
+ * override this default reset method and may have defined phase methods.
225
- neon_load_reg64(left, a->vm);
373
+ * + A child class C (extending class B) which uses
226
+ read_neon_element64(left, a->vm, 0, MO_64);
374
+ * device_class_set_parent_reset() (or similar means) to override the
227
tcg_gen_extract2_i64(destleft, middle, left, a->imm * 8);
375
+ * reset method will still work as expected. @device_phases_reset function
228
} else {
376
+ * will be registered as the parent reset method and effectively call
229
- neon_load_reg64(right, a->vn + 1);
377
+ * parent reset phases.
230
- neon_load_reg64(middle, a->vm);
378
+ */
231
+ read_neon_element64(right, a->vn, 1, MO_64);
379
+ dc->reset = device_phases_reset;
232
+ read_neon_element64(middle, a->vm, 0, MO_64);
380
+ rc->get_transitional_function = device_get_transitional_reset;
233
tcg_gen_extract2_i64(destright, right, middle, (a->imm - 8) * 8);
381
234
- neon_load_reg64(left, a->vm + 1);
382
object_class_property_add_bool(class, "realized",
235
+ read_neon_element64(left, a->vm, 1, MO_64);
383
device_get_realized, device_set_realized,
236
tcg_gen_extract2_i64(destleft, middle, left, (a->imm - 8) * 8);
384
@@ -XXX,XX +XXX,XX @@ static const TypeInfo device_type_info = {
237
}
385
.class_size = sizeof(DeviceClass),
238
386
.interfaces = (InterfaceInfo[]) {
239
- neon_store_reg64(destright, a->vd);
387
{ TYPE_VMSTATE_IF },
240
- neon_store_reg64(destleft, a->vd + 1);
388
+ { TYPE_RESETTABLE_INTERFACE },
241
+ write_neon_element64(destright, a->vd, 0, MO_64);
389
{ }
242
+ write_neon_element64(destleft, a->vd, 1, MO_64);
390
}
243
391
};
244
tcg_temp_free_i64(destright);
245
tcg_temp_free_i64(destleft);
246
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
247
248
if (accfn) {
249
TCGv_i64 tmp64 = tcg_temp_new_i64();
250
- neon_load_reg64(tmp64, a->vd + pass);
251
+ read_neon_element64(tmp64, a->vd, pass, MO_64);
252
accfn(rd_64, tmp64, rd_64);
253
tcg_temp_free_i64(tmp64);
254
}
255
- neon_store_reg64(rd_64, a->vd + pass);
256
+ write_neon_element64(rd_64, a->vd, pass, MO_64);
257
tcg_temp_free_i64(rd_64);
258
}
259
return true;
260
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
261
rd0 = tcg_temp_new_i32();
262
rd1 = tcg_temp_new_i32();
263
264
- neon_load_reg64(rm, a->vm);
265
+ read_neon_element64(rm, a->vm, 0, MO_64);
266
narrowfn(rd0, cpu_env, rm);
267
- neon_load_reg64(rm, a->vm + 1);
268
+ read_neon_element64(rm, a->vm, 1, MO_64);
269
narrowfn(rd1, cpu_env, rm);
270
write_neon_element32(rd0, a->vd, 0, MO_32);
271
write_neon_element32(rd1, a->vd, 1, MO_32);
272
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
273
274
widenfn(rd, rm0);
275
tcg_gen_shli_i64(rd, rd, 8 << a->size);
276
- neon_store_reg64(rd, a->vd);
277
+ write_neon_element64(rd, a->vd, 0, MO_64);
278
widenfn(rd, rm1);
279
tcg_gen_shli_i64(rd, rd, 8 << a->size);
280
- neon_store_reg64(rd, a->vd + 1);
281
+ write_neon_element64(rd, a->vd, 1, MO_64);
282
283
tcg_temp_free_i64(rd);
284
tcg_temp_free_i32(rm0);
285
@@ -XXX,XX +XXX,XX @@ static bool trans_VSWP(DisasContext *s, arg_2misc *a)
286
rm = tcg_temp_new_i64();
287
rd = tcg_temp_new_i64();
288
for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
289
- neon_load_reg64(rm, a->vm + pass);
290
- neon_load_reg64(rd, a->vd + pass);
291
- neon_store_reg64(rm, a->vd + pass);
292
- neon_store_reg64(rd, a->vm + pass);
293
+ read_neon_element64(rm, a->vm, pass, MO_64);
294
+ read_neon_element64(rd, a->vd, pass, MO_64);
295
+ write_neon_element64(rm, a->vd, pass, MO_64);
296
+ write_neon_element64(rd, a->vm, pass, MO_64);
297
}
298
tcg_temp_free_i64(rm);
299
tcg_temp_free_i64(rd);
392
--
300
--
393
2.20.1
301
2.20.1
394
302
395
303
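To make the Resettable conversion in the qdev/bus patch above a little more concrete, here is a minimal sketch of a device that has been fully transitioned to multi-phase reset. Only the ResettableClass fields shown in the patch are used; MYDEV, MyDevState and its ctrl/status/irq members are hypothetical stand-ins for a real device, not part of the series:

    static void mydev_reset_enter(Object *obj, ResetType type)
    {
        MyDevState *s = MYDEV(obj);

        /* enter phase: put internal state back to its cold-reset values */
        s->ctrl = 0;
        s->status = 0;
    }

    static void mydev_reset_hold(Object *obj)
    {
        MyDevState *s = MYDEV(obj);

        /* hold phase: drive outgoing signals while reset is asserted */
        qemu_irq_lower(s->irq);
    }

    static void mydev_class_init(ObjectClass *klass, void *data)
    {
        ResettableClass *rc = RESETTABLE_CLASS(klass);

        /*
         * DeviceClass::reset is left alone, so the default installed by the
         * patch (device_phases_reset) stays in place and dispatches to these
         * phase methods; legacy callers such as device_legacy_reset() keep
         * working unchanged.
         */
        rc->phases.enter = mydev_reset_enter;
        rc->phases.hold = mydev_reset_hold;
    }

An exit phase could be wired up the same way via rc->phases.exit when a device needs to do work at the point it leaves reset.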
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Adds trace events to the reset procedure and to updates of the parent
3
The only uses of this function are for loading VFP
4
bus of a device.
4
double-precision values, and have nothing to do with NEON.
5
5
6
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201030022618.785675-10-richard.henderson@linaro.org
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
10
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
11
Message-id: 20200123132823.1117486-3-damien.hedde@greensocs.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
hw/core/qdev.c | 29 ++++++++++++++++++++++++++---
11
target/arm/translate.c | 8 ++--
15
hw/core/trace-events | 9 +++++++++
12
target/arm/translate-vfp.c.inc | 84 +++++++++++++++++-----------------
16
2 files changed, 35 insertions(+), 3 deletions(-)
13
2 files changed, 46 insertions(+), 46 deletions(-)
17
14
18
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/core/qdev.c
17
--- a/target/arm/translate.c
21
+++ b/hw/core/qdev.c
18
+++ b/target/arm/translate.c
22
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
23
#include "hw/boards.h"
20
}
24
#include "hw/sysbus.h"
25
#include "migration/vmstate.h"
26
+#include "trace.h"
27
28
bool qdev_hotplug = false;
29
static bool qdev_hot_added = false;
30
@@ -XXX,XX +XXX,XX @@ void qdev_set_parent_bus(DeviceState *dev, BusState *bus)
31
bool replugging = dev->parent_bus != NULL;
32
33
if (replugging) {
34
- /* Keep a reference to the device while it's not plugged into
35
+ trace_qdev_update_parent_bus(dev, object_get_typename(OBJECT(dev)),
36
+ dev->parent_bus, object_get_typename(OBJECT(dev->parent_bus)),
37
+ OBJECT(bus), object_get_typename(OBJECT(bus)));
38
+ /*
39
+ * Keep a reference to the device while it's not plugged into
40
* any bus, to avoid it potentially evaporating when it is
41
* dereffed in bus_remove_child().
42
*/
43
@@ -XXX,XX +XXX,XX @@ HotplugHandler *qdev_get_hotplug_handler(DeviceState *dev)
44
return hotplug_ctrl;
45
}
21
}
46
22
47
+static int qdev_prereset(DeviceState *dev, void *opaque)
23
-static inline void neon_load_reg64(TCGv_i64 var, int reg)
48
+{
24
+static inline void vfp_load_reg64(TCGv_i64 var, int reg)
49
+ trace_qdev_reset_tree(dev, object_get_typename(OBJECT(dev)));
50
+ return 0;
51
+}
52
+
53
+static int qbus_prereset(BusState *bus, void *opaque)
54
+{
55
+ trace_qbus_reset_tree(bus, object_get_typename(OBJECT(bus)));
56
+ return 0;
57
+}
58
+
59
static int qdev_reset_one(DeviceState *dev, void *opaque)
60
{
25
{
61
device_legacy_reset(dev);
26
- tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
62
@@ -XXX,XX +XXX,XX @@ static int qdev_reset_one(DeviceState *dev, void *opaque)
27
+ tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(true, reg));
63
static int qbus_reset_one(BusState *bus, void *opaque)
28
}
29
30
-static inline void neon_store_reg64(TCGv_i64 var, int reg)
31
+static inline void vfp_store_reg64(TCGv_i64 var, int reg)
64
{
32
{
65
BusClass *bc = BUS_GET_CLASS(bus);
33
- tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
66
+ trace_qbus_reset(bus, object_get_typename(OBJECT(bus)));
34
+ tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(true, reg));
67
if (bc->reset) {
68
bc->reset(bus);
69
}
70
@@ -XXX,XX +XXX,XX @@ static int qbus_reset_one(BusState *bus, void *opaque)
71
72
void qdev_reset_all(DeviceState *dev)
73
{
74
- qdev_walk_children(dev, NULL, NULL, qdev_reset_one, qbus_reset_one, NULL);
75
+ trace_qdev_reset_all(dev, object_get_typename(OBJECT(dev)));
76
+ qdev_walk_children(dev, qdev_prereset, qbus_prereset,
77
+ qdev_reset_one, qbus_reset_one, NULL);
78
}
35
}
79
36
80
void qdev_reset_all_fn(void *opaque)
37
static inline void vfp_load_reg32(TCGv_i32 var, int reg)
81
@@ -XXX,XX +XXX,XX @@ void qdev_reset_all_fn(void *opaque)
38
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
82
83
void qbus_reset_all(BusState *bus)
84
{
85
- qbus_walk_children(bus, NULL, NULL, qdev_reset_one, qbus_reset_one, NULL);
86
+ trace_qbus_reset_all(bus, object_get_typename(OBJECT(bus)));
87
+ qbus_walk_children(bus, qdev_prereset, qbus_prereset,
88
+ qdev_reset_one, qbus_reset_one, NULL);
89
}
90
91
void qbus_reset_all_fn(void *opaque)
92
@@ -XXX,XX +XXX,XX @@ void device_legacy_reset(DeviceState *dev)
93
{
94
DeviceClass *klass = DEVICE_GET_CLASS(dev);
95
96
+ trace_qdev_reset(dev, object_get_typename(OBJECT(dev)));
97
if (klass->reset) {
98
klass->reset(dev);
99
}
100
diff --git a/hw/core/trace-events b/hw/core/trace-events
101
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
102
--- a/hw/core/trace-events
40
--- a/target/arm/translate-vfp.c.inc
103
+++ b/hw/core/trace-events
41
+++ b/target/arm/translate-vfp.c.inc
104
@@ -XXX,XX +XXX,XX @@
42
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
105
# loader.c
43
tcg_gen_ext_i32_i64(nf, cpu_NF);
106
loader_write_rom(const char *name, uint64_t gpa, uint64_t size, bool isrom) "%s: @0x%"PRIx64" size=0x%"PRIx64" ROM=%d"
44
tcg_gen_ext_i32_i64(vf, cpu_VF);
107
+
45
108
+# qdev.c
46
- neon_load_reg64(frn, rn);
109
+qdev_reset(void *obj, const char *objtype) "obj=%p(%s)"
47
- neon_load_reg64(frm, rm);
110
+qdev_reset_all(void *obj, const char *objtype) "obj=%p(%s)"
48
+ vfp_load_reg64(frn, rn);
111
+qdev_reset_tree(void *obj, const char *objtype) "obj=%p(%s)"
49
+ vfp_load_reg64(frm, rm);
112
+qbus_reset(void *obj, const char *objtype) "obj=%p(%s)"
50
switch (a->cc) {
113
+qbus_reset_all(void *obj, const char *objtype) "obj=%p(%s)"
51
case 0: /* eq: Z */
114
+qbus_reset_tree(void *obj, const char *objtype) "obj=%p(%s)"
52
tcg_gen_movcond_i64(TCG_COND_EQ, dest, zf, zero,
115
+qdev_update_parent_bus(void *obj, const char *objtype, void *oldp, const char *oldptype, void *newp, const char *newptype) "obj=%p(%s) old_parent=%p(%s) new_parent=%p(%s)"
53
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
54
tcg_temp_free_i64(tmp);
55
break;
56
}
57
- neon_store_reg64(dest, rd);
58
+ vfp_store_reg64(dest, rd);
59
tcg_temp_free_i64(frn);
60
tcg_temp_free_i64(frm);
61
tcg_temp_free_i64(dest);
62
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
63
TCGv_i64 tcg_res;
64
tcg_op = tcg_temp_new_i64();
65
tcg_res = tcg_temp_new_i64();
66
- neon_load_reg64(tcg_op, rm);
67
+ vfp_load_reg64(tcg_op, rm);
68
gen_helper_rintd(tcg_res, tcg_op, fpst);
69
- neon_store_reg64(tcg_res, rd);
70
+ vfp_store_reg64(tcg_res, rd);
71
tcg_temp_free_i64(tcg_op);
72
tcg_temp_free_i64(tcg_res);
73
} else {
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
75
tcg_double = tcg_temp_new_i64();
76
tcg_res = tcg_temp_new_i64();
77
tcg_tmp = tcg_temp_new_i32();
78
- neon_load_reg64(tcg_double, rm);
79
+ vfp_load_reg64(tcg_double, rm);
80
if (is_signed) {
81
gen_helper_vfp_tosld(tcg_res, tcg_double, tcg_shift, fpst);
82
} else {
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
84
tmp = tcg_temp_new_i64();
85
if (a->l) {
86
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
87
- neon_store_reg64(tmp, a->vd);
88
+ vfp_store_reg64(tmp, a->vd);
89
} else {
90
- neon_load_reg64(tmp, a->vd);
91
+ vfp_load_reg64(tmp, a->vd);
92
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
93
}
94
tcg_temp_free_i64(tmp);
95
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
96
if (a->l) {
97
/* load */
98
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
99
- neon_store_reg64(tmp, a->vd + i);
100
+ vfp_store_reg64(tmp, a->vd + i);
101
} else {
102
/* store */
103
- neon_load_reg64(tmp, a->vd + i);
104
+ vfp_load_reg64(tmp, a->vd + i);
105
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
106
}
107
tcg_gen_addi_i32(addr, addr, offset);
108
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
109
fd = tcg_temp_new_i64();
110
fpst = fpstatus_ptr(FPST_FPCR);
111
112
- neon_load_reg64(f0, vn);
113
- neon_load_reg64(f1, vm);
114
+ vfp_load_reg64(f0, vn);
115
+ vfp_load_reg64(f1, vm);
116
117
for (;;) {
118
if (reads_vd) {
119
- neon_load_reg64(fd, vd);
120
+ vfp_load_reg64(fd, vd);
121
}
122
fn(fd, f0, f1, fpst);
123
- neon_store_reg64(fd, vd);
124
+ vfp_store_reg64(fd, vd);
125
126
if (veclen == 0) {
127
break;
128
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
129
veclen--;
130
vd = vfp_advance_dreg(vd, delta_d);
131
vn = vfp_advance_dreg(vn, delta_d);
132
- neon_load_reg64(f0, vn);
133
+ vfp_load_reg64(f0, vn);
134
if (delta_m) {
135
vm = vfp_advance_dreg(vm, delta_m);
136
- neon_load_reg64(f1, vm);
137
+ vfp_load_reg64(f1, vm);
138
}
139
}
140
141
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
142
f0 = tcg_temp_new_i64();
143
fd = tcg_temp_new_i64();
144
145
- neon_load_reg64(f0, vm);
146
+ vfp_load_reg64(f0, vm);
147
148
for (;;) {
149
fn(fd, f0);
150
- neon_store_reg64(fd, vd);
151
+ vfp_store_reg64(fd, vd);
152
153
if (veclen == 0) {
154
break;
155
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
156
/* single source one-many */
157
while (veclen--) {
158
vd = vfp_advance_dreg(vd, delta_d);
159
- neon_store_reg64(fd, vd);
160
+ vfp_store_reg64(fd, vd);
161
}
162
break;
163
}
164
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
165
veclen--;
166
vd = vfp_advance_dreg(vd, delta_d);
167
vd = vfp_advance_dreg(vm, delta_m);
168
- neon_load_reg64(f0, vm);
169
+ vfp_load_reg64(f0, vm);
170
}
171
172
tcg_temp_free_i64(f0);
173
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
174
vm = tcg_temp_new_i64();
175
vd = tcg_temp_new_i64();
176
177
- neon_load_reg64(vn, a->vn);
178
- neon_load_reg64(vm, a->vm);
179
+ vfp_load_reg64(vn, a->vn);
180
+ vfp_load_reg64(vm, a->vm);
181
if (neg_n) {
182
/* VFNMS, VFMS */
183
gen_helper_vfp_negd(vn, vn);
184
}
185
- neon_load_reg64(vd, a->vd);
186
+ vfp_load_reg64(vd, a->vd);
187
if (neg_d) {
188
/* VFNMA, VFNMS */
189
gen_helper_vfp_negd(vd, vd);
190
}
191
fpst = fpstatus_ptr(FPST_FPCR);
192
gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
193
- neon_store_reg64(vd, a->vd);
194
+ vfp_store_reg64(vd, a->vd);
195
196
tcg_temp_free_ptr(fpst);
197
tcg_temp_free_i64(vn);
198
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
199
fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm));
200
201
for (;;) {
202
- neon_store_reg64(fd, vd);
203
+ vfp_store_reg64(fd, vd);
204
205
if (veclen == 0) {
206
break;
207
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
208
vd = tcg_temp_new_i64();
209
vm = tcg_temp_new_i64();
210
211
- neon_load_reg64(vd, a->vd);
212
+ vfp_load_reg64(vd, a->vd);
213
if (a->z) {
214
tcg_gen_movi_i64(vm, 0);
215
} else {
216
- neon_load_reg64(vm, a->vm);
217
+ vfp_load_reg64(vm, a->vm);
218
}
219
220
if (a->e) {
221
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
222
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
223
vd = tcg_temp_new_i64();
224
gen_helper_vfp_fcvt_f16_to_f64(vd, tmp, fpst, ahp_mode);
225
- neon_store_reg64(vd, a->vd);
226
+ vfp_store_reg64(vd, a->vd);
227
tcg_temp_free_i32(ahp_mode);
228
tcg_temp_free_ptr(fpst);
229
tcg_temp_free_i32(tmp);
230
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
231
tmp = tcg_temp_new_i32();
232
vm = tcg_temp_new_i64();
233
234
- neon_load_reg64(vm, a->vm);
235
+ vfp_load_reg64(vm, a->vm);
236
gen_helper_vfp_fcvt_f64_to_f16(tmp, vm, fpst, ahp_mode);
237
tcg_temp_free_i64(vm);
238
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
239
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
240
}
241
242
tmp = tcg_temp_new_i64();
243
- neon_load_reg64(tmp, a->vm);
244
+ vfp_load_reg64(tmp, a->vm);
245
fpst = fpstatus_ptr(FPST_FPCR);
246
gen_helper_rintd(tmp, tmp, fpst);
247
- neon_store_reg64(tmp, a->vd);
248
+ vfp_store_reg64(tmp, a->vd);
249
tcg_temp_free_ptr(fpst);
250
tcg_temp_free_i64(tmp);
251
return true;
252
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
253
}
254
255
tmp = tcg_temp_new_i64();
256
- neon_load_reg64(tmp, a->vm);
257
+ vfp_load_reg64(tmp, a->vm);
258
fpst = fpstatus_ptr(FPST_FPCR);
259
tcg_rmode = tcg_const_i32(float_round_to_zero);
260
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
261
gen_helper_rintd(tmp, tmp, fpst);
262
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
263
- neon_store_reg64(tmp, a->vd);
264
+ vfp_store_reg64(tmp, a->vd);
265
tcg_temp_free_ptr(fpst);
266
tcg_temp_free_i64(tmp);
267
tcg_temp_free_i32(tcg_rmode);
268
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
269
}
270
271
tmp = tcg_temp_new_i64();
272
- neon_load_reg64(tmp, a->vm);
273
+ vfp_load_reg64(tmp, a->vm);
274
fpst = fpstatus_ptr(FPST_FPCR);
275
gen_helper_rintd_exact(tmp, tmp, fpst);
276
- neon_store_reg64(tmp, a->vd);
277
+ vfp_store_reg64(tmp, a->vd);
278
tcg_temp_free_ptr(fpst);
279
tcg_temp_free_i64(tmp);
280
return true;
281
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
282
vd = tcg_temp_new_i64();
283
vfp_load_reg32(vm, a->vm);
284
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
285
- neon_store_reg64(vd, a->vd);
286
+ vfp_store_reg64(vd, a->vd);
287
tcg_temp_free_i32(vm);
288
tcg_temp_free_i64(vd);
289
return true;
290
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
291
292
vd = tcg_temp_new_i32();
293
vm = tcg_temp_new_i64();
294
- neon_load_reg64(vm, a->vm);
295
+ vfp_load_reg64(vm, a->vm);
296
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
297
vfp_store_reg32(vd, a->vd);
298
tcg_temp_free_i32(vd);
299
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
300
/* u32 -> f64 */
301
gen_helper_vfp_uitod(vd, vm, fpst);
302
}
303
- neon_store_reg64(vd, a->vd);
304
+ vfp_store_reg64(vd, a->vd);
305
tcg_temp_free_i32(vm);
306
tcg_temp_free_i64(vd);
307
tcg_temp_free_ptr(fpst);
308
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
309
310
vm = tcg_temp_new_i64();
311
vd = tcg_temp_new_i32();
312
- neon_load_reg64(vm, a->vm);
313
+ vfp_load_reg64(vm, a->vm);
314
gen_helper_vjcvt(vd, vm, cpu_env);
315
vfp_store_reg32(vd, a->vd);
316
tcg_temp_free_i64(vm);
317
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
318
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
319
320
vd = tcg_temp_new_i64();
321
- neon_load_reg64(vd, a->vd);
322
+ vfp_load_reg64(vd, a->vd);
323
324
fpst = fpstatus_ptr(FPST_FPCR);
325
shift = tcg_const_i32(frac_bits);
326
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
327
g_assert_not_reached();
328
}
329
330
- neon_store_reg64(vd, a->vd);
331
+ vfp_store_reg64(vd, a->vd);
332
tcg_temp_free_i64(vd);
333
tcg_temp_free_i32(shift);
334
tcg_temp_free_ptr(fpst);
335
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
336
fpst = fpstatus_ptr(FPST_FPCR);
337
vm = tcg_temp_new_i64();
338
vd = tcg_temp_new_i32();
339
- neon_load_reg64(vm, a->vm);
340
+ vfp_load_reg64(vm, a->vm);
341
342
if (a->s) {
343
if (a->rz) {
116
--
344
--
117
2.20.1
345
2.20.1
118
346
119
347
diff view generated by jsdifflib
1
From: Andrew Jones <drjones@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Signed-off-by: Andrew Jones <drjones@redhat.com>
3
In both cases, we can sink the write-back and perform
4
Message-id: 20200120101023.16030-2-drjones@redhat.com
4
the accumulation into the normal destination temps.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201030022618.785675-11-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/kvm_arm.h | 46 ++++++++++++++++++++++++++------------------
11
target/arm/translate-neon.c.inc | 23 +++++++++--------------
9
1 file changed, 27 insertions(+), 19 deletions(-)
12
1 file changed, 9 insertions(+), 14 deletions(-)
10
13
11
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
14
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/kvm_arm.h
16
--- a/target/arm/translate-neon.c.inc
14
+++ b/target/arm/kvm_arm.h
17
+++ b/target/arm/translate-neon.c.inc
15
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
16
int kvm_arm_vcpu_init(CPUState *cs);
19
if (accfn) {
17
20
tmp = tcg_temp_new_i64();
18
/**
21
read_neon_element64(tmp, a->vd, 0, MO_64);
19
- * kvm_arm_vcpu_finalize
22
- accfn(tmp, tmp, rd0);
20
+ * kvm_arm_vcpu_finalize:
23
- write_neon_element64(tmp, a->vd, 0, MO_64);
21
* @cs: CPUState
24
+ accfn(rd0, tmp, rd0);
22
- * @feature: int
25
read_neon_element64(tmp, a->vd, 1, MO_64);
23
+ * @feature: feature to finalize
26
- accfn(tmp, tmp, rd1);
24
*
27
- write_neon_element64(tmp, a->vd, 1, MO_64);
25
* Finalizes the configuration of the specified VCPU feature by
28
+ accfn(rd1, tmp, rd1);
26
* invoking the KVM_ARM_VCPU_FINALIZE ioctl. Features requiring
29
tcg_temp_free_i64(tmp);
27
@@ -XXX,XX +XXX,XX @@ void kvm_arm_register_device(MemoryRegion *mr, uint64_t devid, uint64_t group,
30
- } else {
28
int kvm_arm_init_cpreg_list(ARMCPU *cpu);
31
- write_neon_element64(rd0, a->vd, 0, MO_64);
29
32
- write_neon_element64(rd1, a->vd, 1, MO_64);
30
/**
33
}
31
- * kvm_arm_reg_syncs_via_cpreg_list
34
32
- * regidx: KVM register index
35
+ write_neon_element64(rd0, a->vd, 0, MO_64);
33
+ * kvm_arm_reg_syncs_via_cpreg_list:
36
+ write_neon_element64(rd1, a->vd, 1, MO_64);
34
+ * @regidx: KVM register index
37
tcg_temp_free_i64(rd0);
35
*
38
tcg_temp_free_i64(rd1);
36
* Return true if this KVM register should be synchronized via the
39
37
* cpreg list of arbitrary system registers, false if it is synchronized
40
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
38
@@ -XXX,XX +XXX,XX @@ int kvm_arm_init_cpreg_list(ARMCPU *cpu);
41
if (accfn) {
39
bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx);
42
TCGv_i64 t64 = tcg_temp_new_i64();
40
43
read_neon_element64(t64, a->vd, 0, MO_64);
41
/**
44
- accfn(t64, t64, rn0_64);
42
- * kvm_arm_cpreg_level
45
- write_neon_element64(t64, a->vd, 0, MO_64);
43
- * regidx: KVM register index
46
+ accfn(rn0_64, t64, rn0_64);
44
+ * kvm_arm_cpreg_level:
47
read_neon_element64(t64, a->vd, 1, MO_64);
45
+ * @regidx: KVM register index
48
- accfn(t64, t64, rn1_64);
46
*
49
- write_neon_element64(t64, a->vd, 1, MO_64);
47
* Return the level of this coprocessor/system register. Return value is
50
+ accfn(rn1_64, t64, rn1_64);
48
* either KVM_PUT_RUNTIME_STATE, KVM_PUT_RESET_STATE, or KVM_PUT_FULL_STATE.
51
tcg_temp_free_i64(t64);
49
@@ -XXX,XX +XXX,XX @@ void kvm_arm_init_serror_injection(CPUState *cs);
52
- } else {
50
* @cpu: ARMCPU
53
- write_neon_element64(rn0_64, a->vd, 0, MO_64);
51
*
54
- write_neon_element64(rn1_64, a->vd, 1, MO_64);
52
* Get VCPU related state from kvm.
55
}
53
+ *
56
+
54
+ * Returns: 0 if success else < 0 error code
57
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
55
*/
58
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
56
int kvm_get_vcpu_events(ARMCPU *cpu);
59
tcg_temp_free_i64(rn0_64);
57
60
tcg_temp_free_i64(rn1_64);
58
@@ -XXX,XX +XXX,XX @@ int kvm_get_vcpu_events(ARMCPU *cpu);
61
return true;
59
* @cpu: ARMCPU
60
*
61
* Put VCPU related state to kvm.
62
+ *
63
+ * Returns: 0 if success else < 0 error code
64
*/
65
int kvm_put_vcpu_events(ARMCPU *cpu);
66
67
@@ -XXX,XX +XXX,XX @@ typedef struct ARMHostCPUFeatures {
68
69
/**
70
* kvm_arm_get_host_cpu_features:
71
- * @ahcc: ARMHostCPUClass to fill in
72
+ * @ahcf: ARMHostCPUClass to fill in
73
*
74
* Probe the capabilities of the host kernel's preferred CPU and fill
75
* in the ARMHostCPUClass struct accordingly.
76
+ *
77
+ * Returns true on success and false otherwise.
78
*/
79
bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);
80
81
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu);
82
bool kvm_arm_aarch32_supported(CPUState *cs);
83
84
/**
85
- * bool kvm_arm_pmu_supported:
86
+ * kvm_arm_pmu_supported:
87
* @cs: CPUState
88
*
89
* Returns: true if the KVM VCPU can enable its PMU
90
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_aarch32_supported(CPUState *cs);
91
bool kvm_arm_pmu_supported(CPUState *cs);
92
93
/**
94
- * bool kvm_arm_sve_supported:
95
+ * kvm_arm_sve_supported:
96
* @cs: CPUState
97
*
98
* Returns true if the KVM VCPU can enable SVE and false otherwise.
99
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_pmu_supported(CPUState *cs);
100
bool kvm_arm_sve_supported(CPUState *cs);
101
102
/**
103
- * kvm_arm_get_max_vm_ipa_size - Returns the number of bits in the
104
- * IPA address space supported by KVM
105
- *
106
+ * kvm_arm_get_max_vm_ipa_size:
107
* @ms: Machine state handle
108
+ *
109
+ * Returns the number of bits in the IPA address space supported by KVM
110
*/
111
int kvm_arm_get_max_vm_ipa_size(MachineState *ms);
112
113
/**
114
- * kvm_arm_sync_mpstate_to_kvm
115
+ * kvm_arm_sync_mpstate_to_kvm:
116
* @cpu: ARMCPU
117
*
118
* If supported set the KVM MP_STATE based on QEMU's model.
119
+ *
120
+ * Returns 0 on success and -1 on failure.
121
*/
122
int kvm_arm_sync_mpstate_to_kvm(ARMCPU *cpu);
123
124
/**
125
- * kvm_arm_sync_mpstate_to_qemu
126
+ * kvm_arm_sync_mpstate_to_qemu:
127
* @cpu: ARMCPU
128
*
129
* If supported get the MP_STATE from KVM and store in QEMU's model.
130
+ *
131
+ * Returns 0 on success and aborts on failure.
132
*/
133
int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu);
134
135
@@ -XXX,XX +XXX,XX @@ int kvm_arm_set_irq(int cpu, int irqtype, int irq, int level);
136
137
static inline void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
138
{
139
- /* This should never actually be called in the "not KVM" case,
140
+ /*
141
+ * This should never actually be called in the "not KVM" case,
142
* but set up the fields to indicate an error anyway.
143
*/
144
cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
145
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit);
146
*
147
* Return: TRUE if any hardware breakpoints in use.
148
*/
149
-
150
bool kvm_arm_hw_debug_active(CPUState *cs);
151
152
/**
153
* kvm_arm_copy_hw_debug_data:
154
- *
155
* @ptr: kvm_guest_debug_arch structure
156
*
157
* Copy the architecture specific debug registers into the
158
* kvm_guest_debug ioctl structure.
159
*/
160
struct kvm_guest_debug_arch;
161
-
162
void kvm_arm_copy_hw_debug_data(struct kvm_guest_debug_arch *ptr);
163
164
/**
165
- * its_class_name
166
+ * its_class_name:
167
*
168
* Return the ITS class name to use depending on whether KVM acceleration
169
* and KVM CAP_SIGNAL_MSI are supported
170
--
62
--
171
2.20.1
63
2.20.1
172
64
173
65
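For anyone who wants to watch the new reset trace points from the qdev patch above while debugging, a usage sketch (the board and the glob patterns are only examples; this relies on QEMU's generic -trace option rather than anything added by this series):

    # log legacy resets, reset-tree walks and parent-bus updates during boot
    qemu-system-arm -M virt -nographic \
        -trace "qdev_reset*" -trace "qbus_reset*" -trace qdev_update_parent_bus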
1
From: Andrew Jones <drjones@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
When a VM is stopped (such as when it's paused) guest virtual time
3
We can use proper widening loads to extend 32-bit inputs,
4
should stop counting. Otherwise, when the VM is resumed it will
4
and skip the "widenfn" step.
5
experience time jumps and its kernel may report soft lockups. Not
6
counting virtual time while the VM is stopped has the side effect
7
of making the guest's time appear to lag when compared with real
8
time, and even with time derived from the physical counter. For
9
this reason, this change, which is enabled by default, comes with
10
a KVM CPU feature allowing it to be disabled, restoring legacy
11
behavior.
12
5
13
This patch only provides the implementation of the virtual time
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
adjustment. A subsequent patch will provide the CPU property
7
Message-id: 20201030022618.785675-12-richard.henderson@linaro.org
15
allowing the change to be enabled and disabled.
16
17
Reported-by: Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
18
Signed-off-by: Andrew Jones <drjones@redhat.com>
19
Message-id: 20200120101023.16030-6-drjones@redhat.com
20
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
---
10
---
23
target/arm/cpu.h | 7 ++++
11
target/arm/translate.c | 6 +++
24
target/arm/kvm_arm.h | 38 ++++++++++++++++++
12
target/arm/translate-neon.c.inc | 66 ++++++++++++++++++---------------
25
target/arm/kvm.c | 92 ++++++++++++++++++++++++++++++++++++++++++++
13
2 files changed, 43 insertions(+), 29 deletions(-)
26
target/arm/kvm32.c | 3 ++
27
target/arm/kvm64.c | 3 ++
28
target/arm/machine.c | 7 ++++
29
6 files changed, 150 insertions(+)
30
14
31
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
32
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpu.h
17
--- a/target/arm/translate.c
34
+++ b/target/arm/cpu.h
18
+++ b/target/arm/translate.c
35
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
19
@@ -XXX,XX +XXX,XX @@ static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
36
/* KVM init features for this CPU */
20
long off = neon_element_offset(reg, ele, memop);
37
uint32_t kvm_init_features[7];
21
38
22
switch (memop) {
39
+ /* KVM CPU state */
23
+ case MO_SL:
40
+
24
+ tcg_gen_ld32s_i64(dest, cpu_env, off);
41
+ /* KVM virtual time adjustment */
25
+ break;
42
+ bool kvm_adjvtime;
26
+ case MO_UL:
43
+ bool kvm_vtime_dirty;
27
+ tcg_gen_ld32u_i64(dest, cpu_env, off);
44
+ uint64_t kvm_vtime;
28
+ break;
45
+
29
case MO_Q:
46
/* Uniprocessor system with MP extensions */
30
tcg_gen_ld_i64(dest, cpu_env, off);
47
bool mp_is_up;
31
break;
48
32
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
49
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
50
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/kvm_arm.h
34
--- a/target/arm/translate-neon.c.inc
52
+++ b/target/arm/kvm_arm.h
35
+++ b/target/arm/translate-neon.c.inc
53
@@ -XXX,XX +XXX,XX @@ bool write_list_to_kvmstate(ARMCPU *cpu, int level);
36
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
54
*/
37
static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
55
bool write_kvmstate_to_list(ARMCPU *cpu);
38
NeonGenWidenFn *widenfn,
56
39
NeonGenTwo64OpFn *opfn,
57
+/**
40
- bool src1_wide)
58
+ * kvm_arm_cpu_pre_save:
41
+ int src1_mop, int src2_mop)
59
+ * @cpu: ARMCPU
42
{
60
+ *
43
/* 3-regs different lengths, prewidening case (VADDL/VSUBL/VAADW/VSUBW) */
61
+ * Called after write_kvmstate_to_list() from cpu_pre_save() to update
44
TCGv_i64 rn0_64, rn1_64, rm_64;
62
+ * the cpreg list with KVM CPU state.
45
- TCGv_i32 rm;
63
+ */
46
64
+void kvm_arm_cpu_pre_save(ARMCPU *cpu);
47
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
65
+
48
return false;
66
+/**
49
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
67
+ * kvm_arm_cpu_post_load:
50
return false;
68
+ * @cpu: ARMCPU
51
}
69
+ *
52
70
+ * Called from cpu_post_load() to update KVM CPU state from the cpreg list.
53
- if (!widenfn || !opfn) {
71
+ */
54
+ if (!opfn) {
72
+void kvm_arm_cpu_post_load(ARMCPU *cpu);
55
/* size == 3 case, which is an entirely different insn group */
73
+
56
return false;
74
/**
57
}
75
* kvm_arm_reset_vcpu:
58
76
* @cpu: ARMCPU
59
- if ((a->vd & 1) || (src1_wide && (a->vn & 1))) {
77
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_kvm(ARMCPU *cpu);
60
+ if ((a->vd & 1) || (src1_mop == MO_Q && (a->vn & 1))) {
78
*/
61
return false;
79
int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu);
62
}
80
63
81
+/**
64
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
82
+ * kvm_arm_get_virtual_time:
65
rn1_64 = tcg_temp_new_i64();
83
+ * @cs: CPUState
66
rm_64 = tcg_temp_new_i64();
84
+ *
67
85
+ * Gets the VCPU's virtual counter and stores it in the KVM CPU state.
68
- if (src1_wide) {
86
+ */
69
- read_neon_element64(rn0_64, a->vn, 0, MO_64);
87
+void kvm_arm_get_virtual_time(CPUState *cs);
70
+ if (src1_mop >= 0) {
88
+
71
+ read_neon_element64(rn0_64, a->vn, 0, src1_mop);
89
+/**
72
} else {
90
+ * kvm_arm_put_virtual_time:
73
TCGv_i32 tmp = tcg_temp_new_i32();
91
+ * @cs: CPUState
74
read_neon_element32(tmp, a->vn, 0, MO_32);
92
+ *
75
widenfn(rn0_64, tmp);
93
+ * Sets the VCPU's virtual counter to the value stored in the KVM CPU state.
76
tcg_temp_free_i32(tmp);
94
+ */
77
}
95
+void kvm_arm_put_virtual_time(CPUState *cs);
78
- rm = tcg_temp_new_i32();
96
+
79
- read_neon_element32(rm, a->vm, 0, MO_32);
97
+void kvm_arm_vm_state_change(void *opaque, int running, RunState state);
80
+ if (src2_mop >= 0) {
98
+
81
+ read_neon_element64(rm_64, a->vm, 0, src2_mop);
99
int kvm_arm_vgic_probe(void);
82
+ } else {
100
83
+ TCGv_i32 tmp = tcg_temp_new_i32();
101
void kvm_arm_pmu_set_irq(CPUState *cs, int irq);
84
+ read_neon_element32(tmp, a->vm, 0, MO_32);
102
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_pmu_set_irq(CPUState *cs, int irq) {}
85
+ widenfn(rm_64, tmp);
103
static inline void kvm_arm_pmu_init(CPUState *cs) {}
86
+ tcg_temp_free_i32(tmp);
104
87
+ }
105
static inline void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map) {}
88
106
+
89
- widenfn(rm_64, rm);
107
+static inline void kvm_arm_get_virtual_time(CPUState *cs) {}
90
- tcg_temp_free_i32(rm);
108
+static inline void kvm_arm_put_virtual_time(CPUState *cs) {}
91
opfn(rn0_64, rn0_64, rm_64);
109
#endif
92
110
93
/*
111
static inline const char *gic_class_name(void)
94
* Load second pass inputs before storing the first pass result, to
112
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
95
* avoid incorrect results if a narrow input overlaps with the result.
113
index XXXXXXX..XXXXXXX 100644
96
*/
114
--- a/target/arm/kvm.c
97
- if (src1_wide) {
115
+++ b/target/arm/kvm.c
98
- read_neon_element64(rn1_64, a->vn, 1, MO_64);
116
@@ -XXX,XX +XXX,XX @@ static int compare_u64(const void *a, const void *b)
99
+ if (src1_mop >= 0) {
117
return 0;
100
+ read_neon_element64(rn1_64, a->vn, 1, src1_mop);
101
} else {
102
TCGv_i32 tmp = tcg_temp_new_i32();
103
read_neon_element32(tmp, a->vn, 1, MO_32);
104
widenfn(rn1_64, tmp);
105
tcg_temp_free_i32(tmp);
106
}
107
- rm = tcg_temp_new_i32();
108
- read_neon_element32(rm, a->vm, 1, MO_32);
109
+ if (src2_mop >= 0) {
110
+ read_neon_element64(rm_64, a->vm, 1, src2_mop);
111
+ } else {
112
+ TCGv_i32 tmp = tcg_temp_new_i32();
113
+ read_neon_element32(tmp, a->vm, 1, MO_32);
114
+ widenfn(rm_64, tmp);
115
+ tcg_temp_free_i32(tmp);
116
+ }
117
118
write_neon_element64(rn0_64, a->vd, 0, MO_64);
119
120
- widenfn(rm_64, rm);
121
- tcg_temp_free_i32(rm);
122
opfn(rn1_64, rn1_64, rm_64);
123
write_neon_element64(rn1_64, a->vd, 1, MO_64);
124
125
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
126
return true;
118
}
127
}
119
128
120
+/*
129
-#define DO_PREWIDEN(INSN, S, EXT, OP, SRC1WIDE) \
121
+ * cpreg_values are sorted in ascending order by KVM register ID
130
+#define DO_PREWIDEN(INSN, S, OP, SRC1WIDE, SIGN) \
122
+ * (see kvm_arm_init_cpreg_list). This allows us to cheaply find
131
static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
123
+ * the storage for a KVM register by ID with a binary search.
132
{ \
124
+ */
133
static NeonGenWidenFn * const widenfn[] = { \
125
+static uint64_t *kvm_arm_get_cpreg_ptr(ARMCPU *cpu, uint64_t regidx)
134
gen_helper_neon_widen_##S##8, \
126
+{
135
gen_helper_neon_widen_##S##16, \
127
+ uint64_t *res;
136
- tcg_gen_##EXT##_i32_i64, \
128
+
137
- NULL, \
129
+ res = bsearch(&regidx, cpu->cpreg_indexes, cpu->cpreg_array_len,
138
+ NULL, NULL, \
130
+ sizeof(uint64_t), compare_u64);
139
}; \
131
+ assert(res);
140
static NeonGenTwo64OpFn * const addfn[] = { \
132
+
141
gen_helper_neon_##OP##l_u16, \
133
+ return &cpu->cpreg_values[res - cpu->cpreg_indexes];
142
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
134
+}
143
tcg_gen_##OP##_i64, \
135
+
144
NULL, \
136
/* Initialize the ARMCPU cpreg list according to the kernel's
145
}; \
137
* definition of what CPU registers it knows about (and throw away
146
- return do_prewiden_3d(s, a, widenfn[a->size], \
138
* the previous TCG-created cpreg list).
147
- addfn[a->size], SRC1WIDE); \
139
@@ -XXX,XX +XXX,XX @@ bool write_list_to_kvmstate(ARMCPU *cpu, int level)
148
+ int narrow_mop = a->size == MO_32 ? MO_32 | SIGN : -1; \
140
return ok;
149
+ return do_prewiden_3d(s, a, widenfn[a->size], addfn[a->size], \
141
}
150
+ SRC1WIDE ? MO_Q : narrow_mop, \
142
151
+ narrow_mop); \
143
+void kvm_arm_cpu_pre_save(ARMCPU *cpu)
144
+{
145
+ /* KVM virtual time adjustment */
146
+ if (cpu->kvm_vtime_dirty) {
147
+ *kvm_arm_get_cpreg_ptr(cpu, KVM_REG_ARM_TIMER_CNT) = cpu->kvm_vtime;
148
+ }
149
+}
150
+
151
+void kvm_arm_cpu_post_load(ARMCPU *cpu)
152
+{
153
+ /* KVM virtual time adjustment */
154
+ if (cpu->kvm_adjvtime) {
155
+ cpu->kvm_vtime = *kvm_arm_get_cpreg_ptr(cpu, KVM_REG_ARM_TIMER_CNT);
156
+ cpu->kvm_vtime_dirty = true;
157
+ }
158
+}
159
+
160
void kvm_arm_reset_vcpu(ARMCPU *cpu)
161
{
162
int ret;
163
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
164
return 0;
165
}
166
167
+void kvm_arm_get_virtual_time(CPUState *cs)
168
+{
169
+ ARMCPU *cpu = ARM_CPU(cs);
170
+ struct kvm_one_reg reg = {
171
+ .id = KVM_REG_ARM_TIMER_CNT,
172
+ .addr = (uintptr_t)&cpu->kvm_vtime,
173
+ };
174
+ int ret;
175
+
176
+ if (cpu->kvm_vtime_dirty) {
177
+ return;
178
+ }
179
+
180
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
181
+ if (ret) {
182
+ error_report("Failed to get KVM_REG_ARM_TIMER_CNT");
183
+ abort();
184
+ }
185
+
186
+ cpu->kvm_vtime_dirty = true;
187
+}
188
+
189
+void kvm_arm_put_virtual_time(CPUState *cs)
190
+{
191
+ ARMCPU *cpu = ARM_CPU(cs);
192
+ struct kvm_one_reg reg = {
193
+ .id = KVM_REG_ARM_TIMER_CNT,
194
+ .addr = (uintptr_t)&cpu->kvm_vtime,
195
+ };
196
+ int ret;
197
+
198
+ if (!cpu->kvm_vtime_dirty) {
199
+ return;
200
+ }
201
+
202
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
203
+ if (ret) {
204
+ error_report("Failed to set KVM_REG_ARM_TIMER_CNT");
205
+ abort();
206
+ }
207
+
208
+ cpu->kvm_vtime_dirty = false;
209
+}
210
+
211
int kvm_put_vcpu_events(ARMCPU *cpu)
212
{
213
CPUARMState *env = &cpu->env;
214
@@ -XXX,XX +XXX,XX @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
215
return MEMTXATTRS_UNSPECIFIED;
216
}
217
218
+void kvm_arm_vm_state_change(void *opaque, int running, RunState state)
219
+{
220
+ CPUState *cs = opaque;
221
+ ARMCPU *cpu = ARM_CPU(cs);
222
+
223
+ if (running) {
224
+ if (cpu->kvm_adjvtime) {
225
+ kvm_arm_put_virtual_time(cs);
226
+ }
227
+ } else {
228
+ if (cpu->kvm_adjvtime) {
229
+ kvm_arm_get_virtual_time(cs);
230
+ }
231
+ }
232
+}
233
234
int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
235
{
236
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
237
index XXXXXXX..XXXXXXX 100644
238
--- a/target/arm/kvm32.c
239
+++ b/target/arm/kvm32.c
240
@@ -XXX,XX +XXX,XX @@
241
#include "qemu-common.h"
242
#include "cpu.h"
243
#include "qemu/timer.h"
244
+#include "sysemu/runstate.h"
245
#include "sysemu/kvm.h"
246
#include "kvm_arm.h"
247
#include "internals.h"
248
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
249
return -EINVAL;
250
}
152
}
251
153
252
+ qemu_add_vm_change_state_handler(kvm_arm_vm_state_change, cs);
154
-DO_PREWIDEN(VADDL_S, s, ext, add, false)
253
+
155
-DO_PREWIDEN(VADDL_U, u, extu, add, false)
254
/* Determine init features for this CPU */
156
-DO_PREWIDEN(VSUBL_S, s, ext, sub, false)
255
memset(cpu->kvm_init_features, 0, sizeof(cpu->kvm_init_features));
157
-DO_PREWIDEN(VSUBL_U, u, extu, sub, false)
256
if (cpu->start_powered_off) {
158
-DO_PREWIDEN(VADDW_S, s, ext, add, true)
257
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
159
-DO_PREWIDEN(VADDW_U, u, extu, add, true)
258
index XXXXXXX..XXXXXXX 100644
160
-DO_PREWIDEN(VSUBW_S, s, ext, sub, true)
259
--- a/target/arm/kvm64.c
161
-DO_PREWIDEN(VSUBW_U, u, extu, sub, true)
260
+++ b/target/arm/kvm64.c
162
+DO_PREWIDEN(VADDL_S, s, add, false, MO_SIGN)
261
@@ -XXX,XX +XXX,XX @@
163
+DO_PREWIDEN(VADDL_U, u, add, false, 0)
262
#include "qemu/host-utils.h"
164
+DO_PREWIDEN(VSUBL_S, s, sub, false, MO_SIGN)
263
#include "qemu/main-loop.h"
165
+DO_PREWIDEN(VSUBL_U, u, sub, false, 0)
264
#include "exec/gdbstub.h"
166
+DO_PREWIDEN(VADDW_S, s, add, true, MO_SIGN)
265
+#include "sysemu/runstate.h"
167
+DO_PREWIDEN(VADDW_U, u, add, true, 0)
266
#include "sysemu/kvm.h"
168
+DO_PREWIDEN(VSUBW_S, s, sub, true, MO_SIGN)
267
#include "sysemu/kvm_int.h"
169
+DO_PREWIDEN(VSUBW_U, u, sub, true, 0)
268
#include "kvm_arm.h"
170
269
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
171
static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
270
return -EINVAL;
172
NeonGenTwo64OpFn *opfn, NeonGenNarrowFn *narrowfn)
271
}
272
273
+ qemu_add_vm_change_state_handler(kvm_arm_vm_state_change, cs);
274
+
275
/* Determine init features for this CPU */
276
memset(cpu->kvm_init_features, 0, sizeof(cpu->kvm_init_features));
277
if (cpu->start_powered_off) {
278
diff --git a/target/arm/machine.c b/target/arm/machine.c
279
index XXXXXXX..XXXXXXX 100644
280
--- a/target/arm/machine.c
281
+++ b/target/arm/machine.c
282
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
283
/* This should never fail */
284
abort();
285
}
286
+
287
+ /*
288
+ * kvm_arm_cpu_pre_save() must be called after
289
+ * write_kvmstate_to_list()
290
+ */
291
+ kvm_arm_cpu_pre_save(cpu);
292
} else {
293
if (!write_cpustate_to_list(cpu, false)) {
294
/* This should never fail. */
295
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
296
* we're using it.
297
*/
298
write_list_to_cpustate(cpu);
299
+ kvm_arm_cpu_post_load(cpu);
300
} else {
301
if (!write_list_to_cpustate(cpu)) {
302
return -1;
303
--
173
--
304
2.20.1
174
2.20.1
305
175
306
176
1
From: Andrew Jones <drjones@redhat.com>
1
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error
2
meant we were using the H4() address swizzler macro rather than the
3
H2() which is required for 2-byte data. This had no effect on
4
little-endian hosts but meant we put the result data into the
5
destination Dreg in the wrong order on big-endian hosts.
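For readers unfamiliar with the address-swizzling macros, here is a minimal
standalone sketch of the idea. The XOR constants are quoted from memory of
QEMU's target/arm helpers and are assumptions for illustration, not a
definitive statement of the implementation:

    #include <stdio.h>

    /*
     * Sketch: on a big-endian host, QEMU stores vector data in host-order
     * 64-bit units, so an element index must be swizzled according to the
     * element size before being used as an array index.
     */
    #define ASSUME_BIG_ENDIAN_HOST 1        /* pretend we are on a BE host */
    #if ASSUME_BIG_ENDIAN_HOST
    #define H2(x) ((x) ^ 3)                 /* 16-bit elements in a 64-bit unit */
    #define H4(x) ((x) ^ 1)                 /* 32-bit elements in a 64-bit unit */
    #else
    #define H2(x) (x)
    #define H4(x) (x)
    #endif

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            /* For 2-byte data, H2() gives the right slot; H4() does not. */
            printf("i=%d  H2(i)=%d  H4(i)=%d\n", i, H2(i), H4(i));
        }
        return 0;
    }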
2
6
3
If we know what the default value should be then we can test for
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
that as well as the feature existence.
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
11
---
12
target/arm/vec_helper.c | 8 ++++----
13
1 file changed, 4 insertions(+), 4 deletions(-)
5
14
6
Signed-off-by: Andrew Jones <drjones@redhat.com>
15
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200120101023.16030-5-drjones@redhat.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
tests/qtest/arm-cpu-features.c | 37 +++++++++++++++++++++++++---------
12
1 file changed, 28 insertions(+), 9 deletions(-)
13
14
diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/tests/qtest/arm-cpu-features.c
17
--- a/target/arm/vec_helper.c
17
+++ b/tests/qtest/arm-cpu-features.c
18
+++ b/target/arm/vec_helper.c
18
@@ -XXX,XX +XXX,XX @@ static bool resp_get_feature(QDict *resp, const char *feature)
19
@@ -XXX,XX +XXX,XX @@ DO_ABA(gvec_uaba_d, uint64_t)
19
qobject_unref(_resp); \
20
r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \
20
})
21
r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \
21
22
\
22
+#define assert_feature(qts, cpu_type, feature, expected_value) \
23
- d[H4(0)] = r0; \
23
+({ \
24
- d[H4(1)] = r1; \
24
+ QDict *_resp, *_props; \
25
- d[H4(2)] = r2; \
25
+ \
26
- d[H4(3)] = r3; \
26
+ _resp = do_query_no_props(qts, cpu_type); \
27
+ d[H2(0)] = r0; \
27
+ g_assert(_resp); \
28
+ d[H2(1)] = r1; \
28
+ g_assert(resp_has_props(_resp)); \
29
+ d[H2(2)] = r2; \
29
+ _props = resp_get_props(_resp); \
30
+ d[H2(3)] = r3; \
30
+ g_assert(qdict_get(_props, feature)); \
31
}
31
+ g_assert(qdict_get_bool(_props, feature) == (expected_value)); \
32
32
+ qobject_unref(_resp); \
33
DO_NEON_PAIRWISE(neon_padd, add)
33
+})
34
+
35
+#define assert_has_feature_enabled(qts, cpu_type, feature) \
36
+ assert_feature(qts, cpu_type, feature, true)
37
+
38
+#define assert_has_feature_disabled(qts, cpu_type, feature) \
39
+ assert_feature(qts, cpu_type, feature, false)
40
+
41
static void assert_type_full(QTestState *qts)
42
{
43
const char *error;
44
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
45
assert_error(qts, "host", "The CPU type 'host' requires KVM", NULL);
46
47
/* Test expected feature presence/absence for some cpu types */
48
- assert_has_feature(qts, "max", "pmu");
49
- assert_has_feature(qts, "cortex-a15", "pmu");
50
+ assert_has_feature_enabled(qts, "max", "pmu");
51
+ assert_has_feature_enabled(qts, "cortex-a15", "pmu");
52
assert_has_not_feature(qts, "cortex-a15", "aarch64");
53
54
if (g_str_equal(qtest_get_arch(), "aarch64")) {
55
- assert_has_feature(qts, "max", "aarch64");
56
- assert_has_feature(qts, "max", "sve");
57
- assert_has_feature(qts, "max", "sve128");
58
- assert_has_feature(qts, "cortex-a57", "pmu");
59
- assert_has_feature(qts, "cortex-a57", "aarch64");
60
+ assert_has_feature_enabled(qts, "max", "aarch64");
61
+ assert_has_feature_enabled(qts, "max", "sve");
62
+ assert_has_feature_enabled(qts, "max", "sve128");
63
+ assert_has_feature_enabled(qts, "cortex-a57", "pmu");
64
+ assert_has_feature_enabled(qts, "cortex-a57", "aarch64");
65
66
sve_tests_default(qts, "max");
67
68
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
69
QDict *resp;
70
char *error;
71
72
- assert_has_feature(qts, "host", "aarch64");
73
- assert_has_feature(qts, "host", "pmu");
74
+ assert_has_feature_enabled(qts, "host", "aarch64");
75
+ assert_has_feature_enabled(qts, "host", "pmu");
76
77
assert_error(qts, "cortex-a15",
78
"We cannot guarantee the CPU type 'cortex-a15' works "
79
--
34
--
80
2.20.1
35
2.20.1
81
36
82
37
1
From: Andrew Jones <drjones@redhat.com>
1
The helper functions for performing the udot/sdot operations against
2
a scalar were not using an address-swizzling macro when converting
3
the index of the scalar element into a pointer into the vm array.
4
This had no effect on little-endian hosts but meant we generated
5
incorrect results on big-endian hosts.
2
6
3
Add the missing GENERIC_TIMER feature to kvm64 cpus.
7
For these insns, the index is indexing over a group of 4 8-bit values,
8
so 32 bits per indexed entity, and H4() is therefore what we want.
9
(For Neon the only possible input indexes are 0 and 1.)
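Put differently, a sketch of the indexing the fix below establishes: the
scalar operand is addressed per 32-bit group, so the group index takes the
32-bit swizzle before being scaled to a byte offset:

    /* 'index' selects a 4-byte group of the vm operand; swizzle it like a
     * 32-bit element index, then convert to a byte offset. */
    int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;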
4
10
5
We don't currently use these registers when KVM is enabled, but it's
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
probably best we add the feature flag for consistency and potential
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
future use. There's also precedent, as we add the PMU feature flag to
13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
KVM enabled guests, even though we don't use those registers either.
14
Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
15
---
16
target/arm/vec_helper.c | 4 ++--
17
1 file changed, 2 insertions(+), 2 deletions(-)
9
18
10
This change was originally posted as a hunk of a different, never
19
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
11
merged patch from Bijan Mottahedeh.
12
13
Signed-off-by: Andrew Jones <drjones@redhat.com>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20200120101023.16030-4-drjones@redhat.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
18
target/arm/kvm64.c | 1 +
19
1 file changed, 1 insertion(+)
20
21
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
22
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/kvm64.c
21
--- a/target/arm/vec_helper.c
24
+++ b/target/arm/kvm64.c
22
+++ b/target/arm/vec_helper.c
25
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
23
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
26
set_feature(&features, ARM_FEATURE_NEON);
24
intptr_t index = simd_data(desc);
27
set_feature(&features, ARM_FEATURE_AARCH64);
25
uint32_t *d = vd;
28
set_feature(&features, ARM_FEATURE_PMU);
26
int8_t *n = vn;
29
+ set_feature(&features, ARM_FEATURE_GENERIC_TIMER);
27
- int8_t *m_indexed = (int8_t *)vm + index * 4;
30
28
+ int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;
31
ahcf->features = features;
29
32
30
/* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
31
* Otherwise opr_sz is a multiple of 16.
32
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
33
intptr_t index = simd_data(desc);
34
uint32_t *d = vd;
35
uint8_t *n = vn;
36
- uint8_t *m_indexed = (uint8_t *)vm + index * 4;
37
+ uint8_t *m_indexed = (uint8_t *)vm + H4(index) * 4;
38
39
/* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
40
* Otherwise opr_sz is a multiple of 16.
33
--
41
--
34
2.20.1
42
2.20.1
35
43
36
44
1
From: Zenghui Yu <yuzenghui@huawei.com>
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
2
2
3
If LPIs are disabled, KVM will just ignore the GICR_PENDBASER.PTZ bit when
3
HCR should be applied when NS is set, not when it is cleared.
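For context, the predicate ends up as below (taken from the hunk later in
this patch); arm_hcr_el2_eff() returns 0 when HCR_EL2 has no effect in the
current security state, so no separate NS check is needed:

    static bool tlb_force_broadcast(CPUARMState *env)
    {
        /* Effective HCR_EL2.FB forces broadcast of EL1 TLB maintenance. */
        return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
    }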
4
restoring GICR_CTLR. Setting PTZ here makes little sense in "reduce GIC
5
initialization time".
6
4
7
And what's worse, PTZ is generally programmed by the guest to indicate to the
5
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
8
Redistributor whether the LPI Pending table is zero when enabling LPIs.
9
If migration is triggered when the PTZ has just been cleared by the guest (and
10
before enabling LPIs), we will see PTZ==1 on the destination side, which
11
is not as expected. Let's just drop this hackish userspace behavior.
12
13
Also take this chance to refine the comment a bit.
14
15
Fixes: 367b9f527bec ("hw/intc/arm_gicv3_kvm: Implement get/put functions")
16
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
17
Message-id: 20200119133051.642-1-yuzenghui@huawei.com
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
8
---
21
hw/intc/arm_gicv3_kvm.c | 11 ++++-------
9
target/arm/helper.c | 5 ++---
22
1 file changed, 4 insertions(+), 7 deletions(-)
10
1 file changed, 2 insertions(+), 3 deletions(-)
23
11
24
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
25
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/intc/arm_gicv3_kvm.c
14
--- a/target/arm/helper.c
27
+++ b/hw/intc/arm_gicv3_kvm.c
15
+++ b/target/arm/helper.c
28
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_put(GICv3State *s)
16
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
29
kvm_gicd_access(s, GICD_CTLR, &reg, true);
17
30
18
/*
31
if (redist_typer & GICR_TYPER_PLPIS) {
19
* Non-IS variants of TLB operations are upgraded to
32
- /* Set base addresses before LPIs are enabled by GICR_CTLR write */
20
- * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
33
+ /*
21
+ * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
34
+ * Restore base addresses before LPIs are potentially enabled by
22
* force broadcast of these operations.
35
+ * GICR_CTLR write
23
*/
36
+ */
24
static bool tlb_force_broadcast(CPUARMState *env)
37
for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
25
{
38
GICv3CPUState *c = &s->cpu[ncpu];
26
- return (env->cp15.hcr_el2 & HCR_FB) &&
39
27
- arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
40
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_put(GICv3State *s)
28
+ return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
41
kvm_gicr_access(s, GICR_PROPBASER + 4, ncpu, &regh, true);
29
}
42
30
43
reg64 = c->gicr_pendbaser;
31
static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
44
- if (!(c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS)) {
45
- /* Setting PTZ is advised if LPIs are disabled, to reduce
46
- * GIC initialization time.
47
- */
48
- reg64 |= GICR_PENDBASER_PTZ;
49
- }
50
regl = (uint32_t)reg64;
51
kvm_gicr_access(s, GICR_PENDBASER, ncpu, &regl, true);
52
regh = (uint32_t)(reg64 >> 32);
53
--
32
--
54
2.20.1
33
2.20.1
55
34
56
35
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
2
2
3
Add a function resettable_change_parent() to do the required
3
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the
4
plumbing when changing the parent of a Resettable object.
4
future HCR_EL2.TLOR when S-EL2 is enabled.
5
5
6
We need to make sure that the reset state of the object remains
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
7
coherent with the reset state of the new parent.
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
9
We make the following two hypotheses:
10
+ when an object is put in a parent under reset, the object goes in
11
reset.
12
+ when an object is removed from a parent under reset, the object
13
leaves reset.
14
15
The added function avoids any glitch if both old and new parent are
16
already in reset.
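A hypothetical usage sketch (not part of this patch): a caller moving a
device between buses would also have to update the qdev parentage; only the
reset-state bookkeeping is shown here.

    /* Reconcile the reset count of 'dev' when it moves from 'old_bus' to
     * 'new_bus'; per the API documentation either parent may be NULL. */
    static void move_device(DeviceState *dev, BusState *old_bus, BusState *new_bus)
    {
        resettable_change_parent(OBJECT(dev), OBJECT(new_bus), OBJECT(old_bus));
    }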
17
18
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
21
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
22
Message-id: 20200123132823.1117486-6-damien.hedde@greensocs.com
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
9
---
25
include/hw/resettable.h | 16 +++++++++++
10
target/arm/helper.c | 19 +++++--------------
26
hw/core/resettable.c | 62 +++++++++++++++++++++++++++++++++++++++--
11
1 file changed, 5 insertions(+), 14 deletions(-)
27
hw/core/trace-events | 1 +
28
3 files changed, 77 insertions(+), 2 deletions(-)
29
12
30
diff --git a/include/hw/resettable.h b/include/hw/resettable.h
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
32
--- a/include/hw/resettable.h
15
--- a/target/arm/helper.c
33
+++ b/include/hw/resettable.h
16
+++ b/target/arm/helper.c
34
@@ -XXX,XX +XXX,XX @@ void resettable_release_reset(Object *obj, ResetType type);
17
@@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
18
#endif
19
20
/* Shared logic between LORID and the rest of the LOR* registers.
21
- * Secure state has already been delt with.
22
+ * Secure state exclusion has already been dealt with.
35
*/
23
*/
36
bool resettable_is_in_reset(Object *obj);
24
-static CPAccessResult access_lor_ns(CPUARMState *env)
37
25
+static CPAccessResult access_lor_ns(CPUARMState *env,
38
+/**
26
+ const ARMCPRegInfo *ri, bool isread)
39
+ * resettable_change_parent:
40
+ * Indicate that the parent of Ressettable @obj is changing from @oldp to @newp.
41
+ * All 3 objects must implement resettable interface. @oldp or @newp may be
42
+ * NULL.
43
+ *
44
+ * This function will adapt the reset state of @obj so that it is coherent
45
+ * with the reset state of @newp. It may trigger @resettable_assert_reset()
46
+ * or @resettable_release_reset(). It will do such things only if the reset
47
+ * state of @newp and @oldp are different.
48
+ *
49
+ * When using this function during reset, it must only be called during
50
+ * a hold phase method. Calling this during enter or exit phase is an error.
51
+ */
52
+void resettable_change_parent(Object *obj, Object *newp, Object *oldp);
53
+
54
/**
55
* resettable_class_set_parent_phases:
56
*
57
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/hw/core/resettable.c
60
+++ b/hw/core/resettable.c
61
@@ -XXX,XX +XXX,XX @@ static void resettable_phase_exit(Object *obj, void *opaque, ResetType type);
62
* enter_phase_in_progress:
63
* True if we are currently in reset enter phase.
64
*
65
- * Note: This flag is only used to guarantee (using asserts) that the reset
66
- * API is used correctly. We can use a global variable because we rely on the
67
+ * exit_phase_in_progress:
68
+ * count the number of exit phase we are in.
69
+ *
70
+ * Note: These flags are only used to guarantee (using asserts) that the reset
71
+ * API is used correctly. We can use global variables because we rely on the
72
* iothread mutex to ensure only one reset operation is in a progress at a
73
* given time.
74
*/
75
static bool enter_phase_in_progress;
76
+static unsigned exit_phase_in_progress;
77
78
void resettable_reset(Object *obj, ResetType type)
79
{
27
{
80
@@ -XXX,XX +XXX,XX @@ void resettable_release_reset(Object *obj, ResetType type)
28
int el = arm_current_el(env);
81
trace_resettable_reset_release_begin(obj, type);
29
82
assert(!enter_phase_in_progress);
30
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_ns(CPUARMState *env)
83
31
return CP_ACCESS_OK;
84
+ exit_phase_in_progress += 1;
85
resettable_phase_exit(obj, NULL, type);
86
+ exit_phase_in_progress -= 1;
87
88
trace_resettable_reset_release_end(obj);
89
}
32
}
90
@@ -XXX,XX +XXX,XX @@ static void resettable_phase_exit(Object *obj, void *opaque, ResetType type)
33
91
trace_resettable_phase_exit_end(obj, obj_typename, s->count);
34
-static CPAccessResult access_lorid(CPUARMState *env, const ARMCPRegInfo *ri,
35
- bool isread)
36
-{
37
- if (arm_is_secure_below_el3(env)) {
38
- /* Access ok in secure mode. */
39
- return CP_ACCESS_OK;
40
- }
41
- return access_lor_ns(env);
42
-}
43
-
44
static CPAccessResult access_lor_other(CPUARMState *env,
45
const ARMCPRegInfo *ri, bool isread)
46
{
47
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_other(CPUARMState *env,
48
/* Access denied in secure mode. */
49
return CP_ACCESS_TRAP;
50
}
51
- return access_lor_ns(env);
52
+ return access_lor_ns(env, ri, isread);
92
}
53
}
93
54
94
+/*
55
/*
95
+ * resettable_get_count:
56
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
96
+ * Get the count of the Resettable object @obj. Return 0 if @obj is NULL.
57
.type = ARM_CP_CONST, .resetvalue = 0 },
97
+ */
58
{ .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
98
+static unsigned resettable_get_count(Object *obj)
59
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
99
+{
60
- .access = PL1_R, .accessfn = access_lorid,
100
+ if (obj) {
61
+ .access = PL1_R, .accessfn = access_lor_ns,
101
+ ResettableClass *rc = RESETTABLE_GET_CLASS(obj);
62
.type = ARM_CP_CONST, .resetvalue = 0 },
102
+ return rc->get_state(obj)->count;
63
REGINFO_SENTINEL
103
+ }
64
};
104
+ return 0;
105
+}
106
+
107
+void resettable_change_parent(Object *obj, Object *newp, Object *oldp)
108
+{
109
+ ResettableClass *rc = RESETTABLE_GET_CLASS(obj);
110
+ ResettableState *s = rc->get_state(obj);
111
+ unsigned newp_count = resettable_get_count(newp);
112
+ unsigned oldp_count = resettable_get_count(oldp);
113
+
114
+ /*
115
+ * Ensure we do not change parent when in enter or exit phase.
116
+ * During these phases, the reset subtree being updated is partly in reset
117
+ * and partly not in reset (it depends on the actual position in
118
+ * resettable_child_foreach()s). We are not able to tell in which part is a
119
+ * leaving or arriving device. Thus we cannot set the reset count of the
120
+ * moving device to the proper value.
121
+ */
122
+ assert(!enter_phase_in_progress && !exit_phase_in_progress);
123
+ trace_resettable_change_parent(obj, oldp, oldp_count, newp, newp_count);
124
+
125
+ /*
126
+ * At most one of the two 'for' loops will be executed below
127
+ * in order to cope with the difference between the two counts.
128
+ */
129
+ /* if newp is more reset than oldp */
130
+ for (unsigned i = oldp_count; i < newp_count; i++) {
131
+ resettable_assert_reset(obj, RESET_TYPE_COLD);
132
+ }
133
+ /*
134
+ * if obj is leaving a bus under reset, we need to ensure
135
+ * hold phase is not pending.
136
+ */
137
+ if (oldp_count && s->hold_phase_pending) {
138
+ resettable_phase_hold(obj, NULL, RESET_TYPE_COLD);
139
+ }
140
+ /* if oldp is more reset than newp */
141
+ for (unsigned i = newp_count; i < oldp_count; i++) {
142
+ resettable_release_reset(obj, RESET_TYPE_COLD);
143
+ }
144
+}
145
+
146
void resettable_class_set_parent_phases(ResettableClass *rc,
147
ResettableEnterPhase enter,
148
ResettableHoldPhase hold,
149
diff --git a/hw/core/trace-events b/hw/core/trace-events
150
index XXXXXXX..XXXXXXX 100644
151
--- a/hw/core/trace-events
152
+++ b/hw/core/trace-events
153
@@ -XXX,XX +XXX,XX @@ resettable_reset_assert_begin(void *obj, int cold) "obj=%p cold=%d"
154
resettable_reset_assert_end(void *obj) "obj=%p"
155
resettable_reset_release_begin(void *obj, int cold) "obj=%p cold=%d"
156
resettable_reset_release_end(void *obj) "obj=%p"
157
+resettable_change_parent(void *obj, void *o, unsigned oc, void *n, unsigned nc) "obj=%p from=%p(%d) to=%p(%d)"
158
resettable_phase_enter_begin(void *obj, const char *objtype, unsigned count, int type) "obj=%p(%s) count=%d type=%d"
159
resettable_phase_enter_exec(void *obj, const char *objtype, int type, int has_method) "obj=%p(%s) type=%d method=%d"
160
resettable_phase_enter_end(void *obj, const char *objtype, unsigned count) "obj=%p(%s) count=%d"
161
--
65
--
162
2.20.1
66
2.20.1
163
67
164
68
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
If we're using the capstone disassembler, disassembly of a run of
2
instructions more than 32 bytes long disassembles the wrong data for
3
instructions beyond the 32 byte mark:
2
4
3
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
5
(qemu) xp /16x 0x100
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
0000000000000100: 0x00000005 0x54410001 0x00000001 0x00001000
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
0000000000000110: 0x00000000 0x00000004 0x54410002 0x3c000000
6
Message-id: 20200123132823.1117486-10-damien.hedde@greensocs.com
8
0000000000000120: 0x00000000 0x00000004 0x54410009 0x74736574
9
0000000000000130: 0x00000000 0x00000000 0x00000000 0x00000000
10
(qemu) xp /16i 0x100
11
0x00000100: 00000005 andeq r0, r0, r5
12
0x00000104: 54410001 strbpl r0, [r1], #-1
13
0x00000108: 00000001 andeq r0, r0, r1
14
0x0000010c: 00001000 andeq r1, r0, r0
15
0x00000110: 00000000 andeq r0, r0, r0
16
0x00000114: 00000004 andeq r0, r0, r4
17
0x00000118: 54410002 strbpl r0, [r1], #-2
18
0x0000011c: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c
19
0x00000120: 54410001 strbpl r0, [r1], #-1
20
0x00000124: 00000001 andeq r0, r0, r1
21
0x00000128: 00001000 andeq r1, r0, r0
22
0x0000012c: 00000000 andeq r0, r0, r0
23
0x00000130: 00000004 andeq r0, r0, r4
24
0x00000134: 54410002 strbpl r0, [r1], #-2
25
0x00000138: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c
26
0x0000013c: 00000000 andeq r0, r0, r0
27
28
Here the disassembly of 0x120..0x13f is using the data that is in
29
0x104..0x123.
30
31
This is caused by passing the wrong value to the read_memory_func().
32
The intention is that at this point in the loop the 'cap_buf' buffer
33
already contains 'csize' bytes of data for the instruction at guest
34
addr 'pc', and we want to read in an extra 'tsize' bytes. Those
35
extra bytes are therefore at 'pc + csize', not 'pc'. On the first
36
time through the loop 'csize' happens to be zero, so the initial read
37
of 32 bytes into cap_buf is correct and as long as the disassembly
38
never needs to read more data we return the correct information.
39
40
Use the correct guest address in the call to read_memory_func().
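In other words, the refill step must preserve the invariant that
cap_buf[0..csize) mirrors guest memory at pc..pc+csize; a sketch of the
corrected step, mirroring the one-line change in the hunk below:

    /* cap_buf already holds csize bytes for guest address pc, so the next
     * tsize bytes come from pc + csize and are appended at cap_buf + csize. */
    info->read_memory_func(pc + csize, cap_buf + csize, tsize, info);
    csize += tsize;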
41
42
Cc: qemu-stable@nongnu.org
43
Fixes: https://bugs.launchpad.net/qemu/+bug/1900779
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
44
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
45
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
46
Message-id: 20201022132445.25039-1-peter.maydell@linaro.org
8
---
47
---
9
docs/devel/index.rst | 1 +
48
disas/capstone.c | 2 +-
10
docs/devel/reset.rst | 289 +++++++++++++++++++++++++++++++++++++++++++
49
1 file changed, 1 insertion(+), 1 deletion(-)
11
2 files changed, 290 insertions(+)
12
create mode 100644 docs/devel/reset.rst
13
50
14
diff --git a/docs/devel/index.rst b/docs/devel/index.rst
51
diff --git a/disas/capstone.c b/disas/capstone.c
15
index XXXXXXX..XXXXXXX 100644
52
index XXXXXXX..XXXXXXX 100644
16
--- a/docs/devel/index.rst
53
--- a/disas/capstone.c
17
+++ b/docs/devel/index.rst
54
+++ b/disas/capstone.c
18
@@ -XXX,XX +XXX,XX @@ Contents:
55
@@ -XXX,XX +XXX,XX @@ bool cap_disas_monitor(disassemble_info *info, uint64_t pc, int count)
19
tcg
56
20
tcg-plugins
57
/* Make certain that we can make progress. */
21
bitops
58
assert(tsize != 0);
22
+ reset
59
- info->read_memory_func(pc, cap_buf + csize, tsize, info);
23
diff --git a/docs/devel/reset.rst b/docs/devel/reset.rst
60
+ info->read_memory_func(pc + csize, cap_buf + csize, tsize, info);
24
new file mode 100644
61
csize += tsize;
25
index XXXXXXX..XXXXXXX
62
26
--- /dev/null
63
if (cs_disasm_iter(handle, &cbuf, &csize, &pc, insn)) {
27
+++ b/docs/devel/reset.rst
28
@@ -XXX,XX +XXX,XX @@
29
+
30
+=======================================
31
+Reset in QEMU: the Resettable interface
32
+=======================================
33
+
34
+The reset of qemu objects is handled using the resettable interface declared
35
+in ``include/hw/resettable.h``.
36
+
37
+This interface allows objects to be grouped (on a tree basis); so that the
38
+whole group can be reset consistently. Each individual member object does not
39
+have to care about others; in particular, problems of order (which object is
40
+reset first) are addressed.
41
+
42
+As of now DeviceClass and BusClass implement this interface.
43
+
44
+
45
+Triggering reset
46
+----------------
47
+
48
+This section documents the APIs which "users" of a resettable object should use
49
+to control it. All resettable control functions must be called while holding
50
+the iothread lock.
51
+
52
+You can apply a reset to an object using ``resettable_assert_reset()``. You need
53
+to call ``resettable_release_reset()`` to release the object from reset. To
54
+instantly reset an object, without keeping it in reset state, just call
55
+``resettable_reset()``. These functions take two parameters: a pointer to the
56
+object to reset and a reset type.
57
+
58
+Several types of reset will be supported. For now only cold reset is defined;
59
+others may be added later. The Resettable interface handles reset types with an
60
+enum:
61
+
62
+``RESET_TYPE_COLD``
63
+ Cold reset is supported by every resettable object. In QEMU, it means we reset
64
+ to the initial state corresponding to the start of QEMU; this might differ
65
+ from what is a real hardware cold reset. It differs from other resets (like
66
+ warm or bus resets) which may keep certain parts untouched.
67
+
68
+Calling ``resettable_reset()`` is equivalent to calling
69
+``resettable_assert_reset()`` then ``resettable_release_reset()``. It is
70
+possible to interleave multiple calls to these three functions. There may
71
+be several reset sources/controllers of a given object. The interface handles
72
+everything and the different reset controllers do not need to know anything
73
+about each others. The object will leave reset state only when each other
74
+controllers end their reset operation. This point is handled internally by
75
+maintaining a count of in-progress resets; it is crucial to call
76
+``resettable_release_reset()`` one time and only one time per
77
+``resettable_assert_reset()`` call.
78
+
79
+For now migration of a device or bus in reset is not supported. Care must be
80
+taken not to delay ``resettable_release_reset()`` after its
81
+``resettable_assert_reset()`` counterpart.
82
+
83
+Note that, since resettable is an interface, the API takes a simple Object as
84
+parameter. Still, it is a programming error to call a resettable function on a
85
+non-resettable object and it will trigger a run time assert error. Since most
86
+calls to resettable interface are done through base class functions, such an
87
+error is not likely to happen.
88
+
89
+For Devices and Buses, the following helper functions exist:
90
+
91
+- ``device_cold_reset()``
92
+- ``bus_cold_reset()``
93
+
94
+These are simple wrappers around resettable_reset() function; they only cast the
95
+Device or Bus into an Object and pass the cold reset type. When possible
96
+prefer to use these functions instead of ``resettable_reset()``.
97
+
98
+Device and bus functions co-exist because there can be semantic differences
99
+between resetting a bus and resetting the controller bridge which owns it.
100
+For example, consider a SCSI controller. Resetting the controller puts all
101
+its registers back to what reset state was as well as reset everything on the
102
+SCSI bus, whereas resetting just the SCSI bus only resets everything that's on
103
+it but not the controller.
104
+
105
+
106
+Multi-phase mechanism
107
+---------------------
108
+
109
+This section documents the internals of the resettable interface.
110
+
111
+The resettable interface uses a multi-phase system to relieve objects and
112
+machines from reset ordering problems. To address this, the reset operation
113
+of an object is split into three well defined phases.
114
+
115
+When resetting several objects (for example the whole machine at simulation
116
+startup), all first phases of all objects are executed, then all second phases
117
+and then all third phases.
118
+
119
+The three phases are:
120
+
121
+1. The **enter** phase is executed when the object enters reset. It resets only
122
+ local state of the object; it must not do anything that has a side-effect
123
+ on other objects, such as raising or lowering a qemu_irq line or reading or
124
+ writing guest memory.
125
+
126
+2. The **hold** phase is executed for entry into reset, once every object in the
127
+ group which is being reset has had its *enter* phase executed. At this point
128
+ devices can do actions that affect other objects.
129
+
130
+3. The **exit** phase is executed when the object leaves the reset state.
131
+ Actions affecting other objects are permitted.
132
+
133
+As said in previous section, the interface maintains a count of reset. This
134
+count is used to ensure phases are executed only when required. *enter* and
135
+*hold* phases are executed only when asserting reset for the first time
136
+(if an object is already in reset state when calling
137
+``resettable_assert_reset()`` or ``resettable_reset()``, they are not
138
+executed).
139
+The *exit* phase is executed only when the last reset operation ends. Therefore
140
+the object does not need to care how many of reset controllers it has and how
141
+many of them have started a reset.
142
+
143
+
144
+Handling reset in a resettable object
145
+-------------------------------------
146
+
147
+This section documents the APIs that an implementation of a resettable object
148
+must provide and what functions it has access to. It is intended for people
149
+who want to implement or convert a class which has the resettable interface;
150
+for example when specializing an existing device or bus.
151
+
152
+Methods to implement
153
+....................
154
+
155
+Three methods should be defined or left empty. Each method corresponds to a
156
+phase of the reset; they are name ``phases.enter()``, ``phases.hold()`` and
157
+``phases.exit()``. They all take the object as parameter. The *enter* method
158
+also take the reset type as second parameter.
159
+
160
+When extending an existing class, these methods may need to be extended too.
161
+The ``resettable_class_set_parent_phases()`` class function may be used to
162
+backup parent class methods.
163
+
164
+Here follows an example to implement reset for a Device which sets an IO while
165
+in reset.
166
+
167
+::
168
+
169
+ static void mydev_reset_enter(Object *obj, ResetType type)
170
+ {
171
+ MyDevClass *myclass = MYDEV_GET_CLASS(obj);
172
+ MyDevState *mydev = MYDEV(obj);
173
+ /* call parent class enter phase */
174
+ if (myclass->parent_phases.enter) {
175
+ myclass->parent_phases.enter(obj, type);
176
+ }
177
+ /* initialize local state only */
178
+ mydev->var = 0;
179
+ }
180
+
181
+ static void mydev_reset_hold(Object *obj)
182
+ {
183
+ MyDevClass *myclass = MYDEV_GET_CLASS(obj);
184
+ MyDevState *mydev = MYDEV(obj);
185
+ /* call parent class hold phase */
186
+ if (myclass->parent_phases.hold) {
187
+ myclass->parent_phases.hold(obj);
188
+ }
189
+ /* set an IO */
190
+ qemu_set_irq(mydev->irq, 1);
191
+ }
192
+
193
+ static void mydev_reset_exit(Object *obj)
194
+ {
195
+ MyDevClass *myclass = MYDEV_GET_CLASS(obj);
196
+ MyDevState *mydev = MYDEV(obj);
197
+ /* call parent class exit phase */
198
+ if (myclass->parent_phases.exit) {
199
+ myclass->parent_phases.exit(obj);
200
+ }
201
+ /* clear an IO */
202
+ qemu_set_irq(mydev->irq, 0);
203
+ }
204
+
205
+ typedef struct MyDevClass {
206
+ MyParentClass parent_class;
207
+ /* to store eventual parent reset methods */
208
+ ResettablePhases parent_phases;
209
+ } MyDevClass;
210
+
211
+ static void mydev_class_init(ObjectClass *class, void *data)
212
+ {
213
+ MyDevClass *myclass = MYDEV_CLASS(class);
214
+ ResettableClass *rc = RESETTABLE_CLASS(class);
215
+ resettable_class_set_parent_reset_phases(rc,
216
+ mydev_reset_enter,
217
+ mydev_reset_hold,
218
+ mydev_reset_exit,
219
+ &myclass->parent_phases);
220
+ }
221
+
222
+In the above example, we override all three phases. It is possible to override
223
+only some of them by passing NULL instead of a function pointer to
224
+``resettable_class_set_parent_reset_phases()``. For example, the following will
225
+only override the *enter* phase and leave *hold* and *exit* untouched::
226
+
227
+ resettable_class_set_parent_reset_phases(rc, mydev_reset_enter,
228
+ NULL, NULL,
229
+ &myclass->parent_phases);
230
+
231
+This is equivalent to providing a trivial implementation of the hold and exit
232
+phases which does nothing but call the parent class's implementation of the
233
+phase.
234
+
235
+Polling the reset state
236
+.......................
237
+
238
+Resettable interface provides the ``resettable_is_in_reset()`` function.
239
+This function returns true if the object parameter is currently under reset.
240
+
241
+An object is under reset from the beginning of the *init* phase to the end of
242
+the *exit* phase. During all three phases, the function will return that the
243
+object is in reset.
244
+
245
+This function may be used if the object behavior has to be adapted
246
+while in reset state. For example if a device has an irq input,
247
+it will probably need to ignore it while in reset; then it can for
248
+example check the reset state at the beginning of the irq callback.
249
+
250
+Note that until migration of the reset state is supported, an object
251
+should not be left in reset. So apart from being currently executing
252
+one of the reset phases, the only cases when this function will return
253
+true is if an external interaction (like changing an io) is made during
254
+*hold* or *exit* phase of another object in the same reset group.
255
+
256
+Helpers ``device_is_in_reset()`` and ``bus_is_in_reset()`` are also provided
257
+for devices and buses and should be preferred.
258
+
259
+
260
+Base class handling of reset
261
+----------------------------
262
+
263
+This section documents parts of the reset mechanism that you only need to know
264
+about if you are extending it to work with a new base class other than
265
+DeviceClass or BusClass, or maintaining the existing code in those classes. Most
266
+people can ignore it.
267
+
268
+Methods to implement
269
+....................
270
+
271
+There are two other methods that need to exist in a class implementing the
272
+interface: ``get_state()`` and ``child_foreach()``.
273
+
274
+``get_state()`` is simple. *resettable* is an interface and, as a consequence,
275
+does not have any class state structure. But in order to factorize the code, we
276
+need one. This method must return a pointer to ``ResettableState`` structure.
277
+The structure must be allocated by the base class; preferably it should be
278
+located inside the object instance structure.
279
+
280
+``child_foreach()`` is more complex. It should execute the given callback on
281
+every reset child of the given resettable object. All children must be
282
+resettable too. Additional parameters (a reset type and an opaque pointer) must
283
+be passed to the callback too.
284
+
285
+In ``DeviceClass`` and ``BusClass`` the ``ResettableState`` is located
286
+``DeviceState`` and ``BusState`` structure. ``child_foreach()`` is implemented
287
+to follow the bus hierarchy; for a bus, it calls the function on every child
288
+device; for a device, it calls the function on every bus child. When we reset
289
+the main system bus, we reset the whole machine bus tree.
290
+
291
+Changing a resettable parent
292
+............................
293
+
294
+One thing which should be taken care of by the base class is handling reset
295
+hierarchy changes.
296
+
297
+The reset hierarchy is supposed to be static and built during machine creation.
298
+But there are actually some exceptions. To cope with this, the resettable API
299
+provides ``resettable_change_parent()``. This function allows to set, update or
300
+remove the parent of a resettable object after machine creation is done. As
301
+parameters, it takes the object being moved, the old parent if any and the new
302
+parent if any.
303
+
304
+This function can be used at any time when not in a reset operation. During
305
+a reset operation it must be used only in *hold* phase. Using it in *enter* or
306
+*exit* phase is an error.
307
+Also it should not be used during machine creation, although it is harmless to
308
+do so: the function is a no-op as long as old and new parent are NULL or not
309
+in reset.
310
+
311
+There is currently 2 cases where this function is used:
312
+
313
+1. *device hotplug*; it means a new device is introduced on a live bus.
314
+
315
+2. *hot bus change*; it means an existing live device is added, moved or
316
+ removed in the bus hierarchy. At the moment, it occurs only in the raspi
317
+ machines for changing the sdbus used by sd card.
318
--
64
--
319
2.20.1
65
2.20.1
320
66
321
67
1
From: Cédric Le Goater <clg@kaod.org>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
These buffers should be aligned on 16 bytes.
3
Use the BIT_ULL() macro to ensure we use 64-bit arithmetic.
4
This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN):
4
5
5
Ignore invalid RX and TX buffer addresses and log an error. All
6
CID 1432363 (#1 of 1): Unintentional integer overflow:
6
incoming and outgoing traffic will be dropped because no valid RX or
7
TX descriptors will be available.
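The shape of the guard added below, reproduced here for readability
(QEMU_IS_ALIGNED() and qemu_log_mask() are existing QEMU helpers):

    if (!QEMU_IS_ALIGNED(value, FTGMAC100_DESC_ALIGNMENT)) {
        /* Descriptor rings must be 16-byte aligned; reject the write. */
        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad RX buffer alignment 0x%"
                      HWADDR_PRIx "\n", __func__, value);
        return;
    }
    s->rx_ring = value;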
8
7
9
Signed-off-by: Cédric Le Goater <clg@kaod.org>
8
overflow_before_widen:
10
Message-id: 20200114103433.30534-4-clg@kaod.org
9
Potentially overflowing expression 1 << scale with type int
10
(32 bits, signed) is evaluated using 32-bit arithmetic, and
11
then used in a context that expects an expression of type
12
hwaddr (64 bits, unsigned).
13
14
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Acked-by: Eric Auger <eric.auger@redhat.com>
16
Message-id: 20201030144617.1535064-1-philmd@redhat.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
19
---
14
hw/net/ftgmac100.c | 13 +++++++++++++
20
hw/arm/smmuv3.c | 3 ++-
15
1 file changed, 13 insertions(+)
21
1 file changed, 2 insertions(+), 1 deletion(-)
16
22
17
diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
23
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
18
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/net/ftgmac100.c
25
--- a/hw/arm/smmuv3.c
20
+++ b/hw/net/ftgmac100.c
26
+++ b/hw/arm/smmuv3.c
21
@@ -XXX,XX +XXX,XX @@ typedef struct {
27
@@ -XXX,XX +XXX,XX @@
22
uint32_t des3;
23
} FTGMAC100Desc;
24
25
+#define FTGMAC100_DESC_ALIGNMENT 16
26
+
27
/*
28
* Specific RTL8211E MII Registers
29
*/
28
*/
30
@@ -XXX,XX +XXX,XX @@ static void ftgmac100_write(void *opaque, hwaddr addr,
29
31
s->itc = value;
30
#include "qemu/osdep.h"
32
break;
31
+#include "qemu/bitops.h"
33
case FTGMAC100_RXR_BADR: /* Ring buffer address */
32
#include "hw/irq.h"
34
+ if (!QEMU_IS_ALIGNED(value, FTGMAC100_DESC_ALIGNMENT)) {
33
#include "hw/sysbus.h"
35
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad RX buffer alignment 0x%"
34
#include "migration/vmstate.h"
36
+ HWADDR_PRIx "\n", __func__, value);
35
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
37
+ return;
36
scale = CMD_SCALE(cmd);
38
+ }
37
num = CMD_NUM(cmd);
39
+
38
ttl = CMD_TTL(cmd);
40
s->rx_ring = value;
39
- num_pages = (num + 1) * (1 << (scale));
41
s->rx_descriptor = s->rx_ring;
40
+ num_pages = (num + 1) * BIT_ULL(scale);
42
break;
41
}
43
@@ -XXX,XX +XXX,XX @@ static void ftgmac100_write(void *opaque, hwaddr addr,
42
44
break;
43
if (type == SMMU_CMD_TLBI_NH_VA) {
45
46
case FTGMAC100_NPTXR_BADR: /* Transmit buffer address */
47
+ if (!QEMU_IS_ALIGNED(value, FTGMAC100_DESC_ALIGNMENT)) {
48
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad TX buffer alignment 0x%"
49
+ HWADDR_PRIx "\n", __func__, value);
50
+ return;
51
+ }
52
s->tx_ring = value;
53
s->tx_descriptor = s->tx_ring;
54
break;
55
--
44
--
56
2.20.1
45
2.20.1
57
46
58
47
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
2
2
3
Provide a temporary device_legacy_reset function doing what
3
When booting a CPU with EL3 using the -kernel flag, set up CPTR_EL3 so
4
device_reset does, to prepare for the transition to the Resettable
4
that SVE will not trap to EL3.
5
API.
6
5
7
All occurrences of device_reset in the code tree are also replaced
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
8
by device_legacy_reset.
9
10
The new resettable API has different prototype and semantics
11
(resetting child buses as well as the specified device). Subsequent
12
commits will make the changeover for each call site individually; once
13
that is complete device_legacy_reset() will be removed.
14
15
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Acked-by: David Gibson <david@gibson.dropbear.id.au>
8
Message-id: 20201030151541.11976-1-remi@remlab.net
19
Acked-by: Cornelia Huck <cohuck@redhat.com>
20
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
21
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
22
Message-id: 20200123132823.1117486-2-damien.hedde@greensocs.com
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
10
---
25
include/hw/qdev-core.h | 4 ++--
11
hw/arm/boot.c | 3 +++
26
hw/audio/intel-hda.c | 2 +-
12
1 file changed, 3 insertions(+)
27
hw/core/qdev.c | 6 +++---
28
hw/hyperv/hyperv.c | 2 +-
29
hw/i386/microvm.c | 2 +-
30
hw/i386/pc.c | 2 +-
31
hw/ide/microdrive.c | 8 ++++----
32
hw/intc/spapr_xive.c | 2 +-
33
hw/ppc/pnv_psi.c | 4 ++--
34
hw/ppc/spapr_pci.c | 2 +-
35
hw/ppc/spapr_vio.c | 2 +-
36
hw/s390x/s390-pci-inst.c | 2 +-
37
hw/scsi/vmw_pvscsi.c | 2 +-
38
hw/sd/omap_mmc.c | 2 +-
39
hw/sd/pl181.c | 2 +-
40
15 files changed, 22 insertions(+), 22 deletions(-)
41
13
42
diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
14
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
43
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
44
--- a/include/hw/qdev-core.h
16
--- a/hw/arm/boot.c
45
+++ b/include/hw/qdev-core.h
17
+++ b/hw/arm/boot.c
46
@@ -XXX,XX +XXX,XX @@ char *qdev_get_own_fw_dev_path_from_handler(BusState *bus, DeviceState *dev);
18
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
47
void qdev_machine_init(void);
19
if (cpu_isar_feature(aa64_mte, cpu)) {
48
20
env->cp15.scr_el3 |= SCR_ATA;
49
/**
21
}
50
- * @device_reset
22
+ if (cpu_isar_feature(aa64_sve, cpu)) {
51
+ * device_legacy_reset:
23
+ env->cp15.cptr_el[3] |= CPTR_EZ;
52
*
24
+ }
53
* Reset a single device (by calling the reset method).
25
/* AArch64 kernels never boot in secure mode */
54
*/
26
assert(!info->secure_boot);
55
-void device_reset(DeviceState *dev);
27
/* This hook is only supported for AArch32 currently:
56
+void device_legacy_reset(DeviceState *dev);
57
58
void device_class_set_props(DeviceClass *dc, Property *props);
59
60
diff --git a/hw/audio/intel-hda.c b/hw/audio/intel-hda.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/hw/audio/intel-hda.c
63
+++ b/hw/audio/intel-hda.c
64
@@ -XXX,XX +XXX,XX @@ static void intel_hda_reset(DeviceState *dev)
65
QTAILQ_FOREACH(kid, &d->codecs.qbus.children, sibling) {
66
DeviceState *qdev = kid->child;
67
cdev = HDA_CODEC_DEVICE(qdev);
68
- device_reset(DEVICE(cdev));
69
+ device_legacy_reset(DEVICE(cdev));
70
d->state_sts |= (1 << cdev->cad);
71
}
72
intel_hda_update_irq(d);
73
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
74
index XXXXXXX..XXXXXXX 100644
75
--- a/hw/core/qdev.c
76
+++ b/hw/core/qdev.c
77
@@ -XXX,XX +XXX,XX @@ HotplugHandler *qdev_get_hotplug_handler(DeviceState *dev)
78
79
static int qdev_reset_one(DeviceState *dev, void *opaque)
80
{
81
- device_reset(dev);
82
+ device_legacy_reset(dev);
83
84
return 0;
85
}
86
@@ -XXX,XX +XXX,XX @@ static void device_set_realized(Object *obj, bool value, Error **errp)
87
}
88
}
89
if (dev->hotplugged) {
90
- device_reset(dev);
91
+ device_legacy_reset(dev);
92
}
93
dev->pending_deleted_event = false;
94
95
@@ -XXX,XX +XXX,XX @@ void device_class_set_parent_unrealize(DeviceClass *dc,
96
dc->unrealize = dev_unrealize;
97
}
98
99
-void device_reset(DeviceState *dev)
100
+void device_legacy_reset(DeviceState *dev)
101
{
102
DeviceClass *klass = DEVICE_GET_CLASS(dev);
103
104
diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/hw/hyperv/hyperv.c
107
+++ b/hw/hyperv/hyperv.c
108
@@ -XXX,XX +XXX,XX @@ void hyperv_synic_reset(CPUState *cs)
109
SynICState *synic = get_synic(cs);
110
111
if (synic) {
112
- device_reset(DEVICE(synic));
113
+ device_legacy_reset(DEVICE(synic));
114
}
115
}
116
117
diff --git a/hw/i386/microvm.c b/hw/i386/microvm.c
118
index XXXXXXX..XXXXXXX 100644
119
--- a/hw/i386/microvm.c
120
+++ b/hw/i386/microvm.c
121
@@ -XXX,XX +XXX,XX @@ static void microvm_machine_reset(MachineState *machine)
122
cpu = X86_CPU(cs);
123
124
if (cpu->apic_state) {
125
- device_reset(cpu->apic_state);
126
+ device_legacy_reset(cpu->apic_state);
127
}
128
}
129
}
130
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
131
index XXXXXXX..XXXXXXX 100644
132
--- a/hw/i386/pc.c
133
+++ b/hw/i386/pc.c
134
@@ -XXX,XX +XXX,XX @@ static void pc_machine_reset(MachineState *machine)
135
cpu = X86_CPU(cs);
136
137
if (cpu->apic_state) {
138
- device_reset(cpu->apic_state);
139
+ device_legacy_reset(cpu->apic_state);
140
}
141
}
142
}
143
diff --git a/hw/ide/microdrive.c b/hw/ide/microdrive.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/hw/ide/microdrive.c
146
+++ b/hw/ide/microdrive.c
147
@@ -XXX,XX +XXX,XX @@ static void md_attr_write(PCMCIACardState *card, uint32_t at, uint8_t value)
148
case 0x00:    /* Configuration Option Register */
149
s->opt = value & 0xcf;
150
if (value & OPT_SRESET) {
151
- device_reset(DEVICE(s));
152
+ device_legacy_reset(DEVICE(s));
153
}
154
md_interrupt_update(s);
155
break;
156
@@ -XXX,XX +XXX,XX @@ static void md_common_write(PCMCIACardState *card, uint32_t at, uint16_t value)
157
case 0xe:    /* Device Control */
158
s->ctrl = value;
159
if (value & CTRL_SRST) {
160
- device_reset(DEVICE(s));
161
+ device_legacy_reset(DEVICE(s));
162
}
163
md_interrupt_update(s);
164
break;
165
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_attach(PCMCIACardState *card)
166
md->attr_base = pcc->cis[0x74] | (pcc->cis[0x76] << 8);
167
md->io_base = 0x0;
168
169
- device_reset(DEVICE(md));
170
+ device_legacy_reset(DEVICE(md));
171
md_interrupt_update(md);
172
173
return 0;
174
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_detach(PCMCIACardState *card)
175
{
176
MicroDriveState *md = MICRODRIVE(card);
177
178
- device_reset(DEVICE(md));
179
+ device_legacy_reset(DEVICE(md));
180
return 0;
181
}
182
183
diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
184
index XXXXXXX..XXXXXXX 100644
185
--- a/hw/intc/spapr_xive.c
186
+++ b/hw/intc/spapr_xive.c
187
@@ -XXX,XX +XXX,XX @@ static target_ulong h_int_reset(PowerPCCPU *cpu,
188
return H_PARAMETER;
189
}
190
191
- device_reset(DEVICE(xive));
192
+ device_legacy_reset(DEVICE(xive));
193
194
if (kvm_irqchip_in_kernel()) {
195
Error *local_err = NULL;
196
diff --git a/hw/ppc/pnv_psi.c b/hw/ppc/pnv_psi.c
197
index XXXXXXX..XXXXXXX 100644
198
--- a/hw/ppc/pnv_psi.c
199
+++ b/hw/ppc/pnv_psi.c
200
@@ -XXX,XX +XXX,XX @@ static void pnv_psi_reset(DeviceState *dev)
201
202
static void pnv_psi_reset_handler(void *dev)
203
{
204
- device_reset(DEVICE(dev));
205
+ device_legacy_reset(DEVICE(dev));
206
}
207
208
static void pnv_psi_realize(DeviceState *dev, Error **errp)
209
@@ -XXX,XX +XXX,XX @@ static void pnv_psi_p9_mmio_write(void *opaque, hwaddr addr,
210
break;
211
case PSIHB9_INTERRUPT_CONTROL:
212
if (val & PSIHB9_IRQ_RESET) {
213
- device_reset(DEVICE(&psi9->source));
214
+ device_legacy_reset(DEVICE(&psi9->source));
215
}
216
psi->regs[reg] = val;
217
break;
218
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
219
index XXXXXXX..XXXXXXX 100644
220
--- a/hw/ppc/spapr_pci.c
221
+++ b/hw/ppc/spapr_pci.c
222
@@ -XXX,XX +XXX,XX @@ static int spapr_phb_children_reset(Object *child, void *opaque)
223
DeviceState *dev = (DeviceState *) object_dynamic_cast(child, TYPE_DEVICE);
224
225
if (dev) {
226
- device_reset(dev);
227
+ device_legacy_reset(dev);
228
}
229
230
return 0;
231
diff --git a/hw/ppc/spapr_vio.c b/hw/ppc/spapr_vio.c
232
index XXXXXXX..XXXXXXX 100644
233
--- a/hw/ppc/spapr_vio.c
234
+++ b/hw/ppc/spapr_vio.c
235
@@ -XXX,XX +XXX,XX @@ int spapr_vio_send_crq(SpaprVioDevice *dev, uint8_t *crq)
236
static void spapr_vio_quiesce_one(SpaprVioDevice *dev)
237
{
238
if (dev->tcet) {
239
- device_reset(DEVICE(dev->tcet));
240
+ device_legacy_reset(DEVICE(dev->tcet));
241
}
242
free_crq(dev);
243
}
244
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
245
index XXXXXXX..XXXXXXX 100644
246
--- a/hw/s390x/s390-pci-inst.c
247
+++ b/hw/s390x/s390-pci-inst.c
248
@@ -XXX,XX +XXX,XX @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra)
249
stw_p(&ressetpci->hdr.rsp, CLP_RC_SETPCIFN_FHOP);
250
goto out;
251
}
252
- device_reset(DEVICE(pbdev));
253
+ device_legacy_reset(DEVICE(pbdev));
254
pbdev->fh &= ~FH_MASK_ENABLE;
255
pbdev->state = ZPCI_FS_DISABLED;
256
stl_p(&ressetpci->fh, pbdev->fh);
257
diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
258
index XXXXXXX..XXXXXXX 100644
259
--- a/hw/scsi/vmw_pvscsi.c
260
+++ b/hw/scsi/vmw_pvscsi.c
261
@@ -XXX,XX +XXX,XX @@ pvscsi_on_cmd_reset_device(PVSCSIState *s)
262
263
if (sdev != NULL) {
264
s->resetting++;
265
- device_reset(&sdev->qdev);
266
+ device_legacy_reset(&sdev->qdev);
267
s->resetting--;
268
return PVSCSI_COMMAND_PROCESSING_SUCCEEDED;
269
}
270
diff --git a/hw/sd/omap_mmc.c b/hw/sd/omap_mmc.c
271
index XXXXXXX..XXXXXXX 100644
272
--- a/hw/sd/omap_mmc.c
273
+++ b/hw/sd/omap_mmc.c
274
@@ -XXX,XX +XXX,XX @@ void omap_mmc_reset(struct omap_mmc_s *host)
275
* into any bus, and we must reset it manually. When omap_mmc is
276
* QOMified this must move into the QOM reset function.
277
*/
278
- device_reset(DEVICE(host->card));
279
+ device_legacy_reset(DEVICE(host->card));
280
}
281
282
static uint64_t omap_mmc_read(void *opaque, hwaddr offset,
283
diff --git a/hw/sd/pl181.c b/hw/sd/pl181.c
284
index XXXXXXX..XXXXXXX 100644
285
--- a/hw/sd/pl181.c
286
+++ b/hw/sd/pl181.c
287
@@ -XXX,XX +XXX,XX @@ static void pl181_reset(DeviceState *d)
288
/* Since we're still using the legacy SD API the card is not plugged
289
* into any bus, and we must reset it manually.
290
*/
291
- device_reset(DEVICE(s->card));
292
+ device_legacy_reset(DEVICE(s->card));
293
}
294
295
static void pl181_init(Object *obj)
296
--
28
--
297
2.20.1
29
2.20.1
298
30
299
31
1
From: Andrew Jones <drjones@redhat.com>
1
From: AlexChen <alex.chen@huawei.com>
2
2
3
kvm-no-adjvtime is a KVM specific CPU property and a first of its
3
In omap_lcd_interrupts(), the pointer omap_lcd is dereferenced before
4
kind. To accommodate it we also add kvm_arm_add_vcpu_properties()
4
being checked for validity, which may lead to a NULL pointer dereference.
5
and a KVM specific CPU properties description to the CPU features
5
So move the assignment to surface to after the check that omap_lcd is valid,
6
document.
6
and move surface_bits_per_pixel(surface) to after the surface assignment.
7
7
8
Signed-off-by: Andrew Jones <drjones@redhat.com>
8
Reported-by: Euler Robot <euler.robot@huawei.com>
9
Message-id: 20200120101023.16030-7-drjones@redhat.com
9
Signed-off-by: AlexChen <alex.chen@huawei.com>
10
Message-id: 5F9CDB8A.9000001@huawei.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
include/hw/arm/virt.h | 1 +
14
hw/display/omap_lcdc.c | 10 +++++++---
14
target/arm/kvm_arm.h | 11 ++++++++++
15
1 file changed, 7 insertions(+), 3 deletions(-)
15
hw/arm/virt.c | 8 ++++++++
16
target/arm/cpu.c | 2 ++
17
target/arm/cpu64.c | 1 +
18
target/arm/kvm.c | 28 +++++++++++++++++++++++++
19
target/arm/monitor.c | 1 +
20
tests/qtest/arm-cpu-features.c | 4 ++++
21
docs/arm-cpu-features.rst | 37 +++++++++++++++++++++++++++++++++-
22
9 files changed, 92 insertions(+), 1 deletion(-)
23
16
24
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
17
diff --git a/hw/display/omap_lcdc.c b/hw/display/omap_lcdc.c
25
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
26
--- a/include/hw/arm/virt.h
19
--- a/hw/display/omap_lcdc.c
27
+++ b/include/hw/arm/virt.h
20
+++ b/hw/display/omap_lcdc.c
28
@@ -XXX,XX +XXX,XX @@ typedef struct {
21
@@ -XXX,XX +XXX,XX @@ static void omap_lcd_interrupts(struct omap_lcd_panel_s *s)
29
bool smbios_old_sys_ver;
22
static void omap_update_display(void *opaque)
30
bool no_highmem_ecam;
31
bool no_ged; /* Machines < 4.2 has no support for ACPI GED device */
32
+ bool kvm_no_adjvtime;
33
} VirtMachineClass;
34
35
typedef struct {
36
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/kvm_arm.h
39
+++ b/target/arm/kvm_arm.h
40
@@ -XXX,XX +XXX,XX @@ void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map);
41
*/
42
void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu);
43
44
+/**
45
+ * kvm_arm_add_vcpu_properties:
46
+ * @obj: The CPU object to add the properties to
47
+ *
48
+ * Add all KVM specific CPU properties to the CPU object. These
49
+ * are the CPU properties with "kvm-" prefixed names.
50
+ */
51
+void kvm_arm_add_vcpu_properties(Object *obj);
52
+
53
/**
54
* kvm_arm_aarch32_supported:
55
* @cs: CPUState
56
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
57
cpu->host_cpu_probe_failed = true;
58
}
59
60
+static inline void kvm_arm_add_vcpu_properties(Object *obj) {}
61
+
62
static inline bool kvm_arm_aarch32_supported(CPUState *cs)
63
{
23
{
64
return false;
24
struct omap_lcd_panel_s *omap_lcd = (struct omap_lcd_panel_s *) opaque;
65
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
25
- DisplaySurface *surface = qemu_console_surface(omap_lcd->con);
66
index XXXXXXX..XXXXXXX 100644
26
+ DisplaySurface *surface;
67
--- a/hw/arm/virt.c
27
draw_line_func draw_line;
68
+++ b/hw/arm/virt.c
28
int size, height, first, last;
69
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
29
int width, linesize, step, bpp, frame_offset;
70
}
30
hwaddr frame_base;
71
}
31
72
32
- if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable ||
73
+ if (vmc->kvm_no_adjvtime &&
33
- !surface_bits_per_pixel(surface)) {
74
+ object_property_find(cpuobj, "kvm-no-adjvtime", NULL)) {
34
+ if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable) {
75
+ object_property_set_bool(cpuobj, true, "kvm-no-adjvtime", NULL);
76
+ }
77
+
78
if (vmc->no_pmu && object_property_find(cpuobj, "pmu", NULL)) {
79
object_property_set_bool(cpuobj, false, "pmu", NULL);
80
}
81
@@ -XXX,XX +XXX,XX @@ DEFINE_VIRT_MACHINE_AS_LATEST(5, 0)
82
83
static void virt_machine_4_2_options(MachineClass *mc)
84
{
85
+ VirtMachineClass *vmc = VIRT_MACHINE_CLASS(OBJECT_CLASS(mc));
86
+
87
virt_machine_5_0_options(mc);
88
compat_props_add(mc->compat_props, hw_compat_4_2, hw_compat_4_2_len);
89
+ vmc->kvm_no_adjvtime = true;
90
}
91
DEFINE_VIRT_MACHINE(4, 2)
92
93
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/cpu.c
96
+++ b/target/arm/cpu.c
97
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
98
99
if (kvm_enabled()) {
100
kvm_arm_set_cpu_features_from_host(cpu);
101
+ kvm_arm_add_vcpu_properties(obj);
102
} else {
103
cortex_a15_initfn(obj);
104
105
@@ -XXX,XX +XXX,XX @@ static void arm_host_initfn(Object *obj)
106
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
107
aarch64_add_sve_properties(obj);
108
}
109
+ kvm_arm_add_vcpu_properties(obj);
110
arm_cpu_post_init(obj);
111
}
112
113
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/target/arm/cpu64.c
116
+++ b/target/arm/cpu64.c
117
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
118
119
if (kvm_enabled()) {
120
kvm_arm_set_cpu_features_from_host(cpu);
121
+ kvm_arm_add_vcpu_properties(obj);
122
} else {
123
uint64_t t;
124
uint32_t u;
125
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
126
index XXXXXXX..XXXXXXX 100644
127
--- a/target/arm/kvm.c
128
+++ b/target/arm/kvm.c
129
@@ -XXX,XX +XXX,XX @@
130
#include "qemu/timer.h"
131
#include "qemu/error-report.h"
132
#include "qemu/main-loop.h"
133
+#include "qom/object.h"
134
+#include "qapi/error.h"
135
#include "sysemu/sysemu.h"
136
#include "sysemu/kvm.h"
137
#include "sysemu/kvm_int.h"
138
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
139
env->features = arm_host_cpu_features.features;
140
}
141
142
+static bool kvm_no_adjvtime_get(Object *obj, Error **errp)
143
+{
144
+ return !ARM_CPU(obj)->kvm_adjvtime;
145
+}
146
+
147
+static void kvm_no_adjvtime_set(Object *obj, bool value, Error **errp)
148
+{
149
+ ARM_CPU(obj)->kvm_adjvtime = !value;
150
+}
151
+
152
+/* KVM VCPU properties should be prefixed with "kvm-". */
153
+void kvm_arm_add_vcpu_properties(Object *obj)
154
+{
155
+ if (!kvm_enabled()) {
156
+ return;
35
+ return;
157
+ }
36
+ }
158
+
37
+
159
+ ARM_CPU(obj)->kvm_adjvtime = true;
38
+ surface = qemu_console_surface(omap_lcd->con);
160
+ object_property_add_bool(obj, "kvm-no-adjvtime", kvm_no_adjvtime_get,
39
+ if (!surface_bits_per_pixel(surface)) {
161
+ kvm_no_adjvtime_set, &error_abort);
162
+ object_property_set_description(obj, "kvm-no-adjvtime",
163
+ "Set on to disable the adjustment of "
164
+ "the virtual counter. VM stopped time "
165
+ "will be counted.", &error_abort);
166
+}
167
+
168
bool kvm_arm_pmu_supported(CPUState *cpu)
169
{
170
return kvm_check_extension(cpu->kvm_state, KVM_CAP_ARM_PMU_V3);
171
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
172
index XXXXXXX..XXXXXXX 100644
173
--- a/target/arm/monitor.c
174
+++ b/target/arm/monitor.c
175
@@ -XXX,XX +XXX,XX @@ static const char *cpu_model_advertised_features[] = {
176
"sve128", "sve256", "sve384", "sve512",
177
"sve640", "sve768", "sve896", "sve1024", "sve1152", "sve1280",
178
"sve1408", "sve1536", "sve1664", "sve1792", "sve1920", "sve2048",
179
+ "kvm-no-adjvtime",
180
NULL
181
};
182
183
diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
184
index XXXXXXX..XXXXXXX 100644
185
--- a/tests/qtest/arm-cpu-features.c
186
+++ b/tests/qtest/arm-cpu-features.c
187
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
188
assert_has_feature_enabled(qts, "cortex-a15", "pmu");
189
assert_has_not_feature(qts, "cortex-a15", "aarch64");
190
191
+ assert_has_not_feature(qts, "max", "kvm-no-adjvtime");
192
+
193
if (g_str_equal(qtest_get_arch(), "aarch64")) {
194
assert_has_feature_enabled(qts, "max", "aarch64");
195
assert_has_feature_enabled(qts, "max", "sve");
196
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
197
return;
40
return;
198
}
41
}
199
200
+ assert_has_feature_disabled(qts, "host", "kvm-no-adjvtime");
201
+
202
if (g_str_equal(qtest_get_arch(), "aarch64")) {
203
bool kvm_supports_sve;
204
char max_name[8], name[8];
205
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
206
index XXXXXXX..XXXXXXX 100644
207
--- a/docs/arm-cpu-features.rst
208
+++ b/docs/arm-cpu-features.rst
209
@@ -XXX,XX +XXX,XX @@ supporting the feature or only supporting the feature under certain
210
configurations. For example, the `aarch64` CPU feature, which, when
211
disabled, enables the optional AArch32 CPU feature, is only supported
212
when using the KVM accelerator and when running on a host CPU type that
213
-supports the feature.
214
+supports the feature. While `aarch64` currently only works with KVM,
215
+it could work with TCG. CPU features that are specific to KVM are
216
+prefixed with "kvm-" and are described in "KVM VCPU Features".
217
218
CPU Feature Probing
219
===================
220
@@ -XXX,XX +XXX,XX @@ disabling many SVE vector lengths would be quite verbose, the `sve<N>` CPU
221
properties have special semantics (see "SVE CPU Property Parsing
222
Semantics").
223
224
+KVM VCPU Features
225
+=================
226
+
227
+KVM VCPU features are CPU features that are specific to KVM, such as
228
+paravirt features or features that enable CPU virtualization extensions.
229
+The features' CPU properties are only available when KVM is enabled and
230
+are named with the prefix "kvm-". KVM VCPU features may be probed,
231
+enabled, and disabled in the same way as other CPU features. Below is
232
+the list of KVM VCPU features and their descriptions.
233
+
234
+ kvm-no-adjvtime By default kvm-no-adjvtime is disabled. This
235
+ means that by default the virtual time
236
+ adjustment is enabled (vtime is *not not*
237
+ adjusted).
238
+
239
+ When virtual time adjustment is enabled each
240
+ time the VM transitions back to running state
241
+ the VCPU's virtual counter is updated to ensure
242
+ stopped time is not counted. This avoids time
243
+ jumps surprising guest OSes and applications,
244
+ as long as they use the virtual counter for
245
+ timekeeping. However it has the side effect of
246
+ the virtual and physical counters diverging.
247
+ All timekeeping based on the virtual counter
248
+ will appear to lag behind any timekeeping that
249
+ does not subtract VM stopped time. The guest
250
+ may resynchronize its virtual counter with
251
+ other time sources as needed.
252
+
253
+ Enable kvm-no-adjvtime to disable virtual time
254
+ adjustment, also restoring the legacy (pre-5.0)
255
+ behavior.
256
+
257
SVE CPU Properties
258
==================
259
42
260
--
43
--
261
2.20.1
44
2.20.1
262
45
263
46
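The kvm-no-adjvtime property described above is enabled like any other CPU
feature. As a rough illustration only (the machine and accelerator options
shown are not part of the patch), on a KVM-capable Arm host it could be
turned on with:

  qemu-system-aarch64 -machine virt -accel kvm -cpu host,kvm-no-adjvtime=on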
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
From: AlexChen <alex.chen@huawei.com>
2
2
3
This commit makes use of the resettable API to reset the device being
3
In exynos4210_fimd_update(), the pointer s is dereferenced before
4
hotplugged when it is realized. It also ensures the device is put in a reset
4
being checked for validity, which may lead to a NULL pointer dereference.
5
state coherent with the parent it is plugged into.
5
So move the assignment to global_width to after the check that s is valid.
6
6
7
Note that there is a difference in the reset. Instead of resetting
7
Reported-by: Euler Robot <euler.robot@huawei.com>
8
only the hotplugged device, we also reset its subtree (switch to
8
Signed-off-by: Alex Chen <alex.chen@huawei.com>
9
resettable API). This is not expected to be a problem because
10
sub-buses are just realized too. If a hotplugged device has any
11
sub-buses it is logical to reset them too at this point.
12
13
The recently added should_be_hidden and PCI's partially_hotplugged
14
mechanisms do not interfere with realize operation:
15
+ In the should_be_hidden use case, device creation is
16
delayed.
17
+ The partially_hotplugged mechanism prevents a device from being
18
unplugged and unrealized from the qdev point of view.
19
20
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
22
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
23
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Message-id: 5F9F8D88.9030102@huawei.com
24
Message-id: 20200123132823.1117486-8-damien.hedde@greensocs.com
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
12
---
27
include/hw/resettable.h | 11 +++++++++++
13
hw/display/exynos4210_fimd.c | 4 +++-
28
hw/core/qdev.c | 15 ++++++++++++++-
14
1 file changed, 3 insertions(+), 1 deletion(-)
29
2 files changed, 25 insertions(+), 1 deletion(-)
30
15
31
diff --git a/include/hw/resettable.h b/include/hw/resettable.h
16
diff --git a/hw/display/exynos4210_fimd.c b/hw/display/exynos4210_fimd.c
32
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
33
--- a/include/hw/resettable.h
18
--- a/hw/display/exynos4210_fimd.c
34
+++ b/include/hw/resettable.h
19
+++ b/hw/display/exynos4210_fimd.c
35
@@ -XXX,XX +XXX,XX @@ struct ResettableState {
20
@@ -XXX,XX +XXX,XX @@ static void exynos4210_fimd_update(void *opaque)
36
bool exit_phase_in_progress;
21
bool blend = false;
37
};
22
uint8_t *host_fb_addr;
38
23
bool is_dirty = false;
39
+/**
24
- const int global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
40
+ * resettable_state_clear:
25
+ int global_width;
41
+ * Clear the state. It puts the state to the initial (zeroed) state required
26
42
+ * to reuse an object. Typically used in realize step of base classes
27
if (!s || !s->console || !s->enabled ||
43
+ * implementing the interface.
28
surface_bits_per_pixel(qemu_console_surface(s->console)) == 0) {
44
+ */
29
return;
45
+static inline void resettable_state_clear(ResettableState *state)
30
}
46
+{
47
+ memset(state, 0, sizeof(ResettableState));
48
+}
49
+
31
+
50
/**
32
+ global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
51
* resettable_reset:
33
exynos4210_update_resolution(s);
52
* Trigger a reset on an object @obj of type @type. @obj must implement
34
surface = qemu_console_surface(s->console);
53
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/core/qdev.c
56
+++ b/hw/core/qdev.c
57
@@ -XXX,XX +XXX,XX @@ static void device_set_realized(Object *obj, bool value, Error **errp)
58
}
59
}
60
61
+ /*
62
+ * Clear the reset state, in case the object was previously unrealized
63
+ * with a dirty state.
64
+ */
65
+ resettable_state_clear(&dev->reset);
66
+
67
QLIST_FOREACH(bus, &dev->child_bus, sibling) {
68
object_property_set_bool(OBJECT(bus), true, "realized",
69
&local_err);
70
@@ -XXX,XX +XXX,XX @@ static void device_set_realized(Object *obj, bool value, Error **errp)
71
}
72
}
73
if (dev->hotplugged) {
74
- device_legacy_reset(dev);
75
+ /*
76
+ * Reset the device, as well as its subtree which, at this point,
77
+ * should be realized too.
78
+ */
79
+ resettable_assert_reset(OBJECT(dev), RESET_TYPE_COLD);
80
+ resettable_change_parent(OBJECT(dev), OBJECT(dev->parent_bus),
81
+ NULL);
82
+ resettable_release_reset(OBJECT(dev), RESET_TYPE_COLD);
83
}
84
dev->pending_deleted_event = false;
85
35
86
--
36
--
87
2.20.1
37
2.20.1
88
38
89
39
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to
2
armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el().
3
This is incorrect when the security state being queried is not the
4
current one, because arm_current_el() uses the current security state
5
to determine which of the banked CONTROL.nPRIV bits to look at.
6
The effect was that if (for instance) Secure state was in privileged
7
mode but Non-Secure was not then we would return the wrong MMU index.
2
8
3
In qdev_set_parent_bus(), when changing the parent bus of a
9
The only places where we are using this function in a way that could
4
realized device, if the source and destination buses are not in the
10
trigger this bug are for the stack loads during a v8M function-return
5
same reset state, some adaptations are required. This patch adds
11
and for the instruction fetch of a v8M SG insn.
6
needed call to resettable_change_parent() to make sure a device reset
7
state stays coherent with its parent bus.
8
12
9
The addition is a no-op if:
13
Fix the bug by expanding out the M-profile version of the
10
1. the device being parented is not realized.
14
arm_current_el() logic inline so it can use the passed in secstate
11
2. the device is realized, but both buses are not under reset.
15
rather than env->v7m.secure.
12
16
13
Case 2 means that as long as qdev_set_parent_bus() is called
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
during the machine realization procedure (which is before the
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
machine reset so nothing is in reset), it is a no-op.
19
Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
20
---
21
target/arm/m_helper.c | 3 ++-
22
1 file changed, 2 insertions(+), 1 deletion(-)
16
23
17
There are 52 call sites of qdev_set_parent_bus(). All but one fall
24
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
18
into the no-op case:
19
+ 29 trivial calls related to virtio (in hw/{s390x,display,virtio}/
20
{vhost,virtio}-xxx.c) to set a vdev(or vgpu) composing device
21
parent bus just before realizing the same vdev(vgpu).
22
+ hw/core/qdev.c: when creating a device in qdev_try_create()
23
+ hw/core/sysbus.c: when initializing a device in the sysbus
24
+ hw/i386/amd_iommu.c: before realizing AMDVIState/pci
25
+ hw/isa/piix4.c: before realizing PIIX4State/rtc
26
+ hw/misc/auxbus.c: when creating an AUXBus
27
+ hw/misc/auxbus.c: when creating an AUXBus child
28
+ hw/misc/macio/macio.c: when initializing a MACIOState child
29
+ hw/misc/macio/macio.c: before realizing NewWorldMacIOState/pmu
30
+ hw/misc/macio/macio.c: before realizing NewWorldMacIOState/cuda
31
+ hw/net/virtio-net.c: Used for migration when using the failover
32
mechanism to migrate a vfio-pci/net device. It is
33
a no-op because at this point the device is
34
already on the bus.
35
+ hw/pci-host/designware.c: before realizing DesignwarePCIEHost/root
36
+ hw/pci-host/gpex.c: before realizing GPEXHost/root
37
+ hw/pci-host/prep.c: when initializing PREPPCIState/pci_dev
38
+ hw/pci-host/q35.c: before realizing Q35PCIHost/mch
39
+ hw/pci-host/versatile.c: when initializing PCIVPBState/pci_dev
40
+ hw/pci-host/xilinx-pcie.c: before realizing XilinxPCIEHost/root
41
+ hw/s390x/event-facility.c: when creating SCLPEventFacility/
42
TYPE_SCLP_QUIESCE
43
+ hw/s390x/event-facility.c: ditto with SCLPEventFacility/
44
TYPE_SCLP_CPU_HOTPLUG
45
+ hw/s390x/sclp.c: Not trivial because it is called on a SCLPDevice
46
just after realizing it. Ok because at this point the destination
47
bus (sysbus) is not in reset; the realize step is before the
48
machine reset.
49
+ hw/sd/core.c: Not OK. Used in sdbus_reparent_card(). See below.
50
+ hw/ssi/ssi.c: Used to put spi slave on spi bus and connect the cs
51
line in ssi_auto_connect_slave(). Ok because this function is only
52
used in the realize step in hw/ssi/aspeed_smc.c, hw/ssi/imx_spi.c,
53
hw/ssi/mss-spi.c, hw/ssi/xilinx_spi.c and hw/ssi/xilinx_spips.c.
54
+ hw/xen/xen-legacy-backend.c: when creating a XenLegacyDevice device
55
+ qdev-monitor.c: in device hotplug creation procedure before realize
56
57
Note that this commit alone will have no effect, right now there is no
58
use of resettable API to reset anything. So a bus will never be tagged
59
as in-reset by this same API.
60
61
The one place where a side effect will occur is in hw/sd/core.c in
62
sdbus_reparent_card(). This function is only used in the raspi machines,
63
including during the sysbus reset procedure. This case will be
64
carefully handled when doing the multiple-phase reset transition.
65
66
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
67
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
68
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
69
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
70
Message-id: 20200123132823.1117486-7-damien.hedde@greensocs.com
71
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
72
---
73
hw/core/qdev.c | 16 +++++++++++-----
74
1 file changed, 11 insertions(+), 5 deletions(-)
75
76
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
77
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
78
--- a/hw/core/qdev.c
26
--- a/target/arm/m_helper.c
79
+++ b/hw/core/qdev.c
27
+++ b/target/arm/m_helper.c
80
@@ -XXX,XX +XXX,XX @@ static void bus_add_child(BusState *bus, DeviceState *child)
28
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
81
29
/* Return the MMU index for a v7M CPU in the specified security state */
82
void qdev_set_parent_bus(DeviceState *dev, BusState *bus)
30
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
83
{
31
{
84
- bool replugging = dev->parent_bus != NULL;
32
- bool priv = arm_current_el(env) != 0;
85
+ BusState *old_parent_bus = dev->parent_bus;
33
+ bool priv = arm_v7m_is_handler_mode(env) ||
86
34
+ !(env->v7m.control[secstate] & 1);
87
- if (replugging) {
35
88
+ if (old_parent_bus) {
36
return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
89
trace_qdev_update_parent_bus(dev, object_get_typename(OBJECT(dev)),
90
- dev->parent_bus, object_get_typename(OBJECT(dev->parent_bus)),
91
+ old_parent_bus, object_get_typename(OBJECT(old_parent_bus)),
92
OBJECT(bus), object_get_typename(OBJECT(bus)));
93
/*
94
* Keep a reference to the device while it's not plugged into
95
* any bus, to avoid it potentially evaporating when it is
96
* dereffed in bus_remove_child().
97
+ * Also keep the ref of the parent bus until the end, so that
98
+ * we can safely call resettable_change_parent() below.
99
*/
100
object_ref(OBJECT(dev));
101
bus_remove_child(dev->parent_bus, dev);
102
- object_unref(OBJECT(dev->parent_bus));
103
}
104
dev->parent_bus = bus;
105
object_ref(OBJECT(bus));
106
bus_add_child(bus, dev);
107
- if (replugging) {
108
+ if (dev->realized) {
109
+ resettable_change_parent(OBJECT(dev), OBJECT(bus),
110
+ OBJECT(old_parent_bus));
111
+ }
112
+ if (old_parent_bus) {
113
+ object_unref(OBJECT(old_parent_bus));
114
object_unref(OBJECT(dev));
115
}
116
}
37
}
117
--
38
--
118
2.20.1
39
2.20.1
119
40
120
41
1
From: Damien Hedde <damien.hedde@greensocs.com>
1
On some hosts (eg Ubuntu Bionic) pkg-config returns a set of
2
libraries for gio-2.0 which don't actually work when compiling
3
statically. (Specifically, the returned library string includes
4
-lmount, but not -lblkid which -lmount depends upon, so linking
5
fails due to missing symbols.)
2
6
3
Replace the deprecated qbus_reset_all with resettable_cold_reset_fn for
7
Check that the libraries work, and don't enable gio if they don't,
4
the sysbus reset registration.
8
in the same way we do for gnutls.
5
9
6
Apart from the raspi machines, this does not impact the behavior
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
because:
11
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
8
+ at this point resettable just calls the old reset methods of devices
9
and buses in the same order as qdev/qbus.
10
+ resettable handlers registered with qemu_register_reset are
11
serialized; there is no interleaving.
12
+ any explicit calls to the legacy reset API (device_reset or
13
qdev/qbus_reset) inside this reset handler will not be masked out
14
by resettable mechanism; they do not go through resettable api.
15
16
For the raspi machines, during the sysbus reset the sd-card is not
17
reset twice anymore but only once. This is a consequence of switching
18
both sysbus reset and changing parent to resettable; it detects the
19
second reset is not needed. This has no impact on the state after
20
reset; the sd-card reset method only resets local state and queries
21
information from the block backend.
22
23
The raspi reset change can be observed by using the following command
24
(a reset will occur, then do Ctrl-C to end qemu; no firmware is
25
given here).
26
qemu-system-aarch64 -M raspi3 \
27
-trace resettable_phase_hold_exec \
28
-trace qdev_update_parent_bus \
29
-trace resettable_change_parent \
30
-trace qdev_reset -trace qbus_reset
31
32
Before the patch, the qdev/qbus_reset traces show when reset methods are
33
called. After the patch, the resettable_phase_hold_exec traces show when reset
34
methods are called.
35
36
The traced reset order of the raspi3 is listed below. I've added empty
37
lines and the tree structure.
38
39
+->bcm2835-peripherals reset
40
|
41
| +->sd-card reset
42
| +->sd-bus reset
43
+->bcm2835_gpio reset
44
| -> dev_update_parent_bus (move the sd-card on the sdhci-bus)
45
| -> resettable_change_parent
46
|
47
+->bcm2835-dma reset
48
|
49
| +->bcm2835-sdhost-bus reset
50
+->bcm2835-sdhost reset
51
|
52
| +->sd-card (reset ONLY BEFORE THE PATCH)
53
| +->sdhci-bus reset
54
+->generic-sdhci reset
55
|
56
+->bcm2835-rng reset
57
+->bcm2835-property reset
58
+->bcm2835-fb reset
59
+->bcm2835-mbox reset
60
+->bcm2835-aux reset
61
+->pl011 reset
62
+->bcm2835-ic reset
63
+->bcm2836-control reset
64
System reset
65
66
In both cases, the sd-card is reset (being on bcm2835_gpio/sd-bus) then moved
67
to generic-sdhci/sdhci-bus by the bcm2835_gpio reset method.
68
69
Before the patch, it is then reset again being part of generic-sdhci/sdhci-bus.
70
After the patch, it is considered again for reset but its reset method is not
71
called because it is already flagged as reset.
72
73
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
74
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
75
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
76
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
77
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Message-id: 20200928160402.7961-1-peter.maydell@linaro.org
78
Message-id: 20200123132823.1117486-11-damien.hedde@greensocs.com
79
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
80
---
14
---
81
vl.c | 10 +++++++++-
15
configure | 10 +++++++++-
82
1 file changed, 9 insertions(+), 1 deletion(-)
16
1 file changed, 9 insertions(+), 1 deletion(-)
83
17
84
diff --git a/vl.c b/vl.c
18
diff --git a/configure b/configure
85
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100755
86
--- a/vl.c
20
--- a/configure
87
+++ b/vl.c
21
+++ b/configure
88
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
22
@@ -XXX,XX +XXX,XX @@ if test "$static" = yes && test "$mingw32" = yes; then
89
23
fi
90
/* TODO: once all bus devices are qdevified, this should be done
24
91
* when bus is created by qdev.c */
25
if $pkg_config --atleast-version=$glib_req_ver gio-2.0; then
92
- qemu_register_reset(qbus_reset_all_fn, sysbus_get_default());
26
- gio=yes
93
+ /*
27
gio_cflags=$($pkg_config --cflags gio-2.0)
94
+ * TODO: If we had a main 'reset container' that the whole system
28
gio_libs=$($pkg_config --libs gio-2.0)
95
+ * lived in, we could reset that using the multi-phase reset
29
gdbus_codegen=$($pkg_config --variable=gdbus_codegen gio-2.0)
96
+ * APIs. For the moment, we just reset the sysbus, which will cause
30
if [ ! -x "$gdbus_codegen" ]; then
97
+ * all devices hanging off it (and all their child buses, recursively)
31
gdbus_codegen=
98
+ * to be reset. Note that this will *not* reset any Device objects
32
fi
99
+ * which are not attached to some part of the qbus tree!
33
+ # Check that the libraries actually work -- Ubuntu 18.04 ships
100
+ */
34
+ # with pkg-config --static --libs data for gio-2.0 that is missing
101
+ qemu_register_reset(resettable_cold_reset_fn, sysbus_get_default());
35
+ # -lblkid and will give a link error.
102
qemu_run_machine_init_done_notifiers();
36
+ write_c_skeleton
103
37
+ if compile_prog "" "gio_libs" ; then
104
if (rom_check_and_register_reset() != 0) {
38
+ gio=yes
39
+ else
40
+ gio=no
41
+ fi
42
else
43
gio=no
44
fi
105
--
45
--
106
2.20.1
46
2.20.1
107
47
108
48
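The pkg-config problem that the configure check above guards against can be
seen by hand on an affected host (e.g. Ubuntu 18.04); the exact output will
vary, but the point is that the static link line lists -lmount without the
-lblkid it depends on:

  pkg-config --static --libs gio-2.0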
1
From: Andrew Jeffery <andrew@aj.id.au>
1
In gicv3_init_cpuif() we copy the ARMCPU gicv3_maintenance_interrupt
2
into the GICv3CPUState struct's maintenance_irq field. This will
3
only work if the board happens to have already wired up the CPU
4
maintenance IRQ before the GIC was realized. Unfortunately this is
5
not the case for the 'virt' board, and so the value that gets copied
6
is NULL (since a qemu_irq is really a pointer to an IRQState struct
7
under the hood). The effect is that the CPU interface code never
8
actually raises the maintenance interrupt line.
2
9
3
The AST2600 includes a second cut-down version of the SD/MMC controller
10
Instead, since the GICv3CPUState has a pointer to the CPUState, make
4
found in the AST2500, named the eMMC controller. It's cut down in the
11
the dereference at the point where we want to raise the interrupt, to
5
sense that it only supports one slot rather than two, but it brings the
12
avoid an implicit requirement on board code to wire things up in a
6
total number of slots supported by the AST2600 to three.
13
particular order.
7
14
8
The existing code assumed that the SD controller always provided two
15
Reported-by: Jose Martins <josemartins90@gmail.com>
9
slots. Rework the SDHCI object to expose the number of slots as a
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
property to be set by the SoC configuration.
17
Message-id: 20201009153904.28529-1-peter.maydell@linaro.org
18
Reviewed-by: Luc Michel <luc@lmichel.fr>
19
---
20
include/hw/intc/arm_gicv3_common.h | 1 -
21
hw/intc/arm_gicv3_cpuif.c | 5 ++---
22
2 files changed, 2 insertions(+), 4 deletions(-)
11
23
12
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
24
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
13
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
14
Reviewed-by: Cédric Le Goater <clg@kaod.org>
15
Signed-off-by: Cédric Le Goater <clg@kaod.org>
16
Message-id: 20200114103433.30534-2-clg@kaod.org
17
[PMM: fixed up to use device_class_set_props()]
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
20
include/hw/sd/aspeed_sdhci.h | 1 +
21
hw/arm/aspeed.c | 2 +-
22
hw/arm/aspeed_ast2600.c | 2 ++
23
hw/arm/aspeed_soc.c | 2 ++
24
hw/sd/aspeed_sdhci.c | 11 +++++++++--
25
5 files changed, 15 insertions(+), 3 deletions(-)
26
27
diff --git a/include/hw/sd/aspeed_sdhci.h b/include/hw/sd/aspeed_sdhci.h
28
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
29
--- a/include/hw/sd/aspeed_sdhci.h
26
--- a/include/hw/intc/arm_gicv3_common.h
30
+++ b/include/hw/sd/aspeed_sdhci.h
27
+++ b/include/hw/intc/arm_gicv3_common.h
31
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedSDHCIState {
28
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
32
SysBusDevice parent;
29
qemu_irq parent_fiq;
33
30
qemu_irq parent_virq;
34
SDHCIState slots[ASPEED_SDHCI_NUM_SLOTS];
31
qemu_irq parent_vfiq;
35
+ uint8_t num_slots;
32
- qemu_irq maintenance_irq;
36
33
37
MemoryRegion iomem;
34
/* Redistributor */
38
qemu_irq irq;
35
uint32_t level; /* Current IRQ level */
39
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
36
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
40
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
41
--- a/hw/arm/aspeed.c
38
--- a/hw/intc/arm_gicv3_cpuif.c
42
+++ b/hw/arm/aspeed.c
39
+++ b/hw/intc/arm_gicv3_cpuif.c
43
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_init(MachineState *machine)
40
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
44
amc->i2c_init(bmc);
41
int irqlevel = 0;
45
}
42
int fiqlevel = 0;
46
43
int maintlevel = 0;
47
- for (i = 0; i < ARRAY_SIZE(bmc->soc.sdhci.slots); i++) {
44
+ ARMCPU *cpu = ARM_CPU(cs->cpu);
48
+ for (i = 0; i < bmc->soc.sdhci.num_slots; i++) {
45
49
SDHCIState *sdhci = &bmc->soc.sdhci.slots[i];
46
idx = hppvi_index(cs);
50
DriveInfo *dinfo = drive_get_next(IF_SD);
47
trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx);
51
BlockBackend *blk;
48
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
52
diff --git a/hw/arm/aspeed_ast2600.c b/hw/arm/aspeed_ast2600.c
49
53
index XXXXXXX..XXXXXXX 100644
50
qemu_set_irq(cs->parent_vfiq, fiqlevel);
54
--- a/hw/arm/aspeed_ast2600.c
51
qemu_set_irq(cs->parent_virq, irqlevel);
55
+++ b/hw/arm/aspeed_ast2600.c
52
- qemu_set_irq(cs->maintenance_irq, maintlevel);
56
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_init(Object *obj)
53
+ qemu_set_irq(cpu->gicv3_maintenance_interrupt, maintlevel);
57
sysbus_init_child_obj(obj, "sdc", OBJECT(&s->sdhci), sizeof(s->sdhci),
58
TYPE_ASPEED_SDHCI);
59
60
+ object_property_set_int(OBJECT(&s->sdhci), 2, "num-slots", &error_abort);
61
+
62
/* Init sd card slot class here so that they're under the correct parent */
63
for (i = 0; i < ASPEED_SDHCI_NUM_SLOTS; ++i) {
64
sysbus_init_child_obj(obj, "sdhci[*]", OBJECT(&s->sdhci.slots[i]),
65
diff --git a/hw/arm/aspeed_soc.c b/hw/arm/aspeed_soc.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/hw/arm/aspeed_soc.c
68
+++ b/hw/arm/aspeed_soc.c
69
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_init(Object *obj)
70
sysbus_init_child_obj(obj, "sdc", OBJECT(&s->sdhci), sizeof(s->sdhci),
71
TYPE_ASPEED_SDHCI);
72
73
+ object_property_set_int(OBJECT(&s->sdhci), 2, "num-slots", &error_abort);
74
+
75
/* Init sd card slot class here so that they're under the correct parent */
76
for (i = 0; i < ASPEED_SDHCI_NUM_SLOTS; ++i) {
77
sysbus_init_child_obj(obj, "sdhci[*]", OBJECT(&s->sdhci.slots[i]),
78
diff --git a/hw/sd/aspeed_sdhci.c b/hw/sd/aspeed_sdhci.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/hw/sd/aspeed_sdhci.c
81
+++ b/hw/sd/aspeed_sdhci.c
82
@@ -XXX,XX +XXX,XX @@
83
#include "qapi/error.h"
84
#include "hw/irq.h"
85
#include "migration/vmstate.h"
86
+#include "hw/qdev-properties.h"
87
88
#define ASPEED_SDHCI_INFO 0x00
89
#define ASPEED_SDHCI_INFO_RESET 0x00030000
90
@@ -XXX,XX +XXX,XX @@ static void aspeed_sdhci_realize(DeviceState *dev, Error **errp)
91
92
/* Create input irqs for the slots */
93
qdev_init_gpio_in_named_with_opaque(DEVICE(sbd), aspeed_sdhci_set_irq,
94
- sdhci, NULL, ASPEED_SDHCI_NUM_SLOTS);
95
+ sdhci, NULL, sdhci->num_slots);
96
97
sysbus_init_irq(sbd, &sdhci->irq);
98
memory_region_init_io(&sdhci->iomem, OBJECT(sdhci), &aspeed_sdhci_ops,
99
sdhci, TYPE_ASPEED_SDHCI, 0x1000);
100
sysbus_init_mmio(sbd, &sdhci->iomem);
101
102
- for (int i = 0; i < ASPEED_SDHCI_NUM_SLOTS; ++i) {
103
+ for (int i = 0; i < sdhci->num_slots; ++i) {
104
Object *sdhci_slot = OBJECT(&sdhci->slots[i]);
105
SysBusDevice *sbd_slot = SYS_BUS_DEVICE(&sdhci->slots[i]);
106
107
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_aspeed_sdhci = {
108
},
109
};
110
111
+static Property aspeed_sdhci_properties[] = {
112
+ DEFINE_PROP_UINT8("num-slots", AspeedSDHCIState, num_slots, 0),
113
+ DEFINE_PROP_END_OF_LIST(),
114
+};
115
+
116
static void aspeed_sdhci_class_init(ObjectClass *classp, void *data)
117
{
118
DeviceClass *dc = DEVICE_CLASS(classp);
119
@@ -XXX,XX +XXX,XX @@ static void aspeed_sdhci_class_init(ObjectClass *classp, void *data)
120
dc->realize = aspeed_sdhci_realize;
121
dc->reset = aspeed_sdhci_reset;
122
dc->vmsd = &vmstate_aspeed_sdhci;
123
+ device_class_set_props(dc, aspeed_sdhci_properties);
124
}
54
}
125
55
126
static TypeInfo aspeed_sdhci_info = {
56
static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
57
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
58
&& cpu->gic_num_lrs) {
59
int j;
60
61
- cs->maintenance_irq = cpu->gicv3_maintenance_interrupt;
62
-
63
cs->num_list_regs = cpu->gic_num_lrs;
64
cs->vpribits = cpu->gic_vpribits;
65
cs->vprebits = cpu->gic_vprebits;
127
--
66
--
128
2.20.1
67
2.20.1
129
68
130
69
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
The kerneldoc script currently emits Sphinx markup for a macro with
2
arguments that uses the c:function directive. This is correct for
3
Sphinx versions earlier than Sphinx 3, where c:macro doesn't allow
4
documentation of macros with arguments and c:function is not picky
5
about the syntax of what it is passed. However, in Sphinx 3 the
6
c:macro directive was enhanced to support macros with arguments,
7
and c:function was made more picky about what syntax it accepted.
2
8
3
Since we enabled parallel TCG code generation for softmmu (see
9
When kerneldoc is told that it needs to produce output for Sphinx
4
commit 3468b59 "tcg: enable multiple TCG contexts in softmmu")
10
3 or later, make it emit c:function only for functions and c:macro
5
and its subsequent fix (commit 72649619 "add .min_cpus and
11
for macros with arguments. We assume that anything with a return
6
.default_cpus fields to machine_class"), the raspi machines are
12
type is a function and anything without is a macro.
7
restricted to always use their 4 cores:
8
13
9
See in hw/arm/raspi2 (with BCM283X_NCPUS set to 4):
14
This fixes the Sphinx error:
10
15
11
222 static void raspi2_machine_init(MachineClass *mc)
16
/home/petmay01/linaro/qemu-from-laptop/qemu/docs/../include/qom/object.h:155:Error in declarator
12
223 {
17
If declarator-id with parameters (e.g., 'void f(int arg)'):
13
224 mc->desc = "Raspberry Pi 2";
18
Invalid C declaration: Expected identifier in nested name. [error at 25]
14
230 mc->max_cpus = BCM283X_NCPUS;
19
DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
15
231 mc->min_cpus = BCM283X_NCPUS;
20
-------------------------^
16
232 mc->default_cpus = BCM283X_NCPUS;
21
If parenthesis in noptr-declarator (e.g., 'void (*f(int arg))(double)'):
17
235 };
22
Error in declarator or parameters
18
236 DEFINE_MACHINE("raspi2", raspi2_machine_init)
23
Invalid C declaration: Expecting "(" in parameters. [error at 39]
24
DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
25
---------------------------------------^
19
26
20
We can no longer use the -smp option, as we get:
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
29
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
30
Message-id: 20201030174700.7204-2-peter.maydell@linaro.org
31
---
32
scripts/kernel-doc | 18 +++++++++++++++++-
33
1 file changed, 17 insertions(+), 1 deletion(-)
21
34
22
$ qemu-system-arm -M raspi2 -smp 1
35
diff --git a/scripts/kernel-doc b/scripts/kernel-doc
23
qemu-system-arm: Invalid SMP CPUs 1. The min CPUs supported by machine 'raspi2' is 4
36
index XXXXXXX..XXXXXXX 100755
24
37
--- a/scripts/kernel-doc
25
Since we cannot set the TYPE_BCM283x SOC "enabled-cpus" with -smp,
38
+++ b/scripts/kernel-doc
26
remove the now-unnecessary code.
39
@@ -XXX,XX +XXX,XX @@ sub output_function_rst(%) {
27
40
    output_highlight_rst($args{'purpose'});
28
We can achieve the same by using the '-global bcm2836.enabled-cpus=1'
41
    $start = "\n\n**Syntax**\n\n ``";
29
option.
42
} else {
30
43
-    print ".. c:function:: ";
31
Reported-by: Laurent Bonnans <laurent.bonnans@here.com>
44
+ if ((split(/\./, $sphinx_version))[0] >= 3) {
32
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
45
+ # Sphinx 3 and later distinguish macros and functions and
33
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
46
+ # complain if you use c:function with something that's not
34
Message-id: 20200120235159.18510-2-f4bug@amsat.org
47
+ # syntactically valid as a function declaration.
35
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
48
+ # We assume that anything with a return type is a function
36
---
49
+ # and anything without is a macro.
37
hw/arm/raspi.c | 2 --
50
+ if ($args{'functiontype'} ne "") {
38
1 file changed, 2 deletions(-)
51
+ print ".. c:function:: ";
39
52
+ } else {
40
diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
53
+ print ".. c:macro:: ";
41
index XXXXXXX..XXXXXXX 100644
54
+ }
42
--- a/hw/arm/raspi.c
55
+ } else {
43
+++ b/hw/arm/raspi.c
56
+ # Older Sphinx don't support documenting macros that take
44
@@ -XXX,XX +XXX,XX @@ static void raspi_init(MachineState *machine, int version)
57
+ # arguments with c:macro, and don't complain about the use
45
/* Setup the SOC */
58
+ # of c:function for this.
46
object_property_add_const_link(OBJECT(&s->soc), "ram", OBJECT(&s->ram),
59
+ print ".. c:function:: ";
47
&error_abort);
60
+ }
48
- object_property_set_int(OBJECT(&s->soc), machine->smp.cpus, "enabled-cpus",
61
}
49
- &error_abort);
62
if ($args{'functiontype'} ne "") {
50
int board_rev = version == 3 ? 0xa02082 : 0xa21041;
63
    $start .= $args{'functiontype'} . " " . $args{'function'} . " (";
51
object_property_set_int(OBJECT(&s->soc), board_rev, "board-rev",
52
&error_abort);
53
--
64
--
54
2.20.1
65
2.20.1
55
66
56
67
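For reference, the workaround named in the raspi patch above looks roughly
like this on the command line (kernel and firmware arguments omitted):

  qemu-system-arm -M raspi2 -global bcm2836.enabled-cpus=1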
1
The guest can use the semihosting API to open a handle
1
Sphinx 3.2 is pickier than earlier versions about the option:: markup,
2
corresponding to QEMU's own stdin, stdout, or stderr.
2
and complains about our usage in qemu-option-trace.rst:
3
When the guest closes this handle, we should not
3
4
close the underlying host stdin/stdout/stderr
4
../../docs/qemu-option-trace.rst.inc:4:Malformed option description
5
the way we would do if the handle corresponded to
5
'[enable=]PATTERN', should look like "opt", "-opt args", "--opt args",
6
a host fd we'd opened on behalf of the guest in SYS_OPEN.
6
"/opt args" or "+opt args"
7
8
In this file, we're really trying to document the different parts of
9
the top-level --trace option, which qemu-nbd.rst and qemu-img.rst
10
have already introduced with an option:: markup. So it's not right
11
to use option:: here anyway. Switch to a different markup
12
(definition lists) which gives about the same formatted output.
13
14
(Unlike option::, this markup doesn't produce index entries; but
15
at the moment we don't do anything much with indexes anyway, and
16
in any case I think it doesn't make much sense to have individual
17
index entries for the sub-parts of the --trace option.)
7
18
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
20
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
21
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Message-id: 20200124172954.28481-1-peter.maydell@linaro.org
22
Message-id: 20201030174700.7204-3-peter.maydell@linaro.org
12
---
23
---
13
target/arm/arm-semi.c | 9 +++++++++
24
docs/qemu-option-trace.rst.inc | 6 +++---
14
1 file changed, 9 insertions(+)
25
1 file changed, 3 insertions(+), 3 deletions(-)
15
26
16
diff --git a/target/arm/arm-semi.c b/target/arm/arm-semi.c
27
diff --git a/docs/qemu-option-trace.rst.inc b/docs/qemu-option-trace.rst.inc
17
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/arm-semi.c
29
--- a/docs/qemu-option-trace.rst.inc
19
+++ b/target/arm/arm-semi.c
30
+++ b/docs/qemu-option-trace.rst.inc
20
@@ -XXX,XX +XXX,XX @@ static uint32_t host_closefn(ARMCPU *cpu, GuestFD *gf)
31
@@ -XXX,XX +XXX,XX @@
21
{
32
22
CPUARMState *env = &cpu->env;
33
Specify tracing options.
23
34
24
+ /*
35
-.. option:: [enable=]PATTERN
25
+ * Only close the underlying host fd if it's one we opened on behalf
36
+``[enable=]PATTERN``
26
+ * of the guest in SYS_OPEN.
37
27
+ */
38
Immediately enable events matching *PATTERN*
28
+ if (gf->hostfd == STDIN_FILENO ||
39
(either event name or a globbing pattern). This option is only
29
+ gf->hostfd == STDOUT_FILENO ||
40
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
30
+ gf->hostfd == STDERR_FILENO) {
41
31
+ return 0;
42
Use :option:`-trace help` to print a list of names of trace points.
32
+ }
43
33
return set_swi_errno(env, close(gf->hostfd));
44
-.. option:: events=FILE
34
}
45
+``events=FILE``
35
46
47
Immediately enable events listed in *FILE*.
48
The file must contain one event name (as listed in the ``trace-events-all``
49
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
50
available if QEMU has been compiled with the ``simple``, ``log`` or
51
``ftrace`` tracing backend.
52
53
-.. option:: file=FILE
54
+``file=FILE``
55
56
Log output traces to *FILE*.
57
This option is only available if QEMU has been compiled with
36
--
58
--
37
2.20.1
59
2.20.1
38
60
39
61
1
The num-lines property of the TYPE_OR_IRQ device sets the number
1
The randomness tests in the NPCM7xx RNG test fail intermittently
2
of input lines it has. An assert() in or_irq_realize() restricts
2
but fairly frequently. On my machine running the test in a loop:
3
this to the maximum supported by the implementation. However we
3
while QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/npcm7xx_rng-test; do true; done
4
got the condition in the assert wrong: it should be using <=,
5
because num-lines == MAX_OR_LINES is permitted, and means that
6
all entries from 0 to MAX_OR_LINES-1 in the s->levels[] array
7
are used.
8
4
9
We didn't notice this previously because no user has so far
5
will fail in less than a minute with an error like:
10
needed that many input lines.
6
ERROR:../../tests/qtest/npcm7xx_rng-test.c:256:test_first_byte_runs:
7
assertion failed (calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE) > 0.01): (0.00286205989 > 0.01)
11
8
12
Reported-by: Guenter Roeck <linux@roeck-us.net>
9
(Failures have been observed on all 4 of the randomness tests,
10
not just first_byte_runs.)
11
12
It's not clear why these tests are failing like this, but intermittent
13
failures make CI and merge testing awkward, so disable running them
14
unless a developer specifically sets QEMU_TEST_FLAKY_RNG_TESTS when
15
running the test suite, until we work out the cause.
16
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
18
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
19
Message-id: 20201102152454.8287-1-peter.maydell@linaro.org
16
Message-id: 20200120142235.10432-1-peter.maydell@linaro.org
20
Reviewed-by: Havard Skinnemoen <hskinnemoen@google.com>
17
---
21
---
18
hw/core/or-irq.c | 2 +-
22
tests/qtest/npcm7xx_rng-test.c | 14 ++++++++++----
19
1 file changed, 1 insertion(+), 1 deletion(-)
23
1 file changed, 10 insertions(+), 4 deletions(-)
20
24
21
diff --git a/hw/core/or-irq.c b/hw/core/or-irq.c
25
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
22
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/core/or-irq.c
27
--- a/tests/qtest/npcm7xx_rng-test.c
24
+++ b/hw/core/or-irq.c
28
+++ b/tests/qtest/npcm7xx_rng-test.c
25
@@ -XXX,XX +XXX,XX @@ static void or_irq_realize(DeviceState *dev, Error **errp)
29
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
26
{
30
27
qemu_or_irq *s = OR_IRQ(dev);
31
qtest_add_func("npcm7xx_rng/enable_disable", test_enable_disable);
28
32
qtest_add_func("npcm7xx_rng/rosel", test_rosel);
29
- assert(s->num_lines < MAX_OR_LINES);
33
- qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
30
+ assert(s->num_lines <= MAX_OR_LINES);
34
- qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
31
35
- qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
32
qdev_init_gpio_in(dev, or_irq_handler, s->num_lines);
36
- qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
33
}
37
+ /*
38
+ * These tests fail intermittently; only run them on explicit
39
+ * request until we figure out why.
40
+ */
41
+ if (getenv("QEMU_TEST_FLAKY_RNG_TESTS")) {
42
+ qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
43
+ qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
44
+ qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
45
+ qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
46
+ }
47
48
qtest_start("-machine npcm750-evb");
49
ret = g_test_run();
34
--
50
--
35
2.20.1
51
2.20.1
36
52
37
53
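With this patch applied, the skipped randomness tests can still be run on
demand by setting the environment variable the test now checks, for example:

  QEMU_TEST_FLAKY_RNG_TESTS=1 QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/npcm7xx_rng-test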