target-arm queue. This has the "plumb txattrs through various
bits of exec.c" patches, and a collection of bug fixes from
various people.

thanks
-- PMM


The following changes since commit a3ac12fba028df90f7b3dbec924995c126c41022:

  Merge remote-tracking branch 'remotes/ehabkost/tags/numa-next-pull-request' into staging (2018-05-31 11:12:36 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180531

for you to fetch changes up to 49d1dca0520ea71bc21867fab6647f474fcf857b:

  KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice (2018-05-31 14:52:53 +0100)

----------------------------------------------------------------
target-arm queue:
 * target/arm: Honour FPCR.FZ in FRECPX
 * MAINTAINERS: Add entries for newer MPS2 boards and devices
 * hw/intc/arm_gicv3: Fix APxR<n> register dispatching
 * arm_gicv3_kvm: fix bug in writing zero bits back to the in-kernel
   GIC state
 * tcg: Fix helper function vs host abi for float16
 * arm: fix qemu crash on startup with -bios option
 * arm: fix malloc type mismatch
 * xlnx-zdma: Correct mem leaks and memset to zero on desc unaligned errors
 * Correct CPACR reset value for v7 cores
 * memory.h: Improve IOMMU related documentation
 * exec: Plumb transaction attributes through various functions in
   preparation for allowing IOMMUs to see them
 * vmstate.h: Provide VMSTATE_BOOL_SUB_ARRAY
 * ARM: ACPI: Fix use-after-free due to memory realloc
 * KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice

----------------------------------------------------------------
Francisco Iglesias (1):
      xlnx-zdma: Correct mem leaks and memset to zero on desc unaligned errors

Igor Mammedov (1):
      arm: fix qemu crash on startup with -bios option

Jan Kiszka (1):
      hw/intc/arm_gicv3: Fix APxR<n> register dispatching

Paolo Bonzini (1):
      arm: fix malloc type mismatch

Peter Maydell (17):
      target/arm: Honour FPCR.FZ in FRECPX
      MAINTAINERS: Add entries for newer MPS2 boards and devices
      Correct CPACR reset value for v7 cores
      memory.h: Improve IOMMU related documentation
      Make tb_invalidate_phys_addr() take a MemTxAttrs argument
      Make address_space_translate{, _cached}() take a MemTxAttrs argument
      Make address_space_map() take a MemTxAttrs argument
      Make address_space_access_valid() take a MemTxAttrs argument
      Make flatview_extend_translation() take a MemTxAttrs argument
      Make memory_region_access_valid() take a MemTxAttrs argument
      Make MemoryRegion valid.accepts callback take a MemTxAttrs argument
      Make flatview_access_valid() take a MemTxAttrs argument
      Make flatview_translate() take a MemTxAttrs argument
      Make address_space_get_iotlb_entry() take a MemTxAttrs argument
      Make flatview_do_translate() take a MemTxAttrs argument
      Make address_space_translate_iommu take a MemTxAttrs argument
      vmstate.h: Provide VMSTATE_BOOL_SUB_ARRAY

Richard Henderson (1):
      tcg: Fix helper function vs host abi for float16

Shannon Zhao (3):
      arm_gicv3_kvm: increase clroffset accordingly
      ARM: ACPI: Fix use-after-free due to memory realloc
      KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice

 include/exec/exec-all.h | 5 +-
 include/exec/helper-head.h | 2 +-
 include/exec/memory-internal.h | 3 +-
 include/exec/memory.h | 128 +++++++++++++++++++++++++++++++++++------
 include/migration/vmstate.h | 3 +
 include/sysemu/dma.h | 6 +-
 accel/tcg/translate-all.c | 4 +-
 exec.c | 95 ++++++++++++++++++------------
 hw/arm/boot.c | 18 +++---
 hw/arm/virt-acpi-build.c | 20 +++++--
 hw/dma/xlnx-zdma.c | 10 +++-
 hw/hppa/dino.c | 3 +-
 hw/intc/arm_gic_kvm.c | 1 -
 hw/intc/arm_gicv3_cpuif.c | 12 ++--
 hw/intc/arm_gicv3_kvm.c | 2 +-
 hw/nvram/fw_cfg.c | 12 ++--
 hw/s390x/s390-pci-inst.c | 3 +-
 hw/scsi/esp.c | 3 +-
 hw/vfio/common.c | 3 +-
 hw/virtio/vhost.c | 3 +-
 hw/xen/xen_pt_msi.c | 3 +-
 memory.c | 12 ++--
 memory_ldst.inc.c | 18 +++---
 target/arm/gdbstub.c | 3 +-
 target/arm/helper-a64.c | 41 +++++++------
 target/arm/helper.c | 90 ++++++++++++++++-------------
 target/ppc/mmu-hash64.c | 3 +-
 target/riscv/helper.c | 2 +-
 target/s390x/diag.c | 6 +-
 target/s390x/excp_helper.c | 3 +-
 target/s390x/mmu_helper.c | 3 +-
 target/s390x/sigp.c | 3 +-
 target/xtensa/op_helper.c | 3 +-
 MAINTAINERS | 9 ++-
 34 files changed, 353 insertions(+), 182 deletions(-)


The following changes since commit 003ba52a8b327180e284630b289c6ece5a3e08b9:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2023-02-16 11:16:39 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230216

for you to fetch changes up to caf01d6a435d9f4a95aeae2f9fc6cb8b889b1fb8:

  tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG (2023-02-16 16:28:53 +0000)

----------------------------------------------------------------
target-arm queue:
 * Some mostly M-profile-related code cleanups
 * avocado: Retire the boot_linux.py AArch64 TCG tests
 * hw/arm/smmuv3: Add GBPA register
 * arm/virt: don't try to spell out the accelerator
 * hw/arm: Attach PSPI module to NPCM7XX SoC
 * Some cleanup/refactoring patches aiming towards
   allowing building Arm targets without CONFIG_TCG

----------------------------------------------------------------
Alex Bennée (1):
      tests/avocado: retire the Aarch64 TCG tests from boot_linux.py

Claudio Fontana (3):
      target/arm: rename handle_semihosting to tcg_handle_semihosting
      target/arm: wrap psci call with tcg_enabled
      target/arm: wrap call to aarch64_sve_change_el in tcg_enabled()

Cornelia Huck (1):
      arm/virt: don't try to spell out the accelerator

Fabiano Rosas (7):
      target/arm: Move PC alignment check
      target/arm: Move cpregs code out of cpu.h
      tests/avocado: Skip tests that require a missing accelerator
      tests/avocado: Tag TCG tests with accel:tcg
      target/arm: Use "max" as default cpu for the virt machine with KVM
      tests/qtest: arm-cpu-features: Match tests to required accelerators
      tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG

Hao Wu (3):
      MAINTAINERS: Add myself to maintainers and remove Havard
      hw/ssi: Add Nuvoton PSPI Module
      hw/arm: Attach PSPI module to NPCM7XX SoC

Jean-Philippe Brucker (2):
      hw/arm/smmu-common: Support 64-bit addresses
      hw/arm/smmu-common: Fix TTB1 handling

Mostafa Saleh (1):
      hw/arm/smmuv3: Add GBPA register

Philippe Mathieu-Daudé (12):
      hw/intc/armv7m_nvic: Use OBJECT_DECLARE_SIMPLE_TYPE() macro
      target/arm: Simplify arm_v7m_mmu_idx_for_secstate() for user emulation
      target/arm: Reduce arm_v7m_mmu_idx_[all/for_secstate_and_priv]() scope
      target/arm: Constify ID_PFR1 on user emulation
      target/arm: Convert CPUARMState::eabi to boolean
      target/arm: Avoid resetting CPUARMState::eabi field
      target/arm: Restrict CPUARMState::gicv3state to sysemu
      target/arm: Restrict CPUARMState::arm_boot_info to sysemu
      target/arm: Restrict CPUARMState::nvic to sysemu
      target/arm: Store CPUARMState::nvic as NVICState*
      target/arm: Declare CPU <-> NVIC helpers in 'hw/intc/armv7m_nvic.h'
      hw/arm: Add missing XLNX_ZYNQMP_ARM -> USB_DWC3 Kconfig dependency

 MAINTAINERS | 8 +-
 docs/system/arm/nuvoton.rst | 2 +-
 hw/arm/smmuv3-internal.h | 7 +
 include/hw/arm/npcm7xx.h | 2 +
 include/hw/arm/smmu-common.h | 2 -
 include/hw/arm/smmuv3.h | 1 +
 include/hw/intc/armv7m_nvic.h | 128 +++++++++++++++++-
 include/hw/ssi/npcm_pspi.h | 53 ++++++++
 linux-user/user-internals.h | 2 +-
 target/arm/cpregs.h | 98 ++++++++++++++
 target/arm/cpu.h | 228 ++-------------------------------
 target/arm/internals.h | 14 --
 hw/arm/npcm7xx.c | 25 +++-
 hw/arm/smmu-common.c | 4 +-
 hw/arm/smmuv3.c | 43 ++++++-
 hw/arm/virt.c | 10 +-
 hw/intc/armv7m_nvic.c | 38 ++----
 hw/ssi/npcm_pspi.c | 221 ++++++++++++++++++++++++++++++++
 linux-user/arm/cpu_loop.c | 4 +-
 target/arm/cpu.c | 5 +-
 target/arm/cpu_tcg.c | 3 +
 target/arm/helper.c | 31 +++--
 target/arm/m_helper.c | 86 +++++++------
 target/arm/machine.c | 18 +--
 tests/qtest/arm-cpu-features.c | 28 ++--
 hw/arm/Kconfig | 1 +
 hw/ssi/meson.build | 2 +-
 hw/ssi/trace-events | 5 +
 tests/avocado/avocado_qemu/__init__.py | 4 +
 tests/avocado/boot_linux.py | 48 ++-----
 tests/avocado/boot_linux_console.py | 1 +
 tests/avocado/machine_aarch64_virt.py | 63 ++++++++-
 tests/avocado/reverse_debugging.py | 8 ++
 tests/qtest/meson.build | 4 +-
 34 files changed, 798 insertions(+), 399 deletions(-)
 create mode 100644 include/hw/ssi/npcm_pspi.h
 create mode 100644 hw/ssi/npcm_pspi.c
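The "plumb transaction attributes" series summarised in the first queue above follows one mechanical pattern: each layer of the memory API gains a MemTxAttrs parameter and simply forwards it, so that the IOMMU translate step at the bottom can finally see the attributes of the access. A minimal standalone sketch of that pattern (illustrative types and bodies only, not QEMU code; the real signatures are in the patches below):

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool unspecified;   /* no attribute information available */
    bool secure;        /* e.g. an Arm TrustZone secure access */
} MemTxAttrs;

#define MEMTXATTRS_UNSPECIFIED ((MemTxAttrs){ .unspecified = true })

/* Bottom layer: the point that actually wants to see the attributes. */
static bool iommu_translate(unsigned long addr, bool is_write, MemTxAttrs attrs)
{
    return attrs.unspecified || attrs.secure || !is_write;
}

/* Intermediate layers: gain an attrs argument and forward it unchanged. */
static bool flatview_access_valid(unsigned long addr, int len, bool is_write,
                                  MemTxAttrs attrs)
{
    return iommu_translate(addr, is_write, attrs);
}

static bool address_space_access_valid(unsigned long addr, int len,
                                       bool is_write, MemTxAttrs attrs)
{
    return flatview_access_valid(addr, len, is_write, attrs);
}

int main(void)
{
    MemTxAttrs secure = { .secure = true };

    /* Callers with attributes pass them; callers without use UNSPECIFIED. */
    printf("%d\n", address_space_access_valid(0x1000, 4, true, secure));
    printf("%d\n", address_space_access_valid(0x1000, 4, true,
                                              MEMTXATTRS_UNSPECIFIED));
    return 0;
}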
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
kvm_irqchip_create called by kvm_init will call kvm_init_irq_routing to
3
Manually convert to OBJECT_DECLARE_SIMPLE_TYPE() macro,
4
initialize global capability variables. If we call kvm_init_irq_routing in
4
similarly to the automatic conversion from commit 8063396bf3
5
the GIC realize function, the previously allocated memory will leak.
5
("Use OBJECT_DECLARE_SIMPLE_TYPE when possible").
6
6
7
Fix this by deleting the unnecessary call.
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
9
Message-id: 20230206223502.25122-2-philmd@linaro.org
10
Reviewed-by: Eric Auger <eric.auger@redhat.com>
11
Message-id: 1527750994-14360-1-git-send-email-zhaoshenglong@huawei.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
hw/intc/arm_gic_kvm.c | 1 -
12
include/hw/intc/armv7m_nvic.h | 5 +----
15
hw/intc/arm_gicv3_kvm.c | 1 -
13
1 file changed, 1 insertion(+), 4 deletions(-)
16
2 files changed, 2 deletions(-)
17
14
18
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
15
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/arm_gic_kvm.c
17
--- a/include/hw/intc/armv7m_nvic.h
21
+++ b/hw/intc/arm_gic_kvm.c
18
+++ b/include/hw/intc/armv7m_nvic.h
22
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
19
@@ -XXX,XX +XXX,XX @@
23
20
#include "qom/object.h"
24
if (kvm_has_gsi_routing()) {
21
25
/* set up irq routing */
22
#define TYPE_NVIC "armv7m_nvic"
26
- kvm_init_irq_routing(kvm_state);
23
-
27
for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
24
-typedef struct NVICState NVICState;
28
kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
25
-DECLARE_INSTANCE_CHECKER(NVICState, NVIC,
29
}
26
- TYPE_NVIC)
30
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
27
+OBJECT_DECLARE_SIMPLE_TYPE(NVICState, NVIC)
31
index XXXXXXX..XXXXXXX 100644
28
32
--- a/hw/intc/arm_gicv3_kvm.c
29
/* Highest permitted number of exceptions (architectural limit) */
33
+++ b/hw/intc/arm_gicv3_kvm.c
30
#define NVIC_MAX_VECTORS 512
34
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
35
36
if (kvm_has_gsi_routing()) {
37
/* set up irq routing */
38
- kvm_init_irq_routing(kvm_state);
39
for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
40
kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
41
}
42
--
31
--
43
2.17.1
32
2.34.1
44
33
45
34
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
add MemTxAttrs as an argument to flatview_access_valid().
3
Its callers now all have an attrs value to hand, so we can
4
correct our earlier temporary use of MEMTXATTRS_UNSPECIFIED.
5
2
3
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20230206223502.25122-3-philmd@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-10-peter.maydell@linaro.org
10
---
8
---
11
exec.c | 12 +++++-------
9
target/arm/m_helper.c | 11 ++++++++---
12
1 file changed, 5 insertions(+), 7 deletions(-)
10
1 file changed, 8 insertions(+), 3 deletions(-)
13
11
14
diff --git a/exec.c b/exec.c
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
14
--- a/target/arm/m_helper.c
17
+++ b/exec.c
15
+++ b/target/arm/m_helper.c
18
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
16
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
19
static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
17
return 0;
20
const uint8_t *buf, int len);
21
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
22
- bool is_write);
23
+ bool is_write, MemTxAttrs attrs);
24
25
static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
26
unsigned len, MemTxAttrs attrs)
27
@@ -XXX,XX +XXX,XX @@ static bool subpage_accepts(void *opaque, hwaddr addr,
28
#endif
29
30
return flatview_access_valid(subpage->fv, addr + subpage->base,
31
- len, is_write);
32
+ len, is_write, attrs);
33
}
18
}
34
19
35
static const MemoryRegionOps subpage_ops = {
20
-#else
36
@@ -XXX,XX +XXX,XX @@ static void cpu_notify_map_clients(void)
21
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
22
+{
23
+ return ARMMMUIdx_MUser;
24
+}
25
+
26
+#else /* !CONFIG_USER_ONLY */
27
28
/*
29
* What kind of stack write are we doing? This affects how exceptions
30
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
31
return tt_resp;
37
}
32
}
38
33
39
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
34
-#endif /* !CONFIG_USER_ONLY */
40
- bool is_write)
35
-
41
+ bool is_write, MemTxAttrs attrs)
36
ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
37
bool secstate, bool priv, bool negpri)
42
{
38
{
43
MemoryRegion *mr;
39
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
44
hwaddr l, xlat;
40
45
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
41
return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
46
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
47
if (!memory_access_is_direct(mr, is_write)) {
48
l = memory_access_size(mr, l, addr);
49
- /* When our callers all have attrs we'll pass them through here */
50
- if (!memory_region_access_valid(mr, xlat, l, is_write,
51
- MEMTXATTRS_UNSPECIFIED)) {
52
+ if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
53
return false;
54
}
55
}
56
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
57
58
rcu_read_lock();
59
fv = address_space_to_flatview(as);
60
- result = flatview_access_valid(fv, addr, len, is_write);
61
+ result = flatview_access_valid(fv, addr, len, is_write, attrs);
62
rcu_read_unlock();
63
return result;
64
}
42
}
43
+
44
+#endif /* !CONFIG_USER_ONLY */
65
--
45
--
66
2.17.1
46
2.34.1
67
47
68
48
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
add MemTxAttrs as an argument to address_space_translate_iommu().
3
2
3
arm_v7m_mmu_idx_all() and arm_v7m_mmu_idx_for_secstate_and_priv()
4
are only used for system emulation in m_helper.c.
5
Move the definitions to avoid prototype forward declarations.
6
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20230206223502.25122-4-philmd@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-14-peter.maydell@linaro.org
8
---
11
---
9
exec.c | 8 +++++---
12
target/arm/internals.h | 14 --------
10
1 file changed, 5 insertions(+), 3 deletions(-)
13
target/arm/m_helper.c | 74 +++++++++++++++++++++---------------------
14
2 files changed, 37 insertions(+), 51 deletions(-)
11
15
12
diff --git a/exec.c b/exec.c
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
13
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
18
--- a/target/arm/internals.h
15
+++ b/exec.c
19
+++ b/target/arm/internals.h
16
@@ -XXX,XX +XXX,XX @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
20
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx core_to_aa64_mmu_idx(int mmu_idx)
17
* @is_write: whether the translation operation is for write
21
18
* @is_mmio: whether this can be MMIO, set true if it can
22
int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx);
19
* @target_as: the address space targeted by the IOMMU
23
20
+ * @attrs: transaction attributes
24
-/*
21
*
25
- * Return the MMU index for a v7M CPU with all relevant information
22
* This function is called from RCU critical section. It is the common
26
- * manually specified.
23
* part of flatview_do_translate and address_space_translate_cached.
27
- */
24
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
28
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
25
hwaddr *page_mask_out,
29
- bool secstate, bool priv, bool negpri);
26
bool is_write,
30
-
27
bool is_mmio,
31
-/*
28
- AddressSpace **target_as)
32
- * Return the MMU index for a v7M CPU in the specified security and
29
+ AddressSpace **target_as,
33
- * privilege state.
30
+ MemTxAttrs attrs)
34
- */
31
{
35
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
32
MemoryRegionSection *section;
36
- bool secstate, bool priv);
33
hwaddr page_mask = (hwaddr)-1;
37
-
34
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
38
/* Return the MMU index for a v7M CPU in the specified security state */
35
return address_space_translate_iommu(iommu_mr, xlat,
39
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
36
plen_out, page_mask_out,
40
37
is_write, is_mmio,
41
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
38
- target_as);
42
index XXXXXXX..XXXXXXX 100644
39
+ target_as, attrs);
43
--- a/target/arm/m_helper.c
40
}
44
+++ b/target/arm/m_helper.c
41
if (page_mask_out) {
45
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
42
/* Not behind an IOMMU, use default page size. */
46
43
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate_cached(
47
#else /* !CONFIG_USER_ONLY */
44
48
45
section = address_space_translate_iommu(iommu_mr, xlat, plen,
49
+static ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
46
NULL, is_write, true,
50
+ bool secstate, bool priv, bool negpri)
47
- &target_as);
51
+{
48
+ &target_as, attrs);
52
+ ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
49
return section.mr;
53
+
54
+ if (priv) {
55
+ mmu_idx |= ARM_MMU_IDX_M_PRIV;
56
+ }
57
+
58
+ if (negpri) {
59
+ mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
60
+ }
61
+
62
+ if (secstate) {
63
+ mmu_idx |= ARM_MMU_IDX_M_S;
64
+ }
65
+
66
+ return mmu_idx;
67
+}
68
+
69
+static ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
70
+ bool secstate, bool priv)
71
+{
72
+ bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
73
+
74
+ return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
75
+}
76
+
77
+/* Return the MMU index for a v7M CPU in the specified security state */
78
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
79
+{
80
+ bool priv = arm_v7m_is_handler_mode(env) ||
81
+ !(env->v7m.control[secstate] & 1);
82
+
83
+ return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
84
+}
85
+
86
/*
87
* What kind of stack write are we doing? This affects how exceptions
88
* generated during the stacking are treated.
89
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
90
return tt_resp;
50
}
91
}
51
92
93
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
94
- bool secstate, bool priv, bool negpri)
95
-{
96
- ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
97
-
98
- if (priv) {
99
- mmu_idx |= ARM_MMU_IDX_M_PRIV;
100
- }
101
-
102
- if (negpri) {
103
- mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
104
- }
105
-
106
- if (secstate) {
107
- mmu_idx |= ARM_MMU_IDX_M_S;
108
- }
109
-
110
- return mmu_idx;
111
-}
112
-
113
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
114
- bool secstate, bool priv)
115
-{
116
- bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
117
-
118
- return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
119
-}
120
-
121
-/* Return the MMU index for a v7M CPU in the specified security state */
122
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
123
-{
124
- bool priv = arm_v7m_is_handler_mode(env) ||
125
- !(env->v7m.control[secstate] & 1);
126
-
127
- return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
128
-}
129
-
130
#endif /* !CONFIG_USER_ONLY */
52
--
131
--
53
2.17.1
132
2.34.1
54
133
55
134
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
Depending on the host abi, float16, aka uint16_t, values are
3
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
passed and returned either zero-extended in the host register
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
or with garbage at the top of the host register.
5
Message-id: 20230206223502.25122-5-philmd@linaro.org
6
7
The tcg code generator has so far been assuming garbage, as that
8
matches the x86 abi, but this is incorrect for other host abis.
9
Further, target/arm has so far been assuming zero-extended results,
10
so that it may store the 16-bit value into a 32-bit slot with the
11
high 16-bits already clear.
12
13
Rectify both problems by mapping "f16" in the helper definition
14
to uint32_t instead of (a typedef for) uint16_t. This forces
15
the host compiler to assume garbage in the upper 16 bits on input
16
and to zero-extend the result on output.
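A tiny standalone illustration of the zero-extension half of this (hypothetical helper name, not QEMU code): once the helper's C return type is 32 bits wide, the compiler must deliver the 16-bit payload with bits [31:16] already clear, which is exactly what the 32-bit destination slot expects.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical f16 helper: the 16-bit result travels in a uint32_t. */
static uint32_t helper_example_f16(uint32_t a)
{
    uint16_t result = (uint16_t)(a + 1);  /* some 16-bit computation */
    return result;                        /* implicitly zero-extended */
}

int main(void)
{
    /* 0xffff + 1 wraps to 0 in 16 bits; the caller's 32-bit slot sees
     * 0x00000000 with the high half guaranteed clear. */
    printf("0x%08" PRIx32 "\n", helper_example_f16(0xffffu));
    return 0;
}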
17
18
Cc: qemu-stable@nongnu.org
19
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
22
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
23
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
---
7
---
26
include/exec/helper-head.h | 2 +-
8
target/arm/helper.c | 12 ++++++++++--
27
target/arm/helper-a64.c | 35 +++++++++--------
9
1 file changed, 10 insertions(+), 2 deletions(-)
28
target/arm/helper.c | 80 +++++++++++++++++++-------------------
29
3 files changed, 59 insertions(+), 58 deletions(-)
30
10
31
diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/include/exec/helper-head.h
34
+++ b/include/exec/helper-head.h
35
@@ -XXX,XX +XXX,XX @@
36
#define dh_ctype_int int
37
#define dh_ctype_i64 uint64_t
38
#define dh_ctype_s64 int64_t
39
-#define dh_ctype_f16 float16
40
+#define dh_ctype_f16 uint32_t
41
#define dh_ctype_f32 float32
42
#define dh_ctype_f64 float64
43
#define dh_ctype_ptr void *
44
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper-a64.c
47
+++ b/target/arm/helper-a64.c
48
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
49
return flags;
50
}
51
52
-uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
53
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
54
{
55
return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
56
}
57
58
-uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
59
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
60
{
61
return float_rel_to_flags(float16_compare(x, y, fp_status));
62
}
63
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
64
#define float64_three make_float64(0x4008000000000000ULL)
65
#define float64_one_point_five make_float64(0x3FF8000000000000ULL)
66
67
-float16 HELPER(recpsf_f16)(float16 a, float16 b, void *fpstp)
68
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
69
{
70
float_status *fpst = fpstp;
71
72
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
73
return float64_muladd(a, b, float64_two, 0, fpst);
74
}
75
76
-float16 HELPER(rsqrtsf_f16)(float16 a, float16 b, void *fpstp)
77
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
78
{
79
float_status *fpst = fpstp;
80
81
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u16)(uint64_t a)
82
}
83
84
/* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
85
-float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
86
+uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
87
{
88
float_status *fpst = fpstp;
89
uint16_t val16, sbit;
90
@@ -XXX,XX +XXX,XX @@ void HELPER(casp_be_parallel)(CPUARMState *env, uint32_t rs, uint64_t addr,
91
#define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))
92
93
#define ADVSIMD_HALFOP(name) \
94
-float16 ADVSIMD_HELPER(name, h)(float16 a, float16 b, void *fpstp) \
95
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
96
{ \
97
float_status *fpst = fpstp; \
98
return float16_ ## name(a, b, fpst); \
99
@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(mulx)
100
ADVSIMD_TWOHALFOP(mulx)
101
102
/* fused multiply-accumulate */
103
-float16 HELPER(advsimd_muladdh)(float16 a, float16 b, float16 c, void *fpstp)
104
+uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
105
+ void *fpstp)
106
{
107
float_status *fpst = fpstp;
108
return float16_muladd(a, b, c, 0, fpst);
109
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
110
111
#define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0
112
113
-uint32_t HELPER(advsimd_ceq_f16)(float16 a, float16 b, void *fpstp)
114
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
115
{
116
float_status *fpst = fpstp;
117
int compare = float16_compare_quiet(a, b, fpst);
118
return ADVSIMD_CMPRES(compare == float_relation_equal);
119
}
120
121
-uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
122
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
123
{
124
float_status *fpst = fpstp;
125
int compare = float16_compare(a, b, fpst);
126
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
127
compare == float_relation_equal);
128
}
129
130
-uint32_t HELPER(advsimd_cgt_f16)(float16 a, float16 b, void *fpstp)
131
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
132
{
133
float_status *fpst = fpstp;
134
int compare = float16_compare(a, b, fpst);
135
return ADVSIMD_CMPRES(compare == float_relation_greater);
136
}
137
138
-uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
139
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
140
{
141
float_status *fpst = fpstp;
142
float16 f0 = float16_abs(a);
143
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
144
compare == float_relation_equal);
145
}
146
147
-uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
148
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
149
{
150
float_status *fpst = fpstp;
151
float16 f0 = float16_abs(a);
152
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
153
}
154
155
/* round to integral */
156
-float16 HELPER(advsimd_rinth_exact)(float16 x, void *fp_status)
157
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
158
{
159
return float16_round_to_int(x, fp_status);
160
}
161
162
-float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
163
+uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
164
{
165
int old_flags = get_float_exception_flags(fp_status), new_flags;
166
float16 ret;
167
@@ -XXX,XX +XXX,XX @@ float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
168
* setting the mode appropriately before calling the helper.
169
*/
170
171
-uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
172
+uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
173
{
174
float_status *fpst = fpstp;
175
176
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
177
return float16_to_int16(a, fpst);
178
}
179
180
-uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
181
+uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
182
{
183
float_status *fpst = fpstp;
184
185
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
186
* Square Root and Reciprocal square root
187
*/
188
189
-float16 HELPER(sqrt_f16)(float16 a, void *fpstp)
190
+uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
191
{
192
float_status *s = fpstp;
193
194
diff --git a/target/arm/helper.c b/target/arm/helper.c
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
195
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
196
--- a/target/arm/helper.c
13
--- a/target/arm/helper.c
197
+++ b/target/arm/helper.c
14
+++ b/target/arm/helper.c
198
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64)
15
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
199
200
/* Integer to float and float to integer conversions */
201
202
-#define CONV_ITOF(name, fsz, sign) \
203
- float##fsz HELPER(name)(uint32_t x, void *fpstp) \
204
-{ \
205
- float_status *fpst = fpstp; \
206
- return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
207
+#define CONV_ITOF(name, ftype, fsz, sign) \
208
+ftype HELPER(name)(uint32_t x, void *fpstp) \
209
+{ \
210
+ float_status *fpst = fpstp; \
211
+ return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
212
}
213
214
-#define CONV_FTOI(name, fsz, sign, round) \
215
-uint32_t HELPER(name)(float##fsz x, void *fpstp) \
216
-{ \
217
- float_status *fpst = fpstp; \
218
- if (float##fsz##_is_any_nan(x)) { \
219
- float_raise(float_flag_invalid, fpst); \
220
- return 0; \
221
- } \
222
- return float##fsz##_to_##sign##int32##round(x, fpst); \
223
+#define CONV_FTOI(name, ftype, fsz, sign, round) \
224
+uint32_t HELPER(name)(ftype x, void *fpstp) \
225
+{ \
226
+ float_status *fpst = fpstp; \
227
+ if (float##fsz##_is_any_nan(x)) { \
228
+ float_raise(float_flag_invalid, fpst); \
229
+ return 0; \
230
+ } \
231
+ return float##fsz##_to_##sign##int32##round(x, fpst); \
232
}
233
234
-#define FLOAT_CONVS(name, p, fsz, sign) \
235
-CONV_ITOF(vfp_##name##to##p, fsz, sign) \
236
-CONV_FTOI(vfp_to##name##p, fsz, sign, ) \
237
-CONV_FTOI(vfp_to##name##z##p, fsz, sign, _round_to_zero)
238
+#define FLOAT_CONVS(name, p, ftype, fsz, sign) \
239
+ CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \
240
+ CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \
241
+ CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero)
242
243
-FLOAT_CONVS(si, h, 16, )
244
-FLOAT_CONVS(si, s, 32, )
245
-FLOAT_CONVS(si, d, 64, )
246
-FLOAT_CONVS(ui, h, 16, u)
247
-FLOAT_CONVS(ui, s, 32, u)
248
-FLOAT_CONVS(ui, d, 64, u)
249
+FLOAT_CONVS(si, h, uint32_t, 16, )
250
+FLOAT_CONVS(si, s, float32, 32, )
251
+FLOAT_CONVS(si, d, float64, 64, )
252
+FLOAT_CONVS(ui, h, uint32_t, 16, u)
253
+FLOAT_CONVS(ui, s, float32, 32, u)
254
+FLOAT_CONVS(ui, d, float64, 64, u)
255
256
#undef CONV_ITOF
257
#undef CONV_FTOI
258
@@ -XXX,XX +XXX,XX @@ static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
259
return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
260
}
261
262
-float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
263
+uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
264
{
265
return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
266
}
267
268
-float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
269
+uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
270
{
271
return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
272
}
273
274
-float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
275
+uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
276
{
277
return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
278
}
279
280
-float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
281
+uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
282
{
283
return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
284
}
285
@@ -XXX,XX +XXX,XX @@ static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
286
}
16
}
287
}
17
}
288
18
289
-uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
19
+#ifndef CONFIG_USER_ONLY
290
+uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst)
20
/*
21
* We don't know until after realize whether there's a GICv3
22
* attached, and that is what registers the gicv3 sysregs.
23
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
24
return pfr1;
25
}
26
27
-#ifndef CONFIG_USER_ONLY
28
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
291
{
29
{
292
return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
30
ARMCPU *cpu = env_archcpu(env);
293
}
31
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
294
32
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 1,
295
-uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
33
.access = PL1_R, .type = ARM_CP_NO_RAW,
296
+uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst)
34
.accessfn = access_aa32_tid3,
297
{
35
+#ifdef CONFIG_USER_ONLY
298
return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
36
+ .type = ARM_CP_CONST,
299
}
37
+ .resetvalue = cpu->isar.id_pfr1,
300
38
+#else
301
-uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
39
+ .type = ARM_CP_NO_RAW,
302
+uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst)
40
+ .accessfn = access_aa32_tid3,
303
{
41
.readfn = id_pfr1_read,
304
return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
42
- .writefn = arm_cp_write_ignore },
305
}
43
+ .writefn = arm_cp_write_ignore
306
44
+#endif
307
-uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
45
+ },
308
+uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst)
46
{ .name = "ID_DFR0", .state = ARM_CP_STATE_BOTH,
309
{
47
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2,
310
return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
48
.access = PL1_R, .type = ARM_CP_CONST,
311
}
312
313
-uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
314
+uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst)
315
{
316
return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
317
}
318
319
-uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
320
+uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst)
321
{
322
return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
323
}
324
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env)
325
}
326
327
/* Half precision conversions. */
328
-float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
329
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
330
{
331
/* Squash FZ16 to 0 for the duration of conversion. In this case,
332
* it would affect flushing input denormals.
333
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
334
return r;
335
}
336
337
-float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
338
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
339
{
340
/* Squash FZ16 to 0 for the duration of conversion. In this case,
341
* it would affect flushing output denormals.
342
@@ -XXX,XX +XXX,XX @@ float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
343
return r;
344
}
345
346
-float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
347
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
348
{
349
/* Squash FZ16 to 0 for the duration of conversion. In this case,
350
* it would affect flushing input denormals.
351
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
352
return r;
353
}
354
355
-float16 HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
356
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
357
{
358
/* Squash FZ16 to 0 for the duration of conversion. In this case,
359
* it would affect flushing output denormals.
360
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
361
g_assert_not_reached();
362
}
363
364
-float16 HELPER(recpe_f16)(float16 input, void *fpstp)
365
+uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
366
{
367
float_status *fpst = fpstp;
368
float16 f16 = float16_squash_input_denormal(input, fpst);
369
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
370
return extract64(estimate, 0, 8) << 44;
371
}
372
373
-float16 HELPER(rsqrte_f16)(float16 input, void *fpstp)
374
+uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
375
{
376
float_status *s = fpstp;
377
float16 f16 = float16_squash_input_denormal(input, s);
378
--
49
--
379
2.17.1
50
2.34.1
380
51
381
52
1
Provide a VMSTATE_BOOL_SUB_ARRAY to go with VMSTATE_UINT8_SUB_ARRAY
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
and friends.
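A hedged usage sketch of the new macro (the device and field names here are made up; the signature VMSTATE_BOOL_SUB_ARRAY(_f, _s, _start, _num) is taken from the patch below): it migrates elements _start .. _start+_num-1 of a bool array, mirroring how VMSTATE_UINT8_SUB_ARRAY is used.

/* Fragment meant to live inside a QEMU device model, not standalone code. */
#include "migration/vmstate.h"

typedef struct MyDeviceState {
    /* ... other device state ... */
    bool line_state[16];
} MyDeviceState;

static const VMStateDescription vmstate_mydev = {
    .name = "mydev",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        /* migrate only elements 4..11 of line_state */
        VMSTATE_BOOL_SUB_ARRAY(line_state, MyDeviceState, 4, 8),
        VMSTATE_END_OF_LIST()
    }
};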
3
2
3
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20230206223502.25122-6-philmd@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Message-id: 20180521140402.23318-23-peter.maydell@linaro.org
7
---
8
---
8
include/migration/vmstate.h | 3 +++
9
linux-user/user-internals.h | 2 +-
9
1 file changed, 3 insertions(+)
10
target/arm/cpu.h | 2 +-
11
linux-user/arm/cpu_loop.c | 4 ++--
12
3 files changed, 4 insertions(+), 4 deletions(-)
10
13
11
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
14
diff --git a/linux-user/user-internals.h b/linux-user/user-internals.h
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/include/migration/vmstate.h
16
--- a/linux-user/user-internals.h
14
+++ b/include/migration/vmstate.h
17
+++ b/linux-user/user-internals.h
15
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
18
@@ -XXX,XX +XXX,XX @@ void print_termios(void *arg);
16
#define VMSTATE_BOOL_ARRAY(_f, _s, _n) \
19
#ifdef TARGET_ARM
17
VMSTATE_BOOL_ARRAY_V(_f, _s, _n, 0)
20
static inline int regpairs_aligned(CPUArchState *cpu_env, int num)
18
21
{
19
+#define VMSTATE_BOOL_SUB_ARRAY(_f, _s, _start, _num) \
22
- return cpu_env->eabi == 1;
20
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_bool, bool)
23
+ return cpu_env->eabi;
21
+
24
}
22
#define VMSTATE_UINT16_ARRAY_V(_f, _s, _n, _v) \
25
#elif defined(TARGET_MIPS) && defined(TARGET_ABI_MIPSO32)
23
VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_uint16, uint16_t)
26
static inline int regpairs_aligned(CPUArchState *cpu_env, int num) { return 1; }
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.h
30
+++ b/target/arm/cpu.h
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
32
33
#if defined(CONFIG_USER_ONLY)
34
/* For usermode syscall translation. */
35
- int eabi;
36
+ bool eabi;
37
#endif
38
39
struct CPUBreakpoint *cpu_breakpoint[16];
40
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/linux-user/arm/cpu_loop.c
43
+++ b/linux-user/arm/cpu_loop.c
44
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
45
break;
46
case EXCP_SWI:
47
{
48
- env->eabi = 1;
49
+ env->eabi = true;
50
/* system call */
51
if (env->thumb) {
52
/* Thumb is always EABI style with syscall number in r7 */
53
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
54
* > 0xfffff and are handled below as out-of-range.
55
*/
56
n ^= ARM_SYSCALL_BASE;
57
- env->eabi = 0;
58
+ env->eabi = false;
59
}
60
}
24
61
25
--
62
--
26
2.17.1
63
2.34.1
27
64
28
65
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
add MemTxAttrs as an argument to flatview_do_translate().
3
2
3
Although the 'eabi' field is only used in user emulation where
4
CPU reset doesn't occur, it doesn't belong in the area that is reset.
5
Move it after the 'end_reset_fields' for consistency.
6
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Message-id: 20230206223502.25122-7-philmd@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-13-peter.maydell@linaro.org
8
---
11
---
9
exec.c | 9 ++++++---
12
target/arm/cpu.h | 9 ++++-----
10
1 file changed, 6 insertions(+), 3 deletions(-)
13
1 file changed, 4 insertions(+), 5 deletions(-)
11
14
12
diff --git a/exec.c b/exec.c
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
17
--- a/target/arm/cpu.h
15
+++ b/exec.c
18
+++ b/target/arm/cpu.h
16
@@ -XXX,XX +XXX,XX @@ unassigned:
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
17
* @is_write: whether the translation operation is for write
20
ARMVectorReg zarray[ARM_MAX_VQ * 16];
18
* @is_mmio: whether this can be MMIO, set true if it can
21
#endif
19
* @target_as: the address space targeted by the IOMMU
22
20
+ * @attrs: memory transaction attributes
23
-#if defined(CONFIG_USER_ONLY)
21
*
24
- /* For usermode syscall translation. */
22
* This function is called from RCU critical section
25
- bool eabi;
23
*/
26
-#endif
24
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
27
-
25
hwaddr *page_mask_out,
28
struct CPUBreakpoint *cpu_breakpoint[16];
26
bool is_write,
29
struct CPUWatchpoint *cpu_watchpoint[16];
27
bool is_mmio,
30
28
- AddressSpace **target_as)
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
29
+ AddressSpace **target_as,
32
const struct arm_boot_info *boot_info;
30
+ MemTxAttrs attrs)
33
/* Store GICv3CPUState to access from this struct */
31
{
34
void *gicv3state;
32
MemoryRegionSection *section;
35
+#if defined(CONFIG_USER_ONLY)
33
IOMMUMemoryRegion *iommu_mr;
36
+ /* For usermode syscall translation. */
34
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
37
+ bool eabi;
35
* but page mask.
38
+#endif /* CONFIG_USER_ONLY */
36
*/
39
37
section = flatview_do_translate(address_space_to_flatview(as), addr, &xlat,
40
#ifdef TARGET_TAGGED_ADDRESSES
38
- NULL, &page_mask, is_write, false, &as);
41
/* Linux syscall tagged address support */
39
+ NULL, &page_mask, is_write, false, &as,
40
+ attrs);
41
42
/* Illegal translation */
43
if (section.mr == &io_mem_unassigned) {
44
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
45
46
/* This can be MMIO, so setup MMIO bit. */
47
section = flatview_do_translate(fv, addr, xlat, plen, NULL,
48
- is_write, true, &as);
49
+ is_write, true, &as, attrs);
50
mr = section.mr;
51
52
if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
53
--
42
--
54
2.17.1
43
2.34.1
55
44
56
45
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
add MemTxAttrs as an argument to address_space_get_iotlb_entry().
3
2
3
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Message-id: 20230206223502.25122-8-philmd@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-12-peter.maydell@linaro.org
8
---
7
---
9
include/exec/memory.h | 2 +-
8
target/arm/cpu.h | 3 ++-
10
exec.c | 2 +-
9
1 file changed, 2 insertions(+), 1 deletion(-)
11
hw/virtio/vhost.c | 3 ++-
12
3 files changed, 4 insertions(+), 3 deletions(-)
13
10
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
13
--- a/target/arm/cpu.h
17
+++ b/include/exec/memory.h
14
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache);
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
19
* entry. Should be called from an RCU critical section.
16
20
*/
17
void *nvic;
21
IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
18
const struct arm_boot_info *boot_info;
22
- bool is_write);
19
+#if !defined(CONFIG_USER_ONLY)
23
+ bool is_write, MemTxAttrs attrs);
20
/* Store GICv3CPUState to access from this struct */
24
21
void *gicv3state;
25
/* address_space_translate: translate an address range into an address space
22
-#if defined(CONFIG_USER_ONLY)
26
* into a MemoryRegion and an address range into that section. Should be
23
+#else /* CONFIG_USER_ONLY */
27
diff --git a/exec.c b/exec.c
24
/* For usermode syscall translation. */
28
index XXXXXXX..XXXXXXX 100644
25
bool eabi;
29
--- a/exec.c
26
#endif /* CONFIG_USER_ONLY */
30
+++ b/exec.c
31
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
32
33
/* Called from RCU critical section */
34
IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
35
- bool is_write)
36
+ bool is_write, MemTxAttrs attrs)
37
{
38
MemoryRegionSection section;
39
hwaddr xlat, page_mask;
40
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/virtio/vhost.c
43
+++ b/hw/virtio/vhost.c
44
@@ -XXX,XX +XXX,XX @@ int vhost_device_iotlb_miss(struct vhost_dev *dev, uint64_t iova, int write)
45
trace_vhost_iotlb_miss(dev, 1);
46
47
iotlb = address_space_get_iotlb_entry(dev->vdev->dma_as,
48
- iova, write);
49
+ iova, write,
50
+ MEMTXATTRS_UNSPECIFIED);
51
if (iotlb.target_as != NULL) {
52
ret = vhost_memory_region_lookup(dev, iotlb.translated_addr,
53
&uaddr, &len);
54
--
27
--
55
2.17.1
28
2.34.1
56
29
57
30
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
add MemTxAttrs as an argument to memory_region_access_valid().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
6
The callsite in flatview_access_valid() is part of a recursive
3
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
loop flatview_access_valid() -> memory_region_access_valid() ->
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
subpage_accepts() -> flatview_access_valid(); we make it pass
5
Message-id: 20230206223502.25122-9-philmd@linaro.org
9
MEMTXATTRS_UNSPECIFIED for now, until the next several commits
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
have plumbed an attrs parameter through the rest of the loop
7
---
11
and we can add an attrs parameter to flatview_access_valid().
8
target/arm/cpu.h | 2 +-
9
1 file changed, 1 insertion(+), 1 deletion(-)
12
10
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20180521140402.23318-8-peter.maydell@linaro.org
17
---
18
include/exec/memory-internal.h | 3 ++-
19
exec.c | 4 +++-
20
hw/s390x/s390-pci-inst.c | 3 ++-
21
memory.c | 7 ++++---
22
4 files changed, 11 insertions(+), 6 deletions(-)
23
24
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
25
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory-internal.h
13
--- a/target/arm/cpu.h
27
+++ b/include/exec/memory-internal.h
14
+++ b/target/arm/cpu.h
28
@@ -XXX,XX +XXX,XX @@ void flatview_unref(FlatView *view);
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
29
extern const MemoryRegionOps unassigned_mem_ops;
16
} sau;
30
17
31
bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
18
void *nvic;
32
- unsigned size, bool is_write);
19
- const struct arm_boot_info *boot_info;
33
+ unsigned size, bool is_write,
20
#if !defined(CONFIG_USER_ONLY)
34
+ MemTxAttrs attrs);
21
+ const struct arm_boot_info *boot_info;
35
22
/* Store GICv3CPUState to access from this struct */
36
void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
23
void *gicv3state;
37
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
24
#else /* CONFIG_USER_ONLY */
38
diff --git a/exec.c b/exec.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
41
+++ b/exec.c
42
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
43
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
44
if (!memory_access_is_direct(mr, is_write)) {
45
l = memory_access_size(mr, l, addr);
46
- if (!memory_region_access_valid(mr, xlat, l, is_write)) {
47
+ /* When our callers all have attrs we'll pass them through here */
48
+ if (!memory_region_access_valid(mr, xlat, l, is_write,
49
+ MEMTXATTRS_UNSPECIFIED)) {
50
return false;
51
}
52
}
53
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/s390x/s390-pci-inst.c
56
+++ b/hw/s390x/s390-pci-inst.c
57
@@ -XXX,XX +XXX,XX @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
58
mr = s390_get_subregion(mr, offset, len);
59
offset -= mr->addr;
60
61
- if (!memory_region_access_valid(mr, offset, len, true)) {
62
+ if (!memory_region_access_valid(mr, offset, len, true,
63
+ MEMTXATTRS_UNSPECIFIED)) {
64
s390_program_interrupt(env, PGM_OPERAND, 6, ra);
65
return 0;
66
}
67
diff --git a/memory.c b/memory.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/memory.c
70
+++ b/memory.c
71
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps ram_device_mem_ops = {
72
bool memory_region_access_valid(MemoryRegion *mr,
73
hwaddr addr,
74
unsigned size,
75
- bool is_write)
76
+ bool is_write,
77
+ MemTxAttrs attrs)
78
{
79
int access_size_min, access_size_max;
80
int access_size, i;
81
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
82
{
83
MemTxResult r;
84
85
- if (!memory_region_access_valid(mr, addr, size, false)) {
86
+ if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
87
*pval = unassigned_mem_read(mr, addr, size);
88
return MEMTX_DECODE_ERROR;
89
}
90
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
91
unsigned size,
92
MemTxAttrs attrs)
93
{
94
- if (!memory_region_access_valid(mr, addr, size, true)) {
95
+ if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
96
unassigned_mem_write(mr, addr, data, size);
97
return MEMTX_DECODE_ERROR;
98
}
99
--
25
--
100
2.17.1
26
2.34.1
101
27
102
28
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
add MemTxAttrs as an argument to flatview_extend_translation().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20230206223502.25122-10-philmd@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-7-peter.maydell@linaro.org
10
---
7
---
11
exec.c | 15 ++++++++++-----
8
target/arm/cpu.h | 2 +-
12
1 file changed, 10 insertions(+), 5 deletions(-)
9
1 file changed, 1 insertion(+), 1 deletion(-)
13
10
14
diff --git a/exec.c b/exec.c
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
13
--- a/target/arm/cpu.h
17
+++ b/exec.c
14
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
19
16
uint32_t ctrl;
20
static hwaddr
17
} sau;
21
flatview_extend_translation(FlatView *fv, hwaddr addr,
18
22
- hwaddr target_len,
19
- void *nvic;
23
- MemoryRegion *mr, hwaddr base, hwaddr len,
20
#if !defined(CONFIG_USER_ONLY)
24
- bool is_write)
21
+ void *nvic;
25
+ hwaddr target_len,
22
const struct arm_boot_info *boot_info;
26
+ MemoryRegion *mr, hwaddr base, hwaddr len,
23
/* Store GICv3CPUState to access from this struct */
27
+ bool is_write, MemTxAttrs attrs)
24
void *gicv3state;
28
{
29
hwaddr done = 0;
30
hwaddr xlat;
31
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
32
33
memory_region_ref(mr);
34
*plen = flatview_extend_translation(fv, addr, len, mr, xlat,
35
- l, is_write);
36
+ l, is_write, attrs);
37
ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
38
rcu_read_unlock();
39
40
@@ -XXX,XX +XXX,XX @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
41
mr = cache->mrs.mr;
42
memory_region_ref(mr);
43
if (memory_access_is_direct(mr, is_write)) {
44
+ /* We don't care about the memory attributes here as we're only
45
+ * doing this if we found actual RAM, which behaves the same
46
+ * regardless of attributes; so UNSPECIFIED is fine.
47
+ */
48
l = flatview_extend_translation(cache->fv, addr, len, mr,
49
- cache->xlat, l, is_write);
50
+ cache->xlat, l, is_write,
51
+ MEMTXATTRS_UNSPECIFIED);
52
cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
53
} else {
54
cache->ptr = NULL;
55
--
25
--
56
2.17.1
26
2.34.1
57
27
58
28
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
2
3
Its callers either have an attrs value to hand, or don't care
3
There is no point in using a void pointer to access the NVIC.
4
and can use MEMTXATTRS_UNSPECIFIED.
4
Use the real type to avoid casting it while debugging.
5
5
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230206223502.25122-11-philmd@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
10
---
10
---
11
include/exec/exec-all.h | 5 +++--
11
target/arm/cpu.h | 46 ++++++++++++++++++++++---------------------
12
accel/tcg/translate-all.c | 2 +-
12
hw/intc/armv7m_nvic.c | 38 ++++++++++++-----------------------
13
exec.c | 2 +-
13
target/arm/cpu.c | 1 +
14
target/xtensa/op_helper.c | 3 ++-
14
target/arm/m_helper.c | 2 +-
15
4 files changed, 7 insertions(+), 5 deletions(-)
15
4 files changed, 39 insertions(+), 48 deletions(-)
16
16
17
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/exec-all.h
19
--- a/target/arm/cpu.h
20
+++ b/include/exec/exec-all.h
20
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
21
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {
22
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
22
23
hwaddr paddr, int prot,
23
typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
24
int mmu_idx, target_ulong size);
24
25
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);
25
+typedef struct NVICState NVICState;
26
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
26
+
27
void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
27
typedef struct CPUArchState {
28
uintptr_t retaddr);
28
/* Regs for current mode. */
29
uint32_t regs[16];
30
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
31
} sau;
32
33
#if !defined(CONFIG_USER_ONLY)
34
- void *nvic;
35
+ NVICState *nvic;
36
const struct arm_boot_info *boot_info;
37
/* Store GICv3CPUState to access from this struct */
38
void *gicv3state;
39
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
40
41
/* Interface between CPU and Interrupt controller. */
42
#ifndef CONFIG_USER_ONLY
43
-bool armv7m_nvic_can_take_pending_exception(void *opaque);
44
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
29
#else
45
#else
30
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
46
-static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
31
uint16_t idxmap)
47
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
32
{
48
{
33
}
49
return true;
34
-static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
35
+static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
36
+ MemTxAttrs attrs)
37
{
38
}
50
}
39
#endif
51
#endif
40
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
52
/**
53
* armv7m_nvic_set_pending: mark the specified exception as pending
54
- * @opaque: the NVIC
55
+ * @s: the NVIC
56
* @irq: the exception number to mark pending
57
* @secure: false for non-banked exceptions or for the nonsecure
58
* version of a banked exception, true for the secure version of a banked
59
@@ -XXX,XX +XXX,XX @@ static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
60
* if @secure is true and @irq does not specify one of the fixed set
61
* of architecturally banked exceptions.
62
*/
63
-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
64
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
65
/**
66
* armv7m_nvic_set_pending_derived: mark this derived exception as pending
67
- * @opaque: the NVIC
68
+ * @s: the NVIC
69
* @irq: the exception number to mark pending
70
* @secure: false for non-banked exceptions or for the nonsecure
71
* version of a banked exception, true for the secure version of a banked
72
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
73
* exceptions (exceptions generated in the course of trying to take
74
* a different exception).
75
*/
76
-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
77
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
78
/**
79
* armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
80
- * @opaque: the NVIC
81
+ * @s: the NVIC
82
* @irq: the exception number to mark pending
83
* @secure: false for non-banked exceptions or for the nonsecure
84
* version of a banked exception, true for the secure version of a banked
85
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
86
* Similar to armv7m_nvic_set_pending(), but specifically for exceptions
87
* generated in the course of lazy stacking of FP registers.
88
*/
89
-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
90
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
91
/**
92
* armv7m_nvic_get_pending_irq_info: return highest priority pending
93
* exception, and whether it targets Secure state
94
- * @opaque: the NVIC
95
+ * @s: the NVIC
96
* @pirq: set to pending exception number
97
* @ptargets_secure: set to whether pending exception targets Secure
98
*
99
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
100
* to true if the current highest priority pending exception should
101
* be taken to Secure state, false for NS.
102
*/
103
-void armv7m_nvic_get_pending_irq_info(void *opaque, int *pirq,
104
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
105
bool *ptargets_secure);
106
/**
107
* armv7m_nvic_acknowledge_irq: make highest priority pending exception active
108
- * @opaque: the NVIC
109
+ * @s: the NVIC
110
*
111
* Move the current highest priority pending exception from the pending
112
* state to the active state, and update v7m.exception to indicate that
113
* it is the exception currently being handled.
114
*/
115
-void armv7m_nvic_acknowledge_irq(void *opaque);
116
+void armv7m_nvic_acknowledge_irq(NVICState *s);
117
/**
118
* armv7m_nvic_complete_irq: complete specified interrupt or exception
119
- * @opaque: the NVIC
120
+ * @s: the NVIC
121
* @irq: the exception number to complete
122
* @secure: true if this exception was secure
123
*
124
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque);
125
* 0 if there is still an irq active after this one was completed
126
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
127
*/
128
-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
129
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
130
/**
131
* armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
132
- * @opaque: the NVIC
133
+ * @s: the NVIC
134
* @irq: the exception number to mark pending
135
* @secure: false for non-banked exceptions or for the nonsecure
136
* version of a banked exception, true for the secure version of a banked
137
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
138
* interrupt the current execution priority. This controls whether the
139
* RDY bit for it in the FPCCR is set.
140
*/
141
-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure);
142
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
143
/**
144
* armv7m_nvic_raw_execution_priority: return the raw execution priority
145
- * @opaque: the NVIC
146
+ * @s: the NVIC
147
*
148
* Returns: the raw execution priority as defined by the v8M architecture.
149
* This is the execution priority minus the effects of AIRCR.PRIS,
150
* and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
151
* (v8M ARM ARM I_PKLD.)
152
*/
153
-int armv7m_nvic_raw_execution_priority(void *opaque);
154
+int armv7m_nvic_raw_execution_priority(NVICState *s);
155
/**
156
* armv7m_nvic_neg_prio_requested: return true if the requested execution
157
* priority is negative for the specified security state.
158
- * @opaque: the NVIC
159
+ * @s: the NVIC
160
* @secure: the security state to test
161
* This corresponds to the pseudocode IsReqExecPriNeg().
162
*/
163
#ifndef CONFIG_USER_ONLY
164
-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure);
165
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
166
#else
167
-static inline bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
168
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
169
{
170
return false;
171
}
172
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
41
index XXXXXXX..XXXXXXX 100644
173
index XXXXXXX..XXXXXXX 100644
42
--- a/accel/tcg/translate-all.c
174
--- a/hw/intc/armv7m_nvic.c
43
+++ b/accel/tcg/translate-all.c
175
+++ b/hw/intc/armv7m_nvic.c
44
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
176
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
45
}
177
return MIN(running, s->exception_prio);
46
178
}
179
180
-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
181
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
182
{
183
/* Return true if the requested execution priority is negative
184
* for the specified security state, ie that security state
185
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
186
* mean we don't allow FAULTMASK_NS to actually make the execution
187
* priority negative). Compare pseudocode IsReqExcPriNeg().
188
*/
189
- NVICState *s = opaque;
190
-
191
if (s->cpu->env.v7m.faultmask[secure]) {
192
return true;
193
}
194
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
195
return false;
196
}
197
198
-bool armv7m_nvic_can_take_pending_exception(void *opaque)
199
+bool armv7m_nvic_can_take_pending_exception(NVICState *s)
200
{
201
- NVICState *s = opaque;
202
-
203
return nvic_exec_prio(s) > nvic_pending_prio(s);
204
}
205
206
-int armv7m_nvic_raw_execution_priority(void *opaque)
207
+int armv7m_nvic_raw_execution_priority(NVICState *s)
208
{
209
- NVICState *s = opaque;
210
-
211
return s->exception_prio;
212
}
213
214
@@ -XXX,XX +XXX,XX @@ static void nvic_irq_update(NVICState *s)
215
* if @secure is true and @irq does not specify one of the fixed set
216
* of architecturally banked exceptions.
217
*/
218
-static void armv7m_nvic_clear_pending(void *opaque, int irq, bool secure)
219
+static void armv7m_nvic_clear_pending(NVICState *s, int irq, bool secure)
220
{
221
- NVICState *s = (NVICState *)opaque;
222
VecInfo *vec;
223
224
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
225
@@ -XXX,XX +XXX,XX @@ static void do_armv7m_nvic_set_pending(void *opaque, int irq, bool secure,
226
}
227
}
228
229
-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
230
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure)
231
{
232
- do_armv7m_nvic_set_pending(opaque, irq, secure, false);
233
+ do_armv7m_nvic_set_pending(s, irq, secure, false);
234
}
235
236
-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
237
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure)
238
{
239
- do_armv7m_nvic_set_pending(opaque, irq, secure, true);
240
+ do_armv7m_nvic_set_pending(s, irq, secure, true);
241
}
242
243
-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
244
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure)
245
{
246
/*
247
* Pend an exception during lazy FP stacking. This differs
248
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
249
* whether we should escalate depends on the saved context
250
* in the FPCCR register, not on the current state of the CPU/NVIC.
251
*/
252
- NVICState *s = (NVICState *)opaque;
253
bool banked = exc_is_banked(irq);
254
VecInfo *vec;
255
bool targets_secure;
256
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
257
}
258
259
/* Make pending IRQ active. */
260
-void armv7m_nvic_acknowledge_irq(void *opaque)
261
+void armv7m_nvic_acknowledge_irq(NVICState *s)
262
{
263
- NVICState *s = (NVICState *)opaque;
264
CPUARMState *env = &s->cpu->env;
265
const int pending = s->vectpending;
266
const int running = nvic_exec_prio(s);
267
@@ -XXX,XX +XXX,XX @@ static bool vectpending_targets_secure(NVICState *s)
268
exc_targets_secure(s, s->vectpending);
269
}
270
271
-void armv7m_nvic_get_pending_irq_info(void *opaque,
272
+void armv7m_nvic_get_pending_irq_info(NVICState *s,
273
int *pirq, bool *ptargets_secure)
274
{
275
- NVICState *s = (NVICState *)opaque;
276
const int pending = s->vectpending;
277
bool targets_secure;
278
279
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
280
*pirq = pending;
281
}
282
283
-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
284
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure)
285
{
286
- NVICState *s = (NVICState *)opaque;
287
VecInfo *vec = NULL;
288
int ret = 0;
289
290
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
291
return ret;
292
}
293
294
-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
295
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure)
296
{
297
/*
298
* Return whether an exception is "ready", i.e. it is enabled and is
299
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
300
* for non-banked exceptions secure is always false; for banked exceptions
301
* it indicates which of the exceptions is required.
302
*/
303
- NVICState *s = (NVICState *)opaque;
304
bool banked = exc_is_banked(irq);
305
VecInfo *vec;
306
int running = nvic_exec_prio(s);
307
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
308
index XXXXXXX..XXXXXXX 100644
309
--- a/target/arm/cpu.c
310
+++ b/target/arm/cpu.c
311
@@ -XXX,XX +XXX,XX @@
47
#if !defined(CONFIG_USER_ONLY)
312
#if !defined(CONFIG_USER_ONLY)
48
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
313
#include "hw/loader.h"
49
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
314
#include "hw/boards.h"
50
{
315
+#include "hw/intc/armv7m_nvic.h"
51
ram_addr_t ram_addr;
316
#endif
52
MemoryRegion *mr;
317
#include "sysemu/tcg.h"
53
diff --git a/exec.c b/exec.c
318
#include "sysemu/qtest.h"
319
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
54
index XXXXXXX..XXXXXXX 100644
320
index XXXXXXX..XXXXXXX 100644
55
--- a/exec.c
321
--- a/target/arm/m_helper.c
56
+++ b/exec.c
322
+++ b/target/arm/m_helper.c
57
@@ -XXX,XX +XXX,XX @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
323
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
58
if (phys != -1) {
324
* that we will need later in order to do lazy FP reg stacking.
59
/* Locks grabbed by tb_invalidate_phys_addr */
325
*/
60
tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
326
bool is_secure = env->v7m.secure;
61
- phys | (pc & ~TARGET_PAGE_MASK));
327
- void *nvic = env->nvic;
62
+ phys | (pc & ~TARGET_PAGE_MASK), attrs);
328
+ NVICState *nvic = env->nvic;
63
}
329
/*
64
}
330
* Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
65
#endif
331
* are banked and we want to update the bit in the bank for the
66
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/xtensa/op_helper.c
69
+++ b/target/xtensa/op_helper.c
70
@@ -XXX,XX +XXX,XX @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
71
int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
72
&paddr, &page_size, &access);
73
if (ret == 0) {
74
- tb_invalidate_phys_addr(&address_space_memory, paddr);
75
+ tb_invalidate_phys_addr(&address_space_memory, paddr,
76
+ MEMTXATTRS_UNSPECIFIED);
77
}
78
}
79
80
--
332
--
81
2.17.1
333
2.34.1
82
334
83
335
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
add MemTxAttrs as an argument to address_space_map().
2
3
Its callers either have an attrs value to hand, or don't care
3
While dozens of files include "cpu.h", only 3 files require
4
and can use MEMTXATTRS_UNSPECIFIED.
4
these NVIC helper declarations.
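To illustrate the new address_space_map() calling convention described above (a sketch based on the dma.h hunk below, not part of the patch; 'as', 'addr' and 'len' stand for the caller's address space, guest address and length), a caller with no particular attributes passes MEMTXATTRS_UNSPECIFIED explicitly:

    hwaddr maplen = len;
    void *p = address_space_map(as, addr, &maplen, false /* is_write */,
                                MEMTXATTRS_UNSPECIFIED);
    if (p) {
        /* ... read up to maplen bytes ... */
        address_space_unmap(as, p, maplen, false, 0);
    }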
5
5
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20230206223502.25122-12-philmd@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
10
---
10
---
11
include/exec/memory.h | 3 ++-
11
include/hw/intc/armv7m_nvic.h | 123 ++++++++++++++++++++++++++++++++++
12
include/sysemu/dma.h | 3 ++-
12
target/arm/cpu.h | 123 ----------------------------------
13
exec.c | 6 ++++--
13
target/arm/cpu.c | 4 +-
14
target/ppc/mmu-hash64.c | 3 ++-
14
target/arm/cpu_tcg.c | 3 +
15
4 files changed, 10 insertions(+), 5 deletions(-)
15
target/arm/m_helper.c | 3 +
16
16
5 files changed, 132 insertions(+), 124 deletions(-)
17
diff --git a/include/exec/memory.h b/include/exec/memory.h
17
18
index XXXXXXX..XXXXXXX 100644
18
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
19
--- a/include/exec/memory.h
19
index XXXXXXX..XXXXXXX 100644
20
+++ b/include/exec/memory.h
20
--- a/include/hw/intc/armv7m_nvic.h
21
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
21
+++ b/include/hw/intc/armv7m_nvic.h
22
* @addr: address within that address space
22
@@ -XXX,XX +XXX,XX @@ struct NVICState {
23
* @plen: pointer to length of buffer; updated on return
23
qemu_irq sysresetreq;
24
* @is_write: indicates the transfer direction
24
};
25
+ * @attrs: memory attributes
25
26
*/
26
+/* Interface between CPU and Interrupt controller. */
27
void *address_space_map(AddressSpace *as, hwaddr addr,
27
+/**
28
- hwaddr *plen, bool is_write);
28
+ * armv7m_nvic_set_pending: mark the specified exception as pending
29
+ hwaddr *plen, bool is_write, MemTxAttrs attrs);
29
+ * @s: the NVIC
30
30
+ * @irq: the exception number to mark pending
31
/* address_space_unmap: Unmaps a memory region previously mapped by address_space_map()
31
+ * @secure: false for non-banked exceptions or for the nonsecure
32
*
32
+ * version of a banked exception, true for the secure version of a banked
33
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
33
+ * exception.
34
index XXXXXXX..XXXXXXX 100644
34
+ *
35
--- a/include/sysemu/dma.h
35
+ * Marks the specified exception as pending. Note that we will assert()
36
+++ b/include/sysemu/dma.h
36
+ * if @secure is true and @irq does not specify one of the fixed set
37
@@ -XXX,XX +XXX,XX @@ static inline void *dma_memory_map(AddressSpace *as,
37
+ * of architecturally banked exceptions.
38
hwaddr xlen = *len;
38
+ */
39
void *p;
39
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
40
40
+/**
41
- p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE);
41
+ * armv7m_nvic_set_pending_derived: mark this derived exception as pending
42
+ p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE,
42
+ * @s: the NVIC
43
+ MEMTXATTRS_UNSPECIFIED);
43
+ * @irq: the exception number to mark pending
44
*len = xlen;
44
+ * @secure: false for non-banked exceptions or for the nonsecure
45
return p;
45
+ * version of a banked exception, true for the secure version of a banked
46
}
46
+ * exception.
47
diff --git a/exec.c b/exec.c
47
+ *
48
index XXXXXXX..XXXXXXX 100644
48
+ * Similar to armv7m_nvic_set_pending(), but specifically for derived
49
--- a/exec.c
49
+ * exceptions (exceptions generated in the course of trying to take
50
+++ b/exec.c
50
+ * a different exception).
51
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
51
+ */
52
void *address_space_map(AddressSpace *as,
52
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
53
hwaddr addr,
53
+/**
54
hwaddr *plen,
54
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
55
- bool is_write)
55
+ * @s: the NVIC
56
+ bool is_write,
56
+ * @irq: the exception number to mark pending
57
+ MemTxAttrs attrs)
57
+ * @secure: false for non-banked exceptions or for the nonsecure
58
{
58
+ * version of a banked exception, true for the secure version of a banked
59
hwaddr len = *plen;
59
+ * exception.
60
hwaddr l, xlat;
60
+ *
61
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
61
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
62
hwaddr *plen,
62
+ * generated in the course of lazy stacking of FP registers.
63
int is_write)
63
+ */
64
{
64
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
65
- return address_space_map(&address_space_memory, addr, plen, is_write);
65
+/**
66
+ return address_space_map(&address_space_memory, addr, plen, is_write,
66
+ * armv7m_nvic_get_pending_irq_info: return highest priority pending
67
+ MEMTXATTRS_UNSPECIFIED);
67
+ * exception, and whether it targets Secure state
68
}
68
+ * @s: the NVIC
69
69
+ * @pirq: set to pending exception number
70
void cpu_physical_memory_unmap(void *buffer, hwaddr len,
70
+ * @ptargets_secure: set to whether pending exception targets Secure
71
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
71
+ *
72
index XXXXXXX..XXXXXXX 100644
72
+ * This function writes the number of the highest priority pending
73
--- a/target/ppc/mmu-hash64.c
73
+ * exception (the one which would be made active by
74
+++ b/target/ppc/mmu-hash64.c
74
+ * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
75
@@ -XXX,XX +XXX,XX @@ const ppc_hash_pte64_t *ppc_hash64_map_hptes(PowerPCCPU *cpu,
75
+ * to true if the current highest priority pending exception should
76
return NULL;
76
+ * be taken to Secure state, false for NS.
77
}
77
+ */
78
78
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
79
- hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false);
79
+ bool *ptargets_secure);
80
+ hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false,
80
+/**
81
+ MEMTXATTRS_UNSPECIFIED);
81
+ * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
82
if (plen < (n * HASH_PTE_SIZE_64)) {
82
+ * @s: the NVIC
83
hw_error("%s: Unable to map all requested HPTEs\n", __func__);
83
+ *
84
}
84
+ * Move the current highest priority pending exception from the pending
85
+ * state to the active state, and update v7m.exception to indicate that
86
+ * it is the exception currently being handled.
87
+ */
88
+void armv7m_nvic_acknowledge_irq(NVICState *s);
89
+/**
90
+ * armv7m_nvic_complete_irq: complete specified interrupt or exception
91
+ * @s: the NVIC
92
+ * @irq: the exception number to complete
93
+ * @secure: true if this exception was secure
94
+ *
95
+ * Returns: -1 if the irq was not active
96
+ * 1 if completing this irq brought us back to base (no active irqs)
97
+ * 0 if there is still an irq active after this one was completed
98
+ * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
99
+ */
100
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
101
+/**
102
+ * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
103
+ * @s: the NVIC
104
+ * @irq: the exception number to mark pending
105
+ * @secure: false for non-banked exceptions or for the nonsecure
106
+ * version of a banked exception, true for the secure version of a banked
107
+ * exception.
108
+ *
109
+ * Return whether an exception is "ready", i.e. whether the exception is
110
+ * enabled and is configured at a priority which would allow it to
111
+ * interrupt the current execution priority. This controls whether the
112
+ * RDY bit for it in the FPCCR is set.
113
+ */
114
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
115
+/**
116
+ * armv7m_nvic_raw_execution_priority: return the raw execution priority
117
+ * @s: the NVIC
118
+ *
119
+ * Returns: the raw execution priority as defined by the v8M architecture.
120
+ * This is the execution priority minus the effects of AIRCR.PRIS,
121
+ * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
122
+ * (v8M ARM ARM I_PKLD.)
123
+ */
124
+int armv7m_nvic_raw_execution_priority(NVICState *s);
125
+/**
126
+ * armv7m_nvic_neg_prio_requested: return true if the requested execution
127
+ * priority is negative for the specified security state.
128
+ * @s: the NVIC
129
+ * @secure: the security state to test
130
+ * This corresponds to the pseudocode IsReqExecPriNeg().
131
+ */
132
+#ifndef CONFIG_USER_ONLY
133
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
134
+#else
135
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
136
+{
137
+ return false;
138
+}
139
+#endif
140
+#ifndef CONFIG_USER_ONLY
141
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
142
+#else
143
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
144
+{
145
+ return true;
146
+}
147
+#endif
148
+
149
#endif
150
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
151
index XXXXXXX..XXXXXXX 100644
152
--- a/target/arm/cpu.h
153
+++ b/target/arm/cpu.h
154
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
155
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
156
uint32_t cur_el, bool secure);
157
158
-/* Interface between CPU and Interrupt controller. */
159
-#ifndef CONFIG_USER_ONLY
160
-bool armv7m_nvic_can_take_pending_exception(NVICState *s);
161
-#else
162
-static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
163
-{
164
- return true;
165
-}
166
-#endif
167
-/**
168
- * armv7m_nvic_set_pending: mark the specified exception as pending
169
- * @s: the NVIC
170
- * @irq: the exception number to mark pending
171
- * @secure: false for non-banked exceptions or for the nonsecure
172
- * version of a banked exception, true for the secure version of a banked
173
- * exception.
174
- *
175
- * Marks the specified exception as pending. Note that we will assert()
176
- * if @secure is true and @irq does not specify one of the fixed set
177
- * of architecturally banked exceptions.
178
- */
179
-void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
180
-/**
181
- * armv7m_nvic_set_pending_derived: mark this derived exception as pending
182
- * @s: the NVIC
183
- * @irq: the exception number to mark pending
184
- * @secure: false for non-banked exceptions or for the nonsecure
185
- * version of a banked exception, true for the secure version of a banked
186
- * exception.
187
- *
188
- * Similar to armv7m_nvic_set_pending(), but specifically for derived
189
- * exceptions (exceptions generated in the course of trying to take
190
- * a different exception).
191
- */
192
-void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
193
-/**
194
- * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
195
- * @s: the NVIC
196
- * @irq: the exception number to mark pending
197
- * @secure: false for non-banked exceptions or for the nonsecure
198
- * version of a banked exception, true for the secure version of a banked
199
- * exception.
200
- *
201
- * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
202
- * generated in the course of lazy stacking of FP registers.
203
- */
204
-void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
205
-/**
206
- * armv7m_nvic_get_pending_irq_info: return highest priority pending
207
- * exception, and whether it targets Secure state
208
- * @s: the NVIC
209
- * @pirq: set to pending exception number
210
- * @ptargets_secure: set to whether pending exception targets Secure
211
- *
212
- * This function writes the number of the highest priority pending
213
- * exception (the one which would be made active by
214
- * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
215
- * to true if the current highest priority pending exception should
216
- * be taken to Secure state, false for NS.
217
- */
218
-void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
219
- bool *ptargets_secure);
220
-/**
221
- * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
222
- * @s: the NVIC
223
- *
224
- * Move the current highest priority pending exception from the pending
225
- * state to the active state, and update v7m.exception to indicate that
226
- * it is the exception currently being handled.
227
- */
228
-void armv7m_nvic_acknowledge_irq(NVICState *s);
229
-/**
230
- * armv7m_nvic_complete_irq: complete specified interrupt or exception
231
- * @s: the NVIC
232
- * @irq: the exception number to complete
233
- * @secure: true if this exception was secure
234
- *
235
- * Returns: -1 if the irq was not active
236
- * 1 if completing this irq brought us back to base (no active irqs)
237
- * 0 if there is still an irq active after this one was completed
238
- * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
239
- */
240
-int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
241
-/**
242
- * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
243
- * @s: the NVIC
244
- * @irq: the exception number to mark pending
245
- * @secure: false for non-banked exceptions or for the nonsecure
246
- * version of a banked exception, true for the secure version of a banked
247
- * exception.
248
- *
249
- * Return whether an exception is "ready", i.e. whether the exception is
250
- * enabled and is configured at a priority which would allow it to
251
- * interrupt the current execution priority. This controls whether the
252
- * RDY bit for it in the FPCCR is set.
253
- */
254
-bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
255
-/**
256
- * armv7m_nvic_raw_execution_priority: return the raw execution priority
257
- * @s: the NVIC
258
- *
259
- * Returns: the raw execution priority as defined by the v8M architecture.
260
- * This is the execution priority minus the effects of AIRCR.PRIS,
261
- * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
262
- * (v8M ARM ARM I_PKLD.)
263
- */
264
-int armv7m_nvic_raw_execution_priority(NVICState *s);
265
-/**
266
- * armv7m_nvic_neg_prio_requested: return true if the requested execution
267
- * priority is negative for the specified security state.
268
- * @s: the NVIC
269
- * @secure: the security state to test
270
- * This corresponds to the pseudocode IsReqExecPriNeg().
271
- */
272
-#ifndef CONFIG_USER_ONLY
273
-bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
274
-#else
275
-static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
276
-{
277
- return false;
278
-}
279
-#endif
280
-
281
/* Interface for defining coprocessor registers.
282
* Registers are defined in tables of arm_cp_reginfo structs
283
* which are passed to define_arm_cp_regs().
284
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
285
index XXXXXXX..XXXXXXX 100644
286
--- a/target/arm/cpu.c
287
+++ b/target/arm/cpu.c
288
@@ -XXX,XX +XXX,XX @@
289
#if !defined(CONFIG_USER_ONLY)
290
#include "hw/loader.h"
291
#include "hw/boards.h"
292
+#ifdef CONFIG_TCG
293
#include "hw/intc/armv7m_nvic.h"
294
-#endif
295
+#endif /* CONFIG_TCG */
296
+#endif /* !CONFIG_USER_ONLY */
297
#include "sysemu/tcg.h"
298
#include "sysemu/qtest.h"
299
#include "sysemu/hw_accel.h"
300
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
301
index XXXXXXX..XXXXXXX 100644
302
--- a/target/arm/cpu_tcg.c
303
+++ b/target/arm/cpu_tcg.c
304
@@ -XXX,XX +XXX,XX @@
305
#include "hw/boards.h"
306
#endif
307
#include "cpregs.h"
308
+#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_TCG)
309
+#include "hw/intc/armv7m_nvic.h"
310
+#endif
311
312
313
/* Share AArch32 -cpu max features with AArch64. */
314
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
315
index XXXXXXX..XXXXXXX 100644
316
--- a/target/arm/m_helper.c
317
+++ b/target/arm/m_helper.c
318
@@ -XXX,XX +XXX,XX @@
319
#include "exec/cpu_ldst.h"
320
#include "semihosting/common-semi.h"
321
#endif
322
+#if !defined(CONFIG_USER_ONLY)
323
+#include "hw/intc/armv7m_nvic.h"
324
+#endif
325
326
static void v7m_msr_xpsr(CPUARMState *env, uint32_t mask,
327
uint32_t reg, uint32_t val)
85
--
328
--
86
2.17.1
329
2.34.1
87
330
88
331
diff view generated by jsdifflib
New patch
1
1
From: Alex Bennée <alex.bennee@linaro.org>
2
3
The two TCG tests for GICv2 and GICv3 boot very heavyweight distros
4
that take a long time to come up, especially for an --enable-debug
5
build. The total code coverage they give is:
6
7
Overall coverage rate:
8
lines......: 11.2% (59584 of 530123 lines)
9
functions..: 15.0% (7436 of 49443 functions)
10
branches...: 6.3% (19273 of 303933 branches)
11
12
We already get pretty close to that with the machine_aarch64_virt
13
tests, which only do one full boot (~120s vs ~600s) of Alpine. We
14
expand the kernel+initrd boot (~8s) to test both GICs and also add an
15
RNG device and a block device to generate a few IRQs and exercise the
16
storage layer. With that we get to a coverage of:
17
18
Overall coverage rate:
19
lines......: 11.0% (58121 of 530123 lines)
20
functions..: 14.9% (7343 of 49443 functions)
21
branches...: 6.0% (18269 of 303933 branches)
22
23
which I feel is close enough given the massive time saving. If we want
24
to target any more sub-systems we can use lighter-weight, more directed
25
tests.
26
27
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
28
Reviewed-by: Fabiano Rosas <farosas@suse.de>
29
Acked-by: Richard Henderson <richard.henderson@linaro.org>
30
Message-id: 20230203181632.2919715-1-alex.bennee@linaro.org
31
Cc: Peter Maydell <peter.maydell@linaro.org>
32
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
33
---
34
tests/avocado/boot_linux.py | 48 ++++----------------
35
tests/avocado/machine_aarch64_virt.py | 63 ++++++++++++++++++++++++---
36
2 files changed, 65 insertions(+), 46 deletions(-)
37
38
diff --git a/tests/avocado/boot_linux.py b/tests/avocado/boot_linux.py
39
index XXXXXXX..XXXXXXX 100644
40
--- a/tests/avocado/boot_linux.py
41
+++ b/tests/avocado/boot_linux.py
42
@@ -XXX,XX +XXX,XX @@ def test_pc_q35_kvm(self):
43
self.launch_and_wait(set_up_ssh_connection=False)
44
45
46
-# For Aarch64 we only boot KVM tests in CI as the TCG tests are very
47
-# heavyweight. There are lighter weight distros which we use in the
48
-# machine_aarch64_virt.py tests.
49
+# For Aarch64 we only boot KVM tests in CI as booting the current
50
+# Fedora OS in TCG tests is very heavyweight. There are lighter weight
51
+# distros which we use in the machine_aarch64_virt.py tests.
52
class BootLinuxAarch64(LinuxTest):
53
"""
54
:avocado: tags=arch:aarch64
55
:avocado: tags=machine:virt
56
- :avocado: tags=machine:gic-version=2
57
"""
58
timeout = 720
59
60
- def add_common_args(self):
61
- self.vm.add_args('-bios',
62
- os.path.join(BUILD_DIR, 'pc-bios',
63
- 'edk2-aarch64-code.fd'))
64
- self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
65
- self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
66
-
67
- @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
68
- def test_fedora_cloud_tcg_gicv2(self):
69
- """
70
- :avocado: tags=accel:tcg
71
- :avocado: tags=cpu:max
72
- :avocado: tags=device:gicv2
73
- """
74
- self.require_accelerator("tcg")
75
- self.vm.add_args("-accel", "tcg")
76
- self.vm.add_args("-cpu", "max,lpa2=off")
77
- self.vm.add_args("-machine", "virt,gic-version=2")
78
- self.add_common_args()
79
- self.launch_and_wait(set_up_ssh_connection=False)
80
-
81
- @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
82
- def test_fedora_cloud_tcg_gicv3(self):
83
- """
84
- :avocado: tags=accel:tcg
85
- :avocado: tags=cpu:max
86
- :avocado: tags=device:gicv3
87
- """
88
- self.require_accelerator("tcg")
89
- self.vm.add_args("-accel", "tcg")
90
- self.vm.add_args("-cpu", "max,lpa2=off")
91
- self.vm.add_args("-machine", "virt,gic-version=3")
92
- self.add_common_args()
93
- self.launch_and_wait(set_up_ssh_connection=False)
94
-
95
def test_virt_kvm(self):
96
"""
97
:avocado: tags=accel:kvm
98
@@ -XXX,XX +XXX,XX @@ def test_virt_kvm(self):
99
self.require_accelerator("kvm")
100
self.vm.add_args("-accel", "kvm")
101
self.vm.add_args("-machine", "virt,gic-version=host")
102
- self.add_common_args()
103
+ self.vm.add_args('-bios',
104
+ os.path.join(BUILD_DIR, 'pc-bios',
105
+ 'edk2-aarch64-code.fd'))
106
+ self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
107
+ self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
108
self.launch_and_wait(set_up_ssh_connection=False)
109
110
111
diff --git a/tests/avocado/machine_aarch64_virt.py b/tests/avocado/machine_aarch64_virt.py
112
index XXXXXXX..XXXXXXX 100644
113
--- a/tests/avocado/machine_aarch64_virt.py
114
+++ b/tests/avocado/machine_aarch64_virt.py
115
@@ -XXX,XX +XXX,XX @@
116
117
import time
118
import os
119
+import logging
120
121
from avocado_qemu import QemuSystemTest
122
from avocado_qemu import wait_for_console_pattern
123
from avocado_qemu import exec_command
124
from avocado_qemu import BUILD_DIR
125
+from avocado.utils import process
126
+from avocado.utils.path import find_command
127
128
class Aarch64VirtMachine(QemuSystemTest):
129
KERNEL_COMMON_COMMAND_LINE = 'printk.time=0 '
130
@@ -XXX,XX +XXX,XX @@ def test_alpine_virt_tcg_gic_max(self):
131
self.wait_for_console_pattern('Welcome to Alpine Linux 3.16')
132
133
134
- def test_aarch64_virt(self):
135
+ def common_aarch64_virt(self, machine):
136
"""
137
- :avocado: tags=arch:aarch64
138
- :avocado: tags=machine:virt
139
- :avocado: tags=accel:tcg
140
- :avocado: tags=cpu:max
141
+ Common code to launch basic virt machine with kernel+initrd
142
+ and a scratch disk.
143
"""
144
+ logger = logging.getLogger('aarch64_virt')
145
+
146
kernel_url = ('https://fileserver.linaro.org/s/'
147
'z6B2ARM7DQT3HWN/download')
148
-
149
kernel_hash = 'ed11daab50c151dde0e1e9c9cb8b2d9bd3215347'
150
kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
151
152
@@ -XXX,XX +XXX,XX @@ def test_aarch64_virt(self):
153
'console=ttyAMA0')
154
self.require_accelerator("tcg")
155
self.vm.add_args('-cpu', 'max,pauth-impdef=on',
156
+ '-machine', machine,
157
'-accel', 'tcg',
158
'-kernel', kernel_path,
159
'-append', kernel_command_line)
160
+
161
+ # A RNG offers an easy way to generate a few IRQs
162
+ self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
163
+ self.vm.add_args('-object',
164
+ 'rng-random,id=rng0,filename=/dev/urandom')
165
+
166
+ # Also add a scratch block device
167
+ logger.info('creating scratch qcow2 image')
168
+ image_path = os.path.join(self.workdir, 'scratch.qcow2')
169
+ qemu_img = os.path.join(BUILD_DIR, 'qemu-img')
170
+ if not os.path.exists(qemu_img):
171
+ qemu_img = find_command('qemu-img', False)
172
+ if qemu_img is False:
173
+ self.cancel('Could not find "qemu-img", which is required to '
174
+ 'create the temporary qcow2 image')
175
+ cmd = '%s create -f qcow2 %s 8M' % (qemu_img, image_path)
176
+ process.run(cmd)
177
+
178
+ # Add the device
179
+ self.vm.add_args('-blockdev',
180
+ f"driver=qcow2,file.driver=file,file.filename={image_path},node-name=scratch")
181
+ self.vm.add_args('-device',
182
+ 'virtio-blk-device,drive=scratch')
183
+
184
self.vm.launch()
185
self.wait_for_console_pattern('Welcome to Buildroot')
186
time.sleep(0.1)
187
exec_command(self, 'root')
188
time.sleep(0.1)
189
+ exec_command(self, 'dd if=/dev/hwrng of=/dev/vda bs=512 count=4')
190
+ time.sleep(0.1)
191
+ exec_command(self, 'md5sum /dev/vda')
192
+ time.sleep(0.1)
193
+ exec_command(self, 'cat /proc/interrupts')
194
+ time.sleep(0.1)
195
exec_command(self, 'cat /proc/self/maps')
196
time.sleep(0.1)
197
+
198
+ def test_aarch64_virt_gicv3(self):
199
+ """
200
+ :avocado: tags=arch:aarch64
201
+ :avocado: tags=machine:virt
202
+ :avocado: tags=accel:tcg
203
+ :avocado: tags=cpu:max
204
+ """
205
+ self.common_aarch64_virt("virt,gic_version=3")
206
+
207
+ def test_aarch64_virt_gicv2(self):
208
+ """
209
+ :avocado: tags=arch:aarch64
210
+ :avocado: tags=machine:virt
211
+ :avocado: tags=accel:tcg
212
+ :avocado: tags=cpu:max
213
+ """
214
+ self.common_aarch64_virt("virt,gic-version=2")
215
--
216
2.34.1
217
218
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Mostafa Saleh <smostafa@google.com>
2
add MemTxAttrs as an argument to flatview_translate(); all its
3
callers now have attrs available.
4
2
3
The GBPA register can be used to globally abort all
4
transactions.
5
6
It is described in the SMMU manual in "6.3.14 SMMU_GBPA".
7
The ABORT reset value is IMPLEMENTATION DEFINED; it is chosen to
8
be zero (do not abort incoming transactions).
9
10
Other fields have default values of Use Incoming.
11
12
If UPDATE is not set, the write is ignored. This is the only permitted
13
behavior in SMMUv3.2 and later (6.3.14.1 Update procedure).
14
15
As this patch adds new state to the SMMU (GBPA), it is added
16
in a new subsection for forward migration compatibility.
17
GBPA is only migrated if its value is different from the reset value.
18
This keeps migration backward compatible if software didn't write
19
the register.
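As a rough sketch of the update rule described above (it mirrors the smmu_writel() hunk below; bit positions are from the smmuv3-internal.h hunk):

    /* Writes without UPDATE (bit 31) are ignored; otherwise the write is
     * applied synchronously, so UPDATE itself is never stored. For example
     * 0x80100000 sets ABORT (bit 20), while 0x00100000 is silently dropped.
     */
    if (data & R_GBPA_UPDATE_MASK) {
        s->gbpa = data & ~R_GBPA_UPDATE_MASK;
    }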
20
21
Signed-off-by: Mostafa Saleh <smostafa@google.com>
22
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Reviewed-by: Eric Auger <eric.auger@redhat.com>
24
Message-id: 20230214094009.2445653-1-smostafa@google.com
25
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
9
---
27
---
10
include/exec/memory.h | 7 ++++---
28
hw/arm/smmuv3-internal.h | 7 +++++++
11
exec.c | 17 +++++++++--------
29
include/hw/arm/smmuv3.h | 1 +
12
2 files changed, 13 insertions(+), 11 deletions(-)
30
hw/arm/smmuv3.c | 43 +++++++++++++++++++++++++++++++++++++++-
31
3 files changed, 50 insertions(+), 1 deletion(-)
13
32
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
33
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
15
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
35
--- a/hw/arm/smmuv3-internal.h
17
+++ b/include/exec/memory.h
36
+++ b/hw/arm/smmuv3-internal.h
18
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
37
@@ -XXX,XX +XXX,XX @@ REG32(CR0ACK, 0x24)
19
*/
38
REG32(CR1, 0x28)
20
MemoryRegion *flatview_translate(FlatView *fv,
39
REG32(CR2, 0x2c)
21
hwaddr addr, hwaddr *xlat,
40
REG32(STATUSR, 0x40)
22
- hwaddr *len, bool is_write);
41
+REG32(GBPA, 0x44)
23
+ hwaddr *len, bool is_write,
42
+ FIELD(GBPA, ABORT, 20, 1)
24
+ MemTxAttrs attrs);
43
+ FIELD(GBPA, UPDATE, 31, 1)
25
44
+
26
static inline MemoryRegion *address_space_translate(AddressSpace *as,
45
+/* Use incoming. */
27
hwaddr addr, hwaddr *xlat,
46
+#define SMMU_GBPA_RESET_VAL 0x1000
28
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
47
+
29
MemTxAttrs attrs)
48
REG32(IRQ_CTRL, 0x50)
30
{
49
FIELD(IRQ_CTRL, GERROR_IRQEN, 0, 1)
31
return flatview_translate(address_space_to_flatview(as),
50
FIELD(IRQ_CTRL, PRI_IRQEN, 1, 1)
32
- addr, xlat, len, is_write);
51
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
33
+ addr, xlat, len, is_write, attrs);
52
index XXXXXXX..XXXXXXX 100644
53
--- a/include/hw/arm/smmuv3.h
54
+++ b/include/hw/arm/smmuv3.h
55
@@ -XXX,XX +XXX,XX @@ struct SMMUv3State {
56
uint32_t cr[3];
57
uint32_t cr0ack;
58
uint32_t statusr;
59
+ uint32_t gbpa;
60
uint32_t irq_ctrl;
61
uint32_t gerror;
62
uint32_t gerrorn;
63
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/arm/smmuv3.c
66
+++ b/hw/arm/smmuv3.c
67
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
68
s->gerror = 0;
69
s->gerrorn = 0;
70
s->statusr = 0;
71
+ s->gbpa = SMMU_GBPA_RESET_VAL;
34
}
72
}
35
73
36
/* address_space_access_valid: check for validity of accessing an address
74
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
37
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
75
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
38
rcu_read_lock();
76
qemu_mutex_lock(&s->mutex);
39
fv = address_space_to_flatview(as);
77
40
l = len;
78
if (!smmu_enabled(s)) {
41
- mr = flatview_translate(fv, addr, &addr1, &l, false);
79
- status = SMMU_TRANS_DISABLE;
42
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
80
+ if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
43
if (len == l && memory_access_is_direct(mr, false)) {
81
+ status = SMMU_TRANS_ABORT;
44
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
82
+ } else {
45
memcpy(buf, ptr, len);
83
+ status = SMMU_TRANS_DISABLE;
46
diff --git a/exec.c b/exec.c
84
+ }
47
index XXXXXXX..XXXXXXX 100644
85
goto epilogue;
48
--- a/exec.c
49
+++ b/exec.c
50
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
51
52
/* Called from RCU critical section */
53
MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
54
- hwaddr *plen, bool is_write)
55
+ hwaddr *plen, bool is_write,
56
+ MemTxAttrs attrs)
57
{
58
MemoryRegion *mr;
59
MemoryRegionSection section;
60
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
61
}
62
63
l = len;
64
- mr = flatview_translate(fv, addr, &addr1, &l, true);
65
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
66
}
86
}
67
87
68
return result;
88
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
89
case A_GERROR_IRQ_CFG2:
70
MemTxResult result = MEMTX_OK;
90
s->gerror_irq_cfg2 = data;
71
91
return MEMTX_OK;
72
l = len;
92
+ case A_GBPA:
73
- mr = flatview_translate(fv, addr, &addr1, &l, true);
93
+ /*
74
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
94
+ * If UPDATE is not set, the write is ignored. This is the only
75
result = flatview_write_continue(fv, addr, attrs, buf, len,
95
+ * permitted behavior in SMMUv3.2 and later.
76
addr1, l, mr);
96
+ */
77
97
+ if (data & R_GBPA_UPDATE_MASK) {
78
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
98
+ /* Ignore update bit as write is synchronous. */
79
}
99
+ s->gbpa = data & ~R_GBPA_UPDATE_MASK;
80
100
+ }
81
l = len;
101
+ return MEMTX_OK;
82
- mr = flatview_translate(fv, addr, &addr1, &l, false);
102
case A_STRTAB_BASE: /* 64b */
83
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
103
s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
84
}
104
return MEMTX_OK;
85
105
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
86
return result;
106
case A_STATUSR:
87
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
107
*data = s->statusr;
88
MemoryRegion *mr;
108
return MEMTX_OK;
89
109
+ case A_GBPA:
90
l = len;
110
+ *data = s->gbpa;
91
- mr = flatview_translate(fv, addr, &addr1, &l, false);
111
+ return MEMTX_OK;
92
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
112
case A_IRQ_CTRL:
93
return flatview_read_continue(fv, addr, attrs, buf, len,
113
case A_IRQ_CTRL_ACK:
94
addr1, l, mr);
114
*data = s->irq_ctrl;
95
}
115
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3_queue = {
96
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
116
},
97
117
};
98
while (len > 0) {
118
99
l = len;
119
+static bool smmuv3_gbpa_needed(void *opaque)
100
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
120
+{
101
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
121
+ SMMUv3State *s = opaque;
102
if (!memory_access_is_direct(mr, is_write)) {
122
+
103
l = memory_access_size(mr, l, addr);
123
+ /* Only migrate GBPA if it has different reset value. */
104
if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
124
+ return s->gbpa != SMMU_GBPA_RESET_VAL;
105
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
125
+}
106
126
+
107
len = target_len;
127
+static const VMStateDescription vmstate_gbpa = {
108
this_mr = flatview_translate(fv, addr, &xlat,
128
+ .name = "smmuv3/gbpa",
109
- &len, is_write);
129
+ .version_id = 1,
110
+ &len, is_write, attrs);
130
+ .minimum_version_id = 1,
111
if (this_mr != mr || xlat != base + done) {
131
+ .needed = smmuv3_gbpa_needed,
112
return done;
132
+ .fields = (VMStateField[]) {
113
}
133
+ VMSTATE_UINT32(gbpa, SMMUv3State),
114
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
134
+ VMSTATE_END_OF_LIST()
115
l = len;
135
+ }
116
rcu_read_lock();
136
+};
117
fv = address_space_to_flatview(as);
137
+
118
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
138
static const VMStateDescription vmstate_smmuv3 = {
119
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
139
.name = "smmuv3",
120
140
.version_id = 1,
121
if (!memory_access_is_direct(mr, is_write)) {
141
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3 = {
122
if (atomic_xchg(&bounce.in_use, true)) {
142
143
VMSTATE_END_OF_LIST(),
144
},
145
+ .subsections = (const VMStateDescription * []) {
146
+ &vmstate_gbpa,
147
+ NULL
148
+ }
149
};
150
151
static void smmuv3_instance_init(Object *obj)
123
--
152
--
124
2.17.1
153
2.34.1
125
126
diff view generated by jsdifflib
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
The code forgot to increase clroffset during the loop, so it only clears the
3
Since commit acc0b8b05a, when running the ZynqMP ZCU102 board with
4
first 4 bytes.
4
a QEMU configured using --without-default-devices, we get:
5
5
6
Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
6
$ qemu-system-aarch64 -M xlnx-zcu102
7
Cc: qemu-stable@nongnu.org
7
qemu-system-aarch64: missing object type 'usb_dwc3'
8
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
8
Abort trap: 6
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
10
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
10
Fix by adding the missing Kconfig dependency.
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
12
Fixes: acc0b8b05a ("hw/arm/xlnx-zynqmp: Connect ZynqMP's USB controllers")
13
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
14
Message-id: 20230216092327.2203-1-philmd@linaro.org
15
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
17
---
14
hw/intc/arm_gicv3_kvm.c | 1 +
18
hw/arm/Kconfig | 1 +
15
1 file changed, 1 insertion(+)
19
1 file changed, 1 insertion(+)
16
20
17
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
21
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
18
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gicv3_kvm.c
23
--- a/hw/arm/Kconfig
20
+++ b/hw/intc/arm_gicv3_kvm.c
24
+++ b/hw/arm/Kconfig
21
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
25
@@ -XXX,XX +XXX,XX @@ config XLNX_ZYNQMP_ARM
22
if (clroffset != 0) {
26
select XLNX_CSU_DMA
23
reg = 0;
27
select XLNX_ZYNQMP
24
kvm_gicd_access(s, clroffset, &reg, true);
28
select XLNX_ZDMA
25
+ clroffset += 4;
29
+ select USB_DWC3
26
}
30
27
reg = *gic_bmp_ptr32(bmp, irq);
31
config XLNX_VERSAL
28
kvm_gicd_access(s, offset, &reg, true);
32
bool
29
--
33
--
30
2.17.1
34
2.34.1
31
35
32
36
diff view generated by jsdifflib
1
The FRECPX instructions should (like most other floating point operations)
1
From: Cornelia Huck <cohuck@redhat.com>
2
honour the FPCR.FZ bit which specifies whether input denormals should
3
be flushed to zero (or FZ16 for the half-precision version).
4
We forgot to implement this, which doesn't affect the results (since
5
the calculation doesn't actually care about the mantissa bits) but did
6
mean we were failing to set the FPSR.IDC bit.
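Conceptually the fix is just the squash call shown in the diff below (sketch for the half-precision case):

    /* Honour FPCR.FZ/FZ16 before looking at the operand: squashing a
     * denormal input to zero raises float_flag_input_denormal in fpst,
     * which is what eventually shows up as FPSR.IDC. */
    a = float16_squash_input_denormal(a, fpst);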
7
2
3
Just use current_accel_name() directly.
4
5
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
6
Reviewed-by: Eric Auger <eric.auger@redhat.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180521172712.19930-1-peter.maydell@linaro.org
11
---
9
---
12
target/arm/helper-a64.c | 6 ++++++
10
hw/arm/virt.c | 6 +++---
13
1 file changed, 6 insertions(+)
11
1 file changed, 3 insertions(+), 3 deletions(-)
14
12
15
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
13
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-a64.c
15
--- a/hw/arm/virt.c
18
+++ b/target/arm/helper-a64.c
16
+++ b/hw/arm/virt.c
19
@@ -XXX,XX +XXX,XX @@ float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
17
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
20
return nan;
18
if (vms->secure && (kvm_enabled() || hvf_enabled())) {
19
error_report("mach-virt: %s does not support providing "
20
"Security extensions (TrustZone) to the guest CPU",
21
- kvm_enabled() ? "KVM" : "HVF");
22
+ current_accel_name());
23
exit(1);
21
}
24
}
22
25
23
+ a = float16_squash_input_denormal(a, fpst);
26
if (vms->virt && (kvm_enabled() || hvf_enabled())) {
24
+
27
error_report("mach-virt: %s does not support providing "
25
val16 = float16_val(a);
28
"Virtualization extensions to the guest CPU",
26
sbit = 0x8000 & val16;
29
- kvm_enabled() ? "KVM" : "HVF");
27
exp = extract32(val16, 10, 5);
30
+ current_accel_name());
28
@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
31
exit(1);
29
return nan;
30
}
32
}
31
33
32
+ a = float32_squash_input_denormal(a, fpst);
34
if (vms->mte && (kvm_enabled() || hvf_enabled())) {
33
+
35
error_report("mach-virt: %s does not support providing "
34
val32 = float32_val(a);
36
"MTE to the guest CPU",
35
sbit = 0x80000000ULL & val32;
37
- kvm_enabled() ? "KVM" : "HVF");
36
exp = extract32(val32, 23, 8);
38
+ current_accel_name());
37
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
39
exit(1);
38
return nan;
39
}
40
}
40
41
41
+ a = float64_squash_input_denormal(a, fpst);
42
+
43
val64 = float64_val(a);
44
sbit = 0x8000000000000000ULL & val64;
45
exp = extract64(float64_val(a), 52, 11);
46
--
42
--
47
2.17.1
43
2.34.1
48
49
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Hao Wu <wuhaotsh@google.com>
2
add MemTxAttrs as an argument to address_space_translate()
3
and address_space_translate_cached(). Callers either have an
4
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
Havard has not been working on the Nuvoton systems for a while
4
and won't be able to do any work on them in the future, so I'll
5
take over maintaining the Nuvoton system from him.
6
7
Signed-off-by: Hao Wu <wuhaotsh@google.com>
8
Acked-by: Havard Skinnemoen <hskinnemoen@google.com>
9
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
10
Message-id: 20230208235433.3989937-2-wuhaotsh@google.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
10
---
12
---
11
include/exec/memory.h | 4 +++-
13
MAINTAINERS | 2 +-
12
accel/tcg/translate-all.c | 2 +-
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
exec.c | 14 +++++++++-----
14
hw/vfio/common.c | 3 ++-
15
memory_ldst.inc.c | 18 +++++++++---------
16
target/riscv/helper.c | 2 +-
17
6 files changed, 25 insertions(+), 18 deletions(-)
18
15
19
diff --git a/include/exec/memory.h b/include/exec/memory.h
16
diff --git a/MAINTAINERS b/MAINTAINERS
20
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/memory.h
18
--- a/MAINTAINERS
22
+++ b/include/exec/memory.h
19
+++ b/MAINTAINERS
23
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
20
@@ -XXX,XX +XXX,XX @@ F: include/hw/net/mv88w8618_eth.h
24
* #MemoryRegion.
21
F: docs/system/arm/musicpal.rst
25
* @len: pointer to length
22
26
* @is_write: indicates the transfer direction
23
Nuvoton NPCM7xx
27
+ * @attrs: memory attributes
24
-M: Havard Skinnemoen <hskinnemoen@google.com>
28
*/
25
M: Tyrone Ting <kfting@nuvoton.com>
29
MemoryRegion *flatview_translate(FlatView *fv,
26
+M: Hao Wu <wuhaotsh@google.com>
30
hwaddr addr, hwaddr *xlat,
27
L: qemu-arm@nongnu.org
31
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv,
28
S: Supported
32
29
F: hw/*/npcm7xx*
33
static inline MemoryRegion *address_space_translate(AddressSpace *as,
34
hwaddr addr, hwaddr *xlat,
35
- hwaddr *len, bool is_write)
36
+ hwaddr *len, bool is_write,
37
+ MemTxAttrs attrs)
38
{
39
return flatview_translate(address_space_to_flatview(as),
40
addr, xlat, len, is_write);
41
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/accel/tcg/translate-all.c
44
+++ b/accel/tcg/translate-all.c
45
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
46
hwaddr l = 1;
47
48
rcu_read_lock();
49
- mr = address_space_translate(as, addr, &addr, &l, false);
50
+ mr = address_space_translate(as, addr, &addr, &l, false, attrs);
51
if (!(memory_region_is_ram(mr)
52
|| memory_region_is_romd(mr))) {
53
rcu_read_unlock();
54
diff --git a/exec.c b/exec.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/exec.c
57
+++ b/exec.c
58
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_write_rom_internal(AddressSpace *as,
59
rcu_read_lock();
60
while (len > 0) {
61
l = len;
62
- mr = address_space_translate(as, addr, &addr1, &l, true);
63
+ mr = address_space_translate(as, addr, &addr1, &l, true,
64
+ MEMTXATTRS_UNSPECIFIED);
65
66
if (!(memory_region_is_ram(mr) ||
67
memory_region_is_romd(mr))) {
68
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache)
69
*/
70
static inline MemoryRegion *address_space_translate_cached(
71
MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
72
- hwaddr *plen, bool is_write)
73
+ hwaddr *plen, bool is_write, MemTxAttrs attrs)
74
{
75
MemoryRegionSection section;
76
MemoryRegion *mr;
77
@@ -XXX,XX +XXX,XX @@ address_space_read_cached_slow(MemoryRegionCache *cache, hwaddr addr,
78
MemoryRegion *mr;
79
80
l = len;
81
- mr = address_space_translate_cached(cache, addr, &addr1, &l, false);
82
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, false,
83
+ MEMTXATTRS_UNSPECIFIED);
84
flatview_read_continue(cache->fv,
85
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
86
addr1, l, mr);
87
@@ -XXX,XX +XXX,XX @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
88
MemoryRegion *mr;
89
90
l = len;
91
- mr = address_space_translate_cached(cache, addr, &addr1, &l, true);
92
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, true,
93
+ MEMTXATTRS_UNSPECIFIED);
94
flatview_write_continue(cache->fv,
95
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
96
addr1, l, mr);
97
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)
98
99
rcu_read_lock();
100
mr = address_space_translate(&address_space_memory,
101
- phys_addr, &phys_addr, &l, false);
102
+ phys_addr, &phys_addr, &l, false,
103
+ MEMTXATTRS_UNSPECIFIED);
104
105
res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
106
rcu_read_unlock();
107
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/hw/vfio/common.c
110
+++ b/hw/vfio/common.c
111
@@ -XXX,XX +XXX,XX @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
112
*/
113
mr = address_space_translate(&address_space_memory,
114
iotlb->translated_addr,
115
- &xlat, &len, writable);
116
+ &xlat, &len, writable,
117
+ MEMTXATTRS_UNSPECIFIED);
118
if (!memory_region_is_ram(mr)) {
119
error_report("iommu map to non memory area %"HWADDR_PRIx"",
120
xlat);
121
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/memory_ldst.inc.c
124
+++ b/memory_ldst.inc.c
125
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
126
bool release_lock = false;
127
128
RCU_READ_LOCK();
129
- mr = TRANSLATE(addr, &addr1, &l, false);
130
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
131
if (l < 4 || !IS_DIRECT(mr, false)) {
132
release_lock |= prepare_mmio_access(mr);
133
134
@@ -XXX,XX +XXX,XX @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
135
bool release_lock = false;
136
137
RCU_READ_LOCK();
138
- mr = TRANSLATE(addr, &addr1, &l, false);
139
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
140
if (l < 8 || !IS_DIRECT(mr, false)) {
141
release_lock |= prepare_mmio_access(mr);
142
143
@@ -XXX,XX +XXX,XX @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
144
bool release_lock = false;
145
146
RCU_READ_LOCK();
147
- mr = TRANSLATE(addr, &addr1, &l, false);
148
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
149
if (!IS_DIRECT(mr, false)) {
150
release_lock |= prepare_mmio_access(mr);
151
152
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
153
bool release_lock = false;
154
155
RCU_READ_LOCK();
156
- mr = TRANSLATE(addr, &addr1, &l, false);
157
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
158
if (l < 2 || !IS_DIRECT(mr, false)) {
159
release_lock |= prepare_mmio_access(mr);
160
161
@@ -XXX,XX +XXX,XX @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
162
bool release_lock = false;
163
164
RCU_READ_LOCK();
165
- mr = TRANSLATE(addr, &addr1, &l, true);
166
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
167
if (l < 4 || !IS_DIRECT(mr, true)) {
168
release_lock |= prepare_mmio_access(mr);
169
170
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
171
bool release_lock = false;
172
173
RCU_READ_LOCK();
174
- mr = TRANSLATE(addr, &addr1, &l, true);
175
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
176
if (l < 4 || !IS_DIRECT(mr, true)) {
177
release_lock |= prepare_mmio_access(mr);
178
179
@@ -XXX,XX +XXX,XX @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
180
bool release_lock = false;
181
182
RCU_READ_LOCK();
183
- mr = TRANSLATE(addr, &addr1, &l, true);
184
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
185
if (!IS_DIRECT(mr, true)) {
186
release_lock |= prepare_mmio_access(mr);
187
r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
188
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
189
bool release_lock = false;
190
191
RCU_READ_LOCK();
192
- mr = TRANSLATE(addr, &addr1, &l, true);
193
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
194
if (l < 2 || !IS_DIRECT(mr, true)) {
195
release_lock |= prepare_mmio_access(mr);
196
197
@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
198
bool release_lock = false;
199
200
RCU_READ_LOCK();
201
- mr = TRANSLATE(addr, &addr1, &l, true);
202
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
203
if (l < 8 || !IS_DIRECT(mr, true)) {
204
release_lock |= prepare_mmio_access(mr);
205
206
diff --git a/target/riscv/helper.c b/target/riscv/helper.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/riscv/helper.c
209
+++ b/target/riscv/helper.c
210
@@ -XXX,XX +XXX,XX @@ restart:
211
MemoryRegion *mr;
212
hwaddr l = sizeof(target_ulong), addr1;
213
mr = address_space_translate(cs->as, pte_addr,
214
- &addr1, &l, false);
215
+ &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
216
if (memory_access_is_direct(mr, true)) {
217
target_ulong *pte_pa =
218
qemu_map_ram_ptr(mr->ram_block, addr1);
219
--
30
--
220
2.17.1
31
2.34.1
221
222
1
Add entries to MAINTAINERS to cover the newer MPS2 boards and
1
From: Hao Wu <wuhaotsh@google.com>
2
the new devices they use.
3
2
3
Nuvoton's PSPI is a general purpose SPI module which enables
4
connections to SPI-based peripheral devices.
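(Illustrative usage only, not part of this patch: assuming the usual qdev/SSI
helpers, a board that wants to hang a device off one of these controllers would
look up the "pspi" bus created at realize time and attach a peripheral to it,
roughly:

    SSIBus *bus = (SSIBus *)qdev_get_child_bus(DEVICE(&soc->pspi[0]), "pspi");
    ssi_create_peripheral(bus, "spi-device-name");  /* placeholder device name */

where soc->pspi[0] and the device name are placeholders.)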
5
6
Signed-off-by: Hao Wu <wuhaotsh@google.com>
7
Reviewed-by: Chris Rauer <crauer@google.com>
8
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
9
Message-id: 20230208235433.3989937-3-wuhaotsh@google.com
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
6
---
11
---
7
MAINTAINERS | 9 +++++++--
12
MAINTAINERS | 6 +-
8
1 file changed, 7 insertions(+), 2 deletions(-)
13
include/hw/ssi/npcm_pspi.h | 53 +++++++++
14
hw/ssi/npcm_pspi.c | 221 +++++++++++++++++++++++++++++++++++++
15
hw/ssi/meson.build | 2 +-
16
hw/ssi/trace-events | 5 +
17
5 files changed, 283 insertions(+), 4 deletions(-)
18
create mode 100644 include/hw/ssi/npcm_pspi.h
19
create mode 100644 hw/ssi/npcm_pspi.c
9
20
10
diff --git a/MAINTAINERS b/MAINTAINERS
21
diff --git a/MAINTAINERS b/MAINTAINERS
11
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
12
--- a/MAINTAINERS
23
--- a/MAINTAINERS
13
+++ b/MAINTAINERS
24
+++ b/MAINTAINERS
14
@@ -XXX,XX +XXX,XX @@ F: hw/timer/cmsdk-apb-timer.c
25
@@ -XXX,XX +XXX,XX @@ M: Tyrone Ting <kfting@nuvoton.com>
15
F: include/hw/timer/cmsdk-apb-timer.h
26
M: Hao Wu <wuhaotsh@google.com>
16
F: hw/char/cmsdk-apb-uart.c
17
F: include/hw/char/cmsdk-apb-uart.h
18
+F: hw/misc/tz-ppc.c
19
+F: include/hw/misc/tz-ppc.h
20
21
ARM cores
22
M: Peter Maydell <peter.maydell@linaro.org>
23
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
24
L: qemu-arm@nongnu.org
27
L: qemu-arm@nongnu.org
25
S: Maintained
28
S: Supported
26
F: hw/arm/mps2.c
29
-F: hw/*/npcm7xx*
27
-F: hw/misc/mps2-scc.c
30
-F: include/hw/*/npcm7xx*
28
-F: include/hw/misc/mps2-scc.h
31
-F: tests/qtest/npcm7xx*
29
+F: hw/arm/mps2-tz.c
32
+F: hw/*/npcm*
30
+F: hw/misc/mps2-*.c
33
+F: include/hw/*/npcm*
31
+F: include/hw/misc/mps2-*.h
34
+F: tests/qtest/npcm*
32
+F: hw/arm/iotkit.c
35
F: pc-bios/npcm7xx_bootrom.bin
33
+F: include/hw/arm/iotkit.h
36
F: roms/vbootrom
34
37
F: docs/system/arm/nuvoton.rst
35
Musicpal
38
diff --git a/include/hw/ssi/npcm_pspi.h b/include/hw/ssi/npcm_pspi.h
36
M: Jan Kiszka <jan.kiszka@web.de>
39
new file mode 100644
40
index XXXXXXX..XXXXXXX
41
--- /dev/null
42
+++ b/include/hw/ssi/npcm_pspi.h
43
@@ -XXX,XX +XXX,XX @@
44
+/*
45
+ * Nuvoton Peripheral SPI Module
46
+ *
47
+ * Copyright 2023 Google LLC
48
+ *
49
+ * This program is free software; you can redistribute it and/or modify it
50
+ * under the terms of the GNU General Public License as published by the
51
+ * Free Software Foundation; either version 2 of the License, or
52
+ * (at your option) any later version.
53
+ *
54
+ * This program is distributed in the hope that it will be useful, but WITHOUT
55
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
56
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
57
+ * for more details.
58
+ */
59
+#ifndef NPCM_PSPI_H
60
+#define NPCM_PSPI_H
61
+
62
+#include "hw/ssi/ssi.h"
63
+#include "hw/sysbus.h"
64
+
65
+/*
66
+ * Number of registers in our device state structure. Don't change this without
67
+ * incrementing the version_id in the vmstate.
68
+ */
69
+#define NPCM_PSPI_NR_REGS 3
70
+
71
+/**
72
+ * NPCMPSPIState - Device state for one Flash Interface Unit.
73
+ * @parent: System bus device.
74
+ * @mmio: Memory region for register access.
75
+ * @spi: The SPI bus mastered by this controller.
76
+ * @regs: Register contents.
77
+ * @irq: The interrupt request queue for this module.
78
+ *
79
+ * Each PSPI has a shared bank of registers, and controls up to four chip
80
+ * selects. Each chip select has a dedicated memory region which may be used to
81
+ * read and write the flash connected to that chip select as if it were memory.
82
+ */
83
+typedef struct NPCMPSPIState {
84
+ SysBusDevice parent;
85
+
86
+ MemoryRegion mmio;
87
+
88
+ SSIBus *spi;
89
+ uint16_t regs[NPCM_PSPI_NR_REGS];
90
+ qemu_irq irq;
91
+} NPCMPSPIState;
92
+
93
+#define TYPE_NPCM_PSPI "npcm-pspi"
94
+OBJECT_DECLARE_SIMPLE_TYPE(NPCMPSPIState, NPCM_PSPI)
95
+
96
+#endif /* NPCM_PSPI_H */
97
diff --git a/hw/ssi/npcm_pspi.c b/hw/ssi/npcm_pspi.c
98
new file mode 100644
99
index XXXXXXX..XXXXXXX
100
--- /dev/null
101
+++ b/hw/ssi/npcm_pspi.c
102
@@ -XXX,XX +XXX,XX @@
103
+/*
104
+ * Nuvoton NPCM Peripheral SPI Module (PSPI)
105
+ *
106
+ * Copyright 2023 Google LLC
107
+ *
108
+ * This program is free software; you can redistribute it and/or modify it
109
+ * under the terms of the GNU General Public License as published by the
110
+ * Free Software Foundation; either version 2 of the License, or
111
+ * (at your option) any later version.
112
+ *
113
+ * This program is distributed in the hope that it will be useful, but WITHOUT
114
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
115
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
116
+ * for more details.
117
+ */
118
+
119
+#include "qemu/osdep.h"
120
+
121
+#include "hw/irq.h"
122
+#include "hw/registerfields.h"
123
+#include "hw/ssi/npcm_pspi.h"
124
+#include "migration/vmstate.h"
125
+#include "qapi/error.h"
126
+#include "qemu/error-report.h"
127
+#include "qemu/log.h"
128
+#include "qemu/module.h"
129
+#include "qemu/units.h"
130
+
131
+#include "trace.h"
132
+
133
+REG16(PSPI_DATA, 0x0)
134
+REG16(PSPI_CTL1, 0x2)
135
+ FIELD(PSPI_CTL1, SPIEN, 0, 1)
136
+ FIELD(PSPI_CTL1, MOD, 2, 1)
137
+ FIELD(PSPI_CTL1, EIR, 5, 1)
138
+ FIELD(PSPI_CTL1, EIW, 6, 1)
139
+ FIELD(PSPI_CTL1, SCM, 7, 1)
140
+ FIELD(PSPI_CTL1, SCIDL, 8, 1)
141
+ FIELD(PSPI_CTL1, SCDV, 9, 7)
142
+REG16(PSPI_STAT, 0x4)
143
+ FIELD(PSPI_STAT, BSY, 0, 1)
144
+ FIELD(PSPI_STAT, RBF, 1, 1)
145
+
146
+static void npcm_pspi_update_irq(NPCMPSPIState *s)
147
+{
148
+ int level = 0;
149
+
150
+ /* Only fire IRQ when the module is enabled. */
151
+ if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, SPIEN)) {
152
+ /* Update interrupt as BSY is cleared. */
153
+ if ((!FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, BSY)) &&
154
+ FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIW)) {
155
+ level = 1;
156
+ }
157
+
158
+ /* Update interrupt as RBF is set. */
159
+ if (FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, RBF) &&
160
+ FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIR)) {
161
+ level = 1;
162
+ }
163
+ }
164
+ qemu_set_irq(s->irq, level);
165
+}
166
+
167
+static uint16_t npcm_pspi_read_data(NPCMPSPIState *s)
168
+{
169
+ uint16_t value = s->regs[R_PSPI_DATA];
170
+
171
+ /* Clear stat bits as the value is read out. */
172
+ s->regs[R_PSPI_STAT] = 0;
173
+
174
+ return value;
175
+}
176
+
177
+static void npcm_pspi_write_data(NPCMPSPIState *s, uint16_t data)
178
+{
179
+ uint16_t value = 0;
180
+
181
+ if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, MOD)) {
182
+ value = ssi_transfer(s->spi, extract16(data, 8, 8)) << 8;
183
+ }
184
+ value |= ssi_transfer(s->spi, extract16(data, 0, 8));
185
+ s->regs[R_PSPI_DATA] = value;
186
+
187
+ /* Mark data as available */
188
+ s->regs[R_PSPI_STAT] = R_PSPI_STAT_BSY_MASK | R_PSPI_STAT_RBF_MASK;
189
+}
190
+
191
+/* Control register read handler. */
192
+static uint64_t npcm_pspi_ctrl_read(void *opaque, hwaddr addr,
193
+ unsigned int size)
194
+{
195
+ NPCMPSPIState *s = opaque;
196
+ uint16_t value;
197
+
198
+ switch (addr) {
199
+ case A_PSPI_DATA:
200
+ value = npcm_pspi_read_data(s);
201
+ break;
202
+
203
+ case A_PSPI_CTL1:
204
+ value = s->regs[R_PSPI_CTL1];
205
+ break;
206
+
207
+ case A_PSPI_STAT:
208
+ value = s->regs[R_PSPI_STAT];
209
+ break;
210
+
211
+ default:
212
+ qemu_log_mask(LOG_GUEST_ERROR,
213
+ "%s: write to invalid offset 0x%" PRIx64 "\n",
214
+ DEVICE(s)->canonical_path, addr);
215
+ return 0;
216
+ }
217
+ trace_npcm_pspi_ctrl_read(DEVICE(s)->canonical_path, addr, value);
218
+ npcm_pspi_update_irq(s);
219
+
220
+ return value;
221
+}
222
+
223
+/* Control register write handler. */
224
+static void npcm_pspi_ctrl_write(void *opaque, hwaddr addr, uint64_t v,
225
+ unsigned int size)
226
+{
227
+ NPCMPSPIState *s = opaque;
228
+ uint16_t value = v;
229
+
230
+ trace_npcm_pspi_ctrl_write(DEVICE(s)->canonical_path, addr, value);
231
+
232
+ switch (addr) {
233
+ case A_PSPI_DATA:
234
+ npcm_pspi_write_data(s, value);
235
+ break;
236
+
237
+ case A_PSPI_CTL1:
238
+ s->regs[R_PSPI_CTL1] = value;
239
+ break;
240
+
241
+ case A_PSPI_STAT:
242
+ qemu_log_mask(LOG_GUEST_ERROR,
243
+ "%s: write to read-only register PSPI_STAT: 0x%08"
244
+ PRIx64 "\n", DEVICE(s)->canonical_path, v);
245
+ break;
246
+
247
+ default:
248
+ qemu_log_mask(LOG_GUEST_ERROR,
249
+ "%s: write to invalid offset 0x%" PRIx64 "\n",
250
+ DEVICE(s)->canonical_path, addr);
251
+ return;
252
+ }
253
+ npcm_pspi_update_irq(s);
254
+}
255
+
256
+static const MemoryRegionOps npcm_pspi_ctrl_ops = {
257
+ .read = npcm_pspi_ctrl_read,
258
+ .write = npcm_pspi_ctrl_write,
259
+ .endianness = DEVICE_LITTLE_ENDIAN,
260
+ .valid = {
261
+ .min_access_size = 1,
262
+ .max_access_size = 2,
263
+ .unaligned = false,
264
+ },
265
+ .impl = {
266
+ .min_access_size = 2,
267
+ .max_access_size = 2,
268
+ .unaligned = false,
269
+ },
270
+};
271
+
272
+static void npcm_pspi_enter_reset(Object *obj, ResetType type)
273
+{
274
+ NPCMPSPIState *s = NPCM_PSPI(obj);
275
+
276
+ trace_npcm_pspi_enter_reset(DEVICE(obj)->canonical_path, type);
277
+ memset(s->regs, 0, sizeof(s->regs));
278
+}
279
+
280
+static void npcm_pspi_realize(DeviceState *dev, Error **errp)
281
+{
282
+ NPCMPSPIState *s = NPCM_PSPI(dev);
283
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
284
+ Object *obj = OBJECT(dev);
285
+
286
+ s->spi = ssi_create_bus(dev, "pspi");
287
+ memory_region_init_io(&s->mmio, obj, &npcm_pspi_ctrl_ops, s,
288
+ "mmio", 4 * KiB);
289
+ sysbus_init_mmio(sbd, &s->mmio);
290
+ sysbus_init_irq(sbd, &s->irq);
291
+}
292
+
293
+static const VMStateDescription vmstate_npcm_pspi = {
294
+ .name = "npcm-pspi",
295
+ .version_id = 0,
296
+ .minimum_version_id = 0,
297
+ .fields = (VMStateField[]) {
298
+ VMSTATE_UINT16_ARRAY(regs, NPCMPSPIState, NPCM_PSPI_NR_REGS),
299
+ VMSTATE_END_OF_LIST(),
300
+ },
301
+};
302
+
303
+
304
+static void npcm_pspi_class_init(ObjectClass *klass, void *data)
305
+{
306
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
307
+ DeviceClass *dc = DEVICE_CLASS(klass);
308
+
309
+ dc->desc = "NPCM Peripheral SPI Module";
310
+ dc->realize = npcm_pspi_realize;
311
+ dc->vmsd = &vmstate_npcm_pspi;
312
+ rc->phases.enter = npcm_pspi_enter_reset;
313
+}
314
+
315
+static const TypeInfo npcm_pspi_types[] = {
316
+ {
317
+ .name = TYPE_NPCM_PSPI,
318
+ .parent = TYPE_SYS_BUS_DEVICE,
319
+ .instance_size = sizeof(NPCMPSPIState),
320
+ .class_init = npcm_pspi_class_init,
321
+ },
322
+};
323
+DEFINE_TYPES(npcm_pspi_types);
324
diff --git a/hw/ssi/meson.build b/hw/ssi/meson.build
325
index XXXXXXX..XXXXXXX 100644
326
--- a/hw/ssi/meson.build
327
+++ b/hw/ssi/meson.build
328
@@ -XXX,XX +XXX,XX @@
329
softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_smc.c'))
330
softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('mss-spi.c'))
331
-softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c'))
332
+softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c', 'npcm_pspi.c'))
333
softmmu_ss.add(when: 'CONFIG_PL022', if_true: files('pl022.c'))
334
softmmu_ss.add(when: 'CONFIG_SIFIVE_SPI', if_true: files('sifive_spi.c'))
335
softmmu_ss.add(when: 'CONFIG_SSI', if_true: files('ssi.c'))
336
diff --git a/hw/ssi/trace-events b/hw/ssi/trace-events
337
index XXXXXXX..XXXXXXX 100644
338
--- a/hw/ssi/trace-events
339
+++ b/hw/ssi/trace-events
340
@@ -XXX,XX +XXX,XX @@ npcm7xx_fiu_ctrl_write(const char *id, uint64_t addr, uint32_t data) "%s offset:
341
npcm7xx_fiu_flash_read(const char *id, int cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
342
npcm7xx_fiu_flash_write(const char *id, unsigned cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
343
344
+# npcm_pspi.c
345
+npcm_pspi_enter_reset(const char *id, int reset_type) "%s reset type: %d"
346
+npcm_pspi_ctrl_read(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
347
+npcm_pspi_ctrl_write(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
348
+
349
# ibex_spi_host.c
350
351
ibex_spi_host_reset(const char *msg) "%s"
37
--
352
--
38
2.17.1
353
2.34.1
39
40
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Hao Wu <wuhaotsh@google.com>
2
add MemTxAttrs as an argument to the MemoryRegion valid.accepts
3
callback. We'll need this for subpage_accepts().
4
2
5
We could take the approach we used with the read and write
3
Signed-off-by: Hao Wu <wuhaotsh@google.com>
6
callbacks and add a new _with_attrs version, but since there
4
Reviewed-by: Titus Rwantare <titusr@google.com>
7
are so few implementations of the accepts hook we just change
5
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
8
them all.
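The mechanical change to each implementation is just the extra (currently
unused) parameter; for instance the fw_cfg hook in the hunk below becomes:

    static bool fw_cfg_data_mem_valid(void *opaque, hwaddr addr,
                                      unsigned size, bool is_write,
                                      MemTxAttrs attrs)
    {
        return addr == 0;
    }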
6
Message-id: 20230208235433.3989937-4-wuhaotsh@google.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
docs/system/arm/nuvoton.rst | 2 +-
10
include/hw/arm/npcm7xx.h | 2 ++
11
hw/arm/npcm7xx.c | 25 +++++++++++++++++++++++--
12
3 files changed, 26 insertions(+), 3 deletions(-)
9
13
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20180521140402.23318-9-peter.maydell@linaro.org
14
---
15
include/exec/memory.h | 3 ++-
16
exec.c | 9 ++++++---
17
hw/hppa/dino.c | 3 ++-
18
hw/nvram/fw_cfg.c | 12 ++++++++----
19
hw/scsi/esp.c | 3 ++-
20
hw/xen/xen_pt_msi.c | 3 ++-
21
memory.c | 5 +++--
22
7 files changed, 25 insertions(+), 13 deletions(-)
23
24
diff --git a/include/exec/memory.h b/include/exec/memory.h
25
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory.h
16
--- a/docs/system/arm/nuvoton.rst
27
+++ b/include/exec/memory.h
17
+++ b/docs/system/arm/nuvoton.rst
28
@@ -XXX,XX +XXX,XX @@ struct MemoryRegionOps {
18
@@ -XXX,XX +XXX,XX @@ Supported devices
29
* as a machine check exception).
19
* SMBus controller (SMBF)
30
*/
20
* Ethernet controller (EMC)
31
bool (*accepts)(void *opaque, hwaddr addr,
21
* Tachometer
32
- unsigned size, bool is_write);
22
+ * Peripheral SPI controller (PSPI)
33
+ unsigned size, bool is_write,
23
34
+ MemTxAttrs attrs);
24
Missing devices
35
} valid;
25
---------------
36
/* Internal implementation constraints: */
26
@@ -XXX,XX +XXX,XX @@ Missing devices
37
struct {
27
38
diff --git a/exec.c b/exec.c
28
* Ethernet controller (GMAC)
29
* USB device (USBD)
30
- * Peripheral SPI controller (PSPI)
31
* SD/MMC host
32
* PECI interface
33
* PCI and PCIe root complex and bridges
34
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
39
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
36
--- a/include/hw/arm/npcm7xx.h
41
+++ b/exec.c
37
+++ b/include/hw/arm/npcm7xx.h
42
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
38
@@ -XXX,XX +XXX,XX @@
39
#include "hw/nvram/npcm7xx_otp.h"
40
#include "hw/timer/npcm7xx_timer.h"
41
#include "hw/ssi/npcm7xx_fiu.h"
42
+#include "hw/ssi/npcm_pspi.h"
43
#include "hw/usb/hcd-ehci.h"
44
#include "hw/usb/hcd-ohci.h"
45
#include "target/arm/cpu.h"
46
@@ -XXX,XX +XXX,XX @@ struct NPCM7xxState {
47
NPCM7xxFIUState fiu[2];
48
NPCM7xxEMCState emc[2];
49
NPCM7xxSDHCIState mmc;
50
+ NPCMPSPIState pspi[2];
51
};
52
53
#define TYPE_NPCM7XX "npcm7xx"
54
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/hw/arm/npcm7xx.c
57
+++ b/hw/arm/npcm7xx.c
58
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
59
NPCM7XX_EMC1RX_IRQ = 15,
60
NPCM7XX_EMC1TX_IRQ,
61
NPCM7XX_MMC_IRQ = 26,
62
+ NPCM7XX_PSPI2_IRQ = 28,
63
+ NPCM7XX_PSPI1_IRQ = 31,
64
NPCM7XX_TIMER0_IRQ = 32, /* Timer Module 0 */
65
NPCM7XX_TIMER1_IRQ,
66
NPCM7XX_TIMER2_IRQ,
67
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_emc_addr[] = {
68
0xf0826000,
69
};
70
71
+/* Register base address for each PSPI Module */
72
+static const hwaddr npcm7xx_pspi_addr[] = {
73
+ 0xf0200000,
74
+ 0xf0201000,
75
+};
76
+
77
static const struct {
78
hwaddr regs_addr;
79
uint32_t unconnected_pins;
80
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
81
object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
82
}
83
84
+ for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
85
+ object_initialize_child(obj, "pspi[*]", &s->pspi[i], TYPE_NPCM_PSPI);
86
+ }
87
+
88
object_initialize_child(obj, "mmc", &s->mmc, TYPE_NPCM7XX_SDHCI);
43
}
89
}
44
90
45
static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
91
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
46
- unsigned size, bool is_write)
92
sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc), 0,
47
+ unsigned size, bool is_write,
93
npcm7xx_irq(s, NPCM7XX_MMC_IRQ));
48
+ MemTxAttrs attrs)
94
49
{
95
+ /* PSPI */
50
return is_write;
96
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_pspi_addr) != ARRAY_SIZE(s->pspi));
51
}
97
+ for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
52
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
98
+ SysBusDevice *sbd = SYS_BUS_DEVICE(&s->pspi[i]);
53
}
99
+ int irq = (i == 0) ? NPCM7XX_PSPI1_IRQ : NPCM7XX_PSPI2_IRQ;
54
100
+
55
static bool subpage_accepts(void *opaque, hwaddr addr,
101
+ sysbus_realize(sbd, &error_abort);
56
- unsigned len, bool is_write)
102
+ sysbus_mmio_map(sbd, 0, npcm7xx_pspi_addr[i]);
57
+ unsigned len, bool is_write,
103
+ sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, irq));
58
+ MemTxAttrs attrs)
104
+ }
59
{
105
+
60
subpage_t *subpage = opaque;
106
create_unimplemented_device("npcm7xx.shm", 0xc0001000, 4 * KiB);
61
#if defined(DEBUG_SUBPAGE)
107
create_unimplemented_device("npcm7xx.vdmx", 0xe0800000, 4 * KiB);
62
@@ -XXX,XX +XXX,XX @@ static void readonly_mem_write(void *opaque, hwaddr addr,
108
create_unimplemented_device("npcm7xx.pcierc", 0xe1000000, 64 * KiB);
63
}
109
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
64
110
create_unimplemented_device("npcm7xx.peci", 0xf0100000, 4 * KiB);
65
static bool readonly_mem_accepts(void *opaque, hwaddr addr,
111
create_unimplemented_device("npcm7xx.siox[1]", 0xf0101000, 4 * KiB);
66
- unsigned size, bool is_write)
112
create_unimplemented_device("npcm7xx.siox[2]", 0xf0102000, 4 * KiB);
67
+ unsigned size, bool is_write,
113
- create_unimplemented_device("npcm7xx.pspi1", 0xf0200000, 4 * KiB);
68
+ MemTxAttrs attrs)
114
- create_unimplemented_device("npcm7xx.pspi2", 0xf0201000, 4 * KiB);
69
{
115
create_unimplemented_device("npcm7xx.ahbpci", 0xf0400000, 1 * MiB);
70
return is_write;
116
create_unimplemented_device("npcm7xx.mcphy", 0xf05f0000, 64 * KiB);
71
}
117
create_unimplemented_device("npcm7xx.gmac1", 0xf0802000, 8 * KiB);
72
diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/hppa/dino.c
75
+++ b/hw/hppa/dino.c
76
@@ -XXX,XX +XXX,XX @@ static void gsc_to_pci_forwarding(DinoState *s)
77
}
78
79
static bool dino_chip_mem_valid(void *opaque, hwaddr addr,
80
- unsigned size, bool is_write)
81
+ unsigned size, bool is_write,
82
+ MemTxAttrs attrs)
83
{
84
switch (addr) {
85
case DINO_IAR0:
86
diff --git a/hw/nvram/fw_cfg.c b/hw/nvram/fw_cfg.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/hw/nvram/fw_cfg.c
89
+++ b/hw/nvram/fw_cfg.c
90
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_dma_mem_write(void *opaque, hwaddr addr,
91
}
92
93
static bool fw_cfg_dma_mem_valid(void *opaque, hwaddr addr,
94
- unsigned size, bool is_write)
95
+ unsigned size, bool is_write,
96
+ MemTxAttrs attrs)
97
{
98
return !is_write || ((size == 4 && (addr == 0 || addr == 4)) ||
99
(size == 8 && addr == 0));
100
}
101
102
static bool fw_cfg_data_mem_valid(void *opaque, hwaddr addr,
103
- unsigned size, bool is_write)
104
+ unsigned size, bool is_write,
105
+ MemTxAttrs attrs)
106
{
107
return addr == 0;
108
}
109
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_ctl_mem_write(void *opaque, hwaddr addr,
110
}
111
112
static bool fw_cfg_ctl_mem_valid(void *opaque, hwaddr addr,
113
- unsigned size, bool is_write)
114
+ unsigned size, bool is_write,
115
+ MemTxAttrs attrs)
116
{
117
return is_write && size == 2;
118
}
119
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_comb_write(void *opaque, hwaddr addr,
120
}
121
122
static bool fw_cfg_comb_valid(void *opaque, hwaddr addr,
123
- unsigned size, bool is_write)
124
+ unsigned size, bool is_write,
125
+ MemTxAttrs attrs)
126
{
127
return (size == 1) || (is_write && size == 2);
128
}
129
diff --git a/hw/scsi/esp.c b/hw/scsi/esp.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/hw/scsi/esp.c
132
+++ b/hw/scsi/esp.c
133
@@ -XXX,XX +XXX,XX @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t val)
134
}
135
136
static bool esp_mem_accepts(void *opaque, hwaddr addr,
137
- unsigned size, bool is_write)
138
+ unsigned size, bool is_write,
139
+ MemTxAttrs attrs)
140
{
141
return (size == 1) || (is_write && size == 4);
142
}
143
diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/hw/xen/xen_pt_msi.c
146
+++ b/hw/xen/xen_pt_msi.c
147
@@ -XXX,XX +XXX,XX @@ static uint64_t pci_msix_read(void *opaque, hwaddr addr,
148
}
149
150
static bool pci_msix_accepts(void *opaque, hwaddr addr,
151
- unsigned size, bool is_write)
152
+ unsigned size, bool is_write,
153
+ MemTxAttrs attrs)
154
{
155
return !(addr & (size - 1));
156
}
157
diff --git a/memory.c b/memory.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/memory.c
160
+++ b/memory.c
161
@@ -XXX,XX +XXX,XX @@ static void unassigned_mem_write(void *opaque, hwaddr addr,
162
}
163
164
static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
165
- unsigned size, bool is_write)
166
+ unsigned size, bool is_write,
167
+ MemTxAttrs attrs)
168
{
169
return false;
170
}
171
@@ -XXX,XX +XXX,XX @@ bool memory_region_access_valid(MemoryRegion *mr,
172
access_size = MAX(MIN(size, access_size_max), access_size_min);
173
for (i = 0; i < size; i += access_size) {
174
if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
175
- is_write)) {
176
+ is_write, attrs)) {
177
return false;
178
}
179
}
180
--
118
--
181
2.17.1
119
2.34.1
182
183
New patch
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
1
2
3
Addresses targeting the second translation table (TTB1) in the SMMU have
4
all upper bits set. Ensure the IOMMU region covers all 64 bits.
5
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
Message-id: 20230214171921.1917916-2-jean-philippe@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/hw/arm/smmu-common.h | 2 --
13
hw/arm/smmu-common.c | 2 +-
14
2 files changed, 1 insertion(+), 3 deletions(-)
15
16
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/arm/smmu-common.h
19
+++ b/include/hw/arm/smmu-common.h
20
@@ -XXX,XX +XXX,XX @@
21
#define SMMU_PCI_DEVFN_MAX 256
22
#define SMMU_PCI_DEVFN(sid) (sid & 0xFF)
23
24
-#define SMMU_MAX_VA_BITS 48
25
-
26
/*
27
* Page table walk error types
28
*/
29
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/arm/smmu-common.c
32
+++ b/hw/arm/smmu-common.c
33
@@ -XXX,XX +XXX,XX @@ static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
34
35
memory_region_init_iommu(&sdev->iommu, sizeof(sdev->iommu),
36
s->mrtypename,
37
- OBJECT(s), name, 1ULL << SMMU_MAX_VA_BITS);
38
+ OBJECT(s), name, UINT64_MAX);
39
address_space_init(&sdev->as,
40
MEMORY_REGION(&sdev->iommu), name);
41
trace_smmu_add_mr(name);
42
--
43
2.34.1
New patch
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
1
2
3
Addresses targeting the second translation table (TTB1) in the SMMU have
4
all upper bits set (except for the top byte when TBI is enabled). Fix
5
the TTB1 check.
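As a worked example (values illustrative only): with tt[1].tsz == 16 and TBI
disabled, a TTB1 address such as 0xffff800000100000 has bits [63:48] all ones, so

    extract64(iova, 48, 16);   /* == 0xffff: non-zero, so the old check missed TTB1 */
    sextract64(iova, 48, 16);  /* == -1: all-ones field, so the new check matches   */

which is why comparing the sign-extended field against -1 selects the ttbr1 region.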
6
7
Reported-by: Ola Hugosson <ola.hugosson@arm.com>
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
11
Message-id: 20230214171921.1917916-3-jean-philippe@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
hw/arm/smmu-common.c | 2 +-
15
1 file changed, 1 insertion(+), 1 deletion(-)
16
17
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/smmu-common.c
20
+++ b/hw/arm/smmu-common.c
21
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
22
/* there is a ttbr0 region and we are in it (high bits all zero) */
23
return &cfg->tt[0];
24
} else if (cfg->tt[1].tsz &&
25
- !extract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte)) {
26
+ sextract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte) == -1) {
27
/* there is a ttbr1 region and we are in it (high bits all one) */
28
return &cfg->tt[1];
29
} else if (!cfg->tt[0].tsz) {
30
--
31
2.34.1
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Claudio Fontana <cfontana@suse.de>
2
add MemTxAttrs as an argument to address_space_access_valid().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
make it clearer from the name that this is a tcg-only function.
4
5
Signed-off-by: Claudio Fontana <cfontana@suse.de>
6
Signed-off-by: Fabiano Rosas <farosas@suse.de>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
10
---
11
---
11
include/exec/memory.h | 4 +++-
12
target/arm/helper.c | 4 ++--
12
include/sysemu/dma.h | 3 ++-
13
1 file changed, 2 insertions(+), 2 deletions(-)
13
exec.c | 3 ++-
14
target/s390x/diag.c | 6 ++++--
15
target/s390x/excp_helper.c | 3 ++-
16
target/s390x/mmu_helper.c | 3 ++-
17
target/s390x/sigp.c | 3 ++-
18
7 files changed, 17 insertions(+), 8 deletions(-)
19
14
20
diff --git a/include/exec/memory.h b/include/exec/memory.h
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/include/exec/memory.h
17
--- a/target/arm/helper.c
23
+++ b/include/exec/memory.h
18
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
25
* @addr: address within that address space
20
* trapped to the hypervisor in KVM.
26
* @len: length of the area to be checked
27
* @is_write: indicates the transfer direction
28
+ * @attrs: memory attributes
29
*/
21
*/
30
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
22
#ifdef CONFIG_TCG
31
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
23
-static void handle_semihosting(CPUState *cs)
32
+ bool is_write, MemTxAttrs attrs);
24
+static void tcg_handle_semihosting(CPUState *cs)
33
34
/* address_space_map: map a physical memory region into a host virtual address
35
*
36
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/include/sysemu/dma.h
39
+++ b/include/sysemu/dma.h
40
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
41
DMADirection dir)
42
{
25
{
43
return address_space_access_valid(as, addr, len,
26
ARMCPU *cpu = ARM_CPU(cs);
44
- dir == DMA_DIRECTION_FROM_DEVICE);
27
CPUARMState *env = &cpu->env;
45
+ dir == DMA_DIRECTION_FROM_DEVICE,
28
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
46
+ MEMTXATTRS_UNSPECIFIED);
29
*/
47
}
30
#ifdef CONFIG_TCG
48
31
if (cs->exception_index == EXCP_SEMIHOST) {
49
static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
32
- handle_semihosting(cs);
50
diff --git a/exec.c b/exec.c
33
+ tcg_handle_semihosting(cs);
51
index XXXXXXX..XXXXXXX 100644
52
--- a/exec.c
53
+++ b/exec.c
54
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
55
}
56
57
bool address_space_access_valid(AddressSpace *as, hwaddr addr,
58
- int len, bool is_write)
59
+ int len, bool is_write,
60
+ MemTxAttrs attrs)
61
{
62
FlatView *fv;
63
bool result;
64
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/s390x/diag.c
67
+++ b/target/s390x/diag.c
68
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
69
return;
70
}
71
if (!address_space_access_valid(&address_space_memory, addr,
72
- sizeof(IplParameterBlock), false)) {
73
+ sizeof(IplParameterBlock), false,
74
+ MEMTXATTRS_UNSPECIFIED)) {
75
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
76
return;
77
}
78
@@ -XXX,XX +XXX,XX @@ out:
79
return;
80
}
81
if (!address_space_access_valid(&address_space_memory, addr,
82
- sizeof(IplParameterBlock), true)) {
83
+ sizeof(IplParameterBlock), true,
84
+ MEMTXATTRS_UNSPECIFIED)) {
85
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
86
return;
87
}
88
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/s390x/excp_helper.c
91
+++ b/target/s390x/excp_helper.c
92
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,
93
94
/* check out of RAM access */
95
if (!address_space_access_valid(&address_space_memory, raddr,
96
- TARGET_PAGE_SIZE, rw)) {
97
+ TARGET_PAGE_SIZE, rw,
98
+ MEMTXATTRS_UNSPECIFIED)) {
99
DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
100
(uint64_t)raddr, (uint64_t)ram_size);
101
trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
102
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/s390x/mmu_helper.c
105
+++ b/target/s390x/mmu_helper.c
106
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
107
return ret;
108
}
109
if (!address_space_access_valid(&address_space_memory, pages[i],
110
- TARGET_PAGE_SIZE, is_write)) {
111
+ TARGET_PAGE_SIZE, is_write,
112
+ MEMTXATTRS_UNSPECIFIED)) {
113
trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
114
return -EFAULT;
115
}
116
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/s390x/sigp.c
119
+++ b/target/s390x/sigp.c
120
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
121
cpu_synchronize_state(cs);
122
123
if (!address_space_access_valid(&address_space_memory, addr,
124
- sizeof(struct LowCore), false)) {
125
+ sizeof(struct LowCore), false,
126
+ MEMTXATTRS_UNSPECIFIED)) {
127
set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
128
return;
34
return;
129
}
35
}
36
#endif
130
--
37
--
131
2.17.1
38
2.34.1
132
39
133
40
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Claudio Fontana <cfontana@suse.de>
2
2
3
Coverity found that the string returned by 'object_get_canonical_path' was not
3
for "all" builds (tcg + kvm), we want to avoid doing
4
being freed at two locations in the model (CID 1391294 and CID 1391293) and
4
the psci check if tcg is built-in, but not enabled.
5
also that a memset was being called with a value greater than the max of a byte
6
on the second argument (CID 1391286). This patch corrects this by adding the
7
freeing of the strings and also changing the memset to zero instead on
8
descriptor unaligned errors.
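The fix in both the read and write paths is simply to keep the returned string
in a local and free it once it has been logged, e.g.:

    gchar *path = object_get_canonical_path(OBJECT(s));

    qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n", path, addr);
    g_free(path);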
9
5
10
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Signed-off-by: Claudio Fontana <cfontana@suse.de>
11
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Fabiano Rosas <farosas@suse.de>
13
Message-id: 20180528184859.3530-1-frasse.iglesias@gmail.com
9
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
11
---
17
hw/dma/xlnx-zdma.c | 10 +++++++---
12
target/arm/helper.c | 3 ++-
18
1 file changed, 7 insertions(+), 3 deletions(-)
13
1 file changed, 2 insertions(+), 1 deletion(-)
19
14
20
diff --git a/hw/dma/xlnx-zdma.c b/hw/dma/xlnx-zdma.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/dma/xlnx-zdma.c
17
--- a/target/arm/helper.c
23
+++ b/hw/dma/xlnx-zdma.c
18
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf)
19
@@ -XXX,XX +XXX,XX @@
25
qemu_log_mask(LOG_GUEST_ERROR,
20
#include "hw/irq.h"
26
"zdma: unaligned descriptor at %" PRIx64,
21
#include "sysemu/cpu-timers.h"
27
addr);
22
#include "sysemu/kvm.h"
28
- memset(buf, 0xdeadbeef, sizeof(XlnxZDMADescr));
23
+#include "sysemu/tcg.h"
29
+ memset(buf, 0x0, sizeof(XlnxZDMADescr));
24
#include "qapi/qapi-commands-machine-target.h"
30
s->error = true;
25
#include "qapi/error.h"
31
return false;
26
#include "qemu/guest-random.h"
27
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
28
env->exception.syndrome);
32
}
29
}
33
@@ -XXX,XX +XXX,XX @@ static uint64_t zdma_read(void *opaque, hwaddr addr, unsigned size)
30
34
RegisterInfo *r = &s->regs_info[addr / 4];
31
- if (arm_is_psci_call(cpu, cs->exception_index)) {
35
32
+ if (tcg_enabled() && arm_is_psci_call(cpu, cs->exception_index)) {
36
if (!r->data) {
33
arm_handle_psci_call(cpu);
37
+ gchar *path = object_get_canonical_path(OBJECT(s));
34
qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n");
38
qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n",
39
- object_get_canonical_path(OBJECT(s)),
40
+ path,
41
addr);
42
+ g_free(path);
43
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
44
zdma_ch_imr_update_irq(s);
45
return 0;
46
@@ -XXX,XX +XXX,XX @@ static void zdma_write(void *opaque, hwaddr addr, uint64_t value,
47
RegisterInfo *r = &s->regs_info[addr / 4];
48
49
if (!r->data) {
50
+ gchar *path = object_get_canonical_path(OBJECT(s));
51
qemu_log("%s: Decode error: write to %" HWADDR_PRIx "=%" PRIx64 "\n",
52
- object_get_canonical_path(OBJECT(s)),
53
+ path,
54
addr, value);
55
+ g_free(path);
56
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
57
zdma_ch_imr_update_irq(s);
58
return;
35
return;
59
--
36
--
60
2.17.1
37
2.34.1
61
38
62
39
1
In commit f0aff255700 we made cpacr_write() enforce that some CPACR
1
From: Claudio Fontana <cfontana@suse.de>
2
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
3
we forgot to also update the register's reset value. The effect
4
was that (a) a guest that read CPACR on reset would not see ones in
5
the RAO bits, and (b) if you did a migration before the guest did
6
a write to the CPACR then the migration would fail because the
7
destination would enforce the RAO bits and then complain that they
8
didn't match the zero value from the source.
9
2
10
Implement reset for the CPACR using a custom reset function
3
Signed-off-by: Claudio Fontana <cfontana@suse.de>
11
that just calls cpacr_write(), to avoid having to duplicate
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
the logic for which bits are RAO.
5
Signed-off-by: Fabiano Rosas <farosas@suse.de>
13
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
14
This bug would affect migration for TCG CPUs which are ARMv7
15
with VFP but without one of Neon or VFPv3.
16
17
Reported-by: Cédric Le Goater <clg@kaod.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Tested-by: Cédric Le Goater <clg@kaod.org>
20
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
21
---
8
---
22
target/arm/helper.c | 10 +++++++++-
9
target/arm/helper.c | 12 +++++++-----
23
1 file changed, 9 insertions(+), 1 deletion(-)
10
1 file changed, 7 insertions(+), 5 deletions(-)
24
11
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
14
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
15
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
16
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
30
env->cp15.cpacr_el1 = value;
17
unsigned int cur_el = arm_current_el(env);
31
}
18
int rt;
32
19
33
+static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
20
- /*
34
+{
21
- * Note that new_el can never be 0. If cur_el is 0, then
35
+ /* Call cpacr_write() so that we reset with the correct RAO bits set
22
- * el0_a64 is is_a64(), else el0_a64 is ignored.
36
+ * for our CPU features.
23
- */
37
+ */
24
- aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
38
+ cpacr_write(env, ri, 0);
25
+ if (tcg_enabled()) {
39
+}
26
+ /*
40
+
27
+ * Note that new_el can never be 0. If cur_el is 0, then
41
static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
28
+ * el0_a64 is is_a64(), else el0_a64 is ignored.
42
bool isread)
29
+ */
43
{
30
+ aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
31
+ }
45
{ .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
32
46
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
33
if (cur_el < new_el) {
47
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
34
/*
48
- .resetvalue = 0, .writefn = cpacr_write },
49
+ .resetfn = cpacr_reset, .writefn = cpacr_write },
50
REGINFO_SENTINEL
51
};
52
53
--
35
--
54
2.17.1
36
2.34.1
55
37
56
38
1
From: Igor Mammedov <imammedo@redhat.com>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
When QEMU is started with the following CLI
3
Move this earlier to make the next patch diff cleaner. While here
4
-machine virt,gic-version=3,accel=kvm -cpu host -bios AAVMF_CODE.fd
4
update the comment slightly to not give the impression that the
5
it crashes with abort at
5
misalignment affects only TCG.
6
accel/kvm/kvm-all.c:2164:
7
KVM_SET_DEVICE_ATTR failed: Group 6 attr 0x000000000000c665: Invalid argument
8
6
9
Which is caused by implicit dependency of kvm_arm_gicv3_reset() on
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
arm_gicv3_icc_reset() where the later is called by CPU reset
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
reset callback.
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
12
10
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
13
However commit:
14
3b77f6c arm/boot: split load_dtb() from arm_load_kernel()
15
broke CPU reset callback registration in case
16
17
arm_load_kernel()
18
...
19
if (!info->kernel_filename || info->firmware_loaded)
20
21
branch is taken, i.e. it's sufficient to provide a firmware
22
or do not provide kernel on CLI to skip cpu reset callback
23
registration, where before offending commit the callback
24
has been registered unconditionally.
25
26
Fix it by registering the callback right at the beginning of
27
arm_load_kernel() unconditionally instead of doing it at the end.
28
29
NOTE:
30
we probably should eliminate that dependency anyways as well as
31
separate arch CPU reset parts from arm_load_kernel() into CPU
32
itself, but that refactoring that I probably would have to do
33
anyways later for CPU hotplug to work.
34
35
Reported-by: Auger Eric <eric.auger@redhat.com>
36
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
37
Reviewed-by: Eric Auger <eric.auger@redhat.com>
38
Tested-by: Eric Auger <eric.auger@redhat.com>
39
Message-id: 1527070950-208350-1-git-send-email-imammedo@redhat.com
40
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
41
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
42
---
12
---
43
hw/arm/boot.c | 18 +++++++++---------
13
target/arm/machine.c | 18 +++++++++---------
44
1 file changed, 9 insertions(+), 9 deletions(-)
14
1 file changed, 9 insertions(+), 9 deletions(-)
45
15
46
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
16
diff --git a/target/arm/machine.c b/target/arm/machine.c
47
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/arm/boot.c
18
--- a/target/arm/machine.c
49
+++ b/hw/arm/boot.c
19
+++ b/target/arm/machine.c
50
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
20
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
51
static const ARMInsnFixup *primary_loader;
21
}
52
AddressSpace *as = arm_boot_address_space(cpu, info);
22
}
53
23
54
+ /* CPU objects (unlike devices) are not automatically reset on system
24
+ /*
55
+ * reset, so we must always register a handler to do so. If we're
25
+ * Misaligned thumb pc is architecturally impossible. Fail the
56
+ * actually loading a kernel, the handler is also responsible for
26
+ * incoming migration. For TCG it would trigger the assert in
57
+ * arranging that we start it correctly.
27
+ * thumb_tr_translate_insn().
58
+ */
28
+ */
59
+ for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
29
+ if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
60
+ qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
30
+ return -1;
61
+ }
31
+ }
62
+
32
+
63
/* The board code is not supposed to set secure_board_setup unless
33
hw_breakpoint_update_all(cpu);
64
* running its code in secure mode is actually possible, and KVM
34
hw_watchpoint_update_all(cpu);
65
* doesn't support secure.
35
66
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
36
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
67
ARM_CPU(cs)->env.boot_info = info;
37
}
68
}
38
}
69
39
70
- /* CPU objects (unlike devices) are not automatically reset on system
40
- /*
71
- * reset, so we must always register a handler to do so. If we're
41
- * Misaligned thumb pc is architecturally impossible.
72
- * actually loading a kernel, the handler is also responsible for
42
- * We have an assert in thumb_tr_translate_insn to verify this.
73
- * arranging that we start it correctly.
43
- * Fail an incoming migrate to avoid this assert.
74
- */
44
- */
75
- for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
45
- if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
76
- qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
46
- return -1;
77
- }
47
- }
78
-
48
-
79
if (!info->skip_dtb_autoload && have_dtb(info)) {
49
if (!kvm_enabled()) {
80
if (arm_load_dtb(info->dtb_start, info, info->dtb_limit, as) < 0) {
50
pmu_op_finish(&cpu->env);
81
exit(1);
51
}
82
--
52
--
83
2.17.1
53
2.34.1
84
54
85
55
1
Add more detail to the documentation for memory_region_init_iommu()
1
From: Fabiano Rosas <farosas@suse.de>
2
and other IOMMU-related functions and data structures.
2
3
3
Since commit cf7c6d1004 ("target/arm: Split out cpregs.h") we now have
4
a cpregs.h header which is more suitable for this code.
5
6
Code moved verbatim.
7
8
Signed-off-by: Fabiano Rosas <farosas@suse.de>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Eric Auger <eric.auger@redhat.com>
8
Message-id: 20180521140402.23318-2-peter.maydell@linaro.org
9
---
13
---
10
include/exec/memory.h | 105 ++++++++++++++++++++++++++++++++++++++----
14
target/arm/cpregs.h | 98 +++++++++++++++++++++++++++++++++++++++++++++
11
1 file changed, 95 insertions(+), 10 deletions(-)
15
target/arm/cpu.h | 91 -----------------------------------------
12
16
2 files changed, 98 insertions(+), 91 deletions(-)
13
diff --git a/include/exec/memory.h b/include/exec/memory.h
17
18
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
15
--- a/include/exec/memory.h
20
--- a/target/arm/cpregs.h
16
+++ b/include/exec/memory.h
21
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
22
@@ -XXX,XX +XXX,XX @@ enum {
18
IOMMU_ATTR_SPAPR_TCE_FD
23
ARM_CP_SME = 1 << 19,
19
};
24
};
20
25
21
+/**
26
+/*
22
+ * IOMMUMemoryRegionClass:
27
+ * Interface for defining coprocessor registers.
23
+ *
28
+ * Registers are defined in tables of arm_cp_reginfo structs
24
+ * All IOMMU implementations need to subclass TYPE_IOMMU_MEMORY_REGION
29
+ * which are passed to define_arm_cp_regs().
25
+ * and provide an implementation of at least the @translate method here
30
+ */
26
+ * to handle requests to the memory region. Other methods are optional.
31
+
27
+ *
32
+/*
28
+ * The IOMMU implementation must use the IOMMU notifier infrastructure
33
+ * When looking up a coprocessor register we look for it
29
+ * to report whenever mappings are changed, by calling
34
+ * via an integer which encodes all of:
30
+ * memory_region_notify_iommu() (or, if necessary, by calling
35
+ * coprocessor number
31
+ * memory_region_notify_one() for each registered notifier).
36
+ * Crn, Crm, opc1, opc2 fields
32
+ */
37
+ * 32 or 64 bit register (ie is it accessed via MRC/MCR
33
typedef struct IOMMUMemoryRegionClass {
38
+ * or via MRRC/MCRR?)
34
/* private */
39
+ * non-secure/secure bank (AArch32 only)
35
struct DeviceClass parent_class;
40
+ * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
36
41
+ * (In this case crn and opc2 should be zero.)
37
/*
42
+ * For AArch64, there is no 32/64 bit size distinction;
38
- * Return a TLB entry that contains a given address. Flag should
43
+ * instead all registers have a 2 bit op0, 3 bit op1 and op2,
39
- * be the access permission of this translation operation. We can
44
+ * and 4 bit CRn and CRm. The encoding patterns are chosen
40
- * set flag to IOMMU_NONE to mean that we don't need any
45
+ * to be easy to convert to and from the KVM encodings, and also
41
- * read/write permission checks, like, when for region replay.
46
+ * so that the hashtable can contain both AArch32 and AArch64
42
+ * Return a TLB entry that contains a given address.
47
+ * registers (to allow for interprocessing where we might run
43
+ *
48
+ * 32 bit code on a 64 bit core).
44
+ * The IOMMUAccessFlags indicated via @flag are optional and may
49
+ */
45
+ * be specified as IOMMU_NONE to indicate that the caller needs
50
+/*
46
+ * the full translation information for both reads and writes. If
51
+ * This bit is private to our hashtable cpreg; in KVM register
47
+ * the access flags are specified then the IOMMU implementation
52
+ * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
48
+ * may use this as an optimization, to stop doing a page table
53
+ * in the upper bits of the 64 bit ID.
49
+ * walk as soon as it knows that the requested permissions are not
54
+ */
50
+ * allowed. If IOMMU_NONE is passed then the IOMMU must do the
55
+#define CP_REG_AA64_SHIFT 28
51
+ * full page table walk and report the permissions in the returned
56
+#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
52
+ * IOMMUTLBEntry. (Note that this implies that an IOMMU may not
57
+
53
+ * return different mappings for reads and writes.)
58
+/*
54
+ *
59
+ * To enable banking of coprocessor registers depending on ns-bit we
55
+ * The returned information remains valid while the caller is
60
+ * add a bit to distinguish between secure and non-secure cpregs in the
56
+ * holding the big QEMU lock or is inside an RCU critical section;
61
+ * hashtable.
57
+ * if the caller wishes to cache the mapping beyond that it must
62
+ */
58
+ * register an IOMMU notifier so it can invalidate its cached
63
+#define CP_REG_NS_SHIFT 29
59
+ * information when the IOMMU mapping changes.
64
+#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
60
+ *
65
+
61
+ * @iommu: the IOMMUMemoryRegion
66
+#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2) \
62
+ * @hwaddr: address to be translated within the memory region
67
+ ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) | \
63
+ * @flag: requested access permissions
68
+ ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
64
*/
69
+
65
IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
70
+#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
66
IOMMUAccessFlags flag);
71
+ (CP_REG_AA64_MASK | \
67
- /* Returns minimum supported page size */
72
+ ((cp) << CP_REG_ARM_COPROC_SHIFT) | \
68
+ /* Returns minimum supported page size in bytes.
73
+ ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) | \
69
+ * If this method is not provided then the minimum is assumed to
74
+ ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) | \
70
+ * be TARGET_PAGE_SIZE.
75
+ ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) | \
71
+ *
76
+ ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) | \
72
+ * @iommu: the IOMMUMemoryRegion
77
+ ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
73
+ */
78
+
74
uint64_t (*get_min_page_size)(IOMMUMemoryRegion *iommu);
79
+/*
75
- /* Called when IOMMU Notifier flag changed */
80
+ * Convert a full 64 bit KVM register ID to the truncated 32 bit
76
+ /* Called when IOMMU Notifier flag changes (ie when the set of
81
+ * version used as a key for the coprocessor register hashtable
77
+ * events which IOMMU users are requesting notification for changes).
82
+ */
78
+ * Optional method -- need not be provided if the IOMMU does not
83
+static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
79
+ * need to know exactly which events must be notified.
84
+{
80
+ *
85
+ uint32_t cpregid = kvmid;
81
+ * @iommu: the IOMMUMemoryRegion
86
+ if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
82
+ * @old_flags: events which previously needed to be notified
87
+ cpregid |= CP_REG_AA64_MASK;
83
+ * @new_flags: events which now need to be notified
88
+ } else {
84
+ */
89
+ if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
85
void (*notify_flag_changed)(IOMMUMemoryRegion *iommu,
90
+ cpregid |= (1 << 15);
86
IOMMUNotifierFlag old_flags,
91
+ }
87
IOMMUNotifierFlag new_flags);
92
+
88
- /* Set this up to provide customized IOMMU replay function */
93
+ /*
89
+ /* Called to handle memory_region_iommu_replay().
94
+ * KVM is always non-secure so add the NS flag on AArch32 register
90
+ *
95
+ * entries.
91
+ * The default implementation of memory_region_iommu_replay() is to
96
+ */
92
+ * call the IOMMU translate method for every page in the address space
97
+ cpregid |= 1 << CP_REG_NS_SHIFT;
93
+ * with flag == IOMMU_NONE and then call the notifier if translate
98
+ }
94
+ * returns a valid mapping. If this method is implemented then it
99
+ return cpregid;
95
+ * overrides the default behaviour, and must provide the full semantics
100
+}
96
+ * of memory_region_iommu_replay(), by calling @notifier for every
101
+
97
+ * translation present in the IOMMU.
102
+/*
98
+ *
103
+ * Convert a truncated 32 bit hashtable key into the full
99
+ * Optional method -- an IOMMU only needs to provide this method
104
+ * 64 bit KVM register ID.
100
+ * if the default is inefficient or produces undesirable side effects.
105
+ */
101
+ *
106
+static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
102
+ * Note: this is not related to record-and-replay functionality.
107
+{
103
+ */
108
+ uint64_t kvmid;
104
void (*replay)(IOMMUMemoryRegion *iommu, IOMMUNotifier *notifier);
109
+
105
110
+ if (cpregid & CP_REG_AA64_MASK) {
106
- /* Get IOMMU misc attributes */
111
+ kvmid = cpregid & ~CP_REG_AA64_MASK;
107
- int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr,
112
+ kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
108
+ /* Get IOMMU misc attributes. This is an optional method that
113
+ } else {
109
+ * can be used to allow users of the IOMMU to get implementation-specific
114
+ kvmid = cpregid & ~(1 << 15);
110
+ * information. The IOMMU implements this method to handle calls
115
+ if (cpregid & (1 << 15)) {
111
+ * by IOMMU users to memory_region_iommu_get_attr() by filling in
116
+ kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
112
+ * the arbitrary data pointer for any IOMMUMemoryRegionAttr values that
117
+ } else {
113
+ * the IOMMU supports. If the method is unimplemented then
118
+ kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
114
+ * memory_region_iommu_get_attr() will always return -EINVAL.
119
+ }
115
+ *
120
+ }
116
+ * @iommu: the IOMMUMemoryRegion
121
+ return kvmid;
117
+ * @attr: attribute being queried
122
+}
118
+ * @data: memory to fill in with the attribute data
123
+
119
+ *
124
/*
120
+ * Returns 0 on success, or a negative errno; in particular
125
* Valid values for ARMCPRegInfo state field, indicating which of
121
+ * returns -EINVAL for unrecognized or unimplemented attribute types.
126
* the AArch32 and AArch64 execution states this register is visible in.
122
+ */
127
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
123
+ int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
128
index XXXXXXX..XXXXXXX 100644
124
void *data);
129
--- a/target/arm/cpu.h
125
} IOMMUMemoryRegionClass;
130
+++ b/target/arm/cpu.h
126
131
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
127
@@ -XXX,XX +XXX,XX @@ static inline void memory_region_init_reservation(MemoryRegion *mr,
132
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
128
* An IOMMU region translates addresses and forwards accesses to a target
133
uint32_t cur_el, bool secure);
129
* memory region.
134
130
*
135
-/* Interface for defining coprocessor registers.
131
+ * The IOMMU implementation must define a subclass of TYPE_IOMMU_MEMORY_REGION.
136
- * Registers are defined in tables of arm_cp_reginfo structs
132
+ * @_iommu_mr should be a pointer to enough memory for an instance of
137
- * which are passed to define_arm_cp_regs().
133
+ * that subclass, @instance_size is the size of that subclass, and
138
- */
134
+ * @mrtypename is its name. This function will initialize @_iommu_mr as an
139
-
135
+ * instance of the subclass, and its methods will then be called to handle
140
-/* When looking up a coprocessor register we look for it
136
+ * accesses to the memory region. See the documentation of
141
- * via an integer which encodes all of:
137
+ * #IOMMUMemoryRegionClass for further details.
142
- * coprocessor number
138
+ *
143
- * Crn, Crm, opc1, opc2 fields
139
* @_iommu_mr: the #IOMMUMemoryRegion to be initialized
144
- * 32 or 64 bit register (ie is it accessed via MRC/MCR
140
* @instance_size: the IOMMUMemoryRegion subclass instance size
145
- * or via MRRC/MCRR?)
141
* @mrtypename: the type name of the #IOMMUMemoryRegion
146
- * non-secure/secure bank (AArch32 only)
142
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
147
- * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
143
* a notifier with the minimum page granularity returned by
148
- * (In this case crn and opc2 should be zero.)
144
* mr->iommu_ops->get_page_size().
149
- * For AArch64, there is no 32/64 bit size distinction;
145
*
150
- * instead all registers have a 2 bit op0, 3 bit op1 and op2,
146
+ * Note: this is not related to record-and-replay functionality.
151
- * and 4 bit CRn and CRm. The encoding patterns are chosen
147
+ *
152
- * to be easy to convert to and from the KVM encodings, and also
148
* @iommu_mr: the memory region to observe
153
- * so that the hashtable can contain both AArch32 and AArch64
149
* @n: the notifier to which to replay iommu mappings
154
- * registers (to allow for interprocessing where we might run
150
*/
155
- * 32 bit code on a 64 bit core).
151
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n);
156
- */
152
* memory_region_iommu_replay_all: replay existing IOMMU translations
157
-/* This bit is private to our hashtable cpreg; in KVM register
153
* to all the notifiers registered.
158
- * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
154
*
159
- * in the upper bits of the 64 bit ID.
155
+ * Note: this is not related to record-and-replay functionality.
160
- */
156
+ *
161
-#define CP_REG_AA64_SHIFT 28
157
* @iommu_mr: the memory region to observe
162
-#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
158
*/
163
-
159
void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr);
164
-/* To enable banking of coprocessor registers depending on ns-bit we
160
@@ -XXX,XX +XXX,XX @@ void memory_region_unregister_iommu_notifier(MemoryRegion *mr,
165
- * add a bit to distinguish between secure and non-secure cpregs in the
161
* memory_region_iommu_get_attr: return an IOMMU attr if get_attr() is
166
- * hashtable.
162
* defined on the IOMMU.
167
- */
163
*
168
-#define CP_REG_NS_SHIFT 29
164
- * Returns 0 if succeded, error code otherwise.
169
-#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
165
+ * Returns 0 on success, or a negative errno otherwise. In particular,
170
-
166
+ * -EINVAL indicates that the IOMMU does not support the requested
171
-#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2) \
167
+ * attribute.
172
- ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) | \
168
*
173
- ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
169
* @iommu_mr: the memory region
174
-
170
* @attr: the requested attribute
175
-#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
176
- (CP_REG_AA64_MASK | \
177
- ((cp) << CP_REG_ARM_COPROC_SHIFT) | \
178
- ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) | \
179
- ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) | \
180
- ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) | \
181
- ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) | \
182
- ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
183
-
184
-/* Convert a full 64 bit KVM register ID to the truncated 32 bit
185
- * version used as a key for the coprocessor register hashtable
186
- */
187
-static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
188
-{
189
- uint32_t cpregid = kvmid;
190
- if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
191
- cpregid |= CP_REG_AA64_MASK;
192
- } else {
193
- if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
194
- cpregid |= (1 << 15);
195
- }
196
-
197
- /* KVM is always non-secure so add the NS flag on AArch32 register
198
- * entries.
199
- */
200
- cpregid |= 1 << CP_REG_NS_SHIFT;
201
- }
202
- return cpregid;
203
-}
204
-
205
-/* Convert a truncated 32 bit hashtable key into the full
206
- * 64 bit KVM register ID.
207
- */
208
-static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
209
-{
210
- uint64_t kvmid;
211
-
212
- if (cpregid & CP_REG_AA64_MASK) {
213
- kvmid = cpregid & ~CP_REG_AA64_MASK;
214
- kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
215
- } else {
216
- kvmid = cpregid & ~(1 << 15);
217
- if (cpregid & (1 << 15)) {
218
- kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
219
- } else {
220
- kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
221
- }
222
- }
223
- return kvmid;
224
-}
225
-
226
/* Return the highest implemented Exception Level */
227
static inline int arm_highest_el(CPUARMState *env)
228
{
171
--
229
--
172
2.17.1
230
2.34.1
173
231
174
232
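Illustration (not part of this series): the hunks above describe how a coprocessor
register is packed into a single 32 bit hashtable key (coprocessor number,
CRn/CRm/opc1/opc2, a 64-bit flag at bit 15 and the NS bank bit at bit 29, as in
ENCODE_CP_REG above). A minimal self-contained sketch of that layout, with
made-up register values and without QEMU's CP_REG_* constants:

    /* Illustrative only: pack the AArch32 fields into a 32 bit key using
     * the bit positions quoted above (NS at bit 29, is64 at bit 15,
     * coprocessor number at bit 16, then crn/crm/opc1/opc2). */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t encode_cp_reg(uint32_t cp, uint32_t is64, uint32_t ns,
                                  uint32_t crn, uint32_t crm,
                                  uint32_t opc1, uint32_t opc2)
    {
        return (ns << 29) | (cp << 16) | (is64 << 15) |
               (crn << 11) | (crm << 7) | (opc1 << 3) | opc2;
    }

    int main(void)
    {
        /* hypothetical non-secure cp15 register: crn=1, crm=0, opc1=0, opc2=0 */
        printf("key = 0x%08x\n", encode_cp_reg(15, 0, 1, 1, 0, 0, 0));
        return 0;
    }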
1
From: Jan Kiszka <jan.kiszka@siemens.com>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
There was a nasty flip in identifying which register group an access is
3
If a test was tagged with the "accel" tag and the specified
4
targeting. The issue caused spuriously raised priorities of the guest
4
accelerator is not present in the qemu binary, cancel the test.
5
when handing CPUs over in the Jailhouse hypervisor.
6
5
7
Cc: qemu-stable@nongnu.org
6
We can now write tests without explicit calls to require_accelerator,
8
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
7
just the tag is enough.
9
Message-id: 28b927d3-da58-bce4-cc13-bfec7f9b1cb9@siemens.com
8
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
hw/intc/arm_gicv3_cpuif.c | 12 ++++++------
14
tests/avocado/avocado_qemu/__init__.py | 4 ++++
14
1 file changed, 6 insertions(+), 6 deletions(-)
15
1 file changed, 4 insertions(+)
15
16
16
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
17
diff --git a/tests/avocado/avocado_qemu/__init__.py b/tests/avocado/avocado_qemu/__init__.py
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/arm_gicv3_cpuif.c
19
--- a/tests/avocado/avocado_qemu/__init__.py
19
+++ b/hw/intc/arm_gicv3_cpuif.c
20
+++ b/tests/avocado/avocado_qemu/__init__.py
20
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
21
@@ -XXX,XX +XXX,XX @@ def setUp(self):
21
{
22
22
GICv3CPUState *cs = icc_cs_from_env(env);
23
super().setUp('qemu-system-')
23
int regno = ri->opc2 & 3;
24
24
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
25
+ accel_required = self._get_unique_tag_val('accel')
25
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
26
+ if accel_required:
26
uint64_t value = cs->ich_apr[grp][regno];
27
+ self.require_accelerator(accel_required)
27
28
+
28
trace_gicv3_icv_ap_read(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
29
self.machine = self.params.get('machine',
29
@@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
30
default=self._get_unique_tag_val('machine'))
30
{
31
GICv3CPUState *cs = icc_cs_from_env(env);
32
int regno = ri->opc2 & 3;
33
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
34
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
35
36
trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
37
38
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
39
uint64_t value;
40
41
int regno = ri->opc2 & 3;
42
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
43
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
44
45
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
46
return icv_ap_read(env, ri);
47
@@ -XXX,XX +XXX,XX @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
48
GICv3CPUState *cs = icc_cs_from_env(env);
49
50
int regno = ri->opc2 & 3;
51
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
52
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
53
54
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
55
icv_ap_write(env, ri, value);
56
@@ -XXX,XX +XXX,XX @@ static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
57
{
58
GICv3CPUState *cs = icc_cs_from_env(env);
59
int regno = ri->opc2 & 3;
60
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
61
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
62
uint64_t value;
63
64
value = cs->ich_apr[grp][regno];
65
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
66
{
67
GICv3CPUState *cs = icc_cs_from_env(env);
68
int regno = ri->opc2 & 3;
69
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
70
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
71
72
trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
73
31
74
--
32
--
75
2.17.1
33
2.34.1
76
34
77
35
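For context on the fix above: the Group 0 active-priority registers
(ICC_AP0R<n>, ICV_AP0R<n>, ICH_AP0R<n>) are encoded with an even CRm and the
Group 1 registers (...AP1R<n>) with an odd CRm, so the low CRm bit selects the
group. A tiny sketch of the corrected mapping, using placeholder constants
rather than QEMU's GICV3_G0/G1NS values:

    /* Illustrative only: AP0R<n> (even CRm) belongs to Group 0,
     * AP1R<n> (odd CRm) to Group 1, as in the corrected ternaries above. */
    enum { GROUP_0 = 0, GROUP_1 = 1 };

    static int apr_group(int crm)
    {
        return (crm & 1) ? GROUP_1 : GROUP_0;
    }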
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
cpregs_keys is a uint32_t *, so the allocation should use uint32_t.
3
This allows the test to be skipped when TCG is not present in the QEMU
4
g_new is even better because it is type-safe.
4
binary.
5
5
6
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6
Signed-off-by: Fabiano Rosas <farosas@suse.de>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
target/arm/gdbstub.c | 3 +--
11
tests/avocado/boot_linux_console.py | 1 +
12
1 file changed, 1 insertion(+), 2 deletions(-)
12
tests/avocado/reverse_debugging.py | 8 ++++++++
13
2 files changed, 9 insertions(+)
13
14
14
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
15
diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/gdbstub.c
17
--- a/tests/avocado/boot_linux_console.py
17
+++ b/target/arm/gdbstub.c
18
+++ b/tests/avocado/boot_linux_console.py
18
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
19
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_uboot_netbsd9(self):
19
RegisterSysregXmlParam param = {cs, s};
20
20
21
def test_aarch64_raspi3_atf(self):
21
cpu->dyn_xml.num_cpregs = 0;
22
"""
22
- cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
23
+ :avocado: tags=accel:tcg
23
- g_hash_table_size(cpu->cp_regs));
24
:avocado: tags=arch:aarch64
24
+ cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
25
:avocado: tags=machine:raspi3b
25
g_string_printf(s, "<?xml version=\"1.0\"?>");
26
:avocado: tags=cpu:cortex-a53
26
g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
27
diff --git a/tests/avocado/reverse_debugging.py b/tests/avocado/reverse_debugging.py
27
g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
28
index XXXXXXX..XXXXXXX 100644
29
--- a/tests/avocado/reverse_debugging.py
30
+++ b/tests/avocado/reverse_debugging.py
31
@@ -XXX,XX +XXX,XX @@ def reverse_debugging(self, shift=7, args=None):
32
vm.shutdown()
33
34
class ReverseDebugging_X86_64(ReverseDebugging):
35
+ """
36
+ :avocado: tags=accel:tcg
37
+ """
38
+
39
REG_PC = 0x10
40
REG_CS = 0x12
41
def get_pc(self, g):
42
@@ -XXX,XX +XXX,XX @@ def test_x86_64_pc(self):
43
self.reverse_debugging()
44
45
class ReverseDebugging_AArch64(ReverseDebugging):
46
+ """
47
+ :avocado: tags=accel:tcg
48
+ """
49
+
50
REG_PC = 32
51
52
# unidentified gitlab timeout problem
28
--
53
--
29
2.17.1
54
2.34.1
30
55
31
56
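Illustration of the class of bug fixed by the g_malloc/g_new change above (an
editorial sketch, not part of the series): with g_malloc the element size is a
free-form sizeof expression, so sizeof(type *) can silently stand in for
sizeof(type); g_new ties the size to the type:

    #include <glib.h>

    int main(void)
    {
        guint n = 16;

        /* Error-prone: compiles, but the element size is taken from the
         * pointer type rather than the intended uint32_t elements. */
        guint32 *keys_bad = g_malloc(sizeof(guint32 *) * n);

        /* Type-safe equivalent: the element size comes from the type. */
        guint32 *keys = g_new(guint32, n);

        g_free(keys_bad);
        g_free(keys);
        return 0;
    }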
New patch
1
From: Fabiano Rosas <farosas@suse.de>
1
2
3
Now that the cortex-a15 is under CONFIG_TCG, use the 'max' CPU as the
4
default for a KVM-only build.
5
6
Note that we cannot use 'host' here because the qtests can run without
7
any accelerator other than qtest, and 'host' depends on KVM being
8
enabled.
9
10
Signed-off-by: Fabiano Rosas <farosas@suse.de>
11
Acked-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Thomas Huth <thuth@redhat.com>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
hw/arm/virt.c | 4 ++++
16
1 file changed, 4 insertions(+)
17
18
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/virt.c
21
+++ b/hw/arm/virt.c
22
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
23
mc->minimum_page_bits = 12;
24
mc->possible_cpu_arch_ids = virt_possible_cpu_arch_ids;
25
mc->cpu_index_to_instance_props = virt_cpu_index_to_props;
26
+#ifdef CONFIG_TCG
27
mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a15");
28
+#else
29
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("max");
30
+#endif
31
mc->get_default_cpu_node_id = virt_get_default_cpu_node_id;
32
mc->kvm_type = virt_kvm_type;
33
assert(!mc->get_hotplug_handler);
34
--
35
2.34.1
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
acpi_data_push uses g_array_set_size to resize the memory. If there
3
Signed-off-by: Fabiano Rosas <farosas@suse.de>
4
is not enough contiguous memory, the address will change, so the previous
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
pointer can no longer be used. The code must update the pointer and use
5
Acked-by: Thomas Huth <thuth@redhat.com>
6
the new one.
7
8
Also, the previous code wrongly used the le32 conversion of iort->node_offset
9
for subsequent computations, which produces an incorrect value if the host is
10
not little endian. So use the non-converted value instead.
11
12
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
Message-id: 1527663951-14552-1-git-send-email-zhaoshenglong@huawei.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
7
---
17
hw/arm/virt-acpi-build.c | 20 +++++++++++++++-----
8
tests/qtest/arm-cpu-features.c | 28 ++++++++++++++++++----------
18
1 file changed, 15 insertions(+), 5 deletions(-)
9
1 file changed, 18 insertions(+), 10 deletions(-)
19
10
20
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
11
diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
21
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/virt-acpi-build.c
13
--- a/tests/qtest/arm-cpu-features.c
23
+++ b/hw/arm/virt-acpi-build.c
14
+++ b/tests/qtest/arm-cpu-features.c
24
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
15
@@ -XXX,XX +XXX,XX @@
25
AcpiIortItsGroup *its;
16
#define SVE_MAX_VQ 16
26
AcpiIortTable *iort;
17
27
AcpiIortSmmu3 *smmu;
18
#define MACHINE "-machine virt,gic-version=max -accel tcg "
28
- size_t node_size, iort_length, smmu_offset = 0;
19
-#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm -accel tcg "
29
+ size_t node_size, iort_node_offset, iort_length, smmu_offset = 0;
20
+#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm "
30
AcpiIortRC *rc;
21
#define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
31
22
" 'arguments': { 'type': 'full', "
32
iort = acpi_data_push(table_data, sizeof(*iort));
23
#define QUERY_TAIL "}}"
33
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
24
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
34
25
{
35
iort_length = sizeof(*iort);
26
g_test_init(&argc, &argv, NULL);
36
iort->node_count = cpu_to_le32(nb_nodes);
27
37
- iort->node_offset = cpu_to_le32(sizeof(*iort));
28
- qtest_add_data_func("/arm/query-cpu-model-expansion",
38
+ /*
29
- NULL, test_query_cpu_model_expansion);
39
+ * Use a copy in case table_data->data moves during acpi_data_push
30
+ if (qtest_has_accel("tcg")) {
40
+ * operations.
31
+ qtest_add_data_func("/arm/query-cpu-model-expansion",
41
+ */
32
+ NULL, test_query_cpu_model_expansion);
42
+ iort_node_offset = sizeof(*iort);
33
+ }
43
+ iort->node_offset = cpu_to_le32(iort_node_offset);
34
+
44
35
+ if (!g_str_equal(qtest_get_arch(), "aarch64")) {
45
/* ITS group node */
36
+ goto out;
46
node_size = sizeof(*its) + sizeof(uint32_t);
37
+ }
47
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
38
48
int irq = vms->irqmap[VIRT_SMMU];
39
/*
49
40
* For now we only run KVM specific tests with AArch64 QEMU in
50
/* SMMUv3 node */
41
* order avoid attempting to run an AArch32 QEMU with KVM on
51
- smmu_offset = iort->node_offset + node_size;
42
* AArch64 hosts. That won't work and isn't easy to detect.
52
+ smmu_offset = iort_node_offset + node_size;
43
*/
53
node_size = sizeof(*smmu) + sizeof(*idmap);
44
- if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("kvm")) {
54
iort_length += node_size;
45
+ if (qtest_has_accel("kvm")) {
55
smmu = acpi_data_push(table_data, node_size);
46
/*
56
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
47
* This tests target the 'host' CPU type, so register it only if
57
idmap->id_count = cpu_to_le32(0xFFFF);
48
* KVM is available.
58
idmap->output_base = 0;
49
*/
59
/* output IORT node is the ITS group node (the first node) */
50
qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
60
- idmap->output_reference = cpu_to_le32(iort->node_offset);
51
NULL, test_query_cpu_model_expansion_kvm);
61
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
52
- }
53
54
- if (g_str_equal(qtest_get_arch(), "aarch64")) {
55
- qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
56
- NULL, sve_tests_sve_max_vq_8);
57
- qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
58
- NULL, sve_tests_sve_off);
59
qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
60
NULL, sve_tests_sve_off_kvm);
62
}
61
}
63
62
64
/* Root Complex Node */
63
+ if (qtest_has_accel("tcg")) {
65
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
64
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
66
idmap->output_reference = cpu_to_le32(smmu_offset);
65
+ NULL, sve_tests_sve_max_vq_8);
67
} else {
66
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
68
/* output IORT node is the ITS group node (the first node) */
67
+ NULL, sve_tests_sve_off);
69
- idmap->output_reference = cpu_to_le32(iort->node_offset);
68
+ }
70
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
69
+
71
}
70
+out:
72
71
return g_test_run();
73
+ /*
72
}
74
+ * Update the pointer address in case table_data->data moves during above
75
+ * acpi_data_push operations.
76
+ */
77
+ iort = (AcpiIortTable *)(table_data->data + iort_start);
78
iort->length = cpu_to_le32(iort_length);
79
80
build_header(linker, table_data, (void *)(table_data->data + iort_start),
81
--
73
--
82
2.17.1
74
2.34.1
83
84
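The IORT fix above follows a general GLib pattern worth spelling out (an
editorial sketch with a hypothetical header struct, not QEMU code):
g_array_set_size() may reallocate the backing storage, so a pointer previously
taken into array->data must be re-derived after the array grows:

    #include <glib.h>
    #include <stdint.h>

    typedef struct {
        uint32_t length;
        uint32_t node_offset;
    } Header;   /* hypothetical table header, for illustration only */

    int main(void)
    {
        GArray *blob = g_array_new(FALSE, TRUE, 1);

        g_array_set_size(blob, sizeof(Header));
        Header *hdr = (Header *)blob->data;      /* pointer into the array */
        hdr->node_offset = sizeof(Header);

        /* Growing the array may move blob->data, invalidating 'hdr'... */
        g_array_set_size(blob, blob->len + 4096);

        /* ...so re-derive the pointer before writing through it again. */
        hdr = (Header *)blob->data;
        hdr->length = blob->len;

        g_array_free(blob, TRUE);
        return 0;
    }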
New patch
1
From: Fabiano Rosas <farosas@suse.de>
1
2
3
These tests set -accel tcg, so restrict them to when TCG is present.
4
5
Signed-off-by: Fabiano Rosas <farosas@suse.de>
6
Acked-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Thomas Huth <thuth@redhat.com>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
tests/qtest/meson.build | 4 ++--
11
1 file changed, 2 insertions(+), 2 deletions(-)
12
13
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tests/qtest/meson.build
16
+++ b/tests/qtest/meson.build
17
@@ -XXX,XX +XXX,XX @@ qtests_arm = \
18
# TODO: once aarch64 TCG is fixed on ARM 32 bit host, make bios-tables-test unconditional
19
qtests_aarch64 = \
20
(cpu != 'arm' and unpack_edk2_blobs ? ['bios-tables-test'] : []) + \
21
- (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) + \
22
- (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) + \
23
+ (config_all.has_key('CONFIG_TCG') and config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? \
24
+ ['tpm-tis-device-test', 'tpm-tis-device-swtpm-test'] : []) + \
25
(config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
26
(config_all_devices.has_key('CONFIG_RASPI') ? ['bcm2835-dma-test'] : []) + \
27
['arm-cpu-features',
28
--
29
2.34.1