target-arm queue: mostly smallish stuff. I expect to send
out another pullreq at the end of this week, but since this
is up to 32 patches already I'd rather send it out now
than accumulate a monster sized patchset.

thanks
-- PMM


The following changes since commit 0ab4c574a55448a37b9f616259b82950742c9427:

  Merge remote-tracking branch 'remotes/kraxel/tags/ui-20180626-pull-request' into staging (2018-06-26 16:44:57 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180626

for you to fetch changes up to 9b945a9ee36a34eaeca412ef9ef35fbfe33c2c85:

  aspeed/timer: use the APB frequency from the SCU (2018-06-26 17:50:42 +0100)

----------------------------------------------------------------
target-arm queue:
 * aspeed: set APB clocks correctly (fixes slowdown on palmetto)
 * smmuv3: cache config data and TLB entries
 * v7m/v8m: support read/write from MPU regions smaller than 1K
 * various: clean up logging/debug messages
 * xilinx_spips: Make dma transactions as per dma_burst_size

----------------------------------------------------------------
Cédric Le Goater (6):
      aspeed/smc: fix dummy cycles count when in dual IO mode
      aspeed/smc: fix HW strapping
      aspeed/smc: rename aspeed_smc_flash_send_addr() to aspeed_smc_flash_setup()
      aspeed/scu: introduce clock frequencies
      aspeed: initialize the SCU controller first
      aspeed/timer: use the APB frequency from the SCU

Eric Auger (3):
      hw/arm/smmuv3: Cache/invalidate config data
      hw/arm/smmuv3: IOTLB emulation
      hw/arm/smmuv3: Add notifications on invalidation

Jia He (1):
      hw/arm/smmuv3: Fix translate error handling

Joel Stanley (1):
      MAINTAINERS: Add ASPEED BMCs

Peter Maydell (3):
      tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
      target/arm: Set page (region) size in get_phys_addr_pmsav7()
      target/arm: Handle small regions in get_phys_addr_pmsav8()

Philippe Mathieu-Daudé (17):
      MAINTAINERS: Adopt the Gumstix computers-on-module machines
      hw/input/pckbd: Use qemu_log_mask(GUEST_ERROR) instead of fprintf
      hw/input/tsc2005: Use qemu_log_mask(GUEST_ERROR) instead of fprintf
      hw/dma/omap_dma: Use qemu_log_mask(UNIMP) instead of printf
      hw/dma/omap_dma: Use qemu_log_mask(GUEST_ERROR) instead of fprintf
      hw/ssi/omap_spi: Use qemu_log_mask(GUEST_ERROR) instead of fprintf
      hw/sd/omap_mmc: Use qemu_log_mask(UNIMP) instead of printf
      hw/i2c/omap_i2c: Use qemu_log_mask(UNIMP) instead of fprintf
      hw/arm/omap1: Use qemu_log_mask(GUEST_ERROR) instead of fprintf
      hw/arm/omap: Use qemu_log_mask(GUEST_ERROR) instead of fprintf
      hw/arm/stellaris: Use qemu_log_mask(UNIMP) instead of fprintf
      hw/net/stellaris_enet: Fix a typo
      hw/net/stellaris_enet: Use qemu_log_mask(GUEST_ERROR) instead of hw_error
      hw/net/smc91c111: Use qemu_log_mask(GUEST_ERROR) instead of hw_error
      hw/net/smc91c111: Use qemu_log_mask(UNIMP) instead of fprintf
      hw/arm/stellaris: Fix gptm_write() error message
      hw/arm/stellaris: Use HWADDR_PRIx to display register address

Sai Pavan Boddu (1):
      xilinx_spips: Make dma transactions as per dma_burst_size

 accel/tcg/softmmu_template.h    | 24 ++-
 hw/arm/smmuv3-internal.h        | 12 +-
 include/exec/cpu-all.h          | 5 +-
 include/hw/arm/omap.h           | 30 +--
 include/hw/arm/smmu-common.h    | 24 +++
 include/hw/arm/smmuv3.h         | 1 +
 include/hw/misc/aspeed_scu.h    | 70 ++++++-
 include/hw/ssi/xilinx_spips.h   | 5 +-
 include/hw/timer/aspeed_timer.h | 4 +
 accel/tcg/cputlb.c              | 131 +++++++++++--
 hw/arm/aspeed_soc.c             | 42 ++--
 hw/arm/omap1.c                  | 18 +-
 hw/arm/smmu-common.c            | 118 ++++++++++-
 hw/arm/smmuv3.c                 | 420 ++++++++++++++++++++++++++++++++++++----
 hw/arm/stellaris.c              | 8 +-
 hw/dma/omap_dma.c               | 70 ++++---
 hw/i2c/omap_i2c.c               | 20 +-
 hw/input/pckbd.c                | 4 +-
 hw/input/tsc2005.c              | 13 +-
 hw/misc/aspeed_scu.c            | 106 ++++++++++
 hw/net/smc91c111.c              | 21 +-
 hw/net/stellaris_enet.c         | 11 +-
 hw/sd/omap_mmc.c                | 13 +-
 hw/ssi/aspeed_smc.c             | 48 ++---
 hw/ssi/omap_spi.c               | 15 +-
 hw/ssi/xilinx_spips.c           | 23 ++-
 hw/timer/aspeed_timer.c         | 19 +-
 target/arm/helper.c             | 115 +++++++----
 MAINTAINERS                     | 14 +-
 hw/arm/trace-events             | 27 ++-
 30 files changed, 1176 insertions(+), 255 deletions(-)
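To try the series locally, the usual workflow is to fetch the tag from the repository quoted above and merge it, e.g. "git fetch git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180626" followed by "git merge FETCH_HEAD".
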
From: Cédric Le Goater <clg@kaod.org>

When configured in dual I/O mode, address and data are sent in dual
mode, including the dummy byte cycles in between. Adapt the count to
the IO setting.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20180612065716.10587-2-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/aspeed_smc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/hw/ssi/aspeed_smc.c b/hw/ssi/aspeed_smc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/aspeed_smc.c
+++ b/hw/ssi/aspeed_smc.c
@@ -XXX,XX +XXX,XX @@
 
 /* CEx Control Register */
 #define R_CTRL0 (0x10 / 4)
+#define CTRL_IO_DUAL_DATA (1 << 29)
+#define CTRL_IO_DUAL_ADDR_DATA (1 << 28) /* Includes dummies */
 #define CTRL_CMD_SHIFT 16
 #define CTRL_CMD_MASK 0xff
 #define CTRL_DUMMY_HIGH_SHIFT 14
@@ -XXX,XX +XXX,XX @@ static int aspeed_smc_flash_dummies(const AspeedSMCFlash *fl)
     uint32_t r_ctrl0 = s->regs[s->r_ctrl0 + fl->id];
     uint32_t dummy_high = (r_ctrl0 >> CTRL_DUMMY_HIGH_SHIFT) & 0x1;
     uint32_t dummy_low = (r_ctrl0 >> CTRL_DUMMY_LOW_SHIFT) & 0x3;
+    uint32_t dummies = ((dummy_high << 2) | dummy_low) * 8;
 
-    return ((dummy_high << 2) | dummy_low) * 8;
+    if (r_ctrl0 & CTRL_IO_DUAL_ADDR_DATA) {
+        dummies /= 2;
+    }
+
+    return dummies;
 }
 
 static void aspeed_smc_flash_send_addr(AspeedSMCFlash *fl, uint32_t addr)
--
2.17.1

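A quick worked example of the new count (register field values picked purely for illustration): with DUMMY_HIGH = 0 and DUMMY_LOW = 2 the model computes ((0 << 2) | 2) * 8 = 16, and that count is halved to 8 once the CTRL_IO_DUAL_ADDR_DATA bit is set, reflecting that dummies are also clocked out in dual mode.
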
From: Cédric Le Goater <clg@kaod.org>

Only the flash type is strapped by HW. The 4BYTE mode is set by
firmware when the flash device is detected.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20180612065716.10587-3-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/aspeed_smc.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/hw/ssi/aspeed_smc.c b/hw/ssi/aspeed_smc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/aspeed_smc.c
+++ b/hw/ssi/aspeed_smc.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_smc_reset(DeviceState *d)
         aspeed_smc_segment_to_reg(&s->ctrl->segments[i]);
     }
 
-    /* HW strapping for AST2500 FMC controllers */
+    /* HW strapping flash type for FMC controllers */
     if (s->ctrl->segments == aspeed_segments_ast2500_fmc) {
         /* flash type is fixed to SPI for CE0 and CE1 */
         s->regs[s->r_conf] |= (CONF_FLASH_TYPE_SPI << CONF_FLASH_TYPE0);
         s->regs[s->r_conf] |= (CONF_FLASH_TYPE_SPI << CONF_FLASH_TYPE1);
-
-        /* 4BYTE mode is autodetected for CE0. Let's force it to 1 for
-         * now */
-        s->regs[s->r_ce_ctrl] |= (1 << (CTRL_EXTENDED0));
     }
 
     /* HW strapping for AST2400 FMC controllers (SCU70). Let's use the
      * configuration of the palmetto-bmc machine */
     if (s->ctrl->segments == aspeed_segments_fmc) {
         s->regs[s->r_conf] |= (CONF_FLASH_TYPE_SPI << CONF_FLASH_TYPE0);
-
-        s->regs[s->r_ce_ctrl] |= (1 << (CTRL_EXTENDED0));
     }
 }
--
2.17.1

From: Cédric Le Goater <clg@kaod.org>

Also handle the fake transfers for dummy bytes in this setup
routine. It will be useful when we activate MMIO execution.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20180612065716.10587-4-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/aspeed_smc.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/hw/ssi/aspeed_smc.c b/hw/ssi/aspeed_smc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/aspeed_smc.c
+++ b/hw/ssi/aspeed_smc.c
@@ -XXX,XX +XXX,XX @@ static int aspeed_smc_flash_dummies(const AspeedSMCFlash *fl)
     return dummies;
 }
 
-static void aspeed_smc_flash_send_addr(AspeedSMCFlash *fl, uint32_t addr)
+static void aspeed_smc_flash_setup(AspeedSMCFlash *fl, uint32_t addr)
 {
     const AspeedSMCState *s = fl->controller;
     uint8_t cmd = aspeed_smc_flash_cmd(fl);
+    int i;
 
     /* Flash access can not exceed CS segment */
     addr = aspeed_smc_check_segment_addr(fl, addr);
@@ -XXX,XX +XXX,XX @@ static void aspeed_smc_flash_send_addr(AspeedSMCFlash *fl, uint32_t addr)
     ssi_transfer(s->spi, (addr >> 16) & 0xff);
     ssi_transfer(s->spi, (addr >> 8) & 0xff);
     ssi_transfer(s->spi, (addr & 0xff));
+
+    /*
+     * Use fake transfers to model dummy bytes. The value should
+     * be configured to some non-zero value in fast read mode and
+     * zero in read mode. But, as the HW allows inconsistent
+     * settings, let's check for fast read mode.
+     */
+    if (aspeed_smc_flash_mode(fl) == CTRL_FREADMODE) {
+        for (i = 0; i < aspeed_smc_flash_dummies(fl); i++) {
+            ssi_transfer(fl->controller->spi, 0xFF);
+        }
+    }
 }
 
 static uint64_t aspeed_smc_flash_read(void *opaque, hwaddr addr, unsigned size)
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_smc_flash_read(void *opaque, hwaddr addr, unsigned size)
     case CTRL_READMODE:
     case CTRL_FREADMODE:
         aspeed_smc_flash_select(fl);
-        aspeed_smc_flash_send_addr(fl, addr);
-
-        /*
-         * Use fake transfers to model dummy bytes. The value should
-         * be configured to some non-zero value in fast read mode and
-         * zero in read mode. But, as the HW allows inconsistent
-         * settings, let's check for fast read mode.
-         */
-        if (aspeed_smc_flash_mode(fl) == CTRL_FREADMODE) {
-            for (i = 0; i < aspeed_smc_flash_dummies(fl); i++) {
-                ssi_transfer(fl->controller->spi, 0xFF);
-            }
-        }
+        aspeed_smc_flash_setup(fl, addr);
 
         for (i = 0; i < size; i++) {
             ret |= ssi_transfer(s->spi, 0x0) << (8 * i);
@@ -XXX,XX +XXX,XX @@ static void aspeed_smc_flash_write(void *opaque, hwaddr addr, uint64_t data,
         break;
     case CTRL_WRITEMODE:
         aspeed_smc_flash_select(fl);
-        aspeed_smc_flash_send_addr(fl, addr);
+        aspeed_smc_flash_setup(fl, addr);
 
         for (i = 0; i < size; i++) {
             ssi_transfer(s->spi, (data >> (8 * i)) & 0xff);
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

These COMs are hard to find, and the company dropped support
a few years ago.

Per the "Gumstix Product Changes, Known Issues, and EOL" pdf:

- Phasing out: PXA270-based Verdex product line
  September 2012

- Phasing out: PXA255-based Basix & Connex
  September 2009

However there are still bootable SD card images available, very
convenient to stress test the QEMU SD card implementation.
Therefore I volunteer to keep an eye on this file, while it
is useful for testing.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180606144706.29732-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/arm/digic.h
 F: hw/*/digic*
 
 Gumstix
+M: Philippe Mathieu-Daudé <f4bug@amsat.org>
 L: qemu-devel@nongnu.org
 L: qemu-arm@nongnu.org
-S: Orphan
+S: Odd Fixes
 F: hw/arm/gumstix.c
 
 i.MX31
--
2.17.1

From: Sai Pavan Boddu <saipava@xilinx.com>

QSPI DMA has a burst length of 64 bytes, so limit the transactions
with respect to the dma-burst-size property.

Signed-off-by: Sai Pavan Boddu <saipava@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1529660880-30376-1-git-send-email-sai.pavan.boddu@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/ssi/xilinx_spips.h | 5 ++++-
 hw/ssi/xilinx_spips.c         | 23 ++++++++++++++++++++---
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/ssi/xilinx_spips.h
+++ b/include/hw/ssi/xilinx_spips.h
@@ -XXX,XX +XXX,XX @@ typedef struct XilinxSPIPS XilinxSPIPS;
 /* Bite off 4k chunks at a time */
 #define LQSPI_CACHE_SIZE 1024
 
+#define QSPI_DMA_MAX_BURST_SIZE 2048
+
 typedef enum {
     READ = 0x3, READ_4 = 0x13,
     FAST_READ = 0xb, FAST_READ_4 = 0x0c,
@@ -XXX,XX +XXX,XX @@ typedef struct {
     XilinxQSPIPS parent_obj;
 
     StreamSlave *dma;
-    uint8_t dma_buf[4];
     int gqspi_irqline;
 
     uint32_t regs[XLNX_ZYNQMP_SPIPS_R_MAX];
@@ -XXX,XX +XXX,XX @@ typedef struct {
     uint8_t rx_fifo_g_align;
     uint8_t tx_fifo_g_align;
     bool man_start_com_g;
+    uint32_t dma_burst_size;
+    uint8_t dma_buf[QSPI_DMA_MAX_BURST_SIZE];
 } XlnxZynqMPQSPIPS;
 
 typedef struct XilinxSPIPSClass {
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
     {
         size_t ret;
         uint32_t num;
-        const void *rxd = pop_buf(recv_fifo, 4, &num);
+        const void *rxd;
+        int len;
+
+        len = recv_fifo->num >= rq->dma_burst_size ? rq->dma_burst_size :
+                                                     recv_fifo->num;
+        rxd = pop_buf(recv_fifo, len, &num);
 
         memcpy(rq->dma_buf, rxd, num);
 
-        ret = stream_push(rq->dma, rq->dma_buf, 4);
-        assert(ret == 4);
+        ret = stream_push(rq->dma, rq->dma_buf, num);
+        assert(ret == num);
         xlnx_zynqmp_qspips_check_flush(rq);
     }
 }
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_realize(DeviceState *dev, Error **errp)
     XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(dev);
     XilinxSPIPSClass *xsc = XILINX_SPIPS_GET_CLASS(s);
 
+    if (s->dma_burst_size > QSPI_DMA_MAX_BURST_SIZE) {
+        error_setg(errp,
+                   "qspi dma burst size %u exceeds maximum limit %d",
+                   s->dma_burst_size, QSPI_DMA_MAX_BURST_SIZE);
+        return;
+    }
     xilinx_qspips_realize(dev, errp);
     fifo8_create(&s->rx_fifo_g, xsc->rx_fifo_size);
     fifo8_create(&s->tx_fifo_g, xsc->tx_fifo_size);
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_xlnx_zynqmp_qspips = {
     }
 };
 
+static Property xilinx_zynqmp_qspips_properties[] = {
+    DEFINE_PROP_UINT32("dma-burst-size", XlnxZynqMPQSPIPS, dma_burst_size, 64),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
 static Property xilinx_qspips_properties[] = {
     /* We had to turn this off for 2.10 as it is not compatible with migration.
      * It can be enabled but will prevent the device to be migrated.
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_class_init(ObjectClass *klass, void * data)
     dc->realize = xlnx_zynqmp_qspips_realize;
     dc->reset = xlnx_zynqmp_qspips_reset;
     dc->vmsd = &vmstate_xlnx_zynqmp_qspips;
+    dc->props = xilinx_zynqmp_qspips_properties;
     xsc->reg_ops = &xlnx_zynqmp_qspips_ops;
     xsc->rx_fifo_size = RXFF_A_Q;
     xsc->tx_fifo_size = TXFF_A_Q;
--
2.17.1

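Since the burst size is exposed as a plain qdev property, a board or SoC model that wants a different value can set it before the device is realized. A minimal sketch under stated assumptions (the function name, device pointer and chosen value below are hypothetical illustration and not part of this series; only the "dma-burst-size" property name, its 64-byte default and the 2048-byte QSPI_DMA_MAX_BURST_SIZE limit come from the patch above):

    #include "qemu/osdep.h"
    #include "hw/qdev-properties.h"

    /* Hypothetical board code: raise the QSPI DMA burst size before the
     * device is realized. Values above QSPI_DMA_MAX_BURST_SIZE (2048)
     * are rejected at realize time by the check added in this patch. */
    static void board_configure_qspi_dma(DeviceState *qspi_dev)
    {
        qdev_prop_set_uint32(qspi_dev, "dma-burst-size", 256);
    }
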
From: Joel Stanley <joel@jms.id.au>

This adds Cedric as the maintainer, with Andrew and me as reviewers, for
the ASPEED boards and the peripherals we have developed.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Acked-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Message-id: 20180625140055.32223-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ M: Subbaraya Sundeep <sundeep.lkml@gmail.com>
 S: Maintained
 F: hw/arm/msf2-som.c
 
+ASPEED BMCs
+M: Cédric Le Goater <clg@kaod.org>
+R: Andrew Jeffery <andrew@aj.id.au>
+R: Joel Stanley <joel@jms.id.au>
+L: qemu-arm@nongnu.org
+S: Maintained
+F: hw/*/*aspeed*
+F: include/hw/*/*aspeed*
+F: hw/net/ftgmac100.c
+F: include/hw/net/ftgmac100.h
+
 CRIS Machines
 -------------
 Axis Dev88
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-2-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/input/pckbd.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/input/pckbd.c b/hw/input/pckbd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/pckbd.c
+++ b/hw/input/pckbd.c
@@ -XXX,XX +XXX,XX @@
  * THE SOFTWARE.
  */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "hw/hw.h"
 #include "hw/isa/isa.h"
 #include "hw/i386/pc.h"
@@ -XXX,XX +XXX,XX @@ static void kbd_write_command(void *opaque, hwaddr addr,
         /* ignore that */
         break;
     default:
-        fprintf(stderr, "qemu: unsupported keyboard cmd=0x%02x\n", (int)val);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "unsupported keyboard cmd=0x%02" PRIx64 "\n", val);
        break;
     }
 }
--
2.17.1

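One practical note for people testing the logging conversions in this series: unlike the fprintf()/printf() calls they replace, qemu_log_mask() messages are filtered by log category, so they are only visible when the corresponding categories are enabled, e.g. by running QEMU with "-d guest_errors" (and "-d unimp" for the LOG_UNIMP conversions later in the series), or "-d guest_errors,unimp" for both.
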
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-3-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/input/tsc2005.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/tsc2005.c
+++ b/hw/input/tsc2005.c
@@ -XXX,XX +XXX,XX @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "hw/hw.h"
 #include "qemu/timer.h"
 #include "ui/console.h"
@@ -XXX,XX +XXX,XX @@ static void tsc2005_write(TSC2005State *s, int reg, uint16_t data)
         }
         s->nextprecision = (data >> 13) & 1;
         s->timing[0] = data & 0x1fff;
-        if ((s->timing[0] >> 11) == 3)
-            fprintf(stderr, "%s: illegal conversion clock setting\n",
-                    __func__);
+        if ((s->timing[0] >> 11) == 3) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "tsc2005_write: illegal conversion clock setting\n");
+        }
         break;
     case 0xd:    /* CFR1 */
         s->timing[1] = data & 0xf07;
@@ -XXX,XX +XXX,XX @@ static void tsc2005_write(TSC2005State *s, int reg, uint16_t data)
         break;
 
     default:
-        fprintf(stderr, "%s: write into read-only register %x\n",
-                __func__, reg);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: write into read-only register 0x%x\n",
+                      __func__, reg);
     }
 }
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-4-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/omap_dma.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/dma/omap_dma.c b/hw/dma/omap_dma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/omap_dma.c
+++ b/hw/dma/omap_dma.c
@@ -XXX,XX +XXX,XX @@
  * with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "qemu-common.h"
 #include "qemu/timer.h"
 #include "hw/arm/omap.h"
@@ -XXX,XX +XXX,XX @@ static int omap_dma_sys_read(struct omap_dma_s *s, int offset,
     case 0x480:    /* DMA_PCh0_SR */
     case 0x482:    /* DMA_PCh1_SR */
     case 0x4c0:    /* DMA_PChD_SR_0 */
-        printf("%s: Physical Channel Status Registers not implemented.\n",
-               __func__);
+        qemu_log_mask(LOG_UNIMP,
+                      "%s: Physical Channel Status Registers not implemented\n",
+                      __func__);
         *ret = 0xff;
         break;
 
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-5-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/omap_dma.c | 64 +++++++++++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 24 deletions(-)

diff --git a/hw/dma/omap_dma.c b/hw/dma/omap_dma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/omap_dma.c
+++ b/hw/dma/omap_dma.c
@@ -XXX,XX +XXX,XX @@ static int omap_dma_ch_reg_write(struct omap_dma_s *s,
         ch->burst[0] = (value & 0x0180) >> 7;
         ch->pack[0] = (value & 0x0040) >> 6;
         ch->port[0] = (enum omap_dma_port) ((value & 0x003c) >> 2);
-        if (ch->port[0] >= __omap_dma_port_last)
-            printf("%s: invalid DMA port %i\n", __func__,
-                   ch->port[0]);
-        if (ch->port[1] >= __omap_dma_port_last)
-            printf("%s: invalid DMA port %i\n", __func__,
-                   ch->port[1]);
+        if (ch->port[0] >= __omap_dma_port_last) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid DMA port %i\n",
+                          __func__, ch->port[0]);
+        }
+        if (ch->port[1] >= __omap_dma_port_last) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid DMA port %i\n",
+                          __func__, ch->port[1]);
+        }
         ch->data_type = 1 << (value & 3);
         if ((value & 3) == 3) {
-            printf("%s: bad data_type for DMA channel\n", __func__);
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: bad data_type for DMA channel\n", __func__);
             ch->data_type >>= 1;
         }
         break;
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
         if (value & 2)                        /* SOFTRESET */
             omap_dma_reset(s->dma);
         s->ocp = value & 0x3321;
-        if (((s->ocp >> 12) & 3) == 3)                /* MIDLEMODE */
-            fprintf(stderr, "%s: invalid DMA power mode\n", __func__);
+        if (((s->ocp >> 12) & 3) == 3) { /* MIDLEMODE */
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid DMA power mode\n",
+                          __func__);
+        }
         return;
 
     case 0x78:    /* DMA4_GCR */
         s->gcr = value & 0x00ff00ff;
-        if ((value & 0xff) == 0x00)        /* MAX_CHANNEL_FIFO_DEPTH */
-            fprintf(stderr, "%s: wrong FIFO depth in GCR\n", __func__);
+        if ((value & 0xff) == 0x00) { /* MAX_CHANNEL_FIFO_DEPTH */
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: wrong FIFO depth in GCR\n",
+                          __func__);
+        }
         return;
 
     case 0x80 ... 0xfff:
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
     case 0x00:    /* DMA4_CCR */
         ch->buf_disable = (value >> 25) & 1;
         ch->src_sync = (value >> 24) & 1;    /* XXX For CamDMA must be 1 */
-        if (ch->buf_disable && !ch->src_sync)
-            fprintf(stderr, "%s: Buffering disable is not allowed in "
-                    "destination synchronised mode\n", __func__);
+        if (ch->buf_disable && !ch->src_sync) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: Buffering disable is not allowed in "
+                          "destination synchronised mode\n", __func__);
+        }
         ch->prefetch = (value >> 23) & 1;
         ch->bs = (value >> 18) & 1;
         ch->transparent_copy = (value >> 17) & 1;
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
         ch->suspend = (value & 0x0100) >> 8;
         ch->priority = (value & 0x0040) >> 6;
         ch->fs = (value & 0x0020) >> 5;
-        if (ch->fs && ch->bs && ch->mode[0] && ch->mode[1])
-            fprintf(stderr, "%s: For a packet transfer at least one port "
-                    "must be constant-addressed\n", __func__);
+        if (ch->fs && ch->bs && ch->mode[0] && ch->mode[1]) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: For a packet transfer at least one port "
+                          "must be constant-addressed\n", __func__);
+        }
         ch->sync = (value & 0x001f) | ((value >> 14) & 0x0060);
         /* XXX must be 0x01 for CamDMA */
 
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
         ch->endian_lock[0] =(value >> 20) & 1;
         ch->endian[1] =(value >> 19) & 1;
         ch->endian_lock[1] =(value >> 18) & 1;
-        if (ch->endian[0] != ch->endian[1])
-            fprintf(stderr, "%s: DMA endianness conversion enable attempt\n",
-                    __func__);
+        if (ch->endian[0] != ch->endian[1]) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: DMA endianness conversion enable attempt\n",
+                          __func__);
+        }
         ch->write_mode = (value >> 16) & 3;
         ch->burst[1] = (value & 0xc000) >> 14;
         ch->pack[1] = (value & 0x2000) >> 13;
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
         ch->burst[0] = (value & 0x0180) >> 7;
         ch->pack[0] = (value & 0x0040) >> 6;
         ch->translate[0] = (value & 0x003c) >> 2;
-        if (ch->translate[0] | ch->translate[1])
-            fprintf(stderr, "%s: bad MReqAddressTranslate sideband signal\n",
-                    __func__);
+        if (ch->translate[0] | ch->translate[1]) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: bad MReqAddressTranslate sideband signal\n",
+                          __func__);
+        }
         ch->data_type = 1 << (value & 3);
         if ((value & 3) == 3) {
-            printf("%s: bad data_type for DMA channel\n", __func__);
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: bad data_type for DMA channel\n", __func__);
             ch->data_type >>= 1;
         }
         break;
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20180624040609.17572-6-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/omap_spi.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/hw/ssi/omap_spi.c b/hw/ssi/omap_spi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/omap_spi.c
+++ b/hw/ssi/omap_spi.c
@@ -XXX,XX +XXX,XX @@
 * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "hw/hw.h"
 #include "hw/arm/omap.h"
 
@@ -XXX,XX +XXX,XX @@ static void omap_mcspi_write(void *opaque, hwaddr addr,
     case 0x2c:    /* MCSPI_CHCONF */
         if ((value ^ s->ch[ch].config) & (3 << 14))    /* DMAR | DMAW */
             omap_mcspi_dmarequest_update(s->ch + ch);
-        if (((value >> 12) & 3) == 3)            /* TRM */
-            fprintf(stderr, "%s: invalid TRM value (3)\n", __func__);
-        if (((value >> 7) & 0x1f) < 3)            /* WL */
-            fprintf(stderr, "%s: invalid WL value (%" PRIx64 ")\n",
-                    __func__, (value >> 7) & 0x1f);
+        if (((value >> 12) & 3) == 3) { /* TRM */
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid TRM value (3)\n",
+                          __func__);
+        }
+        if (((value >> 7) & 0x1f) < 3) { /* WL */
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: invalid WL value (%" PRIx64 ")\n",
+                          __func__, (value >> 7) & 0x1f);
+        }
         s->ch[ch].config = value & 0x7fffff;
         break;
 
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-7-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/omap_mmc.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/hw/sd/omap_mmc.c b/hw/sd/omap_mmc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/omap_mmc.c
+++ b/hw/sd/omap_mmc.c
@@ -XXX,XX +XXX,XX @@
 * with this program; if not, see <http://www.gnu.org/licenses/>.
 */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "hw/hw.h"
 #include "hw/arm/omap.h"
 #include "hw/sd/sd.h"
@@ -XXX,XX +XXX,XX @@ static void omap_mmc_write(void *opaque, hwaddr offset,
         s->enable = (value >> 11) & 1;
         s->be = (value >> 10) & 1;
         s->clkdiv = (value >> 0) & (s->rev >= 2 ? 0x3ff : 0xff);
-        if (s->mode != 0)
-            printf("SD mode %i unimplemented!\n", s->mode);
-        if (s->be != 0)
-            printf("SD FIFO byte sex unimplemented!\n");
+        if (s->mode != 0) {
+            qemu_log_mask(LOG_UNIMP,
+                          "omap_mmc_wr: mode #%i unimplemented\n", s->mode);
+        }
+        if (s->be != 0) {
+            qemu_log_mask(LOG_UNIMP,
+                          "omap_mmc_wr: Big Endian not implemented\n");
+        }
         if (s->dw != 0 && s->lines < 4)
             printf("4-bit SD bus enabled\n");
         if (!s->enable)
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-8-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/i2c/omap_i2c.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/hw/i2c/omap_i2c.c b/hw/i2c/omap_i2c.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i2c/omap_i2c.c
+++ b/hw/i2c/omap_i2c.c
@@ -XXX,XX +XXX,XX @@
 * with this program; if not, see <http://www.gnu.org/licenses/>.
 */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "hw/hw.h"
 #include "hw/i2c/i2c.h"
 #include "hw/arm/omap.h"
@@ -XXX,XX +XXX,XX @@ static void omap_i2c_write(void *opaque, hwaddr addr,
             }
             break;
         }
-        if ((value & (1 << 15)) && !(value & (1 << 10))) {    /* MST */
-            fprintf(stderr, "%s: I^2C slave mode not supported\n",
-                    __func__);
+        if ((value & (1 << 15)) && !(value & (1 << 10))) { /* MST */
+            qemu_log_mask(LOG_UNIMP, "%s: I^2C slave mode not supported\n",
+                          __func__);
             break;
         }
-        if ((value & (1 << 15)) && value & (1 << 8)) {        /* XA */
-            fprintf(stderr, "%s: 10-bit addressing mode not supported\n",
-                    __func__);
+        if ((value & (1 << 15)) && value & (1 << 8)) { /* XA */
+            qemu_log_mask(LOG_UNIMP,
+                          "%s: 10-bit addressing mode not supported\n",
+                          __func__);
             break;
         }
         if ((value & (1 << 15)) && value & (1 << 0)) {        /* STT */
@@ -XXX,XX +XXX,XX @@ static void omap_i2c_write(void *opaque, hwaddr addr,
             s->stat |= 0x3f;
             omap_i2c_interrupts_update(s);
         }
-        if (value & (1 << 15))                    /* ST_EN */
-            fprintf(stderr, "%s: System Test not supported\n", __func__);
+        if (value & (1 << 15)) { /* ST_EN */
+            qemu_log_mask(LOG_UNIMP,
+                          "%s: System Test not supported\n", __func__);
+        }
         break;
 
     default:
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

TCMI_VERBOSE is no longer used, so drop the OMAP_8/16/32B_REG macros.

Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-9-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/omap.h | 18 ------------------
 hw/arm/omap1.c        | 18 ++++++++++++------
 2 files changed, 12 insertions(+), 24 deletions(-)

diff --git a/include/hw/arm/omap.h b/include/hw/arm/omap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/omap.h
+++ b/include/hw/arm/omap.h
@@ -XXX,XX +XXX,XX @@ enum {
 #define OMAP_GPIOSW_INVERTED    0x0001
 #define OMAP_GPIOSW_OUTPUT    0x0002
 
-# define TCMI_VERBOSE            1
-
-# ifdef TCMI_VERBOSE
-# define OMAP_8B_REG(paddr)        \
-        fprintf(stderr, "%s: 8-bit register " OMAP_FMT_plx "\n",    \
-                __func__, paddr)
-# define OMAP_16B_REG(paddr)        \
-        fprintf(stderr, "%s: 16-bit register " OMAP_FMT_plx "\n",    \
-                __func__, paddr)
-# define OMAP_32B_REG(paddr)        \
-        fprintf(stderr, "%s: 32-bit register " OMAP_FMT_plx "\n",    \
-                __func__, paddr)
-# else
-# define OMAP_8B_REG(paddr)
-# define OMAP_16B_REG(paddr)
-# define OMAP_32B_REG(paddr)
-# endif
-
 # define OMAP_MPUI_REG_MASK        0x000007ff
 
 #endif /* hw_omap_h */
diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/omap1.c
+++ b/hw/arm/omap1.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/cutils.h"
 #include "qemu/bcd.h"
 
+static inline void omap_log_badwidth(const char *funcname, hwaddr addr, int sz)
+{
+    qemu_log_mask(LOG_GUEST_ERROR, "%s: %d-bit register %#08" HWADDR_PRIx "\n",
+                  funcname, 8 * sz, addr);
+}
+
 /* Should signal the TCMI/GPMC */
 uint32_t omap_badwidth_read8(void *opaque, hwaddr addr)
 {
     uint8_t ret;
 
-    OMAP_8B_REG(addr);
+    omap_log_badwidth(__func__, addr, 1);
     cpu_physical_memory_read(addr, &ret, 1);
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ void omap_badwidth_write8(void *opaque, hwaddr addr,
 {
     uint8_t val8 = value;
 
-    OMAP_8B_REG(addr);
+    omap_log_badwidth(__func__, addr, 1);
     cpu_physical_memory_write(addr, &val8, 1);
 }
 
@@ -XXX,XX +XXX,XX @@ uint32_t omap_badwidth_read16(void *opaque, hwaddr addr)
 {
     uint16_t ret;
 
-    OMAP_16B_REG(addr);
+    omap_log_badwidth(__func__, addr, 2);
     cpu_physical_memory_read(addr, &ret, 2);
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ void omap_badwidth_write16(void *opaque, hwaddr addr,
 {
     uint16_t val16 = value;
 
-    OMAP_16B_REG(addr);
+    omap_log_badwidth(__func__, addr, 2);
     cpu_physical_memory_write(addr, &val16, 2);
 }
 
@@ -XXX,XX +XXX,XX @@ uint32_t omap_badwidth_read32(void *opaque, hwaddr addr)
 {
     uint32_t ret;
 
-    OMAP_32B_REG(addr);
+    omap_log_badwidth(__func__, addr, 4);
     cpu_physical_memory_read(addr, &ret, 4);
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ uint32_t omap_badwidth_read32(void *opaque, hwaddr addr)
 void omap_badwidth_write32(void *opaque, hwaddr addr,
                            uint32_t value)
 {
-    OMAP_32B_REG(addr);
+    omap_log_badwidth(__func__, addr, 4);
     cpu_physical_memory_write(addr, &value, 4);
 }
 
--
2.17.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-10-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/omap.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/hw/arm/omap.h b/include/hw/arm/omap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/omap.h
+++ b/include/hw/arm/omap.h
@@ -XXX,XX +XXX,XX @@
 # define hw_omap_h        "omap.h"
 #include "hw/irq.h"
 #include "target/arm/cpu-qom.h"
+#include "qemu/log.h"
 
 # define OMAP_EMIFS_BASE    0x00000000
 # define OMAP2_Q0_BASE        0x00000000
@@ -XXX,XX +XXX,XX @@ struct omap_mpu_state_s *omap2420_mpu_init(MemoryRegion *sysmem,
                                            unsigned long sdram_size,
                                            const char *core);
 
-#define OMAP_FMT_plx "%#08" HWADDR_PRIx
-
 uint32_t omap_badwidth_read8(void *opaque, hwaddr addr);
 void omap_badwidth_write8(void *opaque, hwaddr addr,
                           uint32_t value);
@@ -XXX,XX +XXX,XX @@ void omap_badwidth_write32(void *opaque, hwaddr addr,
 void omap_mpu_wakeup(void *opaque, int irq, int req);
 
 # define OMAP_BAD_REG(paddr)        \
-        fprintf(stderr, "%s: Bad register " OMAP_FMT_plx "\n",    \
-                __func__, paddr)
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad register %#08"HWADDR_PRIx"\n", \
+                      __func__, paddr)
 # define OMAP_RO_REG(paddr)        \
-        fprintf(stderr, "%s: Read-only register " OMAP_FMT_plx "\n",    \
-                __func__, paddr)
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Read-only register %#08" \
+                      HWADDR_PRIx "\n", \
50
int regno = ri->opc2 & 3;
45
+ __func__, paddr)
51
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
46
52
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
47
/* OMAP-specific Linux bootloader tags for the ATAG_BOARD area
53
48
(Board-specifc tags are not here) */
54
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
55
icv_ap_write(env, ri, value);
56
@@ -XXX,XX +XXX,XX @@ static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
57
{
58
GICv3CPUState *cs = icc_cs_from_env(env);
59
int regno = ri->opc2 & 3;
60
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
61
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
62
uint64_t value;
63
64
value = cs->ich_apr[grp][regno];
65
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
66
{
67
GICv3CPUState *cs = icc_cs_from_env(env);
68
int regno = ri->opc2 & 3;
69
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
70
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
71
72
trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
73
74
--
49
--
75
2.17.1
50
2.17.1
76
51
77
52
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
add MemTxAttrs as an argument to address_space_get_iotlb_entry().
3
2
3
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
Reviewed-by: Thomas Huth <thuth@redhat.com>
5
Message-id: 20180624040609.17572-11-f4bug@amsat.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-12-peter.maydell@linaro.org
8
---
7
---
9
include/exec/memory.h | 2 +-
8
hw/arm/stellaris.c | 2 +-
10
exec.c | 2 +-
9
1 file changed, 1 insertion(+), 1 deletion(-)
11
hw/virtio/vhost.c | 3 ++-
12
3 files changed, 4 insertions(+), 3 deletions(-)
13
10
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
11
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
15
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
13
--- a/hw/arm/stellaris.c
17
+++ b/include/exec/memory.h
14
+++ b/hw/arm/stellaris.c
18
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache);
15
@@ -XXX,XX +XXX,XX @@ static void ssys_write(void *opaque, hwaddr offset,
19
* entry. Should be called from an RCU critical section.
16
case 0x040: /* SRCR0 */
20
*/
17
case 0x044: /* SRCR1 */
21
IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
18
case 0x048: /* SRCR2 */
22
- bool is_write);
19
- fprintf(stderr, "Peripheral reset not implemented\n");
23
+ bool is_write, MemTxAttrs attrs);
20
+ qemu_log_mask(LOG_UNIMP, "Peripheral reset not implemented\n");
24
21
break;
25
/* address_space_translate: translate an address range into an address space
22
case 0x054: /* IMC */
26
* into a MemoryRegion and an address range into that section. Should be
23
s->int_mask = value & 0x7f;
27
diff --git a/exec.c b/exec.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/exec.c
30
+++ b/exec.c
31
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
32
33
/* Called from RCU critical section */
34
IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
35
- bool is_write)
36
+ bool is_write, MemTxAttrs attrs)
37
{
38
MemoryRegionSection section;
39
hwaddr xlat, page_mask;
40
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/virtio/vhost.c
43
+++ b/hw/virtio/vhost.c
44
@@ -XXX,XX +XXX,XX @@ int vhost_device_iotlb_miss(struct vhost_dev *dev, uint64_t iova, int write)
45
trace_vhost_iotlb_miss(dev, 1);
46
47
iotlb = address_space_get_iotlb_entry(dev->vdev->dma_as,
48
- iova, write);
49
+ iova, write,
50
+ MEMTXATTRS_UNSPECIFIED);
51
if (iotlb.target_as != NULL) {
52
ret = vhost_memory_region_lookup(dev, iotlb.translated_addr,
53
&uaddr, &len);
54
--
24
--
55
2.17.1
25
2.17.1
56
26
57
27
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
add MemTxAttrs as an argument to memory_region_access_valid().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
6
The callsite in flatview_access_valid() is part of a recursive
3
Suggested-by: Thomas Huth <thuth@redhat.com>
7
loop flatview_access_valid() -> memory_region_access_valid() ->
4
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
subpage_accepts() -> flatview_access_valid(); we make it pass
5
Message-id: 20180624040609.17572-12-f4bug@amsat.org
9
MEMTXATTRS_UNSPECIFIED for now, until the next several commits
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
have plumbed an attrs parameter through the rest of the loop
7
---
11
and we can add an attrs parameter to flatview_access_valid().
8
hw/net/stellaris_enet.c | 2 +-
9
1 file changed, 1 insertion(+), 1 deletion(-)
12
10
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
diff --git a/hw/net/stellaris_enet.c b/hw/net/stellaris_enet.c
14
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20180521140402.23318-8-peter.maydell@linaro.org
17
---
18
include/exec/memory-internal.h | 3 ++-
19
exec.c | 4 +++-
20
hw/s390x/s390-pci-inst.c | 3 ++-
21
memory.c | 7 ++++---
22
4 files changed, 11 insertions(+), 6 deletions(-)
23
24
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
25
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory-internal.h
13
--- a/hw/net/stellaris_enet.c
27
+++ b/include/exec/memory-internal.h
14
+++ b/hw/net/stellaris_enet.c
28
@@ -XXX,XX +XXX,XX @@ void flatview_unref(FlatView *view);
15
@@ -XXX,XX +XXX,XX @@ static uint64_t stellaris_enet_read(void *opaque, hwaddr offset,
29
extern const MemoryRegionOps unassigned_mem_ops;
16
return s->np;
30
17
case 0x38: /* TR */
31
bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
32
- unsigned size, bool is_write);
33
+ unsigned size, bool is_write,
34
+ MemTxAttrs attrs);
35
36
void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
37
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
38
diff --git a/exec.c b/exec.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
41
+++ b/exec.c
42
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
43
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
44
if (!memory_access_is_direct(mr, is_write)) {
45
l = memory_access_size(mr, l, addr);
46
- if (!memory_region_access_valid(mr, xlat, l, is_write)) {
47
+ /* When our callers all have attrs we'll pass them through here */
48
+ if (!memory_region_access_valid(mr, xlat, l, is_write,
49
+ MEMTXATTRS_UNSPECIFIED)) {
50
return false;
51
}
52
}
53
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/s390x/s390-pci-inst.c
56
+++ b/hw/s390x/s390-pci-inst.c
57
@@ -XXX,XX +XXX,XX @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
58
mr = s390_get_subregion(mr, offset, len);
59
offset -= mr->addr;
60
61
- if (!memory_region_access_valid(mr, offset, len, true)) {
62
+ if (!memory_region_access_valid(mr, offset, len, true,
63
+ MEMTXATTRS_UNSPECIFIED)) {
64
s390_program_interrupt(env, PGM_OPERAND, 6, ra);
65
return 0;
18
return 0;
66
}
19
- case 0x3c: /* Undocuented: Timestamp? */
67
diff --git a/memory.c b/memory.c
20
+ case 0x3c: /* Undocumented: Timestamp? */
68
index XXXXXXX..XXXXXXX 100644
21
return 0;
69
--- a/memory.c
22
default:
70
+++ b/memory.c
23
hw_error("stellaris_enet_read: Bad offset %x\n", (int)offset);
71
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps ram_device_mem_ops = {
72
bool memory_region_access_valid(MemoryRegion *mr,
73
hwaddr addr,
74
unsigned size,
75
- bool is_write)
76
+ bool is_write,
77
+ MemTxAttrs attrs)
78
{
79
int access_size_min, access_size_max;
80
int access_size, i;
81
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
82
{
83
MemTxResult r;
84
85
- if (!memory_region_access_valid(mr, addr, size, false)) {
86
+ if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
87
*pval = unassigned_mem_read(mr, addr, size);
88
return MEMTX_DECODE_ERROR;
89
}
90
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
91
unsigned size,
92
MemTxAttrs attrs)
93
{
94
- if (!memory_region_access_valid(mr, addr, size, true)) {
95
+ if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
96
unassigned_mem_write(mr, addr, data, size);
97
return MEMTX_DECODE_ERROR;
98
}
99
--
24
--
100
2.17.1
25
2.17.1
101
26
102
27
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
hw_error() ends up calling abort(), but there is no need to abort here.
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Message-id: 20180624040609.17572-13-f4bug@amsat.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
10
---
9
---
11
include/exec/exec-all.h | 5 +++--
10
hw/net/stellaris_enet.c | 9 +++++++--
12
accel/tcg/translate-all.c | 2 +-
11
1 file changed, 7 insertions(+), 2 deletions(-)
13
exec.c | 2 +-
14
target/xtensa/op_helper.c | 3 ++-
15
4 files changed, 7 insertions(+), 5 deletions(-)
16
12
17
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
13
diff --git a/hw/net/stellaris_enet.c b/hw/net/stellaris_enet.c
18
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/exec-all.h
15
--- a/hw/net/stellaris_enet.c
20
+++ b/include/exec/exec-all.h
16
+++ b/hw/net/stellaris_enet.c
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
17
@@ -XXX,XX +XXX,XX @@
22
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
18
#include "qemu/osdep.h"
23
hwaddr paddr, int prot,
19
#include "hw/sysbus.h"
24
int mmu_idx, target_ulong size);
20
#include "net/net.h"
25
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);
21
+#include "qemu/log.h"
26
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
22
#include <zlib.h>
27
void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
23
28
uintptr_t retaddr);
24
//#define DEBUG_STELLARIS_ENET 1
29
#else
25
@@ -XXX,XX +XXX,XX @@ static uint64_t stellaris_enet_read(void *opaque, hwaddr offset,
30
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
26
case 0x3c: /* Undocumented: Timestamp? */
31
uint16_t idxmap)
27
return 0;
32
{
28
default:
33
}
29
- hw_error("stellaris_enet_read: Bad offset %x\n", (int)offset);
34
-static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
30
+ qemu_log_mask(LOG_GUEST_ERROR, "stellaris_enet_rd%d: Illegal register"
35
+static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
31
+ " 0x02%" HWADDR_PRIx "\n",
36
+ MemTxAttrs attrs)
32
+ size * 8, offset);
37
{
33
return 0;
38
}
39
#endif
40
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/accel/tcg/translate-all.c
43
+++ b/accel/tcg/translate-all.c
44
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
45
}
46
47
#if !defined(CONFIG_USER_ONLY)
48
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
49
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
50
{
51
ram_addr_t ram_addr;
52
MemoryRegion *mr;
53
diff --git a/exec.c b/exec.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/exec.c
56
+++ b/exec.c
57
@@ -XXX,XX +XXX,XX @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
58
if (phys != -1) {
59
/* Locks grabbed by tb_invalidate_phys_addr */
60
tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
61
- phys | (pc & ~TARGET_PAGE_MASK));
62
+ phys | (pc & ~TARGET_PAGE_MASK), attrs);
63
}
34
}
64
}
35
}
65
#endif
36
@@ -XXX,XX +XXX,XX @@ static void stellaris_enet_write(void *opaque, hwaddr offset,
66
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
37
/* Ignored. */
67
index XXXXXXX..XXXXXXX 100644
38
break;
68
--- a/target/xtensa/op_helper.c
39
default:
69
+++ b/target/xtensa/op_helper.c
40
- hw_error("stellaris_enet_write: Bad offset %x\n", (int)offset);
70
@@ -XXX,XX +XXX,XX @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
41
+ qemu_log_mask(LOG_GUEST_ERROR, "stellaris_enet_wr%d: Illegal register "
71
int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
42
+ "0x02%" HWADDR_PRIx " = 0x%" PRIx64 "\n",
72
&paddr, &page_size, &access);
43
+ size * 8, offset, value);
73
if (ret == 0) {
74
- tb_invalidate_phys_addr(&address_space_memory, paddr);
75
+ tb_invalidate_phys_addr(&address_space_memory, paddr,
76
+ MEMTXATTRS_UNSPECIFIED);
77
}
44
}
78
}
45
}
79
46
80
--
47
--
81
2.17.1
48
2.17.1
82
49
83
50
1
Provide a VMSTATE_BOOL_SUB_ARRAY to go with VMSTATE_UINT8_SUB_ARRAY
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
and friends.
3
2
3
hw_error() ends up calling abort(), but there is no need to abort here.
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Thomas Huth <thuth@redhat.com>
7
Message-id: 20180624040609.17572-14-f4bug@amsat.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Message-id: 20180521140402.23318-23-peter.maydell@linaro.org
7
---
9
---
8
include/migration/vmstate.h | 3 +++
10
hw/net/smc91c111.c | 9 +++++++--
9
1 file changed, 3 insertions(+)
11
1 file changed, 7 insertions(+), 2 deletions(-)
10
12
11
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
13
diff --git a/hw/net/smc91c111.c b/hw/net/smc91c111.c
12
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
13
--- a/include/migration/vmstate.h
15
--- a/hw/net/smc91c111.c
14
+++ b/include/migration/vmstate.h
16
+++ b/hw/net/smc91c111.c
15
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
17
@@ -XXX,XX +XXX,XX @@
16
#define VMSTATE_BOOL_ARRAY(_f, _s, _n) \
18
#include "hw/sysbus.h"
17
VMSTATE_BOOL_ARRAY_V(_f, _s, _n, 0)
19
#include "net/net.h"
18
20
#include "hw/devices.h"
19
+#define VMSTATE_BOOL_SUB_ARRAY(_f, _s, _start, _num) \
21
+#include "qemu/log.h"
20
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_bool, bool)
22
/* For crc32 */
21
+
23
#include <zlib.h>
22
#define VMSTATE_UINT16_ARRAY_V(_f, _s, _n, _v) \
24
23
VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_uint16, uint16_t)
25
@@ -XXX,XX +XXX,XX @@ static void smc91c111_writeb(void *opaque, hwaddr offset,
26
}
27
break;
28
}
29
- hw_error("smc91c111_write: Bad reg %d:%x\n", s->bank, (int)offset);
30
+ qemu_log_mask(LOG_GUEST_ERROR, "smc91c111_write(bank:%d) Illegal register"
31
+ " 0x%" HWADDR_PRIx " = 0x%x\n",
32
+ s->bank, offset, value);
33
}
34
35
static uint32_t smc91c111_readb(void *opaque, hwaddr offset)
36
@@ -XXX,XX +XXX,XX @@ static uint32_t smc91c111_readb(void *opaque, hwaddr offset)
37
}
38
break;
39
}
40
- hw_error("smc91c111_read: Bad reg %d:%x\n", s->bank, (int)offset);
41
+ qemu_log_mask(LOG_GUEST_ERROR, "smc91c111_read(bank:%d) Illegal register"
42
+ " 0x%" HWADDR_PRIx "\n",
43
+ s->bank, offset);
44
return 0;
45
}
24
46
25
--
47
--
26
2.17.1
48
2.17.1
27
49
28
50
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
add MemTxAttrs as an argument to address_space_translate()
3
and address_space_translate_cached(). Callers either have an
4
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
Reviewed-by: Thomas Huth <thuth@redhat.com>
5
Message-id: 20180624040609.17572-15-f4bug@amsat.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
10
---
7
---
11
include/exec/memory.h | 4 +++-
8
hw/net/smc91c111.c | 12 ++++++++----
12
accel/tcg/translate-all.c | 2 +-
9
1 file changed, 8 insertions(+), 4 deletions(-)
13
exec.c | 14 +++++++++-----
14
hw/vfio/common.c | 3 ++-
15
memory_ldst.inc.c | 18 +++++++++---------
16
target/riscv/helper.c | 2 +-
17
6 files changed, 25 insertions(+), 18 deletions(-)
18
10
19
diff --git a/include/exec/memory.h b/include/exec/memory.h
11
diff --git a/hw/net/smc91c111.c b/hw/net/smc91c111.c
20
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/memory.h
13
--- a/hw/net/smc91c111.c
22
+++ b/include/exec/memory.h
14
+++ b/hw/net/smc91c111.c
23
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
15
@@ -XXX,XX +XXX,XX @@ static void smc91c111_writeb(void *opaque, hwaddr offset,
24
* #MemoryRegion.
16
SET_HIGH(gpr, value);
25
* @len: pointer to length
17
return;
26
* @is_write: indicates the transfer direction
18
case 12: /* Control */
27
+ * @attrs: memory attributes
19
- if (value & 1)
28
*/
20
- fprintf(stderr, "smc91c111:EEPROM store not implemented\n");
29
MemoryRegion *flatview_translate(FlatView *fv,
21
- if (value & 2)
30
hwaddr addr, hwaddr *xlat,
22
- fprintf(stderr, "smc91c111:EEPROM reload not implemented\n");
31
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv,
23
+ if (value & 1) {
32
24
+ qemu_log_mask(LOG_UNIMP,
33
static inline MemoryRegion *address_space_translate(AddressSpace *as,
25
+ "smc91c111: EEPROM store not implemented\n");
34
hwaddr addr, hwaddr *xlat,
26
+ }
35
- hwaddr *len, bool is_write)
27
+ if (value & 2) {
36
+ hwaddr *len, bool is_write,
28
+ qemu_log_mask(LOG_UNIMP,
37
+ MemTxAttrs attrs)
29
+ "smc91c111: EEPROM reload not implemented\n");
38
{
30
+ }
39
return flatview_translate(address_space_to_flatview(as),
31
value &= ~3;
40
addr, xlat, len, is_write);
32
SET_LOW(ctr, value);
41
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
33
return;
42
index XXXXXXX..XXXXXXX 100644
43
--- a/accel/tcg/translate-all.c
44
+++ b/accel/tcg/translate-all.c
45
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
46
hwaddr l = 1;
47
48
rcu_read_lock();
49
- mr = address_space_translate(as, addr, &addr, &l, false);
50
+ mr = address_space_translate(as, addr, &addr, &l, false, attrs);
51
if (!(memory_region_is_ram(mr)
52
|| memory_region_is_romd(mr))) {
53
rcu_read_unlock();
54
diff --git a/exec.c b/exec.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/exec.c
57
+++ b/exec.c
58
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_write_rom_internal(AddressSpace *as,
59
rcu_read_lock();
60
while (len > 0) {
61
l = len;
62
- mr = address_space_translate(as, addr, &addr1, &l, true);
63
+ mr = address_space_translate(as, addr, &addr1, &l, true,
64
+ MEMTXATTRS_UNSPECIFIED);
65
66
if (!(memory_region_is_ram(mr) ||
67
memory_region_is_romd(mr))) {
68
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache)
69
*/
70
static inline MemoryRegion *address_space_translate_cached(
71
MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
72
- hwaddr *plen, bool is_write)
73
+ hwaddr *plen, bool is_write, MemTxAttrs attrs)
74
{
75
MemoryRegionSection section;
76
MemoryRegion *mr;
77
@@ -XXX,XX +XXX,XX @@ address_space_read_cached_slow(MemoryRegionCache *cache, hwaddr addr,
78
MemoryRegion *mr;
79
80
l = len;
81
- mr = address_space_translate_cached(cache, addr, &addr1, &l, false);
82
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, false,
83
+ MEMTXATTRS_UNSPECIFIED);
84
flatview_read_continue(cache->fv,
85
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
86
addr1, l, mr);
87
@@ -XXX,XX +XXX,XX @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
88
MemoryRegion *mr;
89
90
l = len;
91
- mr = address_space_translate_cached(cache, addr, &addr1, &l, true);
92
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, true,
93
+ MEMTXATTRS_UNSPECIFIED);
94
flatview_write_continue(cache->fv,
95
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
96
addr1, l, mr);
97
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)
98
99
rcu_read_lock();
100
mr = address_space_translate(&address_space_memory,
101
- phys_addr, &phys_addr, &l, false);
102
+ phys_addr, &phys_addr, &l, false,
103
+ MEMTXATTRS_UNSPECIFIED);
104
105
res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
106
rcu_read_unlock();
107
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/hw/vfio/common.c
110
+++ b/hw/vfio/common.c
111
@@ -XXX,XX +XXX,XX @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
112
*/
113
mr = address_space_translate(&address_space_memory,
114
iotlb->translated_addr,
115
- &xlat, &len, writable);
116
+ &xlat, &len, writable,
117
+ MEMTXATTRS_UNSPECIFIED);
118
if (!memory_region_is_ram(mr)) {
119
error_report("iommu map to non memory area %"HWADDR_PRIx"",
120
xlat);
121
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/memory_ldst.inc.c
124
+++ b/memory_ldst.inc.c
125
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
126
bool release_lock = false;
127
128
RCU_READ_LOCK();
129
- mr = TRANSLATE(addr, &addr1, &l, false);
130
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
131
if (l < 4 || !IS_DIRECT(mr, false)) {
132
release_lock |= prepare_mmio_access(mr);
133
134
@@ -XXX,XX +XXX,XX @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
135
bool release_lock = false;
136
137
RCU_READ_LOCK();
138
- mr = TRANSLATE(addr, &addr1, &l, false);
139
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
140
if (l < 8 || !IS_DIRECT(mr, false)) {
141
release_lock |= prepare_mmio_access(mr);
142
143
@@ -XXX,XX +XXX,XX @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
144
bool release_lock = false;
145
146
RCU_READ_LOCK();
147
- mr = TRANSLATE(addr, &addr1, &l, false);
148
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
149
if (!IS_DIRECT(mr, false)) {
150
release_lock |= prepare_mmio_access(mr);
151
152
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
153
bool release_lock = false;
154
155
RCU_READ_LOCK();
156
- mr = TRANSLATE(addr, &addr1, &l, false);
157
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
158
if (l < 2 || !IS_DIRECT(mr, false)) {
159
release_lock |= prepare_mmio_access(mr);
160
161
@@ -XXX,XX +XXX,XX @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
162
bool release_lock = false;
163
164
RCU_READ_LOCK();
165
- mr = TRANSLATE(addr, &addr1, &l, true);
166
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
167
if (l < 4 || !IS_DIRECT(mr, true)) {
168
release_lock |= prepare_mmio_access(mr);
169
170
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
171
bool release_lock = false;
172
173
RCU_READ_LOCK();
174
- mr = TRANSLATE(addr, &addr1, &l, true);
175
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
176
if (l < 4 || !IS_DIRECT(mr, true)) {
177
release_lock |= prepare_mmio_access(mr);
178
179
@@ -XXX,XX +XXX,XX @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
180
bool release_lock = false;
181
182
RCU_READ_LOCK();
183
- mr = TRANSLATE(addr, &addr1, &l, true);
184
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
185
if (!IS_DIRECT(mr, true)) {
186
release_lock |= prepare_mmio_access(mr);
187
r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
188
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
189
bool release_lock = false;
190
191
RCU_READ_LOCK();
192
- mr = TRANSLATE(addr, &addr1, &l, true);
193
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
194
if (l < 2 || !IS_DIRECT(mr, true)) {
195
release_lock |= prepare_mmio_access(mr);
196
197
@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
198
bool release_lock = false;
199
200
RCU_READ_LOCK();
201
- mr = TRANSLATE(addr, &addr1, &l, true);
202
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
203
if (l < 8 || !IS_DIRECT(mr, true)) {
204
release_lock |= prepare_mmio_access(mr);
205
206
diff --git a/target/riscv/helper.c b/target/riscv/helper.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/riscv/helper.c
209
+++ b/target/riscv/helper.c
210
@@ -XXX,XX +XXX,XX @@ restart:
211
MemoryRegion *mr;
212
hwaddr l = sizeof(target_ulong), addr1;
213
mr = address_space_translate(cs->as, pte_addr,
214
- &addr1, &l, false);
215
+ &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
216
if (memory_access_is_direct(mr, true)) {
217
target_ulong *pte_pa =
218
qemu_map_ram_ptr(mr->ram_block, addr1);
219
--
34
--
220
2.17.1
35
2.17.1
221
36
222
37
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Coverity found that the string returned by 'object_get_canonical_path' was not
3
Missed in df3692e04b2.
4
being freed at two locations in the model (CID 1391294 and CID 1391293) and
5
also that a memset was being called with a value greater than the max of a byte
6
on the second argument (CID 1391286). This patch corrects this by adding the
7
freeing of the strings and by changing the memset to zero on
8
descriptor unaligned errors.
9
4
10
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
6
Message-id: 20180624040609.17572-16-f4bug@amsat.org
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Message-id: 20180528184859.3530-1-frasse.iglesias@gmail.com
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
9
---
17
hw/dma/xlnx-zdma.c | 10 +++++++---
10
hw/arm/stellaris.c | 2 +-
18
1 file changed, 7 insertions(+), 3 deletions(-)
11
1 file changed, 1 insertion(+), 1 deletion(-)
19
12
20
diff --git a/hw/dma/xlnx-zdma.c b/hw/dma/xlnx-zdma.c
13
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
21
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/dma/xlnx-zdma.c
15
--- a/hw/arm/stellaris.c
23
+++ b/hw/dma/xlnx-zdma.c
16
+++ b/hw/arm/stellaris.c
24
@@ -XXX,XX +XXX,XX @@ static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf)
17
@@ -XXX,XX +XXX,XX @@ static void gptm_write(void *opaque, hwaddr offset,
18
break;
19
default:
25
qemu_log_mask(LOG_GUEST_ERROR,
20
qemu_log_mask(LOG_GUEST_ERROR,
26
"zdma: unaligned descriptor at %" PRIx64,
21
- "GPTM: read at bad offset 0x%x\n", (int)offset);
27
addr);
22
+ "GPTM: write at bad offset 0x%x\n", (int)offset);
28
- memset(buf, 0xdeadbeef, sizeof(XlnxZDMADescr));
29
+ memset(buf, 0x0, sizeof(XlnxZDMADescr));
30
s->error = true;
31
return false;
32
}
23
}
33
@@ -XXX,XX +XXX,XX @@ static uint64_t zdma_read(void *opaque, hwaddr addr, unsigned size)
24
gptm_update_irq(s);
34
RegisterInfo *r = &s->regs_info[addr / 4];
25
}
35
36
if (!r->data) {
37
+ gchar *path = object_get_canonical_path(OBJECT(s));
38
qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n",
39
- object_get_canonical_path(OBJECT(s)),
40
+ path,
41
addr);
42
+ g_free(path);
43
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
44
zdma_ch_imr_update_irq(s);
45
return 0;
46
@@ -XXX,XX +XXX,XX @@ static void zdma_write(void *opaque, hwaddr addr, uint64_t value,
47
RegisterInfo *r = &s->regs_info[addr / 4];
48
49
if (!r->data) {
50
+ gchar *path = object_get_canonical_path(OBJECT(s));
51
qemu_log("%s: Decode error: write to %" HWADDR_PRIx "=%" PRIx64 "\n",
52
- object_get_canonical_path(OBJECT(s)),
53
+ path,
54
addr, value);
55
+ g_free(path);
56
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
57
zdma_ch_imr_update_irq(s);
58
return;
59
--
26
--
60
2.17.1
27
2.17.1
61
28
62
29
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
The code forgot to increase clroffset during the loop, so it only clears the
3
Suggested-by: Thomas Huth <thuth@redhat.com>
4
first 4 bytes.
4
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
5
Message-id: 20180624040609.17572-17-f4bug@amsat.org
6
Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
7
Cc: qemu-stable@nongnu.org
8
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
10
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
8
---
14
hw/intc/arm_gicv3_kvm.c | 1 +
9
hw/arm/stellaris.c | 6 ++++--
15
1 file changed, 1 insertion(+)
10
1 file changed, 4 insertions(+), 2 deletions(-)
16
11
17
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
12
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
18
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gicv3_kvm.c
14
--- a/hw/arm/stellaris.c
20
+++ b/hw/intc/arm_gicv3_kvm.c
15
+++ b/hw/arm/stellaris.c
21
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
16
@@ -XXX,XX +XXX,XX @@ static uint64_t gptm_read(void *opaque, hwaddr offset,
22
if (clroffset != 0) {
17
return 0;
23
reg = 0;
18
default:
24
kvm_gicd_access(s, clroffset, &reg, true);
19
qemu_log_mask(LOG_GUEST_ERROR,
25
+ clroffset += 4;
20
- "GPTM: read at bad offset 0x%x\n", (int)offset);
26
}
21
+ "GPTM: read at bad offset 0x02%" HWADDR_PRIx "\n",
27
reg = *gic_bmp_ptr32(bmp, irq);
22
+ offset);
28
kvm_gicd_access(s, offset, &reg, true);
23
return 0;
24
}
25
}
26
@@ -XXX,XX +XXX,XX @@ static void gptm_write(void *opaque, hwaddr offset,
27
break;
28
default:
29
qemu_log_mask(LOG_GUEST_ERROR,
30
- "GPTM: write at bad offset 0x%x\n", (int)offset);
31
+ "GPTM: write at bad offset 0x02%" HWADDR_PRIx "\n",
32
+ offset);
33
}
34
gptm_update_irq(s);
35
}
29
--
36
--
30
2.17.1
37
2.17.1
31
38
32
39
1
Add more detail to the documentation for memory_region_init_iommu()
1
Add support for MMU protection regions that are smaller than
2
and other IOMMU-related functions and data structures.
2
TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
3
pages with a flag TLB_RECHECK. This flag causes us to always
4
take the slow-path for accesses. In the slow path we can then
5
special case them to always call tlb_fill() again, so we have
6
the correct information for the exact address being accessed.
7
8
This change allows us to handle reading and writing from small
9
regions; we cannot yet deal with execution from such regions.
3
10
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
13
Message-id: 20180620130619.11362-2-peter.maydell@linaro.org
7
Reviewed-by: Eric Auger <eric.auger@redhat.com>
8
Message-id: 20180521140402.23318-2-peter.maydell@linaro.org
9
---
14
---
10
include/exec/memory.h | 105 ++++++++++++++++++++++++++++++++++++++----
15
accel/tcg/softmmu_template.h | 24 ++++---
11
1 file changed, 95 insertions(+), 10 deletions(-)
16
include/exec/cpu-all.h | 5 +-
12
17
accel/tcg/cputlb.c | 131 +++++++++++++++++++++++++++++------
13
diff --git a/include/exec/memory.h b/include/exec/memory.h
18
3 files changed, 130 insertions(+), 30 deletions(-)
19
20
diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
14
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
15
--- a/include/exec/memory.h
22
--- a/accel/tcg/softmmu_template.h
16
+++ b/include/exec/memory.h
23
+++ b/accel/tcg/softmmu_template.h
17
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
24
@@ -XXX,XX +XXX,XX @@
18
IOMMU_ATTR_SPAPR_TCE_FD
25
static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
19
};
26
size_t mmu_idx, size_t index,
20
27
target_ulong addr,
21
+/**
28
- uintptr_t retaddr)
22
+ * IOMMUMemoryRegionClass:
29
+ uintptr_t retaddr,
23
+ *
30
+ bool recheck)
24
+ * All IOMMU implementations need to subclass TYPE_IOMMU_MEMORY_REGION
31
{
25
+ * and provide an implementation of at least the @translate method here
32
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
26
+ * to handle requests to the memory region. Other methods are optional.
33
- return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, DATA_SIZE);
27
+ *
34
+ return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
28
+ * The IOMMU implementation must use the IOMMU notifier infrastructure
35
+ DATA_SIZE);
29
+ * to report whenever mappings are changed, by calling
36
}
30
+ * memory_region_notify_iommu() (or, if necessary, by calling
37
#endif
31
+ * memory_region_notify_one() for each registered notifier).
38
32
+ */
39
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
33
typedef struct IOMMUMemoryRegionClass {
40
34
/* private */
41
/* ??? Note that the io helpers always read data in the target
35
struct DeviceClass parent_class;
42
byte ordering. We should push the LE/BE request down into io. */
36
43
- res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
37
/*
44
+ res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
38
- * Return a TLB entry that contains a given address. Flag should
45
+ tlb_addr & TLB_RECHECK);
39
- * be the access permission of this translation operation. We can
46
res = TGT_LE(res);
40
- * set flag to IOMMU_NONE to mean that we don't need any
47
return res;
41
- * read/write permission checks, like, when for region replay.
48
}
42
+ * Return a TLB entry that contains a given address.
49
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
43
+ *
50
44
+ * The IOMMUAccessFlags indicated via @flag are optional and may
51
/* ??? Note that the io helpers always read data in the target
45
+ * be specified as IOMMU_NONE to indicate that the caller needs
52
byte ordering. We should push the LE/BE request down into io. */
46
+ * the full translation information for both reads and writes. If
53
- res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
47
+ * the access flags are specified then the IOMMU implementation
54
+ res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
48
+ * may use this as an optimization, to stop doing a page table
55
+ tlb_addr & TLB_RECHECK);
49
+ * walk as soon as it knows that the requested permissions are not
56
res = TGT_BE(res);
50
+ * allowed. If IOMMU_NONE is passed then the IOMMU must do the
57
return res;
51
+ * full page table walk and report the permissions in the returned
58
}
52
+ * IOMMUTLBEntry. (Note that this implies that an IOMMU may not
59
@@ -XXX,XX +XXX,XX @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
53
+ * return different mappings for reads and writes.)
60
size_t mmu_idx, size_t index,
54
+ *
61
DATA_TYPE val,
55
+ * The returned information remains valid while the caller is
62
target_ulong addr,
56
+ * holding the big QEMU lock or is inside an RCU critical section;
63
- uintptr_t retaddr)
57
+ * if the caller wishes to cache the mapping beyond that it must
64
+ uintptr_t retaddr,
58
+ * register an IOMMU notifier so it can invalidate its cached
65
+ bool recheck)
59
+ * information when the IOMMU mapping changes.
66
{
60
+ *
67
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
61
+ * @iommu: the IOMMUMemoryRegion
68
- return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, DATA_SIZE);
62
+ * @hwaddr: address to be translated within the memory region
69
+ return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
63
+ * @flag: requested access permissions
70
+ recheck, DATA_SIZE);
71
}
72
73
void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
74
@@ -XXX,XX +XXX,XX @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
75
/* ??? Note that the io helpers always read data in the target
76
byte ordering. We should push the LE/BE request down into io. */
77
val = TGT_LE(val);
78
- glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
79
+ glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr,
80
+ retaddr, tlb_addr & TLB_RECHECK);
81
return;
82
}
83
84
@@ -XXX,XX +XXX,XX @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
85
/* ??? Note that the io helpers always read data in the target
86
byte ordering. We should push the LE/BE request down into io. */
87
val = TGT_BE(val);
88
- glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
89
+ glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr,
90
+ tlb_addr & TLB_RECHECK);
91
return;
92
}
93
94
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
95
index XXXXXXX..XXXXXXX 100644
96
--- a/include/exec/cpu-all.h
97
+++ b/include/exec/cpu-all.h
98
@@ -XXX,XX +XXX,XX @@ CPUArchState *cpu_copy(CPUArchState *env);
99
#define TLB_NOTDIRTY (1 << (TARGET_PAGE_BITS - 2))
100
/* Set if TLB entry is an IO callback. */
101
#define TLB_MMIO (1 << (TARGET_PAGE_BITS - 3))
102
+/* Set if TLB entry must have MMU lookup repeated for every access */
103
+#define TLB_RECHECK (1 << (TARGET_PAGE_BITS - 4))
104
105
/* Use this mask to check interception with an alignment mask
106
* in a TCG backend.
107
*/
108
-#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO)
109
+#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
110
+ | TLB_RECHECK)
111
112
void dump_exec_info(FILE *f, fprintf_function cpu_fprintf);
113
void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf);
114
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/accel/tcg/cputlb.c
117
+++ b/accel/tcg/cputlb.c
118
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
119
target_ulong code_address;
120
uintptr_t addend;
121
CPUTLBEntry *te, *tv, tn;
122
- hwaddr iotlb, xlat, sz;
123
+ hwaddr iotlb, xlat, sz, paddr_page;
124
+ target_ulong vaddr_page;
125
unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
126
int asidx = cpu_asidx_from_attrs(cpu, attrs);
127
128
assert_cpu_is_self(cpu);
129
- assert(size >= TARGET_PAGE_SIZE);
130
- if (size != TARGET_PAGE_SIZE) {
131
- tlb_add_large_page(env, vaddr, size);
132
- }
133
134
- sz = size;
135
- section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
136
- attrs, &prot);
137
+ if (size < TARGET_PAGE_SIZE) {
138
+ sz = TARGET_PAGE_SIZE;
139
+ } else {
140
+ if (size > TARGET_PAGE_SIZE) {
141
+ tlb_add_large_page(env, vaddr, size);
142
+ }
143
+ sz = size;
144
+ }
145
+ vaddr_page = vaddr & TARGET_PAGE_MASK;
146
+ paddr_page = paddr & TARGET_PAGE_MASK;
147
+
148
+ section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
149
+ &xlat, &sz, attrs, &prot);
150
assert(sz >= TARGET_PAGE_SIZE);
151
152
tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
153
" prot=%x idx=%d\n",
154
vaddr, paddr, prot, mmu_idx);
155
156
- address = vaddr;
157
- if (!memory_region_is_ram(section->mr) && !memory_region_is_romd(section->mr)) {
158
+ address = vaddr_page;
159
+ if (size < TARGET_PAGE_SIZE) {
160
+ /*
161
+ * Slow-path the TLB entries; we will repeat the MMU check and TLB
162
+ * fill on every access.
163
+ */
164
+ address |= TLB_RECHECK;
165
+ }
166
+ if (!memory_region_is_ram(section->mr) &&
167
+ !memory_region_is_romd(section->mr)) {
168
/* IO memory case */
169
address |= TLB_MMIO;
170
addend = 0;
171
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
172
}
173
174
code_address = address;
175
- iotlb = memory_region_section_get_iotlb(cpu, section, vaddr, paddr, xlat,
176
- prot, &address);
177
+ iotlb = memory_region_section_get_iotlb(cpu, section, vaddr_page,
178
+ paddr_page, xlat, prot, &address);
179
180
- index = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
181
+ index = (vaddr_page >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
182
te = &env->tlb_table[mmu_idx][index];
183
/* do not discard the translation in te, evict it into a victim tlb */
184
tv = &env->tlb_v_table[mmu_idx][vidx];
185
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
186
* TARGET_PAGE_BITS, and either
187
* + the ram_addr_t of the page base of the target RAM (if NOTDIRTY or ROM)
188
* + the offset within section->mr of the page base (otherwise)
189
- * We subtract the vaddr (which is page aligned and thus won't
190
+ * We subtract the vaddr_page (which is page aligned and thus won't
191
* disturb the low bits) to give an offset which can be added to the
192
* (non-page-aligned) vaddr of the eventual memory access to get
193
* the MemoryRegion offset for the access. Note that the vaddr we
194
* subtract here is that of the page base, and not the same as the
195
* vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
64
*/
196
*/
65
IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
197
- env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
66
IOMMUAccessFlags flag);
198
+ env->iotlb[mmu_idx][index].addr = iotlb - vaddr_page;
67
- /* Returns minimum supported page size */
199
env->iotlb[mmu_idx][index].attrs = attrs;
68
+ /* Returns minimum supported page size in bytes.
200
69
+ * If this method is not provided then the minimum is assumed to
201
/* Now calculate the new entry */
70
+ * be TARGET_PAGE_SIZE.
202
- tn.addend = addend - vaddr;
71
+ *
203
+ tn.addend = addend - vaddr_page;
72
+ * @iommu: the IOMMUMemoryRegion
204
if (prot & PAGE_READ) {
73
+ */
205
tn.addr_read = address;
74
uint64_t (*get_min_page_size)(IOMMUMemoryRegion *iommu);
206
} else {
75
- /* Called when IOMMU Notifier flag changed */
207
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
76
+ /* Called when IOMMU Notifier flag changes (ie when the set of
208
tn.addr_write = address | TLB_MMIO;
77
+ * events which IOMMU users are requesting notification for changes).
209
} else if (memory_region_is_ram(section->mr)
78
+ * Optional method -- need not be provided if the IOMMU does not
210
&& cpu_physical_memory_is_clean(
79
+ * need to know exactly which events must be notified.
211
- memory_region_get_ram_addr(section->mr) + xlat)) {
80
+ *
212
+ memory_region_get_ram_addr(section->mr) + xlat)) {
81
+ * @iommu: the IOMMUMemoryRegion
213
tn.addr_write = address | TLB_NOTDIRTY;
82
+ * @old_flags: events which previously needed to be notified
214
} else {
83
+ * @new_flags: events which now need to be notified
215
tn.addr_write = address;
84
+ */
216
@@ -XXX,XX +XXX,XX @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
85
void (*notify_flag_changed)(IOMMUMemoryRegion *iommu,
217
86
IOMMUNotifierFlag old_flags,
218
static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
87
IOMMUNotifierFlag new_flags);
219
int mmu_idx,
88
- /* Set this up to provide customized IOMMU replay function */
220
- target_ulong addr, uintptr_t retaddr, int size)
89
+ /* Called to handle memory_region_iommu_replay().
221
+ target_ulong addr, uintptr_t retaddr,
90
+ *
222
+ bool recheck, int size)
91
+ * The default implementation of memory_region_iommu_replay() is to
223
{
92
+ * call the IOMMU translate method for every page in the address space
224
CPUState *cpu = ENV_GET_CPU(env);
93
+ * with flag == IOMMU_NONE and then call the notifier if translate
225
hwaddr mr_offset;
94
+ * returns a valid mapping. If this method is implemented then it
226
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
95
+ * overrides the default behaviour, and must provide the full semantics
227
bool locked = false;
96
+ * of memory_region_iommu_replay(), by calling @notifier for every
228
MemTxResult r;
97
+ * translation present in the IOMMU.
229
98
+ *
230
+ if (recheck) {
99
+ * Optional method -- an IOMMU only needs to provide this method
231
+ /*
100
+ * if the default is inefficient or produces undesirable side effects.
232
+ * This is a TLB_RECHECK access, where the MMU protection
101
+ *
233
+ * covers a smaller range than a target page, and we must
102
+ * Note: this is not related to record-and-replay functionality.
234
+ * repeat the MMU check here. This tlb_fill() call might
103
+ */
235
+ * longjump out if this access should cause a guest exception.
104
void (*replay)(IOMMUMemoryRegion *iommu, IOMMUNotifier *notifier);
236
+ */
105
237
+ int index;
106
- /* Get IOMMU misc attributes */
238
+ target_ulong tlb_addr;
107
- int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr,
239
+
108
+ /* Get IOMMU misc attributes. This is an optional method that
240
+ tlb_fill(cpu, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
109
+ * can be used to allow users of the IOMMU to get implementation-specific
241
+
110
+ * information. The IOMMU implements this method to handle calls
242
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
111
+ * by IOMMU users to memory_region_iommu_get_attr() by filling in
243
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
112
+ * the arbitrary data pointer for any IOMMUMemoryRegionAttr values that
244
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
113
+ * the IOMMU supports. If the method is unimplemented then
245
+ /* RAM access */
114
+ * memory_region_iommu_get_attr() will always return -EINVAL.
246
+ uintptr_t haddr = addr + env->tlb_table[mmu_idx][index].addend;
115
+ *
247
+
116
+ * @iommu: the IOMMUMemoryRegion
248
+ return ldn_p((void *)haddr, size);
117
+ * @attr: attribute being queried
249
+ }
118
+ * @data: memory to fill in with the attribute data
250
+ /* Fall through for handling IO accesses */
119
+ *
251
+ }
120
+ * Returns 0 on success, or a negative errno; in particular
252
+
121
+ * returns -EINVAL for unrecognized or unimplemented attribute types.
253
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
122
+ */
254
mr = section->mr;
123
+ int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
255
mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
124
void *data);
256
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
125
} IOMMUMemoryRegionClass;
257
static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
126
258
int mmu_idx,
127
@@ -XXX,XX +XXX,XX @@ static inline void memory_region_init_reservation(MemoryRegion *mr,
259
uint64_t val, target_ulong addr,
128
* An IOMMU region translates addresses and forwards accesses to a target
260
- uintptr_t retaddr, int size)
129
* memory region.
261
+ uintptr_t retaddr, bool recheck, int size)
130
*
262
{
131
+ * The IOMMU implementation must define a subclass of TYPE_IOMMU_MEMORY_REGION.
263
CPUState *cpu = ENV_GET_CPU(env);
132
+ * @_iommu_mr should be a pointer to enough memory for an instance of
264
hwaddr mr_offset;
133
+ * that subclass, @instance_size is the size of that subclass, and
265
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
134
+ * @mrtypename is its name. This function will initialize @_iommu_mr as an
266
bool locked = false;
135
+ * instance of the subclass, and its methods will then be called to handle
267
MemTxResult r;
136
+ * accesses to the memory region. See the documentation of
268
137
+ * #IOMMUMemoryRegionClass for further details.
269
+ if (recheck) {
138
+ *
270
+ /*
139
* @_iommu_mr: the #IOMMUMemoryRegion to be initialized
271
+ * This is a TLB_RECHECK access, where the MMU protection
140
* @instance_size: the IOMMUMemoryRegion subclass instance size
272
+ * covers a smaller range than a target page, and we must
141
* @mrtypename: the type name of the #IOMMUMemoryRegion
273
+ * repeat the MMU check here. This tlb_fill() call might
142
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
274
+ * longjump out if this access should cause a guest exception.
143
* a notifier with the minimum page granularity returned by
275
+ */
144
* mr->iommu_ops->get_page_size().
276
+ int index;
145
*
277
+ target_ulong tlb_addr;
146
+ * Note: this is not related to record-and-replay functionality.
278
+
147
+ *
279
+ tlb_fill(cpu, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
148
* @iommu_mr: the memory region to observe
280
+
149
* @n: the notifier to which to replay iommu mappings
281
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
150
*/
282
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
151
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n);
283
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
152
* memory_region_iommu_replay_all: replay existing IOMMU translations
284
+ /* RAM access */
153
* to all the notifiers registered.
285
+ uintptr_t haddr = addr + env->tlb_table[mmu_idx][index].addend;
154
*
286
+
155
+ * Note: this is not related to record-and-replay functionality.
287
+ stn_p((void *)haddr, size, val);
156
+ *
288
+ return;
157
* @iommu_mr: the memory region to observe
289
+ }
158
*/
290
+ /* Fall through for handling IO accesses */
159
void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr);
291
+ }
160
@@ -XXX,XX +XXX,XX @@ void memory_region_unregister_iommu_notifier(MemoryRegion *mr,
292
+
161
* memory_region_iommu_get_attr: return an IOMMU attr if get_attr() is
293
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
162
* defined on the IOMMU.
294
mr = section->mr;
163
*
295
mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
164
- * Returns 0 if succeded, error code otherwise.
296
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
165
+ * Returns 0 on success, or a negative errno otherwise. In particular,
297
tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
166
+ * -EINVAL indicates that the IOMMU does not support the requested
298
}
167
+ * attribute.
299
}
168
*
300
+
169
* @iommu_mr: the memory region
301
+ if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
170
* @attr: the requested attribute
302
+ /*
303
+ * This is a TLB_RECHECK access, where the MMU protection
304
+ * covers a smaller range than a target page, and we must
305
+ * repeat the MMU check here. This tlb_fill() call might
306
+ * longjump out if this access should cause a guest exception.
307
+ */
308
+ int index;
309
+ target_ulong tlb_addr;
310
+
311
+ tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
312
+
313
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
314
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
315
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
316
+ /* RAM access. We can't handle this, so for now just stop */
317
+ cpu_abort(cpu, "Unable to handle guest executing from RAM within "
318
+ "a small MPU region at 0x" TARGET_FMT_lx, addr);
319
+ }
320
+ /*
321
+ * Fall through to handle IO accesses (which will almost certainly
322
+ * also result in failure)
323
+ */
324
+ }
325
+
326
iotlbentry = &env->iotlb[mmu_idx][index];
327
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
328
mr = section->mr;
329
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
330
tlb_addr = tlbe->addr_write & ~TLB_INVALID_MASK;
331
}
332
333
- /* Notice an IO access */
334
- if (unlikely(tlb_addr & TLB_MMIO)) {
335
+ /* Notice an IO access or a needs-MMU-lookup access */
336
+ if (unlikely(tlb_addr & (TLB_MMIO | TLB_RECHECK))) {
337
/* There's really nothing that can be done to
338
support this apart from stop-the-world. */
339
goto stop_the_world;
171
--
340
--
172
2.17.1
341
2.17.1
173
342
174
343
1
In commit f0aff255700 we made cpacr_write() enforce that some CPACR
1
We want to handle small MPU region sizes for ARMv7M. To do this,
2
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
2
make get_phys_addr_pmsav7() set the page size to the region
3
we forgot to also update the register's reset value. The effect
3
size if it is less than TARGET_PAGE_SIZE, rather than working
4
was that (a) a guest that read CPACR on reset would not see ones in
4
only in TARGET_PAGE_SIZE chunks.
5
the RAO bits, and (b) if you did a migration before the guest did
6
a write to the CPACR then the migration would fail because the
7
destination would enforce the RAO bits and then complain that they
8
didn't match the zero value from the source.
9
5
10
Implement reset for the CPACR using a custom reset function
6
Since the core TCG code can't handle execution from small
11
that just calls cpacr_write(), to avoid having to duplicate
7
MPU regions, we strip the exec permission from them so that
12
the logic for which bits are RAO.
8
any execution attempts will cause an MPU exception, rather
9
than allowing it to end up with a cpu_abort() in
10
get_page_addr_code().
13
11
14
This bug would affect migration for TCG CPUs which are ARMv7
12
(The previous code's intention was to make any small page be
15
with VFP but without one of Neon or VFPv3.
13
treated as having no permissions, but unfortunately errors
14
in the implementation meant that it didn't behave that way.
15
It's possible that some binaries using small regions were
16
accidentally working with our old behaviour and won't now.)
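As a standalone illustration of the page-size calculation described above for the PMSAv7 change (a sketch, not part of this series: the DEMO_ names and the 256-byte region are assumptions; ARM's TARGET_PAGE_BITS of 10 gives the 1K page the log messages refer to):

  #include <stdint.h>
  #include <stdio.h>

  #define DEMO_TARGET_PAGE_BITS 10                 /* 1 KiB target pages */
  #define DEMO_TARGET_PAGE_SIZE (1u << DEMO_TARGET_PAGE_BITS)

  int main(void)
  {
      uint32_t rsize = 8;                          /* log2 of a 256-byte MPU region */
      uint32_t page_size = DEMO_TARGET_PAGE_SIZE;  /* default: map a whole page */

      if (rsize < DEMO_TARGET_PAGE_BITS) {
          page_size = 1u << rsize;                 /* map exactly the small region */
      }
      printf("page_size = %u bytes\n", page_size); /* prints 256 */
      return 0;
  }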
16
17
17
Reported-by: Cédric Le Goater <clg@kaod.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Tested-by: Cédric Le Goater <clg@kaod.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
20
Message-id: 20180620130619.11362-3-peter.maydell@linaro.org
21
---
21
---
22
target/arm/helper.c | 10 +++++++++-
22
target/arm/helper.c | 37 ++++++++++++++++++++++++++-----------
23
1 file changed, 9 insertions(+), 1 deletion(-)
23
1 file changed, 26 insertions(+), 11 deletions(-)
24
24
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
27
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
28
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
29
@@ -XXX,XX +XXX,XX @@ static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
30
env->cp15.cpacr_el1 = value;
30
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
31
MMUAccessType access_type, ARMMMUIdx mmu_idx,
32
hwaddr *phys_ptr, int *prot,
33
+ target_ulong *page_size,
34
ARMMMUFaultInfo *fi)
35
{
36
ARMCPU *cpu = arm_env_get_cpu(env);
37
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
38
bool is_user = regime_is_user(env, mmu_idx);
39
40
*phys_ptr = address;
41
+ *page_size = TARGET_PAGE_SIZE;
42
*prot = 0;
43
44
if (regime_translation_disabled(env, mmu_idx) ||
45
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
46
rsize++;
47
}
48
}
49
- if (rsize < TARGET_PAGE_BITS) {
50
- qemu_log_mask(LOG_UNIMP,
51
- "DRSR[%d]: No support for MPU (sub)region size of"
52
- " %" PRIu32 " bytes. Minimum is %d.\n",
53
- n, (1 << rsize), TARGET_PAGE_SIZE);
54
- continue;
55
- }
56
if (srdis) {
57
continue;
58
}
59
+ if (rsize < TARGET_PAGE_BITS) {
60
+ *page_size = 1 << rsize;
61
+ }
62
break;
63
}
64
65
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
66
67
fi->type = ARMFault_Permission;
68
fi->level = 1;
69
+ /*
70
+ * Core QEMU code can't handle execution from small pages yet, so
71
+ * don't try it. This way we'll get an MPU exception, rather than
72
+ * eventually causing QEMU to exit in get_page_addr_code().
73
+ */
74
+ if (*page_size < TARGET_PAGE_SIZE && (*prot & PAGE_EXEC)) {
75
+ qemu_log_mask(LOG_UNIMP,
76
+ "MPU: No support for execution from regions "
77
+ "smaller than 1K\n");
78
+ *prot &= ~PAGE_EXEC;
79
+ }
80
return !(*prot & (1 << access_type));
31
}
81
}
32
82
33
+static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
83
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
34
+{
84
} else if (arm_feature(env, ARM_FEATURE_V7)) {
35
+ /* Call cpacr_write() so that we reset with the correct RAO bits set
85
/* PMSAv7 */
36
+ * for our CPU features.
86
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
37
+ */
87
- phys_ptr, prot, fi);
38
+ cpacr_write(env, ri, 0);
88
+ phys_ptr, prot, page_size, fi);
39
+}
89
} else {
40
+
90
/* Pre-v7 MPU */
41
static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
91
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
42
bool isread)
92
@@ -XXX,XX +XXX,XX @@ bool arm_tlb_fill(CPUState *cs, vaddr address,
43
{
93
core_to_arm_mmu_idx(env, mmu_idx), &phys_addr,
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
94
&attrs, &prot, &page_size, fi, NULL);
45
{ .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
95
if (!ret) {
46
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
96
- /* Map a single [sub]page. */
47
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
97
- phys_addr &= TARGET_PAGE_MASK;
48
- .resetvalue = 0, .writefn = cpacr_write },
98
- address &= TARGET_PAGE_MASK;
49
+ .resetfn = cpacr_reset, .writefn = cpacr_write },
99
+ /*
50
REGINFO_SENTINEL
100
+ * Map a single [sub]page. Regions smaller than our declared
51
};
101
+ * target page size are handled specially, so for those we
52
102
+ * pass in the exact addresses.
103
+ */
104
+ if (page_size >= TARGET_PAGE_SIZE) {
105
+ phys_addr &= TARGET_PAGE_MASK;
106
+ address &= TARGET_PAGE_MASK;
107
+ }
108
tlb_set_page_with_attrs(cs, address, phys_addr, attrs,
109
prot, mmu_idx, page_size);
110
return 0;
53
--
111
--
54
2.17.1
112
2.17.1
55
113
56
114
diff view generated by jsdifflib
1
The FRECPX instructions should (like most other floating point operations)
1
Allow ARMv8M to handle small MPU and SAU region sizes, by making
2
honour the FPCR.FZ bit which specifies whether input denormals should
2
get_phys_addr_pmsav8() set the page size to 1 if the MPU or
3
be flushed to zero (or FZ16 for the half-precision version).
3
SAU region covers less than a full TARGET_PAGE_SIZE.
4
We forgot to implement this, which doesn't affect the results (since
4
5
the calculation doesn't actually care about the mantissa bits) but did
5
We choose to use a size of 1 because it makes no difference to
6
mean we were failing to set the FPSR.IDC bit.
6
the core code, and avoids having to track both the base and
7
limit for SAU and MPU and then convert into an artificially
8
restricted "page size" that the core code will then ignore.
9
10
Since the core TCG code can't handle execution from small
11
MPU regions, we strip the exec permission from them so that
12
any execution attempts will cause an MPU exception, rather
13
than allowing it to end up with a cpu_abort() in
14
get_page_addr_code().
15
16
(The previous code's intention was to make any small page be
17
treated as having no permissions, but unfortunately errors
18
in the implementation meant that it didn't behave that way.
19
It's possible that some binaries using small regions were
20
accidentally working with our old behaviour and won't now.)
21
22
We also retain an existing bug, where we ignored the possibility
23
that the SAU region might not cover the entire page, in the
24
case of executable regions. This is necessary because some
25
currently-working guest code images rely on being able to
26
execute from addresses which are covered by a page-sized
27
MPU region but a smaller SAU region. We can remove this
28
workaround if we ever support execution from small regions.
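A small worked example (illustrative constants, not QEMU code) of the subpage test this change applies to both MPU and SAU regions: a region is flagged as a subpage when it does not cover the whole target page containing the address being translated:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define DEMO_TARGET_PAGE_MASK (~(uint32_t)0x3ff)    /* 1 KiB target pages */

  int main(void)
  {
      uint32_t address = 0x20000180;                  /* address being translated */
      uint32_t base = 0x20000100, limit = 0x200002ff; /* hypothetical 512-byte region */

      uint32_t page_base = address & DEMO_TARGET_PAGE_MASK;
      uint32_t page_limit = page_base + 0x3ff;
      bool is_subpage = (base > page_base) || (limit < page_limit);

      printf("is_subpage = %d\n", is_subpage);        /* prints 1 */
      return 0;
  }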
7
29
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
31
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180521172712.19930-1-peter.maydell@linaro.org
32
Message-id: 20180620130619.11362-4-peter.maydell@linaro.org
11
---
33
---
12
target/arm/helper-a64.c | 6 ++++++
34
target/arm/helper.c | 78 ++++++++++++++++++++++++++++++++-------------
13
1 file changed, 6 insertions(+)
35
1 file changed, 55 insertions(+), 23 deletions(-)
14
36
15
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-a64.c
39
--- a/target/arm/helper.c
18
+++ b/target/arm/helper-a64.c
40
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
41
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
20
return nan;
42
43
/* Security attributes for an address, as returned by v8m_security_lookup. */
44
typedef struct V8M_SAttributes {
45
+ bool subpage; /* true if these attrs don't cover the whole TARGET_PAGE */
46
bool ns;
47
bool nsc;
48
uint8_t sregion;
49
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
50
int r;
51
bool idau_exempt = false, idau_ns = true, idau_nsc = true;
52
int idau_region = IREGION_NOTVALID;
53
+ uint32_t addr_page_base = address & TARGET_PAGE_MASK;
54
+ uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
55
56
if (cpu->idau) {
57
IDAUInterfaceClass *iic = IDAU_INTERFACE_GET_CLASS(cpu->idau);
58
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
59
uint32_t limit = env->sau.rlar[r] | 0x1f;
60
61
if (base <= address && limit >= address) {
62
+ if (base > addr_page_base || limit < addr_page_limit) {
63
+ sattrs->subpage = true;
64
+ }
65
if (sattrs->srvalid) {
66
/* If we hit in more than one region then we must report
67
* as Secure, not NS-Callable, with no valid region
68
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
69
static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
70
MMUAccessType access_type, ARMMMUIdx mmu_idx,
71
hwaddr *phys_ptr, MemTxAttrs *txattrs,
72
- int *prot, ARMMMUFaultInfo *fi, uint32_t *mregion)
73
+ int *prot, bool *is_subpage,
74
+ ARMMMUFaultInfo *fi, uint32_t *mregion)
75
{
76
/* Perform a PMSAv8 MPU lookup (without also doing the SAU check
77
* that a full phys-to-virt translation does).
78
* mregion is (if not NULL) set to the region number which matched,
79
* or -1 if no region number is returned (MPU off, address did not
80
* hit a region, address hit in multiple regions).
81
+ * We set is_subpage to true if the region hit doesn't cover the
82
+ * entire TARGET_PAGE the address is within.
83
*/
84
ARMCPU *cpu = arm_env_get_cpu(env);
85
bool is_user = regime_is_user(env, mmu_idx);
86
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
87
int n;
88
int matchregion = -1;
89
bool hit = false;
90
+ uint32_t addr_page_base = address & TARGET_PAGE_MASK;
91
+ uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
92
93
+ *is_subpage = false;
94
*phys_ptr = address;
95
*prot = 0;
96
if (mregion) {
97
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
98
continue;
99
}
100
101
+ if (base > addr_page_base || limit < addr_page_limit) {
102
+ *is_subpage = true;
103
+ }
104
+
105
if (hit) {
106
/* Multiple regions match -- always a failure (unlike
107
* PMSAv7 where highest-numbered-region wins)
108
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
109
110
matchregion = n;
111
hit = true;
112
-
113
- if (base & ~TARGET_PAGE_MASK) {
114
- qemu_log_mask(LOG_UNIMP,
115
- "MPU_RBAR[%d]: No support for MPU region base"
116
- "address of 0x%" PRIx32 ". Minimum alignment is "
117
- "%d\n",
118
- n, base, TARGET_PAGE_BITS);
119
- continue;
120
- }
121
- if ((limit + 1) & ~TARGET_PAGE_MASK) {
122
- qemu_log_mask(LOG_UNIMP,
123
- "MPU_RBAR[%d]: No support for MPU region limit"
124
- "address of 0x%" PRIx32 ". Minimum alignment is "
125
- "%d\n",
126
- n, limit, TARGET_PAGE_BITS);
127
- continue;
128
- }
129
}
21
}
130
}
22
131
23
+ a = float16_squash_input_denormal(a, fpst);
132
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
24
+
133
25
val16 = float16_val(a);
134
fi->type = ARMFault_Permission;
26
sbit = 0x8000 & val16;
135
fi->level = 1;
27
exp = extract32(val16, 10, 5);
136
+ /*
28
@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
137
+ * Core QEMU code can't handle execution from small pages yet, so
29
return nan;
138
+ * don't try it. This means any attempted execution will generate
139
+ * an MPU exception, rather than eventually causing QEMU to exit in
140
+ * get_page_addr_code().
141
+ */
142
+ if (*is_subpage && (*prot & PAGE_EXEC)) {
143
+ qemu_log_mask(LOG_UNIMP,
144
+ "MPU: No support for execution from regions "
145
+ "smaller than 1K\n");
146
+ *prot &= ~PAGE_EXEC;
147
+ }
148
return !(*prot & (1 << access_type));
149
}
150
151
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
152
static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
153
MMUAccessType access_type, ARMMMUIdx mmu_idx,
154
hwaddr *phys_ptr, MemTxAttrs *txattrs,
155
- int *prot, ARMMMUFaultInfo *fi)
156
+ int *prot, target_ulong *page_size,
157
+ ARMMMUFaultInfo *fi)
158
{
159
uint32_t secure = regime_is_secure(env, mmu_idx);
160
V8M_SAttributes sattrs = {};
161
+ bool ret;
162
+ bool mpu_is_subpage;
163
164
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
165
v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
166
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
167
} else {
168
fi->type = ARMFault_QEMU_SFault;
169
}
170
+ *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
171
*phys_ptr = address;
172
*prot = 0;
173
return true;
174
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
175
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
176
*/
177
fi->type = ARMFault_QEMU_SFault;
178
+ *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
179
*phys_ptr = address;
180
*prot = 0;
181
return true;
182
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
183
}
30
}
184
}
31
185
32
+ a = float32_squash_input_denormal(a, fpst);
186
- return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
33
+
187
- txattrs, prot, fi, NULL);
34
val32 = float32_val(a);
188
+ ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
35
sbit = 0x80000000ULL & val32;
189
+ txattrs, prot, &mpu_is_subpage, fi, NULL);
36
exp = extract32(val32, 23, 8);
190
+ /*
37
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
191
+ * TODO: this is a temporary hack to ignore the fact that the SAU region
38
return nan;
192
+ * is smaller than a page if this is an executable region. We never
39
}
193
+ * supported small MPU regions, but we did (accidentally) allow small
40
194
+ * SAU regions, and if we now made small SAU regions not be executable
41
+ a = float64_squash_input_denormal(a, fpst);
195
+ * then this would break previously working guest code. We can't
42
+
196
+ * remove this until/unless we implement support for execution from
43
val64 = float64_val(a);
197
+ * small regions.
44
sbit = 0x8000000000000000ULL & val64;
198
+ */
45
exp = extract64(float64_val(a), 52, 11);
199
+ if (*prot & PAGE_EXEC) {
200
+ sattrs.subpage = false;
201
+ }
202
+ *page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE;
203
+ return ret;
204
}
205
206
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
207
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
208
if (arm_feature(env, ARM_FEATURE_V8)) {
209
/* PMSAv8 */
210
ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
211
- phys_ptr, attrs, prot, fi);
212
+ phys_ptr, attrs, prot, page_size, fi);
213
} else if (arm_feature(env, ARM_FEATURE_V7)) {
214
/* PMSAv7 */
215
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
216
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
217
uint32_t mregion;
218
bool targetpriv;
219
bool targetsec = env->v7m.secure;
220
+ bool is_subpage;
221
222
/* Work out what the security state and privilege level we're
223
* interested in is...
224
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
225
if (arm_current_el(env) != 0 || alt) {
226
/* We can ignore the return value as prot is always set */
227
pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
228
- &phys_addr, &attrs, &prot, &fi, &mregion);
229
+ &phys_addr, &attrs, &prot, &is_subpage,
230
+ &fi, &mregion);
231
if (mregion == -1) {
232
mrvalid = false;
233
mregion = 0;
46
--
234
--
47
2.17.1
235
2.17.1
48
236
49
237
diff view generated by jsdifflib
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Jia He <hejianet@gmail.com>
2
2
3
cpregs_keys is a uint32_t *, so the allocation should use uint32_t.
3
In case the STE's config is "Bypass" we currently don't set the
4
g_new is even better because it is type-safe.
4
IOMMUTLBEntry perm flags and the access does not succeed. Also
5
5
if the config is 0b0xx (Aborted/Reserved), decode_ste and
6
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6
smmuv3_decode_config currently return -EINVAL and we don't enter
7
the expected code path: we record an event whereas we should not.
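To illustrate the allocation fix in the interleaved gdbstub patch above (a sketch assuming glib; the variable names are invented): sizeof(uint32_t *) is the size of a pointer rather than of the array element, while g_new() derives the element size from the type it is given:

  #include <glib.h>

  int main(void)
  {
      guint n_regs = 8;   /* hypothetical number of coprocessor registers */

      /* Buggy pattern: sizeof(guint32 *) is the size of a pointer, not of
       * the guint32 elements stored, so the buffer is only accidentally
       * big enough on 64-bit hosts. */
      guint32 *bad = g_malloc(sizeof(guint32 *) * n_regs);

      /* g_new(Type, count) allocates count * sizeof(Type) and returns an
       * already-typed pointer, so the element size cannot be wrong. */
      guint32 *good = g_new(guint32, n_regs);

      g_free(bad);
      g_free(good);
      return 0;
  }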
8
9
This patch fixes those bugs and simplifies the error handling.
10
decode_ste and smmuv3_decode_config now return 0 if an aborted or
11
bypassed config was found. Only bad config info produces negative
12
error values. In smmuv3_translate we more clearly differentiate
13
errors, bypass/smmu disabled, aborted and success cases. Also
14
trace points are differentiated.
15
16
Fixes: 9bde7f0674fe ("hw/arm/smmuv3: Implement translate callback")
17
Reported-by: jia.he@hxt-semitech.com
18
Signed-off-by: jia.he@hxt-semitech.com
19
Signed-off-by: Eric Auger <eric.auger@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
Message-id: 1529653501-15358-2-git-send-email-eric.auger@redhat.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
23
---
11
target/arm/gdbstub.c | 3 +--
24
hw/arm/smmuv3-internal.h | 12 ++++-
12
1 file changed, 1 insertion(+), 2 deletions(-)
25
hw/arm/smmuv3.c | 96 +++++++++++++++++++++++++++-------------
13
26
hw/arm/trace-events | 7 +--
14
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
27
3 files changed, 80 insertions(+), 35 deletions(-)
28
29
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
15
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/gdbstub.c
31
--- a/hw/arm/smmuv3-internal.h
17
+++ b/target/arm/gdbstub.c
32
+++ b/hw/arm/smmuv3-internal.h
18
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
33
@@ -XXX,XX +XXX,XX @@
19
RegisterSysregXmlParam param = {cs, s};
34
20
35
#include "hw/arm/smmu-common.h"
21
cpu->dyn_xml.num_cpregs = 0;
36
22
- cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
37
+typedef enum SMMUTranslationStatus {
23
- g_hash_table_size(cpu->cp_regs));
38
+ SMMU_TRANS_DISABLE,
24
+ cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
39
+ SMMU_TRANS_ABORT,
25
g_string_printf(s, "<?xml version=\"1.0\"?>");
40
+ SMMU_TRANS_BYPASS,
26
g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
41
+ SMMU_TRANS_ERROR,
27
g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
42
+ SMMU_TRANS_SUCCESS,
43
+} SMMUTranslationStatus;
44
+
45
/* MMIO Registers */
46
47
REG32(IDR0, 0x0)
48
@@ -XXX,XX +XXX,XX @@ enum { /* Command completion notification */
49
/* Events */
50
51
typedef enum SMMUEventType {
52
- SMMU_EVT_OK = 0x00,
53
+ SMMU_EVT_NONE = 0x00,
54
SMMU_EVT_F_UUT ,
55
SMMU_EVT_C_BAD_STREAMID ,
56
SMMU_EVT_F_STE_FETCH ,
57
@@ -XXX,XX +XXX,XX @@ typedef enum SMMUEventType {
58
} SMMUEventType;
59
60
static const char *event_stringify[] = {
61
- [SMMU_EVT_OK] = "SMMU_EVT_OK",
62
+ [SMMU_EVT_NONE] = "no recorded event",
63
[SMMU_EVT_F_UUT] = "SMMU_EVT_F_UUT",
64
[SMMU_EVT_C_BAD_STREAMID] = "SMMU_EVT_C_BAD_STREAMID",
65
[SMMU_EVT_F_STE_FETCH] = "SMMU_EVT_F_STE_FETCH",
66
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/hw/arm/smmuv3.c
69
+++ b/hw/arm/smmuv3.c
70
@@ -XXX,XX +XXX,XX @@
71
#include "hw/qdev-core.h"
72
#include "hw/pci/pci.h"
73
#include "exec/address-spaces.h"
74
+#include "cpu.h"
75
#include "trace.h"
76
#include "qemu/log.h"
77
#include "qemu/error-report.h"
78
@@ -XXX,XX +XXX,XX @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
79
EVT_SET_SID(&evt, info->sid);
80
81
switch (info->type) {
82
- case SMMU_EVT_OK:
83
+ case SMMU_EVT_NONE:
84
return;
85
case SMMU_EVT_F_UUT:
86
EVT_SET_SSID(&evt, info->u.f_uut.ssid);
87
@@ -XXX,XX +XXX,XX @@ static int smmu_get_cd(SMMUv3State *s, STE *ste, uint32_t ssid,
88
return 0;
89
}
90
91
-/* Returns <0 if the caller has no need to continue the translation */
92
+/* Returns < 0 in case of invalid STE, 0 otherwise */
93
static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
94
STE *ste, SMMUEventInfo *event)
95
{
96
uint32_t config;
97
- int ret = -EINVAL;
98
99
if (!STE_VALID(ste)) {
100
goto bad_ste;
101
@@ -XXX,XX +XXX,XX @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
102
config = STE_CONFIG(ste);
103
104
if (STE_CFG_ABORT(config)) {
105
- cfg->aborted = true; /* abort but don't record any event */
106
- return ret;
107
+ cfg->aborted = true;
108
+ return 0;
109
}
110
111
if (STE_CFG_BYPASS(config)) {
112
cfg->bypassed = true;
113
- return ret;
114
+ return 0;
115
}
116
117
if (STE_CFG_S2_ENABLED(config)) {
118
@@ -XXX,XX +XXX,XX @@ bad_cd:
119
* the different configuration decoding steps
120
* @event: must be zero'ed by the caller
121
*
122
- * return < 0 if the translation needs to be aborted (@event is filled
123
+ * return < 0 in case of config decoding error (@event is filled
124
* accordingly). Return 0 otherwise.
125
*/
126
static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
127
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
128
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
129
uint32_t sid = smmu_get_sid(sdev);
130
SMMUv3State *s = sdev->smmu;
131
- int ret = -EINVAL;
132
+ int ret;
133
STE ste;
134
CD cd;
135
136
- if (smmu_find_ste(s, sid, &ste, event)) {
137
+ ret = smmu_find_ste(s, sid, &ste, event);
138
+ if (ret) {
139
return ret;
140
}
141
142
- if (decode_ste(s, cfg, &ste, event)) {
143
+ ret = decode_ste(s, cfg, &ste, event);
144
+ if (ret) {
145
return ret;
146
}
147
148
- if (smmu_get_cd(s, &ste, 0 /* ssid */, &cd, event)) {
149
+ if (cfg->aborted || cfg->bypassed) {
150
+ return 0;
151
+ }
152
+
153
+ ret = smmu_get_cd(s, &ste, 0 /* ssid */, &cd, event);
154
+ if (ret) {
155
return ret;
156
}
157
158
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
159
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
160
SMMUv3State *s = sdev->smmu;
161
uint32_t sid = smmu_get_sid(sdev);
162
- SMMUEventInfo event = {.type = SMMU_EVT_OK, .sid = sid};
163
+ SMMUEventInfo event = {.type = SMMU_EVT_NONE, .sid = sid};
164
SMMUPTWEventInfo ptw_info = {};
165
+ SMMUTranslationStatus status;
166
SMMUTransCfg cfg = {};
167
IOMMUTLBEntry entry = {
168
.target_as = &address_space_memory,
169
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
170
.addr_mask = ~(hwaddr)0,
171
.perm = IOMMU_NONE,
172
};
173
- int ret = 0;
174
175
if (!smmu_enabled(s)) {
176
- goto out;
177
+ status = SMMU_TRANS_DISABLE;
178
+ goto epilogue;
179
}
180
181
- ret = smmuv3_decode_config(mr, &cfg, &event);
182
- if (ret) {
183
- goto out;
184
+ if (smmuv3_decode_config(mr, &cfg, &event)) {
185
+ status = SMMU_TRANS_ERROR;
186
+ goto epilogue;
187
}
188
189
if (cfg.aborted) {
190
- goto out;
191
+ status = SMMU_TRANS_ABORT;
192
+ goto epilogue;
193
}
194
195
- ret = smmu_ptw(&cfg, addr, flag, &entry, &ptw_info);
196
- if (ret) {
197
+ if (cfg.bypassed) {
198
+ status = SMMU_TRANS_BYPASS;
199
+ goto epilogue;
200
+ }
201
+
202
+ if (smmu_ptw(&cfg, addr, flag, &entry, &ptw_info)) {
203
switch (ptw_info.type) {
204
case SMMU_PTW_ERR_WALK_EABT:
205
event.type = SMMU_EVT_F_WALK_EABT;
206
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
207
default:
208
g_assert_not_reached();
209
}
210
+ status = SMMU_TRANS_ERROR;
211
+ } else {
212
+ status = SMMU_TRANS_SUCCESS;
213
}
214
-out:
215
- if (ret) {
216
- qemu_log_mask(LOG_GUEST_ERROR,
217
- "%s translation failed for iova=0x%"PRIx64"(%d)\n",
218
- mr->parent_obj.name, addr, ret);
219
- entry.perm = IOMMU_NONE;
220
- smmuv3_record_event(s, &event);
221
- } else if (!cfg.aborted) {
222
+
223
+epilogue:
224
+ switch (status) {
225
+ case SMMU_TRANS_SUCCESS:
226
entry.perm = flag;
227
- trace_smmuv3_translate(mr->parent_obj.name, sid, addr,
228
- entry.translated_addr, entry.perm);
229
+ trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
230
+ entry.translated_addr, entry.perm);
231
+ break;
232
+ case SMMU_TRANS_DISABLE:
233
+ entry.perm = flag;
234
+ entry.addr_mask = ~TARGET_PAGE_MASK;
235
+ trace_smmuv3_translate_disable(mr->parent_obj.name, sid, addr,
236
+ entry.perm);
237
+ break;
238
+ case SMMU_TRANS_BYPASS:
239
+ entry.perm = flag;
240
+ entry.addr_mask = ~TARGET_PAGE_MASK;
241
+ trace_smmuv3_translate_bypass(mr->parent_obj.name, sid, addr,
242
+ entry.perm);
243
+ break;
244
+ case SMMU_TRANS_ABORT:
245
+ /* no event is recorded on abort */
246
+ trace_smmuv3_translate_abort(mr->parent_obj.name, sid, addr,
247
+ entry.perm);
248
+ break;
249
+ case SMMU_TRANS_ERROR:
250
+ qemu_log_mask(LOG_GUEST_ERROR,
251
+ "%s translation failed for iova=0x%"PRIx64"(%s)\n",
252
+ mr->parent_obj.name, addr, smmu_event_string(event.type));
253
+ smmuv3_record_event(s, &event);
254
+ break;
255
}
256
257
return entry;
258
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
259
index XXXXXXX..XXXXXXX 100644
260
--- a/hw/arm/trace-events
261
+++ b/hw/arm/trace-events
262
@@ -XXX,XX +XXX,XX @@ smmuv3_record_event(const char *type, uint32_t sid) "%s sid=%d"
263
smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "SID:0x%x features:0x%x, sid_split:0x%x"
264
smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%"PRIx64" l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
265
smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
266
-smmuv3_translate_bypass(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=%d bypass iova:0x%"PRIx64" is_write=%d"
267
-smmuv3_translate_in(uint16_t sid, int pci_bus_num, uint64_t strtab_base) "SID:0x%x bus:%d strtab_base:0x%"PRIx64
268
+smmuv3_translate_disable(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=%d bypass (smmu disabled) iova:0x%"PRIx64" is_write=%d"
269
+smmuv3_translate_bypass(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=%d STE bypass iova:0x%"PRIx64" is_write=%d"
270
+smmuv3_translate_abort(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=%d abort on iova:0x%"PRIx64" is_write=%d"
271
+smmuv3_translate_success(const char *n, uint16_t sid, uint64_t iova, uint64_t translated, int perm) "%s sid=%d iova=0x%"PRIx64" translated=0x%"PRIx64" perm=0x%x"
272
smmuv3_get_cd(uint64_t addr) "CD addr: 0x%"PRIx64
273
-smmuv3_translate(const char *n, uint16_t sid, uint64_t iova, uint64_t translated, int perm) "%s sid=%d iova=0x%"PRIx64" translated=0x%"PRIx64" perm=0x%x"
274
smmuv3_decode_cd(uint32_t oas) "oas=%d"
275
smmuv3_decode_cd_tt(int i, uint32_t tsz, uint64_t ttb, uint32_t granule_sz) "TT[%d]:tsz:%d ttb:0x%"PRIx64" granule_sz:%d"
28
--
276
--
29
2.17.1
277
2.17.1
30
278
31
279
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Eric Auger <eric.auger@redhat.com>
2
add MemTxAttrs as an argument to address_space_access_valid().
2
3
Its callers either have an attrs value to hand, or don't care
3
Let's cache config data to avoid fetching and parsing STE/CD
4
and can use MEMTXATTRS_UNSPECIFIED.
4
structures on each translation. We invalidate them on data structure
5
5
invalidation commands.
6
7
We put in place a per-smmu mutex to protect the config cache. This
8
will be useful too to protect the IOTLB cache. The caches can be
9
accessed without the BQL, i.e. in the IO dataplane. The same kind of mutex was
10
put in place in the Intel vIOMMU.
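A minimal sketch of the lookup-or-decode pattern described above, with cache accesses done under a lock (illustrative only: the Demo* types and demo_* helpers are stand-ins, GMutex stands in for the QemuMutex added to SMMUv3State, and the real code keys the table by the SMMUDevice handle and decodes the guest STE/CD structures):

  #include <glib.h>

  typedef struct DemoDev { int id; } DemoDev;
  typedef struct DemoCfg { gboolean bypassed; } DemoCfg;

  static GHashTable *configs;   /* keyed by device pointer, owns its values */
  static GMutex cache_lock;     /* stands in for the per-SMMU QemuMutex */

  /* Hypothetical decode step standing in for smmuv3_decode_config(). */
  static int demo_decode(DemoDev *dev, DemoCfg *cfg)
  {
      (void)dev;
      cfg->bypassed = FALSE;
      return 0;                 /* 0: valid config, < 0: bad STE/CD */
  }

  static DemoCfg *demo_get_config(DemoDev *dev)
  {
      DemoCfg *cfg;

      g_mutex_lock(&cache_lock);
      cfg = g_hash_table_lookup(configs, dev);
      if (!cfg) {                                     /* miss: decode and cache */
          cfg = g_new0(DemoCfg, 1);
          if (demo_decode(dev, cfg) == 0) {
              g_hash_table_insert(configs, dev, cfg); /* table now owns cfg */
          } else {
              g_free(cfg);
              cfg = NULL;
          }
      }
      g_mutex_unlock(&cache_lock);
      return cfg;
  }

  int main(void)
  {
      DemoDev dev = { .id = 1 };

      configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
      demo_get_config(&dev);                 /* miss: decodes and inserts */
      demo_get_config(&dev);                 /* hit: returns the cached entry */
      g_hash_table_remove(configs, &dev);    /* invalidation, as on CFGI_STE */
      g_hash_table_destroy(configs);
      return 0;
  }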
11
12
Signed-off-by: Eric Auger <eric.auger@redhat.com>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 1529653501-15358-3-git-send-email-eric.auger@redhat.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
10
---
16
---
11
include/exec/memory.h | 4 +++-
17
include/hw/arm/smmu-common.h | 5 ++
12
include/sysemu/dma.h | 3 ++-
18
include/hw/arm/smmuv3.h | 1 +
13
exec.c | 3 ++-
19
hw/arm/smmu-common.c | 24 ++++++-
14
target/s390x/diag.c | 6 ++++--
20
hw/arm/smmuv3.c | 135 +++++++++++++++++++++++++++++++++--
15
target/s390x/excp_helper.c | 3 ++-
21
hw/arm/trace-events | 6 ++
16
target/s390x/mmu_helper.c | 3 ++-
22
5 files changed, 164 insertions(+), 7 deletions(-)
17
target/s390x/sigp.c | 3 ++-
23
18
7 files changed, 17 insertions(+), 8 deletions(-)
24
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
19
25
index XXXXXXX..XXXXXXX 100644
20
diff --git a/include/exec/memory.h b/include/exec/memory.h
26
--- a/include/hw/arm/smmu-common.h
21
index XXXXXXX..XXXXXXX 100644
27
+++ b/include/hw/arm/smmu-common.h
22
--- a/include/exec/memory.h
28
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUDevice {
23
+++ b/include/exec/memory.h
29
int devfn;
24
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
30
IOMMUMemoryRegion iommu;
25
* @addr: address within that address space
31
AddressSpace as;
26
* @len: length of the area to be checked
32
+ uint32_t cfg_cache_hits;
27
* @is_write: indicates the transfer direction
33
+ uint32_t cfg_cache_misses;
28
+ * @attrs: memory attributes
34
} SMMUDevice;
35
36
typedef struct SMMUNotifierNode {
37
@@ -XXX,XX +XXX,XX @@ int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
29
*/
38
*/
30
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
39
SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova);
31
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
40
32
+ bool is_write, MemTxAttrs attrs);
41
+/* Return the iommu mr associated to @sid, or NULL if none */
33
42
+IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid);
34
/* address_space_map: map a physical memory region into a host virtual address
43
+
35
*
44
#endif /* HW_ARM_SMMU_COMMON */
36
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
45
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
37
index XXXXXXX..XXXXXXX 100644
46
index XXXXXXX..XXXXXXX 100644
38
--- a/include/sysemu/dma.h
47
--- a/include/hw/arm/smmuv3.h
39
+++ b/include/sysemu/dma.h
48
+++ b/include/hw/arm/smmuv3.h
40
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
49
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUv3State {
41
DMADirection dir)
50
SMMUQueue eventq, cmdq;
51
52
qemu_irq irq[4];
53
+ QemuMutex mutex;
54
} SMMUv3State;
55
56
typedef enum {
57
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/hw/arm/smmu-common.c
60
+++ b/hw/arm/smmu-common.c
61
@@ -XXX,XX +XXX,XX @@ static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
62
return &sdev->as;
63
}
64
65
+IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid)
66
+{
67
+ uint8_t bus_n, devfn;
68
+ SMMUPciBus *smmu_bus;
69
+ SMMUDevice *smmu;
70
+
71
+ bus_n = PCI_BUS_NUM(sid);
72
+ smmu_bus = smmu_find_smmu_pcibus(s, bus_n);
73
+ if (smmu_bus) {
74
+ devfn = sid & 0x7;
75
+ smmu = smmu_bus->pbdev[devfn];
76
+ if (smmu) {
77
+ return &smmu->iommu;
78
+ }
79
+ }
80
+ return NULL;
81
+}
82
+
83
static void smmu_base_realize(DeviceState *dev, Error **errp)
42
{
84
{
43
return address_space_access_valid(as, addr, len,
85
SMMUState *s = ARM_SMMU(dev);
44
- dir == DMA_DIRECTION_FROM_DEVICE);
86
@@ -XXX,XX +XXX,XX @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
45
+ dir == DMA_DIRECTION_FROM_DEVICE,
87
error_propagate(errp, local_err);
46
+ MEMTXATTRS_UNSPECIFIED);
88
return;
89
}
90
-
91
+ s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
92
s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
93
94
if (s->primary_bus) {
95
@@ -XXX,XX +XXX,XX @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
96
97
static void smmu_base_reset(DeviceState *dev)
98
{
99
- /* will be filled later on */
100
+ SMMUState *s = ARM_SMMU(dev);
101
+
102
+ g_hash_table_remove_all(s->configs);
47
}
103
}
48
104
49
static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
105
static Property smmu_dev_properties[] = {
50
diff --git a/exec.c b/exec.c
106
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
51
index XXXXXXX..XXXXXXX 100644
107
index XXXXXXX..XXXXXXX 100644
52
--- a/exec.c
108
--- a/hw/arm/smmuv3.c
53
+++ b/exec.c
109
+++ b/hw/arm/smmuv3.c
54
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
110
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
111
return decode_cd(cfg, &cd, event);
55
}
112
}
56
113
57
bool address_space_access_valid(AddressSpace *as, hwaddr addr,
114
+/**
58
- int len, bool is_write)
115
+ * smmuv3_get_config - Look up for a cached copy of configuration data for
59
+ int len, bool is_write,
116
+ * @sdev and on cache miss performs a configuration structure decoding from
60
+ MemTxAttrs attrs)
117
+ * guest RAM.
118
+ *
119
+ * @sdev: SMMUDevice handle
120
+ * @event: output event info
121
+ *
122
+ * The configuration cache contains data resulting from both STE and CD
123
+ * decoding under the form of an SMMUTransCfg struct. The hash table is indexed
124
+ * by the SMMUDevice handle.
125
+ */
126
+static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
127
+{
128
+ SMMUv3State *s = sdev->smmu;
129
+ SMMUState *bc = &s->smmu_state;
130
+ SMMUTransCfg *cfg;
131
+
132
+ cfg = g_hash_table_lookup(bc->configs, sdev);
133
+ if (cfg) {
134
+ sdev->cfg_cache_hits++;
135
+ trace_smmuv3_config_cache_hit(smmu_get_sid(sdev),
136
+ sdev->cfg_cache_hits, sdev->cfg_cache_misses,
137
+ 100 * sdev->cfg_cache_hits /
138
+ (sdev->cfg_cache_hits + sdev->cfg_cache_misses));
139
+ } else {
140
+ sdev->cfg_cache_misses++;
141
+ trace_smmuv3_config_cache_miss(smmu_get_sid(sdev),
142
+ sdev->cfg_cache_hits, sdev->cfg_cache_misses,
143
+ 100 * sdev->cfg_cache_hits /
144
+ (sdev->cfg_cache_hits + sdev->cfg_cache_misses));
145
+ cfg = g_new0(SMMUTransCfg, 1);
146
+
147
+ if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
148
+ g_hash_table_insert(bc->configs, sdev, cfg);
149
+ } else {
150
+ g_free(cfg);
151
+ cfg = NULL;
152
+ }
153
+ }
154
+ return cfg;
155
+}
156
+
157
+static void smmuv3_flush_config(SMMUDevice *sdev)
158
+{
159
+ SMMUv3State *s = sdev->smmu;
160
+ SMMUState *bc = &s->smmu_state;
161
+
162
+ trace_smmuv3_config_cache_inv(smmu_get_sid(sdev));
163
+ g_hash_table_remove(bc->configs, sdev);
164
+}
165
+
166
static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
167
IOMMUAccessFlags flag, int iommu_idx)
61
{
168
{
62
FlatView *fv;
169
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
63
bool result;
170
SMMUEventInfo event = {.type = SMMU_EVT_NONE, .sid = sid};
64
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
171
SMMUPTWEventInfo ptw_info = {};
65
index XXXXXXX..XXXXXXX 100644
172
SMMUTranslationStatus status;
66
--- a/target/s390x/diag.c
173
- SMMUTransCfg cfg = {};
67
+++ b/target/s390x/diag.c
174
+ SMMUTransCfg *cfg = NULL;
68
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
175
IOMMUTLBEntry entry = {
69
return;
176
.target_as = &address_space_memory,
177
.iova = addr,
178
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
179
.perm = IOMMU_NONE,
180
};
181
182
+ qemu_mutex_lock(&s->mutex);
183
+
184
if (!smmu_enabled(s)) {
185
status = SMMU_TRANS_DISABLE;
186
goto epilogue;
187
}
188
189
- if (smmuv3_decode_config(mr, &cfg, &event)) {
190
+ cfg = smmuv3_get_config(sdev, &event);
191
+ if (!cfg) {
192
status = SMMU_TRANS_ERROR;
193
goto epilogue;
194
}
195
196
- if (cfg.aborted) {
197
+ if (cfg->aborted) {
198
status = SMMU_TRANS_ABORT;
199
goto epilogue;
200
}
201
202
- if (cfg.bypassed) {
203
+ if (cfg->bypassed) {
204
status = SMMU_TRANS_BYPASS;
205
goto epilogue;
206
}
207
208
- if (smmu_ptw(&cfg, addr, flag, &entry, &ptw_info)) {
209
+ if (smmu_ptw(cfg, addr, flag, &entry, &ptw_info)) {
210
switch (ptw_info.type) {
211
case SMMU_PTW_ERR_WALK_EABT:
212
event.type = SMMU_EVT_F_WALK_EABT;
213
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
214
}
215
216
epilogue:
217
+ qemu_mutex_unlock(&s->mutex);
218
switch (status) {
219
case SMMU_TRANS_SUCCESS:
220
entry.perm = flag;
221
@@ -XXX,XX +XXX,XX @@ epilogue:
222
223
static int smmuv3_cmdq_consume(SMMUv3State *s)
224
{
225
+ SMMUState *bs = ARM_SMMU(s);
226
SMMUCmdError cmd_error = SMMU_CERROR_NONE;
227
SMMUQueue *q = &s->cmdq;
228
SMMUCommandType type = 0;
229
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
230
231
trace_smmuv3_cmdq_opcode(smmu_cmd_string(type));
232
233
+ qemu_mutex_lock(&s->mutex);
234
switch (type) {
235
case SMMU_CMD_SYNC:
236
if (CMD_SYNC_CS(&cmd) & CMD_SYNC_SIG_IRQ) {
237
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
238
break;
239
case SMMU_CMD_PREFETCH_CONFIG:
240
case SMMU_CMD_PREFETCH_ADDR:
241
+ break;
242
case SMMU_CMD_CFGI_STE:
243
+ {
244
+ uint32_t sid = CMD_SID(&cmd);
245
+ IOMMUMemoryRegion *mr = smmu_iommu_mr(bs, sid);
246
+ SMMUDevice *sdev;
247
+
248
+ if (CMD_SSEC(&cmd)) {
249
+ cmd_error = SMMU_CERROR_ILL;
250
+ break;
251
+ }
252
+
253
+ if (!mr) {
254
+ break;
255
+ }
256
+
257
+ trace_smmuv3_cmdq_cfgi_ste(sid);
258
+ sdev = container_of(mr, SMMUDevice, iommu);
259
+ smmuv3_flush_config(sdev);
260
+
261
+ break;
262
+ }
263
case SMMU_CMD_CFGI_STE_RANGE: /* same as SMMU_CMD_CFGI_ALL */
264
+ {
265
+ uint32_t start = CMD_SID(&cmd), end, i;
266
+ uint8_t range = CMD_STE_RANGE(&cmd);
267
+
268
+ if (CMD_SSEC(&cmd)) {
269
+ cmd_error = SMMU_CERROR_ILL;
270
+ break;
271
+ }
272
+
273
+ end = start + (1 << (range + 1)) - 1;
274
+ trace_smmuv3_cmdq_cfgi_ste_range(start, end);
275
+
276
+ for (i = start; i <= end; i++) {
277
+ IOMMUMemoryRegion *mr = smmu_iommu_mr(bs, i);
278
+ SMMUDevice *sdev;
279
+
280
+ if (!mr) {
281
+ continue;
282
+ }
283
+ sdev = container_of(mr, SMMUDevice, iommu);
284
+ smmuv3_flush_config(sdev);
285
+ }
286
+ break;
287
+ }
288
case SMMU_CMD_CFGI_CD:
289
case SMMU_CMD_CFGI_CD_ALL:
290
+ {
291
+ uint32_t sid = CMD_SID(&cmd);
292
+ IOMMUMemoryRegion *mr = smmu_iommu_mr(bs, sid);
293
+ SMMUDevice *sdev;
294
+
295
+ if (CMD_SSEC(&cmd)) {
296
+ cmd_error = SMMU_CERROR_ILL;
297
+ break;
298
+ }
299
+
300
+ if (!mr) {
301
+ break;
302
+ }
303
+
304
+ trace_smmuv3_cmdq_cfgi_cd(sid);
305
+ sdev = container_of(mr, SMMUDevice, iommu);
306
+ smmuv3_flush_config(sdev);
307
+ break;
308
+ }
309
case SMMU_CMD_TLBI_NH_ALL:
310
case SMMU_CMD_TLBI_NH_ASID:
311
case SMMU_CMD_TLBI_NH_VA:
312
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
313
"Illegal command type: %d\n", CMD_TYPE(&cmd));
314
break;
70
}
315
}
71
if (!address_space_access_valid(&address_space_memory, addr,
316
+ qemu_mutex_unlock(&s->mutex);
72
- sizeof(IplParameterBlock), false)) {
317
if (cmd_error) {
73
+ sizeof(IplParameterBlock), false,
318
break;
74
+ MEMTXATTRS_UNSPECIFIED)) {
75
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
76
return;
77
}
319
}
78
@@ -XXX,XX +XXX,XX @@ out:
320
@@ -XXX,XX +XXX,XX @@ static void smmu_realize(DeviceState *d, Error **errp)
79
return;
80
}
81
if (!address_space_access_valid(&address_space_memory, addr,
82
- sizeof(IplParameterBlock), true)) {
83
+ sizeof(IplParameterBlock), true,
84
+ MEMTXATTRS_UNSPECIFIED)) {
85
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
86
return;
87
}
88
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/s390x/excp_helper.c
91
+++ b/target/s390x/excp_helper.c
92
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,
93
94
/* check out of RAM access */
95
if (!address_space_access_valid(&address_space_memory, raddr,
96
- TARGET_PAGE_SIZE, rw)) {
97
+ TARGET_PAGE_SIZE, rw,
98
+ MEMTXATTRS_UNSPECIFIED)) {
99
DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
100
(uint64_t)raddr, (uint64_t)ram_size);
101
trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
102
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/s390x/mmu_helper.c
105
+++ b/target/s390x/mmu_helper.c
106
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
107
return ret;
108
}
109
if (!address_space_access_valid(&address_space_memory, pages[i],
110
- TARGET_PAGE_SIZE, is_write)) {
111
+ TARGET_PAGE_SIZE, is_write,
112
+ MEMTXATTRS_UNSPECIFIED)) {
113
trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
114
return -EFAULT;
115
}
116
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/s390x/sigp.c
119
+++ b/target/s390x/sigp.c
120
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
121
cpu_synchronize_state(cs);
122
123
if (!address_space_access_valid(&address_space_memory, addr,
124
- sizeof(struct LowCore), false)) {
125
+ sizeof(struct LowCore), false,
126
+ MEMTXATTRS_UNSPECIFIED)) {
127
set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
128
return;
321
return;
129
}
322
}
323
324
+ qemu_mutex_init(&s->mutex);
325
+
326
memory_region_init_io(&sys->iomem, OBJECT(s),
327
&smmu_mem_ops, sys, TYPE_ARM_SMMUV3, 0x20000);
328
329
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
330
index XXXXXXX..XXXXXXX 100644
331
--- a/hw/arm/trace-events
332
+++ b/hw/arm/trace-events
333
@@ -XXX,XX +XXX,XX @@ smmuv3_translate_success(const char *n, uint16_t sid, uint64_t iova, uint64_t tr
334
smmuv3_get_cd(uint64_t addr) "CD addr: 0x%"PRIx64
335
smmuv3_decode_cd(uint32_t oas) "oas=%d"
336
smmuv3_decode_cd_tt(int i, uint32_t tsz, uint64_t ttb, uint32_t granule_sz) "TT[%d]:tsz:%d ttb:0x%"PRIx64" granule_sz:%d"
337
+smmuv3_cmdq_cfgi_ste(int streamid) "streamid =%d"
338
+smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%d - end=0x%d"
339
+smmuv3_cmdq_cfgi_cd(uint32_t sid) "streamid = %d"
340
+smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid %d (hits=%d, misses=%d, hit rate=%d)"
341
+smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid %d (hits=%d, misses=%d, hit rate=%d)"
342
+smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid %d"
130
--
343
--
131
2.17.1
344
2.17.1
132
345
133
346
diff view generated by jsdifflib
1
From: Igor Mammedov <imammedo@redhat.com>
1
From: Eric Auger <eric.auger@redhat.com>
2
2
3
When QEMU is started with the following CLI
3
We emulate a TLB cache of size SMMU_IOTLB_MAX_SIZE=256.
4
-machine virt,gic-version=3,accel=kvm -cpu host -bios AAVMF_CODE.fd
4
It is implemented as a hash table whose key is a combination
5
it crashes with abort at
5
of the 16b asid and 48b IOVA (Jenkins hash).
6
accel/kvm/kvm-all.c:2164:
6
7
KVM_SET_DEVICE_ATTR failed: Group 6 attr 0x000000000000c665: Invalid argument
7
Entries are invalidated on TLB invalidation commands, either
8
8
globally, or per asid, or per asid/iova.
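A minimal sketch (assuming glib; the Demo* names are stand-ins and the hash below is simplified compared with the Jenkins hash the patch uses) of how the three invalidation granularities map onto hash-table operations:

  #include <glib.h>
  #include <stdint.h>

  typedef struct DemoKey { uint64_t iova; uint16_t asid; } DemoKey;

  static guint demo_key_hash(gconstpointer v)
  {
      const DemoKey *k = v;
      return (guint)(k->iova ^ (k->iova >> 32) ^ ((uint64_t)k->asid << 16));
  }

  static gboolean demo_key_equal(gconstpointer a, gconstpointer b)
  {
      const DemoKey *k1 = a, *k2 = b;
      return k1->asid == k2->asid && k1->iova == k2->iova;
  }

  static gboolean demo_match_asid(gpointer key, gpointer value, gpointer user_data)
  {
      (void)value;
      return ((DemoKey *)key)->asid == *(uint16_t *)user_data;
  }

  int main(void)
  {
      GHashTable *iotlb = g_hash_table_new_full(demo_key_hash, demo_key_equal,
                                                g_free, g_free);
      DemoKey *k = g_new0(DemoKey, 1);
      DemoKey lookup = { .iova = 0x1000, .asid = 5 };
      uint16_t asid = 5;

      k->asid = 5;
      k->iova = 0x1000;
      g_hash_table_insert(iotlb, k, g_strdup("cached translation"));

      g_hash_table_remove(iotlb, &lookup);                        /* per asid + iova */
      g_hash_table_foreach_remove(iotlb, demo_match_asid, &asid); /* per asid */
      g_hash_table_remove_all(iotlb);                             /* global */

      g_hash_table_destroy(iotlb);
      return 0;
  }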
9
Which is caused by implicit dependency of kvm_arm_gicv3_reset() on
9
10
arm_gicv3_icc_reset(), where the latter is called by the CPU
10
Signed-off-by: Eric Auger <eric.auger@redhat.com>
11
reset callback.
11
Message-id: 1529653501-15358-4-git-send-email-eric.auger@redhat.com
12
13
However commit:
14
3b77f6c arm/boot: split load_dtb() from arm_load_kernel()
15
broke CPU reset callback registration in case
16
17
arm_load_kernel()
18
...
19
if (!info->kernel_filename || info->firmware_loaded)
20
21
branch is taken, i.e. it's sufficient to provide a firmware
22
or do not provide kernel on CLI to skip cpu reset callback
23
registration, where before offending commit the callback
24
has been registered unconditionally.
25
26
Fix it by registering the callback right at the beginning of
27
arm_load_kernel() unconditionally instead of doing it at the end.
28
29
NOTE:
30
we probably should eliminate that dependency anyways as well as
31
separate arch CPU reset parts from arm_load_kernel() into CPU
32
itself, but that refactoring that I probably would have to do
33
anyways later for CPU hotplug to work.
34
35
Reported-by: Auger Eric <eric.auger@redhat.com>
36
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
37
Reviewed-by: Eric Auger <eric.auger@redhat.com>
38
Tested-by: Eric Auger <eric.auger@redhat.com>
39
Message-id: 1527070950-208350-1-git-send-email-imammedo@redhat.com
40
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
41
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
42
---
14
---
43
hw/arm/boot.c | 18 +++++++++---------
15
include/hw/arm/smmu-common.h | 13 +++++
44
1 file changed, 9 insertions(+), 9 deletions(-)
16
hw/arm/smmu-common.c | 60 ++++++++++++++++++++++
45
17
hw/arm/smmuv3.c | 98 ++++++++++++++++++++++++++++++++++--
46
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
18
hw/arm/trace-events | 9 ++++
19
4 files changed, 176 insertions(+), 4 deletions(-)
20
21
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
47
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/arm/boot.c
23
--- a/include/hw/arm/smmu-common.h
49
+++ b/hw/arm/boot.c
24
+++ b/include/hw/arm/smmu-common.h
50
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
25
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUTransCfg {
51
static const ARMInsnFixup *primary_loader;
26
uint8_t tbi; /* Top Byte Ignore */
52
AddressSpace *as = arm_boot_address_space(cpu, info);
27
uint16_t asid;
53
28
SMMUTransTableInfo tt[2];
54
+ /* CPU objects (unlike devices) are not automatically reset on system
29
+ uint32_t iotlb_hits; /* counts IOTLB hits for this asid */
55
+ * reset, so we must always register a handler to do so. If we're
30
+ uint32_t iotlb_misses; /* counts IOTLB misses for this asid */
56
+ * actually loading a kernel, the handler is also responsible for
31
} SMMUTransCfg;
57
+ * arranging that we start it correctly.
32
58
+ */
33
typedef struct SMMUDevice {
59
+ for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
34
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUPciBus {
60
+ qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
35
SMMUDevice *pbdev[0]; /* Parent array is sparse, so dynamically alloc */
36
} SMMUPciBus;
37
38
+typedef struct SMMUIOTLBKey {
39
+ uint64_t iova;
40
+ uint16_t asid;
41
+} SMMUIOTLBKey;
42
+
43
typedef struct SMMUState {
44
/* <private> */
45
SysBusDevice dev;
46
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova);
47
/* Return the iommu mr associated to @sid, or NULL if none */
48
IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid);
49
50
+#define SMMU_IOTLB_MAX_SIZE 256
51
+
52
+void smmu_iotlb_inv_all(SMMUState *s);
53
+void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid);
54
+void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova);
55
+
56
#endif /* HW_ARM_SMMU_COMMON */
57
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/hw/arm/smmu-common.c
60
+++ b/hw/arm/smmu-common.c
61
@@ -XXX,XX +XXX,XX @@
62
#include "qom/cpu.h"
63
#include "hw/qdev-properties.h"
64
#include "qapi/error.h"
65
+#include "qemu/jhash.h"
66
67
#include "qemu/error-report.h"
68
#include "hw/arm/smmu-common.h"
69
#include "smmu-internal.h"
70
71
+/* IOTLB Management */
72
+
73
+inline void smmu_iotlb_inv_all(SMMUState *s)
74
+{
75
+ trace_smmu_iotlb_inv_all();
76
+ g_hash_table_remove_all(s->iotlb);
77
+}
78
+
79
+static gboolean smmu_hash_remove_by_asid(gpointer key, gpointer value,
80
+ gpointer user_data)
81
+{
82
+ uint16_t asid = *(uint16_t *)user_data;
83
+ SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
84
+
85
+ return iotlb_key->asid == asid;
86
+}
87
+
88
+inline void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova)
89
+{
90
+ SMMUIOTLBKey key = {.asid = asid, .iova = iova};
91
+
92
+ trace_smmu_iotlb_inv_iova(asid, iova);
93
+ g_hash_table_remove(s->iotlb, &key);
94
+}
95
+
96
+inline void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
97
+{
98
+ trace_smmu_iotlb_inv_asid(asid);
99
+ g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid, &asid);
100
+}
101
+
102
/* VMSAv8-64 Translation */
103
104
/**
105
@@ -XXX,XX +XXX,XX @@ IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid)
106
return NULL;
107
}
108
109
+static guint smmu_iotlb_key_hash(gconstpointer v)
110
+{
111
+ SMMUIOTLBKey *key = (SMMUIOTLBKey *)v;
112
+ uint32_t a, b, c;
113
+
114
+ /* Jenkins hash */
115
+ a = b = c = JHASH_INITVAL + sizeof(*key);
116
+ a += key->asid;
117
+ b += extract64(key->iova, 0, 32);
118
+ c += extract64(key->iova, 32, 32);
119
+
120
+ __jhash_mix(a, b, c);
121
+ __jhash_final(a, b, c);
122
+
123
+ return c;
124
+}
125
+
126
+static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
127
+{
128
+ const SMMUIOTLBKey *k1 = v1;
129
+ const SMMUIOTLBKey *k2 = v2;
130
+
131
+ return (k1->asid == k2->asid) && (k1->iova == k2->iova);
132
+}
133
+
134
static void smmu_base_realize(DeviceState *dev, Error **errp)
135
{
136
SMMUState *s = ARM_SMMU(dev);
137
@@ -XXX,XX +XXX,XX @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
138
return;
139
}
140
s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
141
+ s->iotlb = g_hash_table_new_full(smmu_iotlb_key_hash, smmu_iotlb_key_equal,
142
+ g_free, g_free);
143
s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
144
145
if (s->primary_bus) {
146
@@ -XXX,XX +XXX,XX @@ static void smmu_base_reset(DeviceState *dev)
147
SMMUState *s = ARM_SMMU(dev);
148
149
g_hash_table_remove_all(s->configs);
150
+ g_hash_table_remove_all(s->iotlb);
151
}
152
153
static Property smmu_dev_properties[] = {
154
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
155
index XXXXXXX..XXXXXXX 100644
156
--- a/hw/arm/smmuv3.c
157
+++ b/hw/arm/smmuv3.c
158
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
159
SMMUEventInfo event = {.type = SMMU_EVT_NONE, .sid = sid};
160
SMMUPTWEventInfo ptw_info = {};
161
SMMUTranslationStatus status;
162
+ SMMUState *bs = ARM_SMMU(s);
163
+ uint64_t page_mask, aligned_addr;
164
+ IOMMUTLBEntry *cached_entry = NULL;
165
+ SMMUTransTableInfo *tt;
166
SMMUTransCfg *cfg = NULL;
167
IOMMUTLBEntry entry = {
168
.target_as = &address_space_memory,
169
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
170
.addr_mask = ~(hwaddr)0,
171
.perm = IOMMU_NONE,
172
};
173
+ SMMUIOTLBKey key, *new_key;
174
175
qemu_mutex_lock(&s->mutex);
176
177
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
178
goto epilogue;
179
}
180
181
- if (smmu_ptw(cfg, addr, flag, &entry, &ptw_info)) {
182
+ tt = select_tt(cfg, addr);
183
+ if (!tt) {
184
+ if (event.record_trans_faults) {
185
+ event.type = SMMU_EVT_F_TRANSLATION;
186
+ event.u.f_translation.addr = addr;
187
+ event.u.f_translation.rnw = flag & 0x1;
188
+ }
189
+ status = SMMU_TRANS_ERROR;
190
+ goto epilogue;
61
+ }
191
+ }
62
+
192
+
63
/* The board code is not supposed to set secure_board_setup unless
193
+ page_mask = (1ULL << (tt->granule_sz)) - 1;
64
* running its code in secure mode is actually possible, and KVM
194
+ aligned_addr = addr & ~page_mask;
65
* doesn't support secure.
195
+
66
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
196
+ key.asid = cfg->asid;
67
ARM_CPU(cs)->env.boot_info = info;
197
+ key.iova = aligned_addr;
198
+
199
+ cached_entry = g_hash_table_lookup(bs->iotlb, &key);
200
+ if (cached_entry) {
201
+ cfg->iotlb_hits++;
202
+ trace_smmu_iotlb_cache_hit(cfg->asid, aligned_addr,
203
+ cfg->iotlb_hits, cfg->iotlb_misses,
204
+ 100 * cfg->iotlb_hits /
205
+ (cfg->iotlb_hits + cfg->iotlb_misses));
206
+ if ((flag & IOMMU_WO) && !(cached_entry->perm & IOMMU_WO)) {
207
+ status = SMMU_TRANS_ERROR;
208
+ if (event.record_trans_faults) {
209
+ event.type = SMMU_EVT_F_PERMISSION;
210
+ event.u.f_permission.addr = addr;
211
+ event.u.f_permission.rnw = flag & 0x1;
212
+ }
213
+ } else {
214
+ status = SMMU_TRANS_SUCCESS;
215
+ }
216
+ goto epilogue;
217
+ }
218
+
219
+ cfg->iotlb_misses++;
220
+ trace_smmu_iotlb_cache_miss(cfg->asid, addr & ~page_mask,
221
+ cfg->iotlb_hits, cfg->iotlb_misses,
222
+ 100 * cfg->iotlb_hits /
223
+ (cfg->iotlb_hits + cfg->iotlb_misses));
224
+
225
+ if (g_hash_table_size(bs->iotlb) >= SMMU_IOTLB_MAX_SIZE) {
226
+ smmu_iotlb_inv_all(bs);
227
+ }
228
+
229
+ cached_entry = g_new0(IOMMUTLBEntry, 1);
230
+
231
+ if (smmu_ptw(cfg, aligned_addr, flag, cached_entry, &ptw_info)) {
232
+ g_free(cached_entry);
233
switch (ptw_info.type) {
234
case SMMU_PTW_ERR_WALK_EABT:
235
event.type = SMMU_EVT_F_WALK_EABT;
236
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
237
}
238
status = SMMU_TRANS_ERROR;
239
} else {
240
+ new_key = g_new0(SMMUIOTLBKey, 1);
241
+ new_key->asid = cfg->asid;
242
+ new_key->iova = aligned_addr;
243
+ g_hash_table_insert(bs->iotlb, new_key, cached_entry);
244
status = SMMU_TRANS_SUCCESS;
68
}
245
}
69
246
70
- /* CPU objects (unlike devices) are not automatically reset on system
247
@@ -XXX,XX +XXX,XX @@ epilogue:
71
- * reset, so we must always register a handler to do so. If we're
248
switch (status) {
72
- * actually loading a kernel, the handler is also responsible for
249
case SMMU_TRANS_SUCCESS:
73
- * arranging that we start it correctly.
250
entry.perm = flag;
74
- */
251
+ entry.translated_addr = cached_entry->translated_addr +
75
- for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
252
+ (addr & page_mask);
76
- qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
253
+ entry.addr_mask = cached_entry->addr_mask;
77
- }
254
trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
78
-
255
entry.translated_addr, entry.perm);
79
if (!info->skip_dtb_autoload && have_dtb(info)) {
256
break;
80
if (arm_load_dtb(info->dtb_start, info, info->dtb_limit, as) < 0) {
257
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
81
exit(1);
258
smmuv3_flush_config(sdev);
259
break;
260
}
261
- case SMMU_CMD_TLBI_NH_ALL:
262
case SMMU_CMD_TLBI_NH_ASID:
263
- case SMMU_CMD_TLBI_NH_VA:
264
+ {
265
+ uint16_t asid = CMD_ASID(&cmd);
266
+
267
+ trace_smmuv3_cmdq_tlbi_nh_asid(asid);
268
+ smmu_iotlb_inv_asid(bs, asid);
269
+ break;
270
+ }
271
+ case SMMU_CMD_TLBI_NH_ALL:
272
+ case SMMU_CMD_TLBI_NSNH_ALL:
273
+ trace_smmuv3_cmdq_tlbi_nh();
274
+ smmu_iotlb_inv_all(bs);
275
+ break;
276
case SMMU_CMD_TLBI_NH_VAA:
277
+ {
278
+ dma_addr_t addr = CMD_ADDR(&cmd);
279
+ uint16_t vmid = CMD_VMID(&cmd);
280
+
281
+ trace_smmuv3_cmdq_tlbi_nh_vaa(vmid, addr);
282
+ smmu_iotlb_inv_all(bs);
283
+ break;
284
+ }
285
+ case SMMU_CMD_TLBI_NH_VA:
286
+ {
287
+ uint16_t asid = CMD_ASID(&cmd);
288
+ uint16_t vmid = CMD_VMID(&cmd);
289
+ dma_addr_t addr = CMD_ADDR(&cmd);
290
+ bool leaf = CMD_LEAF(&cmd);
291
+
292
+ trace_smmuv3_cmdq_tlbi_nh_va(vmid, asid, addr, leaf);
293
+ smmu_iotlb_inv_iova(bs, asid, addr);
294
+ break;
295
+ }
296
case SMMU_CMD_TLBI_EL3_ALL:
297
case SMMU_CMD_TLBI_EL3_VA:
298
case SMMU_CMD_TLBI_EL2_ALL:
299
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
300
case SMMU_CMD_TLBI_EL2_VAA:
301
case SMMU_CMD_TLBI_S12_VMALL:
302
case SMMU_CMD_TLBI_S2_IPA:
303
- case SMMU_CMD_TLBI_NSNH_ALL:
304
case SMMU_CMD_ATC_INV:
305
case SMMU_CMD_PRI_RESP:
306
case SMMU_CMD_RESUME:
307
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
308
index XXXXXXX..XXXXXXX 100644
309
--- a/hw/arm/trace-events
310
+++ b/hw/arm/trace-events
311
@@ -XXX,XX +XXX,XX @@ smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr,
312
smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
313
smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
314
smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
315
+smmu_iotlb_cache_hit(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
316
+smmu_iotlb_cache_miss(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache MISS asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
317
+smmu_iotlb_inv_all(void) "IOTLB invalidate all"
318
+smmu_iotlb_inv_asid(uint16_t asid) "IOTLB invalidate asid=%d"
319
+smmu_iotlb_inv_iova(uint16_t asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
320
321
#hw/arm/smmuv3.c
322
smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
323
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%d - end=0x%d"
324
smmuv3_cmdq_cfgi_cd(uint32_t sid) "streamid = %d"
325
smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid %d (hits=%d, misses=%d, hit rate=%d)"
326
smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid %d (hits=%d, misses=%d, hit rate=%d)"
327
+smmuv3_cmdq_tlbi_nh_va(int vmid, int asid, uint64_t addr, bool leaf) "vmid =%d asid =%d addr=0x%"PRIx64" leaf=%d"
328
+smmuv3_cmdq_tlbi_nh_vaa(int vmid, uint64_t addr) "vmid =%d addr=0x%"PRIx64
329
+smmuv3_cmdq_tlbi_nh(void) ""
330
+smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
331
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid %d"
82
--
332
--
83
2.17.1
333
2.17.1
84
334
85
335
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Eric Auger <eric.auger@redhat.com>
2
2
3
Depending on the host abi, float16, aka uint16_t, values are
3
On TLB invalidation commands, let's call registered
4
passed and returned either zero-extended in the host register
4
IOMMU notifiers. Those can only be UNMAP notifiers.
5
or with garbage at the top of the host register.
5
SMMUv3 does not support notification on MAP (VFIO).
6
6
7
The tcg code generator has so far been assuming garbage, as that
7
This patch allows the vhost use case, where the IOTLB API is notified
8
matches the x86 abi, but this is incorrect for other host abis.
8
on each guest IOTLB invalidation.
9
Further, target/arm has so far been assuming zero-extended results,
9
10
so that it may store the 16-bit value into a 32-bit slot with the
10
Signed-off-by: Eric Auger <eric.auger@redhat.com>
11
high 16-bits already clear.
12
13
Rectify both problems by mapping "f16" in the helper definition
14
to uint32_t instead of (a typedef for) uint16_t. This forces
15
the host compiler to assume garbage in the upper 16 bits on input
16
and to zero-extend the result on output.
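As an aside, and purely to illustrate the calling-convention point above, the
helper below is made up (it is not one touched by this patch): once the
parameter type is a 32-bit integer, the helper itself has to narrow to 16 bits
instead of trusting the caller to have cleared the upper half of the register.

    #include <stdint.h>

    /* Hypothetical fp16 helper: the half-precision value lives in the low 16
     * bits of 'a'; the upper 16 bits may contain garbage and must be ignored. */
    static uint16_t h_clear_sign(uint32_t a)
    {
        return (uint16_t)a & 0x7fff;   /* narrow explicitly, then drop the sign bit */
    }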
17
18
Cc: qemu-stable@nongnu.org
19
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
22
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
23
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1529653501-15358-5-git-send-email-eric.auger@redhat.com
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
---
14
---
26
include/exec/helper-head.h | 2 +-
15
include/hw/arm/smmu-common.h | 6 +++
27
target/arm/helper-a64.c | 35 +++++++++--------
16
hw/arm/smmu-common.c | 34 +++++++++++++
28
target/arm/helper.c | 80 +++++++++++++++++++-------------------
17
hw/arm/smmuv3.c | 99 +++++++++++++++++++++++++++++++++++-
29
3 files changed, 59 insertions(+), 58 deletions(-)
18
hw/arm/trace-events | 5 ++
30
19
4 files changed, 142 insertions(+), 2 deletions(-)
31
diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
20
32
index XXXXXXX..XXXXXXX 100644
21
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
33
--- a/include/exec/helper-head.h
22
index XXXXXXX..XXXXXXX 100644
34
+++ b/include/exec/helper-head.h
23
--- a/include/hw/arm/smmu-common.h
35
@@ -XXX,XX +XXX,XX @@
24
+++ b/include/hw/arm/smmu-common.h
36
#define dh_ctype_int int
25
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_all(SMMUState *s);
37
#define dh_ctype_i64 uint64_t
26
void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid);
38
#define dh_ctype_s64 int64_t
27
void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova);
39
-#define dh_ctype_f16 float16
28
40
+#define dh_ctype_f16 uint32_t
29
+/* Unmap the range of all the notifiers registered to any IOMMU mr */
41
#define dh_ctype_f32 float32
30
+void smmu_inv_notifiers_all(SMMUState *s);
42
#define dh_ctype_f64 float64
31
+
43
#define dh_ctype_ptr void *
32
+/* Unmap the range of all the notifiers registered to @mr */
44
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
33
+void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr);
45
index XXXXXXX..XXXXXXX 100644
34
+
46
--- a/target/arm/helper-a64.c
35
#endif /* HW_ARM_SMMU_COMMON */
47
+++ b/target/arm/helper-a64.c
36
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
48
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
37
index XXXXXXX..XXXXXXX 100644
49
return flags;
38
--- a/hw/arm/smmu-common.c
39
+++ b/hw/arm/smmu-common.c
40
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
41
return (k1->asid == k2->asid) && (k1->iova == k2->iova);
50
}
42
}
51
43
52
-uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
44
+/* Unmap the whole notifier's range */
53
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
45
+static void smmu_unmap_notifier_range(IOMMUNotifier *n)
46
+{
47
+ IOMMUTLBEntry entry;
48
+
49
+ entry.target_as = &address_space_memory;
50
+ entry.iova = n->start;
51
+ entry.perm = IOMMU_NONE;
52
+ entry.addr_mask = n->end - n->start;
53
+
54
+ memory_region_notify_one(n, &entry);
55
+}
56
+
57
+/* Unmap all notifiers attached to @mr */
58
+inline void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
59
+{
60
+ IOMMUNotifier *n;
61
+
62
+ trace_smmu_inv_notifiers_mr(mr->parent_obj.name);
63
+ IOMMU_NOTIFIER_FOREACH(n, mr) {
64
+ smmu_unmap_notifier_range(n);
65
+ }
66
+}
67
+
68
+/* Unmap all notifiers of all mr's */
69
+void smmu_inv_notifiers_all(SMMUState *s)
70
+{
71
+ SMMUNotifierNode *node;
72
+
73
+ QLIST_FOREACH(node, &s->notifiers_list, next) {
74
+ smmu_inv_notifiers_mr(&node->sdev->iommu);
75
+ }
76
+}
77
+
78
static void smmu_base_realize(DeviceState *dev, Error **errp)
54
{
79
{
55
return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
80
SMMUState *s = ARM_SMMU(dev);
81
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
82
index XXXXXXX..XXXXXXX 100644
83
--- a/hw/arm/smmuv3.c
84
+++ b/hw/arm/smmuv3.c
85
@@ -XXX,XX +XXX,XX @@ epilogue:
86
return entry;
56
}
87
}
57
88
58
-uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
89
+/**
59
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
90
+ * smmuv3_notify_iova - call the notifier @n for a given
91
+ * @asid and @iova tuple.
92
+ *
93
+ * @mr: IOMMU mr region handle
94
+ * @n: notifier to be called
95
+ * @asid: address space ID or negative value if we don't care
96
+ * @iova: iova
97
+ */
98
+static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
99
+ IOMMUNotifier *n,
100
+ int asid,
101
+ dma_addr_t iova)
102
+{
103
+ SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
104
+ SMMUEventInfo event = {};
105
+ SMMUTransTableInfo *tt;
106
+ SMMUTransCfg *cfg;
107
+ IOMMUTLBEntry entry;
108
+
109
+ cfg = smmuv3_get_config(sdev, &event);
110
+ if (!cfg) {
111
+ qemu_log_mask(LOG_GUEST_ERROR,
112
+ "%s error decoding the configuration for iommu mr=%s\n",
113
+ __func__, mr->parent_obj.name);
114
+ return;
115
+ }
116
+
117
+ if (asid >= 0 && cfg->asid != asid) {
118
+ return;
119
+ }
120
+
121
+ tt = select_tt(cfg, iova);
122
+ if (!tt) {
123
+ return;
124
+ }
125
+
126
+ entry.target_as = &address_space_memory;
127
+ entry.iova = iova;
128
+ entry.addr_mask = (1 << tt->granule_sz) - 1;
129
+ entry.perm = IOMMU_NONE;
130
+
131
+ memory_region_notify_one(n, &entry);
132
+}
133
+
134
+/* invalidate an asid/iova tuple in all mr's */
135
+static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova)
136
+{
137
+ SMMUNotifierNode *node;
138
+
139
+ QLIST_FOREACH(node, &s->notifiers_list, next) {
140
+ IOMMUMemoryRegion *mr = &node->sdev->iommu;
141
+ IOMMUNotifier *n;
142
+
143
+ trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova);
144
+
145
+ IOMMU_NOTIFIER_FOREACH(n, mr) {
146
+ smmuv3_notify_iova(mr, n, asid, iova);
147
+ }
148
+ }
149
+}
150
+
151
static int smmuv3_cmdq_consume(SMMUv3State *s)
60
{
152
{
61
return float_rel_to_flags(float16_compare(x, y, fp_status));
153
SMMUState *bs = ARM_SMMU(s);
62
}
154
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
63
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
155
uint16_t asid = CMD_ASID(&cmd);
64
#define float64_three make_float64(0x4008000000000000ULL)
156
65
#define float64_one_point_five make_float64(0x3FF8000000000000ULL)
157
trace_smmuv3_cmdq_tlbi_nh_asid(asid);
66
158
+ smmu_inv_notifiers_all(&s->smmu_state);
67
-float16 HELPER(recpsf_f16)(float16 a, float16 b, void *fpstp)
159
smmu_iotlb_inv_asid(bs, asid);
68
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
160
break;
161
}
162
case SMMU_CMD_TLBI_NH_ALL:
163
case SMMU_CMD_TLBI_NSNH_ALL:
164
trace_smmuv3_cmdq_tlbi_nh();
165
+ smmu_inv_notifiers_all(&s->smmu_state);
166
smmu_iotlb_inv_all(bs);
167
break;
168
case SMMU_CMD_TLBI_NH_VAA:
169
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
170
uint16_t vmid = CMD_VMID(&cmd);
171
172
trace_smmuv3_cmdq_tlbi_nh_vaa(vmid, addr);
173
+ smmuv3_inv_notifiers_iova(bs, -1, addr);
174
smmu_iotlb_inv_all(bs);
175
break;
176
}
177
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
178
bool leaf = CMD_LEAF(&cmd);
179
180
trace_smmuv3_cmdq_tlbi_nh_va(vmid, asid, addr, leaf);
181
+ smmuv3_inv_notifiers_iova(bs, asid, addr);
182
smmu_iotlb_inv_iova(bs, asid, addr);
183
break;
184
}
185
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
186
IOMMUNotifierFlag old,
187
IOMMUNotifierFlag new)
69
{
188
{
70
float_status *fpst = fpstp;
189
+ SMMUDevice *sdev = container_of(iommu, SMMUDevice, iommu);
71
190
+ SMMUv3State *s3 = sdev->smmu;
72
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
191
+ SMMUState *s = &(s3->smmu_state);
73
return float64_muladd(a, b, float64_two, 0, fpst);
192
+ SMMUNotifierNode *node = NULL;
74
}
193
+ SMMUNotifierNode *next_node = NULL;
75
194
+
76
-float16 HELPER(rsqrtsf_f16)(float16 a, float16 b, void *fpstp)
195
+ if (new & IOMMU_NOTIFIER_MAP) {
77
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
196
+ int bus_num = pci_bus_num(sdev->bus);
78
{
197
+ PCIDevice *pcidev = pci_find_device(sdev->bus, bus_num, sdev->devfn);
79
float_status *fpst = fpstp;
198
+
80
199
+ warn_report("SMMUv3 does not support notification on MAP: "
81
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u16)(uint64_t a)
200
+ "device %s will not function properly", pcidev->name);
82
}
201
+ }
83
202
+
84
/* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
203
if (old == IOMMU_NOTIFIER_NONE) {
85
-float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
204
- warn_report("SMMUV3 does not support vhost/vfio integration yet: "
86
+uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
205
- "devices of those types will not function properly");
87
{
206
+ trace_smmuv3_notify_flag_add(iommu->parent_obj.name);
88
float_status *fpst = fpstp;
207
+ node = g_malloc0(sizeof(*node));
89
uint16_t val16, sbit;
208
+ node->sdev = sdev;
90
@@ -XXX,XX +XXX,XX @@ void HELPER(casp_be_parallel)(CPUARMState *env, uint32_t rs, uint64_t addr,
209
+ QLIST_INSERT_HEAD(&s->notifiers_list, node, next);
91
#define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))
210
+ return;
92
211
+ }
93
#define ADVSIMD_HALFOP(name) \
212
+
94
-float16 ADVSIMD_HELPER(name, h)(float16 a, float16 b, void *fpstp) \
213
+ /* update notifier node with new flags */
95
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
214
+ QLIST_FOREACH_SAFE(node, &s->notifiers_list, next, next_node) {
96
{ \
215
+ if (node->sdev == sdev) {
97
float_status *fpst = fpstp; \
216
+ if (new == IOMMU_NOTIFIER_NONE) {
98
return float16_ ## name(a, b, fpst); \
217
+ trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
99
@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(mulx)
218
+ QLIST_REMOVE(node, next);
100
ADVSIMD_TWOHALFOP(mulx)
219
+ g_free(node);
101
220
+ }
102
/* fused multiply-accumulate */
221
+ return;
103
-float16 HELPER(advsimd_muladdh)(float16 a, float16 b, float16 c, void *fpstp)
222
+ }
104
+uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
105
+ void *fpstp)
106
{
107
float_status *fpst = fpstp;
108
return float16_muladd(a, b, c, 0, fpst);
109
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
110
111
#define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0
112
113
-uint32_t HELPER(advsimd_ceq_f16)(float16 a, float16 b, void *fpstp)
114
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
115
{
116
float_status *fpst = fpstp;
117
int compare = float16_compare_quiet(a, b, fpst);
118
return ADVSIMD_CMPRES(compare == float_relation_equal);
119
}
120
121
-uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
122
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
123
{
124
float_status *fpst = fpstp;
125
int compare = float16_compare(a, b, fpst);
126
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
127
compare == float_relation_equal);
128
}
129
130
-uint32_t HELPER(advsimd_cgt_f16)(float16 a, float16 b, void *fpstp)
131
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
132
{
133
float_status *fpst = fpstp;
134
int compare = float16_compare(a, b, fpst);
135
return ADVSIMD_CMPRES(compare == float_relation_greater);
136
}
137
138
-uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
139
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
140
{
141
float_status *fpst = fpstp;
142
float16 f0 = float16_abs(a);
143
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
144
compare == float_relation_equal);
145
}
146
147
-uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
148
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
149
{
150
float_status *fpst = fpstp;
151
float16 f0 = float16_abs(a);
152
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
153
}
154
155
/* round to integral */
156
-float16 HELPER(advsimd_rinth_exact)(float16 x, void *fp_status)
157
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
158
{
159
return float16_round_to_int(x, fp_status);
160
}
161
162
-float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
163
+uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
164
{
165
int old_flags = get_float_exception_flags(fp_status), new_flags;
166
float16 ret;
167
@@ -XXX,XX +XXX,XX @@ float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
168
* setting the mode appropriately before calling the helper.
169
*/
170
171
-uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
172
+uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
173
{
174
float_status *fpst = fpstp;
175
176
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
177
return float16_to_int16(a, fpst);
178
}
179
180
-uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
181
+uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
182
{
183
float_status *fpst = fpstp;
184
185
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
186
* Square Root and Reciprocal square root
187
*/
188
189
-float16 HELPER(sqrt_f16)(float16 a, void *fpstp)
190
+uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
191
{
192
float_status *s = fpstp;
193
194
diff --git a/target/arm/helper.c b/target/arm/helper.c
195
index XXXXXXX..XXXXXXX 100644
196
--- a/target/arm/helper.c
197
+++ b/target/arm/helper.c
198
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64)
199
200
/* Integer to float and float to integer conversions */
201
202
-#define CONV_ITOF(name, fsz, sign) \
203
- float##fsz HELPER(name)(uint32_t x, void *fpstp) \
204
-{ \
205
- float_status *fpst = fpstp; \
206
- return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
207
+#define CONV_ITOF(name, ftype, fsz, sign) \
208
+ftype HELPER(name)(uint32_t x, void *fpstp) \
209
+{ \
210
+ float_status *fpst = fpstp; \
211
+ return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
212
}
213
214
-#define CONV_FTOI(name, fsz, sign, round) \
215
-uint32_t HELPER(name)(float##fsz x, void *fpstp) \
216
-{ \
217
- float_status *fpst = fpstp; \
218
- if (float##fsz##_is_any_nan(x)) { \
219
- float_raise(float_flag_invalid, fpst); \
220
- return 0; \
221
- } \
222
- return float##fsz##_to_##sign##int32##round(x, fpst); \
223
+#define CONV_FTOI(name, ftype, fsz, sign, round) \
224
+uint32_t HELPER(name)(ftype x, void *fpstp) \
225
+{ \
226
+ float_status *fpst = fpstp; \
227
+ if (float##fsz##_is_any_nan(x)) { \
228
+ float_raise(float_flag_invalid, fpst); \
229
+ return 0; \
230
+ } \
231
+ return float##fsz##_to_##sign##int32##round(x, fpst); \
232
}
233
234
-#define FLOAT_CONVS(name, p, fsz, sign) \
235
-CONV_ITOF(vfp_##name##to##p, fsz, sign) \
236
-CONV_FTOI(vfp_to##name##p, fsz, sign, ) \
237
-CONV_FTOI(vfp_to##name##z##p, fsz, sign, _round_to_zero)
238
+#define FLOAT_CONVS(name, p, ftype, fsz, sign) \
239
+ CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \
240
+ CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \
241
+ CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero)
242
243
-FLOAT_CONVS(si, h, 16, )
244
-FLOAT_CONVS(si, s, 32, )
245
-FLOAT_CONVS(si, d, 64, )
246
-FLOAT_CONVS(ui, h, 16, u)
247
-FLOAT_CONVS(ui, s, 32, u)
248
-FLOAT_CONVS(ui, d, 64, u)
249
+FLOAT_CONVS(si, h, uint32_t, 16, )
250
+FLOAT_CONVS(si, s, float32, 32, )
251
+FLOAT_CONVS(si, d, float64, 64, )
252
+FLOAT_CONVS(ui, h, uint32_t, 16, u)
253
+FLOAT_CONVS(ui, s, float32, 32, u)
254
+FLOAT_CONVS(ui, d, float64, 64, u)
255
256
#undef CONV_ITOF
257
#undef CONV_FTOI
258
@@ -XXX,XX +XXX,XX @@ static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
259
return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
260
}
261
262
-float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
263
+uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
264
{
265
return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
266
}
267
268
-float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
269
+uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
270
{
271
return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
272
}
273
274
-float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
275
+uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
276
{
277
return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
278
}
279
280
-float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
281
+uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
282
{
283
return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
284
}
285
@@ -XXX,XX +XXX,XX @@ static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
286
}
223
}
287
}
224
}
288
225
289
-uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
226
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
290
+uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst)
227
index XXXXXXX..XXXXXXX 100644
291
{
228
--- a/hw/arm/trace-events
292
return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
229
+++ b/hw/arm/trace-events
293
}
230
@@ -XXX,XX +XXX,XX @@ smmu_iotlb_cache_miss(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss,
294
231
smmu_iotlb_inv_all(void) "IOTLB invalidate all"
295
-uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
232
smmu_iotlb_inv_asid(uint16_t asid) "IOTLB invalidate asid=%d"
296
+uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst)
233
smmu_iotlb_inv_iova(uint16_t asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
297
{
234
+smmu_inv_notifiers_mr(const char *name) "iommu mr=%s"
298
return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
235
299
}
236
#hw/arm/smmuv3.c
300
237
smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
301
-uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
238
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_tlbi_nh_vaa(int vmid, uint64_t addr) "vmid =%d addr=0x%"PRIx64
302
+uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst)
239
smmuv3_cmdq_tlbi_nh(void) ""
303
{
240
smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
304
return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
241
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid %d"
305
}
242
+smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
306
243
+smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
307
-uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
244
+smmuv3_inv_notifiers_iova(const char *name, uint16_t asid, uint64_t iova) "iommu mr=%s asid=%d iova=0x%"PRIx64
308
+uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst)
245
+
309
{
310
return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
311
}
312
313
-uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
314
+uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst)
315
{
316
return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
317
}
318
319
-uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
320
+uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst)
321
{
322
return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
323
}
324
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env)
325
}
326
327
/* Half precision conversions. */
328
-float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
329
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
330
{
331
/* Squash FZ16 to 0 for the duration of conversion. In this case,
332
* it would affect flushing input denormals.
333
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
334
return r;
335
}
336
337
-float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
338
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
339
{
340
/* Squash FZ16 to 0 for the duration of conversion. In this case,
341
* it would affect flushing output denormals.
342
@@ -XXX,XX +XXX,XX @@ float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
343
return r;
344
}
345
346
-float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
347
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
348
{
349
/* Squash FZ16 to 0 for the duration of conversion. In this case,
350
* it would affect flushing input denormals.
351
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
352
return r;
353
}
354
355
-float16 HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
356
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
357
{
358
/* Squash FZ16 to 0 for the duration of conversion. In this case,
359
* it would affect flushing output denormals.
360
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
361
g_assert_not_reached();
362
}
363
364
-float16 HELPER(recpe_f16)(float16 input, void *fpstp)
365
+uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
366
{
367
float_status *fpst = fpstp;
368
float16 f16 = float16_squash_input_denormal(input, fpst);
369
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
370
return extract64(estimate, 0, 8) << 44;
371
}
372
373
-float16 HELPER(rsqrte_f16)(float16 input, void *fpstp)
374
+uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
375
{
376
float_status *s = fpstp;
377
float16 f16 = float16_squash_input_denormal(input, s);
378
--
246
--
379
2.17.1
247
2.17.1
380
248
381
249
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Cédric Le Goater <clg@kaod.org>
2
add MemTxAttrs as an argument to flatview_translate(); all its
2
3
callers now have attrs available.
3
All Aspeed SoC clocks are driven by an input source clock which can
4
4
have different frequencies: 24MHz or 25MHz, and also, on the Aspeed
5
AST2400 SoC, 48MHz. The H-PLL (CPU) clock is defined from a
6
calculation using parameters in the H-PLL Parameter register or from a
7
predefined set of frequencies if the setting is strapped by hardware
8
(Aspeed AST2400 SoC). The other clocks of the SoC are then defined
9
from the H-PLL using dividers.
10
11
We introduce the APB clock first because it is needed to drive
12
the Aspeed timer model.
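To make the calculation concrete, here is a worked example of the AST2500
formula quoted in the register description further down, with made-up
parameter values (M = 32, N = 0, P = 0, PCLK divider field = 3; none of these
are taken from a real board):

    /* CLKIN = 24MHz, AST2500 APB divider = 4, hypothetical register values */
    hpll = 24000000 * (32 + 1) / (0 + 1) / (0 + 1);   /* = 792000000 (792MHz)  */
    apb  = 792000000 / (3 + 1) / 4;                   /* =  49500000 (49.5MHz) */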
13
14
Signed-off-by: Cédric Le Goater <clg@kaod.org>
15
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
16
Message-id: 20180622075700.5923-2-clg@kaod.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
9
---
18
---
10
include/exec/memory.h | 7 ++++---
19
include/hw/misc/aspeed_scu.h | 70 +++++++++++++++++++++--
11
exec.c | 17 +++++++++--------
20
hw/misc/aspeed_scu.c | 106 +++++++++++++++++++++++++++++++++++
12
2 files changed, 13 insertions(+), 11 deletions(-)
21
2 files changed, 172 insertions(+), 4 deletions(-)
13
22
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
23
diff --git a/include/hw/misc/aspeed_scu.h b/include/hw/misc/aspeed_scu.h
15
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
25
--- a/include/hw/misc/aspeed_scu.h
17
+++ b/include/exec/memory.h
26
+++ b/include/hw/misc/aspeed_scu.h
18
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
27
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedSCUState {
28
uint32_t hw_strap1;
29
uint32_t hw_strap2;
30
uint32_t hw_prot_key;
31
+
32
+ uint32_t clkin;
33
+ uint32_t hpll;
34
+ uint32_t apb_freq;
35
} AspeedSCUState;
36
37
#define AST2400_A0_SILICON_REV 0x02000303U
38
@@ -XXX,XX +XXX,XX @@ extern bool is_supported_silicon_rev(uint32_t silicon_rev);
39
* 1. 2012/12/29 Ryan Chen Create
19
*/
40
*/
20
MemoryRegion *flatview_translate(FlatView *fv,
41
21
hwaddr addr, hwaddr *xlat,
42
-/* Hardware Strapping Register definition (for Aspeed AST2400 SOC)
22
- hwaddr *len, bool is_write);
43
+/* SCU08 Clock Selection Register
23
+ hwaddr *len, bool is_write,
44
+ *
24
+ MemTxAttrs attrs);
45
+ * 31 Enable Video Engine clock dynamic slow down
25
46
+ * 30:28 Video Engine clock slow down setting
26
static inline MemoryRegion *address_space_translate(AddressSpace *as,
47
+ * 27 2D Engine GCLK clock source selection
27
hwaddr addr, hwaddr *xlat,
48
+ * 26 2D Engine GCLK clock throttling enable
28
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
49
+ * 25:23 APB PCLK divider selection
29
MemTxAttrs attrs)
50
+ * 22:20 LPC Host LHCLK divider selection
51
+ * 19 LPC Host LHCLK clock generation/output enable control
52
+ * 18:16 MAC AHB bus clock divider selection
53
+ * 15 SD/SDIO clock running enable
54
+ * 14:12 SD/SDIO divider selection
55
+ * 11 Reserved
56
+ * 10:8 Video port output clock delay control bit
57
+ * 7 ARM CPU/AHB clock slow down enable
58
+ * 6:4 ARM CPU/AHB clock slow down setting
59
+ * 3:2 ECLK clock source selection
60
+ * 1 CPU/AHB clock slow down idle timer
61
+ * 0 CPU/AHB clock dynamic slow down enable (defined in bit[6:4])
62
+ */
63
+#define SCU_CLK_GET_PCLK_DIV(x) (((x) >> 23) & 0x7)
64
+
65
+/* SCU24 H-PLL Parameter Register (for Aspeed AST2400 SOC)
66
+ *
67
+ * 18 H-PLL parameter selection
68
+ * 0: Select H-PLL by strapping resistors
69
+ * 1: Select H-PLL by the programmed registers (SCU24[17:0])
70
+ * 17 Enable H-PLL bypass mode
71
+ * 16 Turn off H-PLL
72
+ * 10:5 H-PLL Numerator
73
+ * 4 H-PLL Output Divider
74
+ * 3:0 H-PLL Denumerator
75
+ *
76
+ * (Output frequency) = 24MHz * (2-OD) * [(Numerator+2) / (Denumerator+1)]
77
+ */
78
+
79
+#define SCU_AST2400_H_PLL_PROGRAMMED (0x1 << 18)
80
+#define SCU_AST2400_H_PLL_BYPASS_EN (0x1 << 17)
81
+#define SCU_AST2400_H_PLL_OFF (0x1 << 16)
82
+
83
+/* SCU24 H-PLL Parameter Register (for Aspeed AST2500 SOC)
84
+ *
85
+ * 21 Enable H-PLL reset
86
+ * 20 Enable H-PLL bypass mode
87
+ * 19 Turn off H-PLL
88
+ * 18:13 H-PLL Post Divider
89
+ * 12:5 H-PLL Numerator (M)
90
+ * 4:0 H-PLL Denumerator (N)
91
+ *
92
+ * (Output frequency) = CLKIN(24MHz) * [(M+1) / (N+1)] / (P+1)
93
+ *
94
+ * The default frequency is 792Mhz when CLKIN = 24MHz
95
+ */
96
+
97
+#define SCU_H_PLL_BYPASS_EN (0x1 << 20)
98
+#define SCU_H_PLL_OFF (0x1 << 19)
99
+
100
+/* SCU70 Hardware Strapping Register definition (for Aspeed AST2400 SOC)
101
*
102
* 31:29 Software defined strapping registers
103
* 28:27 DRAM size setting (for VGA driver use)
104
@@ -XXX,XX +XXX,XX @@ extern bool is_supported_silicon_rev(uint32_t silicon_rev);
105
#define SCU_AST2400_HW_STRAP_GET_CLK_SOURCE(x) (((((x) >> 23) & 0x1) << 1) \
106
| (((x) >> 18) & 0x1))
107
#define SCU_AST2400_HW_STRAP_CLK_SOURCE_MASK ((0x1 << 23) | (0x1 << 18))
108
-#define AST2400_CLK_25M_IN (0x1 << 23)
109
+#define SCU_HW_STRAP_CLK_25M_IN (0x1 << 23)
110
#define AST2400_CLK_24M_IN 0
111
#define AST2400_CLK_48M_IN 1
112
#define AST2400_CLK_25M_IN_24M_USB_CKI 2
113
#define AST2400_CLK_25M_IN_48M_USB_CKI 3
114
115
+#define SCU_HW_STRAP_CLK_48M_IN (0x1 << 18)
116
#define SCU_HW_STRAP_2ND_BOOT_WDT (0x1 << 17)
117
#define SCU_HW_STRAP_SUPER_IO_CONFIG (0x1 << 16)
118
#define SCU_HW_STRAP_VGA_CLASS_CODE (0x1 << 15)
119
@@ -XXX,XX +XXX,XX @@ extern bool is_supported_silicon_rev(uint32_t silicon_rev);
120
#define AST2400_DIS_BOOT 3
121
122
/*
123
- * Hardware strapping register definition (for Aspeed AST2500 SoC and
124
- * higher)
125
+ * SCU70 Hardware strapping register definition (for Aspeed AST2500
126
+ * SoC and higher)
127
*
128
* 31 Enable SPI Flash Strap Auto Fetch Mode
129
* 30 Enable GPIO Strap Mode
130
diff --git a/hw/misc/aspeed_scu.c b/hw/misc/aspeed_scu.c
131
index XXXXXXX..XXXXXXX 100644
132
--- a/hw/misc/aspeed_scu.c
133
+++ b/hw/misc/aspeed_scu.c
134
@@ -XXX,XX +XXX,XX @@ static uint32_t aspeed_scu_get_random(void)
135
return num;
136
}
137
138
+static void aspeed_scu_set_apb_freq(AspeedSCUState *s)
139
+{
140
+ uint32_t apb_divider;
141
+
142
+ switch (s->silicon_rev) {
143
+ case AST2400_A0_SILICON_REV:
144
+ case AST2400_A1_SILICON_REV:
145
+ apb_divider = 2;
146
+ break;
147
+ case AST2500_A0_SILICON_REV:
148
+ case AST2500_A1_SILICON_REV:
149
+ apb_divider = 4;
150
+ break;
151
+ default:
152
+ g_assert_not_reached();
153
+ }
154
+
155
+ s->apb_freq = s->hpll / (SCU_CLK_GET_PCLK_DIV(s->regs[CLK_SEL]) + 1)
156
+ / apb_divider;
157
+}
158
+
159
static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
30
{
160
{
31
return flatview_translate(address_space_to_flatview(as),
161
AspeedSCUState *s = ASPEED_SCU(opaque);
32
- addr, xlat, len, is_write);
162
@@ -XXX,XX +XXX,XX @@ static void aspeed_scu_write(void *opaque, hwaddr offset, uint64_t data,
33
+ addr, xlat, len, is_write, attrs);
163
case PROT_KEY:
164
s->regs[reg] = (data == ASPEED_SCU_PROT_KEY) ? 1 : 0;
165
return;
166
+ case CLK_SEL:
167
+ s->regs[reg] = data;
168
+ aspeed_scu_set_apb_freq(s);
169
+ break;
170
171
case FREQ_CNTR_EVAL:
172
case VGA_SCRATCH1 ... VGA_SCRATCH8:
173
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps aspeed_scu_ops = {
174
.valid.unaligned = false,
175
};
176
177
+static uint32_t aspeed_scu_get_clkin(AspeedSCUState *s)
178
+{
179
+ if (s->hw_strap1 & SCU_HW_STRAP_CLK_25M_IN) {
180
+ return 25000000;
181
+ } else if (s->hw_strap1 & SCU_HW_STRAP_CLK_48M_IN) {
182
+ return 48000000;
183
+ } else {
184
+ return 24000000;
185
+ }
186
+}
187
+
188
+/*
189
+ * Strapped frequencies for the AST2400 in MHz. They depend on the
190
+ * clkin frequency.
191
+ */
192
+static const uint32_t hpll_ast2400_freqs[][4] = {
193
+ { 384, 360, 336, 408 }, /* 24MHz or 48MHz */
194
+ { 400, 375, 350, 425 }, /* 25MHz */
195
+};
196
+
197
+static uint32_t aspeed_scu_calc_hpll_ast2400(AspeedSCUState *s)
198
+{
199
+ uint32_t hpll_reg = s->regs[HPLL_PARAM];
200
+ uint8_t freq_select;
201
+ bool clk_25m_in;
202
+
203
+ if (hpll_reg & SCU_AST2400_H_PLL_OFF) {
204
+ return 0;
205
+ }
206
+
207
+ if (hpll_reg & SCU_AST2400_H_PLL_PROGRAMMED) {
208
+ uint32_t multiplier = 1;
209
+
210
+ if (!(hpll_reg & SCU_AST2400_H_PLL_BYPASS_EN)) {
211
+ uint32_t n = (hpll_reg >> 5) & 0x3f;
212
+ uint32_t od = (hpll_reg >> 4) & 0x1;
213
+ uint32_t d = hpll_reg & 0xf;
214
+
215
+ multiplier = (2 - od) * ((n + 2) / (d + 1));
216
+ }
217
+
218
+ return s->clkin * multiplier;
219
+ }
220
+
221
+ /* HW strapping */
222
+ clk_25m_in = !!(s->hw_strap1 & SCU_HW_STRAP_CLK_25M_IN);
223
+ freq_select = SCU_AST2400_HW_STRAP_GET_H_PLL_CLK(s->hw_strap1);
224
+
225
+ return hpll_ast2400_freqs[clk_25m_in][freq_select] * 1000000;
226
+}
227
+
228
+static uint32_t aspeed_scu_calc_hpll_ast2500(AspeedSCUState *s)
229
+{
230
+ uint32_t hpll_reg = s->regs[HPLL_PARAM];
231
+ uint32_t multiplier = 1;
232
+
233
+ if (hpll_reg & SCU_H_PLL_OFF) {
234
+ return 0;
235
+ }
236
+
237
+ if (!(hpll_reg & SCU_H_PLL_BYPASS_EN)) {
238
+ uint32_t p = (hpll_reg >> 13) & 0x3f;
239
+ uint32_t m = (hpll_reg >> 5) & 0xff;
240
+ uint32_t n = hpll_reg & 0x1f;
241
+
242
+ multiplier = ((m + 1) / (n + 1)) / (p + 1);
243
+ }
244
+
245
+ return s->clkin * multiplier;
246
+}
247
+
248
static void aspeed_scu_reset(DeviceState *dev)
249
{
250
AspeedSCUState *s = ASPEED_SCU(dev);
251
const uint32_t *reset;
252
+ uint32_t (*calc_hpll)(AspeedSCUState *s);
253
254
switch (s->silicon_rev) {
255
case AST2400_A0_SILICON_REV:
256
case AST2400_A1_SILICON_REV:
257
reset = ast2400_a0_resets;
258
+ calc_hpll = aspeed_scu_calc_hpll_ast2400;
259
break;
260
case AST2500_A0_SILICON_REV:
261
case AST2500_A1_SILICON_REV:
262
reset = ast2500_a1_resets;
263
+ calc_hpll = aspeed_scu_calc_hpll_ast2500;
264
break;
265
default:
266
g_assert_not_reached();
267
@@ -XXX,XX +XXX,XX @@ static void aspeed_scu_reset(DeviceState *dev)
268
s->regs[HW_STRAP1] = s->hw_strap1;
269
s->regs[HW_STRAP2] = s->hw_strap2;
270
s->regs[PROT_KEY] = s->hw_prot_key;
271
+
272
+ /*
273
+ * All registers are set. Now compute the frequencies of the main clocks
274
+ */
275
+ s->clkin = aspeed_scu_get_clkin(s);
276
+ s->hpll = calc_hpll(s);
277
+ aspeed_scu_set_apb_freq(s);
34
}
278
}
35
279
36
/* address_space_access_valid: check for validity of accessing an address
280
static uint32_t aspeed_silicon_revs[] = {
37
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
38
rcu_read_lock();
39
fv = address_space_to_flatview(as);
40
l = len;
41
- mr = flatview_translate(fv, addr, &addr1, &l, false);
42
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
43
if (len == l && memory_access_is_direct(mr, false)) {
44
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
45
memcpy(buf, ptr, len);
46
diff --git a/exec.c b/exec.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/exec.c
49
+++ b/exec.c
50
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
51
52
/* Called from RCU critical section */
53
MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
54
- hwaddr *plen, bool is_write)
55
+ hwaddr *plen, bool is_write,
56
+ MemTxAttrs attrs)
57
{
58
MemoryRegion *mr;
59
MemoryRegionSection section;
60
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
61
}
62
63
l = len;
64
- mr = flatview_translate(fv, addr, &addr1, &l, true);
65
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
66
}
67
68
return result;
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
70
MemTxResult result = MEMTX_OK;
71
72
l = len;
73
- mr = flatview_translate(fv, addr, &addr1, &l, true);
74
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
75
result = flatview_write_continue(fv, addr, attrs, buf, len,
76
addr1, l, mr);
77
78
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
79
}
80
81
l = len;
82
- mr = flatview_translate(fv, addr, &addr1, &l, false);
83
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
84
}
85
86
return result;
87
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
88
MemoryRegion *mr;
89
90
l = len;
91
- mr = flatview_translate(fv, addr, &addr1, &l, false);
92
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
93
return flatview_read_continue(fv, addr, attrs, buf, len,
94
addr1, l, mr);
95
}
96
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
97
98
while (len > 0) {
99
l = len;
100
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
101
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
102
if (!memory_access_is_direct(mr, is_write)) {
103
l = memory_access_size(mr, l, addr);
104
if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
105
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
106
107
len = target_len;
108
this_mr = flatview_translate(fv, addr, &xlat,
109
- &len, is_write);
110
+ &len, is_write, attrs);
111
if (this_mr != mr || xlat != base + done) {
112
return done;
113
}
114
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
115
l = len;
116
rcu_read_lock();
117
fv = address_space_to_flatview(as);
118
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
119
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
120
121
if (!memory_access_is_direct(mr, is_write)) {
122
if (atomic_xchg(&bounce.in_use, true)) {
123
--
281
--
124
2.17.1
282
2.17.1
125
283
126
284
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Cédric Le Goater <clg@kaod.org>
2
add MemTxAttrs as an argument to address_space_map().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
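For instance, a caller with no particular attributes to convey might now look
like this (illustrative sketch only, error handling trimmed):

    hwaddr maplen = len;
    void *buf = address_space_map(as, addr, &maplen, false /* is_write */,
                                  MEMTXATTRS_UNSPECIFIED);
    if (buf) {
        /* ... read up to maplen bytes from buf ... */
        address_space_unmap(as, buf, maplen, false, 0);
    }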
5
2
3
The System Control Unit should be initialized first as it drives all
4
the configuration of the SoC and other device models.
5
6
Signed-off-by: Cédric Le Goater <clg@kaod.org>
7
Reviewed-by: Joel Stanley <joel@jms.id.au>
8
Acked-by: Andrew Jeffery <andrew@aj.id.au>
9
Message-id: 20180622075700.5923-3-clg@kaod.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
10
---
11
---
11
include/exec/memory.h | 3 ++-
12
hw/arm/aspeed_soc.c | 40 ++++++++++++++++++++--------------------
12
include/sysemu/dma.h | 3 ++-
13
1 file changed, 20 insertions(+), 20 deletions(-)
13
exec.c | 6 ++++--
14
target/ppc/mmu-hash64.c | 3 ++-
15
4 files changed, 10 insertions(+), 5 deletions(-)
16
14
17
diff --git a/include/exec/memory.h b/include/exec/memory.h
15
diff --git a/hw/arm/aspeed_soc.c b/hw/arm/aspeed_soc.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/memory.h
17
--- a/hw/arm/aspeed_soc.c
20
+++ b/include/exec/memory.h
18
+++ b/hw/arm/aspeed_soc.c
21
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
19
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_init(Object *obj)
22
* @addr: address within that address space
20
object_initialize(&s->cpu, sizeof(s->cpu), sc->info->cpu_type);
23
* @plen: pointer to length of buffer; updated on return
21
object_property_add_child(obj, "cpu", OBJECT(&s->cpu), NULL);
24
* @is_write: indicates the transfer direction
22
25
+ * @attrs: memory attributes
23
- object_initialize(&s->vic, sizeof(s->vic), TYPE_ASPEED_VIC);
26
*/
24
- object_property_add_child(obj, "vic", OBJECT(&s->vic), NULL);
27
void *address_space_map(AddressSpace *as, hwaddr addr,
25
- qdev_set_parent_bus(DEVICE(&s->vic), sysbus_get_default());
28
- hwaddr *plen, bool is_write);
26
-
29
+ hwaddr *plen, bool is_write, MemTxAttrs attrs);
27
- object_initialize(&s->timerctrl, sizeof(s->timerctrl), TYPE_ASPEED_TIMER);
30
28
- object_property_add_child(obj, "timerctrl", OBJECT(&s->timerctrl), NULL);
31
/* address_space_unmap: Unmaps a memory region previously mapped by address_space_map()
29
- qdev_set_parent_bus(DEVICE(&s->timerctrl), sysbus_get_default());
32
*
30
-
33
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
31
- object_initialize(&s->i2c, sizeof(s->i2c), TYPE_ASPEED_I2C);
34
index XXXXXXX..XXXXXXX 100644
32
- object_property_add_child(obj, "i2c", OBJECT(&s->i2c), NULL);
35
--- a/include/sysemu/dma.h
33
- qdev_set_parent_bus(DEVICE(&s->i2c), sysbus_get_default());
36
+++ b/include/sysemu/dma.h
34
-
37
@@ -XXX,XX +XXX,XX @@ static inline void *dma_memory_map(AddressSpace *as,
35
object_initialize(&s->scu, sizeof(s->scu), TYPE_ASPEED_SCU);
38
hwaddr xlen = *len;
36
object_property_add_child(obj, "scu", OBJECT(&s->scu), NULL);
39
void *p;
37
qdev_set_parent_bus(DEVICE(&s->scu), sysbus_get_default());
40
38
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_init(Object *obj)
41
- p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE);
39
object_property_add_alias(obj, "hw-prot-key", OBJECT(&s->scu),
42
+ p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE,
40
"hw-prot-key", &error_abort);
43
+ MEMTXATTRS_UNSPECIFIED);
41
44
*len = xlen;
42
+ object_initialize(&s->vic, sizeof(s->vic), TYPE_ASPEED_VIC);
45
return p;
43
+ object_property_add_child(obj, "vic", OBJECT(&s->vic), NULL);
46
}
44
+ qdev_set_parent_bus(DEVICE(&s->vic), sysbus_get_default());
47
diff --git a/exec.c b/exec.c
45
+
48
index XXXXXXX..XXXXXXX 100644
46
+ object_initialize(&s->timerctrl, sizeof(s->timerctrl), TYPE_ASPEED_TIMER);
49
--- a/exec.c
47
+ object_property_add_child(obj, "timerctrl", OBJECT(&s->timerctrl), NULL);
50
+++ b/exec.c
48
+ qdev_set_parent_bus(DEVICE(&s->timerctrl), sysbus_get_default());
51
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
49
+
52
void *address_space_map(AddressSpace *as,
50
+ object_initialize(&s->i2c, sizeof(s->i2c), TYPE_ASPEED_I2C);
53
hwaddr addr,
51
+ object_property_add_child(obj, "i2c", OBJECT(&s->i2c), NULL);
54
hwaddr *plen,
52
+ qdev_set_parent_bus(DEVICE(&s->i2c), sysbus_get_default());
55
- bool is_write)
53
+
56
+ bool is_write,
54
object_initialize(&s->fmc, sizeof(s->fmc), sc->info->fmc_typename);
57
+ MemTxAttrs attrs)
55
object_property_add_child(obj, "fmc", OBJECT(&s->fmc), NULL);
58
{
56
qdev_set_parent_bus(DEVICE(&s->fmc), sysbus_get_default());
59
hwaddr len = *plen;
57
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_realize(DeviceState *dev, Error **errp)
60
hwaddr l, xlat;
58
memory_region_add_subregion(get_system_memory(), ASPEED_SOC_SRAM_BASE,
61
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
59
&s->sram);
62
hwaddr *plen,
60
63
int is_write)
61
+ /* SCU */
64
{
62
+ object_property_set_bool(OBJECT(&s->scu), true, "realized", &err);
65
- return address_space_map(&address_space_memory, addr, plen, is_write);
63
+ if (err) {
66
+ return address_space_map(&address_space_memory, addr, plen, is_write,
64
+ error_propagate(errp, err);
67
+ MEMTXATTRS_UNSPECIFIED);
65
+ return;
68
}
66
+ }
69
67
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->scu), 0, ASPEED_SOC_SCU_BASE);
70
void cpu_physical_memory_unmap(void *buffer, hwaddr len,
68
+
71
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
69
/* VIC */
72
index XXXXXXX..XXXXXXX 100644
70
object_property_set_bool(OBJECT(&s->vic), true, "realized", &err);
73
--- a/target/ppc/mmu-hash64.c
71
if (err) {
74
+++ b/target/ppc/mmu-hash64.c
72
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_realize(DeviceState *dev, Error **errp)
75
@@ -XXX,XX +XXX,XX @@ const ppc_hash_pte64_t *ppc_hash64_map_hptes(PowerPCCPU *cpu,
73
sysbus_connect_irq(SYS_BUS_DEVICE(&s->timerctrl), i, irq);
76
return NULL;
77
}
74
}
78
75
79
- hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false);
76
- /* SCU */
80
+ hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false,
77
- object_property_set_bool(OBJECT(&s->scu), true, "realized", &err);
81
+ MEMTXATTRS_UNSPECIFIED);
78
- if (err) {
82
if (plen < (n * HASH_PTE_SIZE_64)) {
79
- error_propagate(errp, err);
83
hw_error("%s: Unable to map all requested HPTEs\n", __func__);
80
- return;
84
}
81
- }
82
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->scu), 0, ASPEED_SOC_SCU_BASE);
83
-
84
/* UART - attach an 8250 to the IO space as our UART5 */
85
if (serial_hd(0)) {
86
qemu_irq uart5 = qdev_get_gpio_in(DEVICE(&s->vic), uart_irqs[4]);
85
--
87
--
86
2.17.1
88
2.17.1
87
89
88
90
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Cédric Le Goater <clg@kaod.org>
2
add MemTxAttrs as an argument to flatview_access_valid().
3
Its callers now all have an attrs value to hand, so we can
4
correct our earlier temporary use of MEMTXATTRS_UNSPECIFIED.
5
2
3
The timer controller can be driven by either an external 1MHz clock or
4
by the APB clock. Today, the model makes the assumption that the APB
5
frequency is always set to 24MHz, but this is incorrect.
6
7
The AST2400 SoC on the palmetto machines uses a 48MHz input clock
8
source and the APB can be set to 48MHz. The consequence is a general
9
system slowdown. The QEMU machines using the AST2500 SoC do not seem to be
10
impacted today because the APB frequency is still set to 24MHz.
11
12
We fix the timer frequency for all SoCs by linking the Timer model to
13
the SCU model. The APB frequency driving the timers is now the one
14
configured for the SoC.
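As an illustration of the slowdown (numbers invented for the example): a guest
that reads a 48MHz APB from the SCU and wants a 1ms tick programs a reload
value of 48000; a model counting at a hardwired 24MHz then needs

    48000 ticks / 24MHz = 2ms

to expire it, so every timeout runs at half speed.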
15
16
Signed-off-by: Cédric Le Goater <clg@kaod.org>
17
Reviewed-by: Joel Stanley <joel@jms.id.au>
18
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
19
Message-id: 20180622075700.5923-4-clg@kaod.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-10-peter.maydell@linaro.org
10
---
21
---
11
exec.c | 12 +++++-------
22
include/hw/timer/aspeed_timer.h | 4 ++++
12
1 file changed, 5 insertions(+), 7 deletions(-)
23
hw/arm/aspeed_soc.c | 2 ++
24
hw/timer/aspeed_timer.c | 19 +++++++++++++++----
25
3 files changed, 21 insertions(+), 4 deletions(-)
13
26
14
diff --git a/exec.c b/exec.c
27
diff --git a/include/hw/timer/aspeed_timer.h b/include/hw/timer/aspeed_timer.h
15
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
29
--- a/include/hw/timer/aspeed_timer.h
17
+++ b/exec.c
30
+++ b/include/hw/timer/aspeed_timer.h
18
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
31
@@ -XXX,XX +XXX,XX @@
19
static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
32
20
const uint8_t *buf, int len);
33
#include "qemu/timer.h"
21
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
34
22
- bool is_write);
35
+typedef struct AspeedSCUState AspeedSCUState;
23
+ bool is_write, MemTxAttrs attrs);
36
+
24
37
#define ASPEED_TIMER(obj) \
25
static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
38
OBJECT_CHECK(AspeedTimerCtrlState, (obj), TYPE_ASPEED_TIMER);
26
unsigned len, MemTxAttrs attrs)
39
#define TYPE_ASPEED_TIMER "aspeed.timer"
27
@@ -XXX,XX +XXX,XX @@ static bool subpage_accepts(void *opaque, hwaddr addr,
40
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedTimerCtrlState {
28
#endif
41
uint32_t ctrl;
29
42
uint32_t ctrl2;
30
return flatview_access_valid(subpage->fv, addr + subpage->base,
43
AspeedTimer timers[ASPEED_TIMER_NR_TIMERS];
31
- len, is_write);
44
+
32
+ len, is_write, attrs);
45
+ AspeedSCUState *scu;
46
} AspeedTimerCtrlState;
47
48
#endif /* ASPEED_TIMER_H */
49
diff --git a/hw/arm/aspeed_soc.c b/hw/arm/aspeed_soc.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/hw/arm/aspeed_soc.c
52
+++ b/hw/arm/aspeed_soc.c
53
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_init(Object *obj)
54
55
object_initialize(&s->timerctrl, sizeof(s->timerctrl), TYPE_ASPEED_TIMER);
56
object_property_add_child(obj, "timerctrl", OBJECT(&s->timerctrl), NULL);
57
+ object_property_add_const_link(OBJECT(&s->timerctrl), "scu",
58
+ OBJECT(&s->scu), &error_abort);
59
qdev_set_parent_bus(DEVICE(&s->timerctrl), sysbus_get_default());
60
61
object_initialize(&s->i2c, sizeof(s->i2c), TYPE_ASPEED_I2C);
62
diff --git a/hw/timer/aspeed_timer.c b/hw/timer/aspeed_timer.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/hw/timer/aspeed_timer.c
65
+++ b/hw/timer/aspeed_timer.c
66
@@ -XXX,XX +XXX,XX @@
67
*/
68
69
#include "qemu/osdep.h"
70
+#include "qapi/error.h"
71
#include "hw/sysbus.h"
72
#include "hw/timer/aspeed_timer.h"
73
+#include "hw/misc/aspeed_scu.h"
74
#include "qemu-common.h"
75
#include "qemu/bitops.h"
76
#include "qemu/timer.h"
77
@@ -XXX,XX +XXX,XX @@
78
#define TIMER_CLOCK_USE_EXT true
79
#define TIMER_CLOCK_EXT_HZ 1000000
80
#define TIMER_CLOCK_USE_APB false
81
-#define TIMER_CLOCK_APB_HZ 24000000
82
83
#define TIMER_REG_STATUS 0
84
#define TIMER_REG_RELOAD 1
85
@@ -XXX,XX +XXX,XX @@ static inline bool timer_external_clock(AspeedTimer *t)
86
return timer_ctrl_status(t, op_external_clock);
33
}
87
}
34
88
35
static const MemoryRegionOps subpage_ops = {
89
-static uint32_t clock_rates[] = { TIMER_CLOCK_APB_HZ, TIMER_CLOCK_EXT_HZ };
36
@@ -XXX,XX +XXX,XX @@ static void cpu_notify_map_clients(void)
90
-
91
static inline uint32_t calculate_rate(struct AspeedTimer *t)
92
{
93
- return clock_rates[timer_external_clock(t)];
94
+ AspeedTimerCtrlState *s = timer_to_ctrl(t);
95
+
96
+ return timer_external_clock(t) ? TIMER_CLOCK_EXT_HZ : s->scu->apb_freq;
37
}
97
}
38
98
39
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
99
static inline uint32_t calculate_ticks(struct AspeedTimer *t, uint64_t now_ns)
40
- bool is_write)
100
@@ -XXX,XX +XXX,XX @@ static void aspeed_timer_realize(DeviceState *dev, Error **errp)
41
+ bool is_write, MemTxAttrs attrs)
101
int i;
42
{
102
SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
43
MemoryRegion *mr;
103
AspeedTimerCtrlState *s = ASPEED_TIMER(dev);
44
hwaddr l, xlat;
104
+ Object *obj;
45
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
105
+ Error *err = NULL;
46
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
106
+
47
if (!memory_access_is_direct(mr, is_write)) {
107
+ obj = object_property_get_link(OBJECT(dev), "scu", &err);
48
l = memory_access_size(mr, l, addr);
108
+ if (!obj) {
49
- /* When our callers all have attrs we'll pass them through here */
109
+ error_propagate(errp, err);
50
- if (!memory_region_access_valid(mr, xlat, l, is_write,
110
+ error_prepend(errp, "required link 'scu' not found: ");
51
- MEMTXATTRS_UNSPECIFIED)) {
111
+ return;
52
+ if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
112
+ }
53
return false;
113
+ s->scu = ASPEED_SCU(obj);
54
}
114
55
}
115
for (i = 0; i < ASPEED_TIMER_NR_TIMERS; i++) {
56
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
116
aspeed_init_one_timer(s, i);
57
58
rcu_read_lock();
59
fv = address_space_to_flatview(as);
60
- result = flatview_access_valid(fv, addr, len, is_write);
61
+ result = flatview_access_valid(fv, addr, len, is_write, attrs);
62
rcu_read_unlock();
63
return result;
64
}
65
--
117
--
66
2.17.1
118
2.17.1
67
119
68
120
diff view generated by jsdifflib