First arm pullreq for the 2.12 cycle, with all the
things that queued up during the release phase.
2.11 isn't quite released yet, but might as well put
the pullreq on the mailing list :-)

thanks
-- PMM

The following changes since commit 0a0dc59d27527b78a195c2d838d28b7b49e5a639:

  Update version for v2.11.0 release (2017-12-13 14:31:09 +0000)

are available in the git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20171213

for you to fetch changes up to d3c348b6e3af3598bfcb755d59f8f4de80a2228a:

  xilinx_spips: Use memset instead of a for loop to zero registers (2017-12-13 17:59:26 +0000)

----------------------------------------------------------------
target-arm queue:
 * xilinx_spips: set reset values correctly
 * MAINTAINERS: fix an email address
 * hw/display/tc6393xb: limit irq handler index to TC6393XB_GPIOS
 * nvic: Make systick banked for v8M
 * refactor get_phys_addr() so we can return the right format PAR
   for ATS operations
 * implement v8M TT instruction
 * fix some minor v8M bugs
 * Implement reset for GICv3 ITS
 * xlnx-zcu102: Add support for the ZynqMP QSPI

----------------------------------------------------------------
Alistair Francis (3):
      xilinx_spips: Update the QSPI Mod ID reset value
      xilinx_spips: Set all of the reset values
      xilinx_spips: Use memset instead of a for loop to zero registers

Edgar E. Iglesias (1):
      target/arm: Extend PAR format determination

Eric Auger (4):
      hw/intc/arm_gicv3_its: Don't call post_load on reset
      hw/intc/arm_gicv3_its: Implement a minimalist reset
      linux-headers: update to 4.15-rc1
      hw/intc/arm_gicv3_its: Implement full reset

Francisco Iglesias (13):
      m25p80: Add support for continuous read out of RDSR and READ_FSR
      m25p80: Add support for SST READ ID 0x90/0xAB commands
      m25p80: Add support for BRRD/BRWR and BULK_ERASE (0x60)
      m25p80: Add support for n25q512a11 and n25q512a13
      xilinx_spips: Move FlashCMD, XilinxQSPIPS and XilinxSPIPSClass
      xilinx_spips: Update striping to be big-endian bit order
      xilinx_spips: Add support for RX discard and RX drain
      xilinx_spips: Make tx/rx_data_bytes more generic and reusable
      xilinx_spips: Add support for zero pumping
      xilinx_spips: Add support for 4 byte addresses in the LQSPI
      xilinx_spips: Don't set TX FIFO UNDERFLOW at cmd done
      xilinx_spips: Add support for the ZynqMP Generic QSPI
      xlnx-zcu102: Add support for the ZynqMP QSPI

Peter Maydell (20):
      target/arm: Handle SPSEL and current stack being out of sync in MSP/PSP reads
      target/arm: Allow explicit writes to CONTROL.SPSEL in Handler mode
      target/arm: Add missing M profile case to regime_is_user()
      target/arm: Split M profile MNegPri mmu index into user and priv
      target/arm: Create new arm_v7m_mmu_idx_for_secstate_and_priv()
      target/arm: Factor MPU lookup code out of get_phys_addr_pmsav8()
      target/arm: Implement TT instruction
      target/arm: Provide fault type enum and FSR conversion functions
      target/arm: Remove fsr argument from arm_ld*_ptw()
      target/arm: Convert get_phys_addr_v5() to not return FSC values
      target/arm: Convert get_phys_addr_v6() to not return FSC values
      target/arm: Convert get_phys_addr_lpae() to not return FSC values
      target/arm: Convert get_phys_addr_pmsav5() to not return FSC values
      target/arm: Convert get_phys_addr_pmsav7() to not return FSC values
      target/arm: Convert get_phys_addr_pmsav8() to not return FSC values
      target/arm: Use ARMMMUFaultInfo in deliver_fault()
      target/arm: Ignore fsr from get_phys_addr() in do_ats_write()
      target/arm: Remove fsr argument from get_phys_addr() and arm_tlb_fill()
      nvic: Make nvic_sysreg_ns_ops work with any MemoryRegion
      nvic: Make systick banked

Prasad J Pandit (1):
      hw/display/tc6393xb: limit irq handler index to TC6393XB_GPIOS

Zhaoshenglong (1):
      MAINTAINERS: replace the unavailable email address

 include/hw/arm/xlnx-zynqmp.h | 5 +
 include/hw/intc/armv7m_nvic.h | 4 +-
 include/hw/ssi/xilinx_spips.h | 74 +-
 include/standard-headers/asm-s390/virtio-ccw.h | 1 +
 include/standard-headers/asm-x86/hyperv.h | 394 +--------
 include/standard-headers/linux/input-event-codes.h | 2 +
 include/standard-headers/linux/input.h | 1 +
 include/standard-headers/linux/pci_regs.h | 45 +-
 linux-headers/asm-arm/kvm.h | 8 +
 linux-headers/asm-arm/kvm_para.h | 1 +
 linux-headers/asm-arm/unistd.h | 2 +
 linux-headers/asm-arm64/kvm.h | 8 +
 linux-headers/asm-arm64/unistd.h | 1 +
 linux-headers/asm-powerpc/epapr_hcalls.h | 1 +
 linux-headers/asm-powerpc/kvm.h | 1 +
 linux-headers/asm-powerpc/kvm_para.h | 1 +
 linux-headers/asm-powerpc/unistd.h | 1 +
 linux-headers/asm-s390/kvm.h | 1 +
 linux-headers/asm-s390/kvm_para.h | 1 +
 linux-headers/asm-s390/unistd.h | 4 +-
 linux-headers/asm-x86/kvm.h | 1 +
 linux-headers/asm-x86/kvm_para.h | 2 +-
 linux-headers/asm-x86/unistd.h | 1 +
 linux-headers/linux/kvm.h | 2 +
 linux-headers/linux/kvm_para.h | 1 +
 linux-headers/linux/psci.h | 1 +
 linux-headers/linux/userfaultfd.h | 1 +
 linux-headers/linux/vfio.h | 1 +
 linux-headers/linux/vfio_ccw.h | 1 +
 linux-headers/linux/vhost.h | 1 +
 target/arm/cpu.h | 73 +-
 target/arm/helper.h | 2 +
 target/arm/internals.h | 193 ++++-
 hw/arm/xlnx-zcu102.c | 23 +
 hw/arm/xlnx-zynqmp.c | 26 +
 hw/block/m25p80.c | 80 +-
 hw/display/tc6393xb.c | 1 +
 hw/intc/arm_gicv3_its_common.c | 2 -
 hw/intc/arm_gicv3_its_kvm.c | 53 +-
 hw/intc/armv7m_nvic.c | 100 ++-
 hw/ssi/xilinx_spips.c | 928 +++++++++++++++++----
 target/arm/helper.c | 489 +++++++----
 target/arm/op_helper.c | 82 +-
 target/arm/translate.c | 37 +-
 MAINTAINERS | 2 +-
 default-configs/arm-softmmu.mak | 2 +-
 46 files changed, 1833 insertions(+), 828 deletions(-)

Nothing much exciting here, but it's 37 patches worth...

thanks
-- PMM

The following changes since commit e64a62df378a746c0b257105959613c9f8122e59:

  Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-040320-1' into staging (2020-03-05 12:13:51 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200305

for you to fetch changes up to 597d61a3b1f94c53a3aaa77671697c0c5f797dbf:

  target/arm: Clean address for DC ZVA (2020-03-05 16:09:21 +0000)

----------------------------------------------------------------
 * versal: Implement ADMA
 * Implement (trivially) ARMv8.2-TTCNP
 * hw/arm/smmu-common: a fix to smmu_find_smmu_pcibus
 * Remove unnecessary endianness-handling on some boards
 * Avoid minor memory leaks from timer_new in some devices
 * Honour more of the HCR_EL2 trap bits
 * Complain rather than ignoring bad command line options for cubieboard
 * Honour TBI for DC ZVA and exception return

----------------------------------------------------------------
Edgar E. Iglesias (2):
      hw/arm: versal: Add support for the LPD ADMAs
      hw/arm: versal: Generate xlnx-versal-virt zdma FDT nodes

Eric Auger (1):
      hw/arm/smmu-common: a fix to smmu_find_smmu_pcibus

Niek Linnenbank (4):
      hw/arm/cubieboard: use ARM Cortex-A8 as the default CPU in machine definition
      hw/arm/cubieboard: restrict allowed CPU type to ARM Cortex-A8
      hw/arm/cubieboard: restrict allowed RAM size to 512MiB and 1GiB
      hw/arm/cubieboard: report error when using unsupported -bios argument

Pan Nengyuan (4):
      hw/arm/pxa2xx: move timer_new from init() into realize() to avoid memleaks
      hw/arm/spitz: move timer_new from init() into realize() to avoid memleaks
      hw/arm/strongarm: move timer_new from init() into realize() to avoid memleaks
      hw/timer/cadence_ttc: move timer_new from init() into realize() to avoid memleaks

Peter Maydell (1):
      target/arm: Implement (trivially) ARMv8.2-TTCNP

Philippe Mathieu-Daudé (6):
      hw/arm/smmu-common: Simplify smmu_find_smmu_pcibus() logic
      hw/arm/gumstix: Simplify since the machines are little-endian only
      hw/arm/mainstone: Simplify since the machines are little-endian only
      hw/arm/omap_sx1: Simplify since the machines are little-endian only
      hw/arm/z2: Simplify since the machines are little-endian only
      hw/arm/musicpal: Simplify since the machines are little-endian only

Richard Henderson (19):
      target/arm: Improve masking of HCR/HCR2 RES0 bits
      target/arm: Add HCR_EL2 bit definitions from ARMv8.6
      target/arm: Disable has_el2 and has_el3 for user-only
      target/arm: Remove EL2 and EL3 setup from user-only
      target/arm: Improve masking in arm_hcr_el2_eff
      target/arm: Honor the HCR_EL2.{TVM,TRVM} bits
      target/arm: Honor the HCR_EL2.TSW bit
      target/arm: Honor the HCR_EL2.TACR bit
      target/arm: Honor the HCR_EL2.TPCP bit
      target/arm: Honor the HCR_EL2.TPU bit
      target/arm: Honor the HCR_EL2.TTLB bit
      tests/tcg/aarch64: Add newline in pauth-1 printf
      target/arm: Replicate TBI/TBID bits for single range regimes
      target/arm: Optimize cpu_mmu_index
      target/arm: Introduce core_to_aa64_mmu_idx
      target/arm: Apply TBI to ESR_ELx in helper_exception_return
      target/arm: Move helper_dc_zva to helper-a64.c
      target/arm: Use DEF_HELPER_FLAGS for helper_dc_zva
      target/arm: Clean address for DC ZVA

 include/hw/arm/xlnx-versal.h | 6 +
 target/arm/cpu.h | 30 ++--
 target/arm/helper-a64.h | 1 +
 target/arm/helper.h | 1 -
 target/arm/internals.h | 6 +
 hw/arm/cubieboard.c | 29 +++-
 hw/arm/gumstix.c | 16 +-
 hw/arm/mainstone.c | 8 +-
 hw/arm/musicpal.c | 10 --
 hw/arm/omap_sx1.c | 11 +-
 hw/arm/pxa2xx.c | 17 +-
 hw/arm/smmu-common.c | 20 +--
 hw/arm/spitz.c | 8 +-
 hw/arm/strongarm.c | 18 ++-
 hw/arm/xlnx-versal-virt.c | 28 ++++
 hw/arm/xlnx-versal.c | 24 +++
 hw/arm/z2.c | 8 +-
 hw/timer/cadence_ttc.c | 18 ++-
 target/arm/cpu.c | 13 +-
 target/arm/cpu64.c | 2 +
 target/arm/helper-a64.c | 114 ++++++++++++-
 target/arm/helper.c | 373 ++++++++++++++++++++++++++++++-------------
 target/arm/op_helper.c | 93 -----------
 target/arm/translate-a64.c | 4 +-
 tests/tcg/aarch64/pauth-1.c | 2 +-
 25 files changed, 551 insertions(+), 309 deletions(-)

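The commands below are not part of either original mail; they are just the generic way to fetch and inspect a pull-request tag like the two quoted above, shown here for the 2020 tag (substitute the 2017 URL and tag name for the older series):

  # Fetch the pull-request tag from the maintainer's tree; git points FETCH_HEAD at it.
  git fetch https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200305

  # List the commits the pull request adds on top of the base commit named in the cover letter.
  git log --oneline e64a62df378a746c0b257105959613c9f8122e59..FETCH_HEAD
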
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Make tx/rx_data_bytes more generic so they can be reused (when adding
support for the Zynqmp Generic QSPI).

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20171126231634.9531-9-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 64 +++++++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 27 deletions(-)

From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Add support for the Versal LPD ADMAs.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-versal.h | 6 ++++++
 hw/arm/xlnx-versal.c | 24 ++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
15
diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/ssi/xilinx_spips.c
17
--- a/include/hw/arm/xlnx-versal.h
18
+++ b/hw/ssi/xilinx_spips.c
18
+++ b/include/hw/arm/xlnx-versal.h
19
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@
20
/* config register */
20
#define XLNX_VERSAL_NR_ACPUS 2
21
#define R_CONFIG (0x00 / 4)
21
#define XLNX_VERSAL_NR_UARTS 2
22
#define IFMODE (1U << 31)
22
#define XLNX_VERSAL_NR_GEMS 2
23
-#define ENDIAN (1 << 26)
23
+#define XLNX_VERSAL_NR_ADMAS 8
24
+#define R_CONFIG_ENDIAN (1 << 26)
24
#define XLNX_VERSAL_NR_IRQS 192
25
#define MODEFAIL_GEN_EN (1 << 17)
25
26
#define MAN_START_COM (1 << 16)
26
typedef struct Versal {
27
#define MAN_START_EN (1 << 15)
27
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
28
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
28
struct {
29
SysBusDevice *uart[XLNX_VERSAL_NR_UARTS];
30
SysBusDevice *gem[XLNX_VERSAL_NR_GEMS];
31
+ SysBusDevice *adma[XLNX_VERSAL_NR_ADMAS];
32
} iou;
33
} lpd;
34
35
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
36
#define VERSAL_GEM0_WAKE_IRQ_0 57
37
#define VERSAL_GEM1_IRQ_0 58
38
#define VERSAL_GEM1_WAKE_IRQ_0 59
39
+#define VERSAL_ADMA_IRQ_0 60
40
41
/* Architecturally reserved IRQs suitable for virtualization. */
42
#define VERSAL_RSVD_IRQ_FIRST 111
43
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
44
#define MM_GEM1 0xff0d0000U
45
#define MM_GEM1_SIZE 0x10000
46
47
+#define MM_ADMA_CH0 0xffa80000U
48
+#define MM_ADMA_CH0_SIZE 0x10000
49
+
50
#define MM_OCM 0xfffc0000U
51
#define MM_OCM_SIZE 0x40000
52
53
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/arm/xlnx-versal.c
56
+++ b/hw/arm/xlnx-versal.c
57
@@ -XXX,XX +XXX,XX @@ static void versal_create_gems(Versal *s, qemu_irq *pic)
29
}
58
}
30
}
59
}
31
60
32
-static inline void rx_data_bytes(XilinxSPIPS *s, uint8_t *value, int max)
61
+static void versal_create_admas(Versal *s, qemu_irq *pic)
33
+static inline void tx_data_bytes(Fifo8 *fifo, uint32_t value, int num, bool be)
34
{
35
int i;
36
+ for (i = 0; i < num && !fifo8_is_full(fifo); ++i) {
37
+ if (be) {
38
+ fifo8_push(fifo, (uint8_t)(value >> 24));
39
+ value <<= 8;
40
+ } else {
41
+ fifo8_push(fifo, (uint8_t)value);
42
+ value >>= 8;
43
+ }
44
+ }
45
+}
46
47
- for (i = 0; i < max && !fifo8_is_empty(&s->rx_fifo); ++i) {
48
- value[i] = fifo8_pop(&s->rx_fifo);
49
+static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
50
+{
62
+{
51
+ int i;
63
+ int i;
52
+
64
+
53
+ for (i = 0; i < max && !fifo8_is_empty(fifo); ++i) {
65
+ for (i = 0; i < ARRAY_SIZE(s->lpd.iou.adma); i++) {
54
+ value[i] = fifo8_pop(fifo);
66
+ char *name = g_strdup_printf("adma%d", i);
55
}
67
+ DeviceState *dev;
56
+ return max - i;
68
+ MemoryRegion *mr;
57
}
69
+
58
70
+ dev = qdev_create(NULL, "xlnx.zdma");
59
static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
71
+ s->lpd.iou.adma[i] = SYS_BUS_DEVICE(dev);
60
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
72
+ object_property_add_child(OBJECT(s), name, OBJECT(dev), &error_fatal);
61
uint32_t mask = ~0;
73
+ qdev_init_nofail(dev);
62
uint32_t ret;
74
+
63
uint8_t rx_buf[4];
75
+ mr = sysbus_mmio_get_region(s->lpd.iou.adma[i], 0);
64
+ int shortfall;
76
+ memory_region_add_subregion(&s->mr_ps,
65
77
+ MM_ADMA_CH0 + i * MM_ADMA_CH0_SIZE, mr);
66
addr >>= 2;
78
+
67
switch (addr) {
79
+ sysbus_connect_irq(s->lpd.iou.adma[i], 0, pic[VERSAL_ADMA_IRQ_0 + i]);
68
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
80
+ g_free(name);
69
break;
81
+ }
70
case R_RX_DATA:
82
+}
71
memset(rx_buf, 0, sizeof(rx_buf));
83
+
72
- rx_data_bytes(s, rx_buf, s->num_txrx_bytes);
84
/* This takes the board allocated linear DDR memory and creates aliases
73
- ret = s->regs[R_CONFIG] & ENDIAN ? cpu_to_be32(*(uint32_t *)rx_buf)
85
* for each split DDR range/aperture on the Versal address map.
74
- : cpu_to_le32(*(uint32_t *)rx_buf);
86
*/
75
+ shortfall = rx_data_bytes(&s->rx_fifo, rx_buf, s->num_txrx_bytes);
87
@@ -XXX,XX +XXX,XX @@ static void versal_realize(DeviceState *dev, Error **errp)
76
+ ret = s->regs[R_CONFIG] & R_CONFIG_ENDIAN ?
88
versal_create_apu_gic(s, pic);
77
+ cpu_to_be32(*(uint32_t *)rx_buf) :
89
versal_create_uarts(s, pic);
78
+ cpu_to_le32(*(uint32_t *)rx_buf);
90
versal_create_gems(s, pic);
79
+ if (!(s->regs[R_CONFIG] & R_CONFIG_ENDIAN)) {
91
+ versal_create_admas(s, pic);
80
+ ret <<= 8 * shortfall;
92
versal_map_ddr(s);
81
+ }
93
versal_unimp(s);
82
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
83
xilinx_spips_update_ixr(s);
84
return ret;
85
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
86
87
}
88
89
-static inline void tx_data_bytes(XilinxSPIPS *s, uint32_t value, int num)
90
-{
91
- int i;
92
- for (i = 0; i < num && !fifo8_is_full(&s->tx_fifo); ++i) {
93
- if (s->regs[R_CONFIG] & ENDIAN) {
94
- fifo8_push(&s->tx_fifo, (uint8_t)(value >> 24));
95
- value <<= 8;
96
- } else {
97
- fifo8_push(&s->tx_fifo, (uint8_t)value);
98
- value >>= 8;
99
- }
100
- }
101
-}
102
-
103
static void xilinx_spips_write(void *opaque, hwaddr addr,
104
uint64_t value, unsigned size)
105
{
106
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
107
mask = 0;
108
break;
109
case R_TX_DATA:
110
- tx_data_bytes(s, (uint32_t)value, s->num_txrx_bytes);
111
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, s->num_txrx_bytes,
112
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
113
goto no_reg_update;
114
case R_TXD1:
115
- tx_data_bytes(s, (uint32_t)value, 1);
116
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 1,
117
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
118
goto no_reg_update;
119
case R_TXD2:
120
- tx_data_bytes(s, (uint32_t)value, 2);
121
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 2,
122
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
123
goto no_reg_update;
124
case R_TXD3:
125
- tx_data_bytes(s, (uint32_t)value, 3);
126
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 3,
127
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
128
goto no_reg_update;
129
}
130
s->regs[addr] = (s->regs[addr] & ~mask) | (value & mask);
131
@@ -XXX,XX +XXX,XX @@ static void lqspi_load_cache(void *opaque, hwaddr addr)
132
133
while (cache_entry < LQSPI_CACHE_SIZE) {
134
for (i = 0; i < 64; ++i) {
135
- tx_data_bytes(s, 0, 1);
136
+ tx_data_bytes(&s->tx_fifo, 0, 1, false);
137
}
138
xilinx_spips_flush_txfifo(s);
139
for (i = 0; i < 64; ++i) {
140
- rx_data_bytes(s, &q->lqspi_buf[cache_entry++], 1);
141
+ rx_data_bytes(&s->rx_fifo, &q->lqspi_buf[cache_entry++], 1);
142
}
143
}
144
94
145
--
95
--
146
2.7.4
96
2.20.1
147
97
148
98
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Add support for zero pumping according to the transfer size register.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20171126231634.9531-10-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/ssi/xilinx_spips.h | 2 ++
 hw/ssi/xilinx_spips.c | 47 ++++++++++++++++++++++++++++++++++++-------
 2 files changed, 42 insertions(+), 7 deletions(-)

From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Generate xlnx-versal-virt zdma FDT nodes.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/xlnx-versal-virt.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
14
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/ssi/xilinx_spips.h
16
--- a/hw/arm/xlnx-versal-virt.c
18
+++ b/include/hw/ssi/xilinx_spips.h
17
+++ b/hw/arm/xlnx-versal-virt.c
19
@@ -XXX,XX +XXX,XX @@ struct XilinxSPIPS {
18
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gem_nodes(VersalVirt *s)
20
uint32_t rx_discard;
21
22
uint32_t regs[XLNX_SPIPS_R_MAX];
23
+
24
+ bool man_start_com;
25
};
26
27
typedef struct {
28
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/ssi/xilinx_spips.c
31
+++ b/hw/ssi/xilinx_spips.c
32
@@ -XXX,XX +XXX,XX @@
33
FIELD(CMND, DUMMY_CYCLES, 2, 6)
34
#define R_CMND_DMA_EN (1 << 1)
35
#define R_CMND_PUSH_WAIT (1 << 0)
36
+#define R_TRANSFER_SIZE (0xc4 / 4)
37
#define R_LQSPI_STS (0xA4 / 4)
38
#define LQSPI_STS_WR_RECVD (1 << 1)
39
40
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
41
s->link_state_next_when = 0;
42
s->snoop_state = SNOOP_CHECKING;
43
s->cmd_dummies = 0;
44
+ s->man_start_com = false;
45
xilinx_spips_update_ixr(s);
46
xilinx_spips_update_cs_lines(s);
47
}
48
@@ -XXX,XX +XXX,XX @@ static inline void tx_data_bytes(Fifo8 *fifo, uint32_t value, int num, bool be)
49
}
19
}
50
}
20
}
51
21
52
+static void xilinx_spips_check_zero_pump(XilinxSPIPS *s)
22
+static void fdt_add_zdma_nodes(VersalVirt *s)
53
+{
23
+{
54
+ if (!s->regs[R_TRANSFER_SIZE]) {
24
+ const char clocknames[] = "clk_main\0clk_apb";
55
+ return;
25
+ const char compat[] = "xlnx,zynqmp-dma-1.0";
56
+ }
26
+ int i;
57
+ if (!fifo8_is_empty(&s->tx_fifo) && s->regs[R_CMND] & R_CMND_PUSH_WAIT) {
27
+
58
+ return;
28
+ for (i = XLNX_VERSAL_NR_ADMAS - 1; i >= 0; i--) {
59
+ }
29
+ uint64_t addr = MM_ADMA_CH0 + MM_ADMA_CH0_SIZE * i;
60
+ /*
30
+ char *name = g_strdup_printf("/dma@%" PRIx64, addr);
61
+ * The zero pump must never fill tx fifo such that rx overflow is
31
+
62
+ * possible
32
+ qemu_fdt_add_subnode(s->fdt, name);
63
+ */
33
+
64
+ while (s->regs[R_TRANSFER_SIZE] &&
34
+ qemu_fdt_setprop_cell(s->fdt, name, "xlnx,bus-width", 64);
65
+ s->rx_fifo.num + s->tx_fifo.num < RXFF_A_Q - 3) {
35
+ qemu_fdt_setprop_cells(s->fdt, name, "clocks",
66
+ /* endianess just doesn't matter when zero pumping */
36
+ s->phandle.clk_25Mhz, s->phandle.clk_25Mhz);
67
+ tx_data_bytes(&s->tx_fifo, 0, 4, false);
37
+ qemu_fdt_setprop(s->fdt, name, "clock-names",
68
+ s->regs[R_TRANSFER_SIZE] &= ~0x03ull;
38
+ clocknames, sizeof(clocknames));
69
+ s->regs[R_TRANSFER_SIZE] -= 4;
39
+ qemu_fdt_setprop_cells(s->fdt, name, "interrupts",
40
+ GIC_FDT_IRQ_TYPE_SPI, VERSAL_ADMA_IRQ_0 + i,
41
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI);
42
+ qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
43
+ 2, addr, 2, 0x1000);
44
+ qemu_fdt_setprop(s->fdt, name, "compatible", compat, sizeof(compat));
45
+ g_free(name);
70
+ }
46
+ }
71
+}
47
+}
72
+
48
+
73
+static void xilinx_spips_check_flush(XilinxSPIPS *s)
49
static void fdt_nop_memory_nodes(void *fdt, Error **errp)
74
+{
75
+ if (s->man_start_com ||
76
+ (!fifo8_is_empty(&s->tx_fifo) &&
77
+ !(s->regs[R_CONFIG] & MAN_START_EN))) {
78
+ xilinx_spips_check_zero_pump(s);
79
+ xilinx_spips_flush_txfifo(s);
80
+ }
81
+ if (fifo8_is_empty(&s->tx_fifo) && !s->regs[R_TRANSFER_SIZE]) {
82
+ s->man_start_com = false;
83
+ }
84
+ xilinx_spips_update_ixr(s);
85
+}
86
+
87
static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
88
{
50
{
89
int i;
51
Error *err = NULL;
90
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
52
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
91
uint64_t value, unsigned size)
53
fdt_add_uart_nodes(s);
92
{
54
fdt_add_gic_nodes(s);
93
int mask = ~0;
55
fdt_add_timer_nodes(s);
94
- int man_start_com = 0;
56
+ fdt_add_zdma_nodes(s);
95
XilinxSPIPS *s = opaque;
57
fdt_add_cpu_nodes(s, psci_conduit);
96
58
fdt_add_clk_node(s, "/clk125", 125000000, s->phandle.clk_125Mhz);
97
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr, (unsigned)value);
59
fdt_add_clk_node(s, "/clk25", 25000000, s->phandle.clk_25Mhz);
98
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
99
switch (addr) {
100
case R_CONFIG:
101
mask = ~(R_CONFIG_RSVD | MAN_START_COM);
102
- if (value & MAN_START_COM) {
103
- man_start_com = 1;
104
+ if ((value & MAN_START_COM) && (s->regs[R_CONFIG] & MAN_START_EN)) {
105
+ s->man_start_com = true;
106
}
107
break;
108
case R_INTR_STATUS:
109
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
110
s->regs[addr] = (s->regs[addr] & ~mask) | (value & mask);
111
no_reg_update:
112
xilinx_spips_update_cs_lines(s);
113
- if ((man_start_com && s->regs[R_CONFIG] & MAN_START_EN) ||
114
- (fifo8_is_empty(&s->tx_fifo) && s->regs[R_CONFIG] & MAN_START_EN)) {
115
- xilinx_spips_flush_txfifo(s);
116
- }
117
+ xilinx_spips_check_flush(s);
118
xilinx_spips_update_cs_lines(s);
119
xilinx_spips_update_ixr(s);
120
}
121
--
60
--
122
2.7.4
61
2.20.1
123
62
124
63
All of the callers of get_phys_addr() and arm_tlb_fill() now ignore
the FSR values they return, so we can just remove the argument
entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-12-git-send-email-peter.maydell@linaro.org
---
 target/arm/internals.h | 2 +-
 target/arm/helper.c | 45 ++++++++++++++-------------------------------
 target/arm/op_helper.c | 3 +--
 3 files changed, 16 insertions(+), 34 deletions(-)

The ARMv8.2-TTCNP extension allows an implementation to optimize by
sharing TLB entries between multiple cores, provided that software
declares that it's ready to deal with this by setting a CnP bit in
the TTBRn_ELx. It is mandatory from ARMv8.2 onward.

For QEMU's TLB implementation, sharing TLB entries between different
cores would not really benefit us and would be a lot of work to
implement. So we implement this extension in the "trivial" manner:
we allow the guest to set and read back the CnP bit, but don't change
our behaviour (this is an architecturally valid implementation
choice).

The only code path which looks at the TTBRn_ELx values for the
long-descriptor format where the CnP bit is defined is already doing
enough masking to not get confused when the CnP bit at the bottom of
the register is set, so we can simply add a comment noting why we're
relying on that mask.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200225193822.18874-1-peter.maydell@linaro.org
---
 target/arm/cpu.c | 1 +
 target/arm/cpu64.c | 2 ++
 target/arm/helper.c | 4 ++++
 3 files changed, 7 insertions(+)

16
diff --git a/target/arm/internals.h b/target/arm/internals.h
28
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
17
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/internals.h
30
--- a/target/arm/cpu.c
19
+++ b/target/arm/internals.h
31
+++ b/target/arm/cpu.c
20
@@ -XXX,XX +XXX,XX @@ static inline uint32_t arm_fi_to_lfsc(ARMMMUFaultInfo *fi)
32
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
21
/* Do a page table walk and add page to TLB if possible */
33
t = cpu->isar.id_mmfr4;
22
bool arm_tlb_fill(CPUState *cpu, vaddr address,
34
t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
23
MMUAccessType access_type, int mmu_idx,
35
t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
24
- uint32_t *fsr, ARMMMUFaultInfo *fi);
36
+ t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
25
+ ARMMMUFaultInfo *fi);
37
cpu->isar.id_mmfr4 = t;
26
38
}
27
/* Return true if the stage 1 translation regime is using LPAE format page
39
#endif
28
* tables */
40
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/cpu64.c
43
+++ b/target/arm/cpu64.c
44
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
45
46
t = cpu->isar.id_aa64mmfr2;
47
t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
48
+ t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
49
cpu->isar.id_aa64mmfr2 = t;
50
51
/* Replicate the same data to the 32-bit id registers. */
52
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
53
u = cpu->isar.id_mmfr4;
54
u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
55
u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
56
+ u = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
57
cpu->isar.id_mmfr4 = u;
58
59
u = cpu->isar.id_aa64dfr0;
29
diff --git a/target/arm/helper.c b/target/arm/helper.c
60
diff --git a/target/arm/helper.c b/target/arm/helper.c
30
index XXXXXXX..XXXXXXX 100644
61
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/helper.c
62
--- a/target/arm/helper.c
32
+++ b/target/arm/helper.c
63
+++ b/target/arm/helper.c
33
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
64
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
34
static bool get_phys_addr(CPUARMState *env, target_ulong address,
65
35
MMUAccessType access_type, ARMMMUIdx mmu_idx,
66
/* Now we can extract the actual base address from the TTBR */
36
hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
67
descaddr = extract64(ttbr, 0, 48);
37
- target_ulong *page_size, uint32_t *fsr,
68
+ /*
38
+ target_ulong *page_size,
69
+ * We rely on this masking to clear the RES0 bits at the bottom of the TTBR
39
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs);
70
+ * and also to mask out CnP (bit 0) which could validly be non-zero.
40
71
+ */
41
static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
72
descaddr &= ~indexmask;
42
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
73
43
hwaddr phys_addr;
74
/* The address field in the descriptor goes up to bit 39 for ARMv7
44
target_ulong page_size;
45
int prot;
46
- uint32_t fsr_unused;
47
bool ret;
48
uint64_t par64;
49
MemTxAttrs attrs = {};
50
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
51
ARMCacheAttrs cacheattrs = {};
52
53
ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
54
- &prot, &page_size, &fsr_unused, &fi, &cacheattrs);
55
+ &prot, &page_size, &fi, &cacheattrs);
56
/* TODO: this is not the correct condition to use to decide whether
57
* to report a PAR in 64-bit or 32-bit format.
58
*/
59
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
60
target_ulong page_size;
61
hwaddr physaddr;
62
int prot;
63
- uint32_t fsr;
64
65
v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
66
if (!sattrs.nsc || sattrs.ns) {
67
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
68
return false;
69
}
70
if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
71
- &physaddr, &attrs, &prot, &page_size, &fsr, &fi, NULL)) {
72
+ &physaddr, &attrs, &prot, &page_size, &fi, NULL)) {
73
/* the MPU lookup failed */
74
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
75
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
76
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2)
77
* @attrs: set to the memory transaction attributes to use
78
* @prot: set to the permissions for the page containing phys_ptr
79
* @page_size: set to the size of the page containing phys_ptr
80
- * @fsr: set to the DFSR/IFSR value on failure
81
* @fi: set to fault info if the translation fails
82
* @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
83
*/
84
static bool get_phys_addr(CPUARMState *env, target_ulong address,
85
MMUAccessType access_type, ARMMMUIdx mmu_idx,
86
hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
87
- target_ulong *page_size, uint32_t *fsr,
88
+ target_ulong *page_size,
89
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
90
{
91
if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
92
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
93
94
ret = get_phys_addr(env, address, access_type,
95
stage_1_mmu_idx(mmu_idx), &ipa, attrs,
96
- prot, page_size, fsr, fi, cacheattrs);
97
+ prot, page_size, fi, cacheattrs);
98
99
/* If S1 fails or S2 is disabled, return early. */
100
if (ret || regime_translation_disabled(env, ARMMMUIdx_S2NS)) {
101
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
102
phys_ptr, attrs, &s2_prot,
103
page_size, fi,
104
cacheattrs != NULL ? &cacheattrs2 : NULL);
105
- *fsr = arm_fi_to_lfsc(fi);
106
fi->s2addr = ipa;
107
/* Combine the S1 and S2 perms. */
108
*prot &= s2_prot;
109
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
110
/* PMSAv8 */
111
ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
112
phys_ptr, attrs, prot, fi);
113
- *fsr = arm_fi_to_sfsc(fi);
114
} else if (arm_feature(env, ARM_FEATURE_V7)) {
115
/* PMSAv7 */
116
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
117
phys_ptr, prot, fi);
118
- *fsr = arm_fi_to_sfsc(fi);
119
} else {
120
/* Pre-v7 MPU */
121
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
122
phys_ptr, prot, fi);
123
- *fsr = arm_fi_to_sfsc(fi);
124
}
125
qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
126
" mmu_idx %u -> %s (prot %c%c%c)\n",
127
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
128
}
129
130
if (regime_using_lpae_format(env, mmu_idx)) {
131
- bool ret = get_phys_addr_lpae(env, address, access_type, mmu_idx,
132
- phys_ptr, attrs, prot, page_size,
133
- fi, cacheattrs);
134
-
135
- *fsr = arm_fi_to_lfsc(fi);
136
- return ret;
137
+ return get_phys_addr_lpae(env, address, access_type, mmu_idx,
138
+ phys_ptr, attrs, prot, page_size,
139
+ fi, cacheattrs);
140
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
141
- bool ret = get_phys_addr_v6(env, address, access_type, mmu_idx,
142
- phys_ptr, attrs, prot, page_size, fi);
143
-
144
- *fsr = arm_fi_to_sfsc(fi);
145
- return ret;
146
+ return get_phys_addr_v6(env, address, access_type, mmu_idx,
147
+ phys_ptr, attrs, prot, page_size, fi);
148
} else {
149
- bool ret = get_phys_addr_v5(env, address, access_type, mmu_idx,
150
+ return get_phys_addr_v5(env, address, access_type, mmu_idx,
151
phys_ptr, prot, page_size, fi);
152
-
153
- *fsr = arm_fi_to_sfsc(fi);
154
- return ret;
155
}
156
}
157
158
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
159
* fsr with ARM DFSR/IFSR fault register format value on failure.
160
*/
161
bool arm_tlb_fill(CPUState *cs, vaddr address,
162
- MMUAccessType access_type, int mmu_idx, uint32_t *fsr,
163
+ MMUAccessType access_type, int mmu_idx,
164
ARMMMUFaultInfo *fi)
165
{
166
ARMCPU *cpu = ARM_CPU(cs);
167
@@ -XXX,XX +XXX,XX @@ bool arm_tlb_fill(CPUState *cs, vaddr address,
168
169
ret = get_phys_addr(env, address, access_type,
170
core_to_arm_mmu_idx(env, mmu_idx), &phys_addr,
171
- &attrs, &prot, &page_size, fsr, fi, NULL);
172
+ &attrs, &prot, &page_size, fi, NULL);
173
if (!ret) {
174
/* Map a single [sub]page. */
175
phys_addr &= TARGET_PAGE_MASK;
176
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
177
target_ulong page_size;
178
int prot;
179
bool ret;
180
- uint32_t fsr;
181
ARMMMUFaultInfo fi = {};
182
ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
183
184
*attrs = (MemTxAttrs) {};
185
186
ret = get_phys_addr(env, addr, 0, mmu_idx, &phys_addr,
187
- attrs, &prot, &page_size, &fsr, &fi, NULL);
188
+ attrs, &prot, &page_size, &fi, NULL);
189
190
if (ret) {
191
return -1;
192
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
193
index XXXXXXX..XXXXXXX 100644
194
--- a/target/arm/op_helper.c
195
+++ b/target/arm/op_helper.c
196
@@ -XXX,XX +XXX,XX @@ void tlb_fill(CPUState *cs, target_ulong addr, MMUAccessType access_type,
197
int mmu_idx, uintptr_t retaddr)
198
{
199
bool ret;
200
- uint32_t fsr = 0;
201
ARMMMUFaultInfo fi = {};
202
203
- ret = arm_tlb_fill(cs, addr, access_type, mmu_idx, &fsr, &fi);
204
+ ret = arm_tlb_fill(cs, addr, access_type, mmu_idx, &fi);
205
if (unlikely(ret)) {
206
ARMCPU *cpu = ARM_CPU(cs);
207
208
--
75
--
209
2.7.4
76
2.20.1
210
77
211
78
From: Eric Auger <eric.auger@redhat.com>

Update headers against v4.15-rc1.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1511883692-11511-4-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/standard-headers/asm-s390/virtio-ccw.h | 1 +
 include/standard-headers/asm-x86/hyperv.h | 394 +--------------------
 include/standard-headers/linux/input-event-codes.h | 2 +
 include/standard-headers/linux/input.h | 1 +
 include/standard-headers/linux/pci_regs.h | 45 ++-
 linux-headers/asm-arm/kvm.h | 8 +
 linux-headers/asm-arm/kvm_para.h | 1 +
 linux-headers/asm-arm/unistd.h | 2 +
 linux-headers/asm-arm64/kvm.h | 8 +
 linux-headers/asm-arm64/unistd.h | 1 +
 linux-headers/asm-powerpc/epapr_hcalls.h | 1 +
 linux-headers/asm-powerpc/kvm.h | 1 +
 linux-headers/asm-powerpc/kvm_para.h | 1 +
 linux-headers/asm-powerpc/unistd.h | 1 +
 linux-headers/asm-s390/kvm.h | 1 +
 linux-headers/asm-s390/kvm_para.h | 1 +
 linux-headers/asm-s390/unistd.h | 4 +-
 linux-headers/asm-x86/kvm.h | 1 +
 linux-headers/asm-x86/kvm_para.h | 2 +-
 linux-headers/asm-x86/unistd.h | 1 +
 linux-headers/linux/kvm.h | 2 +
 linux-headers/linux/kvm_para.h | 1 +
 linux-headers/linux/psci.h | 1 +
 linux-headers/linux/userfaultfd.h | 1 +
 linux-headers/linux/vfio.h | 1 +
 linux-headers/linux/vfio_ccw.h | 1 +
 linux-headers/linux/vhost.h | 1 +
 27 files changed, 74 insertions(+), 411 deletions(-)

From: Eric Auger <eric.auger@redhat.com>

Make sure a null SMMUPciBus is returned in case we were
not able to identify a pci bus matching the @bus_num.

This matches the fix done on intel iommu in commit:
a2e1cd41ccfe796529abfd1b6aeb1dd4393762a2

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20200226172628.17449-1-eric.auger@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-common.c | 1 +
 1 file changed, 1 insertion(+)

18
38
diff --git a/include/standard-headers/asm-s390/virtio-ccw.h b/include/standard-headers/asm-s390/virtio-ccw.h
19
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
39
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
40
--- a/include/standard-headers/asm-s390/virtio-ccw.h
21
--- a/hw/arm/smmu-common.c
41
+++ b/include/standard-headers/asm-s390/virtio-ccw.h
22
+++ b/hw/arm/smmu-common.c
42
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ SMMUPciBus *smmu_find_smmu_pcibus(SMMUState *s, uint8_t bus_num)
43
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
24
return smmu_pci_bus;
44
/*
25
}
45
* Definitions for virtio-ccw devices.
26
}
46
*
27
+ smmu_pci_bus = NULL;
47
diff --git a/include/standard-headers/asm-x86/hyperv.h b/include/standard-headers/asm-x86/hyperv.h
28
}
48
index XXXXXXX..XXXXXXX 100644
29
return smmu_pci_bus;
49
--- a/include/standard-headers/asm-x86/hyperv.h
30
}
50
+++ b/include/standard-headers/asm-x86/hyperv.h
51
@@ -1,393 +1 @@
52
-#ifndef _ASM_X86_HYPERV_H
53
-#define _ASM_X86_HYPERV_H
54
-
55
-#include "standard-headers/linux/types.h"
56
-
57
-/*
58
- * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
59
- * is set by CPUID(HvCpuIdFunctionVersionAndFeatures).
60
- */
61
-#define HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS    0x40000000
62
-#define HYPERV_CPUID_INTERFACE            0x40000001
63
-#define HYPERV_CPUID_VERSION            0x40000002
64
-#define HYPERV_CPUID_FEATURES            0x40000003
65
-#define HYPERV_CPUID_ENLIGHTMENT_INFO        0x40000004
66
-#define HYPERV_CPUID_IMPLEMENT_LIMITS        0x40000005
67
-
68
-#define HYPERV_HYPERVISOR_PRESENT_BIT        0x80000000
69
-#define HYPERV_CPUID_MIN            0x40000005
70
-#define HYPERV_CPUID_MAX            0x4000ffff
71
-
72
-/*
73
- * Feature identification. EAX indicates which features are available
74
- * to the partition based upon the current partition privileges.
75
- */
76
-
77
-/* VP Runtime (HV_X64_MSR_VP_RUNTIME) available */
78
-#define HV_X64_MSR_VP_RUNTIME_AVAILABLE        (1 << 0)
79
-/* Partition Reference Counter (HV_X64_MSR_TIME_REF_COUNT) available*/
80
-#define HV_X64_MSR_TIME_REF_COUNT_AVAILABLE    (1 << 1)
81
-/* Partition reference TSC MSR is available */
82
-#define HV_X64_MSR_REFERENCE_TSC_AVAILABLE (1 << 9)
83
-
84
-/* A partition's reference time stamp counter (TSC) page */
85
-#define HV_X64_MSR_REFERENCE_TSC        0x40000021
86
-
87
-/*
88
- * There is a single feature flag that signifies if the partition has access
89
- * to MSRs with local APIC and TSC frequencies.
90
- */
91
-#define HV_X64_ACCESS_FREQUENCY_MSRS        (1 << 11)
92
-
93
-/*
94
- * Basic SynIC MSRs (HV_X64_MSR_SCONTROL through HV_X64_MSR_EOM
95
- * and HV_X64_MSR_SINT0 through HV_X64_MSR_SINT15) available
96
- */
97
-#define HV_X64_MSR_SYNIC_AVAILABLE        (1 << 2)
98
-/*
99
- * Synthetic Timer MSRs (HV_X64_MSR_STIMER0_CONFIG through
100
- * HV_X64_MSR_STIMER3_COUNT) available
101
- */
102
-#define HV_X64_MSR_SYNTIMER_AVAILABLE        (1 << 3)
103
-/*
104
- * APIC access MSRs (HV_X64_MSR_EOI, HV_X64_MSR_ICR and HV_X64_MSR_TPR)
105
- * are available
106
- */
107
-#define HV_X64_MSR_APIC_ACCESS_AVAILABLE    (1 << 4)
108
-/* Hypercall MSRs (HV_X64_MSR_GUEST_OS_ID and HV_X64_MSR_HYPERCALL) available*/
109
-#define HV_X64_MSR_HYPERCALL_AVAILABLE        (1 << 5)
110
-/* Access virtual processor index MSR (HV_X64_MSR_VP_INDEX) available*/
111
-#define HV_X64_MSR_VP_INDEX_AVAILABLE        (1 << 6)
112
-/* Virtual system reset MSR (HV_X64_MSR_RESET) is available*/
113
-#define HV_X64_MSR_RESET_AVAILABLE        (1 << 7)
114
- /*
115
- * Access statistics pages MSRs (HV_X64_MSR_STATS_PARTITION_RETAIL_PAGE,
116
- * HV_X64_MSR_STATS_PARTITION_INTERNAL_PAGE, HV_X64_MSR_STATS_VP_RETAIL_PAGE,
117
- * HV_X64_MSR_STATS_VP_INTERNAL_PAGE) available
118
- */
119
-#define HV_X64_MSR_STAT_PAGES_AVAILABLE        (1 << 8)
120
-
121
-/* Frequency MSRs available */
122
-#define HV_FEATURE_FREQUENCY_MSRS_AVAILABLE    (1 << 8)
123
-
124
-/* Crash MSR available */
125
-#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (1 << 10)
126
-
127
-/*
128
- * Feature identification: EBX indicates which flags were specified at
129
- * partition creation. The format is the same as the partition creation
130
- * flag structure defined in section Partition Creation Flags.
131
- */
132
-#define HV_X64_CREATE_PARTITIONS        (1 << 0)
133
-#define HV_X64_ACCESS_PARTITION_ID        (1 << 1)
134
-#define HV_X64_ACCESS_MEMORY_POOL        (1 << 2)
135
-#define HV_X64_ADJUST_MESSAGE_BUFFERS        (1 << 3)
136
-#define HV_X64_POST_MESSAGES            (1 << 4)
137
-#define HV_X64_SIGNAL_EVENTS            (1 << 5)
138
-#define HV_X64_CREATE_PORT            (1 << 6)
139
-#define HV_X64_CONNECT_PORT            (1 << 7)
140
-#define HV_X64_ACCESS_STATS            (1 << 8)
141
-#define HV_X64_DEBUGGING            (1 << 11)
142
-#define HV_X64_CPU_POWER_MANAGEMENT        (1 << 12)
143
-#define HV_X64_CONFIGURE_PROFILER        (1 << 13)
144
-
145
-/*
146
- * Feature identification. EDX indicates which miscellaneous features
147
- * are available to the partition.
148
- */
149
-/* The MWAIT instruction is available (per section MONITOR / MWAIT) */
150
-#define HV_X64_MWAIT_AVAILABLE                (1 << 0)
151
-/* Guest debugging support is available */
152
-#define HV_X64_GUEST_DEBUGGING_AVAILABLE        (1 << 1)
153
-/* Performance Monitor support is available*/
154
-#define HV_X64_PERF_MONITOR_AVAILABLE            (1 << 2)
155
-/* Support for physical CPU dynamic partitioning events is available*/
156
-#define HV_X64_CPU_DYNAMIC_PARTITIONING_AVAILABLE    (1 << 3)
157
-/*
158
- * Support for passing hypercall input parameter block via XMM
159
- * registers is available
160
- */
161
-#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE        (1 << 4)
162
-/* Support for a virtual guest idle state is available */
163
-#define HV_X64_GUEST_IDLE_STATE_AVAILABLE        (1 << 5)
164
-/* Guest crash data handler available */
165
-#define HV_X64_GUEST_CRASH_MSR_AVAILABLE        (1 << 10)
166
-
167
-/*
168
- * Implementation recommendations. Indicates which behaviors the hypervisor
169
- * recommends the OS implement for optimal performance.
170
- */
171
- /*
172
- * Recommend using hypercall for address space switches rather
173
- * than MOV to CR3 instruction
174
- */
175
-#define HV_X64_AS_SWITCH_RECOMMENDED        (1 << 0)
176
-/* Recommend using hypercall for local TLB flushes rather
177
- * than INVLPG or MOV to CR3 instructions */
178
-#define HV_X64_LOCAL_TLB_FLUSH_RECOMMENDED    (1 << 1)
179
-/*
180
- * Recommend using hypercall for remote TLB flushes rather
181
- * than inter-processor interrupts
182
- */
183
-#define HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED    (1 << 2)
184
-/*
185
- * Recommend using MSRs for accessing APIC registers
186
- * EOI, ICR and TPR rather than their memory-mapped counterparts
187
- */
188
-#define HV_X64_APIC_ACCESS_RECOMMENDED        (1 << 3)
189
-/* Recommend using the hypervisor-provided MSR to initiate a system RESET */
190
-#define HV_X64_SYSTEM_RESET_RECOMMENDED        (1 << 4)
191
-/*
192
- * Recommend using relaxed timing for this partition. If used,
193
- * the VM should disable any watchdog timeouts that rely on the
194
- * timely delivery of external interrupts
195
- */
196
-#define HV_X64_RELAXED_TIMING_RECOMMENDED    (1 << 5)
197
-
198
-/*
199
- * Virtual APIC support
200
- */
201
-#define HV_X64_DEPRECATING_AEOI_RECOMMENDED    (1 << 9)
202
-
203
-/* Recommend using the newer ExProcessorMasks interface */
204
-#define HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED    (1 << 11)
205
-
206
-/*
207
- * Crash notification flag.
208
- */
209
-#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
210
-
211
-/* MSR used to identify the guest OS. */
212
-#define HV_X64_MSR_GUEST_OS_ID            0x40000000
213
-
214
-/* MSR used to setup pages used to communicate with the hypervisor. */
215
-#define HV_X64_MSR_HYPERCALL            0x40000001
216
-
217
-/* MSR used to provide vcpu index */
218
-#define HV_X64_MSR_VP_INDEX            0x40000002
219
-
220
-/* MSR used to reset the guest OS. */
221
-#define HV_X64_MSR_RESET            0x40000003
222
-
223
-/* MSR used to provide vcpu runtime in 100ns units */
224
-#define HV_X64_MSR_VP_RUNTIME            0x40000010
225
-
226
-/* MSR used to read the per-partition time reference counter */
227
-#define HV_X64_MSR_TIME_REF_COUNT        0x40000020
228
-
229
-/* MSR used to retrieve the TSC frequency */
230
-#define HV_X64_MSR_TSC_FREQUENCY        0x40000022
231
-
232
-/* MSR used to retrieve the local APIC timer frequency */
233
-#define HV_X64_MSR_APIC_FREQUENCY        0x40000023
234
-
235
-/* Define the virtual APIC registers */
236
-#define HV_X64_MSR_EOI                0x40000070
237
-#define HV_X64_MSR_ICR                0x40000071
238
-#define HV_X64_MSR_TPR                0x40000072
239
-#define HV_X64_MSR_APIC_ASSIST_PAGE        0x40000073
240
-
241
-/* Define synthetic interrupt controller model specific registers. */
242
-#define HV_X64_MSR_SCONTROL            0x40000080
243
-#define HV_X64_MSR_SVERSION            0x40000081
244
-#define HV_X64_MSR_SIEFP            0x40000082
245
-#define HV_X64_MSR_SIMP                0x40000083
246
-#define HV_X64_MSR_EOM                0x40000084
247
-#define HV_X64_MSR_SINT0            0x40000090
248
-#define HV_X64_MSR_SINT1            0x40000091
249
-#define HV_X64_MSR_SINT2            0x40000092
250
-#define HV_X64_MSR_SINT3            0x40000093
251
-#define HV_X64_MSR_SINT4            0x40000094
252
-#define HV_X64_MSR_SINT5            0x40000095
253
-#define HV_X64_MSR_SINT6            0x40000096
254
-#define HV_X64_MSR_SINT7            0x40000097
255
-#define HV_X64_MSR_SINT8            0x40000098
256
-#define HV_X64_MSR_SINT9            0x40000099
257
-#define HV_X64_MSR_SINT10            0x4000009A
258
-#define HV_X64_MSR_SINT11            0x4000009B
259
-#define HV_X64_MSR_SINT12            0x4000009C
260
-#define HV_X64_MSR_SINT13            0x4000009D
261
-#define HV_X64_MSR_SINT14            0x4000009E
262
-#define HV_X64_MSR_SINT15            0x4000009F
263
-
264
-/*
265
- * Synthetic Timer MSRs. Four timers per vcpu.
266
- */
267
-#define HV_X64_MSR_STIMER0_CONFIG        0x400000B0
268
-#define HV_X64_MSR_STIMER0_COUNT        0x400000B1
269
-#define HV_X64_MSR_STIMER1_CONFIG        0x400000B2
270
-#define HV_X64_MSR_STIMER1_COUNT        0x400000B3
271
-#define HV_X64_MSR_STIMER2_CONFIG        0x400000B4
272
-#define HV_X64_MSR_STIMER2_COUNT        0x400000B5
273
-#define HV_X64_MSR_STIMER3_CONFIG        0x400000B6
274
-#define HV_X64_MSR_STIMER3_COUNT        0x400000B7
275
-
276
-/* Hyper-V guest crash notification MSR's */
277
-#define HV_X64_MSR_CRASH_P0            0x40000100
278
-#define HV_X64_MSR_CRASH_P1            0x40000101
279
-#define HV_X64_MSR_CRASH_P2            0x40000102
280
-#define HV_X64_MSR_CRASH_P3            0x40000103
281
-#define HV_X64_MSR_CRASH_P4            0x40000104
282
-#define HV_X64_MSR_CRASH_CTL            0x40000105
283
-#define HV_X64_MSR_CRASH_CTL_NOTIFY        (1ULL << 63)
284
-#define HV_X64_MSR_CRASH_PARAMS        \
285
-        (1 + (HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0))
286
-
287
-#define HV_X64_MSR_HYPERCALL_ENABLE        0x00000001
288
-#define HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_SHIFT    12
289
-#define HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_MASK    \
290
-        (~((1ull << HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_SHIFT) - 1))
291
-
292
-/* Declare the various hypercall operations. */
293
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE    0x0002
294
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST    0x0003
295
-#define HVCALL_NOTIFY_LONG_SPIN_WAIT        0x0008
296
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX 0x0013
297
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX 0x0014
298
-#define HVCALL_POST_MESSAGE            0x005c
299
-#define HVCALL_SIGNAL_EVENT            0x005d
300
-
301
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ENABLE        0x00000001
302
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT    12
303
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_MASK    \
304
-        (~((1ull << HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
305
-
306
-#define HV_X64_MSR_TSC_REFERENCE_ENABLE        0x00000001
307
-#define HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT    12
308
-
309
-#define HV_PROCESSOR_POWER_STATE_C0        0
310
-#define HV_PROCESSOR_POWER_STATE_C1        1
311
-#define HV_PROCESSOR_POWER_STATE_C2        2
312
-#define HV_PROCESSOR_POWER_STATE_C3        3
313
-
314
-#define HV_FLUSH_ALL_PROCESSORS            BIT(0)
315
-#define HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES    BIT(1)
316
-#define HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY    BIT(2)
317
-#define HV_FLUSH_USE_EXTENDED_RANGE_FORMAT    BIT(3)
318
-
319
-enum HV_GENERIC_SET_FORMAT {
320
-    HV_GENERIC_SET_SPARCE_4K,
321
-    HV_GENERIC_SET_ALL,
322
-};
323
-
324
-/* hypercall status code */
325
-#define HV_STATUS_SUCCESS            0
326
-#define HV_STATUS_INVALID_HYPERCALL_CODE    2
327
-#define HV_STATUS_INVALID_HYPERCALL_INPUT    3
328
-#define HV_STATUS_INVALID_ALIGNMENT        4
329
-#define HV_STATUS_INSUFFICIENT_MEMORY        11
330
-#define HV_STATUS_INVALID_CONNECTION_ID        18
331
-#define HV_STATUS_INSUFFICIENT_BUFFERS        19
332
-
333
-typedef struct _HV_REFERENCE_TSC_PAGE {
334
-    uint32_t tsc_sequence;
335
-    uint32_t res1;
336
-    uint64_t tsc_scale;
337
-    int64_t tsc_offset;
338
-} HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
339
-
340
-/* Define the number of synthetic interrupt sources. */
341
-#define HV_SYNIC_SINT_COUNT        (16)
342
-/* Define the expected SynIC version. */
343
-#define HV_SYNIC_VERSION_1        (0x1)
344
-
345
-#define HV_SYNIC_CONTROL_ENABLE        (1ULL << 0)
346
-#define HV_SYNIC_SIMP_ENABLE        (1ULL << 0)
347
-#define HV_SYNIC_SIEFP_ENABLE        (1ULL << 0)
348
-#define HV_SYNIC_SINT_MASKED        (1ULL << 16)
349
-#define HV_SYNIC_SINT_AUTO_EOI        (1ULL << 17)
350
-#define HV_SYNIC_SINT_VECTOR_MASK    (0xFF)
351
-
352
-#define HV_SYNIC_STIMER_COUNT        (4)
353
-
354
-/* Define synthetic interrupt controller message constants. */
355
-#define HV_MESSAGE_SIZE            (256)
356
-#define HV_MESSAGE_PAYLOAD_BYTE_COUNT    (240)
357
-#define HV_MESSAGE_PAYLOAD_QWORD_COUNT    (30)
358
-
359
-/* Define hypervisor message types. */
360
-enum hv_message_type {
361
-    HVMSG_NONE            = 0x00000000,
362
-
363
-    /* Memory access messages. */
364
-    HVMSG_UNMAPPED_GPA        = 0x80000000,
365
-    HVMSG_GPA_INTERCEPT        = 0x80000001,
366
-
367
-    /* Timer notification messages. */
368
-    HVMSG_TIMER_EXPIRED            = 0x80000010,
369
-
370
-    /* Error messages. */
371
-    HVMSG_INVALID_VP_REGISTER_VALUE    = 0x80000020,
372
-    HVMSG_UNRECOVERABLE_EXCEPTION    = 0x80000021,
373
-    HVMSG_UNSUPPORTED_FEATURE        = 0x80000022,
374
-
375
-    /* Trace buffer complete messages. */
376
-    HVMSG_EVENTLOG_BUFFERCOMPLETE    = 0x80000040,
377
-
378
-    /* Platform-specific processor intercept messages. */
379
-    HVMSG_X64_IOPORT_INTERCEPT        = 0x80010000,
380
-    HVMSG_X64_MSR_INTERCEPT        = 0x80010001,
381
-    HVMSG_X64_CPUID_INTERCEPT        = 0x80010002,
382
-    HVMSG_X64_EXCEPTION_INTERCEPT    = 0x80010003,
383
-    HVMSG_X64_APIC_EOI            = 0x80010004,
384
-    HVMSG_X64_LEGACY_FP_ERROR        = 0x80010005
385
-};
386
-
387
-/* Define synthetic interrupt controller message flags. */
388
-union hv_message_flags {
389
-    uint8_t asu8;
390
-    struct {
391
-        uint8_t msg_pending:1;
392
-        uint8_t reserved:7;
393
-    };
394
-};
395
-
396
-/* Define port identifier type. */
397
-union hv_port_id {
398
-    uint32_t asu32;
399
-    struct {
400
-        uint32_t id:24;
401
-        uint32_t reserved:8;
402
-    } u;
403
-};
404
-
405
-/* Define synthetic interrupt controller message header. */
406
-struct hv_message_header {
407
-    uint32_t message_type;
408
-    uint8_t payload_size;
409
-    union hv_message_flags message_flags;
410
-    uint8_t reserved[2];
411
-    union {
412
-        uint64_t sender;
413
-        union hv_port_id port;
414
-    };
415
-};
416
-
417
-/* Define synthetic interrupt controller message format. */
418
-struct hv_message {
419
-    struct hv_message_header header;
420
-    union {
421
-        uint64_t payload[HV_MESSAGE_PAYLOAD_QWORD_COUNT];
422
-    } u;
423
-};
424
-
425
-/* Define the synthetic interrupt message page layout. */
426
-struct hv_message_page {
427
-    struct hv_message sint_message[HV_SYNIC_SINT_COUNT];
428
-};
429
-
430
-/* Define timer message payload structure. */
431
-struct hv_timer_message_payload {
432
-    uint32_t timer_index;
433
-    uint32_t reserved;
434
-    uint64_t expiration_time;    /* When the timer expired */
435
-    uint64_t delivery_time;    /* When the message was delivered */
436
-};
437
-
438
-#define HV_STIMER_ENABLE        (1ULL << 0)
439
-#define HV_STIMER_PERIODIC        (1ULL << 1)
440
-#define HV_STIMER_LAZY            (1ULL << 2)
441
-#define HV_STIMER_AUTOENABLE        (1ULL << 3)
442
-#define HV_STIMER_SINT(config)        (uint8_t)(((config) >> 16) & 0x0F)
443
-
444
-#endif
445
+ /* this is a temporary placeholder until kvm_para.h stops including it */
446
diff --git a/include/standard-headers/linux/input-event-codes.h b/include/standard-headers/linux/input-event-codes.h
447
index XXXXXXX..XXXXXXX 100644
448
--- a/include/standard-headers/linux/input-event-codes.h
449
+++ b/include/standard-headers/linux/input-event-codes.h
450
@@ -XXX,XX +XXX,XX @@
451
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
452
/*
453
* Input event codes
454
*
455
@@ -XXX,XX +XXX,XX @@
456
#define BTN_TOOL_MOUSE        0x146
457
#define BTN_TOOL_LENS        0x147
458
#define BTN_TOOL_QUINTTAP    0x148    /* Five fingers on trackpad */
459
+#define BTN_STYLUS3        0x149
460
#define BTN_TOUCH        0x14a
461
#define BTN_STYLUS        0x14b
462
#define BTN_STYLUS2        0x14c
463
diff --git a/include/standard-headers/linux/input.h b/include/standard-headers/linux/input.h
464
index XXXXXXX..XXXXXXX 100644
465
--- a/include/standard-headers/linux/input.h
466
+++ b/include/standard-headers/linux/input.h
467
@@ -XXX,XX +XXX,XX @@
468
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
469
/*
470
* Copyright (c) 1999-2002 Vojtech Pavlik
471
*
472
diff --git a/include/standard-headers/linux/pci_regs.h b/include/standard-headers/linux/pci_regs.h
473
index XXXXXXX..XXXXXXX 100644
474
--- a/include/standard-headers/linux/pci_regs.h
475
+++ b/include/standard-headers/linux/pci_regs.h
476
@@ -XXX,XX +XXX,XX @@
477
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
478
/*
479
*    pci_regs.h
480
*
481
@@ -XXX,XX +XXX,XX @@
482
#define PCI_ERR_ROOT_FIRST_FATAL    0x00000010 /* First UNC is Fatal */
483
#define PCI_ERR_ROOT_NONFATAL_RCV    0x00000020 /* Non-Fatal Received */
484
#define PCI_ERR_ROOT_FATAL_RCV        0x00000040 /* Fatal Received */
485
+#define PCI_ERR_ROOT_AER_IRQ        0xf8000000 /* Advanced Error Interrupt Message Number */
486
#define PCI_ERR_ROOT_ERR_SRC    52    /* Error Source Identification */
487
488
/* Virtual Channel */
489
@@ -XXX,XX +XXX,XX @@
490
#define PCI_SATA_SIZEOF_LONG    16
491
492
/* Resizable BARs */
493
+#define PCI_REBAR_CAP        4    /* capability register */
494
+#define PCI_REBAR_CAP_SIZES        0x00FFFFF0 /* supported BAR sizes */
495
#define PCI_REBAR_CTRL        8    /* control register */
496
-#define PCI_REBAR_CTRL_NBAR_MASK    (7 << 5)    /* mask for # bars */
497
-#define PCI_REBAR_CTRL_NBAR_SHIFT    5    /* shift for # bars */
498
+#define PCI_REBAR_CTRL_BAR_IDX        0x00000007 /* BAR index */
499
+#define PCI_REBAR_CTRL_NBAR_MASK    0x000000E0 /* # of resizable BARs */
500
+#define PCI_REBAR_CTRL_NBAR_SHIFT    5      /* shift for # of BARs */
501
+#define PCI_REBAR_CTRL_BAR_SIZE    0x00001F00 /* BAR size */
502
503
/* Dynamic Power Allocation */
504
#define PCI_DPA_CAP        4    /* capability register */
505
@@ -XXX,XX +XXX,XX @@
506
507
/* Downstream Port Containment */
508
#define PCI_EXP_DPC_CAP            4    /* DPC Capability */
509
+#define PCI_EXP_DPC_IRQ            0x1f    /* DPC Interrupt Message Number */
510
#define PCI_EXP_DPC_CAP_RP_EXT        0x20    /* Root Port Extensions for DPC */
511
#define PCI_EXP_DPC_CAP_POISONED_TLP    0x40    /* Poisoned TLP Egress Blocking Supported */
512
#define PCI_EXP_DPC_CAP_SW_TRIGGER    0x80    /* Software Triggering Supported */
513
@@ -XXX,XX +XXX,XX @@
514
#define PCI_PTM_CTRL_ENABLE        0x00000001 /* PTM enable */
515
#define PCI_PTM_CTRL_ROOT        0x00000002 /* Root select */
516
517
-/* L1 PM Substates */
518
-#define PCI_L1SS_CAP         4    /* capability register */
519
-#define PCI_L1SS_CAP_PCIPM_L1_2     1    /* PCI PM L1.2 Support */
520
-#define PCI_L1SS_CAP_PCIPM_L1_1     2    /* PCI PM L1.1 Support */
521
-#define PCI_L1SS_CAP_ASPM_L1_2         4    /* ASPM L1.2 Support */
522
-#define PCI_L1SS_CAP_ASPM_L1_1         8    /* ASPM L1.1 Support */
523
-#define PCI_L1SS_CAP_L1_PM_SS        16    /* L1 PM Substates Support */
524
-#define PCI_L1SS_CTL1         8    /* Control Register 1 */
525
-#define PCI_L1SS_CTL1_PCIPM_L1_2    1    /* PCI PM L1.2 Enable */
526
-#define PCI_L1SS_CTL1_PCIPM_L1_1    2    /* PCI PM L1.1 Support */
527
-#define PCI_L1SS_CTL1_ASPM_L1_2    4    /* ASPM L1.2 Support */
528
-#define PCI_L1SS_CTL1_ASPM_L1_1    8    /* ASPM L1.1 Support */
529
-#define PCI_L1SS_CTL1_L1SS_MASK    0x0000000F
530
-#define PCI_L1SS_CTL2         0xC    /* Control Register 2 */
531
+/* ASPM L1 PM Substates */
532
+#define PCI_L1SS_CAP        0x04    /* Capabilities Register */
533
+#define PCI_L1SS_CAP_PCIPM_L1_2    0x00000001 /* PCI-PM L1.2 Supported */
534
+#define PCI_L1SS_CAP_PCIPM_L1_1    0x00000002 /* PCI-PM L1.1 Supported */
535
+#define PCI_L1SS_CAP_ASPM_L1_2        0x00000004 /* ASPM L1.2 Supported */
536
+#define PCI_L1SS_CAP_ASPM_L1_1        0x00000008 /* ASPM L1.1 Supported */
537
+#define PCI_L1SS_CAP_L1_PM_SS        0x00000010 /* L1 PM Substates Supported */
538
+#define PCI_L1SS_CAP_CM_RESTORE_TIME    0x0000ff00 /* Port Common_Mode_Restore_Time */
539
+#define PCI_L1SS_CAP_P_PWR_ON_SCALE    0x00030000 /* Port T_POWER_ON scale */
540
+#define PCI_L1SS_CAP_P_PWR_ON_VALUE    0x00f80000 /* Port T_POWER_ON value */
541
+#define PCI_L1SS_CTL1        0x08    /* Control 1 Register */
542
+#define PCI_L1SS_CTL1_PCIPM_L1_2    0x00000001 /* PCI-PM L1.2 Enable */
543
+#define PCI_L1SS_CTL1_PCIPM_L1_1    0x00000002 /* PCI-PM L1.1 Enable */
544
+#define PCI_L1SS_CTL1_ASPM_L1_2    0x00000004 /* ASPM L1.2 Enable */
545
+#define PCI_L1SS_CTL1_ASPM_L1_1    0x00000008 /* ASPM L1.1 Enable */
546
+#define PCI_L1SS_CTL1_L1SS_MASK    0x0000000f
547
+#define PCI_L1SS_CTL1_CM_RESTORE_TIME    0x0000ff00 /* Common_Mode_Restore_Time */
548
+#define PCI_L1SS_CTL1_LTR_L12_TH_VALUE    0x03ff0000 /* LTR_L1.2_THRESHOLD_Value */
549
+#define PCI_L1SS_CTL1_LTR_L12_TH_SCALE    0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */
550
+#define PCI_L1SS_CTL2        0x0c    /* Control 2 Register */
551
552
#endif /* LINUX_PCI_REGS_H */
553
diff --git a/linux-headers/asm-arm/kvm.h b/linux-headers/asm-arm/kvm.h
554
index XXXXXXX..XXXXXXX 100644
555
--- a/linux-headers/asm-arm/kvm.h
556
+++ b/linux-headers/asm-arm/kvm.h
557
@@ -XXX,XX +XXX,XX @@
558
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
559
/*
560
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
561
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
562
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
563
    (__ARM_CP15_REG(op1, 0, crm, 0) | KVM_REG_SIZE_U64)
564
#define ARM_CP15_REG64(...) __ARM_CP15_REG64(__VA_ARGS__)
565
566
+/* PL1 Physical Timer Registers */
567
+#define KVM_REG_ARM_PTIMER_CTL        ARM_CP15_REG32(0, 14, 2, 1)
568
+#define KVM_REG_ARM_PTIMER_CNT        ARM_CP15_REG64(0, 14)
569
+#define KVM_REG_ARM_PTIMER_CVAL        ARM_CP15_REG64(2, 14)
570
+
571
+/* Virtual Timer Registers */
572
#define KVM_REG_ARM_TIMER_CTL        ARM_CP15_REG32(0, 14, 3, 1)
573
#define KVM_REG_ARM_TIMER_CNT        ARM_CP15_REG64(1, 14)
574
#define KVM_REG_ARM_TIMER_CVAL        ARM_CP15_REG64(3, 14)
575
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
576
#define KVM_DEV_ARM_ITS_SAVE_TABLES        1
577
#define KVM_DEV_ARM_ITS_RESTORE_TABLES    2
578
#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES    3
579
+#define KVM_DEV_ARM_ITS_CTRL_RESET        4
580
581
/* KVM_IRQ_LINE irq field index values */
582
#define KVM_ARM_IRQ_TYPE_SHIFT        24
583
diff --git a/linux-headers/asm-arm/kvm_para.h b/linux-headers/asm-arm/kvm_para.h
584
index XXXXXXX..XXXXXXX 100644
585
--- a/linux-headers/asm-arm/kvm_para.h
586
+++ b/linux-headers/asm-arm/kvm_para.h
587
@@ -1 +1,2 @@
588
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
589
#include <asm-generic/kvm_para.h>
590
diff --git a/linux-headers/asm-arm/unistd.h b/linux-headers/asm-arm/unistd.h
591
index XXXXXXX..XXXXXXX 100644
592
--- a/linux-headers/asm-arm/unistd.h
593
+++ b/linux-headers/asm-arm/unistd.h
594
@@ -XXX,XX +XXX,XX @@
595
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
596
/*
597
* arch/arm/include/asm/unistd.h
598
*
599
@@ -XXX,XX +XXX,XX @@
600
#define __ARM_NR_usr26            (__ARM_NR_BASE+3)
601
#define __ARM_NR_usr32            (__ARM_NR_BASE+4)
602
#define __ARM_NR_set_tls        (__ARM_NR_BASE+5)
603
+#define __ARM_NR_get_tls        (__ARM_NR_BASE+6)
604
605
#endif /* __ASM_ARM_UNISTD_H */
606
diff --git a/linux-headers/asm-arm64/kvm.h b/linux-headers/asm-arm64/kvm.h
607
index XXXXXXX..XXXXXXX 100644
608
--- a/linux-headers/asm-arm64/kvm.h
609
+++ b/linux-headers/asm-arm64/kvm.h
610
@@ -XXX,XX +XXX,XX @@
611
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
612
/*
613
* Copyright (C) 2012,2013 - ARM Ltd
614
* Author: Marc Zyngier <marc.zyngier@arm.com>
615
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
616
617
#define ARM64_SYS_REG(...) (__ARM64_SYS_REG(__VA_ARGS__) | KVM_REG_SIZE_U64)
618
619
+/* Physical Timer EL0 Registers */
620
+#define KVM_REG_ARM_PTIMER_CTL        ARM64_SYS_REG(3, 3, 14, 2, 1)
621
+#define KVM_REG_ARM_PTIMER_CVAL        ARM64_SYS_REG(3, 3, 14, 2, 2)
622
+#define KVM_REG_ARM_PTIMER_CNT        ARM64_SYS_REG(3, 3, 14, 0, 1)
623
+
624
+/* EL0 Virtual Timer Registers */
625
#define KVM_REG_ARM_TIMER_CTL        ARM64_SYS_REG(3, 3, 14, 3, 1)
626
#define KVM_REG_ARM_TIMER_CNT        ARM64_SYS_REG(3, 3, 14, 3, 2)
627
#define KVM_REG_ARM_TIMER_CVAL        ARM64_SYS_REG(3, 3, 14, 0, 2)
628
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
629
#define KVM_DEV_ARM_ITS_SAVE_TABLES 1
630
#define KVM_DEV_ARM_ITS_RESTORE_TABLES 2
631
#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES    3
632
+#define KVM_DEV_ARM_ITS_CTRL_RESET        4
633
634
/* Device Control API on vcpu fd */
635
#define KVM_ARM_VCPU_PMU_V3_CTRL    0
636
diff --git a/linux-headers/asm-arm64/unistd.h b/linux-headers/asm-arm64/unistd.h
637
index XXXXXXX..XXXXXXX 100644
638
--- a/linux-headers/asm-arm64/unistd.h
639
+++ b/linux-headers/asm-arm64/unistd.h
640
@@ -XXX,XX +XXX,XX @@
641
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
642
/*
643
* Copyright (C) 2012 ARM Ltd.
644
*
645
diff --git a/linux-headers/asm-powerpc/epapr_hcalls.h b/linux-headers/asm-powerpc/epapr_hcalls.h
646
index XXXXXXX..XXXXXXX 100644
647
--- a/linux-headers/asm-powerpc/epapr_hcalls.h
648
+++ b/linux-headers/asm-powerpc/epapr_hcalls.h
649
@@ -XXX,XX +XXX,XX @@
650
+/* SPDX-License-Identifier: ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) */
651
/*
652
* ePAPR hcall interface
653
*
654
diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
655
index XXXXXXX..XXXXXXX 100644
656
--- a/linux-headers/asm-powerpc/kvm.h
657
+++ b/linux-headers/asm-powerpc/kvm.h
658
@@ -XXX,XX +XXX,XX @@
659
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
660
/*
661
* This program is free software; you can redistribute it and/or modify
662
* it under the terms of the GNU General Public License, version 2, as
663
diff --git a/linux-headers/asm-powerpc/kvm_para.h b/linux-headers/asm-powerpc/kvm_para.h
664
index XXXXXXX..XXXXXXX 100644
665
--- a/linux-headers/asm-powerpc/kvm_para.h
666
+++ b/linux-headers/asm-powerpc/kvm_para.h
667
@@ -XXX,XX +XXX,XX @@
668
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
669
/*
670
* This program is free software; you can redistribute it and/or modify
671
* it under the terms of the GNU General Public License, version 2, as
672
diff --git a/linux-headers/asm-powerpc/unistd.h b/linux-headers/asm-powerpc/unistd.h
673
index XXXXXXX..XXXXXXX 100644
674
--- a/linux-headers/asm-powerpc/unistd.h
675
+++ b/linux-headers/asm-powerpc/unistd.h
676
@@ -XXX,XX +XXX,XX @@
677
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
678
/*
679
* This file contains the system call numbers.
680
*
681
diff --git a/linux-headers/asm-s390/kvm.h b/linux-headers/asm-s390/kvm.h
682
index XXXXXXX..XXXXXXX 100644
683
--- a/linux-headers/asm-s390/kvm.h
684
+++ b/linux-headers/asm-s390/kvm.h
685
@@ -XXX,XX +XXX,XX @@
686
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
687
#ifndef __LINUX_KVM_S390_H
688
#define __LINUX_KVM_S390_H
689
/*
690
diff --git a/linux-headers/asm-s390/kvm_para.h b/linux-headers/asm-s390/kvm_para.h
691
index XXXXXXX..XXXXXXX 100644
692
--- a/linux-headers/asm-s390/kvm_para.h
693
+++ b/linux-headers/asm-s390/kvm_para.h
694
@@ -XXX,XX +XXX,XX @@
695
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
696
/*
697
* User API definitions for paravirtual devices on s390
698
*
699
diff --git a/linux-headers/asm-s390/unistd.h b/linux-headers/asm-s390/unistd.h
700
index XXXXXXX..XXXXXXX 100644
701
--- a/linux-headers/asm-s390/unistd.h
702
+++ b/linux-headers/asm-s390/unistd.h
703
@@ -XXX,XX +XXX,XX @@
704
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
705
/*
706
* S390 version
707
*
708
@@ -XXX,XX +XXX,XX @@
709
#define __NR_pwritev2        377
710
#define __NR_s390_guarded_storage    378
711
#define __NR_statx        379
712
-#define NR_syscalls 380
713
+#define __NR_s390_sthyi        380
714
+#define NR_syscalls 381
715
716
/*
717
* There are some system calls that are not present on 64 bit, some
718
diff --git a/linux-headers/asm-x86/kvm.h b/linux-headers/asm-x86/kvm.h
719
index XXXXXXX..XXXXXXX 100644
720
--- a/linux-headers/asm-x86/kvm.h
721
+++ b/linux-headers/asm-x86/kvm.h
722
@@ -XXX,XX +XXX,XX @@
723
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
724
#ifndef _ASM_X86_KVM_H
725
#define _ASM_X86_KVM_H
726
727
diff --git a/linux-headers/asm-x86/kvm_para.h b/linux-headers/asm-x86/kvm_para.h
728
index XXXXXXX..XXXXXXX 100644
729
--- a/linux-headers/asm-x86/kvm_para.h
730
+++ b/linux-headers/asm-x86/kvm_para.h
731
@@ -XXX,XX +XXX,XX @@
732
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
733
#ifndef _ASM_X86_KVM_PARA_H
734
#define _ASM_X86_KVM_PARA_H
735
736
@@ -XXX,XX +XXX,XX @@ struct kvm_vcpu_pv_apf_data {
737
#define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
738
#define KVM_PV_EOI_DISABLED 0x0
739
740
-
741
#endif /* _ASM_X86_KVM_PARA_H */
742
diff --git a/linux-headers/asm-x86/unistd.h b/linux-headers/asm-x86/unistd.h
743
index XXXXXXX..XXXXXXX 100644
744
--- a/linux-headers/asm-x86/unistd.h
745
+++ b/linux-headers/asm-x86/unistd.h
746
@@ -XXX,XX +XXX,XX @@
747
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
748
#ifndef _ASM_X86_UNISTD_H
749
#define _ASM_X86_UNISTD_H
750
751
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
752
index XXXXXXX..XXXXXXX 100644
753
--- a/linux-headers/linux/kvm.h
754
+++ b/linux-headers/linux/kvm.h
755
@@ -XXX,XX +XXX,XX @@
756
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
757
#ifndef __LINUX_KVM_H
758
#define __LINUX_KVM_H
759
760
@@ -XXX,XX +XXX,XX @@ struct kvm_ppc_resize_hpt {
761
#define KVM_CAP_PPC_SMT_POSSIBLE 147
762
#define KVM_CAP_HYPERV_SYNIC2 148
763
#define KVM_CAP_HYPERV_VP_INDEX 149
764
+#define KVM_CAP_S390_AIS_MIGRATION 150
765
766
#ifdef KVM_CAP_IRQ_ROUTING
767
768
diff --git a/linux-headers/linux/kvm_para.h b/linux-headers/linux/kvm_para.h
769
index XXXXXXX..XXXXXXX 100644
770
--- a/linux-headers/linux/kvm_para.h
771
+++ b/linux-headers/linux/kvm_para.h
772
@@ -XXX,XX +XXX,XX @@
773
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
774
#ifndef __LINUX_KVM_PARA_H
775
#define __LINUX_KVM_PARA_H
776
777
diff --git a/linux-headers/linux/psci.h b/linux-headers/linux/psci.h
778
index XXXXXXX..XXXXXXX 100644
779
--- a/linux-headers/linux/psci.h
780
+++ b/linux-headers/linux/psci.h
781
@@ -XXX,XX +XXX,XX @@
782
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
783
/*
784
* ARM Power State and Coordination Interface (PSCI) header
785
*
786
diff --git a/linux-headers/linux/userfaultfd.h b/linux-headers/linux/userfaultfd.h
787
index XXXXXXX..XXXXXXX 100644
788
--- a/linux-headers/linux/userfaultfd.h
789
+++ b/linux-headers/linux/userfaultfd.h
790
@@ -XXX,XX +XXX,XX @@
791
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
792
/*
793
* include/linux/userfaultfd.h
794
*
795
diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
796
index XXXXXXX..XXXXXXX 100644
797
--- a/linux-headers/linux/vfio.h
798
+++ b/linux-headers/linux/vfio.h
799
@@ -XXX,XX +XXX,XX @@
800
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
801
/*
802
* VFIO API definition
803
*
804
diff --git a/linux-headers/linux/vfio_ccw.h b/linux-headers/linux/vfio_ccw.h
805
index XXXXXXX..XXXXXXX 100644
806
--- a/linux-headers/linux/vfio_ccw.h
807
+++ b/linux-headers/linux/vfio_ccw.h
808
@@ -XXX,XX +XXX,XX @@
809
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
810
/*
811
* Interfaces for vfio-ccw
812
*
813
diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
814
index XXXXXXX..XXXXXXX 100644
815
--- a/linux-headers/linux/vhost.h
816
+++ b/linux-headers/linux/vhost.h
817
@@ -XXX,XX +XXX,XX @@
818
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
819
#ifndef _LINUX_VHOST_H
820
#define _LINUX_VHOST_H
821
/* Userspace interface for in-kernel virtio accelerators. */
822
--
2.7.4

--
2.20.1
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
Add support for the Zynq UltraScale+ MPSoC Generic QSPI (GQSPI) controller.
3
The smmu_find_smmu_pcibus() function was introduced (in commit
cac994ef43b) in a shape that could return an incorrect pointer; the
previous commit fixed that bug. Writing the if() statement differently
would have avoided it in the first place, so restructure it now in
case this function is re-used: the result is easier to review and
harder to get wrong.
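
As a rough sketch of the shape being described (made-up Bus/BusCache
types, not the actual SMMU code): every path returns explicitly, so a
stale pointer can no longer fall out of the bottom of the function.

typedef struct Bus { int num; } Bus;

typedef struct BusCache {
    Bus *by_num[256];   /* cache indexed by bus number */
    Bus entries[8];     /* backing store scanned on a miss */
    int n_entries;
} BusCache;

static Bus *find_bus(BusCache *c, unsigned char num)
{
    Bus *bus = c->by_num[num];

    if (bus) {
        return bus;                          /* fast path: cached */
    }
    for (int i = 0; i < c->n_entries; i++) { /* slow path: scan */
        if (c->entries[i].num == num) {
            c->by_num[num] = &c->entries[i]; /* fill the cache */
            return &c->entries[i];
        }
    }
    return NULL;                             /* explicit "not found" */
}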
4
9
5
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
10
Acked-by: Eric Auger <eric.auger@redhat.com>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
11
Reviewed-by: Peter Xu <peterx@redhat.com>
7
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
12
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20171126231634.9531-13-frasse.iglesias@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
14
---
11
include/hw/ssi/xilinx_spips.h | 32 ++-
15
hw/arm/smmu-common.c | 25 +++++++++++++------------
12
hw/ssi/xilinx_spips.c | 579 ++++++++++++++++++++++++++++++++++++----
16
1 file changed, 13 insertions(+), 12 deletions(-)
13
default-configs/arm-softmmu.mak | 2 +-
14
3 files changed, 564 insertions(+), 49 deletions(-)
15
17
16
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
18
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
17
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/ssi/xilinx_spips.h
20
--- a/hw/arm/smmu-common.c
19
+++ b/include/hw/ssi/xilinx_spips.h
21
+++ b/hw/arm/smmu-common.c
20
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ inline int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
21
#define XILINX_SPIPS_H
23
SMMUPciBus *smmu_find_smmu_pcibus(SMMUState *s, uint8_t bus_num)
22
24
{
23
#include "hw/ssi/ssi.h"
25
SMMUPciBus *smmu_pci_bus = s->smmu_pcibus_by_bus_num[bus_num];
24
-#include "qemu/fifo8.h"
26
+ GHashTableIter iter;
25
+#include "qemu/fifo32.h"
27
26
+#include "hw/stream.h"
28
- if (!smmu_pci_bus) {
27
29
- GHashTableIter iter;
28
typedef struct XilinxSPIPS XilinxSPIPS;
29
30
#define XLNX_SPIPS_R_MAX (0x100 / 4)
31
+#define XLNX_ZYNQMP_SPIPS_R_MAX (0x200 / 4)
32
33
/* Bite off 4k chunks at a time */
34
#define LQSPI_CACHE_SIZE 1024
35
@@ -XXX,XX +XXX,XX @@ typedef struct {
36
bool mmio_execution_enabled;
37
} XilinxQSPIPS;
38
39
+typedef struct {
40
+ XilinxQSPIPS parent_obj;
41
+
42
+ StreamSlave *dma;
43
+ uint8_t dma_buf[4];
44
+ int gqspi_irqline;
45
+
46
+ uint32_t regs[XLNX_ZYNQMP_SPIPS_R_MAX];
47
+
48
+ /* GQSPI has seperate tx/rx fifos */
49
+ Fifo8 rx_fifo_g;
50
+ Fifo8 tx_fifo_g;
51
+ Fifo32 fifo_g;
52
+ /*
53
+ * At the end of each generic command, misaligned extra bytes are discard
54
+ * or padded to tx and rx respectively to round it out (and avoid need for
55
+ * individual byte access. Since we use byte fifos, keep track of the
56
+ * alignment WRT to word access.
57
+ */
58
+ uint8_t rx_fifo_g_align;
59
+ uint8_t tx_fifo_g_align;
60
+ bool man_start_com_g;
61
+} XlnxZynqMPQSPIPS;
62
+
63
typedef struct XilinxSPIPSClass {
64
SysBusDeviceClass parent_class;
65
66
@@ -XXX,XX +XXX,XX @@ typedef struct XilinxSPIPSClass {
67
68
#define TYPE_XILINX_SPIPS "xlnx.ps7-spi"
69
#define TYPE_XILINX_QSPIPS "xlnx.ps7-qspi"
70
+#define TYPE_XLNX_ZYNQMP_QSPIPS "xlnx.usmp-gqspi"
71
72
#define XILINX_SPIPS(obj) \
73
OBJECT_CHECK(XilinxSPIPS, (obj), TYPE_XILINX_SPIPS)
74
@@ -XXX,XX +XXX,XX @@ typedef struct XilinxSPIPSClass {
75
#define XILINX_QSPIPS(obj) \
76
OBJECT_CHECK(XilinxQSPIPS, (obj), TYPE_XILINX_QSPIPS)
77
78
+#define XLNX_ZYNQMP_QSPIPS(obj) \
79
+ OBJECT_CHECK(XlnxZynqMPQSPIPS, (obj), TYPE_XLNX_ZYNQMP_QSPIPS)
80
+
81
#endif /* XILINX_SPIPS_H */
82
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/hw/ssi/xilinx_spips.c
85
+++ b/hw/ssi/xilinx_spips.c
86
@@ -XXX,XX +XXX,XX @@
87
#include "hw/ssi/xilinx_spips.h"
88
#include "qapi/error.h"
89
#include "hw/register.h"
90
+#include "sysemu/dma.h"
91
#include "migration/blocker.h"
92
93
#ifndef XILINX_SPIPS_ERR_DEBUG
94
@@ -XXX,XX +XXX,XX @@
95
#define R_INTR_DIS (0x0C / 4)
96
#define R_INTR_MASK (0x10 / 4)
97
#define IXR_TX_FIFO_UNDERFLOW (1 << 6)
98
+/* Poll timeout not implemented */
99
+#define IXR_RX_FIFO_EMPTY (1 << 11)
100
+#define IXR_GENERIC_FIFO_FULL (1 << 10)
101
+#define IXR_GENERIC_FIFO_NOT_FULL (1 << 9)
102
+#define IXR_TX_FIFO_EMPTY (1 << 8)
103
+#define IXR_GENERIC_FIFO_EMPTY (1 << 7)
104
#define IXR_RX_FIFO_FULL (1 << 5)
105
#define IXR_RX_FIFO_NOT_EMPTY (1 << 4)
106
#define IXR_TX_FIFO_FULL (1 << 3)
107
#define IXR_TX_FIFO_NOT_FULL (1 << 2)
108
#define IXR_TX_FIFO_MODE_FAIL (1 << 1)
109
#define IXR_RX_FIFO_OVERFLOW (1 << 0)
110
-#define IXR_ALL ((IXR_TX_FIFO_UNDERFLOW<<1)-1)
111
+#define IXR_ALL ((1 << 13) - 1)
112
+#define GQSPI_IXR_MASK 0xFBE
113
+#define IXR_SELF_CLEAR \
114
+(IXR_GENERIC_FIFO_EMPTY \
115
+| IXR_GENERIC_FIFO_FULL \
116
+| IXR_GENERIC_FIFO_NOT_FULL \
117
+| IXR_TX_FIFO_EMPTY \
118
+| IXR_TX_FIFO_FULL \
119
+| IXR_TX_FIFO_NOT_FULL \
120
+| IXR_RX_FIFO_EMPTY \
121
+| IXR_RX_FIFO_FULL \
122
+| IXR_RX_FIFO_NOT_EMPTY)
123
124
#define R_EN (0x14 / 4)
125
#define R_DELAY (0x18 / 4)
126
@@ -XXX,XX +XXX,XX @@
127
128
#define R_MOD_ID (0xFC / 4)
129
130
+#define R_GQSPI_SELECT (0x144 / 4)
131
+ FIELD(GQSPI_SELECT, GENERIC_QSPI_EN, 0, 1)
132
+#define R_GQSPI_ISR (0x104 / 4)
133
+#define R_GQSPI_IER (0x108 / 4)
134
+#define R_GQSPI_IDR (0x10c / 4)
135
+#define R_GQSPI_IMR (0x110 / 4)
136
+#define R_GQSPI_TX_THRESH (0x128 / 4)
137
+#define R_GQSPI_RX_THRESH (0x12c / 4)
138
+#define R_GQSPI_CNFG (0x100 / 4)
139
+ FIELD(GQSPI_CNFG, MODE_EN, 30, 2)
140
+ FIELD(GQSPI_CNFG, GEN_FIFO_START_MODE, 29, 1)
141
+ FIELD(GQSPI_CNFG, GEN_FIFO_START, 28, 1)
142
+ FIELD(GQSPI_CNFG, ENDIAN, 26, 1)
143
+ /* Poll timeout not implemented */
144
+ FIELD(GQSPI_CNFG, EN_POLL_TIMEOUT, 20, 1)
145
+ /* QEMU doesnt care about any of these last three */
146
+ FIELD(GQSPI_CNFG, BR, 3, 3)
147
+ FIELD(GQSPI_CNFG, CPH, 2, 1)
148
+ FIELD(GQSPI_CNFG, CPL, 1, 1)
149
+#define R_GQSPI_GEN_FIFO (0x140 / 4)
150
+#define R_GQSPI_TXD (0x11c / 4)
151
+#define R_GQSPI_RXD (0x120 / 4)
152
+#define R_GQSPI_FIFO_CTRL (0x14c / 4)
153
+ FIELD(GQSPI_FIFO_CTRL, RX_FIFO_RESET, 2, 1)
154
+ FIELD(GQSPI_FIFO_CTRL, TX_FIFO_RESET, 1, 1)
155
+ FIELD(GQSPI_FIFO_CTRL, GENERIC_FIFO_RESET, 0, 1)
156
+#define R_GQSPI_GFIFO_THRESH (0x150 / 4)
157
+#define R_GQSPI_DATA_STS (0x15c / 4)
158
+/* We use the snapshot register to hold the core state for the currently
159
+ * or most recently executed command. So the generic fifo format is defined
160
+ * for the snapshot register
161
+ */
162
+#define R_GQSPI_GF_SNAPSHOT (0x160 / 4)
163
+ FIELD(GQSPI_GF_SNAPSHOT, POLL, 19, 1)
164
+ FIELD(GQSPI_GF_SNAPSHOT, STRIPE, 18, 1)
165
+ FIELD(GQSPI_GF_SNAPSHOT, RECIEVE, 17, 1)
166
+ FIELD(GQSPI_GF_SNAPSHOT, TRANSMIT, 16, 1)
167
+ FIELD(GQSPI_GF_SNAPSHOT, DATA_BUS_SELECT, 14, 2)
168
+ FIELD(GQSPI_GF_SNAPSHOT, CHIP_SELECT, 12, 2)
169
+ FIELD(GQSPI_GF_SNAPSHOT, SPI_MODE, 10, 2)
170
+ FIELD(GQSPI_GF_SNAPSHOT, EXPONENT, 9, 1)
171
+ FIELD(GQSPI_GF_SNAPSHOT, DATA_XFER, 8, 1)
172
+ FIELD(GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA, 0, 8)
173
+#define R_GQSPI_MOD_ID (0x168 / 4)
174
+#define R_GQSPI_MOD_ID_VALUE 0x010A0000
175
/* size of TXRX FIFOs */
176
-#define RXFF_A 32
177
-#define TXFF_A 32
178
+#define RXFF_A (128)
179
+#define TXFF_A (128)
180
181
#define RXFF_A_Q (64 * 4)
182
#define TXFF_A_Q (64 * 4)
183
@@ -XXX,XX +XXX,XX @@ static inline int num_effective_busses(XilinxSPIPS *s)
184
s->regs[R_LQSPI_CFG] & LQSPI_CFG_TWO_MEM) ? s->num_busses : 1;
185
}
186
187
-static inline bool xilinx_spips_cs_is_set(XilinxSPIPS *s, int i, int field)
188
-{
189
- return ~field & (1 << i) && (s->regs[R_CONFIG] & MANUAL_CS
190
- || !fifo8_is_empty(&s->tx_fifo));
191
-}
192
-
30
-
193
-static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
31
- g_hash_table_iter_init(&iter, s->smmu_pcibus_by_busptr);
194
+static void xilinx_spips_update_cs(XilinxSPIPS *s, int field)
32
- while (g_hash_table_iter_next(&iter, NULL, (void **)&smmu_pci_bus)) {
195
{
33
- if (pci_bus_num(smmu_pci_bus->bus) == bus_num) {
196
- int i, j;
34
- s->smmu_pcibus_by_bus_num[bus_num] = smmu_pci_bus;
197
- bool found = false;
35
- return smmu_pci_bus;
198
- int field = s->regs[R_CONFIG] >> CS_SHIFT;
199
+ int i;
200
201
for (i = 0; i < s->num_cs; i++) {
202
- for (j = 0; j < num_effective_busses(s); j++) {
203
- int upage = !!(s->regs[R_LQSPI_STS] & LQSPI_CFG_U_PAGE);
204
- int cs_to_set = (j * s->num_cs + i + upage) %
205
- (s->num_cs * s->num_busses);
206
-
207
- if (xilinx_spips_cs_is_set(s, i, field) && !found) {
208
- DB_PRINT_L(0, "selecting slave %d\n", i);
209
- qemu_set_irq(s->cs_lines[cs_to_set], 0);
210
- if (s->cs_lines_state[cs_to_set]) {
211
- s->cs_lines_state[cs_to_set] = false;
212
- s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
213
- }
214
- } else {
215
- DB_PRINT_L(0, "deselecting slave %d\n", i);
216
- qemu_set_irq(s->cs_lines[cs_to_set], 1);
217
- s->cs_lines_state[cs_to_set] = true;
218
- }
36
- }
219
- }
37
- }
220
- if (xilinx_spips_cs_is_set(s, i, field)) {
38
- smmu_pci_bus = NULL;
221
- found = true;
39
+ if (smmu_pci_bus) {
222
+ bool old_state = s->cs_lines_state[i];
40
+ return smmu_pci_bus;
223
+ bool new_state = field & (1 << i);
41
}
42
- return smmu_pci_bus;
224
+
43
+
225
+ if (old_state != new_state) {
44
+ g_hash_table_iter_init(&iter, s->smmu_pcibus_by_busptr);
226
+ s->cs_lines_state[i] = new_state;
45
+ while (g_hash_table_iter_next(&iter, NULL, (void **)&smmu_pci_bus)) {
227
+ s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
46
+ if (pci_bus_num(smmu_pci_bus->bus) == bus_num) {
228
+ DB_PRINT_L(1, "%sselecting slave %d\n", new_state ? "" : "de", i);
47
+ s->smmu_pcibus_by_bus_num[bus_num] = smmu_pci_bus;
229
}
48
+ return smmu_pci_bus;
230
+ qemu_set_irq(s->cs_lines[i], !new_state);
231
}
232
- if (!found) {
233
+ if (!(field & ((1 << s->num_cs) - 1))) {
234
s->snoop_state = SNOOP_CHECKING;
235
s->cmd_dummies = 0;
236
s->link_state = 1;
237
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
238
}
239
}
240
241
+static void xlnx_zynqmp_qspips_update_cs_lines(XlnxZynqMPQSPIPS *s)
242
+{
243
+ if (s->regs[R_GQSPI_GF_SNAPSHOT]) {
244
+ int field = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, CHIP_SELECT);
245
+ xilinx_spips_update_cs(XILINX_SPIPS(s), field);
246
+ }
247
+}
248
+
249
+static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
250
+{
251
+ int field = ~((s->regs[R_CONFIG] & CS) >> CS_SHIFT);
252
+
253
+ /* In dual parallel, mirror low CS to both */
254
+ if (num_effective_busses(s) == 2) {
255
+ /* Single bit chip-select for qspi */
256
+ field &= 0x1;
257
+ field |= field << 1;
258
+ /* Dual stack U-Page */
259
+ } else if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_TWO_MEM &&
260
+ s->regs[R_LQSPI_STS] & LQSPI_CFG_U_PAGE) {
261
+ /* Single bit chip-select for qspi */
262
+ field &= 0x1;
263
+ /* change from CS0 to CS1 */
264
+ field <<= 1;
265
+ }
266
+ /* Auto CS */
267
+ if (!(s->regs[R_CONFIG] & MANUAL_CS) &&
268
+ fifo8_is_empty(&s->tx_fifo)) {
269
+ field = 0;
270
+ }
271
+ xilinx_spips_update_cs(s, field);
272
+}
273
+
274
static void xilinx_spips_update_ixr(XilinxSPIPS *s)
275
{
276
- if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE) {
277
- return;
278
+ if (!(s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE)) {
279
+ s->regs[R_INTR_STATUS] &= ~IXR_SELF_CLEAR;
280
+ s->regs[R_INTR_STATUS] |=
281
+ (fifo8_is_full(&s->rx_fifo) ? IXR_RX_FIFO_FULL : 0) |
282
+ (s->rx_fifo.num >= s->regs[R_RX_THRES] ?
283
+ IXR_RX_FIFO_NOT_EMPTY : 0) |
284
+ (fifo8_is_full(&s->tx_fifo) ? IXR_TX_FIFO_FULL : 0) |
285
+ (fifo8_is_empty(&s->tx_fifo) ? IXR_TX_FIFO_EMPTY : 0) |
286
+ (s->tx_fifo.num < s->regs[R_TX_THRES] ? IXR_TX_FIFO_NOT_FULL : 0);
287
}
288
- /* These are set/cleared as they occur */
289
- s->regs[R_INTR_STATUS] &= (IXR_TX_FIFO_UNDERFLOW | IXR_RX_FIFO_OVERFLOW |
290
- IXR_TX_FIFO_MODE_FAIL);
291
- /* these are pure functions of fifo state, set them here */
292
- s->regs[R_INTR_STATUS] |=
293
- (fifo8_is_full(&s->rx_fifo) ? IXR_RX_FIFO_FULL : 0) |
294
- (s->rx_fifo.num >= s->regs[R_RX_THRES] ? IXR_RX_FIFO_NOT_EMPTY : 0) |
295
- (fifo8_is_full(&s->tx_fifo) ? IXR_TX_FIFO_FULL : 0) |
296
- (s->tx_fifo.num < s->regs[R_TX_THRES] ? IXR_TX_FIFO_NOT_FULL : 0);
297
- /* drive external interrupt pin */
298
int new_irqline = !!(s->regs[R_INTR_MASK] & s->regs[R_INTR_STATUS] &
299
IXR_ALL);
300
if (new_irqline != s->irqline) {
301
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_ixr(XilinxSPIPS *s)
302
}
303
}
304
305
+static void xlnx_zynqmp_qspips_update_ixr(XlnxZynqMPQSPIPS *s)
306
+{
307
+ uint32_t gqspi_int;
308
+ int new_irqline;
309
+
310
+ s->regs[R_GQSPI_ISR] &= ~IXR_SELF_CLEAR;
311
+ s->regs[R_GQSPI_ISR] |=
312
+ (fifo32_is_empty(&s->fifo_g) ? IXR_GENERIC_FIFO_EMPTY : 0) |
313
+ (fifo32_is_full(&s->fifo_g) ? IXR_GENERIC_FIFO_FULL : 0) |
314
+ (s->fifo_g.fifo.num < s->regs[R_GQSPI_GFIFO_THRESH] ?
315
+ IXR_GENERIC_FIFO_NOT_FULL : 0) |
316
+ (fifo8_is_empty(&s->rx_fifo_g) ? IXR_RX_FIFO_EMPTY : 0) |
317
+ (fifo8_is_full(&s->rx_fifo_g) ? IXR_RX_FIFO_FULL : 0) |
318
+ (s->rx_fifo_g.num >= s->regs[R_GQSPI_RX_THRESH] ?
319
+ IXR_RX_FIFO_NOT_EMPTY : 0) |
320
+ (fifo8_is_empty(&s->tx_fifo_g) ? IXR_TX_FIFO_EMPTY : 0) |
321
+ (fifo8_is_full(&s->tx_fifo_g) ? IXR_TX_FIFO_FULL : 0) |
322
+ (s->tx_fifo_g.num < s->regs[R_GQSPI_TX_THRESH] ?
323
+ IXR_TX_FIFO_NOT_FULL : 0);
324
+
325
+ /* GQSPI Interrupt Trigger Status */
326
+ gqspi_int = (~s->regs[R_GQSPI_IMR]) & s->regs[R_GQSPI_ISR] & GQSPI_IXR_MASK;
327
+ new_irqline = !!(gqspi_int & IXR_ALL);
328
+
329
+ /* drive external interrupt pin */
330
+ if (new_irqline != s->gqspi_irqline) {
331
+ s->gqspi_irqline = new_irqline;
332
+ qemu_set_irq(XILINX_SPIPS(s)->irq, s->gqspi_irqline);
333
+ }
334
+}
335
+
336
static void xilinx_spips_reset(DeviceState *d)
337
{
338
XilinxSPIPS *s = XILINX_SPIPS(d);
339
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
340
xilinx_spips_update_cs_lines(s);
341
}
342
343
+static void xlnx_zynqmp_qspips_reset(DeviceState *d)
344
+{
345
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(d);
346
+ int i;
347
+
348
+ xilinx_spips_reset(d);
349
+
350
+ for (i = 0; i < XLNX_ZYNQMP_SPIPS_R_MAX; i++) {
351
+ s->regs[i] = 0;
352
+ }
353
+ fifo8_reset(&s->rx_fifo_g);
354
+ fifo8_reset(&s->rx_fifo_g);
355
+ fifo32_reset(&s->fifo_g);
356
+ s->regs[R_GQSPI_TX_THRESH] = 1;
357
+ s->regs[R_GQSPI_RX_THRESH] = 1;
358
+ s->regs[R_GQSPI_GFIFO_THRESH] = 1;
359
+ s->regs[R_GQSPI_IMR] = GQSPI_IXR_MASK;
360
+ s->man_start_com_g = false;
361
+ s->gqspi_irqline = 0;
362
+ xlnx_zynqmp_qspips_update_ixr(s);
363
+}
364
+
365
/* N way (num) in place bit striper. Lay out row wise bits (MSB to LSB)
366
* column wise (from element 0 to N-1). num is the length of x, and dir
367
* reverses the direction of the transform. Best illustrated by example:
368
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
369
memcpy(x, r, sizeof(uint8_t) * num);
370
}
371
372
+static void xlnx_zynqmp_qspips_flush_fifo_g(XlnxZynqMPQSPIPS *s)
373
+{
374
+ while (s->regs[R_GQSPI_DATA_STS] || !fifo32_is_empty(&s->fifo_g)) {
375
+ uint8_t tx_rx[2] = { 0 };
376
+ int num_stripes = 1;
377
+ uint8_t busses;
378
+ int i;
379
+
380
+ if (!s->regs[R_GQSPI_DATA_STS]) {
381
+ uint8_t imm;
382
+
383
+ s->regs[R_GQSPI_GF_SNAPSHOT] = fifo32_pop(&s->fifo_g);
384
+ DB_PRINT_L(0, "GQSPI command: %x\n", s->regs[R_GQSPI_GF_SNAPSHOT]);
385
+ if (!s->regs[R_GQSPI_GF_SNAPSHOT]) {
386
+ DB_PRINT_L(0, "Dummy GQSPI Delay Command Entry, Do nothing");
387
+ continue;
388
+ }
389
+ xlnx_zynqmp_qspips_update_cs_lines(s);
390
+
391
+ imm = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA);
392
+ if (!ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, DATA_XFER)) {
393
+ /* immedate transfer */
394
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, TRANSMIT) ||
395
+ ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE)) {
396
+ s->regs[R_GQSPI_DATA_STS] = 1;
397
+ /* CS setup/hold - do nothing */
398
+ } else {
399
+ s->regs[R_GQSPI_DATA_STS] = 0;
400
+ }
401
+ } else if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, EXPONENT)) {
402
+ if (imm > 31) {
403
+ qemu_log_mask(LOG_UNIMP, "QSPI exponential transfer too"
404
+ " long - 2 ^ %" PRId8 " requested\n", imm);
405
+ }
406
+ s->regs[R_GQSPI_DATA_STS] = 1ul << imm;
407
+ } else {
408
+ s->regs[R_GQSPI_DATA_STS] = imm;
409
+ }
410
+ }
411
+ /* Zero length transfer check */
412
+ if (!s->regs[R_GQSPI_DATA_STS]) {
413
+ continue;
414
+ }
415
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE) &&
416
+ fifo8_is_full(&s->rx_fifo_g)) {
417
+ /* No space in RX fifo for transfer - try again later */
418
+ return;
419
+ }
420
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, STRIPE) &&
421
+ (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, TRANSMIT) ||
422
+ ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE))) {
423
+ num_stripes = 2;
424
+ }
425
+ if (!ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, DATA_XFER)) {
426
+ tx_rx[0] = ARRAY_FIELD_EX32(s->regs,
427
+ GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA);
428
+ } else if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, TRANSMIT)) {
429
+ for (i = 0; i < num_stripes; ++i) {
430
+ if (!fifo8_is_empty(&s->tx_fifo_g)) {
431
+ tx_rx[i] = fifo8_pop(&s->tx_fifo_g);
432
+ s->tx_fifo_g_align++;
433
+ } else {
434
+ return;
435
+ }
436
+ }
437
+ }
438
+ if (num_stripes == 1) {
439
+ /* mirror */
440
+ tx_rx[1] = tx_rx[0];
441
+ }
442
+ busses = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, DATA_BUS_SELECT);
443
+ for (i = 0; i < 2; ++i) {
444
+ DB_PRINT_L(1, "bus %d tx = %02x\n", i, tx_rx[i]);
445
+ tx_rx[i] = ssi_transfer(XILINX_SPIPS(s)->spi[i], tx_rx[i]);
446
+ DB_PRINT_L(1, "bus %d rx = %02x\n", i, tx_rx[i]);
447
+ }
448
+ if (s->regs[R_GQSPI_DATA_STS] > 1 &&
449
+ busses == 0x3 && num_stripes == 2) {
450
+ s->regs[R_GQSPI_DATA_STS] -= 2;
451
+ } else if (s->regs[R_GQSPI_DATA_STS] > 0) {
452
+ s->regs[R_GQSPI_DATA_STS]--;
453
+ }
454
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE)) {
455
+ for (i = 0; i < 2; ++i) {
456
+ if (busses & (1 << i)) {
457
+ DB_PRINT_L(1, "bus %d push_byte = %02x\n", i, tx_rx[i]);
458
+ fifo8_push(&s->rx_fifo_g, tx_rx[i]);
459
+ s->rx_fifo_g_align++;
460
+ }
461
+ }
462
+ }
463
+ if (!s->regs[R_GQSPI_DATA_STS]) {
464
+ for (; s->tx_fifo_g_align % 4; s->tx_fifo_g_align++) {
465
+ fifo8_pop(&s->tx_fifo_g);
466
+ }
467
+ for (; s->rx_fifo_g_align % 4; s->rx_fifo_g_align++) {
468
+ fifo8_push(&s->rx_fifo_g, 0);
469
+ }
470
+ }
49
+ }
471
+ }
50
+ }
472
+}
473
+
51
+
474
static int xilinx_spips_num_dummies(XilinxQSPIPS *qs, uint8_t command)
52
+ return NULL;
475
{
476
if (!qs) {
477
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_check_flush(XilinxSPIPS *s)
478
xilinx_spips_update_ixr(s);
479
}
53
}
480
54
481
+static void xlnx_zynqmp_qspips_check_flush(XlnxZynqMPQSPIPS *s)
55
static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
482
+{
483
+ bool gqspi_has_work = s->regs[R_GQSPI_DATA_STS] ||
484
+ !fifo32_is_empty(&s->fifo_g);
485
+
486
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_SELECT, GENERIC_QSPI_EN)) {
487
+ if (s->man_start_com_g || (gqspi_has_work &&
488
+ !ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, GEN_FIFO_START_MODE))) {
489
+ xlnx_zynqmp_qspips_flush_fifo_g(s);
490
+ }
491
+ } else {
492
+ xilinx_spips_check_flush(XILINX_SPIPS(s));
493
+ }
494
+ if (!gqspi_has_work) {
495
+ s->man_start_com_g = false;
496
+ }
497
+ xlnx_zynqmp_qspips_update_ixr(s);
498
+}
499
+
500
static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
501
{
502
int i;
503
@@ -XXX,XX +XXX,XX @@ static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
504
return max - i;
505
}
506
507
+static const void *pop_buf(Fifo8 *fifo, uint32_t max, uint32_t *num)
508
+{
509
+ void *ret;
510
+
511
+ if (max == 0 || max > fifo->num) {
512
+ abort();
513
+ }
514
+ *num = MIN(fifo->capacity - fifo->head, max);
515
+ ret = &fifo->data[fifo->head];
516
+ fifo->head += *num;
517
+ fifo->head %= fifo->capacity;
518
+ fifo->num -= *num;
519
+ return ret;
520
+}
521
+
522
+static void xlnx_zynqmp_qspips_notify(void *opaque)
523
+{
524
+ XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(opaque);
525
+ XilinxSPIPS *s = XILINX_SPIPS(rq);
526
+ Fifo8 *recv_fifo;
527
+
528
+ if (ARRAY_FIELD_EX32(rq->regs, GQSPI_SELECT, GENERIC_QSPI_EN)) {
529
+ if (!(ARRAY_FIELD_EX32(rq->regs, GQSPI_CNFG, MODE_EN) == 2)) {
530
+ return;
531
+ }
532
+ recv_fifo = &rq->rx_fifo_g;
533
+ } else {
534
+ if (!(s->regs[R_CMND] & R_CMND_DMA_EN)) {
535
+ return;
536
+ }
537
+ recv_fifo = &s->rx_fifo;
538
+ }
539
+ while (recv_fifo->num >= 4
540
+ && stream_can_push(rq->dma, xlnx_zynqmp_qspips_notify, rq))
541
+ {
542
+ size_t ret;
543
+ uint32_t num;
544
+ const void *rxd = pop_buf(recv_fifo, 4, &num);
545
+
546
+ memcpy(rq->dma_buf, rxd, num);
547
+
548
+ ret = stream_push(rq->dma, rq->dma_buf, 4);
549
+ assert(ret == 4);
550
+ xlnx_zynqmp_qspips_check_flush(rq);
551
+ }
552
+}
553
+
554
static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
555
unsigned size)
556
{
557
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
558
ret <<= 8 * shortfall;
559
}
560
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
561
+ xilinx_spips_check_flush(s);
562
xilinx_spips_update_ixr(s);
563
return ret;
564
}
565
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
566
567
}
568
569
+static uint64_t xlnx_zynqmp_qspips_read(void *opaque,
570
+ hwaddr addr, unsigned size)
571
+{
572
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(opaque);
573
+ uint32_t reg = addr / 4;
574
+ uint32_t ret;
575
+ uint8_t rx_buf[4];
576
+ int shortfall;
577
+
578
+ if (reg <= R_MOD_ID) {
579
+ return xilinx_spips_read(opaque, addr, size);
580
+ } else {
581
+ switch (reg) {
582
+ case R_GQSPI_RXD:
583
+ if (fifo8_is_empty(&s->rx_fifo_g)) {
584
+ qemu_log_mask(LOG_GUEST_ERROR,
585
+ "Read from empty GQSPI RX FIFO\n");
586
+ return 0;
587
+ }
588
+ memset(rx_buf, 0, sizeof(rx_buf));
589
+ shortfall = rx_data_bytes(&s->rx_fifo_g, rx_buf,
590
+ XILINX_SPIPS(s)->num_txrx_bytes);
591
+ ret = ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN) ?
592
+ cpu_to_be32(*(uint32_t *)rx_buf) :
593
+ cpu_to_le32(*(uint32_t *)rx_buf);
594
+ if (!ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN)) {
595
+ ret <<= 8 * shortfall;
596
+ }
597
+ xlnx_zynqmp_qspips_check_flush(s);
598
+ xlnx_zynqmp_qspips_update_ixr(s);
599
+ return ret;
600
+ default:
601
+ return s->regs[reg];
602
+ }
603
+ }
604
+}
605
+
606
static void xilinx_spips_write(void *opaque, hwaddr addr,
607
uint64_t value, unsigned size)
608
{
609
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
610
}
611
}
612
613
+static void xlnx_zynqmp_qspips_write(void *opaque, hwaddr addr,
614
+ uint64_t value, unsigned size)
615
+{
616
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(opaque);
617
+ uint32_t reg = addr / 4;
618
+
619
+ if (reg <= R_MOD_ID) {
620
+ xilinx_qspips_write(opaque, addr, value, size);
621
+ } else {
622
+ switch (reg) {
623
+ case R_GQSPI_CNFG:
624
+ if (FIELD_EX32(value, GQSPI_CNFG, GEN_FIFO_START) &&
625
+ ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, GEN_FIFO_START_MODE)) {
626
+ s->man_start_com_g = true;
627
+ }
628
+ s->regs[reg] = value & ~(R_GQSPI_CNFG_GEN_FIFO_START_MASK);
629
+ break;
630
+ case R_GQSPI_GEN_FIFO:
631
+ if (!fifo32_is_full(&s->fifo_g)) {
632
+ fifo32_push(&s->fifo_g, value);
633
+ }
634
+ break;
635
+ case R_GQSPI_TXD:
636
+ tx_data_bytes(&s->tx_fifo_g, (uint32_t)value, 4,
637
+ ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN));
638
+ break;
639
+ case R_GQSPI_FIFO_CTRL:
640
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, GENERIC_FIFO_RESET)) {
641
+ fifo32_reset(&s->fifo_g);
642
+ }
643
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, TX_FIFO_RESET)) {
644
+ fifo8_reset(&s->tx_fifo_g);
645
+ }
646
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, RX_FIFO_RESET)) {
647
+ fifo8_reset(&s->rx_fifo_g);
648
+ }
649
+ break;
650
+ case R_GQSPI_IDR:
651
+ s->regs[R_GQSPI_IMR] |= value;
652
+ break;
653
+ case R_GQSPI_IER:
654
+ s->regs[R_GQSPI_IMR] &= ~value;
655
+ break;
656
+ case R_GQSPI_ISR:
657
+ s->regs[R_GQSPI_ISR] &= ~value;
658
+ break;
659
+ case R_GQSPI_IMR:
660
+ case R_GQSPI_RXD:
661
+ case R_GQSPI_GF_SNAPSHOT:
662
+ case R_GQSPI_MOD_ID:
663
+ break;
664
+ default:
665
+ s->regs[reg] = value;
666
+ break;
667
+ }
668
+ xlnx_zynqmp_qspips_update_cs_lines(s);
669
+ xlnx_zynqmp_qspips_check_flush(s);
670
+ xlnx_zynqmp_qspips_update_cs_lines(s);
671
+ xlnx_zynqmp_qspips_update_ixr(s);
672
+ }
673
+ xlnx_zynqmp_qspips_notify(s);
674
+}
675
+
676
static const MemoryRegionOps qspips_ops = {
677
.read = xilinx_spips_read,
678
.write = xilinx_qspips_write,
679
.endianness = DEVICE_LITTLE_ENDIAN,
680
};
681
682
+static const MemoryRegionOps xlnx_zynqmp_qspips_ops = {
683
+ .read = xlnx_zynqmp_qspips_read,
684
+ .write = xlnx_zynqmp_qspips_write,
685
+ .endianness = DEVICE_LITTLE_ENDIAN,
686
+};
687
+
688
#define LQSPI_CACHE_SIZE 1024
689
690
static void lqspi_load_cache(void *opaque, hwaddr addr)
691
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_realize(DeviceState *dev, Error **errp)
692
}
693
694
memory_region_init_io(&s->iomem, OBJECT(s), xsc->reg_ops, s,
695
- "spi", XLNX_SPIPS_R_MAX * 4);
696
+ "spi", XLNX_ZYNQMP_SPIPS_R_MAX * 4);
697
sysbus_init_mmio(sbd, &s->iomem);
698
699
s->irqline = -1;
700
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_realize(DeviceState *dev, Error **errp)
701
}
702
}
703
704
+static void xlnx_zynqmp_qspips_realize(DeviceState *dev, Error **errp)
705
+{
706
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(dev);
707
+ XilinxSPIPSClass *xsc = XILINX_SPIPS_GET_CLASS(s);
708
+
709
+ xilinx_qspips_realize(dev, errp);
710
+ fifo8_create(&s->rx_fifo_g, xsc->rx_fifo_size);
711
+ fifo8_create(&s->tx_fifo_g, xsc->tx_fifo_size);
712
+ fifo32_create(&s->fifo_g, 32);
713
+}
714
+
715
+static void xlnx_zynqmp_qspips_init(Object *obj)
716
+{
717
+ XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(obj);
718
+
719
+ object_property_add_link(obj, "stream-connected-dma", TYPE_STREAM_SLAVE,
720
+ (Object **)&rq->dma,
721
+ object_property_allow_set_link,
722
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
723
+ NULL);
724
+}
725
+
726
static int xilinx_spips_post_load(void *opaque, int version_id)
727
{
728
xilinx_spips_update_ixr((XilinxSPIPS *)opaque);
729
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_xilinx_spips = {
730
}
731
};
732
733
+static int xlnx_zynqmp_qspips_post_load(void *opaque, int version_id)
734
+{
735
+ XlnxZynqMPQSPIPS *s = (XlnxZynqMPQSPIPS *)opaque;
736
+ XilinxSPIPS *qs = XILINX_SPIPS(s);
737
+
738
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_SELECT, GENERIC_QSPI_EN) &&
739
+ fifo8_is_empty(&qs->rx_fifo) && fifo8_is_empty(&qs->tx_fifo)) {
740
+ xlnx_zynqmp_qspips_update_ixr(s);
741
+ xlnx_zynqmp_qspips_update_cs_lines(s);
742
+ }
743
+ return 0;
744
+}
745
+
746
+static const VMStateDescription vmstate_xilinx_qspips = {
747
+ .name = "xilinx_qspips",
748
+ .version_id = 1,
749
+ .minimum_version_id = 1,
750
+ .fields = (VMStateField[]) {
751
+ VMSTATE_STRUCT(parent_obj, XilinxQSPIPS, 0,
752
+ vmstate_xilinx_spips, XilinxSPIPS),
753
+ VMSTATE_END_OF_LIST()
754
+ }
755
+};
756
+
757
+static const VMStateDescription vmstate_xlnx_zynqmp_qspips = {
758
+ .name = "xlnx_zynqmp_qspips",
759
+ .version_id = 1,
760
+ .minimum_version_id = 1,
761
+ .post_load = xlnx_zynqmp_qspips_post_load,
762
+ .fields = (VMStateField[]) {
763
+ VMSTATE_STRUCT(parent_obj, XlnxZynqMPQSPIPS, 0,
764
+ vmstate_xilinx_qspips, XilinxQSPIPS),
765
+ VMSTATE_FIFO8(tx_fifo_g, XlnxZynqMPQSPIPS),
766
+ VMSTATE_FIFO8(rx_fifo_g, XlnxZynqMPQSPIPS),
767
+ VMSTATE_FIFO32(fifo_g, XlnxZynqMPQSPIPS),
768
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPQSPIPS, XLNX_ZYNQMP_SPIPS_R_MAX),
769
+ VMSTATE_END_OF_LIST()
770
+ }
771
+};
772
+
773
static Property xilinx_qspips_properties[] = {
774
/* We had to turn this off for 2.10 as it is not compatible with migration.
775
* It can be enabled but will prevent the device to be migrated.
776
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_class_init(ObjectClass *klass, void *data)
777
xsc->tx_fifo_size = TXFF_A;
778
}
779
780
+static void xlnx_zynqmp_qspips_class_init(ObjectClass *klass, void * data)
781
+{
782
+ DeviceClass *dc = DEVICE_CLASS(klass);
783
+ XilinxSPIPSClass *xsc = XILINX_SPIPS_CLASS(klass);
784
+
785
+ dc->realize = xlnx_zynqmp_qspips_realize;
786
+ dc->reset = xlnx_zynqmp_qspips_reset;
787
+ dc->vmsd = &vmstate_xlnx_zynqmp_qspips;
788
+ xsc->reg_ops = &xlnx_zynqmp_qspips_ops;
789
+ xsc->rx_fifo_size = RXFF_A_Q;
790
+ xsc->tx_fifo_size = TXFF_A_Q;
791
+}
792
+
793
static const TypeInfo xilinx_spips_info = {
794
.name = TYPE_XILINX_SPIPS,
795
.parent = TYPE_SYS_BUS_DEVICE,
796
@@ -XXX,XX +XXX,XX @@ static const TypeInfo xilinx_qspips_info = {
797
.class_init = xilinx_qspips_class_init,
798
};
799
800
+static const TypeInfo xlnx_zynqmp_qspips_info = {
801
+ .name = TYPE_XLNX_ZYNQMP_QSPIPS,
802
+ .parent = TYPE_XILINX_QSPIPS,
803
+ .instance_size = sizeof(XlnxZynqMPQSPIPS),
804
+ .instance_init = xlnx_zynqmp_qspips_init,
805
+ .class_init = xlnx_zynqmp_qspips_class_init,
806
+};
807
+
808
static void xilinx_spips_register_types(void)
809
{
810
type_register_static(&xilinx_spips_info);
811
type_register_static(&xilinx_qspips_info);
812
+ type_register_static(&xlnx_zynqmp_qspips_info);
813
}
814
815
type_init(xilinx_spips_register_types)
816
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
817
index XXXXXXX..XXXXXXX 100644
818
--- a/default-configs/arm-softmmu.mak
819
+++ b/default-configs/arm-softmmu.mak
820
@@ -XXX,XX +XXX,XX @@ CONFIG_SMBIOS=y
821
CONFIG_ASPEED_SOC=y
822
CONFIG_GPIO_KEY=y
823
CONFIG_MSF2=y
824
-
825
CONFIG_FW_CFG_DMA=y
826
+CONFIG_XILINX_AXI=y
827
--
56
--
828
2.7.4
57
2.20.1
829
58
830
59
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Use memset() instead of a for loop to zero all of the registers.
3
As the Connex and Verdex machines only boot in little-endian,
4
we can simplify the code.
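
A rough sketch of the memset() change described above, with a made-up
SomeDev type standing in for the real device state; because regs is a
true array member (not a pointer), sizeof(s->regs) is the size of the
whole register file in bytes.

#include <string.h>

struct SomeDev {
    unsigned int regs[0x200 / 4];
};

static void some_dev_reset(struct SomeDev *s)
{
    /* one call replaces the per-element zeroing loop */
    memset(s->regs, 0, sizeof(s->regs));
}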
4
5
5
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: c076e907f355923864cb1afde31b938ffb677778.1513104804.git.alistair.francis@xilinx.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
hw/ssi/xilinx_spips.c | 11 +++--------
11
hw/arm/gumstix.c | 16 ++--------------
12
1 file changed, 3 insertions(+), 8 deletions(-)
12
1 file changed, 2 insertions(+), 14 deletions(-)
13
13
14
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
14
diff --git a/hw/arm/gumstix.c b/hw/arm/gumstix.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/ssi/xilinx_spips.c
16
--- a/hw/arm/gumstix.c
17
+++ b/hw/ssi/xilinx_spips.c
17
+++ b/hw/arm/gumstix.c
18
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
18
@@ -XXX,XX +XXX,XX @@ static void connex_init(MachineState *machine)
19
{
19
{
20
XilinxSPIPS *s = XILINX_SPIPS(d);
20
PXA2xxState *cpu;
21
21
DriveInfo *dinfo;
22
- int i;
22
- int be;
23
- for (i = 0; i < XLNX_SPIPS_R_MAX; i++) {
23
MemoryRegion *address_space_mem = get_system_memory();
24
- s->regs[i] = 0;
24
25
- }
25
uint32_t connex_rom = 0x01000000;
26
+ memset(s->regs, 0, sizeof(s->regs));
26
@@ -XXX,XX +XXX,XX @@ static void connex_init(MachineState *machine)
27
27
exit(1);
28
fifo8_reset(&s->rx_fifo);
28
}
29
fifo8_reset(&s->rx_fifo);
29
30
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
30
-#ifdef TARGET_WORDS_BIGENDIAN
31
static void xlnx_zynqmp_qspips_reset(DeviceState *d)
31
- be = 1;
32
-#else
33
- be = 0;
34
-#endif
35
if (!pflash_cfi01_register(0x00000000, "connext.rom", connex_rom,
36
dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
37
- sector_len, 2, 0, 0, 0, 0, be)) {
38
+ sector_len, 2, 0, 0, 0, 0, 0)) {
39
error_report("Error registering flash memory");
40
exit(1);
41
}
42
@@ -XXX,XX +XXX,XX @@ static void verdex_init(MachineState *machine)
32
{
43
{
33
XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(d);
44
PXA2xxState *cpu;
34
- int i;
45
DriveInfo *dinfo;
35
46
- int be;
36
xilinx_spips_reset(d);
47
MemoryRegion *address_space_mem = get_system_memory();
37
48
38
- for (i = 0; i < XLNX_ZYNQMP_SPIPS_R_MAX; i++) {
49
uint32_t verdex_rom = 0x02000000;
39
- s->regs[i] = 0;
50
@@ -XXX,XX +XXX,XX @@ static void verdex_init(MachineState *machine)
40
- }
51
exit(1);
41
+ memset(s->regs, 0, sizeof(s->regs));
52
}
42
+
53
43
fifo8_reset(&s->rx_fifo_g);
54
-#ifdef TARGET_WORDS_BIGENDIAN
44
fifo8_reset(&s->rx_fifo_g);
55
- be = 1;
45
fifo32_reset(&s->fifo_g);
56
-#else
57
- be = 0;
58
-#endif
59
if (!pflash_cfi01_register(0x00000000, "verdex.rom", verdex_rom,
60
dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
61
- sector_len, 2, 0, 0, 0, 0, be)) {
62
+ sector_len, 2, 0, 0, 0, 0, 0)) {
63
error_report("Error registering flash memory");
64
exit(1);
65
}
46
--
66
--
47
2.7.4
67
2.20.1
48
68
49
69
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
Following the ZynqMP register spec let's ensure that all reset values
3
We only build the little-endian softmmu configurations. Checking
4
are set.
4
for big endian is pointless, remove the unused code.
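
A rough sketch of the reset-value approach described above, with
made-up register names rather than the ZynqMP ones: keep each
documented reset value as a named constant and assign every one of
them in the reset handler, so nothing is left implicitly zero when the
spec says otherwise.

#define R_CTRL_RESET    0x00000003u   /* values taken from the spec */
#define R_STATUS_RESET  0x00000100u

struct AnotherDev {
    unsigned int ctrl;
    unsigned int status;
};

static void another_dev_reset(struct AnotherDev *s)
{
    s->ctrl   = R_CTRL_RESET;
    s->status = R_STATUS_RESET;
}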
5
5
6
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 19836f3e0a298b13343c5a59c87425355e7fd8bd.1513104804.git.alistair.francis@xilinx.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
9
---
11
include/hw/ssi/xilinx_spips.h | 2 +-
10
hw/arm/mainstone.c | 8 +-------
12
hw/ssi/xilinx_spips.c | 35 ++++++++++++++++++++++++++++++-----
11
1 file changed, 1 insertion(+), 7 deletions(-)
13
2 files changed, 31 insertions(+), 6 deletions(-)
14
12
15
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
13
diff --git a/hw/arm/mainstone.c b/hw/arm/mainstone.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/ssi/xilinx_spips.h
15
--- a/hw/arm/mainstone.c
18
+++ b/include/hw/ssi/xilinx_spips.h
16
+++ b/hw/arm/mainstone.c
19
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ static void mainstone_common_init(MemoryRegion *address_space_mem,
20
typedef struct XilinxSPIPS XilinxSPIPS;
18
DeviceState *mst_irq;
21
19
DriveInfo *dinfo;
22
#define XLNX_SPIPS_R_MAX (0x100 / 4)
20
int i;
23
-#define XLNX_ZYNQMP_SPIPS_R_MAX (0x200 / 4)
21
- int be;
24
+#define XLNX_ZYNQMP_SPIPS_R_MAX (0x830 / 4)
22
MemoryRegion *rom = g_new(MemoryRegion, 1);
25
23
26
/* Bite off 4k chunks at a time */
24
/* Setup CPU & memory */
27
#define LQSPI_CACHE_SIZE 1024
25
@@ -XXX,XX +XXX,XX @@ static void mainstone_common_init(MemoryRegion *address_space_mem,
28
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
26
memory_region_set_readonly(rom, true);
29
index XXXXXXX..XXXXXXX 100644
27
memory_region_add_subregion(address_space_mem, 0, rom);
30
--- a/hw/ssi/xilinx_spips.c
28
31
+++ b/hw/ssi/xilinx_spips.c
29
-#ifdef TARGET_WORDS_BIGENDIAN
32
@@ -XXX,XX +XXX,XX @@
30
- be = 1;
33
31
-#else
34
/* interrupt mechanism */
32
- be = 0;
35
#define R_INTR_STATUS (0x04 / 4)
33
-#endif
36
+#define R_INTR_STATUS_RESET (0x104)
34
/* There are two 32MiB flash devices on the board */
37
#define R_INTR_EN (0x08 / 4)
35
for (i = 0; i < 2; i ++) {
38
#define R_INTR_DIS (0x0C / 4)
36
dinfo = drive_get(IF_PFLASH, 0, i);
39
#define R_INTR_MASK (0x10 / 4)
37
@@ -XXX,XX +XXX,XX @@ static void mainstone_common_init(MemoryRegion *address_space_mem,
40
@@ -XXX,XX +XXX,XX @@
38
i ? "mainstone.flash1" : "mainstone.flash0",
41
#define R_SLAVE_IDLE_COUNT (0x24 / 4)
39
MAINSTONE_FLASH,
42
#define R_TX_THRES (0x28 / 4)
40
dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
43
#define R_RX_THRES (0x2C / 4)
41
- sector_len, 4, 0, 0, 0, 0, be)) {
44
+#define R_GPIO (0x30 / 4)
42
+ sector_len, 4, 0, 0, 0, 0, 0)) {
45
+#define R_LPBK_DLY_ADJ (0x38 / 4)
43
error_report("Error registering flash memory");
46
+#define R_LPBK_DLY_ADJ_RESET (0x33)
44
exit(1);
47
#define R_TXD1 (0x80 / 4)
45
}
48
#define R_TXD2 (0x84 / 4)
49
#define R_TXD3 (0x88 / 4)
50
@@ -XXX,XX +XXX,XX @@
51
#define R_GQSPI_IER (0x108 / 4)
52
#define R_GQSPI_IDR (0x10c / 4)
53
#define R_GQSPI_IMR (0x110 / 4)
54
+#define R_GQSPI_IMR_RESET (0xfbe)
55
#define R_GQSPI_TX_THRESH (0x128 / 4)
56
#define R_GQSPI_RX_THRESH (0x12c / 4)
57
+#define R_GQSPI_GPIO (0x130 / 4)
58
+#define R_GQSPI_LPBK_DLY_ADJ (0x138 / 4)
59
+#define R_GQSPI_LPBK_DLY_ADJ_RESET (0x33)
60
#define R_GQSPI_CNFG (0x100 / 4)
61
FIELD(GQSPI_CNFG, MODE_EN, 30, 2)
62
FIELD(GQSPI_CNFG, GEN_FIFO_START_MODE, 29, 1)
63
@@ -XXX,XX +XXX,XX @@
64
FIELD(GQSPI_GF_SNAPSHOT, EXPONENT, 9, 1)
65
FIELD(GQSPI_GF_SNAPSHOT, DATA_XFER, 8, 1)
66
FIELD(GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA, 0, 8)
67
-#define R_GQSPI_MOD_ID (0x168 / 4)
68
-#define R_GQSPI_MOD_ID_VALUE 0x010A0000
69
+#define R_GQSPI_MOD_ID (0x1fc / 4)
70
+#define R_GQSPI_MOD_ID_RESET (0x10a0000)
71
+
72
+#define R_QSPIDMA_DST_CTRL (0x80c / 4)
73
+#define R_QSPIDMA_DST_CTRL_RESET (0x803ffa00)
74
+#define R_QSPIDMA_DST_I_MASK (0x820 / 4)
75
+#define R_QSPIDMA_DST_I_MASK_RESET (0xfe)
76
+#define R_QSPIDMA_DST_CTRL2 (0x824 / 4)
77
+#define R_QSPIDMA_DST_CTRL2_RESET (0x081bfff8)
78
+
79
/* size of TXRX FIFOs */
80
#define RXFF_A (128)
81
#define TXFF_A (128)
82
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
83
fifo8_reset(&s->rx_fifo_g);
84
fifo8_reset(&s->rx_fifo_g);
85
fifo32_reset(&s->fifo_g);
86
+ s->regs[R_INTR_STATUS] = R_INTR_STATUS_RESET;
87
+ s->regs[R_GPIO] = 1;
88
+ s->regs[R_LPBK_DLY_ADJ] = R_LPBK_DLY_ADJ_RESET;
89
+ s->regs[R_GQSPI_GFIFO_THRESH] = 0x10;
90
+ s->regs[R_MOD_ID] = 0x01090101;
91
+ s->regs[R_GQSPI_IMR] = R_GQSPI_IMR_RESET;
92
s->regs[R_GQSPI_TX_THRESH] = 1;
93
s->regs[R_GQSPI_RX_THRESH] = 1;
94
- s->regs[R_GQSPI_GFIFO_THRESH] = 1;
95
- s->regs[R_GQSPI_IMR] = GQSPI_IXR_MASK;
96
- s->regs[R_MOD_ID] = 0x01090101;
97
+ s->regs[R_GQSPI_GPIO] = 1;
98
+ s->regs[R_GQSPI_LPBK_DLY_ADJ] = R_GQSPI_LPBK_DLY_ADJ_RESET;
99
+ s->regs[R_GQSPI_MOD_ID] = R_GQSPI_MOD_ID_RESET;
100
+ s->regs[R_QSPIDMA_DST_CTRL] = R_QSPIDMA_DST_CTRL_RESET;
101
+ s->regs[R_QSPIDMA_DST_I_MASK] = R_QSPIDMA_DST_I_MASK_RESET;
102
+ s->regs[R_QSPIDMA_DST_CTRL2] = R_QSPIDMA_DST_CTRL2_RESET;
103
s->man_start_com_g = false;
104
s->gqspi_irqline = 0;
105
xlnx_zynqmp_qspips_update_ixr(s);
106
--
46
--
107
2.7.4
47
2.20.1
108
48
109
49
diff view generated by jsdifflib
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
Update the reset value to match the latest ZynqMP register spec.
3
We only build the little-endian softmmu configurations. Checking
4
for big endian is pointless, remove the unused code.
4
5
5
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
8
Message-id: c03e51d041db7f055596084891aeb1e856e32b9f.1513104804.git.alistair.francis@xilinx.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
9
---
11
hw/ssi/xilinx_spips.c | 1 +
10
hw/arm/omap_sx1.c | 11 ++---------
12
1 file changed, 1 insertion(+)
11
1 file changed, 2 insertions(+), 9 deletions(-)
13
12
14
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
13
diff --git a/hw/arm/omap_sx1.c b/hw/arm/omap_sx1.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/ssi/xilinx_spips.c
15
--- a/hw/arm/omap_sx1.c
17
+++ b/hw/ssi/xilinx_spips.c
16
+++ b/hw/arm/omap_sx1.c
18
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
17
@@ -XXX,XX +XXX,XX @@ static void sx1_init(MachineState *machine, const int version)
19
s->regs[R_GQSPI_RX_THRESH] = 1;
18
DriveInfo *dinfo;
20
s->regs[R_GQSPI_GFIFO_THRESH] = 1;
19
int fl_idx;
21
s->regs[R_GQSPI_IMR] = GQSPI_IXR_MASK;
20
uint32_t flash_size = flash0_size;
22
+ s->regs[R_MOD_ID] = 0x01090101;
21
- int be;
23
s->man_start_com_g = false;
22
24
s->gqspi_irqline = 0;
23
if (machine->ram_size != mc->default_ram_size) {
25
xlnx_zynqmp_qspips_update_ixr(s);
24
char *sz = size_to_str(mc->default_ram_size);
25
@@ -XXX,XX +XXX,XX @@ static void sx1_init(MachineState *machine, const int version)
26
OMAP_CS2_BASE, &cs[3]);
27
28
fl_idx = 0;
29
-#ifdef TARGET_WORDS_BIGENDIAN
30
- be = 1;
31
-#else
32
- be = 0;
33
-#endif
34
-
35
if ((dinfo = drive_get(IF_PFLASH, 0, fl_idx)) != NULL) {
36
if (!pflash_cfi01_register(OMAP_CS0_BASE,
37
"omap_sx1.flash0-1", flash_size,
38
blk_by_legacy_dinfo(dinfo),
39
- sector_size, 4, 0, 0, 0, 0, be)) {
40
+ sector_size, 4, 0, 0, 0, 0, 0)) {
41
fprintf(stderr, "qemu: Error registering flash memory %d.\n",
42
fl_idx);
43
}
44
@@ -XXX,XX +XXX,XX @@ static void sx1_init(MachineState *machine, const int version)
45
if (!pflash_cfi01_register(OMAP_CS1_BASE,
46
"omap_sx1.flash1-1", flash1_size,
47
blk_by_legacy_dinfo(dinfo),
48
- sector_size, 4, 0, 0, 0, 0, be)) {
49
+ sector_size, 4, 0, 0, 0, 0, 0)) {
50
fprintf(stderr, "qemu: Error registering flash memory %d.\n",
51
fl_idx);
52
}
26
--
53
--
27
2.7.4
54
2.20.1
28
55
29
56
diff view generated by jsdifflib
1
Generalize nvic_sysreg_ns_ops so that we can pass it an
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
arbitrary MemoryRegion which it will use as the underlying
3
register implementation to apply the NS-alias behaviour
4
to. We'll want this so we can do the same with systick.
5
2
3
We only build the little-endian softmmu configurations. Checking
4
for big endian is pointless; remove the unused code.
5
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 1512154296-5652-2-git-send-email-peter.maydell@linaro.org
9
---
9
---
10
hw/intc/armv7m_nvic.c | 10 +++++++---
10
hw/arm/z2.c | 8 +-------
11
1 file changed, 7 insertions(+), 3 deletions(-)
11
1 file changed, 1 insertion(+), 7 deletions(-)
12
12
13
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
13
diff --git a/hw/arm/z2.c b/hw/arm/z2.c
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/armv7m_nvic.c
15
--- a/hw/arm/z2.c
16
+++ b/hw/intc/armv7m_nvic.c
16
+++ b/hw/arm/z2.c
17
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_ns_write(void *opaque, hwaddr addr,
17
@@ -XXX,XX +XXX,XX @@ static void z2_init(MachineState *machine)
18
uint64_t value, unsigned size,
18
uint32_t sector_len = 0x10000;
19
MemTxAttrs attrs)
19
PXA2xxState *mpu;
20
{
20
DriveInfo *dinfo;
21
+ MemoryRegion *mr = opaque;
21
- int be;
22
+
22
void *z2_lcd;
23
if (attrs.secure) {
23
I2CBus *bus;
24
/* S accesses to the alias act like NS accesses to the real region */
24
DeviceState *wm;
25
attrs.secure = 0;
25
@@ -XXX,XX +XXX,XX @@ static void z2_init(MachineState *machine)
26
- return nvic_sysreg_write(opaque, addr, value, size, attrs);
26
/* Setup CPU & memory */
27
+ return memory_region_dispatch_write(mr, addr, value, size, attrs);
27
mpu = pxa270_init(address_space_mem, z2_binfo.ram_size, machine->cpu_type);
28
} else {
28
29
/* NS attrs are RAZ/WI for privileged, and BusFault for user */
29
-#ifdef TARGET_WORDS_BIGENDIAN
30
if (attrs.user) {
30
- be = 1;
31
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_ns_read(void *opaque, hwaddr addr,
31
-#else
32
uint64_t *data, unsigned size,
32
- be = 0;
33
MemTxAttrs attrs)
33
-#endif
34
{
34
dinfo = drive_get(IF_PFLASH, 0, 0);
35
+ MemoryRegion *mr = opaque;
35
if (!pflash_cfi01_register(Z2_FLASH_BASE, "z2.flash0", Z2_FLASH_SIZE,
36
+
36
dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
37
if (attrs.secure) {
37
- sector_len, 4, 0, 0, 0, 0, be)) {
38
/* S accesses to the alias act like NS accesses to the real region */
38
+ sector_len, 4, 0, 0, 0, 0, 0)) {
39
attrs.secure = 0;
39
error_report("Error registering flash memory");
40
- return nvic_sysreg_read(opaque, addr, data, size, attrs);
40
exit(1);
41
+ return memory_region_dispatch_read(mr, addr, data, size, attrs);
42
} else {
43
/* NS attrs are RAZ/WI for privileged, and BusFault for user */
44
if (attrs.user) {
45
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
46
47
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
48
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
49
- &nvic_sysreg_ns_ops, s,
50
+ &nvic_sysreg_ns_ops, &s->sysregmem,
51
"nvic_sysregs_ns", 0x1000);
52
memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
53
}
41
}
54
--
42
--
55
2.7.4
43
2.20.1
56
44
57
45
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
From the very beginning, post_load() was called from common
3
We only build the little-endian softmmu configurations. Checking
4
reset. This is not standard and obliged us to discriminate the
4
for big endian is pointless; remove the unused code.
5
reset case from the restore case using the iidr value.
6
5
7
Let's get rid of that call.
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Eric Auger <eric.auger@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 1511883692-11511-2-git-send-email-eric.auger@redhat.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
9
---
14
hw/intc/arm_gicv3_its_common.c | 2 --
10
hw/arm/musicpal.c | 10 ----------
15
hw/intc/arm_gicv3_its_kvm.c | 4 ----
11
1 file changed, 10 deletions(-)
16
2 files changed, 6 deletions(-)
17
12
18
diff --git a/hw/intc/arm_gicv3_its_common.c b/hw/intc/arm_gicv3_its_common.c
13
diff --git a/hw/arm/musicpal.c b/hw/arm/musicpal.c
19
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/arm_gicv3_its_common.c
15
--- a/hw/arm/musicpal.c
21
+++ b/hw/intc/arm_gicv3_its_common.c
16
+++ b/hw/arm/musicpal.c
22
@@ -XXX,XX +XXX,XX @@ static void gicv3_its_common_reset(DeviceState *dev)
17
@@ -XXX,XX +XXX,XX @@ static void musicpal_init(MachineState *machine)
23
s->creadr = 0;
18
* 0xFF800000 (if there is 8 MB flash). So remap flash access if the
24
s->iidr = 0;
19
* image is smaller than 32 MB.
25
memset(&s->baser, 0, sizeof(s->baser));
20
*/
21
-#ifdef TARGET_WORDS_BIGENDIAN
22
- pflash_cfi02_register(0x100000000ULL - MP_FLASH_SIZE_MAX,
23
- "musicpal.flash", flash_size,
24
- blk, 0x10000,
25
- MP_FLASH_SIZE_MAX / flash_size,
26
- 2, 0x00BF, 0x236D, 0x0000, 0x0000,
27
- 0x5555, 0x2AAA, 1);
28
-#else
29
pflash_cfi02_register(0x100000000ULL - MP_FLASH_SIZE_MAX,
30
"musicpal.flash", flash_size,
31
blk, 0x10000,
32
MP_FLASH_SIZE_MAX / flash_size,
33
2, 0x00BF, 0x236D, 0x0000, 0x0000,
34
0x5555, 0x2AAA, 0);
35
-#endif
26
-
36
-
27
- gicv3_its_post_load(s, 0);
37
}
28
}
38
sysbus_create_simple(TYPE_MV88W8618_FLASHCFG, MP_FLASHCFG_BASE, NULL);
29
30
static void gicv3_its_common_class_init(ObjectClass *klass, void *data)
31
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/hw/intc/arm_gicv3_its_kvm.c
34
+++ b/hw/intc/arm_gicv3_its_kvm.c
35
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
36
{
37
int i;
38
39
- if (!s->iidr) {
40
- return;
41
- }
42
-
43
kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
44
GITS_IIDR, &s->iidr, true, &error_abort);
45
39
46
--
40
--
47
2.7.4
41
2.20.1
48
42
49
43
1
For the v8M security extension, there should be two systick
1
From: Pan Nengyuan <pannengyuan@huawei.com>
2
devices, which use separate banked systick exceptions. The
3
register interface is banked in the same way as for other
4
banked registers, including the existence of an NS alias
5
region for secure code to access the nonsecure timer.
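As a rough illustration of the banking described above (a toy sketch only, not the QEMU implementation; every name below is made up), the per-security-state register file is simply indexed by the access's secure attribute, and the NS alias would call in with that attribute forced to zero:

    #include <stdio.h>

    enum { M_REG_NS = 0, M_REG_S = 1, M_REG_NUM_BANKS = 2 };

    typedef struct {
        unsigned syst_csr[M_REG_NUM_BANKS];   /* hypothetical banked register */
    } DemoSysTick;

    /* Pick the bank from the access's security attribute. */
    static unsigned demo_read(DemoSysTick *s, int attrs_secure)
    {
        return s->syst_csr[attrs_secure ? M_REG_S : M_REG_NS];
    }

    int main(void)
    {
        DemoSysTick t = { .syst_csr = { 0x4, 0x5 } };
        /* an NS alias would forward a Secure access with attrs_secure = 0 */
        printf("NS view: %u, S view: %u\n", demo_read(&t, 0), demo_read(&t, 1));
        return 0;
    }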
6
2
3
There are some memory leaks when we call 'device_list_properties'. This patch moves timer_new from init() into realize() to fix them.
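To illustrate the init-versus-realize point, here is a minimal sketch of the pattern (assuming QEMU's QOM and timer APIs; the device names are hypothetical, not the actual pxa2xx code): if the timer is only allocated in realize(), an object that is created and destroyed without ever being realized, which is what 'device_list_properties' does, never allocates it at all.

    /* Sketch only: the timer is created in realize(), not instance_init(). */
    static void demo_dev_instance_init(Object *obj)
    {
        /* no timer_new() here: the object may never be realized */
    }

    static void demo_dev_realize(DeviceState *dev, Error **errp)
    {
        DemoDevState *s = DEMO_DEV(dev);          /* hypothetical QOM cast */

        s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, demo_dev_tick, s);
    }

    static void demo_dev_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);

        dc->realize = demo_dev_realize;
    }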
4
5
Reported-by: Euler Robot <euler.robot@huawei.com>
6
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
7
Message-id: 20200227025055.14341-3-pannengyuan@huawei.com
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 1512154296-5652-3-git-send-email-peter.maydell@linaro.org
10
---
10
---
11
include/hw/intc/armv7m_nvic.h | 4 +-
11
hw/arm/pxa2xx.c | 17 +++++++++++------
12
hw/intc/armv7m_nvic.c | 90 ++++++++++++++++++++++++++++++++++++-------
12
1 file changed, 11 insertions(+), 6 deletions(-)
13
2 files changed, 80 insertions(+), 14 deletions(-)
14
13
15
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
14
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/intc/armv7m_nvic.h
16
--- a/hw/arm/pxa2xx.c
18
+++ b/include/hw/intc/armv7m_nvic.h
17
+++ b/hw/arm/pxa2xx.c
19
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
18
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_rtc_init(Object *obj)
20
19
s->last_rtcpicr = 0;
21
MemoryRegion sysregmem;
20
s->last_hz = s->last_sw = s->last_pi = qemu_clock_get_ms(rtc_clock);
22
MemoryRegion sysreg_ns_mem;
21
23
+ MemoryRegion systickmem;
22
+ sysbus_init_irq(dev, &s->rtc_irq);
24
+ MemoryRegion systick_ns_mem;
25
MemoryRegion container;
26
27
uint32_t num_irq;
28
qemu_irq excpout;
29
qemu_irq sysresetreq;
30
31
- SysTickState systick;
32
+ SysTickState systick[M_REG_NUM_BANKS];
33
} NVICState;
34
35
#endif
36
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/intc/armv7m_nvic.c
39
+++ b/hw/intc/armv7m_nvic.c
40
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_sysreg_ns_ops = {
41
.endianness = DEVICE_NATIVE_ENDIAN,
42
};
43
44
+static MemTxResult nvic_systick_write(void *opaque, hwaddr addr,
45
+ uint64_t value, unsigned size,
46
+ MemTxAttrs attrs)
47
+{
48
+ NVICState *s = opaque;
49
+ MemoryRegion *mr;
50
+
23
+
51
+ /* Direct the access to the correct systick */
24
+ memory_region_init_io(&s->iomem, obj, &pxa2xx_rtc_ops, s,
52
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
25
+ "pxa2xx-rtc", 0x10000);
53
+ return memory_region_dispatch_write(mr, addr, value, size, attrs);
26
+ sysbus_init_mmio(dev, &s->iomem);
54
+}
27
+}
55
+
28
+
56
+static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
29
+static void pxa2xx_rtc_realize(DeviceState *dev, Error **errp)
57
+ uint64_t *data, unsigned size,
58
+ MemTxAttrs attrs)
59
+{
30
+{
60
+ NVICState *s = opaque;
31
+ PXA2xxRTCState *s = PXA2XX_RTC(dev);
61
+ MemoryRegion *mr;
32
s->rtc_hz = timer_new_ms(rtc_clock, pxa2xx_rtc_hz_tick, s);
62
+
33
s->rtc_rdal1 = timer_new_ms(rtc_clock, pxa2xx_rtc_rdal1_tick, s);
63
+ /* Direct the access to the correct systick */
34
s->rtc_rdal2 = timer_new_ms(rtc_clock, pxa2xx_rtc_rdal2_tick, s);
64
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
35
s->rtc_swal1 = timer_new_ms(rtc_clock, pxa2xx_rtc_swal1_tick, s);
65
+ return memory_region_dispatch_read(mr, addr, data, size, attrs);
36
s->rtc_swal2 = timer_new_ms(rtc_clock, pxa2xx_rtc_swal2_tick, s);
66
+}
37
s->rtc_pi = timer_new_ms(rtc_clock, pxa2xx_rtc_pi_tick, s);
67
+
38
-
68
+static const MemoryRegionOps nvic_systick_ops = {
39
- sysbus_init_irq(dev, &s->rtc_irq);
69
+ .read_with_attrs = nvic_systick_read,
40
-
70
+ .write_with_attrs = nvic_systick_write,
41
- memory_region_init_io(&s->iomem, obj, &pxa2xx_rtc_ops, s,
71
+ .endianness = DEVICE_NATIVE_ENDIAN,
42
- "pxa2xx-rtc", 0x10000);
72
+};
43
- sysbus_init_mmio(dev, &s->iomem);
73
+
74
static int nvic_post_load(void *opaque, int version_id)
75
{
76
NVICState *s = opaque;
77
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
78
/* SysTick just asked us to pend its exception.
79
* (This is different from an external interrupt line's
80
* behaviour.)
81
- * TODO: when we implement the banked systicks we must make
82
- * this pend the correct banked exception.
83
+ * n == 0 : NonSecure systick
84
+ * n == 1 : Secure systick
85
*/
86
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, false);
87
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, n);
88
}
89
}
44
}
90
45
91
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
46
static int pxa2xx_rtc_pre_save(void *opaque)
92
{
47
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_rtc_sysbus_class_init(ObjectClass *klass, void *data)
93
NVICState *s = NVIC(dev);
48
94
- SysBusDevice *systick_sbd;
49
dc->desc = "PXA2xx RTC Controller";
95
Error *err = NULL;
50
dc->vmsd = &vmstate_pxa2xx_rtc_regs;
96
int regionlen;
51
+ dc->realize = pxa2xx_rtc_realize;
97
98
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
99
/* include space for internal exception vectors */
100
s->num_irq += NVIC_FIRST_IRQ;
101
102
- object_property_set_bool(OBJECT(&s->systick), true, "realized", &err);
103
+ object_property_set_bool(OBJECT(&s->systick[M_REG_NS]), true,
104
+ "realized", &err);
105
if (err != NULL) {
106
error_propagate(errp, err);
107
return;
108
}
109
- systick_sbd = SYS_BUS_DEVICE(&s->systick);
110
- sysbus_connect_irq(systick_sbd, 0,
111
- qdev_get_gpio_in_named(dev, "systick-trigger", 0));
112
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systick[M_REG_NS]), 0,
113
+ qdev_get_gpio_in_named(dev, "systick-trigger",
114
+ M_REG_NS));
115
+
116
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
117
+ /* We couldn't init the secure systick device in instance_init
118
+ * as we didn't know then if the CPU had the security extensions;
119
+ * so we have to do it here.
120
+ */
121
+ object_initialize(&s->systick[M_REG_S], sizeof(s->systick[M_REG_S]),
122
+ TYPE_SYSTICK);
123
+ qdev_set_parent_bus(DEVICE(&s->systick[M_REG_S]), sysbus_get_default());
124
+
125
+ object_property_set_bool(OBJECT(&s->systick[M_REG_S]), true,
126
+ "realized", &err);
127
+ if (err != NULL) {
128
+ error_propagate(errp, err);
129
+ return;
130
+ }
131
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systick[M_REG_S]), 0,
132
+ qdev_get_gpio_in_named(dev, "systick-trigger",
133
+ M_REG_S));
134
+ }
135
136
/* The NVIC and System Control Space (SCS) starts at 0xe000e000
137
* and looks like this:
138
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
139
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
140
"nvic_sysregs", 0x1000);
141
memory_region_add_subregion(&s->container, 0, &s->sysregmem);
142
+
143
+ memory_region_init_io(&s->systickmem, OBJECT(s),
144
+ &nvic_systick_ops, s,
145
+ "nvic_systick", 0xe0);
146
+
147
memory_region_add_subregion_overlap(&s->container, 0x10,
148
- sysbus_mmio_get_region(systick_sbd, 0),
149
- 1);
150
+ &s->systickmem, 1);
151
152
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
153
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
154
&nvic_sysreg_ns_ops, &s->sysregmem,
155
"nvic_sysregs_ns", 0x1000);
156
memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
157
+ memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
158
+ &nvic_sysreg_ns_ops, &s->systickmem,
159
+ "nvic_systick_ns", 0xe0);
160
+ memory_region_add_subregion_overlap(&s->container, 0x20010,
161
+ &s->systick_ns_mem, 1);
162
}
163
164
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
165
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_instance_init(Object *obj)
166
NVICState *nvic = NVIC(obj);
167
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
168
169
- object_initialize(&nvic->systick, sizeof(nvic->systick), TYPE_SYSTICK);
170
- qdev_set_parent_bus(DEVICE(&nvic->systick), sysbus_get_default());
171
+ object_initialize(&nvic->systick[M_REG_NS],
172
+ sizeof(nvic->systick[M_REG_NS]), TYPE_SYSTICK);
173
+ qdev_set_parent_bus(DEVICE(&nvic->systick[M_REG_NS]), sysbus_get_default());
174
+ /* We can't initialize the secure systick here, as we don't know
175
+ * yet if we need it.
176
+ */
177
178
sysbus_init_irq(sbd, &nvic->excpout);
179
qdev_init_gpio_out_named(dev, &nvic->sysresetreq, "SYSRESETREQ", 1);
180
- qdev_init_gpio_in_named(dev, nvic_systick_trigger, "systick-trigger", 1);
181
+ qdev_init_gpio_in_named(dev, nvic_systick_trigger, "systick-trigger",
182
+ M_REG_NUM_BANKS);
183
}
52
}
184
53
185
static void armv7m_nvic_class_init(ObjectClass *klass, void *data)
54
static const TypeInfo pxa2xx_rtc_sysbus_info = {
186
--
55
--
187
2.7.4
56
2.20.1
188
57
189
58
1
Now that ARMMMUFaultInfo is guaranteed to have enough information
1
From: Pan Nengyuan <pannengyuan@huawei.com>
2
to construct a fault status code, we can pass it in to the
3
deliver_fault() function and let it generate the correct type
4
of FSR for the destination, rather than relying on the value
5
provided by get_phys_addr().
6
2
7
I don't think there are any cases the old code was getting
3
There are some memory leaks when we call 'device_list_properties'. This patch moves timer_new from init() into realize() to fix them.
8
wrong, but this is more obviously correct.
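The shape of the change is easier to see in isolation: carry a structured description of the fault around, and only encode it into a status value at the point of delivery, picking the long or short format required by the target. A toy, self-contained sketch (types and encodings are simplified and hypothetical, not QEMU's):

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { FAULT_ALIGNMENT, FAULT_PERMISSION } FaultType;
    typedef struct { FaultType type; int level; } FaultInfo;

    /* Encode the structured fault at delivery time, in the format the
     * destination expects (illustrative values only). */
    static uint32_t encode_fsc(const FaultInfo *fi, bool long_format)
    {
        if (long_format) {
            return fi->type == FAULT_ALIGNMENT ? 0x21 : 0x0c + fi->level;
        }
        return fi->type == FAULT_ALIGNMENT ? 0x01 : 0x0d;
    }

    int main(void)
    {
        FaultInfo fi = { FAULT_PERMISSION, 1 };
        printf("long: 0x%x, short: 0x%x\n",
               encode_fsc(&fi, true), encode_fsc(&fi, false));
        return 0;
    }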
9
4
5
Reported-by: Euler Robot <euler.robot@huawei.com>
6
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
7
Message-id: 20200227025055.14341-4-pannengyuan@huawei.com
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
14
Message-id: 1512503192-2239-10-git-send-email-peter.maydell@linaro.org
15
---
10
---
16
target/arm/op_helper.c | 79 ++++++++++++++------------------------------------
11
hw/arm/spitz.c | 8 +++++++-
17
1 file changed, 22 insertions(+), 57 deletions(-)
12
1 file changed, 7 insertions(+), 1 deletion(-)
18
13
19
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
14
diff --git a/hw/arm/spitz.c b/hw/arm/spitz.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/op_helper.c
16
--- a/hw/arm/spitz.c
22
+++ b/target/arm/op_helper.c
17
+++ b/hw/arm/spitz.c
23
@@ -XXX,XX +XXX,XX @@ static inline uint32_t merge_syn_data_abort(uint32_t template_syn,
18
@@ -XXX,XX +XXX,XX @@ static void spitz_keyboard_init(Object *obj)
19
20
spitz_keyboard_pre_map(s);
21
22
- s->kbdtimer = timer_new_ns(QEMU_CLOCK_VIRTUAL, spitz_keyboard_tick, s);
23
qdev_init_gpio_in(dev, spitz_keyboard_strobe, SPITZ_KEY_STROBE_NUM);
24
qdev_init_gpio_out(dev, s->sense, SPITZ_KEY_SENSE_NUM);
24
}
25
}
25
26
26
static void deliver_fault(ARMCPU *cpu, vaddr addr, MMUAccessType access_type,
27
+static void spitz_keyboard_realize(DeviceState *dev, Error **errp)
27
- uint32_t fsr, uint32_t fsc, ARMMMUFaultInfo *fi)
28
+{
28
+ int mmu_idx, ARMMMUFaultInfo *fi)
29
+ SpitzKeyboardState *s = SPITZ_KEYBOARD(dev);
29
{
30
+ s->kbdtimer = timer_new_ns(QEMU_CLOCK_VIRTUAL, spitz_keyboard_tick, s);
30
CPUARMState *env = &cpu->env;
31
+}
31
int target_el;
32
+
32
bool same_el;
33
/* LCD backlight controller */
33
- uint32_t syn, exc;
34
34
+ uint32_t syn, exc, fsr, fsc;
35
#define LCDTG_RESCTL    0x00
35
+ ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
36
@@ -XXX,XX +XXX,XX @@ static void spitz_keyboard_class_init(ObjectClass *klass, void *data)
36
37
DeviceClass *dc = DEVICE_CLASS(klass);
37
target_el = exception_target_el(env);
38
38
if (fi->stage2) {
39
dc->vmsd = &vmstate_spitz_kbd;
39
@@ -XXX,XX +XXX,XX @@ static void deliver_fault(ARMCPU *cpu, vaddr addr, MMUAccessType access_type,
40
+ dc->realize = spitz_keyboard_realize;
40
}
41
same_el = (arm_current_el(env) == target_el);
42
43
- if (fsc == 0x3f) {
44
- /* Caller doesn't have a long-format fault status code. This
45
- * should only happen if this fault will never actually be reported
46
- * to an EL that uses a syndrome register. Check that here.
47
- * 0x3f is a (currently) reserved FSC code, in case the constructed
48
- * syndrome does leak into the guest somehow.
49
+ if (target_el == 2 || arm_el_is_aa64(env, target_el) ||
50
+ arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
51
+ /* LPAE format fault status register : bottom 6 bits are
52
+ * status code in the same form as needed for syndrome
53
+ */
54
+ fsr = arm_fi_to_lfsc(fi);
55
+ fsc = extract32(fsr, 0, 6);
56
+ } else {
57
+ fsr = arm_fi_to_sfsc(fi);
58
+ /* Short format FSR : this fault will never actually be reported
59
+ * to an EL that uses a syndrome register. Use a (currently)
60
+ * reserved FSR code in case the constructed syndrome does leak
61
+ * into the guest somehow.
62
*/
63
- assert(target_el != 2 && !arm_el_is_aa64(env, target_el));
64
+ fsc = 0x3f;
65
}
66
67
if (access_type == MMU_INST_FETCH) {
68
@@ -XXX,XX +XXX,XX @@ void tlb_fill(CPUState *cs, target_ulong addr, MMUAccessType access_type,
69
ret = arm_tlb_fill(cs, addr, access_type, mmu_idx, &fsr, &fi);
70
if (unlikely(ret)) {
71
ARMCPU *cpu = ARM_CPU(cs);
72
- uint32_t fsc;
73
74
if (retaddr) {
75
/* now we have a real cpu fault */
76
cpu_restore_state(cs, retaddr);
77
}
78
79
- if (fsr & (1 << 9)) {
80
- /* LPAE format fault status register : bottom 6 bits are
81
- * status code in the same form as needed for syndrome
82
- */
83
- fsc = extract32(fsr, 0, 6);
84
- } else {
85
- /* Short format FSR : this fault will never actually be reported
86
- * to an EL that uses a syndrome register. Use a (currently)
87
- * reserved FSR code in case the constructed syndrome does leak
88
- * into the guest somehow. deliver_fault will assert that
89
- * we don't target an EL using the syndrome.
90
- */
91
- fsc = 0x3f;
92
- }
93
-
94
- deliver_fault(cpu, addr, access_type, fsr, fsc, &fi);
95
+ deliver_fault(cpu, addr, access_type, mmu_idx, &fi);
96
}
97
}
41
}
98
42
99
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
43
static const TypeInfo spitz_keyboard_info = {
100
int mmu_idx, uintptr_t retaddr)
101
{
102
ARMCPU *cpu = ARM_CPU(cs);
103
- CPUARMState *env = &cpu->env;
104
- uint32_t fsr, fsc;
105
ARMMMUFaultInfo fi = {};
106
- ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
107
108
if (retaddr) {
109
/* now we have a real cpu fault */
110
cpu_restore_state(cs, retaddr);
111
}
112
113
- /* the DFSR for an alignment fault depends on whether we're using
114
- * the LPAE long descriptor format, or the short descriptor format
115
- */
116
- if (arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
117
- fsr = (1 << 9) | 0x21;
118
- } else {
119
- fsr = 0x1;
120
- }
121
- fsc = 0x21;
122
-
123
- deliver_fault(cpu, vaddr, access_type, fsr, fsc, &fi);
124
+ fi.type = ARMFault_Alignment;
125
+ deliver_fault(cpu, vaddr, access_type, mmu_idx, &fi);
126
}
127
128
/* arm_cpu_do_transaction_failed: handle a memory system error response
129
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
130
MemTxResult response, uintptr_t retaddr)
131
{
132
ARMCPU *cpu = ARM_CPU(cs);
133
- CPUARMState *env = &cpu->env;
134
- uint32_t fsr, fsc;
135
ARMMMUFaultInfo fi = {};
136
- ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
137
138
if (retaddr) {
139
/* now we have a real cpu fault */
140
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
141
* Slave error (1); in QEMU we follow that.
142
*/
143
fi.ea = (response != MEMTX_DECODE_ERROR);
144
-
145
- /* The fault status register format depends on whether we're using
146
- * the LPAE long descriptor format, or the short descriptor format.
147
- */
148
- if (arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
149
- /* long descriptor form, STATUS 0b010000: synchronous ext abort */
150
- fsr = (fi.ea << 12) | (1 << 9) | 0x10;
151
- } else {
152
- /* short descriptor form, FSR 0b01000 : synchronous ext abort */
153
- fsr = (fi.ea << 12) | 0x8;
154
- }
155
- fsc = 0x10;
156
-
157
- deliver_fault(cpu, addr, access_type, fsr, fsc, &fi);
158
+ fi.type = ARMFault_SyncExternal;
159
+ deliver_fault(cpu, addr, access_type, mmu_idx, &fi);
160
}
161
162
#endif /* !defined(CONFIG_USER_ONLY) */
163
--
44
--
164
2.7.4
45
2.20.1
165
46
166
47
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Pan Nengyuan <pannengyuan@huawei.com>
2
2
3
Update the striping functionality to use big-endian bit order (as described in
3
There are some memory leaks when we call 'device_list_properties'. This patch moves timer_new from init() into realize() to fix them.
4
the Zynq-7000 Technical Reference Manual). Then output the even bits
5
into the flash memory connected to the lower QSPI bus and the odd bits into
6
the flash memory connected to the upper QSPI bus.
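A small standalone demo of the MSB-first ordering described above (a simplified two-way version, not the stripe8() helper itself): the input bits are walked from the most significant bit downwards and dealt out alternately to the two outputs.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t in[2] = { 0xAA, 0x55 };   /* arbitrary example bytes */
        uint8_t out[2] = { 0, 0 };

        /* Walk the 16 input bits MSB-first; consecutive bits alternate
         * between out[0] and out[1], filling them from bit 7 downwards. */
        for (int k = 0; k < 16; k++) {
            int val = (in[k / 8] >> (7 - (k % 8))) & 1;
            out[k % 2] |= val << (7 - k / 2);
        }
        /* with this ordering 0xAA/0x55 come out as 0xF0/0x0F */
        printf("out[0]=0x%02X out[1]=0x%02X\n", out[0], out[1]);
        return 0;
    }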
7
4
8
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
5
Reported-by: Euler Robot <euler.robot@huawei.com>
9
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
6
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Message-id: 20200227025055.14341-5-pannengyuan@huawei.com
11
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20171126231634.9531-7-frasse.iglesias@gmail.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
10
---
15
hw/ssi/xilinx_spips.c | 19 ++++++++++---------
11
hw/arm/strongarm.c | 18 ++++++++++++------
16
1 file changed, 10 insertions(+), 9 deletions(-)
12
1 file changed, 12 insertions(+), 6 deletions(-)
17
13
18
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
14
diff --git a/hw/arm/strongarm.c b/hw/arm/strongarm.c
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/ssi/xilinx_spips.c
16
--- a/hw/arm/strongarm.c
21
+++ b/hw/ssi/xilinx_spips.c
17
+++ b/hw/arm/strongarm.c
22
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
18
@@ -XXX,XX +XXX,XX @@ static void strongarm_rtc_init(Object *obj)
23
xilinx_spips_update_cs_lines(s);
19
s->last_rcnr = (uint32_t) mktimegm(&tm);
20
s->last_hz = qemu_clock_get_ms(rtc_clock);
21
22
- s->rtc_alarm = timer_new_ms(rtc_clock, strongarm_rtc_alarm_tick, s);
23
- s->rtc_hz = timer_new_ms(rtc_clock, strongarm_rtc_hz_tick, s);
24
-
25
sysbus_init_irq(dev, &s->rtc_irq);
26
sysbus_init_irq(dev, &s->rtc_hz_irq);
27
28
@@ -XXX,XX +XXX,XX @@ static void strongarm_rtc_init(Object *obj)
29
sysbus_init_mmio(dev, &s->iomem);
24
}
30
}
25
31
26
-/* N way (num) in place bit striper. Lay out row wise bits (LSB to MSB)
32
+static void strongarm_rtc_realize(DeviceState *dev, Error **errp)
27
+/* N way (num) in place bit striper. Lay out row wise bits (MSB to LSB)
33
+{
28
* column wise (from element 0 to N-1). num is the length of x, and dir
34
+ StrongARMRTCState *s = STRONGARM_RTC(dev);
29
* reverses the direction of the transform. Best illustrated by example:
35
+ s->rtc_alarm = timer_new_ms(rtc_clock, strongarm_rtc_alarm_tick, s);
30
* Each digit in the below array is a single bit (num == 3):
36
+ s->rtc_hz = timer_new_ms(rtc_clock, strongarm_rtc_hz_tick, s);
31
*
37
+}
32
- * {{ 76543210, } ----- stripe (dir == false) -----> {{ FCheb630, }
38
+
33
- * { hgfedcba, } { GDAfc741, }
39
static int strongarm_rtc_pre_save(void *opaque)
34
- * { HGFEDCBA, }} <---- upstripe (dir == true) ----- { HEBgda52, }}
40
{
35
+ * {{ 76543210, } ----- stripe (dir == false) -----> {{ 741gdaFC, }
41
StrongARMRTCState *s = opaque;
36
+ * { hgfedcba, } { 630fcHEB, }
42
@@ -XXX,XX +XXX,XX @@ static void strongarm_rtc_sysbus_class_init(ObjectClass *klass, void *data)
37
+ * { HGFEDCBA, }} <---- upstripe (dir == true) ----- { 52hebGDA, }}
43
38
*/
44
dc->desc = "StrongARM RTC Controller";
39
45
dc->vmsd = &vmstate_strongarm_rtc_regs;
40
static inline void stripe8(uint8_t *x, int num, bool dir)
46
+ dc->realize = strongarm_rtc_realize;
41
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
47
}
42
uint8_t r[num];
48
43
memset(r, 0, sizeof(uint8_t) * num);
49
static const TypeInfo strongarm_rtc_sysbus_info = {
44
int idx[2] = {0, 0};
50
@@ -XXX,XX +XXX,XX @@ static void strongarm_uart_init(Object *obj)
45
- int bit[2] = {0, 0};
51
"uart", 0x10000);
46
+ int bit[2] = {0, 7};
52
sysbus_init_mmio(dev, &s->iomem);
47
int d = dir;
53
sysbus_init_irq(dev, &s->irq);
48
54
-
49
for (idx[0] = 0; idx[0] < num; ++idx[0]) {
55
- s->rx_timeout_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, strongarm_uart_rx_to, s);
50
- for (bit[0] = 0; bit[0] < 8; ++bit[0]) {
56
- s->tx_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, strongarm_uart_tx, s);
51
- r[idx[d]] |= x[idx[!d]] & 1 << bit[!d] ? 1 << bit[d] : 0;
57
}
52
+ for (bit[0] = 7; bit[0] >= 0; bit[0]--) {
58
53
+ r[idx[!d]] |= x[idx[d]] & 1 << bit[d] ? 1 << bit[!d] : 0;
59
static void strongarm_uart_realize(DeviceState *dev, Error **errp)
54
idx[1] = (idx[1] + 1) % num;
60
{
55
if (!idx[1]) {
61
StrongARMUARTState *s = STRONGARM_UART(dev);
56
- bit[1]++;
62
57
+ bit[1]--;
63
+ s->rx_timeout_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
58
}
64
+ strongarm_uart_rx_to,
59
}
65
+ s);
60
}
66
+ s->tx_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, strongarm_uart_tx, s);
61
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
67
qemu_chr_fe_set_handlers(&s->chr,
62
}
68
strongarm_uart_can_receive,
63
69
strongarm_uart_receive,
64
for (i = 0; i < num_effective_busses(s); ++i) {
65
+ int bus = num_effective_busses(s) - 1 - i;
66
DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
67
- tx_rx[i] = ssi_transfer(s->spi[i], (uint32_t)tx_rx[i]);
68
+ tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
69
DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
70
}
71
72
--
70
--
73
2.7.4
71
2.20.1
74
72
75
73
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Pan Nengyuan <pannengyuan@huawei.com>
2
2
3
At the moment the ITS is not properly reset and this causes
3
There are some memory leaks when we call 'device_list_properties'. This patch moves timer_new from init() into realize() to fix them.
4
various bugs on save/restore. We implement a minimalist reset
5
through individual register writes but for kernel versions
6
before v4.15 this fails to void the vITS cache. We cannot
7
claim we have a comprehensive reset (hence the error message)
8
but that's better than nothing.
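The reset hook is installed with the usual "chain to the parent reset" class pattern; roughly (a sketch of the pattern only, with hypothetical names, assuming QEMU's QOM macros):

    typedef struct DemoITSClass {
        void (*parent_reset)(DeviceState *dev);   /* saved parent handler */
    } DemoITSClass;

    static void demo_its_reset(DeviceState *dev)
    {
        DemoITSClass *c = DEMO_ITS_GET_CLASS(dev);    /* hypothetical macro */

        c->parent_reset(dev);      /* run the common reset first */
        /* ...then re-program the in-kernel ITS registers... */
    }

    static void demo_its_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);
        DemoITSClass *ic = DEMO_ITS_CLASS(klass);     /* hypothetical macro */

        ic->parent_reset = dc->reset;   /* remember what the parent installed */
        dc->reset = demo_its_reset;     /* and override it */
    }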
9
4
10
Signed-off-by: Eric Auger <eric.auger@redhat.com>
5
Reported-by: Euler Robot <euler.robot@huawei.com>
6
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
8
Message-id: 20200227025055.14341-7-pannengyuan@huawei.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1511883692-11511-3-git-send-email-eric.auger@redhat.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
hw/intc/arm_gicv3_its_kvm.c | 42 ++++++++++++++++++++++++++++++++++++++++++
12
hw/timer/cadence_ttc.c | 18 ++++++++++++------
16
1 file changed, 42 insertions(+)
13
1 file changed, 12 insertions(+), 6 deletions(-)
17
14
18
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
15
diff --git a/hw/timer/cadence_ttc.c b/hw/timer/cadence_ttc.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/arm_gicv3_its_kvm.c
17
--- a/hw/timer/cadence_ttc.c
21
+++ b/hw/intc/arm_gicv3_its_kvm.c
18
+++ b/hw/timer/cadence_ttc.c
22
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static void cadence_timer_init(uint32_t freq, CadenceTimerState *s)
23
20
static void cadence_ttc_init(Object *obj)
24
#define TYPE_KVM_ARM_ITS "arm-its-kvm"
25
#define KVM_ARM_ITS(obj) OBJECT_CHECK(GICv3ITSState, (obj), TYPE_KVM_ARM_ITS)
26
+#define KVM_ARM_ITS_CLASS(klass) \
27
+ OBJECT_CLASS_CHECK(KVMARMITSClass, (klass), TYPE_KVM_ARM_ITS)
28
+#define KVM_ARM_ITS_GET_CLASS(obj) \
29
+ OBJECT_GET_CLASS(KVMARMITSClass, (obj), TYPE_KVM_ARM_ITS)
30
+
31
+typedef struct KVMARMITSClass {
32
+ GICv3ITSCommonClass parent_class;
33
+ void (*parent_reset)(DeviceState *dev);
34
+} KVMARMITSClass;
35
+
36
37
static int kvm_its_send_msi(GICv3ITSState *s, uint32_t value, uint16_t devid)
38
{
21
{
39
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
22
CadenceTTCState *s = CADENCE_TTC(obj);
40
GITS_CTLR, &s->ctlr, true, &error_abort);
23
- int i;
24
-
25
- for (i = 0; i < 3; ++i) {
26
- cadence_timer_init(133000000, &s->timer[i]);
27
- sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->timer[i].irq);
28
- }
29
30
memory_region_init_io(&s->iomem, obj, &cadence_ttc_ops, s,
31
"timer", 0x1000);
32
sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);
41
}
33
}
42
34
43
+static void kvm_arm_its_reset(DeviceState *dev)
35
+static void cadence_ttc_realize(DeviceState *dev, Error **errp)
44
+{
36
+{
45
+ GICv3ITSState *s = ARM_GICV3_ITS_COMMON(dev);
37
+ CadenceTTCState *s = CADENCE_TTC(dev);
46
+ KVMARMITSClass *c = KVM_ARM_ITS_GET_CLASS(s);
47
+ int i;
38
+ int i;
48
+
39
+
49
+ c->parent_reset(dev);
40
+ for (i = 0; i < 3; ++i) {
50
+
41
+ cadence_timer_init(133000000, &s->timer[i]);
51
+ error_report("ITS KVM: full reset is not supported by QEMU");
42
+ sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->timer[i].irq);
52
+
53
+ if (!kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
54
+ GITS_CTLR)) {
55
+ return;
56
+ }
57
+
58
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
59
+ GITS_CTLR, &s->ctlr, true, &error_abort);
60
+
61
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
62
+ GITS_CBASER, &s->cbaser, true, &error_abort);
63
+
64
+ for (i = 0; i < 8; i++) {
65
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
66
+ GITS_BASER + i * 8, &s->baser[i], true,
67
+ &error_abort);
68
+ }
43
+ }
69
+}
44
+}
70
+
45
+
71
static Property kvm_arm_its_props[] = {
46
static int cadence_timer_pre_save(void *opaque)
72
DEFINE_PROP_LINK("parent-gicv3", GICv3ITSState, gicv3, "kvm-arm-gicv3",
73
GICv3State *),
74
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_class_init(ObjectClass *klass, void *data)
75
{
47
{
48
cadence_timer_sync((CadenceTimerState *)opaque);
49
@@ -XXX,XX +XXX,XX @@ static void cadence_ttc_class_init(ObjectClass *klass, void *data)
76
DeviceClass *dc = DEVICE_CLASS(klass);
50
DeviceClass *dc = DEVICE_CLASS(klass);
77
GICv3ITSCommonClass *icc = ARM_GICV3_ITS_COMMON_CLASS(klass);
51
78
+ KVMARMITSClass *ic = KVM_ARM_ITS_CLASS(klass);
52
dc->vmsd = &vmstate_cadence_ttc;
79
53
+ dc->realize = cadence_ttc_realize;
80
dc->realize = kvm_arm_its_realize;
81
dc->props = kvm_arm_its_props;
82
+ ic->parent_reset = dc->reset;
83
icc->send_msi = kvm_its_send_msi;
84
icc->pre_save = kvm_arm_its_pre_save;
85
icc->post_load = kvm_arm_its_post_load;
86
+ dc->reset = kvm_arm_its_reset;
87
}
54
}
88
55
89
static const TypeInfo kvm_arm_its_info = {
56
static const TypeInfo cadence_ttc_info = {
90
@@ -XXX,XX +XXX,XX @@ static const TypeInfo kvm_arm_its_info = {
91
.parent = TYPE_ARM_GICV3_ITS_COMMON,
92
.instance_size = sizeof(GICv3ITSState),
93
.class_init = kvm_arm_its_class_init,
94
+ .class_size = sizeof(KVMARMITSClass),
95
};
96
97
static void kvm_arm_its_register_types(void)
98
--
57
--
99
2.7.4
58
2.20.1
100
59
101
60
1
Make get_phys_addr_pmsav8() return a fault type in the ARMMMUFaultInfo
1
From: Richard Henderson <richard.henderson@linaro.org>
2
structure, which we convert to the FSC at the callsite.
3
2
3
Don't merely start with v8.0, handle v7VE as well. Ensure that writes
4
from aarch32 mode do not change bits in the other half of the register.
5
Protect reads of aa64 id registers with ARM_FEATURE_AARCH64.
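The "don't touch the other half" requirement boils down to merging the new value into the 64-bit register under a 32-bit mask. A self-contained sketch of that idea (illustration only, not the QEMU helper; MAKE_64BIT_MASK is redefined locally to keep the demo standalone):

    #include <stdio.h>
    #include <stdint.h>

    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    /* Merge 'value' into 'reg', changing only the bits covered by 'mask'. */
    static uint64_t write_masked(uint64_t reg, uint64_t value, uint64_t mask)
    {
        return (reg & ~mask) | (value & mask);
    }

    int main(void)
    {
        uint64_t hcr = 0xffffffff00000000ULL;

        /* an aarch32 HCR write may only change the low half... */
        hcr = write_masked(hcr, 0x1234, MAKE_64BIT_MASK(0, 32));
        /* ...and an HCR2 write only the high half */
        hcr = write_masked(hcr, 0x5678ULL << 32, MAKE_64BIT_MASK(32, 32));
        printf("hcr = 0x%016llx\n", (unsigned long long)hcr);
        return 0;
    }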
6
7
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200229012811.24129-2-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
8
Message-id: 1512503192-2239-9-git-send-email-peter.maydell@linaro.org
9
---
12
---
10
target/arm/helper.c | 29 ++++++++++++++++++-----------
13
target/arm/helper.c | 38 +++++++++++++++++++++++++-------------
11
1 file changed, 18 insertions(+), 11 deletions(-)
14
1 file changed, 25 insertions(+), 13 deletions(-)
12
15
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
18
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
19
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
20
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
18
static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
21
REGINFO_SENTINEL
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
22
};
20
hwaddr *phys_ptr, MemTxAttrs *txattrs,
23
21
- int *prot, uint32_t *fsr, uint32_t *mregion)
24
-static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
22
+ int *prot, ARMMMUFaultInfo *fi, uint32_t *mregion)
25
+static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
23
{
26
{
24
/* Perform a PMSAv8 MPU lookup (without also doing the SAU check
27
ARMCPU *cpu = env_archcpu(env);
25
* that a full phys-to-virt translation does).
28
- /* Begin with bits defined in base ARMv8.0. */
26
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
29
- uint64_t valid_mask = MAKE_64BIT_MASK(0, 34);
27
/* Multiple regions match -- always a failure (unlike
30
+
28
* PMSAv7 where highest-numbered-region wins)
31
+ if (arm_feature(env, ARM_FEATURE_V8)) {
29
*/
32
+ valid_mask |= MAKE_64BIT_MASK(0, 34); /* ARMv8.0 */
30
- *fsr = 0x00d; /* permission fault */
33
+ } else {
31
+ fi->type = ARMFault_Permission;
34
+ valid_mask |= MAKE_64BIT_MASK(0, 28); /* ARMv7VE */
32
+ fi->level = 1;
35
+ }
33
return true;
36
34
}
37
if (arm_feature(env, ARM_FEATURE_EL3)) {
35
38
valid_mask &= ~HCR_HCD;
36
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
39
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
37
40
*/
38
if (!hit) {
41
valid_mask &= ~HCR_TSC;
39
/* background fault */
40
- *fsr = 0;
41
+ fi->type = ARMFault_Background;
42
return true;
43
}
42
}
44
43
- if (cpu_isar_feature(aa64_vh, cpu)) {
45
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
44
- valid_mask |= HCR_E2H;
46
}
45
- }
46
- if (cpu_isar_feature(aa64_lor, cpu)) {
47
- valid_mask |= HCR_TLOR;
48
- }
49
- if (cpu_isar_feature(aa64_pauth, cpu)) {
50
- valid_mask |= HCR_API | HCR_APK;
51
+
52
+ if (arm_feature(env, ARM_FEATURE_AARCH64)) {
53
+ if (cpu_isar_feature(aa64_vh, cpu)) {
54
+ valid_mask |= HCR_E2H;
55
+ }
56
+ if (cpu_isar_feature(aa64_lor, cpu)) {
57
+ valid_mask |= HCR_TLOR;
58
+ }
59
+ if (cpu_isar_feature(aa64_pauth, cpu)) {
60
+ valid_mask |= HCR_API | HCR_APK;
61
+ }
47
}
62
}
48
63
49
- *fsr = 0x00d; /* Permission fault */
64
/* Clear RES0 bits. */
50
+ fi->type = ARMFault_Permission;
65
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
51
+ fi->level = 1;
66
arm_cpu_update_vfiq(cpu);
52
return !(*prot & (1 << access_type));
53
}
67
}
54
68
55
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
69
+static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
56
static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
70
+{
57
MMUAccessType access_type, ARMMMUIdx mmu_idx,
71
+ do_hcr_write(env, value, 0);
58
hwaddr *phys_ptr, MemTxAttrs *txattrs,
72
+}
59
- int *prot, uint32_t *fsr)
73
+
60
+ int *prot, ARMMMUFaultInfo *fi)
74
static void hcr_writehigh(CPUARMState *env, const ARMCPRegInfo *ri,
75
uint64_t value)
61
{
76
{
62
uint32_t secure = regime_is_secure(env, mmu_idx);
77
/* Handle HCR2 write, i.e. write to high half of HCR_EL2 */
63
V8M_SAttributes sattrs = {};
78
value = deposit64(env->cp15.hcr_el2, 32, 32, value);
64
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
79
- hcr_write(env, NULL, value);
65
* (including possibly emulating an SG instruction).
80
+ do_hcr_write(env, value, MAKE_64BIT_MASK(0, 32));
66
*/
67
if (sattrs.ns != !secure) {
68
- *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
69
+ if (sattrs.nsc) {
70
+ fi->type = ARMFault_QEMU_NSCExec;
71
+ } else {
72
+ fi->type = ARMFault_QEMU_SFault;
73
+ }
74
*phys_ptr = address;
75
*prot = 0;
76
return true;
77
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
78
* If we added it we would need to do so as a special case
79
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
80
*/
81
- *fsr = M_FAKE_FSR_SFAULT;
82
+ fi->type = ARMFault_QEMU_SFault;
83
*phys_ptr = address;
84
*prot = 0;
85
return true;
86
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
87
}
88
89
return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
90
- txattrs, prot, fsr, NULL);
91
+ txattrs, prot, fi, NULL);
92
}
81
}
93
82
94
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
83
static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
95
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
84
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
96
if (arm_feature(env, ARM_FEATURE_V8)) {
85
{
97
/* PMSAv8 */
86
/* Handle HCR write, i.e. write to low half of HCR_EL2 */
98
ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
87
value = deposit64(env->cp15.hcr_el2, 0, 32, value);
99
- phys_ptr, attrs, prot, fsr);
88
- hcr_write(env, NULL, value);
100
+ phys_ptr, attrs, prot, fi);
89
+ do_hcr_write(env, value, MAKE_64BIT_MASK(32, 32));
101
+ *fsr = arm_fi_to_sfsc(fi);
90
}
102
} else if (arm_feature(env, ARM_FEATURE_V7)) {
91
103
/* PMSAv7 */
92
/*
104
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
105
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
106
uint32_t tt_resp;
107
bool r, rw, nsr, nsrw, mrvalid;
108
int prot;
109
+ ARMMMUFaultInfo fi = {};
110
MemTxAttrs attrs = {};
111
hwaddr phys_addr;
112
- uint32_t fsr;
113
ARMMMUIdx mmu_idx;
114
uint32_t mregion;
115
bool targetpriv;
116
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
117
if (arm_current_el(env) != 0 || alt) {
118
/* We can ignore the return value as prot is always set */
119
pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
120
- &phys_addr, &attrs, &prot, &fsr, &mregion);
121
+ &phys_addr, &attrs, &prot, &fi, &mregion);
122
if (mregion == -1) {
123
mrvalid = false;
124
mregion = 0;
125
--
93
--
126
2.7.4
94
2.20.1
127
95
128
96
1
The TT instruction is going to need to look up the MMU index
1
From: Richard Henderson <richard.henderson@linaro.org>
2
for a specified security and privilege state. Refactor the
3
existing arm_v7m_mmu_idx_for_secstate() into a version that
4
lets you specify the privilege state and one that uses the
5
current state of the CPU.
6
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20200229012811.24129-3-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 1512153879-5291-6-git-send-email-peter.maydell@linaro.org
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
---
7
---
12
target/arm/cpu.h | 21 ++++++++++++++++-----
8
target/arm/cpu.h | 7 +++++++
13
1 file changed, 16 insertions(+), 5 deletions(-)
9
1 file changed, 7 insertions(+)
14
10
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
13
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
14
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
15
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
20
}
16
#define HCR_TERR (1ULL << 36)
21
}
17
#define HCR_TEA (1ULL << 37)
22
18
#define HCR_MIOCNCE (1ULL << 38)
23
-/* Return the MMU index for a v7M CPU in the specified security state */
19
+/* RES0 bit 39 */
24
-static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
20
#define HCR_APK (1ULL << 40)
25
- bool secstate)
21
#define HCR_API (1ULL << 41)
26
+/* Return the MMU index for a v7M CPU in the specified security and
22
#define HCR_NV (1ULL << 42)
27
+ * privilege state
23
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
28
+ */
24
#define HCR_NV2 (1ULL << 45)
29
+static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
25
#define HCR_FWB (1ULL << 46)
30
+ bool secstate,
26
#define HCR_FIEN (1ULL << 47)
31
+ bool priv)
27
+/* RES0 bit 48 */
32
{
28
#define HCR_TID4 (1ULL << 49)
33
- int el = arm_current_el(env);
29
#define HCR_TICAB (1ULL << 50)
34
ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
30
+#define HCR_AMVOFFEN (1ULL << 51)
35
31
#define HCR_TOCU (1ULL << 52)
36
- if (el != 0) {
32
+#define HCR_ENSCXT (1ULL << 53)
37
+ if (priv) {
33
#define HCR_TTLBIS (1ULL << 54)
38
mmu_idx |= ARM_MMU_IDX_M_PRIV;
34
#define HCR_TTLBOS (1ULL << 55)
39
}
35
#define HCR_ATA (1ULL << 56)
40
36
#define HCR_DCT (1ULL << 57)
41
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
37
+#define HCR_TID5 (1ULL << 58)
42
return mmu_idx;
38
+#define HCR_TWEDEN (1ULL << 59)
43
}
39
+#define HCR_TWEDEL MAKE_64BIT_MASK(60, 4)
44
40
45
+/* Return the MMU index for a v7M CPU in the specified security state */
41
#define SCR_NS (1U << 0)
46
+static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
42
#define SCR_IRQ (1U << 1)
47
+ bool secstate)
48
+{
49
+ bool priv = arm_current_el(env) != 0;
50
+
51
+ return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
52
+}
53
+
54
/* Determine the current mmu_idx to use for normal loads/stores */
55
static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
56
{
57
--
43
--
58
2.7.4
44
2.20.1
59
45
60
46
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add support for the ZynqMP QSPI (consisting of the Generic QSPI and Legacy
3
In arm_cpu_reset, we configure many system registers so that user-only
4
QSPI) and connect Numonyx n25q512a11 flashes to it.
4
behaves as it should with a minimum of ifdefs. However, we do not set
5
all of the system registers as required for a cpu with EL2 and EL3.
5
6
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Disabling EL2 and EL3 means that we will not look at those registers,
7
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
8
which means that we don't have to worry about configuring them.
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20200229012811.24129-4-richard.henderson@linaro.org
11
Message-id: 20171126231634.9531-14-frasse.iglesias@gmail.com
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
---
14
include/hw/arm/xlnx-zynqmp.h | 5 +++++
15
target/arm/cpu.c | 6 ++++--
15
hw/arm/xlnx-zcu102.c | 23 +++++++++++++++++++++++
16
1 file changed, 4 insertions(+), 2 deletions(-)
16
hw/arm/xlnx-zynqmp.c | 26 ++++++++++++++++++++++++++
17
3 files changed, 54 insertions(+)
18
17
19
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
18
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
20
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
21
--- a/include/hw/arm/xlnx-zynqmp.h
20
--- a/target/arm/cpu.c
22
+++ b/include/hw/arm/xlnx-zynqmp.h
21
+++ b/target/arm/cpu.c
23
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_reset_hivecs_property =
24
#define XLNX_ZYNQMP_NUM_SDHCI 2
23
static Property arm_cpu_rvbar_property =
25
#define XLNX_ZYNQMP_NUM_SPIS 2
24
DEFINE_PROP_UINT64("rvbar", ARMCPU, rvbar, 0);
26
25
27
+#define XLNX_ZYNQMP_NUM_QSPI_BUS 2
26
+#ifndef CONFIG_USER_ONLY
28
+#define XLNX_ZYNQMP_NUM_QSPI_BUS_CS 2
27
static Property arm_cpu_has_el2_property =
29
+#define XLNX_ZYNQMP_NUM_QSPI_FLASH 4
28
DEFINE_PROP_BOOL("has_el2", ARMCPU, has_el2, true);
30
+
29
31
#define XLNX_ZYNQMP_NUM_OCM_BANKS 4
30
static Property arm_cpu_has_el3_property =
32
#define XLNX_ZYNQMP_OCM_RAM_0_ADDRESS 0xFFFC0000
31
DEFINE_PROP_BOOL("has_el3", ARMCPU, has_el3, true);
33
#define XLNX_ZYNQMP_OCM_RAM_SIZE 0x10000
32
+#endif
34
@@ -XXX,XX +XXX,XX @@ typedef struct XlnxZynqMPState {
33
35
SysbusAHCIState sata;
34
static Property arm_cpu_cfgend_property =
36
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
35
DEFINE_PROP_BOOL("cfgend", ARMCPU, cfgend, false);
37
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
36
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
38
+ XlnxZynqMPQSPIPS qspi;
37
qdev_property_add_static(DEVICE(obj), &arm_cpu_rvbar_property);
39
XlnxDPState dp;
40
XlnxDPDMAState dpdma;
41
42
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/arm/xlnx-zcu102.c
45
+++ b/hw/arm/xlnx-zcu102.c
46
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(XlnxZCU102 *s, MachineState *machine)
47
sysbus_connect_irq(SYS_BUS_DEVICE(&s->soc.spi[i]), 1, cs_line);
48
}
38
}
49
39
50
+ for (i = 0; i < XLNX_ZYNQMP_NUM_QSPI_FLASH; i++) {
40
+#ifndef CONFIG_USER_ONLY
51
+ SSIBus *spi_bus;
41
if (arm_feature(&cpu->env, ARM_FEATURE_EL3)) {
52
+ DeviceState *flash_dev;
42
/* Add the has_el3 state CPU property only if EL3 is allowed. This will
53
+ qemu_irq cs_line;
43
* prevent "has_el3" from existing on CPUs which cannot support EL3.
54
+ DriveInfo *dinfo = drive_get_next(IF_MTD);
44
*/
55
+ int bus = i / XLNX_ZYNQMP_NUM_QSPI_BUS_CS;
45
qdev_property_add_static(DEVICE(obj), &arm_cpu_has_el3_property);
56
+ gchar *bus_name = g_strdup_printf("qspi%d", bus);
46
57
+
47
-#ifndef CONFIG_USER_ONLY
58
+ spi_bus = (SSIBus *)qdev_get_child_bus(DEVICE(&s->soc), bus_name);
48
object_property_add_link(obj, "secure-memory",
59
+ g_free(bus_name);
49
TYPE_MEMORY_REGION,
60
+
50
(Object **)&cpu->secure_memory,
61
+ flash_dev = ssi_create_slave_no_init(spi_bus, "n25q512a11");
51
qdev_prop_allow_set_link_before_realize,
62
+ if (dinfo) {
52
OBJ_PROP_LINK_STRONG,
63
+ qdev_prop_set_drive(flash_dev, "drive", blk_by_legacy_dinfo(dinfo),
53
&error_abort);
64
+ &error_fatal);
54
-#endif
65
+ }
66
+ qdev_init_nofail(flash_dev);
67
+
68
+ cs_line = qdev_get_gpio_in_named(flash_dev, SSI_GPIO_CS, 0);
69
+
70
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->soc.qspi), i + 1, cs_line);
71
+ }
72
+
73
/* TODO create and connect IDE devices for ide_drive_get() */
74
75
xlnx_zcu102_binfo.ram_size = ram_size;
76
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/hw/arm/xlnx-zynqmp.c
79
+++ b/hw/arm/xlnx-zynqmp.c
80
@@ -XXX,XX +XXX,XX @@
81
#define SATA_ADDR 0xFD0C0000
82
#define SATA_NUM_PORTS 2
83
84
+#define QSPI_ADDR 0xff0f0000
85
+#define LQSPI_ADDR 0xc0000000
86
+#define QSPI_IRQ 15
87
+
88
#define DP_ADDR 0xfd4a0000
89
#define DP_IRQ 113
90
91
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
92
qdev_set_parent_bus(DEVICE(&s->spi[i]), sysbus_get_default());
93
}
55
}
94
56
95
+ object_initialize(&s->qspi, sizeof(s->qspi), TYPE_XLNX_ZYNQMP_QSPIPS);
57
if (arm_feature(&cpu->env, ARM_FEATURE_EL2)) {
96
+ qdev_set_parent_bus(DEVICE(&s->qspi), sysbus_get_default());
58
qdev_property_add_static(DEVICE(obj), &arm_cpu_has_el2_property);
97
+
98
object_initialize(&s->dp, sizeof(s->dp), TYPE_XLNX_DP);
99
qdev_set_parent_bus(DEVICE(&s->dp), sysbus_get_default());
100
101
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
102
g_free(bus_name);
103
}
59
}
104
60
+#endif
105
+ object_property_set_bool(OBJECT(&s->qspi), true, "realized", &err);
61
106
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->qspi), 0, QSPI_ADDR);
62
if (arm_feature(&cpu->env, ARM_FEATURE_PMU)) {
107
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->qspi), 1, LQSPI_ADDR);
63
cpu->has_pmu = true;
108
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->qspi), 0, gic_spi[QSPI_IRQ]);
109
+
110
+ for (i = 0; i < XLNX_ZYNQMP_NUM_QSPI_BUS; i++) {
111
+ gchar *bus_name;
112
+ gchar *target_bus;
113
+
114
+ /* Alias controller SPI bus to the SoC itself */
115
+ bus_name = g_strdup_printf("qspi%d", i);
116
+ target_bus = g_strdup_printf("spi%d", i);
117
+ object_property_add_alias(OBJECT(s), bus_name,
118
+ OBJECT(&s->qspi), target_bus,
119
+ &error_abort);
120
+ g_free(bus_name);
121
+ g_free(target_bus);
122
+ }
123
+
124
object_property_set_bool(OBJECT(&s->dp), true, "realized", &err);
125
if (err) {
126
error_propagate(errp, err);
127
--
64
--
128
2.7.4
65
2.20.1
129
66
130
67
diff view generated by jsdifflib
1
From: Prasad J Pandit <pjp@fedoraproject.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The ctz32() routine could return a value greater than
3
We have disabled EL2 and EL3 for user-only, which means that these
4
TC6393XB_GPIOS=16, because the device has 24 GPIO level
4
registers "don't exist" and should not be set.
5
bits but we only implement 16 outgoing lines. This could
6
lead to an OOB array access. Mask 'level' to avoid it.
7
5
8
Reported-by: Moguofang <moguofang@huawei.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
7
Message-id: 20200229012811.24129-5-richard.henderson@linaro.org
10
Message-id: 20171212041539.25700-1-ppandit@redhat.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
hw/display/tc6393xb.c | 1 +
11
target/arm/cpu.c | 6 ------
15
1 file changed, 1 insertion(+)
12
1 file changed, 6 deletions(-)
16
13
17
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
14
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
18
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/display/tc6393xb.c
16
--- a/target/arm/cpu.c
20
+++ b/hw/display/tc6393xb.c
17
+++ b/target/arm/cpu.c
21
@@ -XXX,XX +XXX,XX @@ static void tc6393xb_gpio_handler_update(TC6393xbState *s)
18
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
22
int bit;
19
/* Enable all PAC keys. */
23
20
env->cp15.sctlr_el[1] |= (SCTLR_EnIA | SCTLR_EnIB |
24
level = s->gpio_level & s->gpio_dir;
21
SCTLR_EnDA | SCTLR_EnDB);
25
+ level &= MAKE_64BIT_MASK(0, TC6393XB_GPIOS);
22
- /* Enable all PAC instructions */
26
23
- env->cp15.hcr_el2 |= HCR_API;
27
for (diff = s->prev_level ^ level; diff; diff ^= 1 << bit) {
24
- env->cp15.scr_el3 |= SCR_API;
28
bit = ctz32(diff);
25
/* and to the FP/Neon instructions */
26
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 2, 3);
27
/* and to the SVE instructions */
28
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
29
- env->cp15.cptr_el[3] |= CPTR_EZ;
30
/* with maximum vector length */
31
env->vfp.zcr_el[1] = cpu_isar_feature(aa64_sve, cpu) ?
32
cpu->sve_max_vq - 1 : 0;
33
- env->vfp.zcr_el[2] = env->vfp.zcr_el[1];
34
- env->vfp.zcr_el[3] = env->vfp.zcr_el[1];
35
/*
36
* Enable TBI0 and TBI1. While the real kernel only enables TBI0,
37
* turning on both here will produce smaller code and otherwise
29
--
38
--
30
2.7.4
39
2.20.1
31
40
32
41
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Now that do_ats_write() is entirely in control of whether to
generate a 32-bit PAR or a 64-bit PAR, we can make it use the
correct (complicated) condition for doing so.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1512503192-2239-13-git-send-email-peter.maydell@linaro.org
[PMM: Rebased Edgar's patch on top of get_phys_addr() refactoring;
 use arm_s1_regime_using_lpae_format() rather than
 regime_using_lpae_format() because the latter will assert
 if passed ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1;
 updated commit message appropriately]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Update the {TGE,E2H} == '11' masking to ARMv8.6.
If EL2 is configured for aarch32, disable all of
the bits that are RES0 in aarch32 mode.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/helper.c
17
--- a/target/arm/helper.c
26
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper.c
27
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
19
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
28
int prot;
20
* Since the v8.4 language applies to the entire register, and
29
bool ret;
21
* appears to be backward compatible, use that.
30
uint64_t par64;
22
*/
31
+ bool format64 = false;
23
- ret = 0;
32
MemTxAttrs attrs = {};
24
- } else if (ret & HCR_TGE) {
33
ARMMMUFaultInfo fi = {};
25
- /* These bits are up-to-date as of ARMv8.4. */
34
ARMCacheAttrs cacheattrs = {};
26
+ return 0;
35
36
ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
37
&prot, &page_size, &fi, &cacheattrs);
38
- /* TODO: this is not the correct condition to use to decide whether
39
- * to report a PAR in 64-bit or 32-bit format.
40
- */
41
- if (arm_s1_regime_using_lpae_format(env, mmu_idx)) {
42
+
43
+ if (is_a64(env)) {
44
+ format64 = true;
45
+ } else if (arm_feature(env, ARM_FEATURE_LPAE)) {
46
+ /*
47
+ * ATS1Cxx:
48
+ * * TTBCR.EAE determines whether the result is returned using the
49
+ * 32-bit or the 64-bit PAR format
50
+ * * Instructions executed in Hyp mode always use the 64bit format
51
+ *
52
+ * ATS1S2NSOxx uses the 64bit format if any of the following is true:
53
+ * * The Non-secure TTBCR.EAE bit is set to 1
54
+ * * The implementation includes EL2, and the value of HCR.VM is 1
55
+ *
56
+ * ATS1Hx always uses the 64bit format (not supported yet).
57
+ */
58
+ format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
59
+
60
+ if (arm_feature(env, ARM_FEATURE_EL2)) {
61
+ if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
62
+ format64 |= env->cp15.hcr_el2 & HCR_VM;
63
+ } else {
64
+ format64 |= arm_current_el(env) == 2;
65
+ }
66
+ }
67
+ }
27
+ }
68
+
28
+
69
+ if (format64) {
29
+ /*
70
/* Create a 64-bit PAR */
30
+ * For a cpu that supports both aarch64 and aarch32, we can set bits
71
par64 = (1 << 11); /* LPAE bit always set */
31
+ * in HCR_EL2 (e.g. via EL3) that are RES0 when we enter EL2 as aa32.
72
if (!ret) {
32
+ * Ignore all of the bits in HCR+HCR2 that are not valid for aarch32.
33
+ */
34
+ if (!arm_el_is_aa64(env, 2)) {
35
+ uint64_t aa32_valid;
36
+
37
+ /*
38
+ * These bits are up-to-date as of ARMv8.6.
39
+ * For HCR, it's easiest to list just the 2 bits that are invalid.
40
+ * For HCR2, list those that are valid.
41
+ */
42
+ aa32_valid = MAKE_64BIT_MASK(0, 32) & ~(HCR_RW | HCR_TDZ);
43
+ aa32_valid |= (HCR_CD | HCR_ID | HCR_TERR | HCR_TEA | HCR_MIOCNCE |
44
+ HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_TTLBIS);
45
+ ret &= aa32_valid;
46
+ }
47
+
48
+ if (ret & HCR_TGE) {
49
+ /* These bits are up-to-date as of ARMv8.6. */
50
if (ret & HCR_E2H) {
51
ret &= ~(HCR_VM | HCR_FMO | HCR_IMO | HCR_AMO |
52
HCR_BSU_MASK | HCR_DC | HCR_TWI | HCR_TWE |
53
HCR_TID0 | HCR_TID2 | HCR_TPCP | HCR_TPU |
54
- HCR_TDZ | HCR_CD | HCR_ID | HCR_MIOCNCE);
55
+ HCR_TDZ | HCR_CD | HCR_ID | HCR_MIOCNCE |
56
+ HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_ENSCXT |
57
+ HCR_TTLBIS | HCR_TTLBOS | HCR_TID5);
58
} else {
59
ret |= HCR_FMO | HCR_IMO | HCR_AMO;
60
}
73
--
61
--
74
2.7.4
62
2.20.1
75
63
76
64
For the TT instruction we're going to need to do an MPU lookup that
also tells us which MPU region the access hit. This requires us
to do the MPU lookup without first doing the SAU security access
check, so pull the MPU lookup parts of get_phys_addr_pmsav8()
out into their own function.

The TT instruction also needs to know the MPU region number which
the lookup hit, so provide this information to the caller of the
MPU lookup code, even though get_phys_addr_pmsav8() doesn't
need to know it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-7-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 target/arm/helper.c | 130 +++++++++++++++++++++++++++++++---------------------
 1 file changed, 79 insertions(+), 51 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

These bits trap EL1 access to various virtual memory controls.

Buglink: https://bugs.launchpad.net/bugs/1855072
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 82 ++++++++++++++++++++++++++++++---------------
 1 file changed, 55 insertions(+), 27 deletions(-)
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
18
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri,
25
}
19
return CP_ACCESS_OK;
26
}
20
}
27
21
28
-static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
22
+/* Check for traps from EL1 due to HCR_EL2.TVM and HCR_EL2.TRVM. */
29
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
23
+static CPAccessResult access_tvm_trvm(CPUARMState *env, const ARMCPRegInfo *ri,
30
- hwaddr *phys_ptr, MemTxAttrs *txattrs,
24
+ bool isread)
31
- int *prot, uint32_t *fsr)
32
+static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
33
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
34
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
35
+ int *prot, uint32_t *fsr, uint32_t *mregion)
36
{
37
+ /* Perform a PMSAv8 MPU lookup (without also doing the SAU check
38
+ * that a full phys-to-virt translation does).
39
+ * mregion is (if not NULL) set to the region number which matched,
40
+ * or -1 if no region number is returned (MPU off, address did not
41
+ * hit a region, address hit in multiple regions).
42
+ */
43
ARMCPU *cpu = arm_env_get_cpu(env);
44
bool is_user = regime_is_user(env, mmu_idx);
45
uint32_t secure = regime_is_secure(env, mmu_idx);
46
int n;
47
int matchregion = -1;
48
bool hit = false;
49
- V8M_SAttributes sattrs = {};
50
51
*phys_ptr = address;
52
*prot = 0;
53
-
54
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
55
- v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
56
- if (access_type == MMU_INST_FETCH) {
57
- /* Instruction fetches always use the MMU bank and the
58
- * transaction attribute determined by the fetch address,
59
- * regardless of CPU state. This is painful for QEMU
60
- * to handle, because it would mean we need to encode
61
- * into the mmu_idx not just the (user, negpri) information
62
- * for the current security state but also that for the
63
- * other security state, which would balloon the number
64
- * of mmu_idx values needed alarmingly.
65
- * Fortunately we can avoid this because it's not actually
66
- * possible to arbitrarily execute code from memory with
67
- * the wrong security attribute: it will always generate
68
- * an exception of some kind or another, apart from the
69
- * special case of an NS CPU executing an SG instruction
70
- * in S&NSC memory. So we always just fail the translation
71
- * here and sort things out in the exception handler
72
- * (including possibly emulating an SG instruction).
73
- */
74
- if (sattrs.ns != !secure) {
75
- *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
76
- return true;
77
- }
78
- } else {
79
- /* For data accesses we always use the MMU bank indicated
80
- * by the current CPU state, but the security attributes
81
- * might downgrade a secure access to nonsecure.
82
- */
83
- if (sattrs.ns) {
84
- txattrs->secure = false;
85
- } else if (!secure) {
86
- /* NS access to S memory must fault.
87
- * Architecturally we should first check whether the
88
- * MPU information for this address indicates that we
89
- * are doing an unaligned access to Device memory, which
90
- * should generate a UsageFault instead. QEMU does not
91
- * currently check for that kind of unaligned access though.
92
- * If we added it we would need to do so as a special case
93
- * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
94
- */
95
- *fsr = M_FAKE_FSR_SFAULT;
96
- return true;
97
- }
98
- }
99
+ if (mregion) {
100
+ *mregion = -1;
101
}
102
103
/* Unlike the ARM ARM pseudocode, we don't need to check whether this
104
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
105
/* We don't need to look the attribute up in the MAIR0/MAIR1
106
* registers because that only tells us about cacheability.
107
*/
108
+ if (mregion) {
109
+ *mregion = matchregion;
110
+ }
111
}
112
113
*fsr = 0x00d; /* Permission fault */
114
return !(*prot & (1 << access_type));
115
}
116
117
+
118
+static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
119
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
120
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
121
+ int *prot, uint32_t *fsr)
122
+{
25
+{
123
+ uint32_t secure = regime_is_secure(env, mmu_idx);
26
+ if (arm_current_el(env) == 1) {
124
+ V8M_SAttributes sattrs = {};
27
+ uint64_t trap = isread ? HCR_TRVM : HCR_TVM;
125
+
28
+ if (arm_hcr_el2_eff(env) & trap) {
126
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
29
+ return CP_ACCESS_TRAP_EL2;
127
+ v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
128
+ if (access_type == MMU_INST_FETCH) {
129
+ /* Instruction fetches always use the MMU bank and the
130
+ * transaction attribute determined by the fetch address,
131
+ * regardless of CPU state. This is painful for QEMU
132
+ * to handle, because it would mean we need to encode
133
+ * into the mmu_idx not just the (user, negpri) information
134
+ * for the current security state but also that for the
135
+ * other security state, which would balloon the number
136
+ * of mmu_idx values needed alarmingly.
137
+ * Fortunately we can avoid this because it's not actually
138
+ * possible to arbitrarily execute code from memory with
139
+ * the wrong security attribute: it will always generate
140
+ * an exception of some kind or another, apart from the
141
+ * special case of an NS CPU executing an SG instruction
142
+ * in S&NSC memory. So we always just fail the translation
143
+ * here and sort things out in the exception handler
144
+ * (including possibly emulating an SG instruction).
145
+ */
146
+ if (sattrs.ns != !secure) {
147
+ *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
148
+ *phys_ptr = address;
149
+ *prot = 0;
150
+ return true;
151
+ }
152
+ } else {
153
+ /* For data accesses we always use the MMU bank indicated
154
+ * by the current CPU state, but the security attributes
155
+ * might downgrade a secure access to nonsecure.
156
+ */
157
+ if (sattrs.ns) {
158
+ txattrs->secure = false;
159
+ } else if (!secure) {
160
+ /* NS access to S memory must fault.
161
+ * Architecturally we should first check whether the
162
+ * MPU information for this address indicates that we
163
+ * are doing an unaligned access to Device memory, which
164
+ * should generate a UsageFault instead. QEMU does not
165
+ * currently check for that kind of unaligned access though.
166
+ * If we added it we would need to do so as a special case
167
+ * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
168
+ */
169
+ *fsr = M_FAKE_FSR_SFAULT;
170
+ *phys_ptr = address;
171
+ *prot = 0;
172
+ return true;
173
+ }
174
+ }
30
+ }
175
+ }
31
+ }
176
+
32
+ return CP_ACCESS_OK;
177
+ return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
178
+ txattrs, prot, fsr, NULL);
179
+}
33
+}
180
+
34
+
181
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
35
static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
182
MMUAccessType access_type, ARMMMUIdx mmu_idx,
36
{
183
hwaddr *phys_ptr, int *prot, uint32_t *fsr)
37
ARMCPU *cpu = env_archcpu(env);
38
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
39
*/
40
{ .name = "CONTEXTIDR_EL1", .state = ARM_CP_STATE_BOTH,
41
.opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1,
42
- .access = PL1_RW, .secure = ARM_CP_SECSTATE_NS,
43
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
44
+ .secure = ARM_CP_SECSTATE_NS,
45
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[1]),
46
.resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
47
{ .name = "CONTEXTIDR_S", .state = ARM_CP_STATE_AA32,
48
.cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1,
49
- .access = PL1_RW, .secure = ARM_CP_SECSTATE_S,
50
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
51
+ .secure = ARM_CP_SECSTATE_S,
52
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_s),
53
.resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
54
REGINFO_SENTINEL
55
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
56
/* MMU Domain access control / MPU write buffer control */
57
{ .name = "DACR",
58
.cp = 15, .opc1 = CP_ANY, .crn = 3, .crm = CP_ANY, .opc2 = CP_ANY,
59
- .access = PL1_RW, .resetvalue = 0,
60
+ .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0,
61
.writefn = dacr_write, .raw_writefn = raw_write,
62
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s),
63
offsetoflow32(CPUARMState, cp15.dacr_ns) } },
64
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
65
{ .name = "DMB", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 5,
66
.access = PL0_W, .type = ARM_CP_NOP },
67
{ .name = "IFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 2,
68
- .access = PL1_RW,
69
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
70
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ifar_s),
71
offsetof(CPUARMState, cp15.ifar_ns) },
72
.resetvalue = 0, },
73
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
74
*/
75
{ .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH,
76
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 0,
77
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
78
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
79
+ .type = ARM_CP_CONST, .resetvalue = 0 },
80
{ .name = "AFSR1_EL1", .state = ARM_CP_STATE_BOTH,
81
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 1,
82
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
83
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
84
+ .type = ARM_CP_CONST, .resetvalue = 0 },
85
/* MAIR can just read-as-written because we don't implement caches
86
* and so don't need to care about memory attributes.
87
*/
88
{ .name = "MAIR_EL1", .state = ARM_CP_STATE_AA64,
89
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0,
90
- .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.mair_el[1]),
91
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
92
+ .fieldoffset = offsetof(CPUARMState, cp15.mair_el[1]),
93
.resetvalue = 0 },
94
{ .name = "MAIR_EL3", .state = ARM_CP_STATE_AA64,
95
.opc0 = 3, .opc1 = 6, .crn = 10, .crm = 2, .opc2 = 0,
96
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
97
* handled in the field definitions.
98
*/
99
{ .name = "MAIR0", .state = ARM_CP_STATE_AA32,
100
- .cp = 15, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0, .access = PL1_RW,
101
+ .cp = 15, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0,
102
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
103
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.mair0_s),
104
offsetof(CPUARMState, cp15.mair0_ns) },
105
.resetfn = arm_cp_reset_ignore },
106
{ .name = "MAIR1", .state = ARM_CP_STATE_AA32,
107
- .cp = 15, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 1, .access = PL1_RW,
108
+ .cp = 15, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 1,
109
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
110
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.mair1_s),
111
offsetof(CPUARMState, cp15.mair1_ns) },
112
.resetfn = arm_cp_reset_ignore },
113
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
114
115
static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
116
{ .name = "DFSR", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 0,
117
- .access = PL1_RW, .type = ARM_CP_ALIAS,
118
+ .access = PL1_RW, .accessfn = access_tvm_trvm, .type = ARM_CP_ALIAS,
119
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dfsr_s),
120
offsetoflow32(CPUARMState, cp15.dfsr_ns) }, },
121
{ .name = "IFSR", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 1,
122
- .access = PL1_RW, .resetvalue = 0,
123
+ .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0,
124
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.ifsr_s),
125
offsetoflow32(CPUARMState, cp15.ifsr_ns) } },
126
{ .name = "DFAR", .cp = 15, .opc1 = 0, .crn = 6, .crm = 0, .opc2 = 0,
127
- .access = PL1_RW, .resetvalue = 0,
128
+ .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0,
129
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.dfar_s),
130
offsetof(CPUARMState, cp15.dfar_ns) } },
131
{ .name = "FAR_EL1", .state = ARM_CP_STATE_AA64,
132
.opc0 = 3, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 0,
133
- .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.far_el[1]),
134
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
135
+ .fieldoffset = offsetof(CPUARMState, cp15.far_el[1]),
136
.resetvalue = 0, },
137
REGINFO_SENTINEL
138
};
139
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
140
static const ARMCPRegInfo vmsa_cp_reginfo[] = {
141
{ .name = "ESR_EL1", .state = ARM_CP_STATE_AA64,
142
.opc0 = 3, .crn = 5, .crm = 2, .opc1 = 0, .opc2 = 0,
143
- .access = PL1_RW,
144
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
145
.fieldoffset = offsetof(CPUARMState, cp15.esr_el[1]), .resetvalue = 0, },
146
{ .name = "TTBR0_EL1", .state = ARM_CP_STATE_BOTH,
147
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 0,
148
- .access = PL1_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
149
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
150
+ .writefn = vmsa_ttbr_write, .resetvalue = 0,
151
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s),
152
offsetof(CPUARMState, cp15.ttbr0_ns) } },
153
{ .name = "TTBR1_EL1", .state = ARM_CP_STATE_BOTH,
154
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 1,
155
- .access = PL1_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
156
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
157
+ .writefn = vmsa_ttbr_write, .resetvalue = 0,
158
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s),
159
offsetof(CPUARMState, cp15.ttbr1_ns) } },
160
{ .name = "TCR_EL1", .state = ARM_CP_STATE_AA64,
161
.opc0 = 3, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2,
162
- .access = PL1_RW, .writefn = vmsa_tcr_el12_write,
163
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
164
+ .writefn = vmsa_tcr_el12_write,
165
.resetfn = vmsa_ttbcr_reset, .raw_writefn = raw_write,
166
.fieldoffset = offsetof(CPUARMState, cp15.tcr_el[1]) },
167
{ .name = "TTBCR", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2,
168
- .access = PL1_RW, .type = ARM_CP_ALIAS, .writefn = vmsa_ttbcr_write,
169
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
170
+ .type = ARM_CP_ALIAS, .writefn = vmsa_ttbcr_write,
171
.raw_writefn = vmsa_ttbcr_raw_write,
172
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tcr_el[3]),
173
offsetoflow32(CPUARMState, cp15.tcr_el[1])} },
174
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
175
*/
176
static const ARMCPRegInfo ttbcr2_reginfo = {
177
.name = "TTBCR2", .cp = 15, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 3,
178
- .access = PL1_RW, .type = ARM_CP_ALIAS,
179
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
180
+ .type = ARM_CP_ALIAS,
181
.bank_fieldoffsets = { offsetofhigh32(CPUARMState, cp15.tcr_el[3]),
182
offsetofhigh32(CPUARMState, cp15.tcr_el[1]) },
183
};
184
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lpae_cp_reginfo[] = {
185
/* NOP AMAIR0/1 */
186
{ .name = "AMAIR0", .state = ARM_CP_STATE_BOTH,
187
.opc0 = 3, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 0,
188
- .access = PL1_RW, .type = ARM_CP_CONST,
189
- .resetvalue = 0 },
190
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
191
+ .type = ARM_CP_CONST, .resetvalue = 0 },
192
/* AMAIR1 is mapped to AMAIR_EL1[63:32] */
193
{ .name = "AMAIR1", .cp = 15, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 1,
194
- .access = PL1_RW, .type = ARM_CP_CONST,
195
- .resetvalue = 0 },
196
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
197
+ .type = ARM_CP_CONST, .resetvalue = 0 },
198
{ .name = "PAR", .cp = 15, .crm = 7, .opc1 = 0,
199
.access = PL1_RW, .type = ARM_CP_64BIT, .resetvalue = 0,
200
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.par_s),
201
offsetof(CPUARMState, cp15.par_ns)} },
202
{ .name = "TTBR0", .cp = 15, .crm = 2, .opc1 = 0,
203
- .access = PL1_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS,
204
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
205
+ .type = ARM_CP_64BIT | ARM_CP_ALIAS,
206
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s),
207
offsetof(CPUARMState, cp15.ttbr0_ns) },
208
.writefn = vmsa_ttbr_write, },
209
{ .name = "TTBR1", .cp = 15, .crm = 2, .opc1 = 1,
210
- .access = PL1_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS,
211
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
212
+ .type = ARM_CP_64BIT | ARM_CP_ALIAS,
213
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s),
214
offsetof(CPUARMState, cp15.ttbr1_ns) },
215
.writefn = vmsa_ttbr_write, },
216
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
217
.type = ARM_CP_NOP, .access = PL1_W },
218
/* MMU Domain access control / MPU write buffer control */
219
{ .name = "DACR", .cp = 15, .opc1 = 0, .crn = 3, .crm = 0, .opc2 = 0,
220
- .access = PL1_RW, .resetvalue = 0,
221
+ .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0,
222
.writefn = dacr_write, .raw_writefn = raw_write,
223
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s),
224
offsetoflow32(CPUARMState, cp15.dacr_ns) } },
225
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
226
ARMCPRegInfo sctlr = {
227
.name = "SCTLR", .state = ARM_CP_STATE_BOTH,
228
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0,
229
- .access = PL1_RW,
230
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
231
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.sctlr_s),
232
offsetof(CPUARMState, cp15.sctlr_ns) },
233
.writefn = sctlr_write, .resetvalue = cpu->reset_sctlr,
184
--
234
--
185
2.7.4
235
2.20.1
186
236
187
237
In do_ats_write(), rather than using the FSR value from get_phys_addr(),
construct the PAR values using the information in the ARMMMUFaultInfo
struct. This allows us to create a PAR of the correct format regardless
of what the translation table format is.

For the moment we leave the condition for "when should this be a
64 bit PAR" as it was previously; this will need to be fixed to
properly support AArch32 Hyp mode.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-11-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

These bits trap EL1 access to set/way cache maintenance insns.

Buglink: https://bugs.launchpad.net/bugs/1863685
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
18
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tvm_trvm(CPUARMState *env, const ARMCPRegInfo *ri,
24
hwaddr phys_addr;
19
return CP_ACCESS_OK;
25
target_ulong page_size;
20
}
26
int prot;
21
27
- uint32_t fsr;
22
+/* Check for traps from EL1 due to HCR_EL2.TSW. */
28
+ uint32_t fsr_unused;
23
+static CPAccessResult access_tsw(CPUARMState *env, const ARMCPRegInfo *ri,
29
bool ret;
24
+ bool isread)
30
uint64_t par64;
25
+{
31
MemTxAttrs attrs = {};
26
+ if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TSW)) {
32
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
27
+ return CP_ACCESS_TRAP_EL2;
33
ARMCacheAttrs cacheattrs = {};
28
+ }
34
29
+ return CP_ACCESS_OK;
35
ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
30
+}
36
- &prot, &page_size, &fsr, &fi, &cacheattrs);
37
+ &prot, &page_size, &fsr_unused, &fi, &cacheattrs);
38
+ /* TODO: this is not the correct condition to use to decide whether
39
+ * to report a PAR in 64-bit or 32-bit format.
40
+ */
41
if (arm_s1_regime_using_lpae_format(env, mmu_idx)) {
42
- /* fsr is a DFSR/IFSR value for the long descriptor
43
- * translation table format, but with WnR always clear.
44
- * Convert it to a 64-bit PAR.
45
- */
46
+ /* Create a 64-bit PAR */
47
par64 = (1 << 11); /* LPAE bit always set */
48
if (!ret) {
49
par64 |= phys_addr & ~0xfffULL;
50
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
51
par64 |= (uint64_t)cacheattrs.attrs << 56; /* ATTR */
52
par64 |= cacheattrs.shareability << 7; /* SH */
53
} else {
54
+ uint32_t fsr = arm_fi_to_lfsc(&fi);
55
+
31
+
56
par64 |= 1; /* F */
32
static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
57
par64 |= (fsr & 0x3f) << 1; /* FS */
33
{
58
/* Note that S2WLK and FSTAGE are always zero, because we don't
34
ARMCPU *cpu = env_archcpu(env);
59
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
35
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
60
par64 |= (1 << 9); /* NS */
36
.access = PL1_W, .type = ARM_CP_NOP },
61
}
37
{ .name = "DC_ISW", .state = ARM_CP_STATE_AA64,
62
} else {
38
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2,
63
+ uint32_t fsr = arm_fi_to_sfsc(&fi);
39
- .access = PL1_W, .type = ARM_CP_NOP },
64
+
40
+ .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
65
par64 = ((fsr & (1 << 10)) >> 5) | ((fsr & (1 << 12)) >> 6) |
41
{ .name = "DC_CVAC", .state = ARM_CP_STATE_AA64,
66
((fsr & 0xf) << 1) | 1;
42
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 1,
67
}
43
.access = PL0_W, .type = ARM_CP_NOP,
44
.accessfn = aa64_cacheop_access },
45
{ .name = "DC_CSW", .state = ARM_CP_STATE_AA64,
46
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
47
- .access = PL1_W, .type = ARM_CP_NOP },
48
+ .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
49
{ .name = "DC_CVAU", .state = ARM_CP_STATE_AA64,
50
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 11, .opc2 = 1,
51
.access = PL0_W, .type = ARM_CP_NOP,
52
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
53
.accessfn = aa64_cacheop_access },
54
{ .name = "DC_CISW", .state = ARM_CP_STATE_AA64,
55
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
56
- .access = PL1_W, .type = ARM_CP_NOP },
57
+ .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
58
/* TLBI operations */
59
{ .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
60
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
61
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
62
{ .name = "DCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1,
63
.type = ARM_CP_NOP, .access = PL1_W },
64
{ .name = "DCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2,
65
- .type = ARM_CP_NOP, .access = PL1_W },
66
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
67
{ .name = "DCCMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 1,
68
.type = ARM_CP_NOP, .access = PL1_W },
69
{ .name = "DCCSW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
70
- .type = ARM_CP_NOP, .access = PL1_W },
71
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
72
{ .name = "DCCMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 11, .opc2 = 1,
73
.type = ARM_CP_NOP, .access = PL1_W },
74
{ .name = "DCCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 1,
75
.type = ARM_CP_NOP, .access = PL1_W },
76
{ .name = "DCCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
77
- .type = ARM_CP_NOP, .access = PL1_W },
78
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
79
/* MMU Domain access control / MPU write buffer control */
80
{ .name = "DACR", .cp = 15, .opc1 = 0, .crn = 3, .crm = 0, .opc2 = 0,
81
.access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0,
68
--
82
--
69
2.7.4
83
2.20.1
70
84
71
85
Make get_phys_addr_v6() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-6-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 41 ++++++++++++++++++-----------------------
 1 file changed, 18 insertions(+), 23 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This bit traps EL1 access to the auxiliary control registers.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
15
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
16
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
17
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tsw(CPUARMState *env, const ARMCPRegInfo *ri,
18
static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
18
return CP_ACCESS_OK;
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
20
hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
21
- target_ulong *page_size_ptr, uint32_t *fsr,
22
+ target_ulong *page_size_ptr,
23
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs);
24
25
/* Security attributes for an address, as returned by v8m_security_lookup. */
26
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
27
hwaddr s2pa;
28
int s2prot;
29
int ret;
30
- uint32_t fsr;
31
32
ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
33
- &txattrs, &s2prot, &s2size, &fsr, fi, NULL);
34
+ &txattrs, &s2prot, &s2size, fi, NULL);
35
if (ret) {
36
fi->s2addr = addr;
37
fi->stage2 = true;
38
@@ -XXX,XX +XXX,XX @@ do_fault:
39
return true;
40
}
19
}
41
20
42
-/* Fault type for long-descriptor MMU fault reporting; this corresponds
21
+/* Check for traps from EL1 due to HCR_EL2.TACR. */
43
- * to bits [5..2] in the STATUS field in long-format DFSR/IFSR.
22
+static CPAccessResult access_tacr(CPUARMState *env, const ARMCPRegInfo *ri,
44
- */
23
+ bool isread)
45
-typedef enum {
24
+{
46
- translation_fault = 1,
25
+ if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TACR)) {
47
- access_fault = 2,
26
+ return CP_ACCESS_TRAP_EL2;
48
- permission_fault = 3,
27
+ }
49
-} MMUFaultType;
28
+ return CP_ACCESS_OK;
50
-
29
+}
51
/*
30
+
52
* check_s2_mmu_setup
31
static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
53
* @cpu: ARMCPU
54
@@ -XXX,XX +XXX,XX @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
55
static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
56
MMUAccessType access_type, ARMMMUIdx mmu_idx,
57
hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
58
- target_ulong *page_size_ptr, uint32_t *fsr,
59
+ target_ulong *page_size_ptr,
60
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
61
{
32
{
62
ARMCPU *cpu = arm_env_get_cpu(env);
33
ARMCPU *cpu = env_archcpu(env);
63
CPUState *cs = CPU(cpu);
34
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1cp_reginfo[] = {
64
/* Read an LPAE long-descriptor translation table. */
35
static const ARMCPRegInfo actlr2_hactlr2_reginfo[] = {
65
- MMUFaultType fault_type = translation_fault;
36
{ .name = "ACTLR2", .state = ARM_CP_STATE_AA32,
66
+ ARMFaultType fault_type = ARMFault_Translation;
37
.cp = 15, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 3,
67
uint32_t level;
38
- .access = PL1_RW, .type = ARM_CP_CONST,
68
uint32_t epd = 0;
39
- .resetvalue = 0 },
69
int32_t t0sz, t1sz;
40
+ .access = PL1_RW, .accessfn = access_tacr,
70
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
41
+ .type = ARM_CP_CONST, .resetvalue = 0 },
71
ttbr_select = 1;
42
{ .name = "HACTLR2", .state = ARM_CP_STATE_AA32,
72
} else {
43
.cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3,
73
/* in the gap between the two regions, this is a Translation fault */
44
.access = PL2_RW, .type = ARM_CP_CONST,
74
- fault_type = translation_fault;
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
75
+ fault_type = ARMFault_Translation;
46
ARMCPRegInfo auxcr_reginfo[] = {
76
goto do_fault;
47
{ .name = "ACTLR_EL1", .state = ARM_CP_STATE_BOTH,
77
}
48
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 1,
78
49
- .access = PL1_RW, .type = ARM_CP_CONST,
79
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
50
- .resetvalue = cpu->reset_auxcr },
80
ok = check_s2_mmu_setup(cpu, aarch64, startlevel,
51
+ .access = PL1_RW, .accessfn = access_tacr,
81
inputsize, stride);
52
+ .type = ARM_CP_CONST, .resetvalue = cpu->reset_auxcr },
82
if (!ok) {
53
{ .name = "ACTLR_EL2", .state = ARM_CP_STATE_BOTH,
83
- fault_type = translation_fault;
54
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 1,
84
+ fault_type = ARMFault_Translation;
55
.access = PL2_RW, .type = ARM_CP_CONST,
85
goto do_fault;
86
}
87
level = startlevel;
88
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
89
/* Here descaddr is the final physical address, and attributes
90
* are all in attrs.
91
*/
92
- fault_type = access_fault;
93
+ fault_type = ARMFault_AccessFlag;
94
if ((attrs & (1 << 8)) == 0) {
95
/* Access flag */
96
goto do_fault;
97
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
98
*prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
99
}
100
101
- fault_type = permission_fault;
102
+ fault_type = ARMFault_Permission;
103
if (!(*prot & (1 << access_type))) {
104
goto do_fault;
105
}
106
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
107
return false;
108
109
do_fault:
110
- /* Long-descriptor format IFSR/DFSR value */
111
- *fsr = (1 << 9) | (fault_type << 2) | level;
112
+ fi->type = fault_type;
113
+ fi->level = level;
114
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
115
fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_S2NS);
116
return true;
117
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
118
/* S1 is done. Now do S2 translation. */
119
ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_S2NS,
120
phys_ptr, attrs, &s2_prot,
121
- page_size, fsr, fi,
122
+ page_size, fi,
123
cacheattrs != NULL ? &cacheattrs2 : NULL);
124
+ *fsr = arm_fi_to_lfsc(fi);
125
fi->s2addr = ipa;
126
/* Combine the S1 and S2 perms. */
127
*prot &= s2_prot;
128
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
129
}
130
131
if (regime_using_lpae_format(env, mmu_idx)) {
132
- return get_phys_addr_lpae(env, address, access_type, mmu_idx, phys_ptr,
133
- attrs, prot, page_size, fsr, fi, cacheattrs);
134
+ bool ret = get_phys_addr_lpae(env, address, access_type, mmu_idx,
135
+ phys_ptr, attrs, prot, page_size,
136
+ fi, cacheattrs);
137
+
138
+ *fsr = arm_fi_to_lfsc(fi);
139
+ return ret;
140
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
141
bool ret = get_phys_addr_v6(env, address, access_type, mmu_idx,
142
phys_ptr, attrs, prot, page_size, fi);
143
--
56
--
144
2.7.4
57
2.20.1
145
58
146
59
All the callers of arm_ldq_ptw() and arm_ldl_ptw() ignore the value
that those functions store in the fsr argument on failure: if they
return failure to their callers they will always overwrite the fsr
value with something else.

Remove the argument from these functions and S1_ptw_translate().
This will simplify removing fsr from the calling functions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-3-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This bit traps EL1 access to cache maintenance insns that operate
to the point of coherency or persistence.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 39 +++++++++++++++++++++++++++++--------
 1 file changed, 31 insertions(+), 8 deletions(-)
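The second patch below layers two checks for the point-of-coherency cache
maintenance operations: the existing EL0 SCTLR_EL1.UCI UNDEF check and the
new EL1 HCR_EL2.TPCP trap. Condensed from the diff that follows, for
readability:

static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env,
                                              const ARMCPRegInfo *ri,
                                              bool isread)
{
    /* Cache invalidate/clean to Point of Coherency or Persistence... */
    switch (arm_current_el(env)) {
    case 0:
        /* ... EL0 must UNDEF unless SCTLR_EL1.UCI is set. */
        if (!(arm_sctlr(env, 0) & SCTLR_UCI)) {
            return CP_ACCESS_TRAP;
        }
        /* fall through */
    case 1:
        /* ... EL1 must trap to EL2 if HCR_EL2.TPCP is set. */
        if (arm_hcr_el2_eff(env) & HCR_TPCP) {
            return CP_ACCESS_TRAP_EL2;
        }
        break;
    }
    return CP_ACCESS_OK;
}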
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
21
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
22
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
18
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
23
/* Translate a S1 pagetable walk through S2 if needed. */
19
return CP_ACCESS_OK;
24
static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
20
}
25
hwaddr addr, MemTxAttrs txattrs,
21
26
- uint32_t *fsr,
22
+static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env,
27
ARMMMUFaultInfo *fi)
23
+ const ARMCPRegInfo *ri,
28
{
24
+ bool isread)
29
if ((mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1) &&
25
+{
30
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
26
+ /* Cache invalidate/clean to Point of Coherency or Persistence... */
31
hwaddr s2pa;
27
+ switch (arm_current_el(env)) {
32
int s2prot;
28
+ case 0:
33
int ret;
29
+ /* ... EL0 must UNDEF unless SCTLR_EL1.UCI is set. */
34
+ uint32_t fsr;
30
+ if (!(arm_sctlr(env, 0) & SCTLR_UCI)) {
35
31
+ return CP_ACCESS_TRAP;
36
ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
32
+ }
37
- &txattrs, &s2prot, &s2size, fsr, fi, NULL);
33
+ /* fall through */
38
+ &txattrs, &s2prot, &s2size, &fsr, fi, NULL);
34
+ case 1:
39
if (ret) {
35
+ /* ... EL1 must trap to EL2 if HCR_EL2.TPCP is set. */
40
fi->s2addr = addr;
36
+ if (arm_hcr_el2_eff(env) & HCR_TPCP) {
41
fi->stage2 = true;
37
+ return CP_ACCESS_TRAP_EL2;
42
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
38
+ }
43
* (but not if it was for a debug access).
39
+ break;
40
+ }
41
+ return CP_ACCESS_OK;
42
+}
43
+
44
/* See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
45
* Page D4-1736 (DDI0487A.b)
44
*/
46
*/
45
static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
47
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
46
- ARMMMUIdx mmu_idx, uint32_t *fsr,
48
.accessfn = aa64_cacheop_access },
47
- ARMMMUFaultInfo *fi)
49
{ .name = "DC_IVAC", .state = ARM_CP_STATE_AA64,
48
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
50
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1,
49
{
51
- .access = PL1_W, .type = ARM_CP_NOP },
50
ARMCPU *cpu = ARM_CPU(cs);
52
+ .access = PL1_W, .accessfn = aa64_cacheop_poc_access,
51
CPUARMState *env = &cpu->env;
53
+ .type = ARM_CP_NOP },
52
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
54
{ .name = "DC_ISW", .state = ARM_CP_STATE_AA64,
53
55
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2,
54
attrs.secure = is_secure;
56
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
55
as = arm_addressspace(cs, attrs);
57
{ .name = "DC_CVAC", .state = ARM_CP_STATE_AA64,
56
- addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fsr, fi);
58
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 1,
57
+ addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
59
.access = PL0_W, .type = ARM_CP_NOP,
58
if (fi->s1ptw) {
60
- .accessfn = aa64_cacheop_access },
59
return 0;
61
+ .accessfn = aa64_cacheop_poc_access },
60
}
62
{ .name = "DC_CSW", .state = ARM_CP_STATE_AA64,
61
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
63
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
62
}
64
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
63
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
64
static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
66
{ .name = "DC_CIVAC", .state = ARM_CP_STATE_AA64,
65
- ARMMMUIdx mmu_idx, uint32_t *fsr,
67
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 1,
66
- ARMMMUFaultInfo *fi)
68
.access = PL0_W, .type = ARM_CP_NOP,
67
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
69
- .accessfn = aa64_cacheop_access },
68
{
70
+ .accessfn = aa64_cacheop_poc_access },
69
ARMCPU *cpu = ARM_CPU(cs);
71
{ .name = "DC_CISW", .state = ARM_CP_STATE_AA64,
70
CPUARMState *env = &cpu->env;
72
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
71
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
73
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
72
74
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
73
attrs.secure = is_secure;
75
{ .name = "BPIMVA", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 7,
74
as = arm_addressspace(cs, attrs);
76
.type = ARM_CP_NOP, .access = PL1_W },
75
- addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fsr, fi);
77
{ .name = "DCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1,
76
+ addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
78
- .type = ARM_CP_NOP, .access = PL1_W },
77
if (fi->s1ptw) {
79
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access },
78
return 0;
80
{ .name = "DCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2,
79
}
81
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
80
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
82
{ .name = "DCCMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 1,
81
goto do_fault;
83
- .type = ARM_CP_NOP, .access = PL1_W },
82
}
84
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access },
83
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
85
{ .name = "DCCSW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
84
- mmu_idx, fsr, fi);
86
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
85
+ mmu_idx, fi);
87
{ .name = "DCCMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 11, .opc2 = 1,
86
type = (desc & 3);
88
.type = ARM_CP_NOP, .access = PL1_W },
87
domain = (desc >> 5) & 0x0f;
89
{ .name = "DCCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 1,
88
if (regime_el(env, mmu_idx) == 1) {
90
- .type = ARM_CP_NOP, .access = PL1_W },
89
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
91
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access },
90
table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
92
{ .name = "DCCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
91
}
93
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
92
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
94
/* MMU Domain access control / MPU write buffer control */
93
- mmu_idx, fsr, fi);
95
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpop_reg[] = {
94
+ mmu_idx, fi);
96
{ .name = "DC_CVAP", .state = ARM_CP_STATE_AA64,
95
switch (desc & 3) {
97
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1,
96
case 0: /* Page translation fault. */
98
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
97
code = 7;
99
- .accessfn = aa64_cacheop_access, .writefn = dccvap_writefn },
98
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
100
+ .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
99
goto do_fault;
101
REGINFO_SENTINEL
100
}
102
};
101
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
103
102
- mmu_idx, fsr, fi);
104
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
103
+ mmu_idx, fi);
105
{ .name = "DC_CVADP", .state = ARM_CP_STATE_AA64,
104
type = (desc & 3);
106
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1,
105
if (type == 0 || (type == 3 && !arm_feature(env, ARM_FEATURE_PXN))) {
107
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
106
/* Section translation fault, or attempt to use the encoding
108
- .accessfn = aa64_cacheop_access, .writefn = dccvap_writefn },
107
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
109
+ .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
108
/* Lookup l2 entry. */
110
REGINFO_SENTINEL
109
table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
111
};
110
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
112
#endif /*CONFIG_USER_ONLY*/
111
- mmu_idx, fsr, fi);
112
+ mmu_idx, fi);
113
ap = ((desc >> 4) & 3) | ((desc >> 7) & 4);
114
switch (desc & 3) {
115
case 0: /* Page translation fault. */
116
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
117
descaddr |= (address >> (stride * (4 - level))) & indexmask;
118
descaddr &= ~7ULL;
119
nstable = extract32(tableattrs, 4, 1);
120
- descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fsr, fi);
121
+ descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fi);
122
if (fi->s1ptw) {
123
goto do_fault;
124
}
125
--
113
--
126
2.7.4
114
2.20.1
127
115
128
116
Make get_phys_addr_pmsav7() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-8-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This bit traps EL1 access to cache maintenance insns that operate
to the point of unification. There are no longer any references to
plain aa64_cacheop_access, so remove it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 53 +++++++++++++++++++++++++++------------------
 1 file changed, 32 insertions(+), 21 deletions(-)
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
17
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
19
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo uao_reginfo = {
18
20
.readfn = aa64_uao_read, .writefn = aa64_uao_write
19
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
21
};
20
MMUAccessType access_type, ARMMMUIdx mmu_idx,
22
21
- hwaddr *phys_ptr, int *prot, uint32_t *fsr)
23
-static CPAccessResult aa64_cacheop_access(CPUARMState *env,
22
+ hwaddr *phys_ptr, int *prot,
24
- const ARMCPRegInfo *ri,
23
+ ARMMMUFaultInfo *fi)
25
- bool isread)
24
{
26
-{
25
ARMCPU *cpu = arm_env_get_cpu(env);
27
- /* Cache invalidate/clean: NOP, but EL0 must UNDEF unless
26
int n;
28
- * SCTLR_EL1.UCI is set.
27
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
29
- */
28
if (n == -1) { /* no hits */
30
- if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UCI)) {
29
if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) {
31
- return CP_ACCESS_TRAP;
30
/* background fault */
32
- }
31
- *fsr = 0;
33
- return CP_ACCESS_OK;
32
+ fi->type = ARMFault_Background;
34
-}
33
return true;
35
-
34
}
36
static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env,
35
get_phys_addr_pmsav7_default(env, mmu_idx, address, prot);
37
const ARMCPRegInfo *ri,
36
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
38
bool isread)
37
}
39
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env,
38
}
40
return CP_ACCESS_OK;
39
40
- *fsr = 0x00d; /* Permission fault */
41
+ fi->type = ARMFault_Permission;
42
+ fi->level = 1;
43
return !(*prot & (1 << access_type));
44
}
41
}
45
42
46
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
43
+static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env,
47
} else if (arm_feature(env, ARM_FEATURE_V7)) {
44
+ const ARMCPRegInfo *ri,
48
/* PMSAv7 */
45
+ bool isread)
49
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
46
+{
50
- phys_ptr, prot, fsr);
47
+ /* Cache invalidate/clean to Point of Unification... */
51
+ phys_ptr, prot, fi);
48
+ switch (arm_current_el(env)) {
52
+ *fsr = arm_fi_to_sfsc(fi);
49
+ case 0:
53
} else {
50
+ /* ... EL0 must UNDEF unless SCTLR_EL1.UCI is set. */
54
/* Pre-v7 MPU */
51
+ if (!(arm_sctlr(env, 0) & SCTLR_UCI)) {
55
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
52
+ return CP_ACCESS_TRAP;
53
+ }
54
+ /* fall through */
55
+ case 1:
56
+ /* ... EL1 must trap to EL2 if HCR_EL2.TPU is set. */
57
+ if (arm_hcr_el2_eff(env) & HCR_TPU) {
58
+ return CP_ACCESS_TRAP_EL2;
59
+ }
60
+ break;
61
+ }
62
+ return CP_ACCESS_OK;
63
+}
64
+
65
/* See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
66
* Page D4-1736 (DDI0487A.b)
67
*/
68
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
69
/* Cache ops: all NOPs since we don't emulate caches */
70
{ .name = "IC_IALLUIS", .state = ARM_CP_STATE_AA64,
71
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
72
- .access = PL1_W, .type = ARM_CP_NOP },
73
+ .access = PL1_W, .type = ARM_CP_NOP,
74
+ .accessfn = aa64_cacheop_pou_access },
75
{ .name = "IC_IALLU", .state = ARM_CP_STATE_AA64,
76
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0,
77
- .access = PL1_W, .type = ARM_CP_NOP },
78
+ .access = PL1_W, .type = ARM_CP_NOP,
79
+ .accessfn = aa64_cacheop_pou_access },
80
{ .name = "IC_IVAU", .state = ARM_CP_STATE_AA64,
81
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 5, .opc2 = 1,
82
.access = PL0_W, .type = ARM_CP_NOP,
83
- .accessfn = aa64_cacheop_access },
84
+ .accessfn = aa64_cacheop_pou_access },
85
{ .name = "DC_IVAC", .state = ARM_CP_STATE_AA64,
86
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1,
87
.access = PL1_W, .accessfn = aa64_cacheop_poc_access,
88
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
89
{ .name = "DC_CVAU", .state = ARM_CP_STATE_AA64,
90
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 11, .opc2 = 1,
91
.access = PL0_W, .type = ARM_CP_NOP,
92
- .accessfn = aa64_cacheop_access },
93
+ .accessfn = aa64_cacheop_pou_access },
94
{ .name = "DC_CIVAC", .state = ARM_CP_STATE_AA64,
95
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 1,
96
.access = PL0_W, .type = ARM_CP_NOP,
97
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
98
.writefn = tlbiipas2_is_write },
99
/* 32 bit cache operations */
100
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
101
- .type = ARM_CP_NOP, .access = PL1_W },
102
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
103
{ .name = "BPIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 6,
104
.type = ARM_CP_NOP, .access = PL1_W },
105
{ .name = "ICIALLU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0,
106
- .type = ARM_CP_NOP, .access = PL1_W },
107
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
108
{ .name = "ICIMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 1,
109
- .type = ARM_CP_NOP, .access = PL1_W },
110
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
111
{ .name = "BPIALL", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 6,
112
.type = ARM_CP_NOP, .access = PL1_W },
113
{ .name = "BPIMVA", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 7,
114
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
115
{ .name = "DCCSW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
116
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
117
{ .name = "DCCMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 11, .opc2 = 1,
118
- .type = ARM_CP_NOP, .access = PL1_W },
119
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
120
{ .name = "DCCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 1,
121
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access },
122
{ .name = "DCCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
56
--
123
--
57
2.7.4
124
2.20.1
58
125
59
126
1
In ARMv7M the CPU ignores explicit writes to CONTROL.SPSEL
1
From: Richard Henderson <richard.henderson@linaro.org>
2
in Handler mode. In v8M the behaviour is slightly different:
3
writes to the bit are permitted but will have no effect.
4
2
5
We've already done the hard work to handle the value in
3
This bit traps EL1 access to TLB maintenance insns.
6
CONTROL.SPSEL being out of sync with what stack pointer is
7
actually in use, so all we need to do to fix this last loose
8
end is to update the condition we use to guard whether we
9
call write_v7m_control_spsel() on the register write.
10
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200229012811.24129-12-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 1512153879-5291-3-git-send-email-peter.maydell@linaro.org
14
---
9
---
15
target/arm/helper.c | 5 ++++-
10
target/arm/helper.c | 85 +++++++++++++++++++++++++++++----------------
16
1 file changed, 4 insertions(+), 1 deletion(-)
11
1 file changed, 55 insertions(+), 30 deletions(-)
17
12
18
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
19
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.c
15
--- a/target/arm/helper.c
21
+++ b/target/arm/helper.c
16
+++ b/target/arm/helper.c
22
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
17
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tacr(CPUARMState *env, const ARMCPRegInfo *ri,
23
* thread mode; other bits can be updated by any privileged code.
18
return CP_ACCESS_OK;
24
* write_v7m_control_spsel() deals with updating the SPSEL bit in
19
}
25
* env->v7m.control, so we only need update the others.
20
26
+ * For v7M, we must just ignore explicit writes to SPSEL in handler
21
+/* Check for traps from EL1 due to HCR_EL2.TTLB. */
27
+ * mode; for v8M the write is permitted but will have no effect.
22
+static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
28
*/
23
+ bool isread)
29
- if (!arm_v7m_is_handler_mode(env)) {
24
+{
30
+ if (arm_feature(env, ARM_FEATURE_V8) ||
25
+ if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
31
+ !arm_v7m_is_handler_mode(env)) {
26
+ return CP_ACCESS_TRAP_EL2;
32
write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
27
+ }
33
}
28
+ return CP_ACCESS_OK;
34
env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
29
+}
30
+
31
static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
32
{
33
ARMCPU *cpu = env_archcpu(env);
34
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
35
.type = ARM_CP_NO_RAW, .access = PL1_R, .readfn = isr_read },
36
/* 32 bit ITLB invalidates */
37
{ .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0,
38
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbiall_write },
39
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
40
+ .writefn = tlbiall_write },
41
{ .name = "ITLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
42
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbimva_write },
43
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
44
+ .writefn = tlbimva_write },
45
{ .name = "ITLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 2,
46
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbiasid_write },
47
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
48
+ .writefn = tlbiasid_write },
49
/* 32 bit DTLB invalidates */
50
{ .name = "DTLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 0,
51
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbiall_write },
52
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
53
+ .writefn = tlbiall_write },
54
{ .name = "DTLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
55
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbimva_write },
56
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
57
+ .writefn = tlbimva_write },
58
{ .name = "DTLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 2,
59
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbiasid_write },
60
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
61
+ .writefn = tlbiasid_write },
62
/* 32 bit TLB invalidates */
63
{ .name = "TLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
64
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbiall_write },
65
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
66
+ .writefn = tlbiall_write },
67
{ .name = "TLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
68
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbimva_write },
69
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
70
+ .writefn = tlbimva_write },
71
{ .name = "TLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
72
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbiasid_write },
73
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
74
+ .writefn = tlbiasid_write },
75
{ .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
76
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbimvaa_write },
77
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
78
+ .writefn = tlbimvaa_write },
79
REGINFO_SENTINEL
80
};
81
82
static const ARMCPRegInfo v7mp_cp_reginfo[] = {
83
/* 32 bit TLB invalidates, Inner Shareable */
84
{ .name = "TLBIALLIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
85
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbiall_is_write },
86
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
87
+ .writefn = tlbiall_is_write },
88
{ .name = "TLBIMVAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
89
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbimva_is_write },
90
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
91
+ .writefn = tlbimva_is_write },
92
{ .name = "TLBIASIDIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
93
- .type = ARM_CP_NO_RAW, .access = PL1_W,
94
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
95
.writefn = tlbiasid_is_write },
96
{ .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
97
- .type = ARM_CP_NO_RAW, .access = PL1_W,
98
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
99
.writefn = tlbimvaa_is_write },
100
REGINFO_SENTINEL
101
};
102
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
103
/* TLBI operations */
104
{ .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
105
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
106
- .access = PL1_W, .type = ARM_CP_NO_RAW,
107
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
108
.writefn = tlbi_aa64_vmalle1is_write },
109
{ .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
110
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
111
- .access = PL1_W, .type = ARM_CP_NO_RAW,
112
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
113
.writefn = tlbi_aa64_vae1is_write },
114
{ .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
115
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
116
- .access = PL1_W, .type = ARM_CP_NO_RAW,
117
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
118
.writefn = tlbi_aa64_vmalle1is_write },
119
{ .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
120
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
121
- .access = PL1_W, .type = ARM_CP_NO_RAW,
122
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
123
.writefn = tlbi_aa64_vae1is_write },
124
{ .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
125
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
126
- .access = PL1_W, .type = ARM_CP_NO_RAW,
127
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
128
.writefn = tlbi_aa64_vae1is_write },
129
{ .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
130
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
131
- .access = PL1_W, .type = ARM_CP_NO_RAW,
132
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
133
.writefn = tlbi_aa64_vae1is_write },
134
{ .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
135
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
136
- .access = PL1_W, .type = ARM_CP_NO_RAW,
137
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
138
.writefn = tlbi_aa64_vmalle1_write },
139
{ .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
140
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
141
- .access = PL1_W, .type = ARM_CP_NO_RAW,
142
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
143
.writefn = tlbi_aa64_vae1_write },
144
{ .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
145
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
146
- .access = PL1_W, .type = ARM_CP_NO_RAW,
147
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
148
.writefn = tlbi_aa64_vmalle1_write },
149
{ .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
150
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
151
- .access = PL1_W, .type = ARM_CP_NO_RAW,
152
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
153
.writefn = tlbi_aa64_vae1_write },
154
{ .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
155
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
156
- .access = PL1_W, .type = ARM_CP_NO_RAW,
157
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
158
.writefn = tlbi_aa64_vae1_write },
159
{ .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
160
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
161
- .access = PL1_W, .type = ARM_CP_NO_RAW,
162
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
163
.writefn = tlbi_aa64_vae1_write },
164
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
165
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
166
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
167
#endif
168
/* TLB invalidate last level of translation table walk */
169
{ .name = "TLBIMVALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
170
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbimva_is_write },
171
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
172
+ .writefn = tlbimva_is_write },
173
{ .name = "TLBIMVAALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
174
- .type = ARM_CP_NO_RAW, .access = PL1_W,
175
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
176
.writefn = tlbimvaa_is_write },
177
{ .name = "TLBIMVAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
178
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbimva_write },
179
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
180
+ .writefn = tlbimva_write },
181
{ .name = "TLBIMVAAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
182
- .type = ARM_CP_NO_RAW, .access = PL1_W, .writefn = tlbimvaa_write },
183
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
184
+ .writefn = tlbimvaa_write },
185
{ .name = "TLBIMVALH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
186
.type = ARM_CP_NO_RAW, .access = PL2_W,
187
.writefn = tlbimva_hyp_write },
35
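To see the new access_ttlb() check from the guest side, a hypothetical bare-metal snippet (not taken from this series) running at EL2 could set the trap bit like this; the bit position (25, TTLB) is an assumption stated here rather than quoted from the patch:

    /* Enable trapping of EL1 TLB maintenance to EL2 (AArch64, GCC inline asm). */
    static inline void enable_el1_tlbi_trap(void)
    {
        uint64_t hcr;

        asm volatile("mrs %0, hcr_el2" : "=r" (hcr));
        hcr |= 1ULL << 25;                       /* HCR_EL2.TTLB */
        asm volatile("msr hcr_el2, %0" : : "r" (hcr));
        asm volatile("isb");
        /* From now on an EL1 "tlbi vmalle1" etc. traps to EL2. */
    }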
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add support for continuous read out of the RDSR and READ_FSR status
3
Make the output just a bit prettier when running by hand.
4
registers until the chip select is deasserted. This feature is supported
5
by, amongst others, flash types manufactured by Numonyx (Micron),
6
Winbond, SST, Gigadevice, Eon and Macronix.
7
4
8
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
5
Cc: Alex Bennée <alex.bennee@linaro.org>
9
Acked-by: Marcin Krzemiński <mar.krzeminski@gmail.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Message-id: 20200229012811.24129-13-richard.henderson@linaro.org
11
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20171126231634.9531-2-frasse.iglesias@gmail.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
10
---
15
hw/block/m25p80.c | 39 ++++++++++++++++++++++++++++++++++++++-
11
tests/tcg/aarch64/pauth-1.c | 2 +-
16
1 file changed, 38 insertions(+), 1 deletion(-)
12
1 file changed, 1 insertion(+), 1 deletion(-)
17
13
18
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
14
diff --git a/tests/tcg/aarch64/pauth-1.c b/tests/tcg/aarch64/pauth-1.c
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/block/m25p80.c
16
--- a/tests/tcg/aarch64/pauth-1.c
21
+++ b/hw/block/m25p80.c
17
+++ b/tests/tcg/aarch64/pauth-1.c
22
@@ -XXX,XX +XXX,XX @@ typedef struct Flash {
18
@@ -XXX,XX +XXX,XX @@ int main()
23
uint8_t data[M25P80_INTERNAL_DATA_BUFFER_SZ];
24
uint32_t len;
25
uint32_t pos;
26
+ bool data_read_loop;
27
uint8_t needed_bytes;
28
uint8_t cmd_in_progress;
29
uint32_t cur_addr;
30
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
31
}
32
s->pos = 0;
33
s->len = 1;
34
+ s->data_read_loop = true;
35
s->state = STATE_READING_DATA;
36
break;
37
38
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
39
}
40
s->pos = 0;
41
s->len = 1;
42
+ s->data_read_loop = true;
43
s->state = STATE_READING_DATA;
44
break;
45
46
@@ -XXX,XX +XXX,XX @@ static int m25p80_cs(SSISlave *ss, bool select)
47
s->pos = 0;
48
s->state = STATE_IDLE;
49
flash_sync_dirty(s, -1);
50
+ s->data_read_loop = false;
51
}
19
}
52
20
53
DB_PRINT_L(0, "%sselect\n", select ? "de" : "");
21
perc = (float) count / (float) (TESTS * 2);
54
@@ -XXX,XX +XXX,XX @@ static uint32_t m25p80_transfer8(SSISlave *ss, uint32_t tx)
22
- printf("Ptr Check: %0.2f%%", perc * 100.0);
55
s->pos++;
23
+ printf("Ptr Check: %0.2f%%\n", perc * 100.0);
56
if (s->pos == s->len) {
24
assert(perc > 0.95);
57
s->pos = 0;
25
return 0;
58
- s->state = STATE_IDLE;
26
}
59
+ if (!s->data_read_loop) {
60
+ s->state = STATE_IDLE;
61
+ }
62
}
63
break;
64
65
@@ -XXX,XX +XXX,XX @@ static Property m25p80_properties[] = {
66
DEFINE_PROP_END_OF_LIST(),
67
};
68
69
+static int m25p80_pre_load(void *opaque)
70
+{
71
+ Flash *s = (Flash *)opaque;
72
+
73
+ s->data_read_loop = false;
74
+ return 0;
75
+}
76
+
77
+static bool m25p80_data_read_loop_needed(void *opaque)
78
+{
79
+ Flash *s = (Flash *)opaque;
80
+
81
+ return s->data_read_loop;
82
+}
83
+
84
+static const VMStateDescription vmstate_m25p80_data_read_loop = {
85
+ .name = "m25p80/data_read_loop",
86
+ .version_id = 1,
87
+ .minimum_version_id = 1,
88
+ .needed = m25p80_data_read_loop_needed,
89
+ .fields = (VMStateField[]) {
90
+ VMSTATE_BOOL(data_read_loop, Flash),
91
+ VMSTATE_END_OF_LIST()
92
+ }
93
+};
94
+
95
static const VMStateDescription vmstate_m25p80 = {
96
.name = "m25p80",
97
.version_id = 0,
98
.minimum_version_id = 0,
99
.pre_save = m25p80_pre_save,
100
+ .pre_load = m25p80_pre_load,
101
.fields = (VMStateField[]) {
102
VMSTATE_UINT8(state, Flash),
103
VMSTATE_UINT8_ARRAY(data, Flash, M25P80_INTERNAL_DATA_BUFFER_SZ),
104
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m25p80 = {
105
VMSTATE_UINT8(spansion_cr3nv, Flash),
106
VMSTATE_UINT8(spansion_cr4nv, Flash),
107
VMSTATE_END_OF_LIST()
108
+ },
109
+ .subsections = (const VMStateDescription * []) {
110
+ &vmstate_m25p80_data_read_loop,
111
+ NULL
112
}
113
};
114
115
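A sketch of how a controller model or test could rely on the continuous read-out added above: keep clocking dummy bytes after RDSR (0x05) and the flash keeps returning the current status register until chip select is released. ssi_transfer() is the generic QEMU SSI helper; chip-select handling and the loop bound here are simplifications, not part of the series:

    static bool spi_wait_write_done(SSIBus *bus)
    {
        int i;

        ssi_transfer(bus, 0x05);                 /* RDSR */
        for (i = 0; i < 1000; i++) {             /* arbitrary bound for the sketch */
            uint32_t sr = ssi_transfer(bus, 0);  /* device loops on the register */
            if (!(sr & 0x01)) {                  /* WIP clear: programming done */
                return true;
            }
        }
        return false;
    }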
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Add support for SST READ ID 0x90/0xAB commands for reading out the flash
manufacturer ID and device ID.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20171126231634.9531-3-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/block/m25p80.c | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -XXX,XX +XXX,XX @@ typedef enum {
DPP = 0xa2,
QPP = 0x32,
QPP_4 = 0x34,
+ RDID_90 = 0x90,
+ RDID_AB = 0xab,

ERASE_4K = 0x20,
ERASE4_4K = 0x21,
@@ -XXX,XX +XXX,XX @@ typedef enum {
MAN_MACRONIX,
MAN_NUMONYX,
MAN_WINBOND,
+ MAN_SST,
MAN_GENERIC,
} Manufacturer;

@@ -XXX,XX +XXX,XX @@ static inline Manufacturer get_man(Flash *s)
return MAN_SPANSION;
case 0xC2:
return MAN_MACRONIX;
+ case 0xBF:
+ return MAN_SST;
default:
return MAN_GENERIC;
}
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
case WEVCR:
s->enh_volatile_cfg = s->data[0];
break;
+ case RDID_90:
+ case RDID_AB:
+ if (get_man(s) == MAN_SST) {
+ if (s->cur_addr <= 1) {
+ if (s->cur_addr) {
+ s->data[0] = s->pi->id[2];
+ s->data[1] = s->pi->id[0];
+ } else {
+ s->data[0] = s->pi->id[0];
+ s->data[1] = s->pi->id[2];
+ }
+ s->pos = 0;
+ s->len = 2;
+ s->data_read_loop = true;
+ s->state = STATE_READING_DATA;
+ } else {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "M25P80: Invalid read id address\n");
+ }
+ } else {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "M25P80: Read id (command 0x90/0xAB) is not supported"
+ " by device\n");
+ }
+ break;
default:
break;
}
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
case PP4:
case PP4_4:
case DIE_ERASE:
+ case RDID_90:
+ case RDID_AB:
s->needed_bytes = get_addr_length(s);
s->pos = 0;
s->len = 0;
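An illustrative byte-level exchange for the new command, assuming an SST part (manufacturer ID 0xBF), a 3-byte address phase and chip select held asserted throughout; with start address 1 the two ID bytes come back in the opposite order, which is what the cur_addr check above implements. This is a sketch, not code from the series:

    static void sst_read_id_sketch(SSIBus *bus, uint8_t *mfr, uint8_t *dev)
    {
        ssi_transfer(bus, 0x90);         /* RDID_90 (0xAB behaves the same) */
        ssi_transfer(bus, 0x00);         /* 24-bit address: 0x000000 */
        ssi_transfer(bus, 0x00);
        ssi_transfer(bus, 0x00);
        *mfr = ssi_transfer(bus, 0x00);  /* 0xBF for SST */
        *dev = ssi_transfer(bus, 0x00);  /* device ID; the pair repeats until CS rises */
    }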
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
2
2
3
Add support for 4-byte addresses in the LQSPI and correct LQSPI_CFG_SEP_BUS.
3
The Cubieboard is a single-board computer with an Allwinner A10 System-on-Chip [1].
4
As documented in the Allwinner A10 User Manual V1.5 [2], the SoC has an ARM
5
Cortex-A8 processor. Currently the Cubieboard machine definition specifies the
6
ARM Cortex-A9 in its description and as the default CPU.
4
7
5
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
8
This patch corrects the Cubieboard machine definition to use the ARM Cortex-A8.
6
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
9
7
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
The only user-visible effect is that our textual description of the
8
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
11
machine was wrong, because hw/arm/allwinner-a10.c always creates a
9
Message-id: 20171126231634.9531-11-frasse.iglesias@gmail.com
12
Cortex-A8 CPU regardless of the default value in the MachineClass struct.
13
14
[1] http://docs.cubieboard.org/products/start#cubieboard1
15
[2] https://linux-sunxi.org/File:Allwinner_A10_User_manual_V1.5.pdf
16
17
Fixes: 8a863c8120994981a099
18
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
19
Message-id: 20200227220149.6845-2-nieklinnenbank@gmail.com
20
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
21
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
[note in commit message that the bug didn't have much visible effect]
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
24
---
12
hw/ssi/xilinx_spips.c | 6 +++++-
25
hw/arm/cubieboard.c | 4 ++--
13
1 file changed, 5 insertions(+), 1 deletion(-)
26
1 file changed, 2 insertions(+), 2 deletions(-)
14
27
15
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
28
diff --git a/hw/arm/cubieboard.c b/hw/arm/cubieboard.c
16
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/ssi/xilinx_spips.c
30
--- a/hw/arm/cubieboard.c
18
+++ b/hw/ssi/xilinx_spips.c
31
+++ b/hw/arm/cubieboard.c
19
@@ -XXX,XX +XXX,XX @@
32
@@ -XXX,XX +XXX,XX @@ static void cubieboard_init(MachineState *machine)
20
#define R_LQSPI_CFG_RESET 0x03A002EB
33
21
#define LQSPI_CFG_LQ_MODE (1U << 31)
34
static void cubieboard_machine_init(MachineClass *mc)
22
#define LQSPI_CFG_TWO_MEM (1 << 30)
35
{
23
-#define LQSPI_CFG_SEP_BUS (1 << 30)
36
- mc->desc = "cubietech cubieboard (Cortex-A9)";
24
+#define LQSPI_CFG_SEP_BUS (1 << 29)
37
- mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a9");
25
#define LQSPI_CFG_U_PAGE (1 << 28)
38
+ mc->desc = "cubietech cubieboard (Cortex-A8)";
26
+#define LQSPI_CFG_ADDR4 (1 << 27)
39
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a8");
27
#define LQSPI_CFG_MODE_EN (1 << 25)
40
mc->init = cubieboard_init;
28
#define LQSPI_CFG_MODE_WIDTH 8
41
mc->block_default_type = IF_IDE;
29
#define LQSPI_CFG_MODE_SHIFT 16
42
mc->units_per_default_bus = 1;
30
@@ -XXX,XX +XXX,XX @@ static void lqspi_load_cache(void *opaque, hwaddr addr)
31
fifo8_push(&s->tx_fifo, s->regs[R_LQSPI_CFG] & LQSPI_CFG_INST_CODE);
32
/* read address */
33
DB_PRINT_L(0, "pushing read address %06x\n", flash_addr);
34
+ if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_ADDR4) {
35
+ fifo8_push(&s->tx_fifo, (uint8_t)(flash_addr >> 24));
36
+ }
37
fifo8_push(&s->tx_fifo, (uint8_t)(flash_addr >> 16));
38
fifo8_push(&s->tx_fifo, (uint8_t)(flash_addr >> 8));
39
fifo8_push(&s->tx_fifo, (uint8_t)flash_addr);
40
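The address phase added above boils down to pushing the flash address MSB-first into the TX FIFO, with one extra byte when the new LQSPI_CFG_ADDR4 bit selects 4-byte addressing. A small sketch of that shape, using the same fifo8_push() helper the device already uses (illustrative, not a refactoring proposed by the series):

    static void push_flash_addr(Fifo8 *fifo, uint32_t addr, bool addr4)
    {
        if (addr4) {
            fifo8_push(fifo, (uint8_t)(addr >> 24));  /* extra high byte */
        }
        fifo8_push(fifo, (uint8_t)(addr >> 16));
        fifo8_push(fifo, (uint8_t)(addr >> 8));
        fifo8_push(fifo, (uint8_t)addr);
    }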
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
2
2
3
Move the FlashCMD enum, XilinxQSPIPS and XilinxSPIPSClass structures to the
3
The Cubieboard has an ARM Cortex-A8. Instead of simply ignoring a
4
header for consistency (struct XilinxSPIPS is found there). Also move out
4
bogus -cpu option provided by the user, give them an error message so
5
a define and remove two doubly-included headers (while touching the code).
5
they know their command line is wrong.
6
Finally, add 4-byte address commands to the FlashCMD enum.
7
6
8
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
9
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
8
Message-id: 20200227220149.6845-3-nieklinnenbank@gmail.com
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
11
[PMM: tweaked commit message]
13
Message-id: 20171126231634.9531-6-frasse.iglesias@gmail.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
13
---
16
include/hw/ssi/xilinx_spips.h | 34 ++++++++++++++++++++++++++++++++++
14
hw/arm/cubieboard.c | 10 +++++++++-
17
hw/ssi/xilinx_spips.c | 35 -----------------------------------
15
1 file changed, 9 insertions(+), 1 deletion(-)
18
2 files changed, 34 insertions(+), 35 deletions(-)
19
16
20
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
17
diff --git a/hw/arm/cubieboard.c b/hw/arm/cubieboard.c
21
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
22
--- a/include/hw/ssi/xilinx_spips.h
19
--- a/hw/arm/cubieboard.c
23
+++ b/include/hw/ssi/xilinx_spips.h
20
+++ b/hw/arm/cubieboard.c
24
@@ -XXX,XX +XXX,XX @@ typedef struct XilinxSPIPS XilinxSPIPS;
21
@@ -XXX,XX +XXX,XX @@ static struct arm_boot_info cubieboard_binfo = {
25
22
26
#define XLNX_SPIPS_R_MAX (0x100 / 4)
23
static void cubieboard_init(MachineState *machine)
27
24
{
28
+/* Bite off 4k chunks at a time */
25
- AwA10State *a10 = AW_A10(object_new(TYPE_AW_A10));
29
+#define LQSPI_CACHE_SIZE 1024
26
+ AwA10State *a10;
27
Error *err = NULL;
28
29
+ /* Only allow Cortex-A8 for this board */
30
+ if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a8")) != 0) {
31
+ error_report("This board can only be used with cortex-a8 CPU");
32
+ exit(1);
33
+ }
30
+
34
+
31
+typedef enum {
35
+ a10 = AW_A10(object_new(TYPE_AW_A10));
32
+ READ = 0x3, READ_4 = 0x13,
33
+ FAST_READ = 0xb, FAST_READ_4 = 0x0c,
34
+ DOR = 0x3b, DOR_4 = 0x3c,
35
+ QOR = 0x6b, QOR_4 = 0x6c,
36
+ DIOR = 0xbb, DIOR_4 = 0xbc,
37
+ QIOR = 0xeb, QIOR_4 = 0xec,
38
+
36
+
39
+ PP = 0x2, PP_4 = 0x12,
37
object_property_set_int(OBJECT(&a10->emac), 1, "phy-addr", &err);
40
+ DPP = 0xa2,
38
if (err != NULL) {
41
+ QPP = 0x32, QPP_4 = 0x34,
39
error_reportf_err(err, "Couldn't set phy address: ");
42
+} FlashCMD;
43
+
44
struct XilinxSPIPS {
45
SysBusDevice parent_obj;
46
47
@@ -XXX,XX +XXX,XX @@ struct XilinxSPIPS {
48
uint32_t regs[XLNX_SPIPS_R_MAX];
49
};
50
51
+typedef struct {
52
+ XilinxSPIPS parent_obj;
53
+
54
+ uint8_t lqspi_buf[LQSPI_CACHE_SIZE];
55
+ hwaddr lqspi_cached_addr;
56
+ Error *migration_blocker;
57
+ bool mmio_execution_enabled;
58
+} XilinxQSPIPS;
59
+
60
+typedef struct XilinxSPIPSClass {
61
+ SysBusDeviceClass parent_class;
62
+
63
+ const MemoryRegionOps *reg_ops;
64
+
65
+ uint32_t rx_fifo_size;
66
+ uint32_t tx_fifo_size;
67
+} XilinxSPIPSClass;
68
+
69
#define TYPE_XILINX_SPIPS "xlnx.ps7-spi"
70
#define TYPE_XILINX_QSPIPS "xlnx.ps7-qspi"
71
72
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/ssi/xilinx_spips.c
75
+++ b/hw/ssi/xilinx_spips.c
76
@@ -XXX,XX +XXX,XX @@
77
#include "sysemu/sysemu.h"
78
#include "hw/ptimer.h"
79
#include "qemu/log.h"
80
-#include "qemu/fifo8.h"
81
-#include "hw/ssi/ssi.h"
82
#include "qemu/bitops.h"
83
#include "hw/ssi/xilinx_spips.h"
84
#include "qapi/error.h"
85
@@ -XXX,XX +XXX,XX @@
86
87
/* 16MB per linear region */
88
#define LQSPI_ADDRESS_BITS 24
89
-/* Bite off 4k chunks at a time */
90
-#define LQSPI_CACHE_SIZE 1024
91
92
#define SNOOP_CHECKING 0xFF
93
#define SNOOP_NONE 0xFE
94
#define SNOOP_STRIPING 0
95
96
-typedef enum {
97
- READ = 0x3,
98
- FAST_READ = 0xb,
99
- DOR = 0x3b,
100
- QOR = 0x6b,
101
- DIOR = 0xbb,
102
- QIOR = 0xeb,
103
-
104
- PP = 0x2,
105
- DPP = 0xa2,
106
- QPP = 0x32,
107
-} FlashCMD;
108
-
109
-typedef struct {
110
- XilinxSPIPS parent_obj;
111
-
112
- uint8_t lqspi_buf[LQSPI_CACHE_SIZE];
113
- hwaddr lqspi_cached_addr;
114
- Error *migration_blocker;
115
- bool mmio_execution_enabled;
116
-} XilinxQSPIPS;
117
-
118
-typedef struct XilinxSPIPSClass {
119
- SysBusDeviceClass parent_class;
120
-
121
- const MemoryRegionOps *reg_ops;
122
-
123
- uint32_t rx_fifo_size;
124
- uint32_t tx_fifo_size;
125
-} XilinxSPIPSClass;
126
-
127
static inline int num_effective_busses(XilinxSPIPS *s)
128
{
129
return (s->regs[R_LQSPI_CFG] & LQSPI_CFG_SEP_BUS &&
130
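As a design note on the CPU-type check above: QEMU's machine core can also enforce this if the board declares the permitted models, via the MachineClass valid_cpu_types field. A hypothetical equivalent for this board (an aside, not what the patch does) would look roughly like:

    static const char * const cubieboard_valid_cpu_types[] = {
        ARM_CPU_TYPE_NAME("cortex-a8"),
        NULL,
    };

    static void cubieboard_machine_init(MachineClass *mc)
    {
        /* ... existing initialisation ... */
        mc->valid_cpu_types = cubieboard_valid_cpu_types;  /* core rejects other -cpu values */
    }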
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
2
2
3
Voiding the ITS caches is not supposed to happen via
3
The Cubieboard contains either 512MiB or 1GiB of onboard RAM [1].
4
individual register writes. So we introduced a dedicated
4
Prevent changing RAM to a different size, which could break user programs.
5
ITS KVM device ioctl to perform a cold reset of the ITS:
6
KVM_DEV_ARM_VGIC_GRP_CTRL/KVM_DEV_ARM_ITS_CTRL_RESET. Let's
7
use the latter if the kernel supports it.
8
5
9
Signed-off-by: Eric Auger <eric.auger@redhat.com>
6
[1] http://linux-sunxi.org/Cubieboard
7
8
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
9
Message-id: 20200227220149.6845-4-nieklinnenbank@gmail.com
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 1511883692-11511-5-git-send-email-eric.auger@redhat.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
13
---
14
hw/intc/arm_gicv3_its_kvm.c | 9 ++++++++-
14
hw/arm/cubieboard.c | 8 ++++++++
15
1 file changed, 8 insertions(+), 1 deletion(-)
15
1 file changed, 8 insertions(+)
16
16
17
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
17
diff --git a/hw/arm/cubieboard.c b/hw/arm/cubieboard.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gicv3_its_kvm.c
19
--- a/hw/arm/cubieboard.c
20
+++ b/hw/intc/arm_gicv3_its_kvm.c
20
+++ b/hw/arm/cubieboard.c
21
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_reset(DeviceState *dev)
21
@@ -XXX,XX +XXX,XX @@ static void cubieboard_init(MachineState *machine)
22
22
AwA10State *a10;
23
c->parent_reset(dev);
23
Error *err = NULL;
24
24
25
- error_report("ITS KVM: full reset is not supported by QEMU");
25
+ /* This board has fixed size RAM (512MiB or 1GiB) */
26
+ if (kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
26
+ if (machine->ram_size != 512 * MiB &&
27
+ KVM_DEV_ARM_ITS_CTRL_RESET)) {
27
+ machine->ram_size != 1 * GiB) {
28
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
28
+ error_report("This machine can only be used with 512MiB or 1GiB RAM");
29
+ KVM_DEV_ARM_ITS_CTRL_RESET, NULL, true, &error_abort);
29
+ exit(1);
30
+ return;
31
+ }
30
+ }
32
+
31
+
33
+ error_report("ITS KVM: full reset is not supported by the host kernel");
32
/* Only allow Cortex-A8 for this board */
34
33
if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a8")) != 0) {
35
if (!kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
34
error_report("This board can only be used with cortex-a8 CPU");
36
GITS_CTLR)) {
35
@@ -XXX,XX +XXX,XX @@ static void cubieboard_machine_init(MachineClass *mc)
36
{
37
mc->desc = "cubietech cubieboard (Cortex-A8)";
38
mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a8");
39
+ mc->default_ram_size = 1 * GiB;
40
mc->init = cubieboard_init;
41
mc->block_default_type = IF_IDE;
42
mc->units_per_default_bus = 1;
37
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
2
2
3
Add support for the bank address register access commands (BRRD/BRWR) and
3
The Cubieboard machine does not support the -bios argument.
4
the BULK_ERASE (0x60) command.
4
Report an error when -bios is used and exit immediately.
5
5
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
7
Acked-by: Marcin Krzemiński <mar.krzeminski@gmail.com>
7
Message-id: 20200227220149.6845-5-nieklinnenbank@gmail.com
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20171126231634.9531-4-frasse.iglesias@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
hw/block/m25p80.c | 7 +++++++
12
hw/arm/cubieboard.c | 7 +++++++
14
1 file changed, 7 insertions(+)
13
1 file changed, 7 insertions(+)
15
14
16
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
15
diff --git a/hw/arm/cubieboard.c b/hw/arm/cubieboard.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/block/m25p80.c
17
--- a/hw/arm/cubieboard.c
19
+++ b/hw/block/m25p80.c
18
+++ b/hw/arm/cubieboard.c
20
@@ -XXX,XX +XXX,XX @@ typedef enum {
19
@@ -XXX,XX +XXX,XX @@
21
WRDI = 0x4,
20
#include "exec/address-spaces.h"
22
RDSR = 0x5,
21
#include "qapi/error.h"
23
WREN = 0x6,
22
#include "cpu.h"
24
+ BRRD = 0x16,
23
+#include "sysemu/sysemu.h"
25
+ BRWR = 0x17,
24
#include "hw/sysbus.h"
26
JEDEC_READ = 0x9f,
25
#include "hw/boards.h"
27
+ BULK_ERASE_60 = 0x60,
26
#include "hw/arm/allwinner-a10.h"
28
BULK_ERASE = 0xc7,
27
@@ -XXX,XX +XXX,XX @@ static void cubieboard_init(MachineState *machine)
29
READ_FSR = 0x70,
28
AwA10State *a10;
30
RDCR = 0x15,
29
Error *err = NULL;
31
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
30
32
s->write_enable = false;
31
+ /* BIOS is not supported by this board */
33
}
32
+ if (bios_name) {
34
break;
33
+ error_report("BIOS not supported for this machine");
35
+ case BRWR:
34
+ exit(1);
36
case EXTEND_ADDR_WRITE:
35
+ }
37
s->ear = s->data[0];
36
+
38
break;
37
/* This board has fixed size RAM (512MiB or 1GiB) */
39
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
38
if (machine->ram_size != 512 * MiB &&
40
s->state = STATE_READING_DATA;
39
machine->ram_size != 1 * GiB) {
41
break;
42
43
+ case BULK_ERASE_60:
44
case BULK_ERASE:
45
if (s->write_enable) {
46
DB_PRINT_L(0, "chip erase\n");
47
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
48
case EX_4BYTE_ADDR:
49
s->four_bytes_address_mode = false;
50
break;
51
+ case BRRD:
52
case EXTEND_ADDR_READ:
53
s->data[0] = s->ear;
54
s->pos = 0;
55
s->len = 1;
56
s->state = STATE_READING_DATA;
57
break;
58
+ case BRWR:
59
case EXTEND_ADDR_WRITE:
60
if (s->write_enable) {
61
s->needed_bytes = 1;
62
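For context on what the bank/extended address register buys you: after BRWR has written a value 'ear', a command that still uses a 3-byte address 'addr24' effectively targets the offset computed below (16 MiB banks). This is a generic sketch of the addressing scheme, not a quote of the m25p80 internals:

    static uint32_t banked_flash_offset(uint8_t ear, uint32_t addr24)
    {
        return ((uint32_t)ear << 24) | (addr24 & 0xffffff);
    }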
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Add support for Micron (Numonyx) n25q512a11 and n25q512a13 flashes.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Acked-by: Marcin Krzemiński <mar.krzeminski@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20171126231634.9531-5-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/block/m25p80.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -XXX,XX +XXX,XX @@ static const FlashPartInfo known_devices[] = {
{ INFO("n25q128a13", 0x20ba18, 0, 64 << 10, 256, ER_4K) },
{ INFO("n25q256a11", 0x20bb19, 0, 64 << 10, 512, ER_4K) },
{ INFO("n25q256a13", 0x20ba19, 0, 64 << 10, 512, ER_4K) },
+ { INFO("n25q512a11", 0x20bb20, 0, 64 << 10, 1024, ER_4K) },
+ { INFO("n25q512a13", 0x20ba20, 0, 64 << 10, 1024, ER_4K) },
{ INFO("n25q128", 0x20ba18, 0, 64 << 10, 256, 0) },
{ INFO("n25q256a", 0x20ba19, 0, 64 << 10, 512, ER_4K) },
{ INFO("n25q512a", 0x20ba20, 0, 64 << 10, 1024, ER_4K) },
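A quick check of the geometry implied by the new table entries (sector size 64 KiB, 1024 sectors):

    uint64_t n25q512a_size = 1024ULL * (64 << 10);   /* 64 MiB, i.e. 512 Mbit */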
1
Make get_phys_addr_v5() return a fault type in the ARMMMUFaultInfo
1
From: Richard Henderson <richard.henderson@linaro.org>
2
structure, which we convert to the FSC at the callsite.
3
2
3
Replicate the single TBI bit from TCR_EL2 and TCR_EL3 so that
4
we can unconditionally use pointer bit 55 to index into our
5
composite TBI1:TBI0 field.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Message-id: 20200302175829.2183-2-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
8
Message-id: 1512503192-2239-4-git-send-email-peter.maydell@linaro.org
9
---
12
---
10
target/arm/helper.c | 33 ++++++++++++++++++---------------
13
target/arm/helper.c | 6 ++++--
11
1 file changed, 18 insertions(+), 15 deletions(-)
14
1 file changed, 4 insertions(+), 2 deletions(-)
12
15
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
18
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
19
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
20
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
18
static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
21
} else if (mmu_idx == ARMMMUIdx_Stage2) {
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
22
return 0; /* VTCR_EL2 */
20
hwaddr *phys_ptr, int *prot,
21
- target_ulong *page_size, uint32_t *fsr,
22
+ target_ulong *page_size,
23
ARMMMUFaultInfo *fi)
24
{
25
CPUState *cs = CPU(arm_env_get_cpu(env));
26
- int code;
27
+ int level = 1;
28
uint32_t table;
29
uint32_t desc;
30
int type;
31
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
32
/* Lookup l1 descriptor. */
33
if (!get_level1_table_address(env, mmu_idx, &table, address)) {
34
/* Section translation fault if page walk is disabled by PD0 or PD1 */
35
- code = 5;
36
+ fi->type = ARMFault_Translation;
37
goto do_fault;
38
}
39
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
40
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
41
domain_prot = (dacr >> (domain * 2)) & 3;
42
if (type == 0) {
43
/* Section translation fault. */
44
- code = 5;
45
+ fi->type = ARMFault_Translation;
46
goto do_fault;
47
}
48
+ if (type != 2) {
49
+ level = 2;
50
+ }
51
if (domain_prot == 0 || domain_prot == 2) {
52
- if (type == 2)
53
- code = 9; /* Section domain fault. */
54
- else
55
- code = 11; /* Page domain fault. */
56
+ fi->type = ARMFault_Domain;
57
goto do_fault;
58
}
59
if (type == 2) {
60
/* 1Mb section. */
61
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
62
ap = (desc >> 10) & 3;
63
- code = 13;
64
*page_size = 1024 * 1024;
65
} else {
23
} else {
66
/* Lookup l2 entry. */
24
- return extract32(tcr, 20, 1);
67
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
25
+ /* Replicate the single TBI bit so we always have 2 bits. */
68
mmu_idx, fi);
26
+ return extract32(tcr, 20, 1) * 3;
69
switch (desc & 3) {
70
case 0: /* Page translation fault. */
71
- code = 7;
72
+ fi->type = ARMFault_Translation;
73
goto do_fault;
74
case 1: /* 64k page. */
75
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
76
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
77
/* UNPREDICTABLE in ARMv5; we choose to take a
78
* page translation fault.
79
*/
80
- code = 7;
81
+ fi->type = ARMFault_Translation;
82
goto do_fault;
83
}
84
} else {
85
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
86
/* Never happens, but compiler isn't smart enough to tell. */
87
abort();
88
}
89
- code = 15;
90
}
91
*prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
92
*prot |= *prot ? PAGE_EXEC : 0;
93
if (!(*prot & (1 << access_type))) {
94
/* Access permission fault. */
95
+ fi->type = ARMFault_Permission;
96
goto do_fault;
97
}
98
*phys_ptr = phys_addr;
99
return false;
100
do_fault:
101
- *fsr = code | (domain << 4);
102
+ fi->domain = domain;
103
+ fi->level = level;
104
return true;
105
}
106
107
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
108
return get_phys_addr_v6(env, address, access_type, mmu_idx, phys_ptr,
109
attrs, prot, page_size, fsr, fi);
110
} else {
111
- return get_phys_addr_v5(env, address, access_type, mmu_idx, phys_ptr,
112
- prot, page_size, fsr, fi);
113
+ bool ret = get_phys_addr_v5(env, address, access_type, mmu_idx,
114
+ phys_ptr, prot, page_size, fi);
115
+
116
+ *fsr = arm_fi_to_sfsc(fi);
117
+ return ret;
118
}
27
}
119
}
28
}
120
29
30
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
31
} else if (mmu_idx == ARMMMUIdx_Stage2) {
32
return 0; /* VTCR_EL2 */
33
} else {
34
- return extract32(tcr, 29, 1);
35
+ /* Replicate the single TBID bit so we always have 2 bits. */
36
+ return extract32(tcr, 29, 1) * 3;
37
}
38
}
39
121
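A worked illustration of why replicating the single TCR_EL2/TCR_EL3 TBI bit into a two-bit field helps: with a composite TBI1:TBI0-style value, the bit that applies to a given pointer can always be selected by pointer bit 55, for both the one-range regimes (field is 0b00 or 0b11) and the two-range EL1 regime (independent bits). A minimal sketch of that selection:

    static int tbi_for_pointer(int tbi_field, uint64_t ptr)
    {
        int select = (ptr >> 55) & 1;        /* 0: low/TTBR0 half, 1: high/TTBR1 half */
        return (tbi_field >> select) & 1;    /* works unconditionally after replication */
    }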
1
For M profile, we currently have an mmu index MNegPri for
1
From: Richard Henderson <richard.henderson@linaro.org>
2
"requested execution priority negative". This fails to
3
distinguish "requested execution priority negative, privileged"
4
from "requested execution priority negative, usermode", but
5
the two can return different results for MPU lookups. Fix this
6
by splitting MNegPri into MNegPriPriv and MNegPriUser, and
7
similarly for the Secure equivalent MSNegPri.
8
2
9
This takes us from 6 M profile MMU modes to 8, which means
3
We now cache the core mmu_idx in env->hflags. Rather than recompute
10
we need to bump NB_MMU_MODES; this is OK since the point
4
from scratch, extract the field. All of the uses of cpu_mmu_index
11
where we are forced to reduce TLB sizes is 9 MMU modes.
5
within target/arm are within helpers, and env->hflags is always stable
6
within a translation block from whence helpers are called.
12
7
13
(It would in theory be possible to stick with 6 MMU indexes:
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
{mpu-disabled,user,privileged} x {secure,nonsecure} since
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
in the MPU-disabled case the result of an MPU lookup is
10
Message-id: 20200302175829.2183-3-richard.henderson@linaro.org
16
always the same for both user and privileged code. However
17
we would then need to rework the TB flags handling to put
18
user/priv into the TB flags separately from the mmuidx.
19
Adding an extra couple of mmu indexes is simpler.)
20
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Message-id: 1512153879-5291-5-git-send-email-peter.maydell@linaro.org
24
---
12
---
25
target/arm/cpu.h | 54 ++++++++++++++++++++++++++++++--------------------
13
target/arm/cpu.h | 23 +++++++++++++----------
26
target/arm/internals.h | 6 ++++--
14
target/arm/helper.c | 5 -----
27
target/arm/helper.c | 11 ++++++----
15
2 files changed, 13 insertions(+), 15 deletions(-)
28
target/arm/translate.c | 8 ++++++--
29
4 files changed, 50 insertions(+), 29 deletions(-)
30
16
31
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
32
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
34
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
35
@@ -XXX,XX +XXX,XX @@ enum {
21
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
36
#define ARM_CPU_VIRQ 2
22
37
#define ARM_CPU_VFIQ 3
23
#define MMU_USER_IDX 0
38
24
39
-#define NB_MMU_MODES 7
25
-/**
40
+#define NB_MMU_MODES 8
26
- * cpu_mmu_index:
41
/* ARM-specific extra insn start words:
27
- * @env: The cpu environment
42
* 1: Conditional execution bits
28
- * @ifetch: True for code access, false for data access.
43
* 2: Partial exception syndrome for data aborts
29
- *
44
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
30
- * Return the core mmu index for the current translation regime.
45
* They have the following different MMU indexes:
31
- * This function is used by generic TCG code paths.
46
* User
32
- */
47
* Privileged
33
-int cpu_mmu_index(CPUARMState *env, bool ifetch);
48
- * Execution priority negative (this is like privileged, but the
34
-
49
- * MPU HFNMIENA bit means that it may have different access permission
35
/* Indexes used when registering address spaces with cpu_address_space_init */
50
- * check results to normal privileged code, so can't share a TLB).
36
typedef enum ARMASIdx {
51
+ * User, execution priority negative (ie the MPU HFNMIENA bit may apply)
37
ARMASIdx_NS = 0,
52
+ * Privileged, execution priority negative (ditto)
38
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, BTYPE, 10, 2) /* Not cached. */
53
* If the CPU supports the v8M Security Extension then there are also:
39
FIELD(TBFLAG_A64, TBID, 12, 2)
54
* Secure User
40
FIELD(TBFLAG_A64, UNPRIV, 14, 1)
55
* Secure Privileged
41
56
- * Secure, execution priority negative
42
+/**
57
+ * Secure User, execution priority negative
43
+ * cpu_mmu_index:
58
+ * Secure Privileged, execution priority negative
44
+ * @env: The cpu environment
59
*
45
+ * @ifetch: True for code access, false for data access.
60
* The ARMMMUIdx and the mmu index value used by the core QEMU TLB code
46
+ *
61
* are not quite the same -- different CPU types (most notably M profile
47
+ * Return the core mmu index for the current translation regime.
62
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
48
+ * This function is used by generic TCG code paths.
63
* The constant names here are patterned after the general style of the names
49
+ */
64
* of the AT/ATS operations.
50
+static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
65
* The values used are carefully arranged to make mmu_idx => EL lookup easy.
51
+{
66
+ * For M profile we arrange them to have a bit for priv, a bit for negpri
52
+ return FIELD_EX32(env->hflags, TBFLAG_ANY, MMUIDX);
67
+ * and a bit for secure.
53
+}
68
*/
69
#define ARM_MMU_IDX_A 0x10 /* A profile */
70
#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
71
#define ARM_MMU_IDX_M 0x40 /* M profile */
72
73
+/* meanings of the bits for M profile mmu idx values */
74
+#define ARM_MMU_IDX_M_PRIV 0x1
75
+#define ARM_MMU_IDX_M_NEGPRI 0x2
76
+#define ARM_MMU_IDX_M_S 0x4
77
+
54
+
78
#define ARM_MMU_IDX_TYPE_MASK (~0x7)
55
static inline bool bswap_code(bool sctlr_b)
79
#define ARM_MMU_IDX_COREIDX_MASK 0x7
80
81
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
82
ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
83
ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
84
ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
85
- ARMMMUIdx_MNegPri = 2 | ARM_MMU_IDX_M,
86
- ARMMMUIdx_MSUser = 3 | ARM_MMU_IDX_M,
87
- ARMMMUIdx_MSPriv = 4 | ARM_MMU_IDX_M,
88
- ARMMMUIdx_MSNegPri = 5 | ARM_MMU_IDX_M,
89
+ ARMMMUIdx_MUserNegPri = 2 | ARM_MMU_IDX_M,
90
+ ARMMMUIdx_MPrivNegPri = 3 | ARM_MMU_IDX_M,
91
+ ARMMMUIdx_MSUser = 4 | ARM_MMU_IDX_M,
92
+ ARMMMUIdx_MSPriv = 5 | ARM_MMU_IDX_M,
93
+ ARMMMUIdx_MSUserNegPri = 6 | ARM_MMU_IDX_M,
94
+ ARMMMUIdx_MSPrivNegPri = 7 | ARM_MMU_IDX_M,
95
/* Indexes below here don't have TLBs and are used only for AT system
96
* instructions or for the first stage of an S12 page table walk.
97
*/
98
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
99
ARMMMUIdxBit_S2NS = 1 << 6,
100
ARMMMUIdxBit_MUser = 1 << 0,
101
ARMMMUIdxBit_MPriv = 1 << 1,
102
- ARMMMUIdxBit_MNegPri = 1 << 2,
103
- ARMMMUIdxBit_MSUser = 1 << 3,
104
- ARMMMUIdxBit_MSPriv = 1 << 4,
105
- ARMMMUIdxBit_MSNegPri = 1 << 5,
106
+ ARMMMUIdxBit_MUserNegPri = 1 << 2,
107
+ ARMMMUIdxBit_MPrivNegPri = 1 << 3,
108
+ ARMMMUIdxBit_MSUser = 1 << 4,
109
+ ARMMMUIdxBit_MSPriv = 1 << 5,
110
+ ARMMMUIdxBit_MSUserNegPri = 1 << 6,
111
+ ARMMMUIdxBit_MSPrivNegPri = 1 << 7,
112
} ARMMMUIdxBit;
113
114
#define MMU_USER_IDX 0
115
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
116
case ARM_MMU_IDX_A:
117
return mmu_idx & 3;
118
case ARM_MMU_IDX_M:
119
- return (mmu_idx == ARMMMUIdx_MUser || mmu_idx == ARMMMUIdx_MSUser)
120
- ? 0 : 1;
121
+ return mmu_idx & ARM_MMU_IDX_M_PRIV;
122
default:
123
g_assert_not_reached();
124
}
125
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
126
bool secstate)
127
{
56
{
128
int el = arm_current_el(env);
57
#ifdef CONFIG_USER_ONLY
129
- ARMMMUIdx mmu_idx;
130
+ ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
131
132
- if (el == 0) {
133
- mmu_idx = secstate ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser;
134
- } else {
135
- mmu_idx = secstate ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv;
136
+ if (el != 0) {
137
+ mmu_idx |= ARM_MMU_IDX_M_PRIV;
138
}
139
140
if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) {
141
- mmu_idx = secstate ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri;
142
+ mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
143
+ }
144
+
145
+ if (secstate) {
146
+ mmu_idx |= ARM_MMU_IDX_M_S;
147
}
148
149
return mmu_idx;
150
diff --git a/target/arm/internals.h b/target/arm/internals.h
151
index XXXXXXX..XXXXXXX 100644
152
--- a/target/arm/internals.h
153
+++ b/target/arm/internals.h
154
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
155
case ARMMMUIdx_S1NSE1:
156
case ARMMMUIdx_S1E2:
157
case ARMMMUIdx_S2NS:
158
+ case ARMMMUIdx_MPrivNegPri:
159
+ case ARMMMUIdx_MUserNegPri:
160
case ARMMMUIdx_MPriv:
161
- case ARMMMUIdx_MNegPri:
162
case ARMMMUIdx_MUser:
163
return false;
164
case ARMMMUIdx_S1E3:
165
case ARMMMUIdx_S1SE0:
166
case ARMMMUIdx_S1SE1:
167
+ case ARMMMUIdx_MSPrivNegPri:
168
+ case ARMMMUIdx_MSUserNegPri:
169
case ARMMMUIdx_MSPriv:
170
- case ARMMMUIdx_MSNegPri:
171
case ARMMMUIdx_MSUser:
172
return true;
173
default:
174
diff --git a/target/arm/helper.c b/target/arm/helper.c
58
diff --git a/target/arm/helper.c b/target/arm/helper.c
175
index XXXXXXX..XXXXXXX 100644
59
index XXXXXXX..XXXXXXX 100644
176
--- a/target/arm/helper.c
60
--- a/target/arm/helper.c
177
+++ b/target/arm/helper.c
61
+++ b/target/arm/helper.c
178
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
62
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env)
179
case ARMMMUIdx_S1SE1:
63
return arm_mmu_idx_el(env, arm_current_el(env));
180
case ARMMMUIdx_S1NSE0:
64
}
181
case ARMMMUIdx_S1NSE1:
65
182
+ case ARMMMUIdx_MPrivNegPri:
66
-int cpu_mmu_index(CPUARMState *env, bool ifetch)
183
+ case ARMMMUIdx_MUserNegPri:
67
-{
184
case ARMMMUIdx_MPriv:
68
- return arm_to_core_mmu_idx(arm_mmu_idx(env));
185
- case ARMMMUIdx_MNegPri:
69
-}
186
case ARMMMUIdx_MUser:
70
-
187
+ case ARMMMUIdx_MSPrivNegPri:
71
#ifndef CONFIG_USER_ONLY
188
+ case ARMMMUIdx_MSUserNegPri:
72
ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
189
case ARMMMUIdx_MSPriv:
73
{
190
- case ARMMMUIdx_MSNegPri:
191
case ARMMMUIdx_MSUser:
192
return 1;
193
default:
194
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
195
(R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
196
case R_V7M_MPU_CTRL_ENABLE_MASK:
197
/* Enabled, but not for HardFault and NMI */
198
- return mmu_idx == ARMMMUIdx_MNegPri ||
199
- mmu_idx == ARMMMUIdx_MSNegPri;
200
+ return mmu_idx & ARM_MMU_IDX_M_NEGPRI;
201
case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK:
202
/* Enabled for all cases */
203
return false;
204
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
205
case ARMMMUIdx_S1NSE0:
206
case ARMMMUIdx_MUser:
207
case ARMMMUIdx_MSUser:
208
+ case ARMMMUIdx_MUserNegPri:
209
+ case ARMMMUIdx_MSUserNegPri:
210
return true;
211
default:
212
return false;
213
diff --git a/target/arm/translate.c b/target/arm/translate.c
214
index XXXXXXX..XXXXXXX 100644
215
--- a/target/arm/translate.c
216
+++ b/target/arm/translate.c
217
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
218
return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0);
219
case ARMMMUIdx_MUser:
220
case ARMMMUIdx_MPriv:
221
- case ARMMMUIdx_MNegPri:
222
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
223
+ case ARMMMUIdx_MUserNegPri:
224
+ case ARMMMUIdx_MPrivNegPri:
225
+ return arm_to_core_mmu_idx(ARMMMUIdx_MUserNegPri);
226
case ARMMMUIdx_MSUser:
227
case ARMMMUIdx_MSPriv:
228
- case ARMMMUIdx_MSNegPri:
229
return arm_to_core_mmu_idx(ARMMMUIdx_MSUser);
230
+ case ARMMMUIdx_MSUserNegPri:
231
+ case ARMMMUIdx_MSPrivNegPri:
232
+ return arm_to_core_mmu_idx(ARMMMUIdx_MSUserNegPri);
233
case ARMMMUIdx_S2NS:
234
default:
235
g_assert_not_reached();
236
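A sketch of the M-profile mmu-index re-encoding described above: one bit each for privileged, negative-priority and secure, on top of the ARM_MMU_IDX_M base. This helper is illustrative only (the constants are the ones introduced by the patch; the function itself is not part of the series):

    static ARMMMUIdx m_mmu_idx_compose(bool priv, bool negpri, bool secure)
    {
        int idx = ARM_MMU_IDX_M;

        if (priv) {
            idx |= ARM_MMU_IDX_M_PRIV;      /* 0x1 */
        }
        if (negpri) {
            idx |= ARM_MMU_IDX_M_NEGPRI;    /* 0x2 */
        }
        if (secure) {
            idx |= ARM_MMU_IDX_M_S;         /* 0x4 */
        }
        return idx;                         /* e.g. Secure Priv NegPri == 0x7 | ARM_MMU_IDX_M */
    }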
1
Currently get_phys_addr() and its various subfunctions return
1
From: Richard Henderson <richard.henderson@linaro.org>
2
a hard-coded fault status register value for translation
3
failures. This is awkward because FSR values these days may
4
be either long-descriptor format or short-descriptor format.
5
Worse, the right FSR type to use doesn't depend only on the
6
translation table being walked -- some cases, like fault
7
info reported to AArch32 EL2 for some kinds of ATS operation,
8
must be in long-descriptor format even if the translation
9
table being walked was short format. We can't get those cases
10
right with our current approach.
11
2
12
Provide fields in the ARMMMUFaultInfo struct which allow
3
If by context we know that we're in AArch64 mode, we need not
13
get_phys_addr() to provide sufficient information for a caller to
4
test for M-profile when reconstructing the full ARMMMUIdx.
14
construct an FSR value themselves, and utility functions which do
15
this for both long and short format FSR values, as a first step in
16
switching get_phys_addr() and its children to only returning the
17
failure cause in the ARMMMUFaultInfo struct.
18
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20200302175829.2183-4-richard.henderson@linaro.org
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
22
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
23
Message-id: 1512503192-2239-2-git-send-email-peter.maydell@linaro.org
24
---
11
---
25
target/arm/internals.h | 185 +++++++++++++++++++++++++++++++++++++++++++++++++
12
target/arm/internals.h | 6 ++++++
26
1 file changed, 185 insertions(+)
13
target/arm/translate-a64.c | 2 +-
14
2 files changed, 7 insertions(+), 1 deletion(-)
27
15
28
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
29
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/internals.h
18
--- a/target/arm/internals.h
31
+++ b/target/arm/internals.h
19
+++ b/target/arm/internals.h
32
@@ -XXX,XX +XXX,XX @@ static inline void arm_clear_exclusive(CPUARMState *env)
20
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
21
}
33
}
22
}
34
23
35
/**
24
+static inline ARMMMUIdx core_to_aa64_mmu_idx(int mmu_idx)
36
+ * ARMFaultType: type of an ARM MMU fault
37
+ * This corresponds to the v8A pseudocode's Fault enumeration,
38
+ * with extensions for QEMU internal conditions.
39
+ */
40
+typedef enum ARMFaultType {
41
+ ARMFault_None,
42
+ ARMFault_AccessFlag,
43
+ ARMFault_Alignment,
44
+ ARMFault_Background,
45
+ ARMFault_Domain,
46
+ ARMFault_Permission,
47
+ ARMFault_Translation,
48
+ ARMFault_AddressSize,
49
+ ARMFault_SyncExternal,
50
+ ARMFault_SyncExternalOnWalk,
51
+ ARMFault_SyncParity,
52
+ ARMFault_SyncParityOnWalk,
53
+ ARMFault_AsyncParity,
54
+ ARMFault_AsyncExternal,
55
+ ARMFault_Debug,
56
+ ARMFault_TLBConflict,
57
+ ARMFault_Lockdown,
58
+ ARMFault_Exclusive,
59
+ ARMFault_ICacheMaint,
60
+ ARMFault_QEMU_NSCExec, /* v8M: NS executing in S&NSC memory */
61
+ ARMFault_QEMU_SFault, /* v8M: SecureFault INVTRAN, INVEP or AUVIOL */
62
+} ARMFaultType;
63
+
64
+/**
65
* ARMMMUFaultInfo: Information describing an ARM MMU Fault
66
+ * @type: Type of fault
67
+ * @level: Table walk level (for translation, access flag and permission faults)
68
+ * @domain: Domain of the fault address (for non-LPAE CPUs only)
69
* @s2addr: Address that caused a fault at stage 2
70
* @stage2: True if we faulted at stage 2
71
* @s1ptw: True if we faulted at stage 2 while doing a stage 1 page-table walk
72
@@ -XXX,XX +XXX,XX @@ static inline void arm_clear_exclusive(CPUARMState *env)
73
*/
74
typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
75
struct ARMMMUFaultInfo {
76
+ ARMFaultType type;
77
target_ulong s2addr;
78
+ int level;
79
+ int domain;
80
bool stage2;
81
bool s1ptw;
82
bool ea;
83
};
84
85
+/**
86
+ * arm_fi_to_sfsc: Convert fault info struct to short-format FSC
87
+ * Compare pseudocode EncodeSDFSC(), though unlike that function
88
+ * we set up a whole FSR-format code including domain field and
89
+ * putting the high bit of the FSC into bit 10.
90
+ */
91
+static inline uint32_t arm_fi_to_sfsc(ARMMMUFaultInfo *fi)
92
+{
25
+{
93
+ uint32_t fsc;
26
+ /* AArch64 is always a-profile. */
94
+
27
+ return mmu_idx | ARM_MMU_IDX_A;
95
+ switch (fi->type) {
96
+ case ARMFault_None:
97
+ return 0;
98
+ case ARMFault_AccessFlag:
99
+ fsc = fi->level == 1 ? 0x3 : 0x6;
100
+ break;
101
+ case ARMFault_Alignment:
102
+ fsc = 0x1;
103
+ break;
104
+ case ARMFault_Permission:
105
+ fsc = fi->level == 1 ? 0xd : 0xf;
106
+ break;
107
+ case ARMFault_Domain:
108
+ fsc = fi->level == 1 ? 0x9 : 0xb;
109
+ break;
110
+ case ARMFault_Translation:
111
+ fsc = fi->level == 1 ? 0x5 : 0x7;
112
+ break;
113
+ case ARMFault_SyncExternal:
114
+ fsc = 0x8 | (fi->ea << 12);
115
+ break;
116
+ case ARMFault_SyncExternalOnWalk:
117
+ fsc = fi->level == 1 ? 0xc : 0xe;
118
+ fsc |= (fi->ea << 12);
119
+ break;
120
+ case ARMFault_SyncParity:
121
+ fsc = 0x409;
122
+ break;
123
+ case ARMFault_SyncParityOnWalk:
124
+ fsc = fi->level == 1 ? 0x40c : 0x40e;
125
+ break;
126
+ case ARMFault_AsyncParity:
127
+ fsc = 0x408;
128
+ break;
129
+ case ARMFault_AsyncExternal:
130
+ fsc = 0x406 | (fi->ea << 12);
131
+ break;
132
+ case ARMFault_Debug:
133
+ fsc = 0x2;
134
+ break;
135
+ case ARMFault_TLBConflict:
136
+ fsc = 0x400;
137
+ break;
138
+ case ARMFault_Lockdown:
139
+ fsc = 0x404;
140
+ break;
141
+ case ARMFault_Exclusive:
142
+ fsc = 0x405;
143
+ break;
144
+ case ARMFault_ICacheMaint:
145
+ fsc = 0x4;
146
+ break;
147
+ case ARMFault_Background:
148
+ fsc = 0x0;
149
+ break;
150
+ case ARMFault_QEMU_NSCExec:
151
+ fsc = M_FAKE_FSR_NSC_EXEC;
152
+ break;
153
+ case ARMFault_QEMU_SFault:
154
+ fsc = M_FAKE_FSR_SFAULT;
155
+ break;
156
+ default:
157
+ /* Other faults can't occur in a context that requires a
158
+ * short-format status code.
159
+ */
160
+ g_assert_not_reached();
161
+ }
162
+
163
+ fsc |= (fi->domain << 4);
164
+ return fsc;
165
+}
28
+}
166
+
29
+
167
+/**
30
int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx);
168
+ * arm_fi_to_lfsc: Convert fault info struct to long-format FSC
31
169
+ * Compare pseudocode EncodeLDFSC(), though unlike that function
32
/*
170
+ * we fill in also the LPAE bit 9 of a DFSR format.
33
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
171
+ */
34
index XXXXXXX..XXXXXXX 100644
172
+static inline uint32_t arm_fi_to_lfsc(ARMMMUFaultInfo *fi)
35
--- a/target/arm/translate-a64.c
173
+{
36
+++ b/target/arm/translate-a64.c
174
+ uint32_t fsc;
37
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
175
+
38
dc->condexec_mask = 0;
176
+ switch (fi->type) {
39
dc->condexec_cond = 0;
177
+ case ARMFault_None:
40
core_mmu_idx = FIELD_EX32(tb_flags, TBFLAG_ANY, MMUIDX);
178
+ return 0;
41
- dc->mmu_idx = core_to_arm_mmu_idx(env, core_mmu_idx);
179
+ case ARMFault_AddressSize:
42
+ dc->mmu_idx = core_to_aa64_mmu_idx(core_mmu_idx);
180
+ fsc = fi->level & 3;
43
dc->tbii = FIELD_EX32(tb_flags, TBFLAG_A64, TBII);
181
+ break;
44
dc->tbid = FIELD_EX32(tb_flags, TBFLAG_A64, TBID);
182
+ case ARMFault_AccessFlag:
45
dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx);
183
+ fsc = (fi->level & 3) | (0x2 << 2);
184
+ break;
185
+ case ARMFault_Permission:
186
+ fsc = (fi->level & 3) | (0x3 << 2);
187
+ break;
188
+ case ARMFault_Translation:
189
+ fsc = (fi->level & 3) | (0x1 << 2);
190
+ break;
191
+ case ARMFault_SyncExternal:
192
+ fsc = 0x10 | (fi->ea << 12);
193
+ break;
194
+ case ARMFault_SyncExternalOnWalk:
195
+ fsc = (fi->level & 3) | (0x5 << 2) | (fi->ea << 12);
196
+ break;
197
+ case ARMFault_SyncParity:
198
+ fsc = 0x18;
199
+ break;
200
+ case ARMFault_SyncParityOnWalk:
201
+ fsc = (fi->level & 3) | (0x7 << 2);
202
+ break;
203
+ case ARMFault_AsyncParity:
204
+ fsc = 0x19;
205
+ break;
206
+ case ARMFault_AsyncExternal:
207
+ fsc = 0x11 | (fi->ea << 12);
208
+ break;
209
+ case ARMFault_Alignment:
210
+ fsc = 0x21;
211
+ break;
212
+ case ARMFault_Debug:
213
+ fsc = 0x22;
214
+ break;
215
+ case ARMFault_TLBConflict:
216
+ fsc = 0x30;
217
+ break;
218
+ case ARMFault_Lockdown:
219
+ fsc = 0x34;
220
+ break;
221
+ case ARMFault_Exclusive:
222
+ fsc = 0x35;
223
+ break;
224
+ default:
225
+ /* Other faults can't occur in a context that requires a
226
+ * long-format status code.
227
+ */
228
+ g_assert_not_reached();
229
+ }
230
+
231
+ fsc |= 1 << 9;
232
+ return fsc;
233
+}
234
+
235
/* Do a page table walk and add page to TLB if possible */
236
bool arm_tlb_fill(CPUState *cpu, vaddr address,
237
MMUAccessType access_type, int mmu_idx,
238
--
46
--
239
2.7.4
47
2.20.1
240
48
241
49
diff view generated by jsdifflib
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add support for the RX discard and RX drain functionality. Also transmit
3
We missed this case within AArch64.ExceptionReturn.
4
one byte per dummy cycle (to the flash memories) with commands that require
5
these.
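(An aside on the exception-return change above: a stand-alone sketch, not part of either patch, of what "cleaning" the return address means, assuming the usual TBI layout where bits [63:56] carry the tag and bit 55 selects the translation range. strip_tag() and the sample values are hypothetical; the real code uses sextract64()/extract64() as in the hunk below.)

    #include <stdint.h>

    static uint64_t strip_tag(uint64_t pc, int two_ranges)
    {
        if (two_ranges) {
            /* two-range regimes: sign-extend from bit 55, i.e. what
             * sextract64(pc, 0, 56) does; relies on arithmetic right
             * shift in the same way QEMU's sextract64() does */
            return (uint64_t)((int64_t)(pc << 8) >> 8);
        }
        /* single-range regimes: zero-extend, i.e. extract64(pc, 0, 56) */
        return pc & 0x00ffffffffffffffULL;
    }

    /* strip_tag(0x5A00123456789ABCULL, 1) == 0x0000123456789ABC (bit 55 clear)
     * strip_tag(0x5A80123456789ABCULL, 1) == 0xFF80123456789ABC (bit 55 set)  */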
6
4
7
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Message-id: 20200302175829.2183-5-richard.henderson@linaro.org
10
Message-id: 20171126231634.9531-8-frasse.iglesias@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
9
---
13
include/hw/ssi/xilinx_spips.h | 6 ++
10
target/arm/helper-a64.c | 23 ++++++++++++++++++++++-
14
hw/ssi/xilinx_spips.c | 167 +++++++++++++++++++++++++++++++++++++-----
11
1 file changed, 22 insertions(+), 1 deletion(-)
15
2 files changed, 155 insertions(+), 18 deletions(-)
16
12
17
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
13
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
18
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/ssi/xilinx_spips.h
15
--- a/target/arm/helper-a64.c
20
+++ b/include/hw/ssi/xilinx_spips.h
16
+++ b/target/arm/helper-a64.c
21
@@ -XXX,XX +XXX,XX @@ struct XilinxSPIPS {
17
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
22
uint8_t num_busses;
18
"AArch32 EL%d PC 0x%" PRIx32 "\n",
23
19
cur_el, new_el, env->regs[15]);
24
uint8_t snoop_state;
20
} else {
25
+ int cmd_dummies;
21
+ int tbii;
26
+ uint8_t link_state;
22
+
27
+ uint8_t link_state_next;
23
env->aarch64 = 1;
28
+ uint8_t link_state_next_when;
24
spsr &= aarch64_pstate_valid_mask(&env_archcpu(env)->isar);
29
qemu_irq *cs_lines;
25
pstate_write(env, spsr);
30
+ bool *cs_lines_state;
26
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
31
SSIBus **spi;
27
env->pstate &= ~PSTATE_SS;
32
33
Fifo8 rx_fifo;
34
Fifo8 tx_fifo;
35
36
uint8_t num_txrx_bytes;
37
+ uint32_t rx_discard;
38
39
uint32_t regs[XLNX_SPIPS_R_MAX];
40
};
41
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/hw/ssi/xilinx_spips.c
44
+++ b/hw/ssi/xilinx_spips.c
45
@@ -XXX,XX +XXX,XX @@
46
#include "qemu/bitops.h"
47
#include "hw/ssi/xilinx_spips.h"
48
#include "qapi/error.h"
49
+#include "hw/register.h"
50
#include "migration/blocker.h"
51
52
#ifndef XILINX_SPIPS_ERR_DEBUG
53
@@ -XXX,XX +XXX,XX @@
54
#define LQSPI_CFG_DUMMY_SHIFT 8
55
#define LQSPI_CFG_INST_CODE 0xFF
56
57
+#define R_CMND (0xc0 / 4)
58
+ #define R_CMND_RXFIFO_DRAIN (1 << 19)
59
+ FIELD(CMND, PARTIAL_BYTE_LEN, 16, 3)
60
+#define R_CMND_EXT_ADD (1 << 15)
61
+ FIELD(CMND, RX_DISCARD, 8, 7)
62
+ FIELD(CMND, DUMMY_CYCLES, 2, 6)
63
+#define R_CMND_DMA_EN (1 << 1)
64
+#define R_CMND_PUSH_WAIT (1 << 0)
65
#define R_LQSPI_STS (0xA4 / 4)
66
#define LQSPI_STS_WR_RECVD (1 << 1)
67
68
@@ -XXX,XX +XXX,XX @@
69
#define LQSPI_ADDRESS_BITS 24
70
71
#define SNOOP_CHECKING 0xFF
72
-#define SNOOP_NONE 0xFE
73
+#define SNOOP_ADDR 0xF0
74
+#define SNOOP_NONE 0xEE
75
#define SNOOP_STRIPING 0
76
77
static inline int num_effective_busses(XilinxSPIPS *s)
78
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
79
if (xilinx_spips_cs_is_set(s, i, field) && !found) {
80
DB_PRINT_L(0, "selecting slave %d\n", i);
81
qemu_set_irq(s->cs_lines[cs_to_set], 0);
82
+ if (s->cs_lines_state[cs_to_set]) {
83
+ s->cs_lines_state[cs_to_set] = false;
84
+ s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
85
+ }
86
} else {
87
DB_PRINT_L(0, "deselecting slave %d\n", i);
88
qemu_set_irq(s->cs_lines[cs_to_set], 1);
89
+ s->cs_lines_state[cs_to_set] = true;
90
}
91
}
28
}
92
if (xilinx_spips_cs_is_set(s, i, field)) {
29
aarch64_restore_sp(env, new_el);
93
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
30
- env->pc = new_pc;
94
}
31
helper_rebuild_hflags_a64(env, new_el);
95
if (!found) {
96
s->snoop_state = SNOOP_CHECKING;
97
+ s->cmd_dummies = 0;
98
+ s->link_state = 1;
99
+ s->link_state_next = 1;
100
+ s->link_state_next_when = 0;
101
DB_PRINT_L(1, "moving to snoop check state\n");
102
}
103
}
104
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
105
/* FIXME: move magic number definition somewhere sensible */
106
s->regs[R_MOD_ID] = 0x01090106;
107
s->regs[R_LQSPI_CFG] = R_LQSPI_CFG_RESET;
108
+ s->link_state = 1;
109
+ s->link_state_next = 1;
110
+ s->link_state_next_when = 0;
111
s->snoop_state = SNOOP_CHECKING;
112
+ s->cmd_dummies = 0;
113
xilinx_spips_update_ixr(s);
114
xilinx_spips_update_cs_lines(s);
115
}
116
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
117
memcpy(x, r, sizeof(uint8_t) * num);
118
}
119
120
+static int xilinx_spips_num_dummies(XilinxQSPIPS *qs, uint8_t command)
121
+{
122
+ if (!qs) {
123
+ /* The SPI device is not a QSPI device */
124
+ return -1;
125
+ }
126
+
32
+
127
+ switch (command) { /* check for dummies */
33
+ /*
128
+ case READ: /* no dummy bytes/cycles */
34
+ * Apply TBI to the exception return address. We had to delay this
129
+ case PP:
35
+ * until after we selected the new EL, so that we could select the
130
+ case DPP:
36
+ * correct TBI+TBID bits. This is made easier by waiting until after
131
+ case QPP:
37
+ * the hflags rebuild, since we can pull the composite TBII field
132
+ case READ_4:
38
+ * from there.
133
+ case PP_4:
39
+ */
134
+ case QPP_4:
40
+ tbii = FIELD_EX32(env->hflags, TBFLAG_A64, TBII);
135
+ return 0;
41
+ if ((tbii >> extract64(new_pc, 55, 1)) & 1) {
136
+ case FAST_READ:
42
+ /* TBI is enabled. */
137
+ case DOR:
43
+ int core_mmu_idx = cpu_mmu_index(env, false);
138
+ case QOR:
44
+ if (regime_has_2_ranges(core_to_aa64_mmu_idx(core_mmu_idx))) {
139
+ case DOR_4:
45
+ new_pc = sextract64(new_pc, 0, 56);
140
+ case QOR_4:
141
+ return 1;
142
+ case DIOR:
143
+ case FAST_READ_4:
144
+ case DIOR_4:
145
+ return 2;
146
+ case QIOR:
147
+ case QIOR_4:
148
+ return 5;
149
+ default:
150
+ return -1;
151
+ }
152
+}
153
+
154
+static inline uint8_t get_addr_length(XilinxSPIPS *s, uint8_t cmd)
155
+{
156
+ switch (cmd) {
157
+ case PP_4:
158
+ case QPP_4:
159
+ case READ_4:
160
+ case QIOR_4:
161
+ case FAST_READ_4:
162
+ case DOR_4:
163
+ case QOR_4:
164
+ case DIOR_4:
165
+ return 4;
166
+ default:
167
+ return (s->regs[R_CMND] & R_CMND_EXT_ADD) ? 4 : 3;
168
+ }
169
+}
170
+
171
static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
172
{
173
int debug_level = 0;
174
+ XilinxQSPIPS *q = (XilinxQSPIPS *) object_dynamic_cast(OBJECT(s),
175
+ TYPE_XILINX_QSPIPS);
176
177
for (;;) {
178
int i;
179
uint8_t tx = 0;
180
uint8_t tx_rx[num_effective_busses(s)];
181
+ uint8_t dummy_cycles = 0;
182
+ uint8_t addr_length;
183
184
if (fifo8_is_empty(&s->tx_fifo)) {
185
if (!(s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE)) {
186
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
187
tx_rx[i] = fifo8_pop(&s->tx_fifo);
188
}
189
stripe8(tx_rx, num_effective_busses(s), false);
190
- } else {
191
+ } else if (s->snoop_state >= SNOOP_ADDR) {
192
tx = fifo8_pop(&s->tx_fifo);
193
for (i = 0; i < num_effective_busses(s); ++i) {
194
tx_rx[i] = tx;
195
}
196
+ } else {
197
+ /* Extract a dummy byte and generate dummy cycles according to the
198
+ * link state */
199
+ tx = fifo8_pop(&s->tx_fifo);
200
+ dummy_cycles = 8 / s->link_state;
201
}
202
203
for (i = 0; i < num_effective_busses(s); ++i) {
204
int bus = num_effective_busses(s) - 1 - i;
205
- DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
206
- tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
207
- DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
208
+ if (dummy_cycles) {
209
+ int d;
210
+ for (d = 0; d < dummy_cycles; ++d) {
211
+ tx_rx[0] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[0]);
212
+ }
213
+ } else {
46
+ } else {
214
+ DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
47
+ new_pc = extract64(new_pc, 0, 56);
215
+ tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
216
+ DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
217
+ }
218
}
219
220
- if (fifo8_is_full(&s->rx_fifo)) {
221
+ if (s->regs[R_CMND] & R_CMND_RXFIFO_DRAIN) {
222
+ DB_PRINT_L(debug_level, "dircarding drained rx byte\n");
223
+ /* Do nothing */
224
+ } else if (s->rx_discard) {
225
+ DB_PRINT_L(debug_level, "dircarding discarded rx byte\n");
226
+ s->rx_discard -= 8 / s->link_state;
227
+ } else if (fifo8_is_full(&s->rx_fifo)) {
228
s->regs[R_INTR_STATUS] |= IXR_RX_FIFO_OVERFLOW;
229
DB_PRINT_L(0, "rx FIFO overflow");
230
} else if (s->snoop_state == SNOOP_STRIPING) {
231
stripe8(tx_rx, num_effective_busses(s), true);
232
for (i = 0; i < num_effective_busses(s); ++i) {
233
fifo8_push(&s->rx_fifo, (uint8_t)tx_rx[i]);
234
+ DB_PRINT_L(debug_level, "pushing striped rx byte\n");
235
}
236
} else {
237
+ DB_PRINT_L(debug_level, "pushing unstriped rx byte\n");
238
fifo8_push(&s->rx_fifo, (uint8_t)tx_rx[0]);
239
}
240
241
+ if (s->link_state_next_when) {
242
+ s->link_state_next_when--;
243
+ if (!s->link_state_next_when) {
244
+ s->link_state = s->link_state_next;
245
+ }
48
+ }
246
+ }
49
+ }
50
+ env->pc = new_pc;
247
+
51
+
248
DB_PRINT_L(debug_level, "initial snoop state: %x\n",
52
qemu_log_mask(CPU_LOG_INT, "Exception return from AArch64 EL%d to "
249
(unsigned)s->snoop_state);
53
"AArch64 EL%d PC 0x%" PRIx64 "\n",
250
switch (s->snoop_state) {
54
cur_el, new_el, env->pc);
251
case (SNOOP_CHECKING):
252
- switch (tx) { /* new instruction code */
253
- case READ: /* 3 address bytes, no dummy bytes/cycles */
254
- case PP:
255
+ /* Store the count of dummy bytes in the txfifo */
256
+ s->cmd_dummies = xilinx_spips_num_dummies(q, tx);
257
+ addr_length = get_addr_length(s, tx);
258
+ if (s->cmd_dummies < 0) {
259
+ s->snoop_state = SNOOP_NONE;
260
+ } else {
261
+ s->snoop_state = SNOOP_ADDR + addr_length - 1;
262
+ }
263
+ switch (tx) {
264
case DPP:
265
- case QPP:
266
- s->snoop_state = 3;
267
- break;
268
- case FAST_READ: /* 3 address bytes, 1 dummy byte */
269
case DOR:
270
+ case DOR_4:
271
+ s->link_state_next = 2;
272
+ s->link_state_next_when = addr_length + s->cmd_dummies;
273
+ break;
274
+ case QPP:
275
+ case QPP_4:
276
case QOR:
277
- case DIOR: /* FIXME: these vary between vendor - set to spansion */
278
- s->snoop_state = 4;
279
+ case QOR_4:
280
+ s->link_state_next = 4;
281
+ s->link_state_next_when = addr_length + s->cmd_dummies;
282
+ break;
283
+ case DIOR:
284
+ case DIOR_4:
285
+ s->link_state = 2;
286
break;
287
- case QIOR: /* 3 address bytes, 2 dummy bytes */
288
- s->snoop_state = 6;
289
+ case QIOR:
290
+ case QIOR_4:
291
+ s->link_state = 4;
292
break;
293
- default:
294
+ }
295
+ break;
296
+ case (SNOOP_ADDR):
297
+ /* Address has been transmitted, transmit dummy cycles now if
298
+ * needed */
299
+ if (s->cmd_dummies < 0) {
300
s->snoop_state = SNOOP_NONE;
301
+ } else {
302
+ s->snoop_state = s->cmd_dummies;
303
}
304
break;
305
case (SNOOP_STRIPING):
306
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
307
uint64_t value, unsigned size)
308
{
309
XilinxQSPIPS *q = XILINX_QSPIPS(opaque);
310
+ XilinxSPIPS *s = XILINX_SPIPS(opaque);
311
312
xilinx_spips_write(opaque, addr, value, size);
313
addr >>= 2;
314
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
315
if (addr == R_LQSPI_CFG) {
316
xilinx_qspips_invalidate_mmio_ptr(q);
317
}
318
+ if (s->regs[R_CMND] & R_CMND_RXFIFO_DRAIN) {
319
+ fifo8_reset(&s->rx_fifo);
320
+ }
321
}
322
323
static const MemoryRegionOps qspips_ops = {
324
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_realize(DeviceState *dev, Error **errp)
325
}
326
327
s->cs_lines = g_new0(qemu_irq, s->num_cs * s->num_busses);
328
+ s->cs_lines_state = g_new0(bool, s->num_cs * s->num_busses);
329
for (i = 0, cs = s->cs_lines; i < s->num_busses; ++i, cs += s->num_cs) {
330
ssi_auto_connect_slaves(DEVICE(s), cs, s->spi[i]);
331
}
332
--
55
--
333
2.7.4
56
2.20.1
334
57
335
58
diff view generated by jsdifflib
1
Implement the TT instruction which queries the security
1
From: Richard Henderson <richard.henderson@linaro.org>
2
state and access permissions of a memory location.
2
3
3
This is an aarch64-only function. Move it out of the shared file.
4
This patch is code movement only.
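(Purely for illustration of the TT instruction described above, and not taken from either patch: a hypothetical guest-side decode of the response word the v7m_tt helper builds. The bit positions follow the tt_resp assembly later in this patch; the struct and function names are invented.)

    #include <stdbool.h>
    #include <stdint.h>

    struct tt_result {
        uint8_t mregion;   /* [7:0]   MPU region number */
        uint8_t sregion;   /* [15:8]  SAU region number */
        bool mrvalid;      /* bit 16 */
        bool srvalid;      /* bit 17 */
        bool r, rw;        /* bits 18, 19 */
        bool nsr, nsrw;    /* bits 20, 21 */
        bool s;            /* bit 22: address is Secure */
        bool irvalid;      /* bit 23 */
        uint8_t iregion;   /* [31:24] IDAU region number */
    };

    static struct tt_result decode_tt(uint32_t resp)
    {
        struct tt_result t = {
            .mregion = resp & 0xff,
            .sregion = (resp >> 8) & 0xff,
            .mrvalid = resp & (1u << 16),
            .srvalid = resp & (1u << 17),
            .r       = resp & (1u << 18),
            .rw      = resp & (1u << 19),
            .nsr     = resp & (1u << 20),
            .nsrw    = resp & (1u << 21),
            .s       = resp & (1u << 22),
            .irvalid = resp & (1u << 23),
            .iregion = resp >> 24,
        };
        return t;
    }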
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20200302175829.2183-6-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 1512153879-5291-8-git-send-email-peter.maydell@linaro.org
7
---
11
---
8
target/arm/helper.h | 2 +
12
target/arm/helper-a64.h | 1 +
9
target/arm/helper.c | 108 +++++++++++++++++++++++++++++++++++++++++++++++++
13
target/arm/helper.h | 1 -
10
target/arm/translate.c | 29 ++++++++++++-
14
target/arm/helper-a64.c | 91 ++++++++++++++++++++++++++++++++++++++++
11
3 files changed, 138 insertions(+), 1 deletion(-)
15
target/arm/op_helper.c | 93 -----------------------------------------
12
16
4 files changed, 92 insertions(+), 94 deletions(-)
17
18
diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper-a64.h
21
+++ b/target/arm/helper-a64.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
23
DEF_HELPER_2(sqrt_f16, f16, f16, ptr)
24
25
DEF_HELPER_2(exception_return, void, env, i64)
26
+DEF_HELPER_2(dc_zva, void, env, i64)
27
28
DEF_HELPER_FLAGS_3(pacia, TCG_CALL_NO_WG, i64, env, i64, i64)
29
DEF_HELPER_FLAGS_3(pacib, TCG_CALL_NO_WG, i64, env, i64, i64)
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
30
diff --git a/target/arm/helper.h b/target/arm/helper.h
14
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
32
--- a/target/arm/helper.h
16
+++ b/target/arm/helper.h
33
+++ b/target/arm/helper.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(v7m_mrs, i32, env, i32)
34
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
18
DEF_HELPER_2(v7m_bxns, void, env, i32)
35
19
DEF_HELPER_2(v7m_blxns, void, env, i32)
36
DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
20
37
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
21
+DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
38
-DEF_HELPER_2(dc_zva, void, env, i64)
22
+
39
23
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
40
DEF_HELPER_FLAGS_5(gvec_qrdmlah_s16, TCG_CALL_NO_RWG,
24
DEF_HELPER_3(set_cp_reg, void, env, ptr, i32)
41
void, ptr, ptr, ptr, ptr, i32)
25
DEF_HELPER_2(get_cp_reg, i32, env, ptr)
42
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
43
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/helper-a64.c
28
--- a/target/arm/helper.c
45
+++ b/target/arm/helper-a64.c
29
+++ b/target/arm/helper.c
46
@@ -XXX,XX +XXX,XX @@
30
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
47
*/
31
g_assert_not_reached();
48
49
#include "qemu/osdep.h"
50
+#include "qemu/units.h"
51
#include "cpu.h"
52
#include "exec/gdbstub.h"
53
#include "exec/helper-proto.h"
54
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
55
return float16_sqrt(a, s);
32
}
56
}
33
57
34
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
58
+void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
35
+{
59
+{
36
+ /* The TT instructions can be used by unprivileged code, but in
60
+ /*
37
+ * user-only emulation we don't have the MPU.
61
+ * Implement DC ZVA, which zeroes a fixed-length block of memory.
38
+ * Luckily since we know we are NonSecure unprivileged (and that in
62
+ * Note that we do not implement the (architecturally mandated)
39
+ * turn means that the A flag wasn't specified), all the bits in the
63
+ * alignment fault for attempts to use this on Device memory
40
+ * register must be zero:
64
+ * (which matches the usual QEMU behaviour of not implementing either
41
+ * IREGION: 0 because IRVALID is 0
65
+ * alignment faults or any memory attribute handling).
42
+ * IRVALID: 0 because NS
43
+ * S: 0 because NS
44
+ * NSRW: 0 because NS
45
+ * NSR: 0 because NS
46
+ * RW: 0 because unpriv and A flag not set
47
+ * R: 0 because unpriv and A flag not set
48
+ * SRVALID: 0 because NS
49
+ * MRVALID: 0 because unpriv and A flag not set
50
+ * SREGION: 0 because SRVALID is 0
51
+ * MREGION: 0 because MRVALID is 0
52
+ */
66
+ */
53
+ return 0;
67
68
+ ARMCPU *cpu = env_archcpu(env);
69
+ uint64_t blocklen = 4 << cpu->dcz_blocksize;
70
+ uint64_t vaddr = vaddr_in & ~(blocklen - 1);
71
+
72
+#ifndef CONFIG_USER_ONLY
73
+ {
74
+ /*
75
+ * Slightly awkwardly, QEMU's TARGET_PAGE_SIZE may be less than
76
+ * the block size so we might have to do more than one TLB lookup.
77
+ * We know that in fact for any v8 CPU the page size is at least 4K
78
+ * and the block size must be 2K or less, but TARGET_PAGE_SIZE is only
79
+ * 1K as an artefact of legacy v5 subpage support being present in the
80
+ * same QEMU executable. So in practice the hostaddr[] array has
81
+ * two entries, given the current setting of TARGET_PAGE_BITS_MIN.
82
+ */
83
+ int maxidx = DIV_ROUND_UP(blocklen, TARGET_PAGE_SIZE);
84
+ void *hostaddr[DIV_ROUND_UP(2 * KiB, 1 << TARGET_PAGE_BITS_MIN)];
85
+ int try, i;
86
+ unsigned mmu_idx = cpu_mmu_index(env, false);
87
+ TCGMemOpIdx oi = make_memop_idx(MO_UB, mmu_idx);
88
+
89
+ assert(maxidx <= ARRAY_SIZE(hostaddr));
90
+
91
+ for (try = 0; try < 2; try++) {
92
+
93
+ for (i = 0; i < maxidx; i++) {
94
+ hostaddr[i] = tlb_vaddr_to_host(env,
95
+ vaddr + TARGET_PAGE_SIZE * i,
96
+ 1, mmu_idx);
97
+ if (!hostaddr[i]) {
98
+ break;
99
+ }
100
+ }
101
+ if (i == maxidx) {
102
+ /*
103
+ * If it's all in the TLB it's fair game for just writing to;
104
+ * we know we don't need to update dirty status, etc.
105
+ */
106
+ for (i = 0; i < maxidx - 1; i++) {
107
+ memset(hostaddr[i], 0, TARGET_PAGE_SIZE);
108
+ }
109
+ memset(hostaddr[i], 0, blocklen - (i * TARGET_PAGE_SIZE));
110
+ return;
111
+ }
112
+ /*
113
+ * OK, try a store and see if we can populate the tlb. This
114
+ * might cause an exception if the memory isn't writable,
115
+ * in which case we will longjmp out of here. We must for
116
+ * this purpose use the actual register value passed to us
117
+ * so that we get the fault address right.
118
+ */
119
+ helper_ret_stb_mmu(env, vaddr_in, 0, oi, GETPC());
120
+ /* Now we can populate the other TLB entries, if any */
121
+ for (i = 0; i < maxidx; i++) {
122
+ uint64_t va = vaddr + TARGET_PAGE_SIZE * i;
123
+ if (va != (vaddr_in & TARGET_PAGE_MASK)) {
124
+ helper_ret_stb_mmu(env, va, 0, oi, GETPC());
125
+ }
126
+ }
127
+ }
128
+
129
+ /*
130
+ * Slow path (probably attempt to do this to an I/O device or
131
+ * similar, or clearing of a block of code we have translations
132
+ * cached for). Just do a series of byte writes as the architecture
133
+ * demands. It's not worth trying to use a cpu_physical_memory_map(),
134
+ * memset(), unmap() sequence here because:
135
+ * + we'd need to account for the blocksize being larger than a page
136
+ * + the direct-RAM access case is almost always going to be dealt
137
+ * with in the fastpath code above, so there's no speed benefit
138
+ * + we would have to deal with the map returning NULL because the
139
+ * bounce buffer was in use
140
+ */
141
+ for (i = 0; i < blocklen; i++) {
142
+ helper_ret_stb_mmu(env, vaddr + i, 0, oi, GETPC());
143
+ }
144
+ }
145
+#else
146
+ memset(g2h(vaddr), 0, blocklen);
147
+#endif
54
+}
148
+}
55
+
149
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
56
void switch_mode(CPUARMState *env, int mode)
150
index XXXXXXX..XXXXXXX 100644
57
{
151
--- a/target/arm/op_helper.c
58
ARMCPU *cpu = arm_env_get_cpu(env);
152
+++ b/target/arm/op_helper.c
59
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
153
@@ -XXX,XX +XXX,XX @@
154
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
155
*/
156
#include "qemu/osdep.h"
157
-#include "qemu/units.h"
158
#include "qemu/log.h"
159
#include "qemu/main-loop.h"
160
#include "cpu.h"
161
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(ror_cc)(CPUARMState *env, uint32_t x, uint32_t i)
162
return ((uint32_t)x >> shift) | (x << (32 - shift));
60
}
163
}
61
}
164
}
62
165
-
63
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
166
-void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
64
+{
167
-{
65
+ /* Implement the TT instruction. op is bits [7:6] of the insn. */
168
- /*
66
+ bool forceunpriv = op & 1;
169
- * Implement DC ZVA, which zeroes a fixed-length block of memory.
67
+ bool alt = op & 2;
170
- * Note that we do not implement the (architecturally mandated)
68
+ V8M_SAttributes sattrs = {};
171
- * alignment fault for attempts to use this on Device memory
69
+ uint32_t tt_resp;
172
- * (which matches the usual QEMU behaviour of not implementing either
70
+ bool r, rw, nsr, nsrw, mrvalid;
173
- * alignment faults or any memory attribute handling).
71
+ int prot;
174
- */
72
+ MemTxAttrs attrs = {};
175
-
73
+ hwaddr phys_addr;
176
- ARMCPU *cpu = env_archcpu(env);
74
+ uint32_t fsr;
177
- uint64_t blocklen = 4 << cpu->dcz_blocksize;
75
+ ARMMMUIdx mmu_idx;
178
- uint64_t vaddr = vaddr_in & ~(blocklen - 1);
76
+ uint32_t mregion;
179
-
77
+ bool targetpriv;
180
-#ifndef CONFIG_USER_ONLY
78
+ bool targetsec = env->v7m.secure;
181
- {
79
+
182
- /*
80
+ /* Work out what the security state and privilege level we're
183
- * Slightly awkwardly, QEMU's TARGET_PAGE_SIZE may be less than
81
+ * interested in is...
184
- * the block size so we might have to do more than one TLB lookup.
82
+ */
185
- * We know that in fact for any v8 CPU the page size is at least 4K
83
+ if (alt) {
186
- * and the block size must be 2K or less, but TARGET_PAGE_SIZE is only
84
+ targetsec = !targetsec;
187
- * 1K as an artefact of legacy v5 subpage support being present in the
85
+ }
188
- * same QEMU executable. So in practice the hostaddr[] array has
86
+
189
- * two entries, given the current setting of TARGET_PAGE_BITS_MIN.
87
+ if (forceunpriv) {
190
- */
88
+ targetpriv = false;
191
- int maxidx = DIV_ROUND_UP(blocklen, TARGET_PAGE_SIZE);
89
+ } else {
192
- void *hostaddr[DIV_ROUND_UP(2 * KiB, 1 << TARGET_PAGE_BITS_MIN)];
90
+ targetpriv = arm_v7m_is_handler_mode(env) ||
193
- int try, i;
91
+ !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
194
- unsigned mmu_idx = cpu_mmu_index(env, false);
92
+ }
195
- TCGMemOpIdx oi = make_memop_idx(MO_UB, mmu_idx);
93
+
196
-
94
+ /* ...and then figure out which MMU index this is */
197
- assert(maxidx <= ARRAY_SIZE(hostaddr));
95
+ mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
198
-
96
+
199
- for (try = 0; try < 2; try++) {
97
+ /* We know that the MPU and SAU don't care about the access type
200
-
98
+ * for our purposes beyond that we don't want to claim to be
201
- for (i = 0; i < maxidx; i++) {
99
+ * an insn fetch, so we arbitrarily call this a read.
202
- hostaddr[i] = tlb_vaddr_to_host(env,
100
+ */
203
- vaddr + TARGET_PAGE_SIZE * i,
101
+
204
- 1, mmu_idx);
102
+ /* MPU region info only available for privileged or if
205
- if (!hostaddr[i]) {
103
+ * inspecting the other MPU state.
206
- break;
104
+ */
207
- }
105
+ if (arm_current_el(env) != 0 || alt) {
208
- }
106
+ /* We can ignore the return value as prot is always set */
209
- if (i == maxidx) {
107
+ pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
210
- /*
108
+ &phys_addr, &attrs, &prot, &fsr, &mregion);
211
- * If it's all in the TLB it's fair game for just writing to;
109
+ if (mregion == -1) {
212
- * we know we don't need to update dirty status, etc.
110
+ mrvalid = false;
213
- */
111
+ mregion = 0;
214
- for (i = 0; i < maxidx - 1; i++) {
112
+ } else {
215
- memset(hostaddr[i], 0, TARGET_PAGE_SIZE);
113
+ mrvalid = true;
216
- }
114
+ }
217
- memset(hostaddr[i], 0, blocklen - (i * TARGET_PAGE_SIZE));
115
+ r = prot & PAGE_READ;
218
- return;
116
+ rw = prot & PAGE_WRITE;
219
- }
117
+ } else {
220
- /*
118
+ r = false;
221
- * OK, try a store and see if we can populate the tlb. This
119
+ rw = false;
222
- * might cause an exception if the memory isn't writable,
120
+ mrvalid = false;
223
- * in which case we will longjmp out of here. We must for
121
+ mregion = 0;
224
- * this purpose use the actual register value passed to us
122
+ }
225
- * so that we get the fault address right.
123
+
226
- */
124
+ if (env->v7m.secure) {
227
- helper_ret_stb_mmu(env, vaddr_in, 0, oi, GETPC());
125
+ v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
228
- /* Now we can populate the other TLB entries, if any */
126
+ nsr = sattrs.ns && r;
229
- for (i = 0; i < maxidx; i++) {
127
+ nsrw = sattrs.ns && rw;
230
- uint64_t va = vaddr + TARGET_PAGE_SIZE * i;
128
+ } else {
231
- if (va != (vaddr_in & TARGET_PAGE_MASK)) {
129
+ sattrs.ns = true;
232
- helper_ret_stb_mmu(env, va, 0, oi, GETPC());
130
+ nsr = false;
233
- }
131
+ nsrw = false;
234
- }
132
+ }
235
- }
133
+
236
-
134
+ tt_resp = (sattrs.iregion << 24) |
237
- /*
135
+ (sattrs.irvalid << 23) |
238
- * Slow path (probably attempt to do this to an I/O device or
136
+ ((!sattrs.ns) << 22) |
239
- * similar, or clearing of a block of code we have translations
137
+ (nsrw << 21) |
240
- * cached for). Just do a series of byte writes as the architecture
138
+ (nsr << 20) |
241
- * demands. It's not worth trying to use a cpu_physical_memory_map(),
139
+ (rw << 19) |
242
- * memset(), unmap() sequence here because:
140
+ (r << 18) |
243
- * + we'd need to account for the blocksize being larger than a page
141
+ (sattrs.srvalid << 17) |
244
- * + the direct-RAM access case is almost always going to be dealt
142
+ (mrvalid << 16) |
245
- * with in the fastpath code above, so there's no speed benefit
143
+ (sattrs.sregion << 8) |
246
- * + we would have to deal with the map returning NULL because the
144
+ mregion;
247
- * bounce buffer was in use
145
+
248
- */
146
+ return tt_resp;
249
- for (i = 0; i < blocklen; i++) {
147
+}
250
- helper_ret_stb_mmu(env, vaddr + i, 0, oi, GETPC());
148
+
251
- }
149
#endif
252
- }
150
253
-#else
151
void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
254
- memset(g2h(vaddr), 0, blocklen);
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
255
-#endif
153
index XXXXXXX..XXXXXXX 100644
256
-}
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(DisasContext *s, uint32_t insn)
157
if (insn & (1 << 22)) {
158
/* 0b1110_100x_x1xx_xxxx_xxxx_xxxx_xxxx_xxxx
159
* - load/store doubleword, load/store exclusive, ldacq/strel,
160
- * table branch.
161
+ * table branch, TT.
162
*/
163
if (insn == 0xe97fe97f && arm_dc_feature(s, ARM_FEATURE_M) &&
164
arm_dc_feature(s, ARM_FEATURE_V8)) {
165
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(DisasContext *s, uint32_t insn)
166
} else if ((insn & (1 << 23)) == 0) {
167
/* 0b1110_1000_010x_xxxx_xxxx_xxxx_xxxx_xxxx
168
* - load/store exclusive word
169
+ * - TT (v8M only)
170
*/
171
if (rs == 15) {
172
+ if (!(insn & (1 << 20)) &&
173
+ arm_dc_feature(s, ARM_FEATURE_M) &&
174
+ arm_dc_feature(s, ARM_FEATURE_V8)) {
175
+ /* 0b1110_1000_0100_xxxx_1111_xxxx_xxxx_xxxx
176
+ * - TT (v8M only)
177
+ */
178
+ bool alt = insn & (1 << 7);
179
+ TCGv_i32 addr, op, ttresp;
180
+
181
+ if ((insn & 0x3f) || rd == 13 || rd == 15 || rn == 15) {
182
+ /* we UNDEF for these UNPREDICTABLE cases */
183
+ goto illegal_op;
184
+ }
185
+
186
+ if (alt && !s->v8m_secure) {
187
+ goto illegal_op;
188
+ }
189
+
190
+ addr = load_reg(s, rn);
191
+ op = tcg_const_i32(extract32(insn, 6, 2));
192
+ ttresp = tcg_temp_new_i32();
193
+ gen_helper_v7m_tt(ttresp, cpu_env, addr, op);
194
+ tcg_temp_free_i32(addr);
195
+ tcg_temp_free_i32(op);
196
+ store_reg(s, rd, ttresp);
197
+ }
198
goto illegal_op;
199
}
200
addr = tcg_temp_local_new_i32();
201
--
257
--
202
2.7.4
258
2.20.1
203
259
204
260
diff view generated by jsdifflib
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Don't set TX FIFO UNDERFLOW interrupt after transmitting the commands.
3
The function does not write registers, and only reads them by
4
Also update interrupts after reading out the interrupt status.
4
implication via the exception path.
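(On the "update interrupts after reading out the interrupt status" part: the pattern being fixed is the usual read-to-clear one, sketched below with invented names and a minimal device struct; the actual code clears R_INTR_STATUS and then calls xilinx_spips_update_ixr() so the IRQ output tracks the cleared state.)

    #include <stdbool.h>
    #include <stdint.h>

    struct dev {
        uint32_t intr_status;
        bool irq_level;
    };

    static void update_irq_line(struct dev *d)
    {
        d->irq_level = d->intr_status != 0;
    }

    static uint32_t status_read(struct dev *d)
    {
        uint32_t ret = d->intr_status;

        d->intr_status = 0;     /* read-to-clear */
        update_irq_line(d);     /* re-evaluate IRQ from the now-cleared state */
        return ret;
    }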
5
5
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Message-id: 20200302175829.2183-7-richard.henderson@linaro.org
10
Message-id: 20171126231634.9531-12-frasse.iglesias@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
hw/ssi/xilinx_spips.c | 4 +---
12
target/arm/helper-a64.h | 2 +-
14
1 file changed, 1 insertion(+), 3 deletions(-)
13
1 file changed, 1 insertion(+), 1 deletion(-)
15
14
16
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
15
diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/ssi/xilinx_spips.c
17
--- a/target/arm/helper-a64.h
19
+++ b/hw/ssi/xilinx_spips.c
18
+++ b/target/arm/helper-a64.h
20
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
21
uint8_t addr_length;
20
DEF_HELPER_2(sqrt_f16, f16, f16, ptr)
22
21
23
if (fifo8_is_empty(&s->tx_fifo)) {
22
DEF_HELPER_2(exception_return, void, env, i64)
24
- if (!(s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE)) {
23
-DEF_HELPER_2(dc_zva, void, env, i64)
25
- s->regs[R_INTR_STATUS] |= IXR_TX_FIFO_UNDERFLOW;
24
+DEF_HELPER_FLAGS_2(dc_zva, TCG_CALL_NO_WG, void, env, i64)
26
- }
25
27
xilinx_spips_update_ixr(s);
26
DEF_HELPER_FLAGS_3(pacia, TCG_CALL_NO_WG, i64, env, i64, i64)
28
return;
27
DEF_HELPER_FLAGS_3(pacib, TCG_CALL_NO_WG, i64, env, i64, i64)
29
} else if (s->snoop_state == SNOOP_STRIPING) {
30
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
31
ret = s->regs[addr] & IXR_ALL;
32
s->regs[addr] = 0;
33
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
34
+ xilinx_spips_update_ixr(s);
35
return ret;
36
case R_INTR_MASK:
37
mask = IXR_ALL;
38
--
28
--
39
2.7.4
29
2.20.1
40
30
41
31
diff view generated by jsdifflib
Deleted patch
1
For v8M it is possible for the CONTROL.SPSEL bit value and the
2
current stack to be out of sync. This means we need to update
3
the checks used in reads and writes of the PSP and MSP special
4
registers to use v7m_using_psp() rather than directly checking
5
the SPSEL bit in the control register.
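(For reference, the helper being switched to has roughly the following shape; this is a sketch from memory of the internals.h definition, not text from this patch.)

    static inline bool v7m_using_psp(CPUARMState *env)
    {
        /* Handler mode always uses the Main stack; in Thread mode the
         * CONTROL.SPSEL bit of the current security state picks the stack.
         * In v7M Handler mode with SPSEL set cannot happen, but in v8M it
         * can, which is why checking SPSEL alone is no longer enough. */
        return !arm_v7m_is_handler_mode(env) &&
            env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK;
    }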
6
1
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 1512153879-5291-2-git-send-email-peter.maydell@linaro.org
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
---
12
target/arm/helper.c | 10 ++++------
13
1 file changed, 4 insertions(+), 6 deletions(-)
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
20
21
switch (reg) {
22
case 8: /* MSP */
23
- return (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) ?
24
- env->v7m.other_sp : env->regs[13];
25
+ return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
26
case 9: /* PSP */
27
- return (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) ?
28
- env->regs[13] : env->v7m.other_sp;
29
+ return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
30
case 16: /* PRIMASK */
31
return env->v7m.primask[env->v7m.secure];
32
case 17: /* BASEPRI */
33
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
34
}
35
break;
36
case 8: /* MSP */
37
- if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) {
38
+ if (v7m_using_psp(env)) {
39
env->v7m.other_sp = val;
40
} else {
41
env->regs[13] = val;
42
}
43
break;
44
case 9: /* PSP */
45
- if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) {
46
+ if (v7m_using_psp(env)) {
47
env->regs[13] = val;
48
} else {
49
env->v7m.other_sp = val;
50
--
51
2.7.4
52
53
diff view generated by jsdifflib
Deleted patch
1
When we added the ARMMMUIdx_MSUser MMU index we forgot to
2
add it to the case statement in regime_is_user(), so we
3
weren't treating it as unprivileged when doing MPU lookups.
4
Correct the omission.
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 1512153879-5291-4-git-send-email-peter.maydell@linaro.org
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
---
11
target/arm/helper.c | 1 +
12
1 file changed, 1 insertion(+)
13
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
19
case ARMMMUIdx_S1SE0:
20
case ARMMMUIdx_S1NSE0:
21
case ARMMMUIdx_MUser:
22
+ case ARMMMUIdx_MSUser:
23
return true;
24
default:
25
return false;
26
--
27
2.7.4
28
29
diff view generated by jsdifflib
Deleted patch
1
Make get_phys_addr_v6() return a fault type in the ARMMMUFaultInfo
2
structure, which we convert to the FSC at the callsite.
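(A stand-alone illustration of the conversion that now happens at the callsite; the types and names below are deliberately reduced stand-ins, and the real encoding lives in arm_fi_to_sfsc() added earlier in this series.)

    #include <assert.h>
    #include <stdint.h>

    enum fault_type { FAULT_TRANSLATION, FAULT_DOMAIN, FAULT_PERMISSION };

    struct fault_info {
        enum fault_type type;
        int level;       /* table walk level */
        int domain;      /* v7 domain number */
    };

    static uint32_t to_sfsc(const struct fault_info *fi)
    {
        uint32_t fsc = 0;

        switch (fi->type) {
        case FAULT_TRANSLATION: fsc = fi->level == 1 ? 0x5 : 0x7; break;
        case FAULT_DOMAIN:      fsc = fi->level == 1 ? 0x9 : 0xb; break;
        case FAULT_PERMISSION:  fsc = fi->level == 1 ? 0xd : 0xf; break;
        }
        return fsc | (fi->domain << 4);     /* domain lives in bits [7:4] */
    }

    int main(void)
    {
        struct fault_info fi = { FAULT_TRANSLATION, 2, 3 };

        assert(to_sfsc(&fi) == 0x37);       /* page translation fault, domain 3 */
        return 0;
    }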
3
1
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
8
Message-id: 1512503192-2239-5-git-send-email-peter.maydell@linaro.org
9
---
10
target/arm/helper.c | 40 ++++++++++++++++++++++------------------
11
1 file changed, 22 insertions(+), 18 deletions(-)
12
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ do_fault:
18
static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
20
hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
21
- target_ulong *page_size, uint32_t *fsr,
22
- ARMMMUFaultInfo *fi)
23
+ target_ulong *page_size, ARMMMUFaultInfo *fi)
24
{
25
CPUState *cs = CPU(arm_env_get_cpu(env));
26
- int code;
27
+ int level = 1;
28
uint32_t table;
29
uint32_t desc;
30
uint32_t xn;
31
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
32
/* Lookup l1 descriptor. */
33
if (!get_level1_table_address(env, mmu_idx, &table, address)) {
34
/* Section translation fault if page walk is disabled by PD0 or PD1 */
35
- code = 5;
36
+ fi->type = ARMFault_Translation;
37
goto do_fault;
38
}
39
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
40
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
41
/* Section translation fault, or attempt to use the encoding
42
* which is Reserved on implementations without PXN.
43
*/
44
- code = 5;
45
+ fi->type = ARMFault_Translation;
46
goto do_fault;
47
}
48
if ((type == 1) || !(desc & (1 << 18))) {
49
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
50
} else {
51
dacr = env->cp15.dacr_s;
52
}
53
+ if (type == 1) {
54
+ level = 2;
55
+ }
56
domain_prot = (dacr >> (domain * 2)) & 3;
57
if (domain_prot == 0 || domain_prot == 2) {
58
- if (type != 1) {
59
- code = 9; /* Section domain fault. */
60
- } else {
61
- code = 11; /* Page domain fault. */
62
- }
63
+ /* Section or Page domain fault */
64
+ fi->type = ARMFault_Domain;
65
goto do_fault;
66
}
67
if (type != 1) {
68
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
69
ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
70
xn = desc & (1 << 4);
71
pxn = desc & 1;
72
- code = 13;
73
ns = extract32(desc, 19, 1);
74
} else {
75
if (arm_feature(env, ARM_FEATURE_PXN)) {
76
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
77
ap = ((desc >> 4) & 3) | ((desc >> 7) & 4);
78
switch (desc & 3) {
79
case 0: /* Page translation fault. */
80
- code = 7;
81
+ fi->type = ARMFault_Translation;
82
goto do_fault;
83
case 1: /* 64k page. */
84
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
85
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
86
/* Never happens, but compiler isn't smart enough to tell. */
87
abort();
88
}
89
- code = 15;
90
}
91
if (domain_prot == 3) {
92
*prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
93
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
94
if (pxn && !regime_is_user(env, mmu_idx)) {
95
xn = 1;
96
}
97
- if (xn && access_type == MMU_INST_FETCH)
98
+ if (xn && access_type == MMU_INST_FETCH) {
99
+ fi->type = ARMFault_Permission;
100
goto do_fault;
101
+ }
102
103
if (arm_feature(env, ARM_FEATURE_V6K) &&
104
(regime_sctlr(env, mmu_idx) & SCTLR_AFE)) {
105
/* The simplified model uses AP[0] as an access control bit. */
106
if ((ap & 1) == 0) {
107
/* Access flag fault. */
108
- code = (code == 15) ? 6 : 3;
109
+ fi->type = ARMFault_AccessFlag;
110
goto do_fault;
111
}
112
*prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
113
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
114
}
115
if (!(*prot & (1 << access_type))) {
116
/* Access permission fault. */
117
+ fi->type = ARMFault_Permission;
118
goto do_fault;
119
}
120
}
121
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
122
*phys_ptr = phys_addr;
123
return false;
124
do_fault:
125
- *fsr = code | (domain << 4);
126
+ fi->domain = domain;
127
+ fi->level = level;
128
return true;
129
}
130
131
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
132
return get_phys_addr_lpae(env, address, access_type, mmu_idx, phys_ptr,
133
attrs, prot, page_size, fsr, fi, cacheattrs);
134
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
135
- return get_phys_addr_v6(env, address, access_type, mmu_idx, phys_ptr,
136
- attrs, prot, page_size, fsr, fi);
137
+ bool ret = get_phys_addr_v6(env, address, access_type, mmu_idx,
138
+ phys_ptr, attrs, prot, page_size, fi);
139
+
140
+ *fsr = arm_fi_to_sfsc(fi);
141
+ return ret;
142
} else {
143
bool ret = get_phys_addr_v5(env, address, access_type, mmu_idx,
144
phys_ptr, prot, page_size, fi);
145
--
146
2.7.4
147
148
diff view generated by jsdifflib
Deleted patch
1
Make get_phys_addr_pmsav5() return a fault type in the ARMMMUFaultInfo
2
structure, which we convert to the FSC at the callsite.
3
1
4
Note that PMSAv5 does not define any guest-visible fault status
5
register, so the different "fsr" values we were previously
6
returning are entirely arbitrary. So we can just switch to using
7
the most appropriate fi->type values without worrying that we
8
need to special-case FaultInfo->FSC conversion for PMSAv5.
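(Illustrative only, and relying on the ARMMMUFaultInfo and arm_fi_to_sfsc() definitions added earlier in this series: the fault types chosen here encode as shown below. The values differ from the old hard-coded 2 and 1, which is fine because nothing guest-visible consumes them on PMSAv5.)

    ARMMMUFaultInfo fi = { .type = ARMFault_Background };
    assert(arm_fi_to_sfsc(&fi) == 0x0);   /* background faults encode as 0 */

    fi = (ARMMMUFaultInfo){ .type = ARMFault_Permission, .level = 1 };
    assert(arm_fi_to_sfsc(&fi) == 0xd);   /* level 1 permission fault */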
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
14
Message-id: 1512503192-2239-7-git-send-email-peter.maydell@linaro.org
15
---
16
target/arm/helper.c | 20 +++++++++++++-------
17
1 file changed, 13 insertions(+), 7 deletions(-)
18
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
24
25
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
26
MMUAccessType access_type, ARMMMUIdx mmu_idx,
27
- hwaddr *phys_ptr, int *prot, uint32_t *fsr)
28
+ hwaddr *phys_ptr, int *prot,
29
+ ARMMMUFaultInfo *fi)
30
{
31
int n;
32
uint32_t mask;
33
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
34
}
35
}
36
if (n < 0) {
37
- *fsr = 2;
38
+ fi->type = ARMFault_Background;
39
return true;
40
}
41
42
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
43
mask = (mask >> (n * 4)) & 0xf;
44
switch (mask) {
45
case 0:
46
- *fsr = 1;
47
+ fi->type = ARMFault_Permission;
48
+ fi->level = 1;
49
return true;
50
case 1:
51
if (is_user) {
52
- *fsr = 1;
53
+ fi->type = ARMFault_Permission;
54
+ fi->level = 1;
55
return true;
56
}
57
*prot = PAGE_READ | PAGE_WRITE;
58
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
59
break;
60
case 5:
61
if (is_user) {
62
- *fsr = 1;
63
+ fi->type = ARMFault_Permission;
64
+ fi->level = 1;
65
return true;
66
}
67
*prot = PAGE_READ;
68
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
69
break;
70
default:
71
/* Bad permission. */
72
- *fsr = 1;
73
+ fi->type = ARMFault_Permission;
74
+ fi->level = 1;
75
return true;
76
}
77
*prot |= PAGE_EXEC;
78
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
79
} else {
80
/* Pre-v7 MPU */
81
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
82
- phys_ptr, prot, fsr);
83
+ phys_ptr, prot, fi);
84
+ *fsr = arm_fi_to_sfsc(fi);
85
}
86
qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
87
" mmu_idx %u -> %s (prot %c%c%c)\n",
88
--
89
2.7.4
90
91
diff view generated by jsdifflib
1
From: Zhaoshenglong <zhaoshenglong@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Since I'm no longer working as an assignee at Linaro, replace the Linaro email
3
This data access was forgotten when we added support for cleaning
4
address with my personal one.
4
addresses of TBI information.
5
5
6
Signed-off-by: Zhaoshenglong <zhaoshenglong@huawei.com>
6
Fixes: 3a471103ac1823ba
7
Message-id: 1513058845-9768-1-git-send-email-zhaoshenglong@huawei.com
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200302175829.2183-8-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
MAINTAINERS | 2 +-
12
target/arm/translate-a64.c | 2 +-
11
1 file changed, 1 insertion(+), 1 deletion(-)
13
1 file changed, 1 insertion(+), 1 deletion(-)
12
14
13
diff --git a/MAINTAINERS b/MAINTAINERS
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/MAINTAINERS
17
--- a/target/arm/translate-a64.c
16
+++ b/MAINTAINERS
18
+++ b/target/arm/translate-a64.c
17
@@ -XXX,XX +XXX,XX @@ F: include/hw/*/xlnx*.h
19
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
18
20
return;
19
ARM ACPI Subsystem
21
case ARM_CP_DC_ZVA:
20
M: Shannon Zhao <zhaoshenglong@huawei.com>
22
/* Writes clear the aligned block of memory which rt points into. */
21
-M: Shannon Zhao <shannon.zhao@linaro.org>
23
- tcg_rt = cpu_reg(s, rt);
22
+M: Shannon Zhao <shannon.zhaosl@gmail.com>
24
+ tcg_rt = clean_data_tbi(s, cpu_reg(s, rt));
23
L: qemu-arm@nongnu.org
25
gen_helper_dc_zva(cpu_env, tcg_rt);
24
S: Maintained
26
return;
25
F: hw/arm/virt-acpi-build.c
27
default:
26
--
28
--
27
2.7.4
29
2.20.1
28
30
29
31
diff view generated by jsdifflib