Another go at the v8.5-MemTag linux-user support, plus a
couple more npcm7xx devices.

-- PMM

The following changes since commit 8ba4bca570ace1e60614a0808631a517cf5df67a:

  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging (2021-02-15 17:13:57 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210216

for you to fetch changes up to 64fd5bddf3b71d1b92b55382ab39768bd87ecfbd:

  tests/qtests: Add npcm7xx emc model test (2021-02-16 14:27:05 +0000)

----------------------------------------------------------------
target-arm queue:
 * Support ARMv8.5-MemTag for linux-user
 * npcm7xx: Support SMBus, EMC ethernet devices
 * MAINTAINERS: add section for Clock framework

----------------------------------------------------------------
Doug Evans (3):
      hw/net: Add npcm7xx emc model
      hw/arm: Add npcm7xx emc model
      tests/qtests: Add npcm7xx emc model test

Hao Wu (5):
      hw/i2c: Implement NPCM7XX SMBus Module Single Mode
      hw/arm: Add I2C sensors for NPCM750 eval board
      hw/arm: Add I2C sensors and EEPROM for GSJ machine
      hw/i2c: Add a QTest for NPCM7XX SMBus Device
      hw/i2c: Implement NPCM7XX SMBus Module FIFO Mode

Luc Michel (1):
      MAINTAINERS: add myself maintainer for the clock framework

Richard Henderson (31):
      tcg: Introduce target-specific page data for user-only
      linux-user: Introduce PAGE_ANON
      exec: Use uintptr_t for guest_base
      exec: Use uintptr_t in cpu_ldst.h
      exec: Improve types for guest_addr_valid
      linux-user: Check for overflow in access_ok
      linux-user: Tidy VERIFY_READ/VERIFY_WRITE
      bsd-user: Tidy VERIFY_READ/VERIFY_WRITE
      linux-user: Do not use guest_addr_valid for h2g_valid
      linux-user: Fix guest_addr_valid vs reserved_va
      exec: Introduce cpu_untagged_addr
      exec: Use cpu_untagged_addr in g2h; split out g2h_untagged
      linux-user: Explicitly untag memory management syscalls
      linux-user: Use guest_range_valid in access_ok
      exec: Rename guest_{addr,range}_valid to *_untagged
      linux-user: Use cpu_untagged_addr in access_ok; split out *_untagged
      linux-user: Move lock_user et al out of line
      linux-user: Fix types in uaccess.c
      linux-user: Handle tags in lock_user/unlock_user
      linux-user/aarch64: Implement PR_TAGGED_ADDR_ENABLE
      target/arm: Improve gen_top_byte_ignore
      target/arm: Use the proper TBI settings for linux-user
      linux-user/aarch64: Implement PR_MTE_TCF and PR_MTE_TAG
      linux-user/aarch64: Implement PROT_MTE
      target/arm: Split out syndrome.h from internals.h
      linux-user/aarch64: Pass syndrome to EXC_*_ABORT
      linux-user/aarch64: Signal SEGV_MTESERR for sync tag check fault
      linux-user/aarch64: Signal SEGV_MTEAERR for async tag check error
      target/arm: Add allocation tag storage for user mode
      target/arm: Enable MTE for user-only
      tests/tcg/aarch64: Add mte smoke tests

 docs/system/arm/nuvoton.rst | 5 +-
 bsd-user/qemu.h | 17 +-
 include/exec/cpu-all.h | 47 +-
 include/exec/cpu_ldst.h | 39 +-
 include/exec/exec-all.h | 2 +-
 include/hw/arm/npcm7xx.h | 4 +
 include/hw/i2c/npcm7xx_smbus.h | 113 ++++
 include/hw/net/npcm7xx_emc.h | 286 +++++++++
 linux-user/aarch64/target_signal.h | 3 +
 linux-user/aarch64/target_syscall.h | 13 +
 linux-user/qemu.h | 76 +--
 linux-user/syscall_defs.h | 1 +
 target/arm/cpu-param.h | 3 +
 target/arm/cpu.h | 32 +
 target/arm/internals.h | 249 +-------
 target/arm/syndrome.h | 273 +++++++++
 tests/tcg/aarch64/mte.h | 60 ++
 accel/tcg/translate-all.c | 32 +-
 accel/tcg/user-exec.c | 51 +-
 bsd-user/elfload.c | 2 +-
 bsd-user/main.c | 8 +-
 bsd-user/mmap.c | 23 +-
 hw/arm/npcm7xx.c | 118 +++-
 hw/arm/npcm7xx_boards.c | 46 ++
 hw/i2c/npcm7xx_smbus.c | 1099 +++++++++++++++++++++++++++++++++++
 hw/net/npcm7xx_emc.c | 857 +++++++++++++++++++++++++++
 linux-user/aarch64/cpu_loop.c | 38 +-
 linux-user/elfload.c | 18 +-
 linux-user/flatload.c | 2 +-
 linux-user/hppa/cpu_loop.c | 39 +-
 linux-user/i386/cpu_loop.c | 6 +-
 linux-user/i386/signal.c | 5 +-
 linux-user/main.c | 4 +-
 linux-user/mmap.c | 88 +--
 linux-user/ppc/signal.c | 4 +-
 linux-user/syscall.c | 165 ++++--
 linux-user/uaccess.c | 82 ++-
 target/arm/cpu.c | 25 +-
 target/arm/helper-a64.c | 4 +-
 target/arm/mte_helper.c | 39 +-
 target/arm/tlb_helper.c | 15 +-
 target/arm/translate-a64.c | 25 +-
 target/hppa/op_helper.c | 2 +-
 target/i386/tcg/mem_helper.c | 2 +-
 target/s390x/mem_helper.c | 4 +-
 tests/qtest/npcm7xx_emc-test.c | 862 +++++++++++++++++++++++++++
 tests/qtest/npcm7xx_smbus-test.c | 495 ++++++++++++++++
 tests/tcg/aarch64/mte-1.c | 28 +
 tests/tcg/aarch64/mte-2.c | 45 ++
 tests/tcg/aarch64/mte-3.c | 51 ++
 tests/tcg/aarch64/mte-4.c | 45 ++
 tests/tcg/aarch64/pauth-2.c | 1 -
 MAINTAINERS | 11 +
 hw/arm/Kconfig | 1 +
 hw/i2c/meson.build | 1 +
 hw/i2c/trace-events | 12 +
 hw/net/meson.build | 1 +
 hw/net/trace-events | 17 +
 tests/qtest/meson.build | 2 +
 tests/tcg/aarch64/Makefile.target | 6 +
 tests/tcg/configure.sh | 4 +
 61 files changed, 5052 insertions(+), 556 deletions(-)
 create mode 100644 include/hw/i2c/npcm7xx_smbus.h
 create mode 100644 include/hw/net/npcm7xx_emc.h
 create mode 100644 target/arm/syndrome.h
 create mode 100644 tests/tcg/aarch64/mte.h
 create mode 100644 hw/i2c/npcm7xx_smbus.c
 create mode 100644 hw/net/npcm7xx_emc.c
 create mode 100644 tests/qtest/npcm7xx_emc-test.c
 create mode 100644 tests/qtest/npcm7xx_smbus-test.c
 create mode 100644 tests/tcg/aarch64/mte-1.c
 create mode 100644 tests/tcg/aarch64/mte-2.c
 create mode 100644 tests/tcg/aarch64/mte-3.c
 create mode 100644 tests/tcg/aarch64/mte-4.c


The following changes since commit 5a67d7735d4162630769ef495cf813244fc850df:

  Merge remote-tracking branch 'remotes/berrange-gitlab/tags/tls-deps-pull-request' into staging (2021-07-02 08:22:39 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210702

for you to fetch changes up to 04ea4d3cfd0a21b248ece8eb7a9436a3d9898dd8:

  target/arm: Implement MVE shifts by register (2021-07-02 11:48:38 +0100)

----------------------------------------------------------------
target-arm queue:
 * more MVE instructions
 * hw/gpio/gpio_pwr: use shutdown function for reboot
 * target/arm: Check NaN mode before silencing NaN
 * tests: Boot and halt a Linux guest on the Raspberry Pi 2 machine
 * hw/arm: Add basic power management to raspi.
 * docs/system/arm: Add quanta-gbs-bmc, quanta-q7l1-bmc

----------------------------------------------------------------
Joe Komlodi (1):
      target/arm: Check NaN mode before silencing NaN

Maxim Uvarov (1):
      hw/gpio/gpio_pwr: use shutdown function for reboot

Nolan Leake (1):
      hw/arm: Add basic power management to raspi.

Patrick Venture (2):
      docs/system/arm: Add quanta-q7l1-bmc reference
      docs/system/arm: Add quanta-gbs-bmc reference

Peter Maydell (18):
      target/arm: Fix MVE widening/narrowing VLDR/VSTR offset calculation
      target/arm: Fix bugs in MVE VRMLALDAVH, VRMLSLDAVH
      target/arm: Make asimd_imm_const() public
      target/arm: Use asimd_imm_const for A64 decode
      target/arm: Use dup_const() instead of bitfield_replicate()
      target/arm: Implement MVE logical immediate insns
      target/arm: Implement MVE vector shift left by immediate insns
      target/arm: Implement MVE vector shift right by immediate insns
      target/arm: Implement MVE VSHLL
      target/arm: Implement MVE VSRI, VSLI
      target/arm: Implement MVE VSHRN, VRSHRN
      target/arm: Implement MVE saturating narrowing shifts
      target/arm: Implement MVE VSHLC
      target/arm: Implement MVE VADDLV
      target/arm: Implement MVE long shifts by immediate
      target/arm: Implement MVE long shifts by register
      target/arm: Implement MVE shifts by immediate
      target/arm: Implement MVE shifts by register

Philippe Mathieu-Daudé (1):
      tests: Boot and halt a Linux guest on the Raspberry Pi 2 machine

 docs/system/arm/aspeed.rst | 1 +
 docs/system/arm/nuvoton.rst | 5 +-
 include/hw/arm/bcm2835_peripherals.h | 3 +-
 include/hw/misc/bcm2835_powermgt.h | 29 ++
 target/arm/helper-mve.h | 108 +++++++
 target/arm/translate.h | 41 +++
 target/arm/mve.decode | 177 ++++++++++-
 target/arm/t32.decode | 71 ++++-
 hw/arm/bcm2835_peripherals.c | 13 +-
 hw/gpio/gpio_pwr.c | 2 +-
 hw/misc/bcm2835_powermgt.c | 160 ++++++++++
 target/arm/helper-a64.c | 12 +-
 target/arm/mve_helper.c | 524 +++++++++++++++++++++++++++--
 target/arm/translate-a64.c | 86 +-----
 target/arm/translate-mve.c | 261 +++++++++++++++-
 target/arm/translate-neon.c | 81 -----
 target/arm/translate.c | 327 +++++++++++++++++++-
 target/arm/vfp_helper.c | 24 +-
 hw/misc/meson.build | 1 +
 tests/acceptance/boot_linux_console.py | 43 +++
 20 files changed, 1760 insertions(+), 209 deletions(-)
 create mode 100644 include/hw/misc/bcm2835_powermgt.h
 create mode 100644 hw/misc/bcm2835_powermgt.c
From: Richard Henderson <richard.henderson@linaro.org>

Remember the PROT_MTE bit as PAGE_MTE/PAGE_TARGET_2.
Otherwise this does not yet have effect.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/cpu-all.h | 1 +
 linux-user/syscall_defs.h | 1 +
 target/arm/cpu.h | 1 +
 linux-user/mmap.c | 22 ++++++++++++++--------
 4 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
 #endif
 /* Target-specific bits that will be used via page_get_flags(). */
 #define PAGE_TARGET_1 0x0080
+#define PAGE_TARGET_2 0x0200
 
 #if defined(CONFIG_USER_ONLY)
 void page_dump(FILE *f);
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall_defs.h
+++ b/linux-user/syscall_defs.h
@@ -XXX,XX +XXX,XX @@ struct target_winsize {
 
 #ifdef TARGET_AARCH64
 #define TARGET_PROT_BTI 0x10
+#define TARGET_PROT_MTE 0x20
 #endif
 
 /* Common */
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
  */
 #define PAGE_BTI PAGE_TARGET_1
+#define PAGE_MTE PAGE_TARGET_2
 
 #ifdef TARGET_TAGGED_ADDRESSES
 /**
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -XXX,XX +XXX,XX @@ static int validate_prot_to_pageflags(int *host_prot, int prot)
                | (prot & PROT_EXEC ? PROT_READ : 0);
 
 #ifdef TARGET_AARCH64
-    /*
-     * The PROT_BTI bit is only accepted if the cpu supports the feature.
-     * Since this is the unusual case, don't bother checking unless
-     * the bit has been requested.  If set and valid, record the bit
-     * within QEMU's page_flags.
-     */
-    if (prot & TARGET_PROT_BTI) {
+    {
         ARMCPU *cpu = ARM_CPU(thread_cpu);
-        if (cpu_isar_feature(aa64_bti, cpu)) {
+
+        /*
+         * The PROT_BTI bit is only accepted if the cpu supports the feature.
+         * Since this is the unusual case, don't bother checking unless
+         * the bit has been requested.  If set and valid, record the bit
+         * within QEMU's page_flags.
+         */
+        if ((prot & TARGET_PROT_BTI) && cpu_isar_feature(aa64_bti, cpu)) {
             valid |= TARGET_PROT_BTI;
             page_flags |= PAGE_BTI;
         }
+        /* Similarly for the PROT_MTE bit. */
+        if ((prot & TARGET_PROT_MTE) && cpu_isar_feature(aa64_mte, cpu)) {
+            valid |= TARGET_PROT_MTE;
+            page_flags |= PAGE_MTE;
+        }
     }
 #endif
 
-- 
2.20.1


From: Patrick Venture <venture@google.com>

Adds a line-item reference to the supported quanta-q71l-bmc aspeed
entry.

Signed-off-by: Patrick Venture <venture@google.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20210615192848.1065297-2-venture@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/aspeed.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/aspeed.rst
+++ b/docs/system/arm/aspeed.rst
@@ -XXX,XX +XXX,XX @@ etc.
 AST2400 SoC based machines :
 
 - ``palmetto-bmc``         OpenPOWER Palmetto POWER8 BMC
+- ``quanta-q71l-bmc``      OpenBMC Quanta BMC
 
 AST2500 SoC based machines :
-- 
2.20.1
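As a companion to the PROT_MTE patch above (and independent of the series' own
tests/tcg/aarch64/mte-*.c smoke tests), here is a minimal guest-side sketch of
how a linux-user program opts in to tagged memory. It assumes the standard
Linux arm64 ABI values: PROT_MTE is 0x20 (the same value as TARGET_PROT_MTE
above) and the prctl() constants come from <linux/prctl.h>; fallbacks are
provided in case the libc headers predate MTE.

    /*
     * Illustrative sketch only (not from the QEMU series): map one page
     * with PROT_MTE and enable synchronous tag-check faults.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/prctl.h>

    #ifndef PROT_MTE
    #define PROT_MTE                0x20
    #endif
    #ifndef PR_SET_TAGGED_ADDR_CTRL
    #define PR_SET_TAGGED_ADDR_CTRL 55
    #endif
    #ifndef PR_TAGGED_ADDR_ENABLE
    #define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
    #endif
    #ifndef PR_MTE_TCF_SYNC
    #define PR_MTE_TCF_SYNC         (1UL << 1)
    #endif
    #ifndef PR_MTE_TAG_SHIFT
    #define PR_MTE_TAG_SHIFT        3
    #endif

    int main(void)
    {
        /* Enable tagged addresses, synchronous tag faults, non-zero IRG tags. */
        if (prctl(PR_SET_TAGGED_ADDR_CTRL,
                  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
                  (0xfffeUL << PR_MTE_TAG_SHIFT), 0, 0, 0)) {
            perror("PR_SET_TAGGED_ADDR_CTRL");
            return EXIT_FAILURE;
        }

        /* PROT_MTE is what gets recorded as PAGE_MTE in the patch above. */
        unsigned char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(PROT_MTE)");
            return EXIT_FAILURE;
        }

        p[0] = 42;   /* tag-0 pointer matches the initial allocation tag */
        printf("PROT_MTE mapping works\n");
        return EXIT_SUCCESS;
    }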
From: Doug Evans <dje@google.com>

This is a 10/100 ethernet device that has several features.
Only the ones needed by the Linux driver have been implemented.
See npcm7xx_emc.c for a list of unimplemented features.

Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Doug Evans <dje@google.com>
Message-id: 20210213002520.1374134-3-dje@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 3 ++-
 include/hw/arm/npcm7xx.h | 2 ++
 hw/arm/npcm7xx.c | 50 +++++++++++++++++++++++++++++++++++--
 3 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Supported devices
  * Analog to Digital Converter (ADC)
  * Pulse Width Modulation (PWM)
  * SMBus controller (SMBF)
+ * Ethernet controller (EMC)
 
 Missing devices
 ---------------
@@ -XXX,XX +XXX,XX @@ Missing devices
  * Shared memory (SHM)
  * eSPI slave interface
 
- * Ethernet controllers (GMAC and EMC)
+ * Ethernet controller (GMAC)
  * USB device (USBD)
  * Peripheral SPI controller (PSPI)
  * SD/MMC host
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/npcm7xx.h
+++ b/include/hw/arm/npcm7xx.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/misc/npcm7xx_gcr.h"
 #include "hw/misc/npcm7xx_pwm.h"
 #include "hw/misc/npcm7xx_rng.h"
+#include "hw/net/npcm7xx_emc.h"
 #include "hw/nvram/npcm7xx_otp.h"
 #include "hw/timer/npcm7xx_timer.h"
 #include "hw/ssi/npcm7xx_fiu.h"
@@ -XXX,XX +XXX,XX @@ typedef struct NPCM7xxState {
     EHCISysBusState ehci;
     OHCISysBusState ohci;
     NPCM7xxFIUState fiu[2];
+    NPCM7xxEMCState emc[2];
 } NPCM7xxState;
 
 #define TYPE_NPCM7XX "npcm7xx"
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
     NPCM7XX_UART1_IRQ,
     NPCM7XX_UART2_IRQ,
     NPCM7XX_UART3_IRQ,
+    NPCM7XX_EMC1RX_IRQ = 15,
+    NPCM7XX_EMC1TX_IRQ,
     NPCM7XX_TIMER0_IRQ = 32,   /* Timer Module 0 */
     NPCM7XX_TIMER1_IRQ,
     NPCM7XX_TIMER2_IRQ,
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
     NPCM7XX_SMBUS15_IRQ,
     NPCM7XX_PWM0_IRQ = 93,     /* PWM module 0 */
     NPCM7XX_PWM1_IRQ,          /* PWM module 1 */
+    NPCM7XX_EMC2RX_IRQ = 114,
+    NPCM7XX_EMC2TX_IRQ,
     NPCM7XX_GPIO0_IRQ = 116,
     NPCM7XX_GPIO1_IRQ,
     NPCM7XX_GPIO2_IRQ,
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_smbus_addr[] = {
     0xf008f000,
 };
 
+/* Register base address for each EMC Module */
+static const hwaddr npcm7xx_emc_addr[] = {
+    0xf0825000,
+    0xf0826000,
+};
+
 static const struct {
     hwaddr regs_addr;
     uint32_t unconnected_pins;
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
     for (i = 0; i < ARRAY_SIZE(s->pwm); i++) {
         object_initialize_child(obj, "pwm[*]", &s->pwm[i], TYPE_NPCM7XX_PWM);
     }
+
+    for (i = 0; i < ARRAY_SIZE(s->emc); i++) {
+        object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
+    }
 }
 
 static void npcm7xx_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
         sysbus_connect_irq(sbd, i, npcm7xx_irq(s, NPCM7XX_PWM0_IRQ + i));
     }
 
+    /*
+     * EMC Modules. Cannot fail.
+     * The mapping of the device to its netdev backend works as follows:
+     *   emc[i] = nd_table[i]
+     * This works around the inability to specify the netdev property for the
+     * emc device: it's not pluggable and thus the -device option can't be
+     * used.
+     */
+    QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_emc_addr) != ARRAY_SIZE(s->emc));
+    QEMU_BUILD_BUG_ON(ARRAY_SIZE(s->emc) != 2);
+    for (i = 0; i < ARRAY_SIZE(s->emc); i++) {
+        s->emc[i].emc_num = i;
+        SysBusDevice *sbd = SYS_BUS_DEVICE(&s->emc[i]);
+        if (nd_table[i].used) {
+            qemu_check_nic_model(&nd_table[i], TYPE_NPCM7XX_EMC);
+            qdev_set_nic_properties(DEVICE(sbd), &nd_table[i]);
+        }
+        /*
+         * The device exists regardless of whether it's connected to a QEMU
+         * netdev backend. So always instantiate it even if there is no
+         * backend.
+         */
+        sysbus_realize(sbd, &error_abort);
+        sysbus_mmio_map(sbd, 0, npcm7xx_emc_addr[i]);
+        int tx_irq = i == 0 ? NPCM7XX_EMC1TX_IRQ : NPCM7XX_EMC2TX_IRQ;
+        int rx_irq = i == 0 ? NPCM7XX_EMC1RX_IRQ : NPCM7XX_EMC2RX_IRQ;
+        /*
+         * N.B. The values for the second argument sysbus_connect_irq are
+         * chosen to match the registration order in npcm7xx_emc_realize.
+         */
+        sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, tx_irq));
+        sysbus_connect_irq(sbd, 1, npcm7xx_irq(s, rx_irq));
+    }
+
     /*
      * Flash Interface Unit (FIU). Can fail if incorrect number of chip selects
      * specified, but this is a programming error.
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
     create_unimplemented_device("npcm7xx.vcd", 0xf0810000, 64 * KiB);
     create_unimplemented_device("npcm7xx.ece", 0xf0820000, 8 * KiB);
     create_unimplemented_device("npcm7xx.vdma", 0xf0822000, 8 * KiB);
-    create_unimplemented_device("npcm7xx.emc1", 0xf0825000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.emc2", 0xf0826000, 4 * KiB);
     create_unimplemented_device("npcm7xx.usbd[0]", 0xf0830000, 4 * KiB);
     create_unimplemented_device("npcm7xx.usbd[1]", 0xf0831000, 4 * KiB);
     create_unimplemented_device("npcm7xx.usbd[2]", 0xf0832000, 4 * KiB);
-- 
2.20.1


From: Patrick Venture <venture@google.com>

Add line item reference to quanta-gbs-bmc machine.

Signed-off-by: Patrick Venture <venture@google.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20210615192848.1065297-3-venture@google.com
[PMM: fixed underline Sphinx warning]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@
-Nuvoton iBMC boards (``npcm750-evb``, ``quanta-gsj``)
-=====================================================
+Nuvoton iBMC boards (``*-bmc``, ``npcm750-evb``, ``quanta-gsj``)
+================================================================
 
 The `Nuvoton iBMC`_ chips (NPCM7xx) are a family of ARM-based SoCs that are
 designed to be used as Baseboard Management Controllers (BMCs) in various
@@ -XXX,XX +XXX,XX @@ segment. The following machines are based on this chip :
 The NPCM730 SoC has two Cortex-A9 cores and is targeted for Data Center and
 Hyperscale applications. The following machines are based on this chip :
 
+- ``quanta-gbs-bmc``    Quanta GBS server BMC
 - ``quanta-gsj``        Quanta GSJ server BMC
 
 There are also two more SoCs, NPCM710 and NPCM705, which are single-core
-- 
2.20.1
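To make the memory map of the EMC wiring patch above concrete, here is a rough
libqtest-style sketch written for this archive; it is not the series' own
tests/qtest/npcm7xx_emc-test.c. It assumes the quanta-gsj machine, uses the
0xf0825000/0xf0826000 base addresses mapped above, and checks the TXDLSA reset
value (0xfffffffc) that the device model sets; TXDLSA is register index 0x22,
i.e. byte offset 0x88.

    /* Illustrative only: poke the two EMC instances at their mapped addresses. */
    #include "qemu/osdep.h"
    #include "libqtest.h"

    #define EMC0_BASE          0xf0825000
    #define EMC1_BASE          0xf0826000
    #define REG_TXDLSA_OFFSET  (0x22 * 4)   /* see the register enum in npcm7xx_emc.h */

    static void test_emc_txdlsa_reset(void)
    {
        QTestState *qts = qtest_init("-machine quanta-gsj");

        /* Both instances should expose the documented reset value. */
        g_assert_cmphex(qtest_readl(qts, EMC0_BASE + REG_TXDLSA_OFFSET), ==, 0xfffffffc);
        g_assert_cmphex(qtest_readl(qts, EMC1_BASE + REG_TXDLSA_OFFSET), ==, 0xfffffffc);

        qtest_quit(qts);
    }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);
        qtest_add_func("/npcm7xx_emc/txdlsa-reset", test_emc_txdlsa_reset);
        return g_test_run();
    }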
From: Doug Evans <dje@google.com>

This is a 10/100 ethernet device that has several features.
Only the ones needed by the Linux driver have been implemented.
See npcm7xx_emc.c for a list of unimplemented features.

Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Doug Evans <dje@google.com>
Message-id: 20210213002520.1374134-2-dje@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/npcm7xx_emc.h | 286 ++++++++++++
 hw/net/npcm7xx_emc.c | 857 +++++++++++++++++++++++++++++++++++
 hw/net/meson.build | 1 +
 hw/net/trace-events | 17 +
 4 files changed, 1161 insertions(+)
 create mode 100644 include/hw/net/npcm7xx_emc.h
 create mode 100644 hw/net/npcm7xx_emc.c


From: Nolan Leake <nolan@sigbus.net>

This is just enough to make reboot and poweroff work. Works for
linux, u-boot, and the arm trusted firmware. Not tested, but should
work for plan9, and bare-metal/hobby OSes, since they seem to generally
do what linux does for reset.

The watchdog timer functionality is not yet implemented.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/64
Signed-off-by: Nolan Leake <nolan@sigbus.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210625210209.1870217-1-nolan@sigbus.net
[PMM: tweaked commit title; fixed region size to 0x200;
 moved header file to include/]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/bcm2835_peripherals.h | 3 +-
 include/hw/misc/bcm2835_powermgt.h | 29 +++++
 hw/arm/bcm2835_peripherals.c | 13 ++-
 hw/misc/bcm2835_powermgt.c | 160 +++++++++++++++++++++++++++
 hw/misc/meson.build | 1 +
 5 files changed, 204 insertions(+), 2 deletions(-)
 create mode 100644 include/hw/misc/bcm2835_powermgt.h
 create mode 100644 hw/misc/bcm2835_powermgt.c
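For readers of the BCM2835 power-management patch above, the following
bare-metal sketch (written for this archive, not taken from the patch or from
Linux) shows the guest-side protocol the model implements. The 0x3f100000 base
address is an assumption for a Raspberry Pi 2-class peripheral layout; the
register offsets (RSTC 0x1c, RSTS 0x20, WDOG 0x24), the 0x5a000000 password,
and the "partition 63" (0x555) halt encoding mirror the emulation code.

    /* Illustrative bare-metal sketch of the reboot/poweroff protocol. */
    #include <stdint.h>

    #define PM_BASE        0x3f100000u   /* assumed raspi2 mapping of the PM block */
    #define PM_RSTC        (PM_BASE + 0x1c)
    #define PM_RSTS        (PM_BASE + 0x20)
    #define PM_WDOG        (PM_BASE + 0x24)
    #define PM_PASSWORD    0x5a000000u
    #define PM_RSTC_RESET  0x00000020u
    #define PM_RSTS_HALT   0x00000555u   /* partition-63 encoding used for halt */

    static inline void mmio_write(uintptr_t addr, uint32_t value)
    {
        *(volatile uint32_t *)addr = value;
    }

    static inline uint32_t mmio_read(uintptr_t addr)
    {
        return *(volatile uint32_t *)addr;
    }

    void board_reboot(void)
    {
        /* Short watchdog period, then a full reset request; the model acts on
         * the RSTC write, real hardware also needs the timeout. */
        mmio_write(PM_WDOG, PM_PASSWORD | 10);
        mmio_write(PM_RSTC, PM_PASSWORD | PM_RSTC_RESET);
    }

    void board_poweroff(void)
    {
        /* Ask to "reboot" into partition 63, which the model treats as halt. */
        uint32_t rsts = mmio_read(PM_RSTS) & ~0xfffu;
        mmio_write(PM_RSTS, PM_PASSWORD | rsts | PM_RSTS_HALT);
        board_reboot();
    }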
22
diff --git a/include/hw/net/npcm7xx_emc.h b/include/hw/net/npcm7xx_emc.h
27
28
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/include/hw/arm/bcm2835_peripherals.h
31
+++ b/include/hw/arm/bcm2835_peripherals.h
32
@@ -XXX,XX +XXX,XX @@
33
#include "hw/misc/bcm2835_mphi.h"
34
#include "hw/misc/bcm2835_thermal.h"
35
#include "hw/misc/bcm2835_cprman.h"
36
+#include "hw/misc/bcm2835_powermgt.h"
37
#include "hw/sd/sdhci.h"
38
#include "hw/sd/bcm2835_sdhost.h"
39
#include "hw/gpio/bcm2835_gpio.h"
40
@@ -XXX,XX +XXX,XX @@ struct BCM2835PeripheralState {
41
BCM2835MphiState mphi;
42
UnimplementedDeviceState txp;
43
UnimplementedDeviceState armtmr;
44
- UnimplementedDeviceState powermgt;
45
+ BCM2835PowerMgtState powermgt;
46
BCM2835CprmanState cprman;
47
PL011State uart0;
48
BCM2835AuxState aux;
49
diff --git a/include/hw/misc/bcm2835_powermgt.h b/include/hw/misc/bcm2835_powermgt.h
23
new file mode 100644
50
new file mode 100644
24
index XXXXXXX..XXXXXXX
51
index XXXXXXX..XXXXXXX
25
--- /dev/null
52
--- /dev/null
26
+++ b/include/hw/net/npcm7xx_emc.h
53
+++ b/include/hw/misc/bcm2835_powermgt.h
27
@@ -XXX,XX +XXX,XX @@
54
@@ -XXX,XX +XXX,XX @@
28
+/*
55
+/*
29
+ * Nuvoton NPCM7xx EMC Module
56
+ * BCM2835 Power Management emulation
30
+ *
57
+ *
31
+ * Copyright 2020 Google LLC
58
+ * Copyright (C) 2017 Marcin Chojnacki <marcinch7@gmail.com>
32
+ *
59
+ * Copyright (C) 2021 Nolan Leake <nolan@sigbus.net>
33
+ * This program is free software; you can redistribute it and/or modify it
60
+ *
34
+ * under the terms of the GNU General Public License as published by the
61
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
35
+ * Free Software Foundation; either version 2 of the License, or
62
+ * See the COPYING file in the top-level directory.
36
+ * (at your option) any later version.
37
+ *
38
+ * This program is distributed in the hope that it will be useful, but WITHOUT
39
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
40
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
41
+ * for more details.
42
+ */
63
+ */
43
+
64
+
44
+#ifndef NPCM7XX_EMC_H
65
+#ifndef BCM2835_POWERMGT_H
45
+#define NPCM7XX_EMC_H
66
+#define BCM2835_POWERMGT_H
46
+
67
+
47
+#include "hw/irq.h"
48
+#include "hw/sysbus.h"
68
+#include "hw/sysbus.h"
49
+#include "net/net.h"
69
+#include "qom/object.h"
50
+
70
+
51
+/* 32-bit register indices. */
71
+#define TYPE_BCM2835_POWERMGT "bcm2835-powermgt"
52
+enum NPCM7xxPWMRegister {
72
+OBJECT_DECLARE_SIMPLE_TYPE(BCM2835PowerMgtState, BCM2835_POWERMGT)
53
+ /* Control registers. */
73
+
54
+ REG_CAMCMR,
74
+struct BCM2835PowerMgtState {
55
+ REG_CAMEN,
75
+ SysBusDevice busdev;
56
+
57
+ /* There are 16 CAMn[ML] registers. */
58
+ REG_CAMM_BASE,
59
+ REG_CAML_BASE,
60
+ REG_CAMML_LAST = 0x21,
61
+
62
+ REG_TXDLSA = 0x22,
63
+ REG_RXDLSA,
64
+ REG_MCMDR,
65
+ REG_MIID,
66
+ REG_MIIDA,
67
+ REG_FFTCR,
68
+ REG_TSDR,
69
+ REG_RSDR,
70
+ REG_DMARFC,
71
+ REG_MIEN,
72
+
73
+ /* Status registers. */
74
+ REG_MISTA,
75
+ REG_MGSTA,
76
+ REG_MPCNT,
77
+ REG_MRPC,
78
+ REG_MRPCC,
79
+ REG_MREPC,
80
+ REG_DMARFS,
81
+ REG_CTXDSA,
82
+ REG_CTXBSA,
83
+ REG_CRXDSA,
84
+ REG_CRXBSA,
85
+
86
+ NPCM7XX_NUM_EMC_REGS,
87
+};
88
+
89
+/* REG_CAMCMR fields */
90
+/* Enable CAM Compare */
91
+#define REG_CAMCMR_ECMP (1 << 4)
92
+/* Complement CAM Compare */
93
+#define REG_CAMCMR_CCAM (1 << 3)
94
+/* Accept Broadcast Packet */
95
+#define REG_CAMCMR_ABP (1 << 2)
96
+/* Accept Multicast Packet */
97
+#define REG_CAMCMR_AMP (1 << 1)
98
+/* Accept Unicast Packet */
99
+#define REG_CAMCMR_AUP (1 << 0)
100
+
101
+/* REG_MCMDR fields */
102
+/* Software Reset */
103
+#define REG_MCMDR_SWR (1 << 24)
104
+/* Internal Loopback Select */
105
+#define REG_MCMDR_LBK (1 << 21)
106
+/* Operation Mode Select */
107
+#define REG_MCMDR_OPMOD (1 << 20)
108
+/* Enable MDC Clock Generation */
109
+#define REG_MCMDR_ENMDC (1 << 19)
110
+/* Full-Duplex Mode Select */
111
+#define REG_MCMDR_FDUP (1 << 18)
112
+/* Enable SQE Checking */
113
+#define REG_MCMDR_ENSEQ (1 << 17)
114
+/* Send PAUSE Frame */
115
+#define REG_MCMDR_SDPZ (1 << 16)
116
+/* No Defer */
117
+#define REG_MCMDR_NDEF (1 << 9)
118
+/* Frame Transmission On */
119
+#define REG_MCMDR_TXON (1 << 8)
120
+/* Strip CRC Checksum */
121
+#define REG_MCMDR_SPCRC (1 << 5)
122
+/* Accept CRC Error Packet */
123
+#define REG_MCMDR_AEP (1 << 4)
124
+/* Accept Control Packet */
125
+#define REG_MCMDR_ACP (1 << 3)
126
+/* Accept Runt Packet */
127
+#define REG_MCMDR_ARP (1 << 2)
128
+/* Accept Long Packet */
129
+#define REG_MCMDR_ALP (1 << 1)
130
+/* Frame Reception On */
131
+#define REG_MCMDR_RXON (1 << 0)
132
+
133
+/* REG_MIEN fields */
134
+/* Enable Transmit Descriptor Unavailable Interrupt */
135
+#define REG_MIEN_ENTDU (1 << 23)
136
+/* Enable Transmit Completion Interrupt */
137
+#define REG_MIEN_ENTXCP (1 << 18)
138
+/* Enable Transmit Interrupt */
139
+#define REG_MIEN_ENTXINTR (1 << 16)
140
+/* Enable Receive Descriptor Unavailable Interrupt */
141
+#define REG_MIEN_ENRDU (1 << 10)
142
+/* Enable Receive Good Interrupt */
143
+#define REG_MIEN_ENRXGD (1 << 4)
144
+/* Enable Receive Interrupt */
145
+#define REG_MIEN_ENRXINTR (1 << 0)
146
+
147
+/* REG_MISTA fields */
148
+/* TODO: Add error fields and support simulated errors? */
149
+/* Transmit Bus Error Interrupt */
150
+#define REG_MISTA_TXBERR (1 << 24)
151
+/* Transmit Descriptor Unavailable Interrupt */
152
+#define REG_MISTA_TDU (1 << 23)
153
+/* Transmit Completion Interrupt */
154
+#define REG_MISTA_TXCP (1 << 18)
155
+/* Transmit Interrupt */
156
+#define REG_MISTA_TXINTR (1 << 16)
157
+/* Receive Bus Error Interrupt */
158
+#define REG_MISTA_RXBERR (1 << 11)
159
+/* Receive Descriptor Unavailable Interrupt */
160
+#define REG_MISTA_RDU (1 << 10)
161
+/* DMA Early Notification Interrupt */
162
+#define REG_MISTA_DENI (1 << 9)
163
+/* Maximum Frame Length Interrupt */
164
+#define REG_MISTA_DFOI (1 << 8)
165
+/* Receive Good Interrupt */
166
+#define REG_MISTA_RXGD (1 << 4)
167
+/* Packet Too Long Interrupt */
168
+#define REG_MISTA_PTLE (1 << 3)
169
+/* Receive Interrupt */
170
+#define REG_MISTA_RXINTR (1 << 0)
171
+
172
+/* REG_MGSTA fields */
173
+/* Transmission Halted */
174
+#define REG_MGSTA_TXHA (1 << 11)
175
+/* Receive Halted */
176
+#define REG_MGSTA_RXHA (1 << 11)
177
+
178
+/* REG_DMARFC fields */
179
+/* Maximum Receive Frame Length */
180
+#define REG_DMARFC_RXMS(word) extract32((word), 0, 16)
181
+
182
+/* REG MIIDA fields */
183
+/* Busy Bit */
184
+#define REG_MIIDA_BUSY (1 << 17)
185
+
186
+/* Transmit and receive descriptors */
187
+typedef struct NPCM7xxEMCTxDesc NPCM7xxEMCTxDesc;
188
+typedef struct NPCM7xxEMCRxDesc NPCM7xxEMCRxDesc;
189
+
190
+struct NPCM7xxEMCTxDesc {
191
+ uint32_t flags;
192
+ uint32_t txbsa;
193
+ uint32_t status_and_length;
194
+ uint32_t ntxdsa;
195
+};
196
+
197
+struct NPCM7xxEMCRxDesc {
198
+ uint32_t status_and_length;
199
+ uint32_t rxbsa;
200
+ uint32_t reserved;
201
+ uint32_t nrxdsa;
202
+};
203
+
204
+/* NPCM7xxEMCTxDesc.flags values */
205
+/* Owner: 0 = cpu, 1 = emc */
206
+#define TX_DESC_FLAG_OWNER_MASK (1 << 31)
207
+/* Transmit interrupt enable */
208
+#define TX_DESC_FLAG_INTEN (1 << 2)
209
+/* CRC append */
210
+#define TX_DESC_FLAG_CRCAPP (1 << 1)
211
+/* Padding enable */
212
+#define TX_DESC_FLAG_PADEN (1 << 0)
213
+
214
+/* NPCM7xxEMCTxDesc.status_and_length values */
215
+/* Collision count */
216
+#define TX_DESC_STATUS_CCNT_SHIFT 28
217
+#define TX_DESC_STATUS_CCNT_BITSIZE 4
218
+/* SQE error */
219
+#define TX_DESC_STATUS_SQE (1 << 26)
220
+/* Transmission paused */
221
+#define TX_DESC_STATUS_PAU (1 << 25)
222
+/* P transmission halted */
223
+#define TX_DESC_STATUS_TXHA (1 << 24)
224
+/* Late collision */
225
+#define TX_DESC_STATUS_LC (1 << 23)
226
+/* Transmission abort */
227
+#define TX_DESC_STATUS_TXABT (1 << 22)
228
+/* No carrier sense */
229
+#define TX_DESC_STATUS_NCS (1 << 21)
230
+/* Defer exceed */
231
+#define TX_DESC_STATUS_EXDEF (1 << 20)
232
+/* Transmission complete */
233
+#define TX_DESC_STATUS_TXCP (1 << 19)
234
+/* Transmission deferred */
235
+#define TX_DESC_STATUS_DEF (1 << 17)
236
+/* Transmit interrupt */
237
+#define TX_DESC_STATUS_TXINTR (1 << 16)
238
+
239
+#define TX_DESC_PKT_LEN(word) extract32((word), 0, 16)
240
+
241
+/* Transmit buffer start address */
242
+#define TX_DESC_TXBSA(word) ((uint32_t) (word) & ~3u)
243
+
244
+/* Next transmit descriptor start address */
245
+#define TX_DESC_NTXDSA(word) ((uint32_t) (word) & ~3u)
246
+
247
+/* NPCM7xxEMCRxDesc.status_and_length values */
248
+/* Owner: 0b00 = cpu, 0b01 = undefined, 0b10 = emc, 0b11 = undefined */
249
+#define RX_DESC_STATUS_OWNER_SHIFT 30
250
+#define RX_DESC_STATUS_OWNER_BITSIZE 2
251
+#define RX_DESC_STATUS_OWNER_MASK (3 << RX_DESC_STATUS_OWNER_SHIFT)
252
+/* Runt packet */
253
+#define RX_DESC_STATUS_RP (1 << 22)
254
+/* Alignment error */
255
+#define RX_DESC_STATUS_ALIE (1 << 21)
256
+/* Frame reception complete */
257
+#define RX_DESC_STATUS_RXGD (1 << 20)
258
+/* Packet too long */
259
+#define RX_DESC_STATUS_PTLE (1 << 19)
260
+/* CRC error */
261
+#define RX_DESC_STATUS_CRCE (1 << 17)
262
+/* Receive interrupt */
263
+#define RX_DESC_STATUS_RXINTR (1 << 16)
264
+
265
+#define RX_DESC_PKT_LEN(word) extract32((word), 0, 16)
266
+
267
+/* Receive buffer start address */
268
+#define RX_DESC_RXBSA(word) ((uint32_t) (word) & ~3u)
269
+
270
+/* Next receive descriptor start address */
271
+#define RX_DESC_NRXDSA(word) ((uint32_t) (word) & ~3u)
272
+
273
+/* Minimum packet length, when TX_DESC_FLAG_PADEN is set. */
274
+#define MIN_PACKET_LENGTH 64
275
+
276
+struct NPCM7xxEMCState {
277
+ /*< private >*/
278
+ SysBusDevice parent;
279
+ /*< public >*/
280
+
281
+ MemoryRegion iomem;
76
+ MemoryRegion iomem;
282
+
77
+
283
+ qemu_irq tx_irq;
78
+ uint32_t rstc;
284
+ qemu_irq rx_irq;
79
+ uint32_t rsts;
285
+
80
+ uint32_t wdog;
286
+ NICState *nic;
81
+};
287
+ NICConf conf;
82
+
288
+
83
+#endif
289
+ /* 0 or 1, for log messages */
84
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
290
+ uint8_t emc_num;
85
index XXXXXXX..XXXXXXX 100644
291
+
86
--- a/hw/arm/bcm2835_peripherals.c
292
+ uint32_t regs[NPCM7XX_NUM_EMC_REGS];
87
+++ b/hw/arm/bcm2835_peripherals.c
293
+
88
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
294
+ /*
89
295
+ * tx is active. Set to true by TSDR and then switches off when out of
90
object_property_add_const_link(OBJECT(&s->dwc2), "dma-mr",
296
+ * descriptors. If the TXON bit in REG_MCMDR is off then this is off.
91
OBJECT(&s->gpu_bus_mr));
297
+ */
92
+
298
+ bool tx_active;
93
+ /* Power Management */
299
+
94
+ object_initialize_child(obj, "powermgt", &s->powermgt,
300
+ /*
95
+ TYPE_BCM2835_POWERMGT);
301
+ * rx is active. Set to true by RSDR and then switches off when out of
96
}
302
+ * descriptors. If the RXON bit in REG_MCMDR is off then this is off.
97
303
+ */
98
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
304
+ bool rx_active;
99
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
305
+};
100
qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
306
+
101
INTERRUPT_USB));
307
+typedef struct NPCM7xxEMCState NPCM7xxEMCState;
102
308
+
103
+ /* Power Management */
309
+#define TYPE_NPCM7XX_EMC "npcm7xx-emc"
104
+ if (!sysbus_realize(SYS_BUS_DEVICE(&s->powermgt), errp)) {
310
+#define NPCM7XX_EMC(obj) \
105
+ return;
311
+ OBJECT_CHECK(NPCM7xxEMCState, (obj), TYPE_NPCM7XX_EMC)
106
+ }
312
+
107
+
313
+#endif /* NPCM7XX_EMC_H */
108
+ memory_region_add_subregion(&s->peri_mr, PM_OFFSET,
314
diff --git a/hw/net/npcm7xx_emc.c b/hw/net/npcm7xx_emc.c
109
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->powermgt), 0));
110
+
111
create_unimp(s, &s->txp, "bcm2835-txp", TXP_OFFSET, 0x1000);
112
create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
113
- create_unimp(s, &s->powermgt, "bcm2835-powermgt", PM_OFFSET, 0x114);
114
create_unimp(s, &s->i2s, "bcm2835-i2s", I2S_OFFSET, 0x100);
115
create_unimp(s, &s->smi, "bcm2835-smi", SMI_OFFSET, 0x100);
116
create_unimp(s, &s->spi[0], "bcm2835-spi0", SPI0_OFFSET, 0x20);
117
diff --git a/hw/misc/bcm2835_powermgt.c b/hw/misc/bcm2835_powermgt.c
315
new file mode 100644
118
new file mode 100644
316
index XXXXXXX..XXXXXXX
119
index XXXXXXX..XXXXXXX
317
--- /dev/null
120
--- /dev/null
318
+++ b/hw/net/npcm7xx_emc.c
121
+++ b/hw/misc/bcm2835_powermgt.c
319
@@ -XXX,XX +XXX,XX @@
122
@@ -XXX,XX +XXX,XX @@
320
+/*
123
+/*
321
+ * Nuvoton NPCM7xx EMC Module
124
+ * BCM2835 Power Management emulation
322
+ *
125
+ *
323
+ * Copyright 2020 Google LLC
126
+ * Copyright (C) 2017 Marcin Chojnacki <marcinch7@gmail.com>
324
+ *
127
+ * Copyright (C) 2021 Nolan Leake <nolan@sigbus.net>
325
+ * This program is free software; you can redistribute it and/or modify it
128
+ *
326
+ * under the terms of the GNU General Public License as published by the
129
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
327
+ * Free Software Foundation; either version 2 of the License, or
130
+ * See the COPYING file in the top-level directory.
328
+ * (at your option) any later version.
329
+ *
330
+ * This program is distributed in the hope that it will be useful, but WITHOUT
331
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
332
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
333
+ * for more details.
334
+ *
335
+ * Unsupported/unimplemented features:
336
+ * - MCMDR.FDUP (full duplex) is ignored, half duplex is not supported
337
+ * - Only CAM0 is supported, CAM[1-15] are not
338
+ * - writes to CAMEN.[1-15] are ignored, these bits always read as zeroes
339
+ * - MII is not implemented, MIIDA.BUSY and MIID always return zero
340
+ * - MCMDR.LBK is not implemented
341
+ * - MCMDR.{OPMOD,ENSQE,AEP,ARP} are not supported
342
+ * - H/W FIFOs are not supported, MCMDR.FFTCR is ignored
343
+ * - MGSTA.SQE is not supported
344
+ * - pause and control frames are not implemented
345
+ * - MGSTA.CCNT is not supported
346
+ * - MPCNT, DMARFS are not implemented
347
+ */
131
+ */
348
+
132
+
349
+#include "qemu/osdep.h"
133
+#include "qemu/osdep.h"
350
+
351
+/* For crc32 */
352
+#include <zlib.h>
353
+
354
+#include "qemu-common.h"
355
+#include "hw/irq.h"
356
+#include "hw/qdev-clock.h"
357
+#include "hw/qdev-properties.h"
358
+#include "hw/net/npcm7xx_emc.h"
359
+#include "net/eth.h"
360
+#include "migration/vmstate.h"
361
+#include "qemu/bitops.h"
362
+#include "qemu/error-report.h"
363
+#include "qemu/log.h"
134
+#include "qemu/log.h"
364
+#include "qemu/module.h"
135
+#include "qemu/module.h"
365
+#include "qemu/units.h"
136
+#include "hw/misc/bcm2835_powermgt.h"
366
+#include "sysemu/dma.h"
137
+#include "migration/vmstate.h"
367
+#include "trace.h"
138
+#include "sysemu/runstate.h"
368
+
139
+
369
+#define CRC_LENGTH 4
140
+#define PASSWORD 0x5a000000
370
+
141
+#define PASSWORD_MASK 0xff000000
371
+/*
142
+
372
+ * The maximum size of a (layer 2) ethernet frame as defined by 802.3.
143
+#define R_RSTC 0x1c
373
+ * 1518 = 6(dest macaddr) + 6(src macaddr) + 2(proto) + 4(crc) + 1500(payload)
144
+#define V_RSTC_RESET 0x20
374
+ * This does not include an additional 4 for the vlan field (802.1q).
145
+#define R_RSTS 0x20
375
+ */
146
+#define V_RSTS_POWEROFF 0x555 /* Linux uses partition 63 to indicate halt. */
376
+#define MAX_ETH_FRAME_SIZE 1518
147
+#define R_WDOG 0x24
377
+
148
+
378
+static const char *emc_reg_name(int regno)
149
+static uint64_t bcm2835_powermgt_read(void *opaque, hwaddr offset,
379
+{
150
+ unsigned size)
380
+#define REG(name) case REG_ ## name: return #name;
151
+{
381
+ switch (regno) {
152
+ BCM2835PowerMgtState *s = (BCM2835PowerMgtState *)opaque;
382
+ REG(CAMCMR)
153
+ uint32_t res = 0;
383
+ REG(CAMEN)
154
+
384
+ REG(TXDLSA)
155
+ switch (offset) {
385
+ REG(RXDLSA)
156
+ case R_RSTC:
386
+ REG(MCMDR)
157
+ res = s->rstc;
387
+ REG(MIID)
158
+ break;
388
+ REG(MIIDA)
159
+ case R_RSTS:
389
+ REG(FFTCR)
160
+ res = s->rsts;
390
+ REG(TSDR)
161
+ break;
391
+ REG(RSDR)
162
+ case R_WDOG:
392
+ REG(DMARFC)
163
+ res = s->wdog;
393
+ REG(MIEN)
164
+ break;
394
+ REG(MISTA)
165
+
395
+ REG(MGSTA)
166
+ default:
396
+ REG(MPCNT)
167
+ qemu_log_mask(LOG_UNIMP,
397
+ REG(MRPC)
168
+ "bcm2835_powermgt_read: Unknown offset 0x%08"HWADDR_PRIx
398
+ REG(MRPCC)
169
+ "\n", offset);
399
+ REG(MREPC)
170
+ res = 0;
400
+ REG(DMARFS)
171
+ break;
401
+ REG(CTXDSA)
172
+ }
402
+ REG(CTXBSA)
173
+
403
+ REG(CRXDSA)
174
+ return res;
404
+ REG(CRXBSA)
175
+}
405
+ case REG_CAMM_BASE + 0: return "CAM0M";
176
+
406
+ case REG_CAML_BASE + 0: return "CAM0L";
177
+static void bcm2835_powermgt_write(void *opaque, hwaddr offset,
407
+ case REG_CAMM_BASE + 2 ... REG_CAMML_LAST:
178
+ uint64_t value, unsigned size)
408
+ /* Only CAM0 is supported, fold the others into something simple. */
179
+{
409
+ if (regno & 1) {
180
+ BCM2835PowerMgtState *s = (BCM2835PowerMgtState *)opaque;
410
+ return "CAM<n>L";
181
+
411
+ } else {
182
+ if ((value & PASSWORD_MASK) != PASSWORD) {
412
+ return "CAM<n>M";
183
+ qemu_log_mask(LOG_GUEST_ERROR,
413
+ }
184
+ "bcm2835_powermgt_write: Bad password 0x%"PRIx64
414
+ default: return "UNKNOWN";
185
+ " at offset 0x%08"HWADDR_PRIx"\n",
415
+ }
186
+ value, offset);
416
+#undef REG
417
+}
418
+
419
+static void emc_reset(NPCM7xxEMCState *emc)
420
+{
421
+ trace_npcm7xx_emc_reset(emc->emc_num);
422
+
423
+ memset(&emc->regs[0], 0, sizeof(emc->regs));
424
+
425
+ /* These regs have non-zero reset values. */
426
+ emc->regs[REG_TXDLSA] = 0xfffffffc;
427
+ emc->regs[REG_RXDLSA] = 0xfffffffc;
428
+ emc->regs[REG_MIIDA] = 0x00900000;
429
+ emc->regs[REG_FFTCR] = 0x0101;
430
+ emc->regs[REG_DMARFC] = 0x0800;
431
+ emc->regs[REG_MPCNT] = 0x7fff;
432
+
433
+ emc->tx_active = false;
434
+ emc->rx_active = false;
435
+}
436
+
437
+static void npcm7xx_emc_reset(DeviceState *dev)
438
+{
439
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
440
+ emc_reset(emc);
441
+}
442
+
443
+static void emc_soft_reset(NPCM7xxEMCState *emc)
444
+{
445
+ /*
446
+ * The docs say at least MCMDR.{LBK,OPMOD} bits are not changed during a
447
+ * soft reset, but does not go into further detail. For now, KISS.
448
+ */
449
+ uint32_t mcmdr = emc->regs[REG_MCMDR];
450
+ emc_reset(emc);
451
+ emc->regs[REG_MCMDR] = mcmdr & (REG_MCMDR_LBK | REG_MCMDR_OPMOD);
452
+
453
+ qemu_set_irq(emc->tx_irq, 0);
454
+ qemu_set_irq(emc->rx_irq, 0);
455
+}
456
+
457
+static void emc_set_link(NetClientState *nc)
458
+{
459
+ /* Nothing to do yet. */
460
+}
461
+
462
+/* MISTA.TXINTR is the union of the individual bits with their enables. */
463
+static void emc_update_mista_txintr(NPCM7xxEMCState *emc)
464
+{
465
+ /* Only look at the bits we support. */
466
+ uint32_t mask = (REG_MISTA_TXBERR |
467
+ REG_MISTA_TDU |
468
+ REG_MISTA_TXCP);
469
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & mask) {
470
+ emc->regs[REG_MISTA] |= REG_MISTA_TXINTR;
471
+ } else {
472
+ emc->regs[REG_MISTA] &= ~REG_MISTA_TXINTR;
473
+ }
474
+}
475
+
476
+/* MISTA.RXINTR is the union of the individual bits with their enables. */
477
+static void emc_update_mista_rxintr(NPCM7xxEMCState *emc)
478
+{
479
+ /* Only look at the bits we support. */
480
+ uint32_t mask = (REG_MISTA_RXBERR |
481
+ REG_MISTA_RDU |
482
+ REG_MISTA_RXGD);
483
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & mask) {
484
+ emc->regs[REG_MISTA] |= REG_MISTA_RXINTR;
485
+ } else {
486
+ emc->regs[REG_MISTA] &= ~REG_MISTA_RXINTR;
487
+ }
488
+}
489
+
490
+/* N.B. emc_update_mista_txintr must have already been called. */
491
+static void emc_update_tx_irq(NPCM7xxEMCState *emc)
492
+{
493
+ int level = !!(emc->regs[REG_MISTA] &
494
+ emc->regs[REG_MIEN] &
495
+ REG_MISTA_TXINTR);
496
+ trace_npcm7xx_emc_update_tx_irq(level);
497
+ qemu_set_irq(emc->tx_irq, level);
498
+}
499
+
500
+/* N.B. emc_update_mista_rxintr must have already been called. */
501
+static void emc_update_rx_irq(NPCM7xxEMCState *emc)
502
+{
503
+ int level = !!(emc->regs[REG_MISTA] &
504
+ emc->regs[REG_MIEN] &
505
+ REG_MISTA_RXINTR);
506
+ trace_npcm7xx_emc_update_rx_irq(level);
507
+ qemu_set_irq(emc->rx_irq, level);
508
+}
509
+
510
+/* Update IRQ states due to changes in MIEN,MISTA. */
511
+static void emc_update_irq_from_reg_change(NPCM7xxEMCState *emc)
512
+{
513
+ emc_update_mista_txintr(emc);
514
+ emc_update_tx_irq(emc);
515
+
516
+ emc_update_mista_rxintr(emc);
517
+ emc_update_rx_irq(emc);
518
+}
519
+
520
+static int emc_read_tx_desc(dma_addr_t addr, NPCM7xxEMCTxDesc *desc)
521
+{
522
+ if (dma_memory_read(&address_space_memory, addr, desc, sizeof(*desc))) {
523
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read descriptor @ 0x%"
524
+ HWADDR_PRIx "\n", __func__, addr);
525
+ return -1;
526
+ }
527
+ desc->flags = le32_to_cpu(desc->flags);
528
+ desc->txbsa = le32_to_cpu(desc->txbsa);
529
+ desc->status_and_length = le32_to_cpu(desc->status_and_length);
530
+ desc->ntxdsa = le32_to_cpu(desc->ntxdsa);
531
+ return 0;
532
+}
533
+
534
+static int emc_write_tx_desc(const NPCM7xxEMCTxDesc *desc, dma_addr_t addr)
535
+{
536
+ NPCM7xxEMCTxDesc le_desc;
537
+
538
+ le_desc.flags = cpu_to_le32(desc->flags);
539
+ le_desc.txbsa = cpu_to_le32(desc->txbsa);
540
+ le_desc.status_and_length = cpu_to_le32(desc->status_and_length);
541
+ le_desc.ntxdsa = cpu_to_le32(desc->ntxdsa);
542
+ if (dma_memory_write(&address_space_memory, addr, &le_desc,
543
+ sizeof(le_desc))) {
544
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to write descriptor @ 0x%"
545
+ HWADDR_PRIx "\n", __func__, addr);
546
+ return -1;
547
+ }
548
+ return 0;
549
+}
550
+
551
+static int emc_read_rx_desc(dma_addr_t addr, NPCM7xxEMCRxDesc *desc)
552
+{
553
+ if (dma_memory_read(&address_space_memory, addr, desc, sizeof(*desc))) {
554
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read descriptor @ 0x%"
555
+ HWADDR_PRIx "\n", __func__, addr);
556
+ return -1;
557
+ }
558
+ desc->status_and_length = le32_to_cpu(desc->status_and_length);
559
+ desc->rxbsa = le32_to_cpu(desc->rxbsa);
560
+ desc->reserved = le32_to_cpu(desc->reserved);
561
+ desc->nrxdsa = le32_to_cpu(desc->nrxdsa);
562
+ return 0;
563
+}
564
+
565
+static int emc_write_rx_desc(const NPCM7xxEMCRxDesc *desc, dma_addr_t addr)
566
+{
567
+ NPCM7xxEMCRxDesc le_desc;
568
+
569
+ le_desc.status_and_length = cpu_to_le32(desc->status_and_length);
570
+ le_desc.rxbsa = cpu_to_le32(desc->rxbsa);
571
+ le_desc.reserved = cpu_to_le32(desc->reserved);
572
+ le_desc.nrxdsa = cpu_to_le32(desc->nrxdsa);
573
+ if (dma_memory_write(&address_space_memory, addr, &le_desc,
574
+ sizeof(le_desc))) {
575
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to write descriptor @ 0x%"
576
+ HWADDR_PRIx "\n", __func__, addr);
577
+ return -1;
578
+ }
579
+ return 0;
580
+}
581
+
582
+static void emc_set_mista(NPCM7xxEMCState *emc, uint32_t flags)
583
+{
584
+ trace_npcm7xx_emc_set_mista(flags);
585
+ emc->regs[REG_MISTA] |= flags;
586
+ if (extract32(flags, 16, 16)) {
587
+ emc_update_mista_txintr(emc);
588
+ }
589
+ if (extract32(flags, 0, 16)) {
590
+ emc_update_mista_rxintr(emc);
591
+ }
592
+}
593
+
594
+static void emc_halt_tx(NPCM7xxEMCState *emc, uint32_t mista_flag)
595
+{
596
+ emc->tx_active = false;
597
+ emc_set_mista(emc, mista_flag);
598
+}
599
+
600
+static void emc_halt_rx(NPCM7xxEMCState *emc, uint32_t mista_flag)
601
+{
602
+ emc->rx_active = false;
603
+ emc_set_mista(emc, mista_flag);
604
+}
605
+
606
+static void emc_set_next_tx_descriptor(NPCM7xxEMCState *emc,
607
+ const NPCM7xxEMCTxDesc *tx_desc,
608
+ uint32_t desc_addr)
609
+{
610
+ /* Update the current descriptor, if only to reset the owner flag. */
611
+ if (emc_write_tx_desc(tx_desc, desc_addr)) {
612
+ /*
613
+ * We just read it so this shouldn't generally happen.
614
+ * Error already reported.
615
+ */
616
+ emc_set_mista(emc, REG_MISTA_TXBERR);
617
+ }
618
+ emc->regs[REG_CTXDSA] = TX_DESC_NTXDSA(tx_desc->ntxdsa);
619
+}
620
+
621
+static void emc_set_next_rx_descriptor(NPCM7xxEMCState *emc,
622
+ const NPCM7xxEMCRxDesc *rx_desc,
623
+ uint32_t desc_addr)
624
+{
625
+ /* Update the current descriptor, if only to reset the owner flag. */
626
+ if (emc_write_rx_desc(rx_desc, desc_addr)) {
627
+ /*
628
+ * We just read it so this shouldn't generally happen.
629
+ * Error already reported.
630
+ */
631
+ emc_set_mista(emc, REG_MISTA_RXBERR);
632
+ }
633
+ emc->regs[REG_CRXDSA] = RX_DESC_NRXDSA(rx_desc->nrxdsa);
634
+}
635
+
636
+static void emc_try_send_next_packet(NPCM7xxEMCState *emc)
637
+{
638
+ /* Working buffer for sending out packets. Most packets fit in this. */
639
+#define TX_BUFFER_SIZE 2048
640
+ uint8_t tx_send_buffer[TX_BUFFER_SIZE];
641
+ uint32_t desc_addr = TX_DESC_NTXDSA(emc->regs[REG_CTXDSA]);
642
+ NPCM7xxEMCTxDesc tx_desc;
643
+ uint32_t next_buf_addr, length;
644
+ uint8_t *buf;
645
+ g_autofree uint8_t *malloced_buf = NULL;
646
+
647
+ if (emc_read_tx_desc(desc_addr, &tx_desc)) {
648
+ /* Error reading descriptor, already reported. */
649
+ emc_halt_tx(emc, REG_MISTA_TXBERR);
650
+ emc_update_tx_irq(emc);
651
+ return;
187
+ return;
652
+ }
188
+ }
653
+
189
+
654
+ /* Nothing we can do if we don't own the descriptor. */
190
+ value = value & ~PASSWORD_MASK;
655
+ if (!(tx_desc.flags & TX_DESC_FLAG_OWNER_MASK)) {
191
+
656
+ trace_npcm7xx_emc_cpu_owned_desc(desc_addr);
192
+ switch (offset) {
657
+ emc_halt_tx(emc, REG_MISTA_TDU);
193
+ case R_RSTC:
658
+ emc_update_tx_irq(emc);
194
+ s->rstc = value;
659
+ return;
195
+ if (value & V_RSTC_RESET) {
660
+ }
196
+ if ((s->rsts & 0xfff) == V_RSTS_POWEROFF) {
661
+
197
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
662
+ /* Give the descriptor back regardless of what happens. */
198
+ } else {
663
+ tx_desc.flags &= ~TX_DESC_FLAG_OWNER_MASK;
199
+ qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
664
+ tx_desc.status_and_length &= 0xffff;
665
+
666
+ /*
667
+ * Despite the h/w documentation saying the tx buffer is word aligned,
668
+ * the linux driver does not word align the buffer. There is value in not
669
+ * aligning the buffer: See the description of NET_IP_ALIGN in linux
670
+ * kernel sources.
671
+ */
672
+ next_buf_addr = tx_desc.txbsa;
673
+ emc->regs[REG_CTXBSA] = next_buf_addr;
674
+ length = TX_DESC_PKT_LEN(tx_desc.status_and_length);
675
+ buf = &tx_send_buffer[0];
676
+
677
+ if (length > sizeof(tx_send_buffer)) {
678
+ malloced_buf = g_malloc(length);
679
+ buf = malloced_buf;
680
+ }
681
+
682
+ if (dma_memory_read(&address_space_memory, next_buf_addr, buf, length)) {
683
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read packet @ 0x%x\n",
684
+ __func__, next_buf_addr);
685
+ emc_set_mista(emc, REG_MISTA_TXBERR);
686
+ emc_set_next_tx_descriptor(emc, &tx_desc, desc_addr);
687
+ emc_update_tx_irq(emc);
688
+ trace_npcm7xx_emc_tx_done(emc->regs[REG_CTXDSA]);
689
+ return;
690
+ }
691
+
692
+ if ((tx_desc.flags & TX_DESC_FLAG_PADEN) && (length < MIN_PACKET_LENGTH)) {
693
+ memset(buf + length, 0, MIN_PACKET_LENGTH - length);
694
+ length = MIN_PACKET_LENGTH;
695
+ }
696
+
697
+ /* N.B. emc_receive can get called here. */
698
+ qemu_send_packet(qemu_get_queue(emc->nic), buf, length);
699
+ trace_npcm7xx_emc_sent_packet(length);
700
+
701
+ tx_desc.status_and_length |= TX_DESC_STATUS_TXCP;
702
+ if (tx_desc.flags & TX_DESC_FLAG_INTEN) {
703
+ emc_set_mista(emc, REG_MISTA_TXCP);
704
+ }
705
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & REG_MISTA_TXINTR) {
706
+ tx_desc.status_and_length |= TX_DESC_STATUS_TXINTR;
707
+ }
708
+
709
+ emc_set_next_tx_descriptor(emc, &tx_desc, desc_addr);
710
+ emc_update_tx_irq(emc);
711
+ trace_npcm7xx_emc_tx_done(emc->regs[REG_CTXDSA]);
712
+}
713
+
714
+static bool emc_can_receive(NetClientState *nc)
715
+{
716
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(qemu_get_nic_opaque(nc));
717
+
718
+ bool can_receive = emc->rx_active;
719
+ trace_npcm7xx_emc_can_receive(can_receive);
720
+ return can_receive;
721
+}
722
+
723
+/* If result is false then *fail_reason contains the reason. */
724
+static bool emc_receive_filter1(NPCM7xxEMCState *emc, const uint8_t *buf,
725
+ size_t len, const char **fail_reason)
726
+{
727
+ eth_pkt_types_e pkt_type = get_eth_packet_type(PKT_GET_ETH_HDR(buf));
728
+
729
+ switch (pkt_type) {
730
+ case ETH_PKT_BCAST:
731
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
732
+ return true;
733
+ } else {
734
+ *fail_reason = "Broadcast packet disabled";
735
+ return !!(emc->regs[REG_CAMCMR] & REG_CAMCMR_ABP);
736
+ }
737
+ case ETH_PKT_MCAST:
738
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
739
+ return true;
740
+ } else {
741
+ *fail_reason = "Multicast packet disabled";
742
+ return !!(emc->regs[REG_CAMCMR] & REG_CAMCMR_AMP);
743
+ }
744
+ case ETH_PKT_UCAST: {
745
+ bool matches;
746
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_AUP) {
747
+ return true;
748
+ }
749
+ matches = ((emc->regs[REG_CAMCMR] & REG_CAMCMR_ECMP) &&
750
+ /* We only support one CAM register, CAM0. */
751
+ (emc->regs[REG_CAMEN] & (1 << 0)) &&
752
+ memcmp(buf, emc->conf.macaddr.a, ETH_ALEN) == 0);
753
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
754
+ *fail_reason = "MACADDR matched, comparison complemented";
755
+ return !matches;
756
+ } else {
757
+ *fail_reason = "MACADDR didn't match";
758
+ return matches;
759
+ }
760
+ }
761
+ default:
762
+ g_assert_not_reached();
763
+ }
764
+}
765
+
766
+static bool emc_receive_filter(NPCM7xxEMCState *emc, const uint8_t *buf,
767
+ size_t len)
768
+{
769
+ const char *fail_reason = NULL;
770
+ bool ok = emc_receive_filter1(emc, buf, len, &fail_reason);
771
+ if (!ok) {
772
+ trace_npcm7xx_emc_packet_filtered_out(fail_reason);
773
+ }
774
+ return ok;
775
+}
776
+
777
+static ssize_t emc_receive(NetClientState *nc, const uint8_t *buf, size_t len1)
778
+{
779
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(qemu_get_nic_opaque(nc));
780
+ const uint32_t len = len1;
781
+ size_t max_frame_len;
782
+ bool long_frame;
783
+ uint32_t desc_addr;
784
+ NPCM7xxEMCRxDesc rx_desc;
785
+ uint32_t crc;
786
+ uint8_t *crc_ptr;
787
+ uint32_t buf_addr;
788
+
789
+ trace_npcm7xx_emc_receiving_packet(len);
790
+
791
+ if (!emc_can_receive(nc)) {
792
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Unexpected packet\n", __func__);
793
+ return -1;
794
+ }
795
+
796
+ if (len < ETH_HLEN ||
797
+ /* Defensive programming: drop unsupportable large packets. */
798
+ len > 0xffff - CRC_LENGTH) {
799
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Dropped frame of %u bytes\n",
800
+ __func__, len);
801
+ return len;
802
+ }
803
+
804
+ /*
805
+ * DENI is set if EMC received the Length/Type field of the incoming
806
+ * packet, so it will be set regardless of what happens next.
807
+ */
808
+ emc_set_mista(emc, REG_MISTA_DENI);
809
+
810
+ if (!emc_receive_filter(emc, buf, len)) {
811
+ emc_update_rx_irq(emc);
812
+ return len;
813
+ }
814
+
815
+ /* Huge frames (> DMARFC) are dropped. */
816
+ max_frame_len = REG_DMARFC_RXMS(emc->regs[REG_DMARFC]);
817
+ if (len + CRC_LENGTH > max_frame_len) {
818
+ trace_npcm7xx_emc_packet_dropped(len);
819
+ emc_set_mista(emc, REG_MISTA_DFOI);
820
+ emc_update_rx_irq(emc);
821
+ return len;
822
+ }
823
+
824
+ /*
825
+ * Long Frames (> MAX_ETH_FRAME_SIZE) are also dropped, unless MCMDR.ALP
826
+ * is set.
827
+ */
828
+ long_frame = false;
829
+ if (len + CRC_LENGTH > MAX_ETH_FRAME_SIZE) {
830
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_ALP) {
831
+ long_frame = true;
832
+ } else {
833
+ trace_npcm7xx_emc_packet_dropped(len);
834
+ emc_set_mista(emc, REG_MISTA_PTLE);
835
+ emc_update_rx_irq(emc);
836
+ return len;
837
+ }
838
+ }
839
+
840
+ desc_addr = RX_DESC_NRXDSA(emc->regs[REG_CRXDSA]);
841
+ if (emc_read_rx_desc(desc_addr, &rx_desc)) {
842
+ /* Error reading descriptor, already reported. */
843
+ emc_halt_rx(emc, REG_MISTA_RXBERR);
844
+ emc_update_rx_irq(emc);
845
+ return len;
846
+ }
847
+
848
+ /* Nothing we can do if we don't own the descriptor. */
849
+ if (!(rx_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK)) {
850
+ trace_npcm7xx_emc_cpu_owned_desc(desc_addr);
851
+ emc_halt_rx(emc, REG_MISTA_RDU);
852
+ emc_update_rx_irq(emc);
853
+ return len;
854
+ }
855
+
856
+ crc = 0;
857
+ crc_ptr = (uint8_t *) &crc;
858
+ if (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC)) {
859
+ crc = cpu_to_be32(crc32(~0, buf, len));
860
+ }
861
+
862
+ /* Give the descriptor back regardless of what happens. */
863
+ rx_desc.status_and_length &= ~RX_DESC_STATUS_OWNER_MASK;
864
+
865
+ buf_addr = rx_desc.rxbsa;
866
+ emc->regs[REG_CRXBSA] = buf_addr;
867
+ if (dma_memory_write(&address_space_memory, buf_addr, buf, len) ||
868
+ (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC) &&
869
+ dma_memory_write(&address_space_memory, buf_addr + len, crc_ptr,
870
+ 4))) {
871
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bus error writing packet\n",
872
+ __func__);
873
+ emc_set_mista(emc, REG_MISTA_RXBERR);
874
+ emc_set_next_rx_descriptor(emc, &rx_desc, desc_addr);
875
+ emc_update_rx_irq(emc);
876
+ trace_npcm7xx_emc_rx_done(emc->regs[REG_CRXDSA]);
877
+ return len;
878
+ }
879
+
880
+ trace_npcm7xx_emc_received_packet(len);
881
+
882
+ /* Note: We've already verified len+4 <= 0xffff. */
883
+ rx_desc.status_and_length = len;
884
+ if (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC)) {
885
+ rx_desc.status_and_length += 4;
886
+ }
887
+ rx_desc.status_and_length |= RX_DESC_STATUS_RXGD;
888
+ emc_set_mista(emc, REG_MISTA_RXGD);
889
+
890
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & REG_MISTA_RXINTR) {
891
+ rx_desc.status_and_length |= RX_DESC_STATUS_RXINTR;
892
+ }
893
+ if (long_frame) {
894
+ rx_desc.status_and_length |= RX_DESC_STATUS_PTLE;
895
+ }
896
+
897
+ emc_set_next_rx_descriptor(emc, &rx_desc, desc_addr);
898
+ emc_update_rx_irq(emc);
899
+ trace_npcm7xx_emc_rx_done(emc->regs[REG_CRXDSA]);
900
+ return len;
901
+}
902
+
903
+static void emc_try_receive_next_packet(NPCM7xxEMCState *emc)
904
+{
905
+ if (emc_can_receive(qemu_get_queue(emc->nic))) {
906
+ qemu_flush_queued_packets(qemu_get_queue(emc->nic));
907
+ }
908
+}
909
+
910
+static uint64_t npcm7xx_emc_read(void *opaque, hwaddr offset, unsigned size)
911
+{
912
+ NPCM7xxEMCState *emc = opaque;
913
+ uint32_t reg = offset / sizeof(uint32_t);
914
+ uint32_t result;
915
+
916
+ if (reg >= NPCM7XX_NUM_EMC_REGS) {
917
+ qemu_log_mask(LOG_GUEST_ERROR,
918
+ "%s: Invalid offset 0x%04" HWADDR_PRIx "\n",
919
+ __func__, offset);
920
+ return 0;
921
+ }
922
+
923
+ switch (reg) {
924
+ case REG_MIID:
925
+ /*
926
+ * We don't implement MII. For determinism, always return zero as
927
+ * writes record the last value written for debugging purposes.
928
+ */
929
+ qemu_log_mask(LOG_UNIMP, "%s: Read of MIID, returning 0\n", __func__);
930
+ result = 0;
931
+ break;
932
+ case REG_TSDR:
933
+ case REG_RSDR:
934
+ qemu_log_mask(LOG_GUEST_ERROR,
935
+ "%s: Read of write-only reg, %s/%d\n",
936
+ __func__, emc_reg_name(reg), reg);
937
+ return 0;
938
+ default:
939
+ result = emc->regs[reg];
940
+ break;
941
+ }
942
+
943
+ trace_npcm7xx_emc_reg_read(emc->emc_num, result, emc_reg_name(reg), reg);
944
+ return result;
945
+}
946
+
947
+static void npcm7xx_emc_write(void *opaque, hwaddr offset,
948
+ uint64_t v, unsigned size)
949
+{
950
+ NPCM7xxEMCState *emc = opaque;
951
+ uint32_t reg = offset / sizeof(uint32_t);
952
+ uint32_t value = v;
953
+
954
+ g_assert(size == sizeof(uint32_t));
955
+
956
+ if (reg >= NPCM7XX_NUM_EMC_REGS) {
957
+ qemu_log_mask(LOG_GUEST_ERROR,
958
+ "%s: Invalid offset 0x%04" HWADDR_PRIx "\n",
959
+ __func__, offset);
960
+ return;
961
+ }
962
+
963
+ trace_npcm7xx_emc_reg_write(emc->emc_num, emc_reg_name(reg), reg, value);
964
+
965
+ switch (reg) {
966
+ case REG_CAMCMR:
967
+ emc->regs[reg] = value;
968
+ break;
969
+ case REG_CAMEN:
970
+ /* Only CAM0 is supported, don't pretend otherwise. */
971
+ if (value & ~1) {
972
+ qemu_log_mask(LOG_GUEST_ERROR,
973
+ "%s: Only CAM0 is supported, cannot enable others"
974
+ ": 0x%x\n",
975
+ __func__, value);
976
+ }
977
+ emc->regs[reg] = value & 1;
978
+ break;
979
+ case REG_CAMM_BASE + 0:
980
+ emc->regs[reg] = value;
981
+ emc->conf.macaddr.a[0] = value >> 24;
982
+ emc->conf.macaddr.a[1] = value >> 16;
983
+ emc->conf.macaddr.a[2] = value >> 8;
984
+ emc->conf.macaddr.a[3] = value >> 0;
985
+ break;
986
+ case REG_CAML_BASE + 0:
987
+ emc->regs[reg] = value;
988
+ emc->conf.macaddr.a[4] = value >> 24;
989
+ emc->conf.macaddr.a[5] = value >> 16;
990
+ break;
991
+ case REG_MCMDR: {
992
+ uint32_t prev;
993
+ if (value & REG_MCMDR_SWR) {
994
+ emc_soft_reset(emc);
995
+ /* On h/w the reset happens over multiple cycles. For now KISS. */
996
+ break;
997
+ }
998
+ prev = emc->regs[reg];
999
+ emc->regs[reg] = value;
1000
+ /* Update tx state. */
1001
+ if (!(prev & REG_MCMDR_TXON) &&
1002
+ (value & REG_MCMDR_TXON)) {
1003
+ emc->regs[REG_CTXDSA] = emc->regs[REG_TXDLSA];
1004
+ /*
1005
+ * Linux kernel turns TX on with CPU still holding descriptor,
1006
+ * which suggests we should wait for a write to TSDR before trying
1007
+ * to send a packet: so we don't send one here.
1008
+ */
1009
+ } else if ((prev & REG_MCMDR_TXON) &&
1010
+ !(value & REG_MCMDR_TXON)) {
1011
+ emc->regs[REG_MGSTA] |= REG_MGSTA_TXHA;
1012
+ }
1013
+ if (!(value & REG_MCMDR_TXON)) {
1014
+ emc_halt_tx(emc, 0);
1015
+ }
1016
+ /* Update rx state. */
1017
+ if (!(prev & REG_MCMDR_RXON) &&
1018
+ (value & REG_MCMDR_RXON)) {
1019
+ emc->regs[REG_CRXDSA] = emc->regs[REG_RXDLSA];
1020
+ } else if ((prev & REG_MCMDR_RXON) &&
1021
+ !(value & REG_MCMDR_RXON)) {
1022
+ emc->regs[REG_MGSTA] |= REG_MGSTA_RXHA;
1023
+ }
1024
+ if (!(value & REG_MCMDR_RXON)) {
1025
+ emc_halt_rx(emc, 0);
1026
+ }
1027
+ break;
1028
+ }
1029
+ case REG_TXDLSA:
1030
+ case REG_RXDLSA:
1031
+ case REG_DMARFC:
1032
+ case REG_MIID:
1033
+ emc->regs[reg] = value;
1034
+ break;
1035
+ case REG_MIEN:
1036
+ emc->regs[reg] = value;
1037
+ emc_update_irq_from_reg_change(emc);
1038
+ break;
1039
+ case REG_MISTA:
1040
+ /* Clear the bits that have 1 in "value". */
1041
+ emc->regs[reg] &= ~value;
1042
+ emc_update_irq_from_reg_change(emc);
1043
+ break;
1044
+ case REG_MGSTA:
1045
+ /* Clear the bits that have 1 in "value". */
1046
+ emc->regs[reg] &= ~value;
1047
+ break;
1048
+ case REG_TSDR:
1049
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_TXON) {
1050
+ emc->tx_active = true;
1051
+ /* Keep trying to send packets until we run out. */
1052
+ while (emc->tx_active) {
1053
+ emc_try_send_next_packet(emc);
1054
+ }
200
+ }
1055
+ }
201
+ }
1056
+ break;
202
+ break;
1057
+ case REG_RSDR:
203
+ case R_RSTS:
1058
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_RXON) {
204
+ qemu_log_mask(LOG_UNIMP,
1059
+ emc->rx_active = true;
205
+ "bcm2835_powermgt_write: RSTS\n");
1060
+ emc_try_receive_next_packet(emc);
206
+ s->rsts = value;
1061
+ }
207
+ break;
1062
+ break;
208
+ case R_WDOG:
1063
+ case REG_MIIDA:
209
+ qemu_log_mask(LOG_UNIMP,
1064
+ emc->regs[reg] = value & ~REG_MIIDA_BUSY;
210
+ "bcm2835_powermgt_write: WDOG\n");
1065
+ break;
211
+ s->wdog = value;
1066
+ case REG_MRPC:
212
+ break;
1067
+ case REG_MRPCC:
213
+
1068
+ case REG_MREPC:
1069
+ case REG_CTXDSA:
1070
+ case REG_CTXBSA:
1071
+ case REG_CRXDSA:
1072
+ case REG_CRXBSA:
1073
+ qemu_log_mask(LOG_GUEST_ERROR,
1074
+ "%s: Write to read-only reg %s/%d\n",
1075
+ __func__, emc_reg_name(reg), reg);
1076
+ break;
1077
+ default:
214
+ default:
1078
+ qemu_log_mask(LOG_UNIMP, "%s: Write to unimplemented reg %s/%d\n",
215
+ qemu_log_mask(LOG_UNIMP,
1079
+ __func__, emc_reg_name(reg), reg);
216
+ "bcm2835_powermgt_write: Unknown offset 0x%08"HWADDR_PRIx
1080
+ break;
217
+ "\n", offset);
1081
+ }
218
+ break;
1082
+}
219
+ }
1083
+
220
+}
1084
+static const struct MemoryRegionOps npcm7xx_emc_ops = {
221
+
1085
+ .read = npcm7xx_emc_read,
222
+static const MemoryRegionOps bcm2835_powermgt_ops = {
1086
+ .write = npcm7xx_emc_write,
223
+ .read = bcm2835_powermgt_read,
1087
+ .endianness = DEVICE_LITTLE_ENDIAN,
224
+ .write = bcm2835_powermgt_write,
1088
+ .valid = {
225
+ .endianness = DEVICE_NATIVE_ENDIAN,
1089
+ .min_access_size = 4,
226
+ .impl.min_access_size = 4,
1090
+ .max_access_size = 4,
227
+ .impl.max_access_size = 4,
1091
+ .unaligned = false,
228
+};
1092
+ },
229
+
1093
+};
230
+static const VMStateDescription vmstate_bcm2835_powermgt = {
1094
+
231
+ .name = TYPE_BCM2835_POWERMGT,
1095
+static void emc_cleanup(NetClientState *nc)
232
+ .version_id = 1,
1096
+{
233
+ .minimum_version_id = 1,
1097
+ /* Nothing to do yet. */
1098
+}
1099
+
1100
+static NetClientInfo net_npcm7xx_emc_info = {
1101
+ .type = NET_CLIENT_DRIVER_NIC,
1102
+ .size = sizeof(NICState),
1103
+ .can_receive = emc_can_receive,
1104
+ .receive = emc_receive,
1105
+ .cleanup = emc_cleanup,
1106
+ .link_status_changed = emc_set_link,
1107
+};
1108
+
1109
+static void npcm7xx_emc_realize(DeviceState *dev, Error **errp)
1110
+{
1111
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
1112
+ SysBusDevice *sbd = SYS_BUS_DEVICE(emc);
1113
+
1114
+ memory_region_init_io(&emc->iomem, OBJECT(emc), &npcm7xx_emc_ops, emc,
1115
+ TYPE_NPCM7XX_EMC, 4 * KiB);
1116
+ sysbus_init_mmio(sbd, &emc->iomem);
1117
+ sysbus_init_irq(sbd, &emc->tx_irq);
1118
+ sysbus_init_irq(sbd, &emc->rx_irq);
1119
+
1120
+ qemu_macaddr_default_if_unset(&emc->conf.macaddr);
1121
+ emc->nic = qemu_new_nic(&net_npcm7xx_emc_info, &emc->conf,
1122
+ object_get_typename(OBJECT(dev)), dev->id, emc);
1123
+ qemu_format_nic_info_str(qemu_get_queue(emc->nic), emc->conf.macaddr.a);
1124
+}
1125
+
1126
+static void npcm7xx_emc_unrealize(DeviceState *dev)
1127
+{
1128
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
1129
+
1130
+ qemu_del_nic(emc->nic);
1131
+}
1132
+
1133
+static const VMStateDescription vmstate_npcm7xx_emc = {
1134
+ .name = TYPE_NPCM7XX_EMC,
1135
+ .version_id = 0,
1136
+ .minimum_version_id = 0,
1137
+ .fields = (VMStateField[]) {
234
+ .fields = (VMStateField[]) {
1138
+ VMSTATE_UINT8(emc_num, NPCM7xxEMCState),
235
+ VMSTATE_UINT32(rstc, BCM2835PowerMgtState),
1139
+ VMSTATE_UINT32_ARRAY(regs, NPCM7xxEMCState, NPCM7XX_NUM_EMC_REGS),
236
+ VMSTATE_UINT32(rsts, BCM2835PowerMgtState),
1140
+ VMSTATE_BOOL(tx_active, NPCM7xxEMCState),
237
+ VMSTATE_UINT32(wdog, BCM2835PowerMgtState),
1141
+ VMSTATE_BOOL(rx_active, NPCM7xxEMCState),
238
+ VMSTATE_END_OF_LIST()
1142
+ VMSTATE_END_OF_LIST(),
239
+ }
1143
+ },
240
+};
1144
+};
241
+
1145
+
242
+static void bcm2835_powermgt_init(Object *obj)
1146
+static Property npcm7xx_emc_properties[] = {
243
+{
1147
+ DEFINE_NIC_PROPERTIES(NPCM7xxEMCState, conf),
244
+ BCM2835PowerMgtState *s = BCM2835_POWERMGT(obj);
1148
+ DEFINE_PROP_END_OF_LIST(),
245
+
1149
+};
246
+ memory_region_init_io(&s->iomem, obj, &bcm2835_powermgt_ops, s,
1150
+
247
+ TYPE_BCM2835_POWERMGT, 0x200);
1151
+static void npcm7xx_emc_class_init(ObjectClass *klass, void *data)
248
+ sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->iomem);
249
+}
250
+
251
+static void bcm2835_powermgt_reset(DeviceState *dev)
252
+{
253
+ BCM2835PowerMgtState *s = BCM2835_POWERMGT(dev);
254
+
255
+ /* https://elinux.org/BCM2835_registers#PM */
256
+ s->rstc = 0x00000102;
257
+ s->rsts = 0x00001000;
258
+ s->wdog = 0x00000000;
259
+}
260
+
261
+static void bcm2835_powermgt_class_init(ObjectClass *klass, void *data)
1152
+{
262
+{
1153
+ DeviceClass *dc = DEVICE_CLASS(klass);
263
+ DeviceClass *dc = DEVICE_CLASS(klass);
1154
+
264
+
1155
+ set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
265
+ dc->reset = bcm2835_powermgt_reset;
1156
+ dc->desc = "NPCM7xx EMC Controller";
266
+ dc->vmsd = &vmstate_bcm2835_powermgt;
1157
+ dc->realize = npcm7xx_emc_realize;
267
+}
1158
+ dc->unrealize = npcm7xx_emc_unrealize;
268
+
1159
+ dc->reset = npcm7xx_emc_reset;
269
+static TypeInfo bcm2835_powermgt_info = {
1160
+ dc->vmsd = &vmstate_npcm7xx_emc;
270
+ .name = TYPE_BCM2835_POWERMGT,
1161
+ device_class_set_props(dc, npcm7xx_emc_properties);
271
+ .parent = TYPE_SYS_BUS_DEVICE,
1162
+}
272
+ .instance_size = sizeof(BCM2835PowerMgtState),
1163
+
273
+ .class_init = bcm2835_powermgt_class_init,
1164
+static const TypeInfo npcm7xx_emc_info = {
274
+ .instance_init = bcm2835_powermgt_init,
1165
+ .name = TYPE_NPCM7XX_EMC,
275
+};
1166
+ .parent = TYPE_SYS_BUS_DEVICE,
276
+
1167
+ .instance_size = sizeof(NPCM7xxEMCState),
277
+static void bcm2835_powermgt_register_types(void)
1168
+ .class_init = npcm7xx_emc_class_init,
278
+{
1169
+};
279
+ type_register_static(&bcm2835_powermgt_info);
1170
+
280
+}
1171
+static void npcm7xx_emc_register_type(void)
281
+
1172
+{
282
+type_init(bcm2835_powermgt_register_types)
1173
+ type_register_static(&npcm7xx_emc_info);
283
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
1174
+}
1175
+
1176
+type_init(npcm7xx_emc_register_type)
1177
diff --git a/hw/net/meson.build b/hw/net/meson.build
1178
index XXXXXXX..XXXXXXX 100644
284
index XXXXXXX..XXXXXXX 100644
1179
--- a/hw/net/meson.build
285
--- a/hw/misc/meson.build
1180
+++ b/hw/net/meson.build
286
+++ b/hw/misc/meson.build
1181
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_I82596_COMMON', if_true: files('i82596.c'))
287
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_RASPI', if_true: files(
1182
softmmu_ss.add(when: 'CONFIG_SUNHME', if_true: files('sunhme.c'))
288
'bcm2835_rng.c',
1183
softmmu_ss.add(when: 'CONFIG_FTGMAC100', if_true: files('ftgmac100.c'))
289
'bcm2835_thermal.c',
1184
softmmu_ss.add(when: 'CONFIG_SUNGEM', if_true: files('sungem.c'))
290
'bcm2835_cprman.c',
1185
+softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_emc.c'))
291
+ 'bcm2835_powermgt.c',
1186
292
))
1187
softmmu_ss.add(when: 'CONFIG_ETRAXFS', if_true: files('etraxfs_eth.c'))
293
softmmu_ss.add(when: 'CONFIG_SLAVIO', if_true: files('slavio_misc.c'))
1188
softmmu_ss.add(when: 'CONFIG_COLDFIRE', if_true: files('mcf_fec.c'))
294
softmmu_ss.add(when: 'CONFIG_ZYNQ', if_true: files('zynq_slcr.c', 'zynq-xadc.c'))
1189
diff --git a/hw/net/trace-events b/hw/net/trace-events
1190
index XXXXXXX..XXXXXXX 100644
1191
--- a/hw/net/trace-events
1192
+++ b/hw/net/trace-events
1193
@@ -XXX,XX +XXX,XX @@ imx_fec_receive_last(int last) "rx frame flags 0x%04x"
1194
imx_enet_receive(size_t size) "len %zu"
1195
imx_enet_receive_len(uint64_t addr, int len) "rx_bd 0x%"PRIx64" length %d"
1196
imx_enet_receive_last(int last) "rx frame flags 0x%04x"
1197
+
1198
+# npcm7xx_emc.c
1199
+npcm7xx_emc_reset(int emc_num) "Resetting emc%d"
1200
+npcm7xx_emc_update_tx_irq(int level) "Setting tx irq to %d"
1201
+npcm7xx_emc_update_rx_irq(int level) "Setting rx irq to %d"
1202
+npcm7xx_emc_set_mista(uint32_t flags) "ORing 0x%x into MISTA"
1203
+npcm7xx_emc_cpu_owned_desc(uint32_t addr) "Can't process cpu-owned descriptor @0x%x"
1204
+npcm7xx_emc_sent_packet(uint32_t len) "Sent %u byte packet"
1205
+npcm7xx_emc_tx_done(uint32_t ctxdsa) "TX done, CTXDSA=0x%x"
1206
+npcm7xx_emc_can_receive(int can_receive) "Can receive: %d"
1207
+npcm7xx_emc_packet_filtered_out(const char* fail_reason) "Packet filtered out: %s"
1208
+npcm7xx_emc_packet_dropped(uint32_t len) "%u byte packet dropped"
1209
+npcm7xx_emc_receiving_packet(uint32_t len) "Receiving %u byte packet"
1210
+npcm7xx_emc_received_packet(uint32_t len) "Received %u byte packet"
1211
+npcm7xx_emc_rx_done(uint32_t crxdsa) "RX done, CRXDSA=0x%x"
1212
+npcm7xx_emc_reg_read(int emc_num, uint32_t result, const char *name, int regno) "emc%d: 0x%x = reg[%s/%d]"
1213
+npcm7xx_emc_reg_write(int emc_num, const char *name, int regno, uint32_t value) "emc%d: reg[%s/%d] = 0x%x"
1214
--
2.20.1
1217
298
diff view generated by jsdifflib
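
The most intricate part of the EMC model above is the receive path's frame-length handling: a frame is dropped outright (DFOI) when it would exceed the guest-programmed DMARFC limit, and frames above the normal Ethernet maximum are only accepted, flagged PTLE, when MCMDR.ALP is set. A minimal standalone sketch of just that classification follows; the verdict names are invented here and the 1518-byte maximum is assumed rather than taken from the model's header.

    #include <stdbool.h>
    #include <stdint.h>

    #define CRC_LENGTH         4     /* FCS bytes accounted for on receive */
    #define MAX_ETH_FRAME_SIZE 1518  /* assumed standard Ethernet maximum */

    enum rx_verdict { RX_ACCEPT, RX_ACCEPT_LONG, RX_DROP_HUGE, RX_DROP_LONG };

    /*
     * Mirror of the two size checks in emc_receive(): anything over the
     * DMARFC limit is dropped; anything over the Ethernet maximum is
     * dropped too, unless the guest enabled "accept long packets" (ALP),
     * in which case it is kept and flagged as a long frame.
     */
    static enum rx_verdict classify_rx_len(uint32_t len, uint32_t dmarfc_limit,
                                           bool accept_long_packets)
    {
        if (len + CRC_LENGTH > dmarfc_limit) {
            return RX_DROP_HUGE;
        }
        if (len + CRC_LENGTH > MAX_ETH_FRAME_SIZE) {
            return accept_long_packets ? RX_ACCEPT_LONG : RX_DROP_LONG;
        }
        return RX_ACCEPT;
    }
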
1
From: Luc Michel <luc@lmichel.fr>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Also add Damien as a reviewer.
3
Add a test that boots and quickly shuts down a raspi2 machine,
4
to test the power management model:
4
5
5
Signed-off-by: Luc Michel <luc@lmichel.fr>
6
(1/1) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_raspi2_initrd:
6
Acked-by: Damien Hedde <damien.hedde@greensocs.com>
7
console: [ 0.000000] Booting Linux on physical CPU 0xf00
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
console: [ 0.000000] Linux version 4.14.98-v7+ (dom@dom-XPS-13-9370) (gcc version 4.9.3 (crosstool-NG crosstool-ng-1.22.0-88-g8460611)) #1200 SMP Tue Feb 12 20:27:48 GMT 2019
8
Message-id: 20210211085318.2507-1-luc@lmichel.fr
9
console: [ 0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=10c5387d
10
console: [ 0.000000] CPU: div instructions available: patching division code
11
console: [ 0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
12
console: [ 0.000000] OF: fdt: Machine model: Raspberry Pi 2 Model B
13
...
14
console: Boot successful.
15
console: cat /proc/cpuinfo
16
console: / # cat /proc/cpuinfo
17
...
18
console: processor : 3
19
console: model name : ARMv7 Processor rev 5 (v7l)
20
console: BogoMIPS : 125.00
21
console: Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
22
console: CPU implementer : 0x41
23
console: CPU architecture: 7
24
console: CPU variant : 0x0
25
console: CPU part : 0xc07
26
console: CPU revision : 5
27
console: Hardware : BCM2835
28
console: Revision : 0000
29
console: Serial : 0000000000000000
30
console: cat /proc/iomem
31
console: / # cat /proc/iomem
32
console: 00000000-3bffffff : System RAM
33
console: 00008000-00afffff : Kernel code
34
console: 00c00000-00d468ef : Kernel data
35
console: 3f006000-3f006fff : dwc_otg
36
console: 3f007000-3f007eff : /soc/dma@7e007000
37
console: 3f00b880-3f00b8bf : /soc/mailbox@7e00b880
38
console: 3f100000-3f100027 : /soc/watchdog@7e100000
39
console: 3f101000-3f102fff : /soc/cprman@7e101000
40
console: 3f200000-3f2000b3 : /soc/gpio@7e200000
41
PASS (24.59 s)
42
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
43
JOB TIME : 25.02 s
44
45
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
46
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
47
Message-id: 20210531113837.1689775-1-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
48
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
49
---
11
MAINTAINERS | 11 +++++++++++
50
tests/acceptance/boot_linux_console.py | 43 ++++++++++++++++++++++++++
12
1 file changed, 11 insertions(+)
51
1 file changed, 43 insertions(+)
13
52
14
diff --git a/MAINTAINERS b/MAINTAINERS
53
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
15
index XXXXXXX..XXXXXXX 100644
54
index XXXXXXX..XXXXXXX 100644
16
--- a/MAINTAINERS
55
--- a/tests/acceptance/boot_linux_console.py
17
+++ b/MAINTAINERS
56
+++ b/tests/acceptance/boot_linux_console.py
18
@@ -XXX,XX +XXX,XX @@ F: pc-bios/opensbi-*
57
@@ -XXX,XX +XXX,XX @@
19
F: .gitlab-ci.d/opensbi.yml
58
from avocado import skip
20
F: .gitlab-ci.d/opensbi/
59
from avocado import skipUnless
21
60
from avocado_qemu import Test
22
+Clock framework
61
+from avocado_qemu import exec_command
23
+M: Luc Michel <luc@lmichel.fr>
62
from avocado_qemu import exec_command_and_wait_for_pattern
24
+R: Damien Hedde <damien.hedde@greensocs.com>
63
from avocado_qemu import interrupt_interactive_console_until_pattern
25
+S: Maintained
64
from avocado_qemu import wait_for_console_pattern
26
+F: include/hw/clock.h
65
@@ -XXX,XX +XXX,XX @@ def test_arm_raspi2_uart0(self):
27
+F: include/hw/qdev-clock.h
66
"""
28
+F: hw/core/clock.c
67
self.do_test_arm_raspi2(0)
29
+F: hw/core/clock-vmstate.c
68
30
+F: hw/core/qdev-clock.c
69
+ def test_arm_raspi2_initrd(self):
31
+F: docs/devel/clocks.rst
70
+ """
71
+ :avocado: tags=arch:arm
72
+ :avocado: tags=machine:raspi2
73
+ """
74
+ deb_url = ('http://archive.raspberrypi.org/debian/'
75
+ 'pool/main/r/raspberrypi-firmware/'
76
+ 'raspberrypi-kernel_1.20190215-1_armhf.deb')
77
+ deb_hash = 'cd284220b32128c5084037553db3c482426f3972'
78
+ deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
79
+ kernel_path = self.extract_from_deb(deb_path, '/boot/kernel7.img')
80
+ dtb_path = self.extract_from_deb(deb_path, '/boot/bcm2709-rpi-2-b.dtb')
32
+
81
+
33
Usermode Emulation
82
+ initrd_url = ('https://github.com/groeck/linux-build-test/raw/'
34
------------------
83
+ '2eb0a73b5d5a28df3170c546ddaaa9757e1e0848/rootfs/'
35
Overall usermode emulation
84
+ 'arm/rootfs-armv7a.cpio.gz')
85
+ initrd_hash = '604b2e45cdf35045846b8bbfbf2129b1891bdc9c'
86
+ initrd_path_gz = self.fetch_asset(initrd_url, asset_hash=initrd_hash)
87
+ initrd_path = os.path.join(self.workdir, 'rootfs.cpio')
88
+ archive.gzip_uncompress(initrd_path_gz, initrd_path)
89
+
90
+ self.vm.set_console()
91
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
92
+ 'earlycon=pl011,0x3f201000 console=ttyAMA0 '
93
+ 'panic=-1 noreboot ' +
94
+ 'dwc_otg.fiq_fsm_enable=0')
95
+ self.vm.add_args('-kernel', kernel_path,
96
+ '-dtb', dtb_path,
97
+ '-initrd', initrd_path,
98
+ '-append', kernel_command_line,
99
+ '-no-reboot')
100
+ self.vm.launch()
101
+ self.wait_for_console_pattern('Boot successful.')
102
+
103
+ exec_command_and_wait_for_pattern(self, 'cat /proc/cpuinfo',
104
+ 'BCM2835')
105
+ exec_command_and_wait_for_pattern(self, 'cat /proc/iomem',
106
+ '/soc/cprman@7e101000')
107
+ exec_command(self, 'halt')
108
+ # Wait for VM to shut down gracefully
109
+ self.vm.wait()
110
+
111
def test_arm_exynos4210_initrd(self):
112
"""
113
:avocado: tags=arch:arm
36
--
2.20.1
39
117
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Joe Komlodi <joe.komlodi@xilinx.com>
2
2
3
Use g2h_untagged in contexts that have no cpu, e.g. the binary
3
If the CPU is running in default NaN mode (FPCR.DN == 1) and we execute
4
loaders that operate before the primary cpu is created. As a
4
FRSQRTE, FRECPE, or FRECPX with a signaling NaN, parts_silence_nan_frac() will
5
corollary, target_mmap and friends must use untagged addresses,
5
assert due to fpst->default_nan_mode being set.
6
since they are used by the loaders.
7
6
8
Use g2h_untagged on values returned from target_mmap, as the
7
To avoid this, we check to see what NaN mode we're running in before we call
9
kernel never applies a tag itself.
8
floatxx_silence_nan().
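
(Aside, not part of the series: the guard Joe describes has roughly the following shape, written against QEMU's softfloat helpers from "fpu/softfloat.h"; the wrapper name silence_if_needed is invented here for illustration.)

    /*
     * Quieten a signaling NaN operand by hand only when FPCR.DN is clear;
     * in default-NaN mode softfloat substitutes the default NaN itself,
     * so calling the silencing helper there would trip the assertion.
     */
    static float32 silence_if_needed(float32 f, float_status *fpst)
    {
        if (float32_is_signaling_nan(f, fpst) && !fpst->default_nan_mode) {
            f = float32_silence_nan(f, fpst);
        }
        return f;
    }
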
10
9
11
Use g2h_untagged on all pc values. The only current user of
10
Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
12
tags, aarch64, removes tags from code addresses upon branch,
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
so "pc" is always untagged.
12
Message-id: 1624662174-175828-2-git-send-email-joe.komlodi@xilinx.com
14
15
Use g2h with the cpu context on hand wherever possible.
16
17
Use g2h_untagged in lock_user, which will be updated soon.
18
19
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20210212184902.1251044-13-richard.henderson@linaro.org
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
---
15
---
24
bsd-user/qemu.h | 8 ++--
16
target/arm/helper-a64.c | 12 +++++++++---
25
include/exec/cpu_ldst.h | 12 +++++-
17
target/arm/vfp_helper.c | 24 ++++++++++++++++++------
26
include/exec/exec-all.h | 2 +-
18
2 files changed, 27 insertions(+), 9 deletions(-)
27
linux-user/qemu.h | 6 +--
28
accel/tcg/translate-all.c | 4 +-
29
accel/tcg/user-exec.c | 48 ++++++++++++------------
30
bsd-user/elfload.c | 2 +-
31
bsd-user/main.c | 4 +-
32
bsd-user/mmap.c | 23 ++++++------
33
linux-user/elfload.c | 12 +++---
34
linux-user/flatload.c | 2 +-
35
linux-user/hppa/cpu_loop.c | 31 ++++++++--------
36
linux-user/i386/cpu_loop.c | 4 +-
37
linux-user/mmap.c | 45 +++++++++++-----------
38
linux-user/ppc/signal.c | 4 +-
39
linux-user/syscall.c | 72 +++++++++++++++++++-----------------
40
target/arm/helper-a64.c | 4 +-
41
target/hppa/op_helper.c | 2 +-
42
target/i386/tcg/mem_helper.c | 2 +-
43
target/s390x/mem_helper.c | 4 +-
44
20 files changed, 154 insertions(+), 137 deletions(-)
45
19
46
diff --git a/bsd-user/qemu.h b/bsd-user/qemu.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/bsd-user/qemu.h
49
+++ b/bsd-user/qemu.h
50
@@ -XXX,XX +XXX,XX @@ static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy
51
void *addr;
52
addr = g_malloc(len);
53
if (copy)
54
- memcpy(addr, g2h(guest_addr), len);
55
+ memcpy(addr, g2h_untagged(guest_addr), len);
56
else
57
memset(addr, 0, len);
58
return addr;
59
}
60
#else
61
- return g2h(guest_addr);
62
+ return g2h_untagged(guest_addr);
63
#endif
64
}
65
66
@@ -XXX,XX +XXX,XX @@ static inline void unlock_user(void *host_ptr, abi_ulong guest_addr,
67
#ifdef DEBUG_REMAP
68
if (!host_ptr)
69
return;
70
- if (host_ptr == g2h(guest_addr))
71
+ if (host_ptr == g2h_untagged(guest_addr))
72
return;
73
if (len > 0)
74
- memcpy(g2h(guest_addr), host_ptr, len);
75
+ memcpy(g2h_untagged(guest_addr), host_ptr, len);
76
g_free(host_ptr);
77
#endif
78
}
79
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
80
index XXXXXXX..XXXXXXX 100644
81
--- a/include/exec/cpu_ldst.h
82
+++ b/include/exec/cpu_ldst.h
83
@@ -XXX,XX +XXX,XX @@ static inline abi_ptr cpu_untagged_addr(CPUState *cs, abi_ptr x)
84
#endif
85
86
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
87
-#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
88
+static inline void *g2h_untagged(abi_ptr x)
89
+{
90
+ return (void *)((uintptr_t)(x) + guest_base);
91
+}
92
+
93
+static inline void *g2h(CPUState *cs, abi_ptr x)
94
+{
95
+ return g2h_untagged(cpu_untagged_addr(cs, x));
96
+}
97
98
static inline bool guest_addr_valid(abi_ulong x)
99
{
100
@@ -XXX,XX +XXX,XX @@ static inline int cpu_ldsw_code(CPUArchState *env, abi_ptr addr)
101
static inline void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
102
MMUAccessType access_type, int mmu_idx)
103
{
104
- return g2h(addr);
105
+ return g2h(env_cpu(env), addr);
106
}
107
#else
108
void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
109
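
(Aside, not part of the patch: the split between the two helpers is easiest to see in a toy form. The tag layout below, metadata in the top byte of a 64-bit guest address, is only an example and not the Arm TBI/MTE encoding; the function names are invented here.)

    #include <stdint.h>

    /* Example only: strip metadata carried in the top byte of a guest address. */
    static uint64_t untag_example(uint64_t guest_addr)
    {
        return guest_addr & 0x00ffffffffffffffull;
    }

    /* Host pointer for a possibly-tagged guest data address (cf. g2h). */
    static void *g2h_example(uintptr_t guest_base, uint64_t guest_addr)
    {
        return (void *)(uintptr_t)(untag_example(guest_addr) + guest_base);
    }

    /* Host pointer for an address already known to be untagged, e.g. code
     * addresses or loader-produced mappings (cf. g2h_untagged). */
    static void *g2h_untagged_example(uintptr_t guest_base, uint64_t guest_addr)
    {
        return (void *)(uintptr_t)(guest_addr + guest_base);
    }
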
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
110
index XXXXXXX..XXXXXXX 100644
111
--- a/include/exec/exec-all.h
112
+++ b/include/exec/exec-all.h
113
@@ -XXX,XX +XXX,XX @@ static inline tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env,
114
void **hostp)
115
{
116
if (hostp) {
117
- *hostp = g2h(addr);
118
+ *hostp = g2h_untagged(addr);
119
}
120
return addr;
121
}
122
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
123
index XXXXXXX..XXXXXXX 100644
124
--- a/linux-user/qemu.h
125
+++ b/linux-user/qemu.h
126
@@ -XXX,XX +XXX,XX @@ static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy
127
return addr;
128
}
129
#else
130
- return g2h(guest_addr);
131
+ return g2h_untagged(guest_addr);
132
#endif
133
}
134
135
@@ -XXX,XX +XXX,XX @@ static inline void unlock_user(void *host_ptr, abi_ulong guest_addr,
136
#ifdef DEBUG_REMAP
137
if (!host_ptr)
138
return;
139
- if (host_ptr == g2h(guest_addr))
140
+ if (host_ptr == g2h_untagged(guest_addr))
141
return;
142
if (len > 0)
143
- memcpy(g2h(guest_addr), host_ptr, len);
144
+ memcpy(g2h_untagged(guest_addr), host_ptr, len);
145
g_free(host_ptr);
146
#endif
147
}
148
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
149
index XXXXXXX..XXXXXXX 100644
150
--- a/accel/tcg/translate-all.c
151
+++ b/accel/tcg/translate-all.c
152
@@ -XXX,XX +XXX,XX @@ static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
153
prot |= p2->flags;
154
p2->flags &= ~PAGE_WRITE;
155
}
156
- mprotect(g2h(page_addr), qemu_host_page_size,
157
+ mprotect(g2h_untagged(page_addr), qemu_host_page_size,
158
(prot & PAGE_BITS) & ~PAGE_WRITE);
159
if (DEBUG_TB_INVALIDATE_GATE) {
160
printf("protecting code page: 0x" TB_PAGE_ADDR_FMT "\n", page_addr);
161
@@ -XXX,XX +XXX,XX @@ int page_unprotect(target_ulong address, uintptr_t pc)
162
}
163
#endif
164
}
165
- mprotect((void *)g2h(host_start), qemu_host_page_size,
166
+ mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
167
prot & PAGE_BITS);
168
}
169
mmap_unlock();
170
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
171
index XXXXXXX..XXXXXXX 100644
172
--- a/accel/tcg/user-exec.c
173
+++ b/accel/tcg/user-exec.c
174
@@ -XXX,XX +XXX,XX @@ int probe_access_flags(CPUArchState *env, target_ulong addr,
175
int flags;
176
177
flags = probe_access_internal(env, addr, 0, access_type, nonfault, ra);
178
- *phost = flags ? NULL : g2h(addr);
179
+ *phost = flags ? NULL : g2h(env_cpu(env), addr);
180
return flags;
181
}
182
183
@@ -XXX,XX +XXX,XX @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
184
flags = probe_access_internal(env, addr, size, access_type, false, ra);
185
g_assert(flags == 0);
186
187
- return size ? g2h(addr) : NULL;
188
+ return size ? g2h(env_cpu(env), addr) : NULL;
189
}
190
191
#if defined(__i386__)
192
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr ptr)
193
uint16_t meminfo = trace_mem_get_info(MO_UB, MMU_USER_IDX, false);
194
195
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
196
- ret = ldub_p(g2h(ptr));
197
+ ret = ldub_p(g2h(env_cpu(env), ptr));
198
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
199
return ret;
200
}
201
@@ -XXX,XX +XXX,XX @@ int cpu_ldsb_data(CPUArchState *env, abi_ptr ptr)
202
uint16_t meminfo = trace_mem_get_info(MO_SB, MMU_USER_IDX, false);
203
204
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
205
- ret = ldsb_p(g2h(ptr));
206
+ ret = ldsb_p(g2h(env_cpu(env), ptr));
207
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
208
return ret;
209
}
210
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_lduw_be_data(CPUArchState *env, abi_ptr ptr)
211
uint16_t meminfo = trace_mem_get_info(MO_BEUW, MMU_USER_IDX, false);
212
213
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
214
- ret = lduw_be_p(g2h(ptr));
215
+ ret = lduw_be_p(g2h(env_cpu(env), ptr));
216
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
217
return ret;
218
}
219
@@ -XXX,XX +XXX,XX @@ int cpu_ldsw_be_data(CPUArchState *env, abi_ptr ptr)
220
uint16_t meminfo = trace_mem_get_info(MO_BESW, MMU_USER_IDX, false);
221
222
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
223
- ret = ldsw_be_p(g2h(ptr));
224
+ ret = ldsw_be_p(g2h(env_cpu(env), ptr));
225
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
226
return ret;
227
}
228
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_be_data(CPUArchState *env, abi_ptr ptr)
229
uint16_t meminfo = trace_mem_get_info(MO_BEUL, MMU_USER_IDX, false);
230
231
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
232
- ret = ldl_be_p(g2h(ptr));
233
+ ret = ldl_be_p(g2h(env_cpu(env), ptr));
234
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
235
return ret;
236
}
237
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_be_data(CPUArchState *env, abi_ptr ptr)
238
uint16_t meminfo = trace_mem_get_info(MO_BEQ, MMU_USER_IDX, false);
239
240
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
241
- ret = ldq_be_p(g2h(ptr));
242
+ ret = ldq_be_p(g2h(env_cpu(env), ptr));
243
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
244
return ret;
245
}
246
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_lduw_le_data(CPUArchState *env, abi_ptr ptr)
247
uint16_t meminfo = trace_mem_get_info(MO_LEUW, MMU_USER_IDX, false);
248
249
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
250
- ret = lduw_le_p(g2h(ptr));
251
+ ret = lduw_le_p(g2h(env_cpu(env), ptr));
252
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
253
return ret;
254
}
255
@@ -XXX,XX +XXX,XX @@ int cpu_ldsw_le_data(CPUArchState *env, abi_ptr ptr)
256
uint16_t meminfo = trace_mem_get_info(MO_LESW, MMU_USER_IDX, false);
257
258
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
259
- ret = ldsw_le_p(g2h(ptr));
260
+ ret = ldsw_le_p(g2h(env_cpu(env), ptr));
261
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
262
return ret;
263
}
264
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_le_data(CPUArchState *env, abi_ptr ptr)
265
uint16_t meminfo = trace_mem_get_info(MO_LEUL, MMU_USER_IDX, false);
266
267
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
268
- ret = ldl_le_p(g2h(ptr));
269
+ ret = ldl_le_p(g2h(env_cpu(env), ptr));
270
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
271
return ret;
272
}
273
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_le_data(CPUArchState *env, abi_ptr ptr)
274
uint16_t meminfo = trace_mem_get_info(MO_LEQ, MMU_USER_IDX, false);
275
276
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
277
- ret = ldq_le_p(g2h(ptr));
278
+ ret = ldq_le_p(g2h(env_cpu(env), ptr));
279
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
280
return ret;
281
}
282
@@ -XXX,XX +XXX,XX @@ void cpu_stb_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
283
uint16_t meminfo = trace_mem_get_info(MO_UB, MMU_USER_IDX, true);
284
285
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
286
- stb_p(g2h(ptr), val);
287
+ stb_p(g2h(env_cpu(env), ptr), val);
288
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
289
}
290
291
@@ -XXX,XX +XXX,XX @@ void cpu_stw_be_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
292
uint16_t meminfo = trace_mem_get_info(MO_BEUW, MMU_USER_IDX, true);
293
294
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
295
- stw_be_p(g2h(ptr), val);
296
+ stw_be_p(g2h(env_cpu(env), ptr), val);
297
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
298
}
299
300
@@ -XXX,XX +XXX,XX @@ void cpu_stl_be_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
301
uint16_t meminfo = trace_mem_get_info(MO_BEUL, MMU_USER_IDX, true);
302
303
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
304
- stl_be_p(g2h(ptr), val);
305
+ stl_be_p(g2h(env_cpu(env), ptr), val);
306
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
307
}
308
309
@@ -XXX,XX +XXX,XX @@ void cpu_stq_be_data(CPUArchState *env, abi_ptr ptr, uint64_t val)
310
uint16_t meminfo = trace_mem_get_info(MO_BEQ, MMU_USER_IDX, true);
311
312
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
313
- stq_be_p(g2h(ptr), val);
314
+ stq_be_p(g2h(env_cpu(env), ptr), val);
315
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
316
}
317
318
@@ -XXX,XX +XXX,XX @@ void cpu_stw_le_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
319
uint16_t meminfo = trace_mem_get_info(MO_LEUW, MMU_USER_IDX, true);
320
321
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
322
- stw_le_p(g2h(ptr), val);
323
+ stw_le_p(g2h(env_cpu(env), ptr), val);
324
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
325
}
326
327
@@ -XXX,XX +XXX,XX @@ void cpu_stl_le_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
328
uint16_t meminfo = trace_mem_get_info(MO_LEUL, MMU_USER_IDX, true);
329
330
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
331
- stl_le_p(g2h(ptr), val);
332
+ stl_le_p(g2h(env_cpu(env), ptr), val);
333
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
334
}
335
336
@@ -XXX,XX +XXX,XX @@ void cpu_stq_le_data(CPUArchState *env, abi_ptr ptr, uint64_t val)
337
uint16_t meminfo = trace_mem_get_info(MO_LEQ, MMU_USER_IDX, true);
338
339
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
340
- stq_le_p(g2h(ptr), val);
341
+ stq_le_p(g2h(env_cpu(env), ptr), val);
342
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
343
}
344
345
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr)
346
uint32_t ret;
347
348
set_helper_retaddr(1);
349
- ret = ldub_p(g2h(ptr));
350
+ ret = ldub_p(g2h_untagged(ptr));
351
clear_helper_retaddr();
352
return ret;
353
}
354
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr ptr)
355
uint32_t ret;
356
357
set_helper_retaddr(1);
358
- ret = lduw_p(g2h(ptr));
359
+ ret = lduw_p(g2h_untagged(ptr));
360
clear_helper_retaddr();
361
return ret;
362
}
363
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr ptr)
364
uint32_t ret;
365
366
set_helper_retaddr(1);
367
- ret = ldl_p(g2h(ptr));
368
+ ret = ldl_p(g2h_untagged(ptr));
369
clear_helper_retaddr();
370
return ret;
371
}
372
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr ptr)
373
uint64_t ret;
374
375
set_helper_retaddr(1);
376
- ret = ldq_p(g2h(ptr));
377
+ ret = ldq_p(g2h_untagged(ptr));
378
clear_helper_retaddr();
379
return ret;
380
}
381
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
382
if (unlikely(addr & (size - 1))) {
383
cpu_loop_exit_atomic(env_cpu(env), retaddr);
384
}
385
- void *ret = g2h(addr);
386
+ void *ret = g2h(env_cpu(env), addr);
387
set_helper_retaddr(retaddr);
388
return ret;
389
}
390
diff --git a/bsd-user/elfload.c b/bsd-user/elfload.c
391
index XXXXXXX..XXXXXXX 100644
392
--- a/bsd-user/elfload.c
393
+++ b/bsd-user/elfload.c
394
@@ -XXX,XX +XXX,XX @@ static void padzero(abi_ulong elf_bss, abi_ulong last_bss)
395
end_addr1 = REAL_HOST_PAGE_ALIGN(elf_bss);
396
end_addr = HOST_PAGE_ALIGN(elf_bss);
397
if (end_addr1 < end_addr) {
398
- mmap((void *)g2h(end_addr1), end_addr - end_addr1,
399
+ mmap((void *)g2h_untagged(end_addr1), end_addr - end_addr1,
400
PROT_READ|PROT_WRITE|PROT_EXEC,
401
MAP_FIXED|MAP_PRIVATE|MAP_ANON, -1, 0);
402
}
403
diff --git a/bsd-user/main.c b/bsd-user/main.c
404
index XXXXXXX..XXXXXXX 100644
405
--- a/bsd-user/main.c
406
+++ b/bsd-user/main.c
407
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
408
env->idt.base = target_mmap(0, sizeof(uint64_t) * (env->idt.limit + 1),
409
PROT_READ|PROT_WRITE,
410
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
411
- idt_table = g2h(env->idt.base);
412
+ idt_table = g2h_untagged(env->idt.base);
413
set_idt(0, 0);
414
set_idt(1, 0);
415
set_idt(2, 0);
416
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
417
PROT_READ|PROT_WRITE,
418
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
419
env->gdt.limit = sizeof(uint64_t) * TARGET_GDT_ENTRIES - 1;
420
- gdt_table = g2h(env->gdt.base);
421
+ gdt_table = g2h_untagged(env->gdt.base);
422
#ifdef TARGET_ABI32
423
write_dt(&gdt_table[__USER_CS >> 3], 0, 0xfffff,
424
DESC_G_MASK | DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
425
diff --git a/bsd-user/mmap.c b/bsd-user/mmap.c
426
index XXXXXXX..XXXXXXX 100644
427
--- a/bsd-user/mmap.c
428
+++ b/bsd-user/mmap.c
429
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int prot)
430
}
431
end = host_end;
432
}
433
- ret = mprotect(g2h(host_start), qemu_host_page_size, prot1 & PAGE_BITS);
434
+ ret = mprotect(g2h_untagged(host_start),
435
+ qemu_host_page_size, prot1 & PAGE_BITS);
436
if (ret != 0)
437
goto error;
438
host_start += qemu_host_page_size;
439
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int prot)
440
for(addr = end; addr < host_end; addr += TARGET_PAGE_SIZE) {
441
prot1 |= page_get_flags(addr);
442
}
443
- ret = mprotect(g2h(host_end - qemu_host_page_size), qemu_host_page_size,
444
- prot1 & PAGE_BITS);
445
+ ret = mprotect(g2h_untagged(host_end - qemu_host_page_size),
446
+ qemu_host_page_size, prot1 & PAGE_BITS);
447
if (ret != 0)
448
goto error;
449
host_end -= qemu_host_page_size;
450
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int prot)
451
452
/* handle the pages in the middle */
453
if (host_start < host_end) {
454
- ret = mprotect(g2h(host_start), host_end - host_start, prot);
455
+ ret = mprotect(g2h_untagged(host_start), host_end - host_start, prot);
456
if (ret != 0)
457
goto error;
458
}
459
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
460
int prot1, prot_new;
461
462
real_end = real_start + qemu_host_page_size;
463
- host_start = g2h(real_start);
464
+ host_start = g2h_untagged(real_start);
465
466
/* get the protection of the target pages outside the mapping */
467
prot1 = 0;
468
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
469
mprotect(host_start, qemu_host_page_size, prot1 | PROT_WRITE);
470
471
/* read the corresponding file data */
472
- pread(fd, g2h(start), end - start, offset);
473
+ pread(fd, g2h_untagged(start), end - start, offset);
474
475
/* put final protection */
476
if (prot_new != (prot1 | PROT_WRITE))
477
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
478
/* Note: we prefer to control the mapping address. It is
479
especially important if qemu_host_page_size >
480
qemu_real_host_page_size */
481
- p = mmap(g2h(mmap_start),
482
+ p = mmap(g2h_untagged(mmap_start),
483
host_len, prot, flags | MAP_FIXED, fd, host_offset);
484
if (p == MAP_FAILED)
485
goto fail;
486
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
487
-1, 0);
488
if (retaddr == -1)
489
goto fail;
490
- pread(fd, g2h(start), len, offset);
491
+ pread(fd, g2h_untagged(start), len, offset);
492
if (!(prot & PROT_WRITE)) {
493
ret = target_mprotect(start, len, prot);
494
if (ret != 0) {
495
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
496
offset1 = 0;
497
else
498
offset1 = offset + real_start - start;
499
- p = mmap(g2h(real_start), real_end - real_start,
500
+ p = mmap(g2h_untagged(real_start), real_end - real_start,
501
prot, flags, fd, offset1);
502
if (p == MAP_FAILED)
503
goto fail;
504
@@ -XXX,XX +XXX,XX @@ int target_munmap(abi_ulong start, abi_ulong len)
505
ret = 0;
506
/* unmap what we can */
507
if (real_start < real_end) {
508
- ret = munmap(g2h(real_start), real_end - real_start);
509
+ ret = munmap(g2h_untagged(real_start), real_end - real_start);
510
}
511
512
if (ret == 0)
513
@@ -XXX,XX +XXX,XX @@ int target_msync(abi_ulong start, abi_ulong len, int flags)
514
return 0;
515
516
start &= qemu_host_page_mask;
517
- return msync(g2h(start), end - start, flags);
518
+ return msync(g2h_untagged(start), end - start, flags);
519
}
520
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
521
index XXXXXXX..XXXXXXX 100644
522
--- a/linux-user/elfload.c
523
+++ b/linux-user/elfload.c
524
@@ -XXX,XX +XXX,XX @@ enum {
525
526
static bool init_guest_commpage(void)
527
{
528
- void *want = g2h(ARM_COMMPAGE & -qemu_host_page_size);
529
+ void *want = g2h_untagged(ARM_COMMPAGE & -qemu_host_page_size);
530
void *addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE,
531
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
532
533
@@ -XXX,XX +XXX,XX @@ static bool init_guest_commpage(void)
534
}
535
536
/* Set kernel helper versions; rest of page is 0. */
537
- __put_user(5, (uint32_t *)g2h(0xffff0ffcu));
538
+ __put_user(5, (uint32_t *)g2h_untagged(0xffff0ffcu));
539
540
if (mprotect(addr, qemu_host_page_size, PROT_READ)) {
541
perror("Protecting guest commpage");
542
@@ -XXX,XX +XXX,XX @@ static void zero_bss(abi_ulong elf_bss, abi_ulong last_bss, int prot)
543
here is still actually needed. For now, continue with it,
544
but merge it with the "normal" mmap that would allocate the bss. */
545
546
- host_start = (uintptr_t) g2h(elf_bss);
547
- host_end = (uintptr_t) g2h(last_bss);
548
+ host_start = (uintptr_t) g2h_untagged(elf_bss);
549
+ host_end = (uintptr_t) g2h_untagged(last_bss);
550
host_map_start = REAL_HOST_PAGE_ALIGN(host_start);
551
552
if (host_map_start < host_end) {
553
@@ -XXX,XX +XXX,XX @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr,
554
}
555
556
/* Reserve the address space for the binary, or reserved_va. */
557
- test = g2h(guest_loaddr);
558
+ test = g2h_untagged(guest_loaddr);
559
addr = mmap(test, guest_hiaddr - guest_loaddr, PROT_NONE, flags, -1, 0);
560
if (test != addr) {
561
pgb_fail_in_use(image_name);
562
@@ -XXX,XX +XXX,XX @@ static void pgb_reserved_va(const char *image_name, abi_ulong guest_loaddr,
563
564
/* Reserve the memory on the host. */
565
assert(guest_base != 0);
566
- test = g2h(0);
567
+ test = g2h_untagged(0);
568
addr = mmap(test, reserved_va, PROT_NONE, flags, -1, 0);
569
if (addr == MAP_FAILED || addr != test) {
570
error_report("Unable to reserve 0x%lx bytes of virtual address "
571
diff --git a/linux-user/flatload.c b/linux-user/flatload.c
572
index XXXXXXX..XXXXXXX 100644
573
--- a/linux-user/flatload.c
574
+++ b/linux-user/flatload.c
575
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
576
}
577
578
/* zero the BSS. */
579
- memset(g2h(datapos + data_len), 0, bss_len);
580
+ memset(g2h_untagged(datapos + data_len), 0, bss_len);
581
582
return 0;
583
}
584
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
585
index XXXXXXX..XXXXXXX 100644
586
--- a/linux-user/hppa/cpu_loop.c
587
+++ b/linux-user/hppa/cpu_loop.c
588
@@ -XXX,XX +XXX,XX @@
589
590
static abi_ulong hppa_lws(CPUHPPAState *env)
591
{
592
+ CPUState *cs = env_cpu(env);
593
uint32_t which = env->gr[20];
594
abi_ulong addr = env->gr[26];
595
abi_ulong old = env->gr[25];
596
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
597
}
598
old = tswap32(old);
599
new = tswap32(new);
600
- ret = qatomic_cmpxchg((uint32_t *)g2h(addr), old, new);
601
+ ret = qatomic_cmpxchg((uint32_t *)g2h(cs, addr), old, new);
602
ret = tswap32(ret);
603
break;
604
605
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
606
can be host-endian as well. */
607
switch (size) {
608
case 0:
609
- old = *(uint8_t *)g2h(old);
610
- new = *(uint8_t *)g2h(new);
611
- ret = qatomic_cmpxchg((uint8_t *)g2h(addr), old, new);
612
+ old = *(uint8_t *)g2h(cs, old);
613
+ new = *(uint8_t *)g2h(cs, new);
614
+ ret = qatomic_cmpxchg((uint8_t *)g2h(cs, addr), old, new);
615
ret = ret != old;
616
break;
617
case 1:
618
- old = *(uint16_t *)g2h(old);
619
- new = *(uint16_t *)g2h(new);
620
- ret = qatomic_cmpxchg((uint16_t *)g2h(addr), old, new);
621
+ old = *(uint16_t *)g2h(cs, old);
622
+ new = *(uint16_t *)g2h(cs, new);
623
+ ret = qatomic_cmpxchg((uint16_t *)g2h(cs, addr), old, new);
624
ret = ret != old;
625
break;
626
case 2:
627
- old = *(uint32_t *)g2h(old);
628
- new = *(uint32_t *)g2h(new);
629
- ret = qatomic_cmpxchg((uint32_t *)g2h(addr), old, new);
630
+ old = *(uint32_t *)g2h(cs, old);
631
+ new = *(uint32_t *)g2h(cs, new);
632
+ ret = qatomic_cmpxchg((uint32_t *)g2h(cs, addr), old, new);
633
ret = ret != old;
634
break;
635
case 3:
636
{
637
uint64_t o64, n64, r64;
638
- o64 = *(uint64_t *)g2h(old);
639
- n64 = *(uint64_t *)g2h(new);
640
+ o64 = *(uint64_t *)g2h(cs, old);
641
+ n64 = *(uint64_t *)g2h(cs, new);
642
#ifdef CONFIG_ATOMIC64
643
- r64 = qatomic_cmpxchg__nocheck((uint64_t *)g2h(addr),
644
+ r64 = qatomic_cmpxchg__nocheck((uint64_t *)g2h(cs, addr),
645
o64, n64);
646
ret = r64 != o64;
647
#else
648
start_exclusive();
649
- r64 = *(uint64_t *)g2h(addr);
650
+ r64 = *(uint64_t *)g2h(cs, addr);
651
ret = 1;
652
if (r64 == o64) {
653
- *(uint64_t *)g2h(addr) = n64;
654
+ *(uint64_t *)g2h(cs, addr) = n64;
655
ret = 0;
656
}
657
end_exclusive();
658
diff --git a/linux-user/i386/cpu_loop.c b/linux-user/i386/cpu_loop.c
659
index XXXXXXX..XXXXXXX 100644
660
--- a/linux-user/i386/cpu_loop.c
661
+++ b/linux-user/i386/cpu_loop.c
662
@@ -XXX,XX +XXX,XX @@ void target_cpu_copy_regs(CPUArchState *env, struct target_pt_regs *regs)
663
env->idt.base = target_mmap(0, sizeof(uint64_t) * (env->idt.limit + 1),
664
PROT_READ|PROT_WRITE,
665
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
666
- idt_table = g2h(env->idt.base);
667
+ idt_table = g2h_untagged(env->idt.base);
668
set_idt(0, 0);
669
set_idt(1, 0);
670
set_idt(2, 0);
671
@@ -XXX,XX +XXX,XX @@ void target_cpu_copy_regs(CPUArchState *env, struct target_pt_regs *regs)
672
PROT_READ|PROT_WRITE,
673
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
674
env->gdt.limit = sizeof(uint64_t) * TARGET_GDT_ENTRIES - 1;
675
- gdt_table = g2h(env->gdt.base);
676
+ gdt_table = g2h_untagged(env->gdt.base);
677
#ifdef TARGET_ABI32
678
write_dt(&gdt_table[__USER_CS >> 3], 0, 0xfffff,
679
DESC_G_MASK | DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
680
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
681
index XXXXXXX..XXXXXXX 100644
682
--- a/linux-user/mmap.c
683
+++ b/linux-user/mmap.c
684
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
685
}
686
end = host_end;
687
}
688
- ret = mprotect(g2h(host_start), qemu_host_page_size,
689
+ ret = mprotect(g2h_untagged(host_start), qemu_host_page_size,
690
prot1 & PAGE_BITS);
691
if (ret != 0) {
692
goto error;
693
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
694
for (addr = end; addr < host_end; addr += TARGET_PAGE_SIZE) {
695
prot1 |= page_get_flags(addr);
696
}
697
- ret = mprotect(g2h(host_end - qemu_host_page_size),
698
+ ret = mprotect(g2h_untagged(host_end - qemu_host_page_size),
699
qemu_host_page_size, prot1 & PAGE_BITS);
700
if (ret != 0) {
701
goto error;
702
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
703
704
/* handle the pages in the middle */
705
if (host_start < host_end) {
706
- ret = mprotect(g2h(host_start), host_end - host_start, host_prot);
707
+ ret = mprotect(g2h_untagged(host_start),
708
+ host_end - host_start, host_prot);
709
if (ret != 0) {
710
goto error;
711
}
712
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
713
int prot1, prot_new;
714
715
real_end = real_start + qemu_host_page_size;
716
- host_start = g2h(real_start);
717
+ host_start = g2h_untagged(real_start);
718
719
/* get the protection of the target pages outside the mapping */
720
prot1 = 0;
721
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
722
mprotect(host_start, qemu_host_page_size, prot1 | PROT_WRITE);
723
724
/* read the corresponding file data */
725
- if (pread(fd, g2h(start), end - start, offset) == -1)
726
+ if (pread(fd, g2h_untagged(start), end - start, offset) == -1)
727
return -1;
728
729
/* put final protection */
730
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
731
mprotect(host_start, qemu_host_page_size, prot_new);
732
}
733
if (prot_new & PROT_WRITE) {
734
- memset(g2h(start), 0, end - start);
735
+ memset(g2h_untagged(start), 0, end - start);
736
}
737
}
738
return 0;
739
@@ -XXX,XX +XXX,XX @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
740
* - mremap() with MREMAP_FIXED flag
741
* - shmat() with SHM_REMAP flag
742
*/
743
- ptr = mmap(g2h(addr), size, PROT_NONE,
744
+ ptr = mmap(g2h_untagged(addr), size, PROT_NONE,
745
MAP_ANONYMOUS|MAP_PRIVATE|MAP_NORESERVE, -1, 0);
746
747
/* ENOMEM, if host address space has no memory */
748
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
749
/* Note: we prefer to control the mapping address. It is
750
especially important if qemu_host_page_size >
751
qemu_real_host_page_size */
752
- p = mmap(g2h(start), host_len, host_prot,
753
+ p = mmap(g2h_untagged(start), host_len, host_prot,
754
flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
755
if (p == MAP_FAILED) {
756
goto fail;
757
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
758
/* update start so that it points to the file position at 'offset' */
759
host_start = (unsigned long)p;
760
if (!(flags & MAP_ANONYMOUS)) {
761
- p = mmap(g2h(start), len, host_prot,
762
+ p = mmap(g2h_untagged(start), len, host_prot,
763
flags | MAP_FIXED, fd, host_offset);
764
if (p == MAP_FAILED) {
765
- munmap(g2h(start), host_len);
766
+ munmap(g2h_untagged(start), host_len);
767
goto fail;
768
}
769
host_start += offset - host_offset;
770
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
771
-1, 0);
772
if (retaddr == -1)
773
goto fail;
774
- if (pread(fd, g2h(start), len, offset) == -1)
775
+ if (pread(fd, g2h_untagged(start), len, offset) == -1)
776
goto fail;
777
if (!(host_prot & PROT_WRITE)) {
778
ret = target_mprotect(start, len, target_prot);
779
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
780
offset1 = 0;
781
else
782
offset1 = offset + real_start - start;
783
- p = mmap(g2h(real_start), real_end - real_start,
784
+ p = mmap(g2h_untagged(real_start), real_end - real_start,
785
host_prot, flags, fd, offset1);
786
if (p == MAP_FAILED)
787
goto fail;
788
@@ -XXX,XX +XXX,XX @@ static void mmap_reserve(abi_ulong start, abi_ulong size)
789
real_end -= qemu_host_page_size;
790
}
791
if (real_start != real_end) {
792
- mmap(g2h(real_start), real_end - real_start, PROT_NONE,
793
+ mmap(g2h_untagged(real_start), real_end - real_start, PROT_NONE,
794
MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE,
795
-1, 0);
796
}
797
@@ -XXX,XX +XXX,XX @@ int target_munmap(abi_ulong start, abi_ulong len)
798
if (reserved_va) {
799
mmap_reserve(real_start, real_end - real_start);
800
} else {
801
- ret = munmap(g2h(real_start), real_end - real_start);
802
+ ret = munmap(g2h_untagged(real_start), real_end - real_start);
803
}
804
}
805
806
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
807
mmap_lock();
808
809
if (flags & MREMAP_FIXED) {
810
- host_addr = mremap(g2h(old_addr), old_size, new_size,
811
- flags, g2h(new_addr));
812
+ host_addr = mremap(g2h_untagged(old_addr), old_size, new_size,
813
+ flags, g2h_untagged(new_addr));
814
815
if (reserved_va && host_addr != MAP_FAILED) {
816
/* If new and old addresses overlap then the above mremap will
817
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
818
errno = ENOMEM;
819
host_addr = MAP_FAILED;
820
} else {
821
- host_addr = mremap(g2h(old_addr), old_size, new_size,
822
- flags | MREMAP_FIXED, g2h(mmap_start));
823
+ host_addr = mremap(g2h_untagged(old_addr), old_size, new_size,
824
+ flags | MREMAP_FIXED,
825
+ g2h_untagged(mmap_start));
826
if (reserved_va) {
827
mmap_reserve(old_addr, old_size);
828
}
829
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
830
}
831
}
832
if (prot == 0) {
833
- host_addr = mremap(g2h(old_addr), old_size, new_size, flags);
834
+ host_addr = mremap(g2h_untagged(old_addr),
835
+ old_size, new_size, flags);
836
837
if (host_addr != MAP_FAILED) {
838
/* Check if address fits target address space */
839
if (!guest_range_valid(h2g(host_addr), new_size)) {
840
/* Revert mremap() changes */
841
- host_addr = mremap(g2h(old_addr), new_size, old_size,
842
- flags);
843
+ host_addr = mremap(g2h_untagged(old_addr),
844
+ new_size, old_size, flags);
845
errno = ENOMEM;
846
host_addr = MAP_FAILED;
847
} else if (reserved_va && old_size > new_size) {
848
diff --git a/linux-user/ppc/signal.c b/linux-user/ppc/signal.c
849
index XXXXXXX..XXXXXXX 100644
850
--- a/linux-user/ppc/signal.c
851
+++ b/linux-user/ppc/signal.c
852
@@ -XXX,XX +XXX,XX @@ static void restore_user_regs(CPUPPCState *env,
853
uint64_t v_addr;
854
/* 64-bit needs to recover the pointer to the vectors from the frame */
855
__get_user(v_addr, &frame->v_regs);
856
- v_regs = g2h(v_addr);
857
+ v_regs = g2h(env_cpu(env), v_addr);
858
#else
859
v_regs = (ppc_avr_t *)frame->mc_vregs.altivec;
860
#endif
861
@@ -XXX,XX +XXX,XX @@ void setup_rt_frame(int sig, struct target_sigaction *ka,
862
if (get_ppc64_abi(image) < 2) {
863
/* ELFv1 PPC64 function pointers are pointers to OPD entries. */
864
struct target_func_ptr *handler =
865
- (struct target_func_ptr *)g2h(ka->_sa_handler);
866
+ (struct target_func_ptr *)g2h(env_cpu(env), ka->_sa_handler);
867
env->nip = tswapl(handler->entry);
868
env->gpr[2] = tswapl(handler->toc);
869
} else {
870
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
871
index XXXXXXX..XXXXXXX 100644
872
--- a/linux-user/syscall.c
873
+++ b/linux-user/syscall.c
874
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
875
/* Heap contents are initialized to zero, as for anonymous
876
* mapped pages. */
877
if (new_brk > target_brk) {
878
- memset(g2h(target_brk), 0, new_brk - target_brk);
879
+ memset(g2h_untagged(target_brk), 0, new_brk - target_brk);
880
}
881
    target_brk = new_brk;
882
DEBUGF_BRK(TARGET_ABI_FMT_lx " (new_brk <= brk_page)\n", target_brk);
883
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
884
* come from the remaining part of the previous page: it may
885
* contains garbage data due to a previous heap usage (grown
886
* then shrunken). */
887
- memset(g2h(target_brk), 0, brk_page - target_brk);
888
+ memset(g2h_untagged(target_brk), 0, brk_page - target_brk);
889
890
target_brk = new_brk;
891
brk_page = HOST_PAGE_ALIGN(target_brk);
892
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
893
mmap_lock();
894
895
if (shmaddr)
896
- host_raddr = shmat(shmid, (void *)g2h(shmaddr), shmflg);
897
+ host_raddr = shmat(shmid, (void *)g2h_untagged(shmaddr), shmflg);
898
else {
899
abi_ulong mmap_start;
900
901
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
902
errno = ENOMEM;
903
host_raddr = (void *)-1;
904
} else
905
- host_raddr = shmat(shmid, g2h(mmap_start), shmflg | SHM_REMAP);
906
+ host_raddr = shmat(shmid, g2h_untagged(mmap_start),
907
+ shmflg | SHM_REMAP);
908
}
909
910
if (host_raddr == (void *)-1) {
911
@@ -XXX,XX +XXX,XX @@ static inline abi_long do_shmdt(abi_ulong shmaddr)
912
break;
913
}
914
}
915
- rv = get_errno(shmdt(g2h(shmaddr)));
916
+ rv = get_errno(shmdt(g2h_untagged(shmaddr)));
917
918
mmap_unlock();
919
920
@@ -XXX,XX +XXX,XX @@ static abi_long write_ldt(CPUX86State *env,
921
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
922
if (env->ldt.base == -1)
923
return -TARGET_ENOMEM;
924
- memset(g2h(env->ldt.base), 0,
925
+ memset(g2h_untagged(env->ldt.base), 0,
926
TARGET_LDT_ENTRIES * TARGET_LDT_ENTRY_SIZE);
927
env->ldt.limit = 0xffff;
928
- ldt_table = g2h(env->ldt.base);
929
+ ldt_table = g2h_untagged(env->ldt.base);
930
}
931
932
/* NOTE: same code as Linux kernel */
933
@@ -XXX,XX +XXX,XX @@ static abi_long do_modify_ldt(CPUX86State *env, int func, abi_ulong ptr,
934
#if defined(TARGET_ABI32)
935
abi_long do_set_thread_area(CPUX86State *env, abi_ulong ptr)
936
{
937
- uint64_t *gdt_table = g2h(env->gdt.base);
938
+ uint64_t *gdt_table = g2h_untagged(env->gdt.base);
939
struct target_modify_ldt_ldt_s ldt_info;
940
struct target_modify_ldt_ldt_s *target_ldt_info;
941
int seg_32bit, contents, read_exec_only, limit_in_pages;
942
@@ -XXX,XX +XXX,XX @@ install:
943
static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
944
{
945
struct target_modify_ldt_ldt_s *target_ldt_info;
946
- uint64_t *gdt_table = g2h(env->gdt.base);
947
+ uint64_t *gdt_table = g2h_untagged(env->gdt.base);
948
uint32_t base_addr, limit, flags;
949
int seg_32bit, contents, read_exec_only, limit_in_pages, idx;
950
int seg_not_present, useable, lm;
951
@@ -XXX,XX +XXX,XX @@ static int do_safe_futex(int *uaddr, int op, int val,
952
tricky. However they're probably useless because guest atomic
953
operations won't work either. */
954
#if defined(TARGET_NR_futex)
955
-static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
956
- target_ulong uaddr2, int val3)
957
+static int do_futex(CPUState *cpu, target_ulong uaddr, int op, int val,
958
+ target_ulong timeout, target_ulong uaddr2, int val3)
959
{
960
struct timespec ts, *pts;
961
int base_op;
962
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
963
} else {
964
pts = NULL;
965
}
966
- return do_safe_futex(g2h(uaddr), op, tswap32(val), pts, NULL, val3);
967
+ return do_safe_futex(g2h(cpu, uaddr),
968
+ op, tswap32(val), pts, NULL, val3);
969
case FUTEX_WAKE:
970
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
971
+ return do_safe_futex(g2h(cpu, uaddr),
972
+ op, val, NULL, NULL, 0);
973
case FUTEX_FD:
974
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
975
+ return do_safe_futex(g2h(cpu, uaddr),
976
+ op, val, NULL, NULL, 0);
977
case FUTEX_REQUEUE:
978
case FUTEX_CMP_REQUEUE:
979
case FUTEX_WAKE_OP:
980
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
981
to satisfy the compiler. We do not need to tswap TIMEOUT
982
since it's not compared to guest memory. */
983
pts = (struct timespec *)(uintptr_t) timeout;
984
- return do_safe_futex(g2h(uaddr), op, val, pts, g2h(uaddr2),
985
+ return do_safe_futex(g2h(cpu, uaddr), op, val, pts, g2h(cpu, uaddr2),
986
(base_op == FUTEX_CMP_REQUEUE
987
- ? tswap32(val3)
988
- : val3));
989
+ ? tswap32(val3) : val3));
990
default:
991
return -TARGET_ENOSYS;
992
}
993
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
994
#endif
995
996
#if defined(TARGET_NR_futex_time64)
997
-static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong timeout,
998
+static int do_futex_time64(CPUState *cpu, target_ulong uaddr, int op,
999
+ int val, target_ulong timeout,
1000
target_ulong uaddr2, int val3)
1001
{
1002
struct timespec ts, *pts;
1003
@@ -XXX,XX +XXX,XX @@ static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong tim
1004
} else {
1005
pts = NULL;
1006
}
1007
- return do_safe_futex(g2h(uaddr), op, tswap32(val), pts, NULL, val3);
1008
+ return do_safe_futex(g2h(cpu, uaddr), op,
1009
+ tswap32(val), pts, NULL, val3);
1010
case FUTEX_WAKE:
1011
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
1012
+ return do_safe_futex(g2h(cpu, uaddr), op, val, NULL, NULL, 0);
1013
case FUTEX_FD:
1014
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
1015
+ return do_safe_futex(g2h(cpu, uaddr), op, val, NULL, NULL, 0);
1016
case FUTEX_REQUEUE:
1017
case FUTEX_CMP_REQUEUE:
1018
case FUTEX_WAKE_OP:
1019
@@ -XXX,XX +XXX,XX @@ static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong tim
1020
to satisfy the compiler. We do not need to tswap TIMEOUT
1021
since it's not compared to guest memory. */
1022
pts = (struct timespec *)(uintptr_t) timeout;
1023
- return do_safe_futex(g2h(uaddr), op, val, pts, g2h(uaddr2),
1024
+ return do_safe_futex(g2h(cpu, uaddr), op, val, pts, g2h(cpu, uaddr2),
1025
(base_op == FUTEX_CMP_REQUEUE
1026
- ? tswap32(val3)
1027
- : val3));
1028
+ ? tswap32(val3) : val3));
1029
default:
1030
return -TARGET_ENOSYS;
1031
}
1032
@@ -XXX,XX +XXX,XX @@ static int open_self_maps(void *cpu_env, int fd)
1033
const char *path;
1034
1035
max = h2g_valid(max - 1) ?
1036
- max : (uintptr_t) g2h(GUEST_ADDR_MAX) + 1;
1037
+ max : (uintptr_t) g2h_untagged(GUEST_ADDR_MAX) + 1;
1038
1039
if (page_check_range(h2g(min), max - min, flags) == -1) {
1040
continue;
1041
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
1042
1043
if (ts->child_tidptr) {
1044
put_user_u32(0, ts->child_tidptr);
1045
- do_sys_futex(g2h(ts->child_tidptr), FUTEX_WAKE, INT_MAX,
1046
- NULL, NULL, 0);
1047
+ do_sys_futex(g2h(cpu, ts->child_tidptr),
1048
+ FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
1049
}
1050
thread_cpu = NULL;
1051
g_free(ts);
1052
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
1053
if (!arg5) {
1054
ret = mount(p, p2, p3, (unsigned long)arg4, NULL);
1055
} else {
1056
- ret = mount(p, p2, p3, (unsigned long)arg4, g2h(arg5));
1057
+ ret = mount(p, p2, p3, (unsigned long)arg4, g2h(cpu, arg5));
1058
}
1059
ret = get_errno(ret);
1060
1061
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
1062
/* ??? msync/mlock/munlock are broken for softmmu. */
1063
#ifdef TARGET_NR_msync
1064
case TARGET_NR_msync:
1065
- return get_errno(msync(g2h(arg1), arg2, arg3));
1066
+ return get_errno(msync(g2h(cpu, arg1), arg2, arg3));
1067
#endif
1068
#ifdef TARGET_NR_mlock
1069
case TARGET_NR_mlock:
1070
- return get_errno(mlock(g2h(arg1), arg2));
1071
+ return get_errno(mlock(g2h(cpu, arg1), arg2));
1072
#endif
1073
#ifdef TARGET_NR_munlock
1074
case TARGET_NR_munlock:
1075
- return get_errno(munlock(g2h(arg1), arg2));
1076
+ return get_errno(munlock(g2h(cpu, arg1), arg2));
1077
#endif
1078
#ifdef TARGET_NR_mlockall
1079
case TARGET_NR_mlockall:
1080
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
1081
1082
#if defined(TARGET_NR_set_tid_address) && defined(__NR_set_tid_address)
1083
case TARGET_NR_set_tid_address:
1084
- return get_errno(set_tid_address((int *)g2h(arg1)));
1085
+ return get_errno(set_tid_address((int *)g2h(cpu, arg1)));
1086
#endif
1087
1088
case TARGET_NR_tkill:
1089
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
1090
#endif
1091
#ifdef TARGET_NR_futex
1092
case TARGET_NR_futex:
1093
- return do_futex(arg1, arg2, arg3, arg4, arg5, arg6);
1094
+ return do_futex(cpu, arg1, arg2, arg3, arg4, arg5, arg6);
1095
#endif
1096
#ifdef TARGET_NR_futex_time64
1097
case TARGET_NR_futex_time64:
1098
- return do_futex_time64(arg1, arg2, arg3, arg4, arg5, arg6);
1099
+ return do_futex_time64(cpu, arg1, arg2, arg3, arg4, arg5, arg6);
1100
#endif
1101
#if defined(TARGET_NR_inotify_init) && defined(__NR_inotify_init)
1102
case TARGET_NR_inotify_init:
1103
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
20
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
1104
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
1105
--- a/target/arm/helper-a64.c
22
--- a/target/arm/helper-a64.c
1106
+++ b/target/arm/helper-a64.c
23
+++ b/target/arm/helper-a64.c
1107
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_le)(CPUARMState *env, uint64_t addr,
24
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
1108
25
float16 nan = a;
1109
#ifdef CONFIG_USER_ONLY
26
if (float16_is_signaling_nan(a, fpst)) {
1110
/* ??? Enforce alignment. */
27
float_raise(float_flag_invalid, fpst);
1111
- uint64_t *haddr = g2h(addr);
28
- nan = float16_silence_nan(a, fpst);
1112
+ uint64_t *haddr = g2h(env_cpu(env), addr);
29
+ if (!fpst->default_nan_mode) {
1113
30
+ nan = float16_silence_nan(a, fpst);
1114
set_helper_retaddr(ra);
31
+ }
1115
o0 = ldq_le_p(haddr + 0);
32
}
1116
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_be)(CPUARMState *env, uint64_t addr,
33
if (fpst->default_nan_mode) {
1117
34
nan = float16_default_nan(fpst);
1118
#ifdef CONFIG_USER_ONLY
35
@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
1119
/* ??? Enforce alignment. */
36
float32 nan = a;
1120
- uint64_t *haddr = g2h(addr);
37
if (float32_is_signaling_nan(a, fpst)) {
1121
+ uint64_t *haddr = g2h(env_cpu(env), addr);
38
float_raise(float_flag_invalid, fpst);
1122
39
- nan = float32_silence_nan(a, fpst);
1123
set_helper_retaddr(ra);
40
+ if (!fpst->default_nan_mode) {
1124
o1 = ldq_be_p(haddr + 0);
41
+ nan = float32_silence_nan(a, fpst);
1125
diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c
42
+ }
43
}
44
if (fpst->default_nan_mode) {
45
nan = float32_default_nan(fpst);
46
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
47
float64 nan = a;
48
if (float64_is_signaling_nan(a, fpst)) {
49
float_raise(float_flag_invalid, fpst);
50
- nan = float64_silence_nan(a, fpst);
51
+ if (!fpst->default_nan_mode) {
52
+ nan = float64_silence_nan(a, fpst);
53
+ }
54
}
55
if (fpst->default_nan_mode) {
56
nan = float64_default_nan(fpst);
57
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
1126
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
1127
--- a/target/hppa/op_helper.c
59
--- a/target/arm/vfp_helper.c
1128
+++ b/target/hppa/op_helper.c
60
+++ b/target/arm/vfp_helper.c
1129
@@ -XXX,XX +XXX,XX @@ static void atomic_store_3(CPUHPPAState *env, target_ulong addr, uint32_t val,
61
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
1130
#ifdef CONFIG_USER_ONLY
62
float16 nan = f16;
1131
uint32_t old, new, cmp;
63
if (float16_is_signaling_nan(f16, fpst)) {
1132
64
float_raise(float_flag_invalid, fpst);
1133
- uint32_t *haddr = g2h(addr - 1);
65
- nan = float16_silence_nan(f16, fpst);
1134
+ uint32_t *haddr = g2h(env_cpu(env), addr - 1);
66
+ if (!fpst->default_nan_mode) {
1135
old = *haddr;
67
+ nan = float16_silence_nan(f16, fpst);
1136
while (1) {
68
+ }
1137
new = (old & ~mask) | (val & mask);
69
}
1138
diff --git a/target/i386/tcg/mem_helper.c b/target/i386/tcg/mem_helper.c
70
if (fpst->default_nan_mode) {
1139
index XXXXXXX..XXXXXXX 100644
71
nan = float16_default_nan(fpst);
1140
--- a/target/i386/tcg/mem_helper.c
72
@@ -XXX,XX +XXX,XX @@ float32 HELPER(recpe_f32)(float32 input, void *fpstp)
1141
+++ b/target/i386/tcg/mem_helper.c
73
float32 nan = f32;
1142
@@ -XXX,XX +XXX,XX @@ void helper_cmpxchg8b(CPUX86State *env, target_ulong a0)
74
if (float32_is_signaling_nan(f32, fpst)) {
1143
75
float_raise(float_flag_invalid, fpst);
1144
#ifdef CONFIG_USER_ONLY
76
- nan = float32_silence_nan(f32, fpst);
1145
{
77
+ if (!fpst->default_nan_mode) {
1146
- uint64_t *haddr = g2h(a0);
78
+ nan = float32_silence_nan(f32, fpst);
1147
+ uint64_t *haddr = g2h(env_cpu(env), a0);
79
+ }
1148
cmpv = cpu_to_le64(cmpv);
80
}
1149
newv = cpu_to_le64(newv);
81
if (fpst->default_nan_mode) {
1150
oldv = qatomic_cmpxchg__nocheck(haddr, cmpv, newv);
82
nan = float32_default_nan(fpst);
1151
diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c
83
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpe_f64)(float64 input, void *fpstp)
1152
index XXXXXXX..XXXXXXX 100644
84
float64 nan = f64;
1153
--- a/target/s390x/mem_helper.c
85
if (float64_is_signaling_nan(f64, fpst)) {
1154
+++ b/target/s390x/mem_helper.c
86
float_raise(float_flag_invalid, fpst);
1155
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
87
- nan = float64_silence_nan(f64, fpst);
1156
88
+ if (!fpst->default_nan_mode) {
1157
if (parallel) {
89
+ nan = float64_silence_nan(f64, fpst);
1158
#ifdef CONFIG_USER_ONLY
90
+ }
1159
- uint32_t *haddr = g2h(a1);
91
}
1160
+ uint32_t *haddr = g2h(env_cpu(env), a1);
92
if (fpst->default_nan_mode) {
1161
ov = qatomic_cmpxchg__nocheck(haddr, cv, nv);
93
nan = float64_default_nan(fpst);
1162
#else
94
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
1163
TCGMemOpIdx oi = make_memop_idx(MO_TEUL | MO_ALIGN, mem_idx);
95
float16 nan = f16;
1164
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
96
if (float16_is_signaling_nan(f16, s)) {
1165
if (parallel) {
97
float_raise(float_flag_invalid, s);
1166
#ifdef CONFIG_ATOMIC64
98
- nan = float16_silence_nan(f16, s);
1167
# ifdef CONFIG_USER_ONLY
99
+ if (!s->default_nan_mode) {
1168
- uint64_t *haddr = g2h(a1);
100
+ nan = float16_silence_nan(f16, fpstp);
1169
+ uint64_t *haddr = g2h(env_cpu(env), a1);
101
+ }
1170
ov = qatomic_cmpxchg__nocheck(haddr, cv, nv);
102
}
1171
# else
103
if (s->default_nan_mode) {
1172
TCGMemOpIdx oi = make_memop_idx(MO_TEQ | MO_ALIGN, mem_idx);
104
nan = float16_default_nan(s);
105
@@ -XXX,XX +XXX,XX @@ float32 HELPER(rsqrte_f32)(float32 input, void *fpstp)
106
float32 nan = f32;
107
if (float32_is_signaling_nan(f32, s)) {
108
float_raise(float_flag_invalid, s);
109
- nan = float32_silence_nan(f32, s);
110
+ if (!s->default_nan_mode) {
111
+ nan = float32_silence_nan(f32, fpstp);
112
+ }
113
}
114
if (s->default_nan_mode) {
115
nan = float32_default_nan(s);
116
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrte_f64)(float64 input, void *fpstp)
117
float64 nan = f64;
118
if (float64_is_signaling_nan(f64, s)) {
119
float_raise(float_flag_invalid, s);
120
- nan = float64_silence_nan(f64, s);
121
+ if (!s->default_nan_mode) {
122
+ nan = float64_silence_nan(f64, fpstp);
123
+ }
124
}
125
if (s->default_nan_mode) {
126
nan = float64_default_nan(s);
1173
--
127
--
1174
2.20.1
128
2.20.1
1175
129
1176
130
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Maxim Uvarov <maxim.uvarov@linaro.org>
2
2
3
This data can be allocated by page_alloc_target_data() and
3
QEMU has two types of functions: shutdown and reboot. The shutdown
4
released by page_set_flags(start, end, prot | PAGE_RESET).
4
function has to be used for machine shutdown. Otherwise we cause
5
a reset with a bogus "cause" value, when we intended a shutdown.
5
6
6
This data will be used to hold tag memory for AArch64 MTE.
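
(Illustrative sketch only, not part of this patch: one way a target could
consume the new hooks once they exist. The helper name and tag layout are
assumptions; the real MTE wiring arrives later in the series.)

    /* Find, allocating on first use, the out-of-band data for a guest page. */
    static uint8_t *page_tag_memory(target_ulong addr, size_t tag_bytes)
    {
        uint8_t *tags = page_get_target_data(addr);

        if (tags == NULL) {
            /* Returns NULL if the guest page is not mapped. */
            tags = page_alloc_target_data(addr, tag_bytes);
        }
        return tags;
    }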
7
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210625111842.3790-3-maxim.uvarov@linaro.org
10
Message-id: 20210212184902.1251044-2-richard.henderson@linaro.org
10
[PMM: tweaked commit message]
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
12
---
13
include/exec/cpu-all.h | 42 +++++++++++++++++++++++++++++++++------
13
hw/gpio/gpio_pwr.c | 2 +-
14
accel/tcg/translate-all.c | 28 ++++++++++++++++++++++++++
14
1 file changed, 1 insertion(+), 1 deletion(-)
15
linux-user/mmap.c | 4 +++-
16
linux-user/syscall.c | 4 ++--
17
4 files changed, 69 insertions(+), 9 deletions(-)
18
15
19
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
16
diff --git a/hw/gpio/gpio_pwr.c b/hw/gpio/gpio_pwr.c
20
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/cpu-all.h
18
--- a/hw/gpio/gpio_pwr.c
22
+++ b/include/exec/cpu-all.h
19
+++ b/hw/gpio/gpio_pwr.c
23
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
20
@@ -XXX,XX +XXX,XX @@ static void gpio_pwr_reset(void *opaque, int n, int level)
24
#define PAGE_EXEC 0x0004
21
static void gpio_pwr_shutdown(void *opaque, int n, int level)
25
#define PAGE_BITS (PAGE_READ | PAGE_WRITE | PAGE_EXEC)
26
#define PAGE_VALID 0x0008
27
-/* original state of the write flag (used when tracking self-modifying
28
- code */
29
+/*
30
+ * Original state of the write flag (used when tracking self-modifying code)
31
+ */
32
#define PAGE_WRITE_ORG 0x0010
33
-/* Invalidate the TLB entry immediately, helpful for s390x
34
- * Low-Address-Protection. Used with PAGE_WRITE in tlb_set_page_with_attrs() */
35
-#define PAGE_WRITE_INV 0x0040
36
+/*
37
+ * Invalidate the TLB entry immediately, helpful for s390x
38
+ * Low-Address-Protection. Used with PAGE_WRITE in tlb_set_page_with_attrs()
39
+ */
40
+#define PAGE_WRITE_INV 0x0020
41
+/* For use with page_set_flags: page is being replaced; target_data cleared. */
42
+#define PAGE_RESET 0x0040
43
+
44
#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
45
/* FIXME: Code that sets/uses this is broken and needs to go away. */
46
-#define PAGE_RESERVED 0x0020
47
+#define PAGE_RESERVED 0x0100
48
#endif
49
/* Target-specific bits that will be used via page_get_flags(). */
50
#define PAGE_TARGET_1 0x0080
51
@@ -XXX,XX +XXX,XX @@ int walk_memory_regions(void *, walk_memory_regions_fn);
52
int page_get_flags(target_ulong address);
53
void page_set_flags(target_ulong start, target_ulong end, int flags);
54
int page_check_range(target_ulong start, target_ulong len, int flags);
55
+
56
+/**
57
+ * page_alloc_target_data(address, size)
58
+ * @address: guest virtual address
59
+ * @size: size of data to allocate
60
+ *
61
+ * Allocate @size bytes of out-of-band data to associate with the
62
+ * guest page at @address. If the page is not mapped, NULL will
63
+ * be returned. If there is existing data associated with @address,
64
+ * no new memory will be allocated.
65
+ *
66
+ * The memory will be freed when the guest page is deallocated,
67
+ * e.g. with the munmap system call.
68
+ */
69
+void *page_alloc_target_data(target_ulong address, size_t size);
70
+
71
+/**
72
+ * page_get_target_data(address)
73
+ * @address: guest virtual address
74
+ *
75
+ * Return any out-of-band memory associated with the guest page
76
+ * at @address, as per page_alloc_target_data.
77
+ */
78
+void *page_get_target_data(target_ulong address);
79
#endif
80
81
CPUArchState *cpu_copy(CPUArchState *env);
82
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/accel/tcg/translate-all.c
85
+++ b/accel/tcg/translate-all.c
86
@@ -XXX,XX +XXX,XX @@ typedef struct PageDesc {
87
unsigned int code_write_count;
88
#else
89
unsigned long flags;
90
+ void *target_data;
91
#endif
92
#ifndef CONFIG_USER_ONLY
93
QemuSpin lock;
94
@@ -XXX,XX +XXX,XX @@ int page_get_flags(target_ulong address)
95
void page_set_flags(target_ulong start, target_ulong end, int flags)
96
{
22
{
97
target_ulong addr, len;
23
if (level) {
98
+ bool reset_target_data;
24
- qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
99
25
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
100
/* This function should never be called with addresses outside the
101
guest address space. If this assert fires, it probably indicates
102
@@ -XXX,XX +XXX,XX @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
103
if (flags & PAGE_WRITE) {
104
flags |= PAGE_WRITE_ORG;
105
}
106
+ reset_target_data = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
107
+ flags &= ~PAGE_RESET;
108
109
for (addr = start, len = end - start;
110
len != 0;
111
@@ -XXX,XX +XXX,XX @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
112
p->first_tb) {
113
tb_invalidate_phys_page(addr, 0);
114
}
115
+ if (reset_target_data && p->target_data) {
116
+ g_free(p->target_data);
117
+ p->target_data = NULL;
118
+ }
119
p->flags = flags;
120
}
26
}
121
}
27
}
122
28
123
+void *page_get_target_data(target_ulong address)
124
+{
125
+ PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
126
+ return p ? p->target_data : NULL;
127
+}
128
+
129
+void *page_alloc_target_data(target_ulong address, size_t size)
130
+{
131
+ PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
132
+ void *ret = NULL;
133
+
134
+ if (p->flags & PAGE_VALID) {
135
+ ret = p->target_data;
136
+ if (!ret) {
137
+ p->target_data = ret = g_malloc0(size);
138
+ }
139
+ }
140
+ return ret;
141
+}
142
+
143
int page_check_range(target_ulong start, target_ulong len, int flags)
144
{
145
PageDesc *p;
146
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/linux-user/mmap.c
149
+++ b/linux-user/mmap.c
150
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
151
}
152
}
153
the_end1:
154
+ page_flags |= PAGE_RESET;
155
page_set_flags(start, start + len, page_flags);
156
the_end:
157
trace_target_mmap_complete(start);
158
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
159
new_addr = h2g(host_addr);
160
prot = page_get_flags(old_addr);
161
page_set_flags(old_addr, old_addr + old_size, 0);
162
- page_set_flags(new_addr, new_addr + new_size, prot | PAGE_VALID);
163
+ page_set_flags(new_addr, new_addr + new_size,
164
+ prot | PAGE_VALID | PAGE_RESET);
165
}
166
tb_invalidate_phys_range(new_addr, new_addr + new_size);
167
mmap_unlock();
168
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
169
index XXXXXXX..XXXXXXX 100644
170
--- a/linux-user/syscall.c
171
+++ b/linux-user/syscall.c
172
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
173
raddr=h2g((unsigned long)host_raddr);
174
175
page_set_flags(raddr, raddr + shm_info.shm_segsz,
176
- PAGE_VALID | PAGE_READ |
177
- ((shmflg & SHM_RDONLY)? 0 : PAGE_WRITE));
178
+ PAGE_VALID | PAGE_RESET | PAGE_READ |
179
+ (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE));
180
181
for (i = 0; i < N_SHM_REGIONS; i++) {
182
if (!shm_regions[i].in_use) {
183
--
29
--
184
2.20.1
30
2.20.1
185
31
186
32
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Record whether the backing page is anonymous, or if it has file
4
backing. This will allow us to get close to the Linux AArch64
5
ABI for MTE, which allows tag memory only on ram-backed VMAs.
6
7
The real ABI allows tag memory on files, when those files are
8
on ram-backed filesystems, such as tmpfs. We will not be able
9
to implement that in QEMU linux-user.
10
11
Thankfully, anonymous memory for malloc arenas is the primary
12
consumer of this feature, so this restricted version should
13
still be of use.
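
(A rough sketch, not from the patch, of the kind of check the new flag is
meant to enable; the helper name here is invented for illustration.)

    /* Tag memory is only permitted on valid, anonymous (ram-backed) pages. */
    static bool page_may_have_tags(abi_ulong addr)
    {
        int flags = page_get_flags(addr);

        return (flags & PAGE_VALID) && (flags & PAGE_ANON);
    }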
14
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20210212184902.1251044-3-richard.henderson@linaro.org
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
20
include/exec/cpu-all.h | 2 ++
21
linux-user/mmap.c | 3 +++
22
2 files changed, 5 insertions(+)
23
24
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/cpu-all.h
27
+++ b/include/exec/cpu-all.h
28
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
29
#define PAGE_WRITE_INV 0x0020
30
/* For use with page_set_flags: page is being replaced; target_data cleared. */
31
#define PAGE_RESET 0x0040
32
+/* For linux-user, indicates that the page is MAP_ANON. */
33
+#define PAGE_ANON 0x0080
34
35
#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
36
/* FIXME: Code that sets/uses this is broken and needs to go away. */
37
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/linux-user/mmap.c
40
+++ b/linux-user/mmap.c
41
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
42
}
43
}
44
the_end1:
45
+ if (flags & MAP_ANONYMOUS) {
46
+ page_flags |= PAGE_ANON;
47
+ }
48
page_flags |= PAGE_RESET;
49
page_set_flags(start, start + len, page_flags);
50
the_end:
51
--
52
2.20.1
53
54
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In do_ldst(), the calculation of the offset needs to be based on the
2
size of the memory access, not the size of the elements in the
3
vector. This meant we were getting it wrong for the widening and
4
narrowing variants of the various VLDR and VSTR insns.
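
(Worked example, not patch text: for VLDRB.U16 the vector elements are
halfwords but each memory access loads a single byte, so the immediate
must be scaled by the size of the memory access,

    offset = a->imm << MO_8;    /* MO_8 == 0, i.e. no scaling */

whereas the old "offset = a->imm << a->size" scaled by the halfword
element size and doubled the offset.)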
2
5
3
Provide both tagged and untagged versions of access_ok.
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
In a few places use thread_cpu, as the user is several
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
callees removed from do_syscall1.
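
(Illustrative only: deep inside helpers such as do_getsockname() there is
no CPUState argument to hand, so the per-thread thread_cpu pointer stands
in for the current vCPU, as in the hunks below:

    if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
        return -TARGET_EFAULT;
    }
)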
8
Message-id: 20210628135835.6690-2-peter.maydell@linaro.org
9
---
10
target/arm/translate-mve.c | 17 +++++++++--------
11
1 file changed, 9 insertions(+), 8 deletions(-)
6
12
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210212184902.1251044-17-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
linux-user/qemu.h | 11 +++++++++--
13
linux-user/elfload.c | 2 +-
14
linux-user/hppa/cpu_loop.c | 8 ++++----
15
linux-user/i386/cpu_loop.c | 2 +-
16
linux-user/i386/signal.c | 5 +++--
17
linux-user/syscall.c | 9 ++++++---
18
6 files changed, 24 insertions(+), 13 deletions(-)
19
20
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
21
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
22
--- a/linux-user/qemu.h
15
--- a/target/arm/translate-mve.c
23
+++ b/linux-user/qemu.h
16
+++ b/target/arm/translate-mve.c
24
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
17
@@ -XXX,XX +XXX,XX @@ static bool mve_skip_first_beat(DisasContext *s)
25
#define VERIFY_READ PAGE_READ
18
}
26
#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
19
}
27
20
28
-static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
21
-static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
29
+static inline bool access_ok_untagged(int type, abi_ulong addr, abi_ulong size)
22
+static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn,
23
+ unsigned msize)
30
{
24
{
31
if (size == 0
25
TCGv_i32 addr;
32
? !guest_addr_valid_untagged(addr)
26
uint32_t offset;
33
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
27
@@ -XXX,XX +XXX,XX @@ static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
34
return page_check_range((target_ulong)addr, size, type) == 0;
35
}
36
37
+static inline bool access_ok(CPUState *cpu, int type,
38
+ abi_ulong addr, abi_ulong size)
39
+{
40
+ return access_ok_untagged(type, cpu_untagged_addr(cpu, addr), size);
41
+}
42
+
43
/* NOTE __get_user and __put_user use host pointers and don't check access.
44
These are usually used to access struct data members once the struct has
45
been locked - usually with lock_user_struct. */
46
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
47
host area will have the same contents as the guest. */
48
static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
49
{
50
- if (!access_ok(type, guest_addr, len))
51
+ if (!access_ok_untagged(type, guest_addr, len)) {
52
return NULL;
53
+ }
54
#ifdef DEBUG_REMAP
55
{
56
void *addr;
57
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/linux-user/elfload.c
60
+++ b/linux-user/elfload.c
61
@@ -XXX,XX +XXX,XX @@ static int vma_get_mapping_count(const struct mm_struct *mm)
62
static abi_ulong vma_dump_size(const struct vm_area_struct *vma)
63
{
64
/* if we cannot even read the first page, skip it */
65
- if (!access_ok(VERIFY_READ, vma->vma_start, TARGET_PAGE_SIZE))
66
+ if (!access_ok_untagged(VERIFY_READ, vma->vma_start, TARGET_PAGE_SIZE))
67
return (0);
68
69
/*
70
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/linux-user/hppa/cpu_loop.c
73
+++ b/linux-user/hppa/cpu_loop.c
74
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
75
return -TARGET_ENOSYS;
76
77
case 0: /* elf32 atomic 32bit cmpxchg */
78
- if ((addr & 3) || !access_ok(VERIFY_WRITE, addr, 4)) {
79
+ if ((addr & 3) || !access_ok(cs, VERIFY_WRITE, addr, 4)) {
80
return -TARGET_EFAULT;
81
}
82
old = tswap32(old);
83
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
84
return -TARGET_ENOSYS;
85
}
86
if (((addr | old | new) & ((1 << size) - 1))
87
- || !access_ok(VERIFY_WRITE, addr, 1 << size)
88
- || !access_ok(VERIFY_READ, old, 1 << size)
89
- || !access_ok(VERIFY_READ, new, 1 << size)) {
90
+ || !access_ok(cs, VERIFY_WRITE, addr, 1 << size)
91
+ || !access_ok(cs, VERIFY_READ, old, 1 << size)
92
+ || !access_ok(cs, VERIFY_READ, new, 1 << size)) {
93
return -TARGET_EFAULT;
94
}
95
/* Note that below we use host-endian loads so that the cmpxchg
96
diff --git a/linux-user/i386/cpu_loop.c b/linux-user/i386/cpu_loop.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/linux-user/i386/cpu_loop.c
99
+++ b/linux-user/i386/cpu_loop.c
100
@@ -XXX,XX +XXX,XX @@ static bool write_ok_or_segv(CPUX86State *env, abi_ptr addr, size_t len)
101
* For all the vsyscalls, NULL means "don't write anything" not
102
* "write it at address 0".
103
*/
104
- if (addr == 0 || access_ok(VERIFY_WRITE, addr, len)) {
105
+ if (addr == 0 || access_ok(env_cpu(env), VERIFY_WRITE, addr, len)) {
106
return true;
28
return true;
107
}
29
}
108
30
109
diff --git a/linux-user/i386/signal.c b/linux-user/i386/signal.c
31
- offset = a->imm << a->size;
110
index XXXXXXX..XXXXXXX 100644
32
+ offset = a->imm << msize;
111
--- a/linux-user/i386/signal.c
33
if (!a->a) {
112
+++ b/linux-user/i386/signal.c
34
offset = -offset;
113
@@ -XXX,XX +XXX,XX @@ restore_sigcontext(CPUX86State *env, struct target_sigcontext *sc)
114
115
fpstate_addr = tswapl(sc->fpstate);
116
if (fpstate_addr != 0) {
117
- if (!access_ok(VERIFY_READ, fpstate_addr,
118
- sizeof(struct target_fpstate)))
119
+ if (!access_ok(env_cpu(env), VERIFY_READ, fpstate_addr,
120
+ sizeof(struct target_fpstate))) {
121
goto badframe;
122
+ }
123
#ifndef TARGET_X86_64
124
cpu_x86_frstor(env, fpstate_addr, 1);
125
#else
126
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/linux-user/syscall.c
129
+++ b/linux-user/syscall.c
130
@@ -XXX,XX +XXX,XX @@ static abi_long do_accept4(int fd, abi_ulong target_addr,
131
return -TARGET_EINVAL;
132
}
35
}
133
36
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
134
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
37
{ gen_helper_mve_vstrw, gen_helper_mve_vldrw },
135
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
38
{ NULL, NULL }
136
return -TARGET_EFAULT;
39
};
137
+ }
40
- return do_ldst(s, a, ldstfns[a->size][a->l]);
138
41
+ return do_ldst(s, a, ldstfns[a->size][a->l], a->size);
139
addr = alloca(addrlen);
42
}
140
43
141
@@ -XXX,XX +XXX,XX @@ static abi_long do_getpeername(int fd, abi_ulong target_addr,
44
-#define DO_VLDST_WIDE_NARROW(OP, SLD, ULD, ST) \
142
return -TARGET_EINVAL;
45
+#define DO_VLDST_WIDE_NARROW(OP, SLD, ULD, ST, MSIZE) \
46
static bool trans_##OP(DisasContext *s, arg_VLDR_VSTR *a) \
47
{ \
48
static MVEGenLdStFn * const ldstfns[2][2] = { \
49
{ gen_helper_mve_##ST, gen_helper_mve_##SLD }, \
50
{ NULL, gen_helper_mve_##ULD }, \
51
}; \
52
- return do_ldst(s, a, ldstfns[a->u][a->l]); \
53
+ return do_ldst(s, a, ldstfns[a->u][a->l], MSIZE); \
143
}
54
}
144
55
145
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
56
-DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
146
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
57
-DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
147
return -TARGET_EFAULT;
58
-DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
148
+ }
59
+DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h, MO_8)
149
60
+DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w, MO_8)
150
addr = alloca(addrlen);
61
+DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w, MO_16)
151
62
152
@@ -XXX,XX +XXX,XX @@ static abi_long do_getsockname(int fd, abi_ulong target_addr,
63
static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
153
return -TARGET_EINVAL;
64
{
154
}
155
156
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
157
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
158
return -TARGET_EFAULT;
159
+ }
160
161
addr = alloca(addrlen);
162
163
--
65
--
164
2.20.1
66
2.20.1
165
67
166
68
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The initial implementation of the MVE VRMLALDAVH and VRMLSLDAVH
2
insns had some bugs:
3
* the 32x32 multiply of elements was being done as 32x32->32,
4
not 32x32->64
5
* we were incorrectly maintaining the accumulator in its full
6
72-bit form across all 4 beats of the insn; in the pseudocode
7
it is squashed back into the 64 bits of the RdaHi:RdaLo
8
registers after each beat
2
9
3
This is more descriptive than 'unsigned long'.
10
In particular, fixing the second of these allows us to recast
4
No functional change, since these match on all linux+bsd hosts.
11
the implementation to avoid 128-bit arithmetic entirely.
5
12
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Since the element size here is always 4, we can also drop the
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
parameterization of ESIZE to make the code a little more readable.
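
(Illustration of the per-beat squash, mirroring the shape of the new
DO_LDAVH body below rather than adding any code of its own:

    int64_t mul = (int64_t)n * m;         /* 32x32 -> 64-bit product */
    mul = (mul >> 8) + ((mul >> 7) & 1);  /* round to nearest at bit 7 */
    a += mul;                             /* 64-bit running accumulator */

so only 64 bits of the notional 72-bit sum survive each beat.)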
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
15
9
Message-id: 20210212184902.1251044-4-richard.henderson@linaro.org
16
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20210628135835.6690-3-peter.maydell@linaro.org
11
---
20
---
12
include/exec/cpu-all.h | 2 +-
21
target/arm/mve_helper.c | 38 +++++++++++++++++++++-----------------
13
bsd-user/main.c | 4 ++--
22
1 file changed, 21 insertions(+), 17 deletions(-)
14
linux-user/elfload.c | 4 ++--
15
linux-user/main.c | 4 ++--
16
4 files changed, 7 insertions(+), 7 deletions(-)
17
23
18
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
24
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
19
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
20
--- a/include/exec/cpu-all.h
26
--- a/target/arm/mve_helper.c
21
+++ b/include/exec/cpu-all.h
27
+++ b/target/arm/mve_helper.c
22
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
28
@@ -XXX,XX +XXX,XX @@
23
/* On some host systems the guest address space is reserved on the host.
24
* This allows the guest address space to be offset to a convenient location.
25
*/
29
*/
26
-extern unsigned long guest_base;
30
27
+extern uintptr_t guest_base;
31
#include "qemu/osdep.h"
28
extern bool have_guest_base;
32
-#include "qemu/int128.h"
29
extern unsigned long reserved_va;
33
#include "cpu.h"
30
34
#include "internals.h"
31
diff --git a/bsd-user/main.c b/bsd-user/main.c
35
#include "vec_internal.h"
32
index XXXXXXX..XXXXXXX 100644
36
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
33
--- a/bsd-user/main.c
37
DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
34
+++ b/bsd-user/main.c
38
35
@@ -XXX,XX +XXX,XX @@
39
/*
36
40
- * Rounding multiply add long dual accumulate high: we must keep
37
int singlestep;
41
- * a 72-bit internal accumulator value and return the top 64 bits.
38
unsigned long mmap_min_addr;
42
+ * Rounding multiply add long dual accumulate high. In the pseudocode
39
-unsigned long guest_base;
43
+ * this is implemented with a 72-bit internal accumulator value of which
40
+uintptr_t guest_base;
44
+ * the top 64 bits are returned. We optimize this to avoid having to
41
bool have_guest_base;
45
+ * use 128-bit arithmetic -- we can do this because the 74-bit accumulator
42
unsigned long reserved_va;
46
+ * is squashed back into 64-bits after each beat.
43
47
*/
44
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
48
-#define DO_LDAVH(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC, TO128) \
45
g_free(target_environ);
49
+#define DO_LDAVH(OP, TYPE, LTYPE, XCHG, SUB) \
46
50
uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
47
if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
51
void *vm, uint64_t a) \
48
- qemu_log("guest_base 0x%lx\n", guest_base);
52
{ \
49
+ qemu_log("guest_base %p\n", (void *)guest_base);
53
uint16_t mask = mve_element_mask(env); \
50
log_page_dump("binary load");
54
unsigned e; \
51
55
TYPE *n = vn, *m = vm; \
52
qemu_log("start_brk 0x" TARGET_ABI_FMT_lx "\n", info->start_brk);
56
- Int128 acc = int128_lshift(TO128(a), 8); \
53
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
57
- for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
54
index XXXXXXX..XXXXXXX 100644
58
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
55
--- a/linux-user/elfload.c
59
if (mask & 1) { \
56
+++ b/linux-user/elfload.c
60
+ LTYPE mul; \
57
@@ -XXX,XX +XXX,XX @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr,
61
if (e & 1) { \
58
void *addr, *test;
62
- acc = ODDACC(acc, TO128(n[H##ESIZE(e - 1 * XCHG)] * \
59
63
- m[H##ESIZE(e)])); \
60
if (!QEMU_IS_ALIGNED(guest_base, align)) {
64
+ mul = (LTYPE)n[H4(e - 1 * XCHG)] * m[H4(e)]; \
61
- fprintf(stderr, "Requested guest base 0x%lx does not satisfy "
65
+ if (SUB) { \
62
+ fprintf(stderr, "Requested guest base %p does not satisfy "
66
+ mul = -mul; \
63
"host minimum alignment (0x%lx)\n",
67
+ } \
64
- guest_base, align);
68
} else { \
65
+ (void *)guest_base, align);
69
- acc = EVENACC(acc, TO128(n[H##ESIZE(e + 1 * XCHG)] * \
66
exit(EXIT_FAILURE);
70
- m[H##ESIZE(e)])); \
71
+ mul = (LTYPE)n[H4(e + 1 * XCHG)] * m[H4(e)]; \
72
} \
73
- acc = int128_add(acc, int128_make64(1 << 7)); \
74
+ mul = (mul >> 8) + ((mul >> 7) & 1); \
75
+ a += mul; \
76
} \
77
} \
78
mve_advance_vpt(env); \
79
- return int128_getlo(int128_rshift(acc, 8)); \
80
+ return a; \
67
}
81
}
68
82
69
diff --git a/linux-user/main.c b/linux-user/main.c
83
-DO_LDAVH(vrmlaldavhsw, 4, int32_t, false, int128_add, int128_add, int128_makes64)
70
index XXXXXXX..XXXXXXX 100644
84
-DO_LDAVH(vrmlaldavhxsw, 4, int32_t, true, int128_add, int128_add, int128_makes64)
71
--- a/linux-user/main.c
85
+DO_LDAVH(vrmlaldavhsw, int32_t, int64_t, false, false)
72
+++ b/linux-user/main.c
86
+DO_LDAVH(vrmlaldavhxsw, int32_t, int64_t, true, false)
73
@@ -XXX,XX +XXX,XX @@ static const char *cpu_model;
87
74
static const char *cpu_type;
88
-DO_LDAVH(vrmlaldavhuw, 4, uint32_t, false, int128_add, int128_add, int128_make64)
75
static const char *seed_optarg;
89
+DO_LDAVH(vrmlaldavhuw, uint32_t, uint64_t, false, false)
76
unsigned long mmap_min_addr;
90
77
-unsigned long guest_base;
91
-DO_LDAVH(vrmlsldavhsw, 4, int32_t, false, int128_add, int128_sub, int128_makes64)
78
+uintptr_t guest_base;
92
-DO_LDAVH(vrmlsldavhxsw, 4, int32_t, true, int128_add, int128_sub, int128_makes64)
79
bool have_guest_base;
93
+DO_LDAVH(vrmlsldavhsw, int32_t, int64_t, false, true)
80
94
+DO_LDAVH(vrmlsldavhxsw, int32_t, int64_t, true, true)
81
/*
95
82
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
96
/* Vector add across vector */
83
g_free(target_environ);
97
#define DO_VADDV(OP, ESIZE, TYPE) \
84
85
if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
86
- qemu_log("guest_base 0x%lx\n", guest_base);
87
+ qemu_log("guest_base %p\n", (void *)guest_base);
88
log_page_dump("binary load");
89
90
qemu_log("start_brk 0x" TARGET_ABI_FMT_lx "\n", info->start_brk);
91
--
98
--
92
2.20.1
99
2.20.1
93
100
94
101
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
This is more descriptive than 'unsigned long'.
4
No functional change, since these match on all linux+bsd hosts.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20210212184902.1251044-5-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/exec/cpu_ldst.h | 6 +++---
13
1 file changed, 3 insertions(+), 3 deletions(-)
14
15
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/exec/cpu_ldst.h
18
+++ b/include/exec/cpu_ldst.h
19
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
20
#endif
21
22
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
23
-#define g2h(x) ((void *)((unsigned long)(abi_ptr)(x) + guest_base))
24
+#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
25
26
#if HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS
27
#define guest_addr_valid(x) (1)
28
#else
29
#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
30
#endif
31
-#define h2g_valid(x) guest_addr_valid((unsigned long)(x) - guest_base)
32
+#define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
33
34
static inline int guest_range_valid(unsigned long start, unsigned long len)
35
{
36
@@ -XXX,XX +XXX,XX @@ static inline int guest_range_valid(unsigned long start, unsigned long len)
37
}
38
39
#define h2g_nocheck(x) ({ \
40
- unsigned long __ret = (unsigned long)(x) - guest_base; \
41
+ uintptr_t __ret = (uintptr_t)(x) - guest_base; \
42
(abi_ptr)__ret; \
43
})
44
45
--
46
2.20.1
47
48
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Return bool not int; pass abi_ulong not 'unsigned long'.
4
All callers use abi_ulong already, so the change in type
5
has no effect.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20210212184902.1251044-6-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
include/exec/cpu_ldst.h | 2 +-
14
1 file changed, 1 insertion(+), 1 deletion(-)
15
16
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/exec/cpu_ldst.h
19
+++ b/include/exec/cpu_ldst.h
20
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
21
#endif
22
#define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
23
24
-static inline int guest_range_valid(unsigned long start, unsigned long len)
25
+static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
26
{
27
return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
28
}
29
--
30
2.20.1
31
32
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The function asimd_imm_const() in translate-neon.c is an
2
implementation of the pseudocode AdvSIMDExpandImm(), which we will
3
also want for MVE. Move the implementation to translate.c, with a
4
prototype in translate.h.
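
(Worked example, derived by hand from the function body rather than quoted
from the patch: with cmode == 12 and op == 0, an 8-bit imm of 0xAB expands
to (0xAB << 8) | 0xff == 0x0000ABFF in each 32-bit lane, and
dup_const(MO_32, ...) replicates that to 0x0000ABFF0000ABFF.)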
2
5
3
Move everything related to syndromes to a new file,
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
which can be shared with linux-user.
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-4-peter.maydell@linaro.org
9
---
10
target/arm/translate.h | 16 ++++++++++
11
target/arm/translate-neon.c | 63 -------------------------------------
12
target/arm/translate.c | 57 +++++++++++++++++++++++++++++++++
13
3 files changed, 73 insertions(+), 63 deletions(-)
5
14
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20210212184902.1251044-26-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/internals.h | 245 +-----------------------------------
13
target/arm/syndrome.h | 273 +++++++++++++++++++++++++++++++++++++++++
14
2 files changed, 274 insertions(+), 244 deletions(-)
15
create mode 100644 target/arm/syndrome.h
16
17
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/internals.h
17
--- a/target/arm/translate.h
20
+++ b/target/arm/internals.h
18
+++ b/target/arm/translate.h
21
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
22
#define TARGET_ARM_INTERNALS_H
20
return opc | s->be_data;
23
24
#include "hw/registerfields.h"
25
+#include "syndrome.h"
26
27
/* register banks for CPU modes */
28
#define BANK_USRSYS 0
29
@@ -XXX,XX +XXX,XX @@ static inline bool extended_addresses_enabled(CPUARMState *env)
30
(arm_feature(env, ARM_FEATURE_LPAE) && (tcr->raw_tcr & TTBCR_EAE));
31
}
21
}
32
22
33
-/* Valid Syndrome Register EC field values */
23
+/**
34
-enum arm_exception_class {
24
+ * asimd_imm_const: Expand an encoded SIMD constant value
35
- EC_UNCATEGORIZED = 0x00,
25
+ *
36
- EC_WFX_TRAP = 0x01,
26
+ * Expand a SIMD constant value. This is essentially the pseudocode
37
- EC_CP15RTTRAP = 0x03,
27
+ * AdvSIMDExpandImm, except that we also perform the boolean NOT needed for
38
- EC_CP15RRTTRAP = 0x04,
28
+ * VMVN and VBIC (when cmode < 14 && op == 1).
39
- EC_CP14RTTRAP = 0x05,
29
+ *
40
- EC_CP14DTTRAP = 0x06,
30
+ * The combination cmode == 15 op == 1 is a reserved encoding for AArch32;
41
- EC_ADVSIMDFPACCESSTRAP = 0x07,
31
+ * callers must catch this.
42
- EC_FPIDTRAP = 0x08,
32
+ *
43
- EC_PACTRAP = 0x09,
33
+ * cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 was UNPREDICTABLE in v7A but
44
- EC_CP14RRTTRAP = 0x0c,
34
+ * is either not unpredictable or merely CONSTRAINED UNPREDICTABLE in v8A;
45
- EC_BTITRAP = 0x0d,
35
+ * we produce an immediate constant value of 0 in these cases.
46
- EC_ILLEGALSTATE = 0x0e,
36
+ */
47
- EC_AA32_SVC = 0x11,
37
+uint64_t asimd_imm_const(uint32_t imm, int cmode, int op);
48
- EC_AA32_HVC = 0x12,
38
+
49
- EC_AA32_SMC = 0x13,
39
#endif /* TARGET_ARM_TRANSLATE_H */
50
- EC_AA64_SVC = 0x15,
40
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
51
- EC_AA64_HVC = 0x16,
41
index XXXXXXX..XXXXXXX 100644
52
- EC_AA64_SMC = 0x17,
42
--- a/target/arm/translate-neon.c
53
- EC_SYSTEMREGISTERTRAP = 0x18,
43
+++ b/target/arm/translate-neon.c
54
- EC_SVEACCESSTRAP = 0x19,
44
@@ -XXX,XX +XXX,XX @@ DO_FP_2SH(VCVT_UH, gen_helper_gvec_vcvt_uh)
55
- EC_INSNABORT = 0x20,
45
DO_FP_2SH(VCVT_HS, gen_helper_gvec_vcvt_hs)
56
- EC_INSNABORT_SAME_EL = 0x21,
46
DO_FP_2SH(VCVT_HU, gen_helper_gvec_vcvt_hu)
57
- EC_PCALIGNMENT = 0x22,
47
58
- EC_DATAABORT = 0x24,
48
-static uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
59
- EC_DATAABORT_SAME_EL = 0x25,
49
-{
60
- EC_SPALIGNMENT = 0x26,
50
- /*
61
- EC_AA32_FPTRAP = 0x28,
51
- * Expand the encoded constant.
62
- EC_AA64_FPTRAP = 0x2c,
52
- * Note that cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 is UNPREDICTABLE.
63
- EC_SERROR = 0x2f,
53
- * We choose to not special-case this and will behave as if a
64
- EC_BREAKPOINT = 0x30,
54
- * valid constant encoding of 0 had been given.
65
- EC_BREAKPOINT_SAME_EL = 0x31,
55
- * cmode = 15 op = 1 must UNDEF; we assume decode has handled that.
66
- EC_SOFTWARESTEP = 0x32,
56
- */
67
- EC_SOFTWARESTEP_SAME_EL = 0x33,
57
- switch (cmode) {
68
- EC_WATCHPOINT = 0x34,
58
- case 0: case 1:
69
- EC_WATCHPOINT_SAME_EL = 0x35,
59
- /* no-op */
70
- EC_AA32_BKPT = 0x38,
60
- break;
71
- EC_VECTORCATCH = 0x3a,
61
- case 2: case 3:
72
- EC_AA64_BKPT = 0x3c,
62
- imm <<= 8;
73
-};
63
- break;
64
- case 4: case 5:
65
- imm <<= 16;
66
- break;
67
- case 6: case 7:
68
- imm <<= 24;
69
- break;
70
- case 8: case 9:
71
- imm |= imm << 16;
72
- break;
73
- case 10: case 11:
74
- imm = (imm << 8) | (imm << 24);
75
- break;
76
- case 12:
77
- imm = (imm << 8) | 0xff;
78
- break;
79
- case 13:
80
- imm = (imm << 16) | 0xffff;
81
- break;
82
- case 14:
83
- if (op) {
84
- /*
85
- * This is the only case where the top and bottom 32 bits
86
- * of the encoded constant differ.
87
- */
88
- uint64_t imm64 = 0;
89
- int n;
74
-
90
-
75
-#define ARM_EL_EC_SHIFT 26
91
- for (n = 0; n < 8; n++) {
76
-#define ARM_EL_IL_SHIFT 25
92
- if (imm & (1 << n)) {
77
-#define ARM_EL_ISV_SHIFT 24
93
- imm64 |= (0xffULL << (n * 8));
78
-#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
94
- }
79
-#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
95
- }
80
-
96
- return imm64;
81
-static inline uint32_t syn_get_ec(uint32_t syn)
97
- }
82
-{
98
- imm |= (imm << 8) | (imm << 16) | (imm << 24);
83
- return syn >> ARM_EL_EC_SHIFT;
99
- break;
100
- case 15:
101
- imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
102
- | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
103
- break;
104
- }
105
- if (op) {
106
- imm = ~imm;
107
- }
108
- return dup_const(MO_32, imm);
84
-}
109
-}
85
-
110
-
86
-/* Utility functions for constructing various kinds of syndrome value.
111
static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
87
- * Note that in general we follow the AArch64 syndrome values; in a
112
GVecGen2iFn *fn)
88
- * few cases the value in HSR for exceptions taken to AArch32 Hyp
113
{
89
- * mode differs slightly, and we fix this up when populating HSR in
114
diff --git a/target/arm/translate.c b/target/arm/translate.c
90
- * arm_cpu_do_interrupt_aarch32_hyp().
115
index XXXXXXX..XXXXXXX 100644
91
- * The exception is FP/SIMD access traps -- these report extra information
116
--- a/target/arm/translate.c
92
- * when taking an exception to AArch32. For those we include the extra coproc
117
+++ b/target/arm/translate.c
93
- * and TA fields, and mask them out when taking the exception to AArch64.
118
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
94
- */
119
a64_translate_init();
95
-static inline uint32_t syn_uncategorized(void)
120
}
96
-{
121
97
- return (EC_UNCATEGORIZED << ARM_EL_EC_SHIFT) | ARM_EL_IL;
122
+uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
98
-}
123
+{
99
-
124
+ /* Expand the encoded constant as per AdvSIMDExpandImm pseudocode */
100
-static inline uint32_t syn_aa64_svc(uint32_t imm16)
125
+ switch (cmode) {
101
-{
126
+ case 0: case 1:
102
- return (EC_AA64_SVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
127
+ /* no-op */
103
-}
128
+ break;
104
-
129
+ case 2: case 3:
105
-static inline uint32_t syn_aa64_hvc(uint32_t imm16)
130
+ imm <<= 8;
106
-{
131
+ break;
107
- return (EC_AA64_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
132
+ case 4: case 5:
108
-}
133
+ imm <<= 16;
109
-
134
+ break;
110
-static inline uint32_t syn_aa64_smc(uint32_t imm16)
135
+ case 6: case 7:
111
-{
136
+ imm <<= 24;
112
- return (EC_AA64_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
137
+ break;
113
-}
138
+ case 8: case 9:
114
-
139
+ imm |= imm << 16;
115
-static inline uint32_t syn_aa32_svc(uint32_t imm16, bool is_16bit)
140
+ break;
116
-{
141
+ case 10: case 11:
117
- return (EC_AA32_SVC << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
142
+ imm = (imm << 8) | (imm << 24);
118
- | (is_16bit ? 0 : ARM_EL_IL);
143
+ break;
119
-}
144
+ case 12:
120
-
145
+ imm = (imm << 8) | 0xff;
121
-static inline uint32_t syn_aa32_hvc(uint32_t imm16)
146
+ break;
122
-{
147
+ case 13:
123
- return (EC_AA32_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
148
+ imm = (imm << 16) | 0xffff;
124
-}
149
+ break;
125
-
150
+ case 14:
126
-static inline uint32_t syn_aa32_smc(void)
151
+ if (op) {
127
-{
152
+ /*
128
- return (EC_AA32_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL;
153
+ * This is the only case where the top and bottom 32 bits
129
-}
154
+ * of the encoded constant differ.
130
-
155
+ */
131
-static inline uint32_t syn_aa64_bkpt(uint32_t imm16)
156
+ uint64_t imm64 = 0;
132
-{
157
+ int n;
133
- return (EC_AA64_BKPT << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
134
-}
135
-
136
-static inline uint32_t syn_aa32_bkpt(uint32_t imm16, bool is_16bit)
137
-{
138
- return (EC_AA32_BKPT << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
139
- | (is_16bit ? 0 : ARM_EL_IL);
140
-}
141
-
142
-static inline uint32_t syn_aa64_sysregtrap(int op0, int op1, int op2,
143
- int crn, int crm, int rt,
144
- int isread)
145
-{
146
- return (EC_SYSTEMREGISTERTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL
147
- | (op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (rt << 5)
148
- | (crm << 1) | isread;
149
-}
150
-
151
-static inline uint32_t syn_cp14_rt_trap(int cv, int cond, int opc1, int opc2,
152
- int crn, int crm, int rt, int isread,
153
- bool is_16bit)
154
-{
155
- return (EC_CP14RTTRAP << ARM_EL_EC_SHIFT)
156
- | (is_16bit ? 0 : ARM_EL_IL)
157
- | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
158
- | (crn << 10) | (rt << 5) | (crm << 1) | isread;
159
-}
160
-
161
-static inline uint32_t syn_cp15_rt_trap(int cv, int cond, int opc1, int opc2,
162
- int crn, int crm, int rt, int isread,
163
- bool is_16bit)
164
-{
165
- return (EC_CP15RTTRAP << ARM_EL_EC_SHIFT)
166
- | (is_16bit ? 0 : ARM_EL_IL)
167
- | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
168
- | (crn << 10) | (rt << 5) | (crm << 1) | isread;
169
-}
170
-
171
-static inline uint32_t syn_cp14_rrt_trap(int cv, int cond, int opc1, int crm,
172
- int rt, int rt2, int isread,
173
- bool is_16bit)
174
-{
175
- return (EC_CP14RRTTRAP << ARM_EL_EC_SHIFT)
176
- | (is_16bit ? 0 : ARM_EL_IL)
177
- | (cv << 24) | (cond << 20) | (opc1 << 16)
178
- | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
179
-}
180
-
181
-static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
182
- int rt, int rt2, int isread,
183
- bool is_16bit)
184
-{
185
- return (EC_CP15RRTTRAP << ARM_EL_EC_SHIFT)
186
- | (is_16bit ? 0 : ARM_EL_IL)
187
- | (cv << 24) | (cond << 20) | (opc1 << 16)
188
- | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
189
-}
190
-
191
-static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
192
-{
193
- /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
194
- return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
195
- | (is_16bit ? 0 : ARM_EL_IL)
196
- | (cv << 24) | (cond << 20) | 0xa;
197
-}
198
-
199
-static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
200
-{
201
- /* AArch32 SIMD trap: TA == 1 coproc == 0 */
202
- return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
203
- | (is_16bit ? 0 : ARM_EL_IL)
204
- | (cv << 24) | (cond << 20) | (1 << 5);
205
-}
206
-
207
-static inline uint32_t syn_sve_access_trap(void)
208
-{
209
- return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
210
-}
211
-
212
-static inline uint32_t syn_pactrap(void)
213
-{
214
- return EC_PACTRAP << ARM_EL_EC_SHIFT;
215
-}
216
-
217
-static inline uint32_t syn_btitrap(int btype)
218
-{
219
- return (EC_BTITRAP << ARM_EL_EC_SHIFT) | btype;
220
-}
221
-
222
-static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
223
-{
224
- return (EC_INSNABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
225
- | ARM_EL_IL | (ea << 9) | (s1ptw << 7) | fsc;
226
-}
227
-
228
-static inline uint32_t syn_data_abort_no_iss(int same_el, int fnv,
229
- int ea, int cm, int s1ptw,
230
- int wnr, int fsc)
231
-{
232
- return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
233
- | ARM_EL_IL
234
- | (fnv << 10) | (ea << 9) | (cm << 8) | (s1ptw << 7)
235
- | (wnr << 6) | fsc;
236
-}
237
-
238
-static inline uint32_t syn_data_abort_with_iss(int same_el,
239
- int sas, int sse, int srt,
240
- int sf, int ar,
241
- int ea, int cm, int s1ptw,
242
- int wnr, int fsc,
243
- bool is_16bit)
244
-{
245
- return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
246
- | (is_16bit ? 0 : ARM_EL_IL)
247
- | ARM_EL_ISV | (sas << 22) | (sse << 21) | (srt << 16)
248
- | (sf << 15) | (ar << 14)
249
- | (ea << 9) | (cm << 8) | (s1ptw << 7) | (wnr << 6) | fsc;
250
-}
251
-
252
-static inline uint32_t syn_swstep(int same_el, int isv, int ex)
253
-{
254
- return (EC_SOFTWARESTEP << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
255
- | ARM_EL_IL | (isv << 24) | (ex << 6) | 0x22;
256
-}
257
-
258
-static inline uint32_t syn_watchpoint(int same_el, int cm, int wnr)
259
-{
260
- return (EC_WATCHPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
261
- | ARM_EL_IL | (cm << 8) | (wnr << 6) | 0x22;
262
-}
263
-
264
-static inline uint32_t syn_breakpoint(int same_el)
265
-{
266
- return (EC_BREAKPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
267
- | ARM_EL_IL | 0x22;
268
-}
269
-
270
-static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
271
-{
272
- return (EC_WFX_TRAP << ARM_EL_EC_SHIFT) |
273
- (is_16bit ? 0 : (1 << ARM_EL_IL_SHIFT)) |
274
- (cv << 24) | (cond << 20) | ti;
275
-}
276
-
277
/* Update a QEMU watchpoint based on the information the guest has set in the
278
* DBGWCR<n>_EL1 and DBGWVR<n>_EL1 registers.
279
*/
280
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
281
new file mode 100644
282
index XXXXXXX..XXXXXXX
283
--- /dev/null
284
+++ b/target/arm/syndrome.h
285
@@ -XXX,XX +XXX,XX @@
286
+/*
287
+ * QEMU ARM CPU -- syndrome functions and types
288
+ *
289
+ * Copyright (c) 2014 Linaro Ltd
290
+ *
291
+ * This program is free software; you can redistribute it and/or
292
+ * modify it under the terms of the GNU General Public License
293
+ * as published by the Free Software Foundation; either version 2
294
+ * of the License, or (at your option) any later version.
295
+ *
296
+ * This program is distributed in the hope that it will be useful,
297
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
298
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
299
+ * GNU General Public License for more details.
300
+ *
301
+ * You should have received a copy of the GNU General Public License
302
+ * along with this program; if not, see
303
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
304
+ *
305
+ * This header defines functions, types, etc which need to be shared
306
+ * between different source files within target/arm/ but which are
307
+ * private to it and not required by the rest of QEMU.
308
+ */
309
+
158
+
310
+#ifndef TARGET_ARM_SYNDROME_H
159
+ for (n = 0; n < 8; n++) {
311
+#define TARGET_ARM_SYNDROME_H
160
+ if (imm & (1 << n)) {
312
+
161
+ imm64 |= (0xffULL << (n * 8));
313
+/* Valid Syndrome Register EC field values */
162
+ }
314
+enum arm_exception_class {
163
+ }
315
+ EC_UNCATEGORIZED = 0x00,
164
+ return imm64;
316
+ EC_WFX_TRAP = 0x01,
165
+ }
317
+ EC_CP15RTTRAP = 0x03,
166
+ imm |= (imm << 8) | (imm << 16) | (imm << 24);
318
+ EC_CP15RRTTRAP = 0x04,
167
+ break;
319
+ EC_CP14RTTRAP = 0x05,
168
+ case 15:
320
+ EC_CP14DTTRAP = 0x06,
169
+ imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
321
+ EC_ADVSIMDFPACCESSTRAP = 0x07,
170
+ | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
322
+ EC_FPIDTRAP = 0x08,
171
+ break;
323
+ EC_PACTRAP = 0x09,
172
+ }
324
+ EC_CP14RRTTRAP = 0x0c,
173
+ if (op) {
325
+ EC_BTITRAP = 0x0d,
174
+ imm = ~imm;
326
+ EC_ILLEGALSTATE = 0x0e,
175
+ }
327
+ EC_AA32_SVC = 0x11,
176
+ return dup_const(MO_32, imm);
328
+ EC_AA32_HVC = 0x12,
329
+ EC_AA32_SMC = 0x13,
330
+ EC_AA64_SVC = 0x15,
331
+ EC_AA64_HVC = 0x16,
332
+ EC_AA64_SMC = 0x17,
333
+ EC_SYSTEMREGISTERTRAP = 0x18,
334
+ EC_SVEACCESSTRAP = 0x19,
335
+ EC_INSNABORT = 0x20,
336
+ EC_INSNABORT_SAME_EL = 0x21,
337
+ EC_PCALIGNMENT = 0x22,
338
+ EC_DATAABORT = 0x24,
339
+ EC_DATAABORT_SAME_EL = 0x25,
340
+ EC_SPALIGNMENT = 0x26,
341
+ EC_AA32_FPTRAP = 0x28,
342
+ EC_AA64_FPTRAP = 0x2c,
343
+ EC_SERROR = 0x2f,
344
+ EC_BREAKPOINT = 0x30,
345
+ EC_BREAKPOINT_SAME_EL = 0x31,
346
+ EC_SOFTWARESTEP = 0x32,
347
+ EC_SOFTWARESTEP_SAME_EL = 0x33,
348
+ EC_WATCHPOINT = 0x34,
349
+ EC_WATCHPOINT_SAME_EL = 0x35,
350
+ EC_AA32_BKPT = 0x38,
351
+ EC_VECTORCATCH = 0x3a,
352
+ EC_AA64_BKPT = 0x3c,
353
+};
354
+
355
+#define ARM_EL_EC_SHIFT 26
356
+#define ARM_EL_IL_SHIFT 25
357
+#define ARM_EL_ISV_SHIFT 24
358
+#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
359
+#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
360
+
361
+static inline uint32_t syn_get_ec(uint32_t syn)
362
+{
363
+ return syn >> ARM_EL_EC_SHIFT;
364
+}
177
+}
365
+
178
+
366
+/*
179
/* Generate a label used for skipping this instruction */
367
+ * Utility functions for constructing various kinds of syndrome value.
180
void arm_gen_condlabel(DisasContext *s)
368
+ * Note that in general we follow the AArch64 syndrome values; in a
181
{
369
+ * few cases the value in HSR for exceptions taken to AArch32 Hyp
370
+ * mode differs slightly, and we fix this up when populating HSR in
371
+ * arm_cpu_do_interrupt_aarch32_hyp().
372
+ * The exception is FP/SIMD access traps -- these report extra information
373
+ * when taking an exception to AArch32. For those we include the extra coproc
374
+ * and TA fields, and mask them out when taking the exception to AArch64.
375
+ */
376
+static inline uint32_t syn_uncategorized(void)
377
+{
378
+ return (EC_UNCATEGORIZED << ARM_EL_EC_SHIFT) | ARM_EL_IL;
379
+}
380
+
381
+static inline uint32_t syn_aa64_svc(uint32_t imm16)
382
+{
383
+ return (EC_AA64_SVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
384
+}
385
+
386
+static inline uint32_t syn_aa64_hvc(uint32_t imm16)
387
+{
388
+ return (EC_AA64_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
389
+}
390
+
391
+static inline uint32_t syn_aa64_smc(uint32_t imm16)
392
+{
393
+ return (EC_AA64_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
394
+}
395
+
396
+static inline uint32_t syn_aa32_svc(uint32_t imm16, bool is_16bit)
397
+{
398
+ return (EC_AA32_SVC << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
399
+ | (is_16bit ? 0 : ARM_EL_IL);
400
+}
401
+
402
+static inline uint32_t syn_aa32_hvc(uint32_t imm16)
403
+{
404
+ return (EC_AA32_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
405
+}
406
+
407
+static inline uint32_t syn_aa32_smc(void)
408
+{
409
+ return (EC_AA32_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL;
410
+}
411
+
412
+static inline uint32_t syn_aa64_bkpt(uint32_t imm16)
413
+{
414
+ return (EC_AA64_BKPT << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
415
+}
416
+
417
+static inline uint32_t syn_aa32_bkpt(uint32_t imm16, bool is_16bit)
418
+{
419
+ return (EC_AA32_BKPT << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
420
+ | (is_16bit ? 0 : ARM_EL_IL);
421
+}
422
+
423
+static inline uint32_t syn_aa64_sysregtrap(int op0, int op1, int op2,
424
+ int crn, int crm, int rt,
425
+ int isread)
426
+{
427
+ return (EC_SYSTEMREGISTERTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL
428
+ | (op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (rt << 5)
429
+ | (crm << 1) | isread;
430
+}
431
+
432
+static inline uint32_t syn_cp14_rt_trap(int cv, int cond, int opc1, int opc2,
433
+ int crn, int crm, int rt, int isread,
434
+ bool is_16bit)
435
+{
436
+ return (EC_CP14RTTRAP << ARM_EL_EC_SHIFT)
437
+ | (is_16bit ? 0 : ARM_EL_IL)
438
+ | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
439
+ | (crn << 10) | (rt << 5) | (crm << 1) | isread;
440
+}
441
+
442
+static inline uint32_t syn_cp15_rt_trap(int cv, int cond, int opc1, int opc2,
443
+ int crn, int crm, int rt, int isread,
444
+ bool is_16bit)
445
+{
446
+ return (EC_CP15RTTRAP << ARM_EL_EC_SHIFT)
447
+ | (is_16bit ? 0 : ARM_EL_IL)
448
+ | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
449
+ | (crn << 10) | (rt << 5) | (crm << 1) | isread;
450
+}
451
+
452
+static inline uint32_t syn_cp14_rrt_trap(int cv, int cond, int opc1, int crm,
453
+ int rt, int rt2, int isread,
454
+ bool is_16bit)
455
+{
456
+ return (EC_CP14RRTTRAP << ARM_EL_EC_SHIFT)
457
+ | (is_16bit ? 0 : ARM_EL_IL)
458
+ | (cv << 24) | (cond << 20) | (opc1 << 16)
459
+ | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
460
+}
461
+
462
+static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
463
+ int rt, int rt2, int isread,
464
+ bool is_16bit)
465
+{
466
+ return (EC_CP15RRTTRAP << ARM_EL_EC_SHIFT)
467
+ | (is_16bit ? 0 : ARM_EL_IL)
468
+ | (cv << 24) | (cond << 20) | (opc1 << 16)
469
+ | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
470
+}
471
+
472
+static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
473
+{
474
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
475
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
476
+ | (is_16bit ? 0 : ARM_EL_IL)
477
+ | (cv << 24) | (cond << 20) | 0xa;
478
+}
479
+
480
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
481
+{
482
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
483
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
484
+ | (is_16bit ? 0 : ARM_EL_IL)
485
+ | (cv << 24) | (cond << 20) | (1 << 5);
486
+}
487
+
488
+static inline uint32_t syn_sve_access_trap(void)
489
+{
490
+ return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
491
+}
492
+
493
+static inline uint32_t syn_pactrap(void)
494
+{
495
+ return EC_PACTRAP << ARM_EL_EC_SHIFT;
496
+}
497
+
498
+static inline uint32_t syn_btitrap(int btype)
499
+{
500
+ return (EC_BTITRAP << ARM_EL_EC_SHIFT) | btype;
501
+}
502
+
503
+static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
504
+{
505
+ return (EC_INSNABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
506
+ | ARM_EL_IL | (ea << 9) | (s1ptw << 7) | fsc;
507
+}
508
+
509
+static inline uint32_t syn_data_abort_no_iss(int same_el, int fnv,
510
+ int ea, int cm, int s1ptw,
511
+ int wnr, int fsc)
512
+{
513
+ return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
514
+ | ARM_EL_IL
515
+ | (fnv << 10) | (ea << 9) | (cm << 8) | (s1ptw << 7)
516
+ | (wnr << 6) | fsc;
517
+}
518
+
519
+static inline uint32_t syn_data_abort_with_iss(int same_el,
520
+ int sas, int sse, int srt,
521
+ int sf, int ar,
522
+ int ea, int cm, int s1ptw,
523
+ int wnr, int fsc,
524
+ bool is_16bit)
525
+{
526
+ return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
527
+ | (is_16bit ? 0 : ARM_EL_IL)
528
+ | ARM_EL_ISV | (sas << 22) | (sse << 21) | (srt << 16)
529
+ | (sf << 15) | (ar << 14)
530
+ | (ea << 9) | (cm << 8) | (s1ptw << 7) | (wnr << 6) | fsc;
531
+}
532
+
533
+static inline uint32_t syn_swstep(int same_el, int isv, int ex)
534
+{
535
+ return (EC_SOFTWARESTEP << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
536
+ | ARM_EL_IL | (isv << 24) | (ex << 6) | 0x22;
537
+}
538
+
539
+static inline uint32_t syn_watchpoint(int same_el, int cm, int wnr)
540
+{
541
+ return (EC_WATCHPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
542
+ | ARM_EL_IL | (cm << 8) | (wnr << 6) | 0x22;
543
+}
544
+
545
+static inline uint32_t syn_breakpoint(int same_el)
546
+{
547
+ return (EC_BREAKPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
548
+ | ARM_EL_IL | 0x22;
549
+}
550
+
551
+static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
552
+{
553
+ return (EC_WFX_TRAP << ARM_EL_EC_SHIFT) |
554
+ (is_16bit ? 0 : (1 << ARM_EL_IL_SHIFT)) |
555
+ (cv << 24) | (cond << 20) | ti;
556
+}
557
+
558
+#endif /* TARGET_ARM_SYNDROME_H */
--
2.20.1

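To make the field packing in the new target/arm/syndrome.h helpers above concrete, here is a small standalone C sketch (not part of the patch; the argument values are invented) that mirrors syn_data_abort_no_iss() and checks a few of the resulting ESR fields:

  #include <assert.h>
  #include <stdint.h>

  #define ARM_EL_EC_SHIFT 26
  #define ARM_EL_IL       (1u << 25)
  #define EC_DATAABORT    0x24u

  /* Same packing as syn_data_abort_no_iss() in the new header. */
  static uint32_t data_abort_no_iss(int same_el, int fnv, int ea, int cm,
                                    int s1ptw, int wnr, int fsc)
  {
      return (EC_DATAABORT << ARM_EL_EC_SHIFT)
          | ((uint32_t)same_el << ARM_EL_EC_SHIFT)
          | ARM_EL_IL
          | (fnv << 10) | (ea << 9) | (cm << 8) | (s1ptw << 7)
          | (wnr << 6) | fsc;
  }

  int main(void)
  {
      /* Write (wnr=1), translation fault level 3 (fsc=0x07), same EL. */
      uint32_t syn = data_abort_no_iss(1, 0, 0, 0, 0, 1, 0x07);

      assert((syn >> ARM_EL_EC_SHIFT) == 0x25); /* EC_DATAABORT_SAME_EL */
      assert(syn & ARM_EL_IL);                  /* 32-bit instruction */
      assert((syn & 0x3f) == 0x07);             /* fault status code */
      return 0;
  }

Note how "same_el" shifted by ARM_EL_EC_SHIFT simply bumps the EC field by one, which is why the _SAME_EL enum values are always EC + 1.
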
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The A64 AdvSIMD modified-immediate grouping uses almost the same
2
constant encoding that A32 Neon does; reuse asimd_imm_const() (to
3
which we add the AArch64-specific case for cmode 15 op 1) instead of
4
reimplementing it all.
2
5
3
Resolve the untagged address once, using thread_cpu.
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Tidy the DEBUG_REMAP code using glib routines.
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-5-peter.maydell@linaro.org
9
---
10
target/arm/translate.h | 3 +-
11
target/arm/translate-a64.c | 86 ++++----------------------------------
12
target/arm/translate.c | 17 +++++++-
13
3 files changed, 24 insertions(+), 82 deletions(-)
5
14
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210212184902.1251044-20-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
linux-user/uaccess.c | 27 ++++++++++++++-------------
12
1 file changed, 14 insertions(+), 13 deletions(-)
13
14
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/linux-user/uaccess.c
17
--- a/target/arm/translate.h
17
+++ b/linux-user/uaccess.c
18
+++ b/target/arm/translate.h
18
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
19
20
* VMVN and VBIC (when cmode < 14 && op == 1).
20
void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy)
21
*
22
* The combination cmode == 15 op == 1 is a reserved encoding for AArch32;
23
- * callers must catch this.
24
+ * callers must catch this; we return the 64-bit constant value defined
25
+ * for AArch64.
26
*
27
* cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 was UNPREDICTABLE in v7A but
28
* is either not unpredictable or merely CONSTRAINED UNPREDICTABLE in v8A;
29
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/translate-a64.c
32
+++ b/target/arm/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
21
{
34
{
22
+ void *host_addr;
35
int rd = extract32(insn, 0, 5);
23
+
36
int cmode = extract32(insn, 12, 4);
24
+ guest_addr = cpu_untagged_addr(thread_cpu, guest_addr);
37
- int cmode_3_1 = extract32(cmode, 1, 3);
25
if (!access_ok_untagged(type, guest_addr, len)) {
38
- int cmode_0 = extract32(cmode, 0, 1);
26
return NULL;
39
int o2 = extract32(insn, 11, 1);
27
}
40
uint64_t abcdefgh = extract32(insn, 5, 5) | (extract32(insn, 16, 3) << 5);
28
+ host_addr = g2h_untagged(guest_addr);
41
bool is_neg = extract32(insn, 29, 1);
29
#ifdef DEBUG_REMAP
42
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
30
- {
31
- void *addr;
32
- addr = g_malloc(len);
33
- if (copy) {
34
- memcpy(addr, g2h(guest_addr), len);
35
- } else {
36
- memset(addr, 0, len);
37
- }
38
- return addr;
39
+ if (copy) {
40
+ host_addr = g_memdup(host_addr, len);
41
+ } else {
42
+ host_addr = g_malloc0(len);
43
}
44
-#else
45
- return g2h_untagged(guest_addr);
46
#endif
47
+ return host_addr;
48
}
49
50
#ifdef DEBUG_REMAP
51
void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len);
52
{
53
+ void *host_ptr_conv;
54
+
55
if (!host_ptr) {
56
return;
43
return;
57
}
44
}
58
- if (host_ptr == g2h_untagged(guest_addr)) {
45
59
+ host_ptr_conv = g2h(thread_cpu, guest_addr);
46
- /* See AdvSIMDExpandImm() in ARM ARM */
60
+ if (host_ptr == host_ptr_conv) {
47
- switch (cmode_3_1) {
61
return;
48
- case 0: /* Replicate(Zeros(24):imm8, 2) */
49
- case 1: /* Replicate(Zeros(16):imm8:Zeros(8), 2) */
50
- case 2: /* Replicate(Zeros(8):imm8:Zeros(16), 2) */
51
- case 3: /* Replicate(imm8:Zeros(24), 2) */
52
- {
53
- int shift = cmode_3_1 * 8;
54
- imm = bitfield_replicate(abcdefgh << shift, 32);
55
- break;
56
- }
57
- case 4: /* Replicate(Zeros(8):imm8, 4) */
58
- case 5: /* Replicate(imm8:Zeros(8), 4) */
59
- {
60
- int shift = (cmode_3_1 & 0x1) * 8;
61
- imm = bitfield_replicate(abcdefgh << shift, 16);
62
- break;
63
- }
64
- case 6:
65
- if (cmode_0) {
66
- /* Replicate(Zeros(8):imm8:Ones(16), 2) */
67
- imm = (abcdefgh << 16) | 0xffff;
68
- } else {
69
- /* Replicate(Zeros(16):imm8:Ones(8), 2) */
70
- imm = (abcdefgh << 8) | 0xff;
71
- }
72
- imm = bitfield_replicate(imm, 32);
73
- break;
74
- case 7:
75
- if (!cmode_0 && !is_neg) {
76
- imm = bitfield_replicate(abcdefgh, 8);
77
- } else if (!cmode_0 && is_neg) {
78
- int i;
79
- imm = 0;
80
- for (i = 0; i < 8; i++) {
81
- if ((abcdefgh) & (1 << i)) {
82
- imm |= 0xffULL << (i * 8);
83
- }
84
- }
85
- } else if (cmode_0) {
86
- if (is_neg) {
87
- imm = (abcdefgh & 0x3f) << 48;
88
- if (abcdefgh & 0x80) {
89
- imm |= 0x8000000000000000ULL;
90
- }
91
- if (abcdefgh & 0x40) {
92
- imm |= 0x3fc0000000000000ULL;
93
- } else {
94
- imm |= 0x4000000000000000ULL;
95
- }
96
- } else {
97
- if (o2) {
98
- /* FMOV (vector, immediate) - half-precision */
99
- imm = vfp_expand_imm(MO_16, abcdefgh);
100
- /* now duplicate across the lanes */
101
- imm = bitfield_replicate(imm, 16);
102
- } else {
103
- imm = (abcdefgh & 0x3f) << 19;
104
- if (abcdefgh & 0x80) {
105
- imm |= 0x80000000;
106
- }
107
- if (abcdefgh & 0x40) {
108
- imm |= 0x3e000000;
109
- } else {
110
- imm |= 0x40000000;
111
- }
112
- imm |= (imm << 32);
113
- }
114
- }
115
- }
116
- break;
117
- default:
118
- g_assert_not_reached();
119
- }
120
-
121
- if (cmode_3_1 != 7 && is_neg) {
122
- imm = ~imm;
123
+ if (cmode == 15 && o2 && !is_neg) {
124
+ /* FMOV (vector, immediate) - half-precision */
125
+ imm = vfp_expand_imm(MO_16, abcdefgh);
126
+ /* now duplicate across the lanes */
127
+ imm = bitfield_replicate(imm, 16);
128
+ } else {
129
+ imm = asimd_imm_const(abcdefgh, cmode, is_neg);
62
}
130
}
63
if (len != 0) {
131
64
- memcpy(g2h_untagged(guest_addr), host_ptr, len);
132
if (!((cmode & 0x9) == 0x1 || (cmode & 0xd) == 0x9)) {
65
+ memcpy(host_ptr_conv, host_ptr, len);
133
diff --git a/target/arm/translate.c b/target/arm/translate.c
66
}
134
index XXXXXXX..XXXXXXX 100644
67
g_free(host_ptr);
135
--- a/target/arm/translate.c
68
}
136
+++ b/target/arm/translate.c
137
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
138
case 14:
139
if (op) {
140
/*
141
- * This is the only case where the top and bottom 32 bits
142
- * of the encoded constant differ.
143
+ * This and cmode == 15 op == 1 are the only cases where
144
+ * the top and bottom 32 bits of the encoded constant differ.
145
*/
146
uint64_t imm64 = 0;
147
int n;
148
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
149
imm |= (imm << 8) | (imm << 16) | (imm << 24);
150
break;
151
case 15:
152
+ if (op) {
153
+ /* Reserved encoding for AArch32; valid for AArch64 */
154
+ uint64_t imm64 = (uint64_t)(imm & 0x3f) << 48;
155
+ if (imm & 0x80) {
156
+ imm64 |= 0x8000000000000000ULL;
157
+ }
158
+ if (imm & 0x40) {
159
+ imm64 |= 0x3fc0000000000000ULL;
160
+ } else {
161
+ imm64 |= 0x4000000000000000ULL;
162
+ }
163
+ return imm64;
164
+ }
165
imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
166
| ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
167
break;
--
2.20.1

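For reference, the new cmode == 15, op == 1 branch that the asimd_imm_const() change adds corresponds to the AArch64 FMOV (vector, immediate) double-precision expansion. A standalone sketch of just that branch (not QEMU code; imm8 == 0x70 is only an example value):

  #include <assert.h>
  #include <stdint.h>

  /* Standalone copy of the cmode == 15, op == 1 expansion quoted above. */
  static uint64_t expand_cmode15_op1(uint32_t imm8)
  {
      uint64_t imm64 = (uint64_t)(imm8 & 0x3f) << 48;

      if (imm8 & 0x80) {
          imm64 |= 0x8000000000000000ULL;   /* sign bit */
      }
      if (imm8 & 0x40) {
          imm64 |= 0x3fc0000000000000ULL;   /* exponent when b == 1 */
      } else {
          imm64 |= 0x4000000000000000ULL;   /* exponent when b == 0 */
      }
      return imm64;
  }

  int main(void)
  {
      /* abcdefgh = 0x70 is the imm8 encoding for the double 1.0 */
      assert(expand_cmode15_op1(0x70) == 0x3ff0000000000000ULL);
      return 0;
  }
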
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Use dup_const() instead of bitfield_replicate() in
2
disas_simd_mod_imm().
2
3
3
Use simple arithmetic instead of a conditional
4
(We can't replace the other use of bitfield_replicate() in this file,
4
move when tbi0 != tbi1.
5
in logic_imm_decode_wmask(), because that location needs to handle 2
6
and 4 bit elements, which dup_const() cannot.)
5
7
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210212184902.1251044-22-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210628135835.6690-6-peter.maydell@linaro.org
10
---
11
---
11
target/arm/translate-a64.c | 25 ++++++++++++++-----------
12
target/arm/translate-a64.c | 2 +-
12
1 file changed, 14 insertions(+), 11 deletions(-)
13
1 file changed, 1 insertion(+), 1 deletion(-)
13
14
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-a64.c
17
--- a/target/arm/translate-a64.c
17
+++ b/target/arm/translate-a64.c
18
+++ b/target/arm/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ static void gen_top_byte_ignore(DisasContext *s, TCGv_i64 dst,
19
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
19
/* Sign-extend from bit 55. */
20
/* FMOV (vector, immediate) - half-precision */
20
tcg_gen_sextract_i64(dst, src, 0, 56);
21
imm = vfp_expand_imm(MO_16, abcdefgh);
21
22
/* now duplicate across the lanes */
22
- if (tbi != 3) {
23
- imm = bitfield_replicate(imm, 16);
23
- TCGv_i64 tcg_zero = tcg_const_i64(0);
24
+ imm = dup_const(MO_16, imm);
24
-
25
} else {
25
- /*
26
imm = asimd_imm_const(abcdefgh, cmode, is_neg);
26
- * The two TBI bits differ.
27
- * If tbi0, then !tbi1: only use the extension if positive.
28
- * if !tbi0, then tbi1: only use the extension if negative.
29
- */
30
- tcg_gen_movcond_i64(tbi == 1 ? TCG_COND_GE : TCG_COND_LT,
31
- dst, dst, tcg_zero, dst, src);
32
- tcg_temp_free_i64(tcg_zero);
33
+ switch (tbi) {
34
+ case 1:
35
+ /* tbi0 but !tbi1: only use the extension if positive */
36
+ tcg_gen_and_i64(dst, dst, src);
37
+ break;
38
+ case 2:
39
+ /* !tbi0 but tbi1: only use the extension if negative */
40
+ tcg_gen_or_i64(dst, dst, src);
41
+ break;
42
+ case 3:
43
+ /* tbi0 and tbi1: always use the extension */
44
+ break;
45
+ default:
46
+ g_assert_not_reached();
47
}
48
}
27
}
49
}
50
--
28
--
51
2.20.1
29
2.20.1
52
30
53
31
diff view generated by jsdifflib
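The arithmetic replacement for the conditional move in the gen_top_byte_ignore() change in this series is easy to check with ordinary integers. A hedged, plain-C model (not TCG; the address values are invented), where "ext" is the address sign-extended from bit 55:

  #include <assert.h>
  #include <stdint.h>

  static uint64_t sext55(uint64_t addr)
  {
      /* replicate bit 55 into bits 63..56 */
      return (addr & (1ULL << 55)) ? (addr | 0xff00000000000000ULL)
                                   : (addr & 0x00ffffffffffffffULL);
  }

  int main(void)
  {
      uint64_t pos = 0x5a00123456789abcULL;  /* tag 0x5a, bit 55 clear */
      uint64_t neg = 0x5a80123456789abcULL;  /* tag 0x5a, bit 55 set   */

      /* tbi == 1 (TBI0 only): AND uses the extension only when bit 55 is 0 */
      assert((sext55(pos) & pos) == sext55(pos));  /* tag byte stripped */
      assert((sext55(neg) & neg) == neg);          /* address unchanged */

      /* tbi == 2 (TBI1 only): OR uses the extension only when bit 55 is 1 */
      assert((sext55(pos) | pos) == pos);          /* address unchanged */
      assert((sext55(neg) | neg) == sext55(neg));  /* tag byte replaced */
      return 0;
  }
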
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the MVE logical-immediate insns (VMOV, VMVN,
2
VORR and VBIC). These have essentially the same encoding
3
as their Neon equivalents, and we implement the decode
4
in the same way.
2
5
3
Use the now-saved PAGE_ANON and PAGE_MTE bits,
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
and the per-page saved data.
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-7-peter.maydell@linaro.org
9
---
10
target/arm/helper-mve.h | 4 +++
11
target/arm/mve.decode | 17 +++++++++++++
12
target/arm/mve_helper.c | 24 ++++++++++++++++++
13
target/arm/translate-mve.c | 50 ++++++++++++++++++++++++++++++++++++++
14
4 files changed, 95 insertions(+)
5
15
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210212184902.1251044-30-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/mte_helper.c | 29 +++++++++++++++++++++++++++--
12
1 file changed, 27 insertions(+), 2 deletions(-)
13
14
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/mte_helper.c
18
--- a/target/arm/helper-mve.h
17
+++ b/target/arm/mte_helper.c
19
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
19
int tag_size, uintptr_t ra)
21
DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
20
{
22
DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
21
#ifdef CONFIG_USER_ONLY
23
DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
22
- /* Tag storage not implemented. */
23
- return NULL;
24
+ uint64_t clean_ptr = useronly_clean_ptr(ptr);
25
+ int flags = page_get_flags(clean_ptr);
26
+ uint8_t *tags;
27
+ uintptr_t index;
28
+
24
+
29
+ if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE : PAGE_READ))) {
25
+DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
30
+ /* SIGSEGV */
26
+DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
31
+ arm_cpu_tlb_fill(env_cpu(env), ptr, ptr_size, ptr_access,
27
+DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
32
+ ptr_mmu_idx, false, ra);
28
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
33
+ g_assert_not_reached();
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/mve.decode
31
+++ b/target/arm/mve.decode
32
@@ -XXX,XX +XXX,XX @@
33
# VQDMULL has size in bit 28: 0 for 16 bit, 1 for 32 bit
34
%size_28 28:1 !function=plus_1
35
36
+# 1imm format immediate
37
+%imm_28_16_0 28:1 16:3 0:4
38
+
39
&vldr_vstr rn qd imm p a w size l u
40
&1op qd qm size
41
&2op qd qm qn size
42
&2scalar qd qn rm size
43
+&1imm qd imm cmode op
44
45
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
46
# Note that both Rn and Qd are 3 bits only (no D bit)
47
@@ -XXX,XX +XXX,XX @@
48
@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
49
@2op_sz28 .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn \
50
size=%size_28
51
+@1imm .... .... .... .... .... cmode:4 .. op:1 . .... &1imm qd=%qd imm=%imm_28_16_0
52
53
# The _rev suffix indicates that Vn and Vm are reversed. This is
54
# the case for shifts. In the Arm ARM these insns are documented
55
@@ -XXX,XX +XXX,XX @@ VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rd
56
# Predicate operations
57
%mask_22_13 22:1 13:3
58
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
59
+
60
+# Logical immediate operations (1 reg and modified-immediate)
61
+
62
+# The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
63
+# not in a way we can conveniently represent in decodetree without
64
+# a lot of repetition:
65
+# VORR: op=0, (cmode & 1) && cmode < 12
66
+# VBIC: op=1, (cmode & 1) && cmode < 12
67
+# VMOV: everything else
68
+# So we have a single decode line and check the cmode/op in the
69
+# trans function.
70
+Vimm_1r 111 . 1111 1 . 00 0 ... ... 0 .... 0 1 . 1 .... @1imm
71
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/mve_helper.c
74
+++ b/target/arm/mve_helper.c
75
@@ -XXX,XX +XXX,XX @@ DO_1OP(vnegw, 4, int32_t, DO_NEG)
76
DO_1OP(vfnegh, 8, uint64_t, DO_FNEGH)
77
DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
78
79
+/*
80
+ * 1 operand immediates: Vda is destination and possibly also one source.
81
+ * All these insns work at 64-bit widths.
82
+ */
83
+#define DO_1OP_IMM(OP, FN) \
84
+ void HELPER(mve_##OP)(CPUARMState *env, void *vda, uint64_t imm) \
85
+ { \
86
+ uint64_t *da = vda; \
87
+ uint16_t mask = mve_element_mask(env); \
88
+ unsigned e; \
89
+ for (e = 0; e < 16 / 8; e++, mask >>= 8) { \
90
+ mergemask(&da[H8(e)], FN(da[H8(e)], imm), mask); \
91
+ } \
92
+ mve_advance_vpt(env); \
34
+ }
93
+ }
35
+
94
+
36
+ /* Require both MAP_ANON and PROT_MTE for the page. */
95
+#define DO_MOVI(N, I) (I)
37
+ if (!(flags & PAGE_ANON) || !(flags & PAGE_MTE)) {
96
+#define DO_ANDI(N, I) ((N) & (I))
38
+ return NULL;
97
+#define DO_ORRI(N, I) ((N) | (I))
98
+
99
+DO_1OP_IMM(vmovi, DO_MOVI)
100
+DO_1OP_IMM(vandi, DO_ANDI)
101
+DO_1OP_IMM(vorri, DO_ORRI)
102
+
103
#define DO_2OP(OP, ESIZE, TYPE, FN) \
104
void HELPER(glue(mve_, OP))(CPUARMState *env, \
105
void *vd, void *vn, void *vm) \
106
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/translate-mve.c
109
+++ b/target/arm/translate-mve.c
110
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
111
typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
112
typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
113
typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
114
+typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
115
116
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
117
static inline long mve_qreg_offset(unsigned reg)
118
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
119
mve_update_eci(s);
120
return true;
121
}
122
+
123
+static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
124
+{
125
+ TCGv_ptr qd;
126
+ uint64_t imm;
127
+
128
+ if (!dc_isar_feature(aa32_mve, s) ||
129
+ !mve_check_qreg_bank(s, a->qd) ||
130
+ !fn) {
131
+ return false;
132
+ }
133
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
134
+ return true;
39
+ }
135
+ }
40
+
136
+
41
+ tags = page_get_target_data(clean_ptr);
137
+ imm = asimd_imm_const(a->imm, a->cmode, a->op);
42
+ if (tags == NULL) {
138
+
43
+ size_t alloc_size = TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1);
139
+ qd = mve_qreg_ptr(a->qd);
44
+ tags = page_alloc_target_data(clean_ptr, alloc_size);
140
+ fn(cpu_env, qd, tcg_constant_i64(imm));
45
+ assert(tags != NULL);
141
+ tcg_temp_free_ptr(qd);
142
+ mve_update_eci(s);
143
+ return true;
144
+}
145
+
146
+static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
147
+{
148
+ /* Handle decode of cmode/op here between VORR/VBIC/VMOV */
149
+ MVEGenOneOpImmFn *fn;
150
+
151
+ if ((a->cmode & 1) && a->cmode < 12) {
152
+ if (a->op) {
153
+ /*
154
+ * For op=1, the immediate will be inverted by asimd_imm_const(),
155
+ * so the VBIC becomes a logical AND operation.
156
+ */
157
+ fn = gen_helper_mve_vandi;
158
+ } else {
159
+ fn = gen_helper_mve_vorri;
160
+ }
161
+ } else {
162
+ /* There is one unallocated cmode/op combination in this space */
163
+ if (a->cmode == 15 && a->op == 1) {
164
+ return false;
165
+ }
166
+ /* asimd_imm_const() sorts out VMVNI vs VMOVI for us */
167
+ fn = gen_helper_mve_vmovi;
46
+ }
168
+ }
47
+
169
+ return do_1imm(s, a, fn);
48
+ index = extract32(ptr, LOG2_TAG_GRANULE + 1,
170
+}
49
+ TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
50
+ return tags + index;
51
#else
52
uintptr_t index;
53
CPUIOTLBEntry *iotlbentry;
54
--
171
--
55
2.20.1
172
2.20.1
56
173
57
174
diff view generated by jsdifflib
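One detail of the decode in trans_Vimm_1r() from the MVE logical-immediate patch is worth spelling out: because asimd_imm_const() returns the already-inverted constant when op == 1, VBIC reduces to the plain AND helper. A small standalone check (values invented, not taken from the patch):

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
      uint64_t lane = 0x1122334455667788ULL;

      /* cmode == 1, op == 1, imm8 == 0xff: per-32-bit-lane constant 0x0000ff00 */
      uint64_t imm      = 0x0000ff000000ff00ULL;
      uint64_t inverted = ~imm;   /* what asimd_imm_const() hands back for op == 1 */

      /* VBIC ("clear these bits") is the same as AND with the inverse */
      assert((lane & inverted) == 0x1122004455660088ULL);
      return 0;
  }
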
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the MVE shift-vector-left-by-immediate insns VSHL, VQSHL
2
2
and VQSHLU.
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
The size-and-immediate encoding here is the same as Neon, and we
5
Message-id: 20210212184902.1251044-31-richard.henderson@linaro.org
5
handle it the same way neon-dp.decode does.
6
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210628135835.6690-8-peter.maydell@linaro.org
7
---
10
---
8
target/arm/cpu.c | 15 +++++++++++++++
11
target/arm/helper-mve.h | 16 +++++++++++
9
1 file changed, 15 insertions(+)
12
target/arm/mve.decode | 23 +++++++++++++++
10
13
target/arm/mve_helper.c | 57 ++++++++++++++++++++++++++++++++++++++
11
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
target/arm/translate-mve.c | 51 ++++++++++++++++++++++++++++++++++
12
index XXXXXXX..XXXXXXX 100644
15
4 files changed, 147 insertions(+)
13
--- a/target/arm/cpu.c
16
14
+++ b/target/arm/cpu.c
17
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
15
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
18
index XXXXXXX..XXXXXXX 100644
16
* Note that this must match useronly_clean_ptr.
19
--- a/target/arm/helper-mve.h
17
*/
20
+++ b/target/arm/helper-mve.h
18
env->cp15.tcr_el[1].raw_tcr = (1ULL << 37);
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
19
+
22
DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
20
+ /* Enable MTE */
23
DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
21
+ if (cpu_isar_feature(aa64_mte, cpu)) {
24
DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
22
+ /* Enable tag access, but leave TCF0 as No Effect (0). */
25
+
23
+ env->cp15.sctlr_el[1] |= SCTLR_ATA0;
26
+DEF_HELPER_FLAGS_4(mve_vshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
+ /*
27
+DEF_HELPER_FLAGS_4(mve_vshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
+ * Exclude all tags, so that tag 0 is always used.
28
+DEF_HELPER_FLAGS_4(mve_vshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+ * This corresponds to Linux current->thread.gcr_incl = 0.
29
+
27
+ *
30
+DEF_HELPER_FLAGS_4(mve_vqshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+ * Set RRND, so that helper_irg() will generate a seed later.
31
+DEF_HELPER_FLAGS_4(mve_vqshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+ * Here in cpu_reset(), the crypto subsystem has not yet been
32
+DEF_HELPER_FLAGS_4(mve_vqshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+ * initialized.
33
+
31
+ */
34
+DEF_HELPER_FLAGS_4(mve_vqshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+ env->cp15.gcr_el1 = 0x1ffff;
35
+DEF_HELPER_FLAGS_4(mve_vqshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
+ }
36
+DEF_HELPER_FLAGS_4(mve_vqshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
#else
37
+
35
/* Reset into the highest available EL */
38
+DEF_HELPER_FLAGS_4(mve_vqshlui_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
if (arm_feature(env, ARM_FEATURE_EL3)) {
39
+DEF_HELPER_FLAGS_4(mve_vqshlui_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(mve_vqshlui_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/mve.decode
44
+++ b/target/arm/mve.decode
45
@@ -XXX,XX +XXX,XX @@
46
&2op qd qm qn size
47
&2scalar qd qn rm size
48
&1imm qd imm cmode op
49
+&2shift qd qm shift size
50
51
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
52
# Note that both Rn and Qd are 3 bits only (no D bit)
53
@@ -XXX,XX +XXX,XX @@
54
@2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
55
@2scalar_nosz .... .... .... .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
56
57
+@2_shl_b .... .... .. 001 shift:3 .... .... .... .... &2shift qd=%qd qm=%qm size=0
58
+@2_shl_h .... .... .. 01 shift:4 .... .... .... .... &2shift qd=%qd qm=%qm size=1
59
+@2_shl_w .... .... .. 1 shift:5 .... .... .... .... &2shift qd=%qd qm=%qm size=2
60
+
61
# Vector loads and stores
62
63
# Widening loads and narrowing stores:
64
@@ -XXX,XX +XXX,XX @@ VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
65
# So we have a single decode line and check the cmode/op in the
66
# trans function.
67
Vimm_1r 111 . 1111 1 . 00 0 ... ... 0 .... 0 1 . 1 .... @1imm
68
+
69
+# Shifts by immediate
70
+
71
+VSHLI 111 0 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_b
72
+VSHLI 111 0 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_h
73
+VSHLI 111 0 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_w
74
+
75
+VQSHLI_S 111 0 1111 1 . ... ... ... 0 0111 0 1 . 1 ... 0 @2_shl_b
76
+VQSHLI_S 111 0 1111 1 . ... ... ... 0 0111 0 1 . 1 ... 0 @2_shl_h
77
+VQSHLI_S 111 0 1111 1 . ... ... ... 0 0111 0 1 . 1 ... 0 @2_shl_w
78
+
79
+VQSHLI_U 111 1 1111 1 . ... ... ... 0 0111 0 1 . 1 ... 0 @2_shl_b
80
+VQSHLI_U 111 1 1111 1 . ... ... ... 0 0111 0 1 . 1 ... 0 @2_shl_h
81
+VQSHLI_U 111 1 1111 1 . ... ... ... 0 0111 0 1 . 1 ... 0 @2_shl_w
82
+
83
+VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_b
84
+VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_h
85
+VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_w
86
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/target/arm/mve_helper.c
89
+++ b/target/arm/mve_helper.c
90
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
91
WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, true, satp)
92
#define DO_UQRSHL_OP(N, M, satp) \
93
WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, true, satp)
94
+#define DO_SUQSHL_OP(N, M, satp) \
95
+ WRAP_QRSHL_HELPER(do_suqrshl_bhs, N, M, false, satp)
96
97
DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
98
DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
99
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvsw, 4, uint32_t)
100
DO_VADDV(vaddvub, 1, uint8_t)
101
DO_VADDV(vaddvuh, 2, uint16_t)
102
DO_VADDV(vaddvuw, 4, uint32_t)
103
+
104
+/* Shifts by immediate */
105
+#define DO_2SHIFT(OP, ESIZE, TYPE, FN) \
106
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
107
+ void *vm, uint32_t shift) \
108
+ { \
109
+ TYPE *d = vd, *m = vm; \
110
+ uint16_t mask = mve_element_mask(env); \
111
+ unsigned e; \
112
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
113
+ mergemask(&d[H##ESIZE(e)], \
114
+ FN(m[H##ESIZE(e)], shift), mask); \
115
+ } \
116
+ mve_advance_vpt(env); \
117
+ }
118
+
119
+#define DO_2SHIFT_SAT(OP, ESIZE, TYPE, FN) \
120
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
121
+ void *vm, uint32_t shift) \
122
+ { \
123
+ TYPE *d = vd, *m = vm; \
124
+ uint16_t mask = mve_element_mask(env); \
125
+ unsigned e; \
126
+ bool qc = false; \
127
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
128
+ bool sat = false; \
129
+ mergemask(&d[H##ESIZE(e)], \
130
+ FN(m[H##ESIZE(e)], shift, &sat), mask); \
131
+ qc |= sat & mask & 1; \
132
+ } \
133
+ if (qc) { \
134
+ env->vfp.qc[0] = qc; \
135
+ } \
136
+ mve_advance_vpt(env); \
137
+ }
138
+
139
+/* provide unsigned 2-op shift helpers for all sizes */
140
+#define DO_2SHIFT_U(OP, FN) \
141
+ DO_2SHIFT(OP##b, 1, uint8_t, FN) \
142
+ DO_2SHIFT(OP##h, 2, uint16_t, FN) \
143
+ DO_2SHIFT(OP##w, 4, uint32_t, FN)
144
+
145
+#define DO_2SHIFT_SAT_U(OP, FN) \
146
+ DO_2SHIFT_SAT(OP##b, 1, uint8_t, FN) \
147
+ DO_2SHIFT_SAT(OP##h, 2, uint16_t, FN) \
148
+ DO_2SHIFT_SAT(OP##w, 4, uint32_t, FN)
149
+#define DO_2SHIFT_SAT_S(OP, FN) \
150
+ DO_2SHIFT_SAT(OP##b, 1, int8_t, FN) \
151
+ DO_2SHIFT_SAT(OP##h, 2, int16_t, FN) \
152
+ DO_2SHIFT_SAT(OP##w, 4, int32_t, FN)
153
+
154
+DO_2SHIFT_U(vshli_u, DO_VSHLU)
155
+DO_2SHIFT_SAT_U(vqshli_u, DO_UQSHL_OP)
156
+DO_2SHIFT_SAT_S(vqshli_s, DO_SQSHL_OP)
157
+DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP)
158
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
159
index XXXXXXX..XXXXXXX 100644
160
--- a/target/arm/translate-mve.c
161
+++ b/target/arm/translate-mve.c
162
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
163
typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
164
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
165
typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
166
+typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
167
typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
168
typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
169
typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
170
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
171
}
172
return do_1imm(s, a, fn);
173
}
174
+
175
+static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
176
+ bool negateshift)
177
+{
178
+ TCGv_ptr qd, qm;
179
+ int shift = a->shift;
180
+
181
+ if (!dc_isar_feature(aa32_mve, s) ||
182
+ !mve_check_qreg_bank(s, a->qd | a->qm) ||
183
+ !fn) {
184
+ return false;
185
+ }
186
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
187
+ return true;
188
+ }
189
+
190
+ /*
191
+ * When we handle a right shift insn using a left-shift helper
192
+ * which permits a negative shift count to indicate a right-shift,
193
+ * we must negate the shift count.
194
+ */
195
+ if (negateshift) {
196
+ shift = -shift;
197
+ }
198
+
199
+ qd = mve_qreg_ptr(a->qd);
200
+ qm = mve_qreg_ptr(a->qm);
201
+ fn(cpu_env, qd, qm, tcg_constant_i32(shift));
202
+ tcg_temp_free_ptr(qd);
203
+ tcg_temp_free_ptr(qm);
204
+ mve_update_eci(s);
205
+ return true;
206
+}
207
+
208
+#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
209
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
210
+ { \
211
+ static MVEGenTwoOpShiftFn * const fns[] = { \
212
+ gen_helper_mve_##FN##b, \
213
+ gen_helper_mve_##FN##h, \
214
+ gen_helper_mve_##FN##w, \
215
+ NULL, \
216
+ }; \
217
+ return do_2shift(s, a, fns[a->size], NEGATESHIFT); \
218
+ }
219
+
220
+DO_2SHIFT(VSHLI, vshli_u, false)
221
+DO_2SHIFT(VQSHLI_S, vqshli_s, false)
222
+DO_2SHIFT(VQSHLI_U, vqshli_u, false)
223
+DO_2SHIFT(VQSHLUI, vqshlui_s, false)
37
--
224
--
38
2.20.1
225
2.20.1
39
226
40
227
diff view generated by jsdifflib
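On the cpu_reset() change that enables MTE for linux-user: GCR_EL1 is set to 0x1ffff, where bit 16 is RRND and bits [15:0] are the Exclude mask, so every tag is excluded and IRG falls back to tag 0 until the guest reprograms the register. A simplified model of the architectural ChooseNonExcludedTag() behaviour (this is illustrative pseudocode in C, not the QEMU implementation):

  #include <assert.h>
  #include <stdint.h>

  static unsigned choose_nonexcluded_tag(unsigned seed, uint16_t exclude)
  {
      if (exclude == 0xffff) {
          return 0;                     /* all tags excluded: use tag 0 */
      }
      unsigned tag = seed & 0xf;
      while (exclude & (1u << tag)) {
          tag = (tag + 1) & 0xf;        /* skip excluded tags */
      }
      return tag;
  }

  int main(void)
  {
      uint32_t gcr_el1 = 0x1ffff;
      uint16_t exclude = gcr_el1 & 0xffff;    /* GCR_EL1.Exclude */

      assert(gcr_el1 & (1u << 16));           /* GCR_EL1.RRND set */
      assert(choose_nonexcluded_tag(7, exclude) == 0);
      return 0;
  }
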
1
From: Hao Wu <wuhaotsh@google.com>
1
Implement the MVE vector shift right by immediate insns VSHRI and
2
VRSHRI. As with Neon, we implement these by using helper functions
3
which perform left shifts but allow negative shift counts to indicate
4
right shifts.
2
5
3
Add AT24 EEPROM and temperature sensors for GSJ machine.
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-9-peter.maydell@linaro.org
9
---
10
target/arm/helper-mve.h | 12 ++++++++++++
11
target/arm/translate.h | 20 ++++++++++++++++++++
12
target/arm/mve.decode | 28 ++++++++++++++++++++++++++++
13
target/arm/mve_helper.c | 7 +++++++
14
target/arm/translate-mve.c | 5 +++++
15
target/arm/translate-neon.c | 18 ------------------
16
6 files changed, 72 insertions(+), 18 deletions(-)
4
17
5
Reviewed-by: Doug Evans<dje@google.com>
18
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
6
Reviewed-by: Tyrong Ting<kfting@nuvoton.com>
7
Signed-off-by: Hao Wu <wuhaotsh@google.com>
8
Message-id: 20210210220426.3577804-4-wuhaotsh@google.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/arm/npcm7xx_boards.c | 27 +++++++++++++++++++++++++++
13
hw/arm/Kconfig | 1 +
14
2 files changed, 28 insertions(+)
15
16
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
17
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/arm/npcm7xx_boards.c
20
--- a/target/arm/helper-mve.h
19
+++ b/hw/arm/npcm7xx_boards.c
21
+++ b/target/arm/helper-mve.h
20
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
21
#include "exec/address-spaces.h"
23
DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
22
#include "hw/arm/npcm7xx.h"
24
DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
23
#include "hw/core/cpu.h"
25
24
+#include "hw/i2c/smbus_eeprom.h"
26
+DEF_HELPER_FLAGS_4(mve_vshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
#include "hw/loader.h"
27
+DEF_HELPER_FLAGS_4(mve_vshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
#include "hw/qdev-properties.h"
28
+DEF_HELPER_FLAGS_4(mve_vshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
#include "qapi/error.h"
29
+
28
@@ -XXX,XX +XXX,XX @@ static I2CBus *npcm7xx_i2c_get_bus(NPCM7xxState *soc, uint32_t num)
30
DEF_HELPER_FLAGS_4(mve_vshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
return I2C_BUS(qdev_get_child_bus(DEVICE(&soc->smbus[num]), "i2c-bus"));
31
DEF_HELPER_FLAGS_4(mve_vshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(mve_vshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
DEF_HELPER_FLAGS_4(mve_vqshlui_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_4(mve_vqshlui_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
DEF_HELPER_FLAGS_4(mve_vqshlui_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
+
38
+DEF_HELPER_FLAGS_4(mve_vrshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(mve_vrshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(mve_vrshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
+
42
+DEF_HELPER_FLAGS_4(mve_vrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_4(mve_vrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_4(mve_vrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
45
diff --git a/target/arm/translate.h b/target/arm/translate.h
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate.h
48
+++ b/target/arm/translate.h
49
@@ -XXX,XX +XXX,XX @@ static inline int times_2_plus_1(DisasContext *s, int x)
50
return x * 2 + 1;
30
}
51
}
31
52
32
+static void at24c_eeprom_init(NPCM7xxState *soc, int bus, uint8_t addr,
53
+static inline int rsub_64(DisasContext *s, int x)
33
+ uint32_t rsize)
34
+{
54
+{
35
+ I2CBus *i2c_bus = npcm7xx_i2c_get_bus(soc, bus);
55
+ return 64 - x;
36
+ I2CSlave *i2c_dev = i2c_slave_new("at24c-eeprom", addr);
37
+ DeviceState *dev = DEVICE(i2c_dev);
38
+
39
+ qdev_prop_set_uint32(dev, "rom-size", rsize);
40
+ i2c_slave_realize_and_unref(i2c_dev, i2c_bus, &error_abort);
41
+}
56
+}
42
+
57
+
43
static void npcm750_evb_i2c_init(NPCM7xxState *soc)
58
+static inline int rsub_32(DisasContext *s, int x)
44
{
45
/* lm75 temperature sensor on SVB, tmp105 is compatible */
46
@@ -XXX,XX +XXX,XX @@ static void npcm750_evb_i2c_init(NPCM7xxState *soc)
47
i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 6), "tmp105", 0x48);
48
}
49
50
+static void quanta_gsj_i2c_init(NPCM7xxState *soc)
51
+{
59
+{
52
+ /* GSJ machine have 4 max31725 temperature sensors, tmp105 is compatible. */
60
+ return 32 - x;
53
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 1), "tmp105", 0x5c);
54
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 2), "tmp105", 0x5c);
55
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 3), "tmp105", 0x5c);
56
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 4), "tmp105", 0x5c);
57
+
58
+ at24c_eeprom_init(soc, 9, 0x55, 8192);
59
+ at24c_eeprom_init(soc, 10, 0x55, 8192);
60
+
61
+ /* TODO: Add additional i2c devices. */
62
+}
61
+}
63
+
62
+
64
static void npcm750_evb_init(MachineState *machine)
63
+static inline int rsub_16(DisasContext *s, int x)
64
+{
65
+ return 16 - x;
66
+}
67
+
68
+static inline int rsub_8(DisasContext *s, int x)
69
+{
70
+ return 8 - x;
71
+}
72
+
73
static inline int arm_dc_feature(DisasContext *dc, int feature)
65
{
74
{
66
NPCM7xxState *soc;
75
return (dc->features & (1ULL << feature)) != 0;
67
@@ -XXX,XX +XXX,XX @@ static void quanta_gsj_init(MachineState *machine)
76
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
68
npcm7xx_load_bootrom(machine, soc);
77
index XXXXXXX..XXXXXXX 100644
69
npcm7xx_connect_flash(&soc->fiu[0], 0, "mx25l25635e",
78
--- a/target/arm/mve.decode
70
drive_get(IF_MTD, 0, 0));
79
+++ b/target/arm/mve.decode
71
+ quanta_gsj_i2c_init(soc);
80
@@ -XXX,XX +XXX,XX @@
72
npcm7xx_load_kernel(machine, soc);
81
@2_shl_h .... .... .. 01 shift:4 .... .... .... .... &2shift qd=%qd qm=%qm size=1
82
@2_shl_w .... .... .. 1 shift:5 .... .... .... .... &2shift qd=%qd qm=%qm size=2
83
84
+# Right shifts are encoded as N - shift, where N is the element size in bits.
85
+%rshift_i5 16:5 !function=rsub_32
86
+%rshift_i4 16:4 !function=rsub_16
87
+%rshift_i3 16:3 !function=rsub_8
88
+
89
+@2_shr_b .... .... .. 001 ... .... .... .... .... &2shift qd=%qd qm=%qm \
90
+ size=0 shift=%rshift_i3
91
+@2_shr_h .... .... .. 01 .... .... .... .... .... &2shift qd=%qd qm=%qm \
92
+ size=1 shift=%rshift_i4
93
+@2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \
94
+ size=2 shift=%rshift_i5
95
+
96
# Vector loads and stores
97
98
# Widening loads and narrowing stores:
99
@@ -XXX,XX +XXX,XX @@ VQSHLI_U 111 1 1111 1 . ... ... ... 0 0111 0 1 . 1 ... 0 @2_shl_w
100
VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_b
101
VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_h
102
VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_w
103
+
104
+VSHRI_S 111 0 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_b
105
+VSHRI_S 111 0 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_h
106
+VSHRI_S 111 0 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_w
107
+
108
+VSHRI_U 111 1 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_b
109
+VSHRI_U 111 1 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_h
110
+VSHRI_U 111 1 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_w
111
+
112
+VRSHRI_S 111 0 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_b
113
+VRSHRI_S 111 0 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h
114
+VRSHRI_S 111 0 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w
115
+
116
+VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_b
117
+VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h
118
+VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w
119
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/mve_helper.c
122
+++ b/target/arm/mve_helper.c
123
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvuw, 4, uint32_t)
124
DO_2SHIFT(OP##b, 1, uint8_t, FN) \
125
DO_2SHIFT(OP##h, 2, uint16_t, FN) \
126
DO_2SHIFT(OP##w, 4, uint32_t, FN)
127
+#define DO_2SHIFT_S(OP, FN) \
128
+ DO_2SHIFT(OP##b, 1, int8_t, FN) \
129
+ DO_2SHIFT(OP##h, 2, int16_t, FN) \
130
+ DO_2SHIFT(OP##w, 4, int32_t, FN)
131
132
#define DO_2SHIFT_SAT_U(OP, FN) \
133
DO_2SHIFT_SAT(OP##b, 1, uint8_t, FN) \
134
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvuw, 4, uint32_t)
135
DO_2SHIFT_SAT(OP##w, 4, int32_t, FN)
136
137
DO_2SHIFT_U(vshli_u, DO_VSHLU)
138
+DO_2SHIFT_S(vshli_s, DO_VSHLS)
139
DO_2SHIFT_SAT_U(vqshli_u, DO_UQSHL_OP)
140
DO_2SHIFT_SAT_S(vqshli_s, DO_SQSHL_OP)
141
DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP)
142
+DO_2SHIFT_U(vrshli_u, DO_VRSHLU)
143
+DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
144
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
145
index XXXXXXX..XXXXXXX 100644
146
--- a/target/arm/translate-mve.c
147
+++ b/target/arm/translate-mve.c
148
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT(VSHLI, vshli_u, false)
149
DO_2SHIFT(VQSHLI_S, vqshli_s, false)
150
DO_2SHIFT(VQSHLI_U, vqshli_u, false)
151
DO_2SHIFT(VQSHLUI, vqshlui_s, false)
152
+/* These right shifts use a left-shift helper with negated shift count */
153
+DO_2SHIFT(VSHRI_S, vshli_s, true)
154
+DO_2SHIFT(VSHRI_U, vshli_u, true)
155
+DO_2SHIFT(VRSHRI_S, vrshli_s, true)
156
+DO_2SHIFT(VRSHRI_U, vrshli_u, true)
157
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/target/arm/translate-neon.c
160
+++ b/target/arm/translate-neon.c
161
@@ -XXX,XX +XXX,XX @@ static inline int plus1(DisasContext *s, int x)
162
return x + 1;
73
}
163
}
74
164
75
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
165
-static inline int rsub_64(DisasContext *s, int x)
76
index XXXXXXX..XXXXXXX 100644
166
-{
77
--- a/hw/arm/Kconfig
167
- return 64 - x;
78
+++ b/hw/arm/Kconfig
168
-}
79
@@ -XXX,XX +XXX,XX @@ config NPCM7XX
169
-
80
bool
170
-static inline int rsub_32(DisasContext *s, int x)
81
select A9MPCORE
171
-{
82
select ARM_GIC
172
- return 32 - x;
83
+ select AT24C # EEPROM
173
-}
84
select PL310 # cache controller
174
-static inline int rsub_16(DisasContext *s, int x)
85
select SERIAL
175
-{
86
select SSI
176
- return 16 - x;
177
-}
178
-static inline int rsub_8(DisasContext *s, int x)
179
-{
180
- return 8 - x;
181
-}
182
-
183
static inline int neon_3same_fp_size(DisasContext *s, int x)
184
{
185
/* Convert 0==fp32, 1==fp16 into a MO_* value */
87
--
186
--
88
2.20.1
187
2.20.1
89
188
90
189
diff view generated by jsdifflib
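The "right shifts are encoded as N - shift" rule in the MVE VSHRI/VRSHRI patch, combined with the negated-shift-count trick in do_2shift(), can be checked with a few lines of ordinary C. The helper names below are illustrative only (do_ushl8 stands in for the real left-shift helpers):

  #include <assert.h>
  #include <stdint.h>

  /* Decode-side helpers, mirroring rsub_8/rsub_16 moved into translate.h */
  static int rsub_8(int x)  { return 8 - x; }
  static int rsub_16(int x) { return 16 - x; }

  /* Left-shift helper that treats a negative count as a right shift */
  static uint8_t do_ushl8(uint8_t val, int shift)
  {
      return shift >= 0 ? (uint8_t)(val << shift) : (uint8_t)(val >> -shift);
  }

  int main(void)
  {
      /* VSHR.U8 #1: the 3-bit immediate field holds 8 - 1 = 7 */
      int shift = rsub_8(7);
      assert(shift == 1);

      /* do_2shift(..., negateshift=true) passes -shift to the left-shift helper */
      assert(do_ushl8(0x80, -shift) == 0x40);

      /* 16-bit example: a field value of 12 decodes to a shift of 4 */
      assert(rsub_16(12) == 4);
      return 0;
  }
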
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the MVE VHLL (vector shift left long) insn. This has two
2
encodings: the T1 encoding is the usual shift-by-immediate format,
3
and the T2 encoding is a special case where the shift count is always
4
equal to the element size.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210212184902.1251044-32-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-10-peter.maydell@linaro.org
7
---
9
---
8
tests/tcg/aarch64/mte.h | 60 +++++++++++++++++++++++++++++++
10
target/arm/helper-mve.h | 9 +++++++
9
tests/tcg/aarch64/mte-1.c | 28 +++++++++++++++
11
target/arm/mve.decode | 53 +++++++++++++++++++++++++++++++++++---
10
tests/tcg/aarch64/mte-2.c | 45 +++++++++++++++++++++++
12
target/arm/mve_helper.c | 32 +++++++++++++++++++++++
11
tests/tcg/aarch64/mte-3.c | 51 ++++++++++++++++++++++++++
13
target/arm/translate-mve.c | 15 +++++++++++
12
tests/tcg/aarch64/mte-4.c | 45 +++++++++++++++++++++++
14
4 files changed, 105 insertions(+), 4 deletions(-)
13
tests/tcg/aarch64/Makefile.target | 6 ++++
14
tests/tcg/configure.sh | 4 +++
15
7 files changed, 239 insertions(+)
16
create mode 100644 tests/tcg/aarch64/mte.h
17
create mode 100644 tests/tcg/aarch64/mte-1.c
18
create mode 100644 tests/tcg/aarch64/mte-2.c
19
create mode 100644 tests/tcg/aarch64/mte-3.c
20
create mode 100644 tests/tcg/aarch64/mte-4.c
21
15
22
diff --git a/tests/tcg/aarch64/mte.h b/tests/tcg/aarch64/mte.h
16
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
23
new file mode 100644
17
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX
18
--- a/target/arm/helper-mve.h
25
--- /dev/null
19
+++ b/target/arm/helper-mve.h
26
+++ b/tests/tcg/aarch64/mte.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_4(mve_vrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(mve_vrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(mve_vrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
+
25
+DEF_HELPER_FLAGS_4(mve_vshllbsb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(mve_vshllbsh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vshllbub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(mve_vshllbuh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(mve_vshlltsb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(mve_vshlltsh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(mve_vshlltub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(mve_vshlltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/mve.decode
36
+++ b/target/arm/mve.decode
27
@@ -XXX,XX +XXX,XX @@
37
@@ -XXX,XX +XXX,XX @@
28
+/*
38
@2_shl_h .... .... .. 01 shift:4 .... .... .... .... &2shift qd=%qd qm=%qm size=1
29
+ * Linux kernel fallback API definitions for MTE and test helpers.
39
@2_shl_w .... .... .. 1 shift:5 .... .... .... .... &2shift qd=%qd qm=%qm size=2
30
+ *
40
31
+ * Copyright (c) 2021 Linaro Ltd
41
+@2_shll_b .... .... ... 01 shift:3 .... .... .... .... &2shift qd=%qd qm=%qm size=0
32
+ * SPDX-License-Identifier: GPL-2.0-or-later
42
+@2_shll_h .... .... ... 1 shift:4 .... .... .... .... &2shift qd=%qd qm=%qm size=1
33
+ */
43
+# VSHLL encoding T2 where shift == esize
44
+@2_shll_esize_b .... .... .... 00 .. .... .... .... .... &2shift \
45
+ qd=%qd qm=%qm size=0 shift=8
46
+@2_shll_esize_h .... .... .... 01 .. .... .... .... .... &2shift \
47
+ qd=%qd qm=%qm size=1 shift=16
34
+
48
+
35
+#include <assert.h>
49
# Right shifts are encoded as N - shift, where N is the element size in bits.
36
+#include <string.h>
50
%rshift_i5 16:5 !function=rsub_32
37
+#include <stdlib.h>
51
%rshift_i4 16:4 !function=rsub_16
38
+#include <stdio.h>
52
@@ -XXX,XX +XXX,XX @@ VADD 1110 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
39
+#include <unistd.h>
53
VSUB 1111 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
40
+#include <signal.h>
54
VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
41
+#include <sys/mman.h>
55
42
+#include <sys/prctl.h>
56
-VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
43
+
57
-VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
44
+#ifndef PR_SET_TAGGED_ADDR_CTRL
58
+# The VSHLL T2 encoding is not a @2op pattern, but is here because it
45
+# define PR_SET_TAGGED_ADDR_CTRL 55
59
+# overlaps what would be size=0b11 VMULH/VRMULH
46
+#endif
47
+#ifndef PR_TAGGED_ADDR_ENABLE
48
+# define PR_TAGGED_ADDR_ENABLE (1UL << 0)
49
+#endif
50
+#ifndef PR_MTE_TCF_SHIFT
51
+# define PR_MTE_TCF_SHIFT 1
52
+# define PR_MTE_TCF_NONE (0UL << PR_MTE_TCF_SHIFT)
53
+# define PR_MTE_TCF_SYNC (1UL << PR_MTE_TCF_SHIFT)
54
+# define PR_MTE_TCF_ASYNC (2UL << PR_MTE_TCF_SHIFT)
55
+# define PR_MTE_TAG_SHIFT 3
56
+#endif
57
+
58
+#ifndef PROT_MTE
59
+# define PROT_MTE 0x20
60
+#endif
61
+
62
+#ifndef SEGV_MTEAERR
63
+# define SEGV_MTEAERR 8
64
+# define SEGV_MTESERR 9
65
+#endif
66
+
67
+static void enable_mte(int tcf)
68
+{
60
+{
69
+ int r = prctl(PR_SET_TAGGED_ADDR_CTRL,
61
+ VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b
70
+ PR_TAGGED_ADDR_ENABLE | tcf | (0xfffe << PR_MTE_TAG_SHIFT),
62
+ VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h
71
+ 0, 0, 0);
63
72
+ if (r < 0) {
64
-VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
73
+ perror("PR_SET_TAGGED_ADDR_CTRL");
65
-VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
74
+ exit(2);
66
+ VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
75
+ }
76
+}
67
+}
77
+
68
+
78
+static void *alloc_mte_mem(size_t size)
79
+{
69
+{
80
+ void *p = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_MTE,
70
+ VSHLL_BU 111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b
81
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
71
+ VSHLL_BU 111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h
82
+ if (p == MAP_FAILED) {
83
+ perror("mmap PROT_MTE");
84
+ exit(2);
85
+ }
86
+ return p;
87
+}
88
diff --git a/tests/tcg/aarch64/mte-1.c b/tests/tcg/aarch64/mte-1.c
89
new file mode 100644
90
index XXXXXXX..XXXXXXX
91
--- /dev/null
92
+++ b/tests/tcg/aarch64/mte-1.c
93
@@ -XXX,XX +XXX,XX @@
94
+/*
95
+ * Memory tagging, basic pass cases.
96
+ *
97
+ * Copyright (c) 2021 Linaro Ltd
98
+ * SPDX-License-Identifier: GPL-2.0-or-later
99
+ */
100
+
72
+
101
+#include "mte.h"
73
+ VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
102
+
103
+int main(int ac, char **av)
104
+{
105
+ int *p0, *p1, *p2;
106
+ long c;
107
+
108
+ enable_mte(PR_MTE_TCF_NONE);
109
+ p0 = alloc_mte_mem(sizeof(*p0));
110
+
111
+ asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(1));
112
+ assert(p1 != p0);
113
+ asm("subp %0,%1,%2" : "=r"(c) : "r"(p0), "r"(p1));
114
+ assert(c == 0);
115
+
116
+ asm("stg %0, [%0]" : : "r"(p1));
117
+ asm("ldg %0, [%1]" : "=r"(p2) : "r"(p0), "0"(p0));
118
+ assert(p1 == p2);
119
+
120
+ return 0;
121
+}
122
diff --git a/tests/tcg/aarch64/mte-2.c b/tests/tcg/aarch64/mte-2.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte-2.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Memory tagging, basic fail cases, synchronous signals.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(info->si_code == SEGV_MTESERR);
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    int *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_SYNC);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    /* Store the tag from the first pointer. */
+    asm("stg %0, [%0]" : : "r"(p1));
+
+    *p1 = 0;
+
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    *p2 = 0;
+
+    abort();
+}
diff --git a/tests/tcg/aarch64/mte-3.c b/tests/tcg/aarch64/mte-3.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte-3.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Memory tagging, basic fail cases, asynchronous signals.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(info->si_code == SEGV_MTEAERR);
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    long *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_ASYNC);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    /* Store the tag from the first pointer. */
+    asm("stg %0, [%0]" : : "r"(p1));
+
+    *p1 = 0;
+
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    /*
+     * Signal for async error will happen eventually.
+     * For a real kernel this should be after the next IRQ (e.g. timer).
+     * For qemu linux-user, we kick the cpu and exit at the next TB.
+     * In either case, loop until this happens (or killed by timeout).
+     * For extra sauce, yield, producing EXCP_YIELD to cpu_loop().
+     */
+    asm("str %0, [%0]; yield" : : "r"(p2));
+    while (1);
+}
diff --git a/tests/tcg/aarch64/mte-4.c b/tests/tcg/aarch64/mte-4.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte-4.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Memory tagging, re-reading tag checks.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void __attribute__((noinline)) tagset(void *p, size_t size)
+{
+    size_t i;
+    for (i = 0; i < size; i += 16) {
+        asm("stg %0, [%0]" : : "r"(p + i));
+    }
+}
+
+void __attribute__((noinline)) tagcheck(void *p, size_t size)
+{
+    size_t i;
+    void *c;
+
+    for (i = 0; i < size; i += 16) {
+        asm("ldg %0, [%1]" : "=r"(c) : "r"(p + i), "0"(p));
+        assert(c == p);
+    }
+}
+
+int main(int ac, char **av)
+{
+    size_t size = getpagesize() * 4;
+    long excl = 1;
+    int *p0, *p1;
+
+    enable_mte(PR_MTE_TCF_ASYNC);
+    p0 = alloc_mte_mem(size);
+
+    /* Tag the pointer. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+
+    tagset(p1, size);
+    tagcheck(p1, size);
+
+    return 0;
+}
@@ -XXX,XX +XXX,XX @@ VRSHRI_S 111 0 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w
 VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_b
 VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h
 VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w
+
+# VSHLL T1 encoding; the T2 VSHLL encoding is elsewhere in this file
+VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_b
+VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_h
+
+VSHLL_BU 111 1 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_b
+VSHLL_BU 111 1 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_h
+
+VSHLL_TS 111 0 1110 1 . 1 .. ... ... 1 1111 0 1 . 0 ... 0 @2_shll_b
+VSHLL_TS 111 0 1110 1 . 1 .. ... ... 1 1111 0 1 . 0 ... 0 @2_shll_h
+
+VSHLL_TU 111 1 1110 1 . 1 .. ... ... 1 1111 0 1 . 0 ... 0 @2_shll_b
+VSHLL_TU 111 1 1110 1 . 1 .. ... ... 1 1111 0 1 . 0 ... 0 @2_shll_h
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SAT_S(vqshli_s, DO_SQSHL_OP)
 DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP)
 DO_2SHIFT_U(vrshli_u, DO_VRSHLU)
 DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
+
+/*
+ * Long shifts taking half-sized inputs from top or bottom of the input
+ * vector and producing a double-width result. ESIZE, TYPE are for
+ * the input, and LESIZE, LTYPE for the output.
+ * Unlike the normal shift helpers, we do not handle negative shift counts,
+ * because the long shift is strictly left-only.
+ */
+#define DO_VSHLL(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE)                   \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd,             \
+                                void *vm, uint32_t shift)               \
+    {                                                                   \
+        LTYPE *d = vd;                                                  \
+        TYPE *m = vm;                                                   \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned le;                                                    \
+        assert(shift <= 16);                                            \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) {         \
+            LTYPE r = (LTYPE)m[H##ESIZE(le * 2 + TOP)] << shift;        \
+            mergemask(&d[H##LESIZE(le)], r, mask);                      \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
+#define DO_VSHLL_ALL(OP, TOP)                                \
+    DO_VSHLL(OP##sb, TOP, 1, int8_t, 2, int16_t)             \
+    DO_VSHLL(OP##ub, TOP, 1, uint8_t, 2, uint16_t)           \
+    DO_VSHLL(OP##sh, TOP, 2, int16_t, 4, int32_t)            \
+    DO_VSHLL(OP##uh, TOP, 2, uint16_t, 4, uint32_t)          \
+
+DO_VSHLL_ALL(vshllb, false)
+DO_VSHLL_ALL(vshllt, true)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT(VSHRI_S, vshli_s, true)
 DO_2SHIFT(VSHRI_U, vshli_u, true)
 DO_2SHIFT(VRSHRI_S, vrshli_s, true)
 DO_2SHIFT(VRSHRI_U, vrshli_u, true)
+
+#define DO_VSHLL(INSN, FN)                                      \
+    static bool trans_##INSN(DisasContext *s, arg_2shift *a)    \
+    {                                                           \
+        static MVEGenTwoOpShiftFn * const fns[] = {             \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+        };                                                      \
+        return do_2shift(s, a, fns[a->size], false);            \
+    }
+
+DO_VSHLL(VSHLL_BS, vshllbs)
+DO_VSHLL(VSHLL_BU, vshllbu)
+DO_VSHLL(VSHLL_TS, vshllts)
+DO_VSHLL(VSHLL_TU, vshlltu)
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -XXX,XX +XXX,XX @@ endif
 # bti-2 tests PROT_BTI, so no special compiler support required.
 AARCH64_TESTS += bti-2
 
+# MTE Tests
+ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_MTE),)
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4
+mte-%: CFLAGS += -march=armv8.5-a+memtag
+endif
+
 # Semihosting smoke test for linux-user
 AARCH64_TESTS += semihosting
 run-semihosting: semihosting
diff --git a/tests/tcg/configure.sh b/tests/tcg/configure.sh
index XXXXXXX..XXXXXXX 100755
--- a/tests/tcg/configure.sh
+++ b/tests/tcg/configure.sh
@@ -XXX,XX +XXX,XX @@ for target in $target_list; do
                -mbranch-protection=standard -o $TMPE $TMPC; then
         echo "CROSS_CC_HAS_ARMV8_BTI=y" >> $config_target_mak
     fi
+    if do_compiler "$target_compiler" $target_compiler_cflags \
+       -march=armv8.5-a+memtag -o $TMPE $TMPC; then
+        echo "CROSS_CC_HAS_ARMV8_MTE=y" >> $config_target_mak
+    fi
     ;;
 esac
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

This is the prctl bit that controls whether syscalls accept tagged
addresses. See Documentation/arm64/tagged-address-abi.rst in the
linux kernel.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/aarch64/target_syscall.h |  4 ++++
 target/arm/cpu-param.h              |  3 +++
 target/arm/cpu.h                    | 31 +++++++++++++++++++++++++++++
 linux-user/syscall.c                | 24 ++++++++++++++++++++++
 4 files changed, 62 insertions(+)

Implement the MVE VSRI and VSLI insns, which perform a
shift-and-insert operation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-11-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      |  9 ++++++++
 target/arm/mve_helper.c    | 42 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  3 +++
 4 files changed, 62 insertions(+)
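A rough scalar sketch of the shift-and-insert semantics described above
(illustrative only: plain C, one element at a time, ignoring the MVE
predicate mask; the function names here are invented for the example):

#include <stdint.h>

/* VSRI-style: keep the top 'shift' bits of dst, fill the rest from
 * src shifted right. */
static uint16_t vsri16(uint16_t dst, uint16_t src, unsigned shift)
{
    uint16_t mask = 0xffffu >> shift;        /* bits written by src */
    return (dst & ~mask) | ((src >> shift) & mask);
}

/* VSLI-style: keep the low 'shift' bits of dst, fill the rest from
 * src shifted left. */
static uint16_t vsli16(uint16_t dst, uint16_t src, unsigned shift)
{
    uint16_t mask = (uint16_t)(0xffffu << shift);
    return (dst & ~mask) | ((src << shift) & mask);
}
/* e.g. vsri16(0xffff, 0x1234, 4) == 0xf123, vsli16(0xffff, 0x1234, 4) == 0x234f */

And a sketch of the user-facing side of the tagged-address prctl; the
constant values match the kernel ABI numbers also used by this patch,
but the wrapper function itself is only an illustration:

#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL 55
#define PR_GET_TAGGED_ADDR_CTRL 56
#define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
#endif

/* Returns 1 if the kernel (or qemu linux-user) now accepts tagged
 * pointers in syscalls, 0 if not, -1 on error. */
static int enable_tagged_addr(void)
{
    if (prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0)) {
        return -1;
    }
    return !!(prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0) & PR_TAGGED_ADDR_ENABLE);
}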
diff --git a/linux-user/aarch64/target_syscall.h b/linux-user/aarch64/target_syscall.h
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/linux-user/aarch64/target_syscall.h
16
--- a/target/arm/helper-mve.h
21
+++ b/linux-user/aarch64/target_syscall.h
17
+++ b/target/arm/helper-mve.h
22
@@ -XXX,XX +XXX,XX @@ struct target_pt_regs {
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vshlltsb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
# define TARGET_PR_PAC_APDBKEY (1 << 3)
19
DEF_HELPER_FLAGS_4(mve_vshlltsh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
# define TARGET_PR_PAC_APGAKEY (1 << 4)
20
DEF_HELPER_FLAGS_4(mve_vshlltub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
21
DEF_HELPER_FLAGS_4(mve_vshlltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+#define TARGET_PR_SET_TAGGED_ADDR_CTRL 55
27
+#define TARGET_PR_GET_TAGGED_ADDR_CTRL 56
28
+# define TARGET_PR_TAGGED_ADDR_ENABLE (1UL << 0)
29
+
22
+
30
#endif /* AARCH64_TARGET_SYSCALL_H */
23
+DEF_HELPER_FLAGS_4(mve_vsrib, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
24
+DEF_HELPER_FLAGS_4(mve_vsrih, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(mve_vsriw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+
27
+DEF_HELPER_FLAGS_4(mve_vslib, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(mve_vslih, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(mve_vsliw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
32
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpu-param.h
32
--- a/target/arm/mve.decode
34
+++ b/target/arm/cpu-param.h
33
+++ b/target/arm/mve.decode
35
@@ -XXX,XX +XXX,XX @@
34
@@ -XXX,XX +XXX,XX @@ VSHLL_TS 111 0 1110 1 . 1 .. ... ... 1 1111 0 1 . 0 ... 0 @2_shll_h
36
35
37
#ifdef CONFIG_USER_ONLY
36
VSHLL_TU 111 1 1110 1 . 1 .. ... ... 1 1111 0 1 . 0 ... 0 @2_shll_b
38
#define TARGET_PAGE_BITS 12
37
VSHLL_TU 111 1 1110 1 . 1 .. ... ... 1 1111 0 1 . 0 ... 0 @2_shll_h
39
+# ifdef TARGET_AARCH64
38
+
40
+# define TARGET_TAGGED_ADDRESSES
39
+# Shift-and-insert
41
+# endif
40
+VSRI 111 1 1111 1 . ... ... ... 0 0100 0 1 . 1 ... 0 @2_shr_b
42
#else
41
+VSRI 111 1 1111 1 . ... ... ... 0 0100 0 1 . 1 ... 0 @2_shr_h
43
/*
42
+VSRI 111 1 1111 1 . ... ... ... 0 0100 0 1 . 1 ... 0 @2_shr_w
44
* ARMv7 and later CPUs have 4K pages minimum, but ARMv5 and v6
43
+
45
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
44
+VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_b
45
+VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_h
46
+VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_w
47
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
46
index XXXXXXX..XXXXXXX 100644
48
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/cpu.h
49
--- a/target/arm/mve_helper.c
48
+++ b/target/arm/cpu.h
50
+++ b/target/arm/mve_helper.c
49
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
51
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP)
50
const struct arm_boot_info *boot_info;
52
DO_2SHIFT_U(vrshli_u, DO_VRSHLU)
51
/* Store GICv3CPUState to access from this struct */
53
DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
52
void *gicv3state;
54
55
+/* Shift-and-insert; we always work with 64 bits at a time */
56
+#define DO_2SHIFT_INSERT(OP, ESIZE, SHIFTFN, MASKFN) \
57
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
58
+ void *vm, uint32_t shift) \
59
+ { \
60
+ uint64_t *d = vd, *m = vm; \
61
+ uint16_t mask; \
62
+ uint64_t shiftmask; \
63
+ unsigned e; \
64
+ if (shift == 0 || shift == ESIZE * 8) { \
65
+ /* \
66
+ * Only VSLI can shift by 0; only VSRI can shift by <dt>. \
67
+ * The generic logic would give the right answer for 0 but \
68
+ * fails for <dt>. \
69
+ */ \
70
+ goto done; \
71
+ } \
72
+ assert(shift < ESIZE * 8); \
73
+ mask = mve_element_mask(env); \
74
+ /* ESIZE / 2 gives the MO_* value if ESIZE is in [1,2,4] */ \
75
+ shiftmask = dup_const(ESIZE / 2, MASKFN(ESIZE * 8, shift)); \
76
+ for (e = 0; e < 16 / 8; e++, mask >>= 8) { \
77
+ uint64_t r = (SHIFTFN(m[H8(e)], shift) & shiftmask) | \
78
+ (d[H8(e)] & ~shiftmask); \
79
+ mergemask(&d[H8(e)], r, mask); \
80
+ } \
81
+done: \
82
+ mve_advance_vpt(env); \
83
+ }
53
+
84
+
54
+#ifdef TARGET_TAGGED_ADDRESSES
85
+#define DO_SHL(N, SHIFT) ((N) << (SHIFT))
55
+ /* Linux syscall tagged address support */
86
+#define DO_SHR(N, SHIFT) ((N) >> (SHIFT))
56
+ bool tagged_addr_enable;
87
+#define SHL_MASK(EBITS, SHIFT) MAKE_64BIT_MASK((SHIFT), (EBITS) - (SHIFT))
57
+#endif
88
+#define SHR_MASK(EBITS, SHIFT) MAKE_64BIT_MASK(0, (EBITS) - (SHIFT))
58
} CPUARMState;
89
+
59
90
+DO_2SHIFT_INSERT(vsrib, 1, DO_SHR, SHR_MASK)
60
static inline void set_feature(CPUARMState *env, int feature)
91
+DO_2SHIFT_INSERT(vsrih, 2, DO_SHR, SHR_MASK)
61
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
92
+DO_2SHIFT_INSERT(vsriw, 4, DO_SHR, SHR_MASK)
62
*/
93
+DO_2SHIFT_INSERT(vslib, 1, DO_SHL, SHL_MASK)
63
#define PAGE_BTI PAGE_TARGET_1
94
+DO_2SHIFT_INSERT(vslih, 2, DO_SHL, SHL_MASK)
64
95
+DO_2SHIFT_INSERT(vsliw, 4, DO_SHL, SHL_MASK)
65
+#ifdef TARGET_TAGGED_ADDRESSES
66
+/**
67
+ * cpu_untagged_addr:
68
+ * @cs: CPU context
69
+ * @x: tagged address
70
+ *
71
+ * Remove any address tag from @x. This is explicitly related to the
72
+ * linux syscall TIF_TAGGED_ADDR setting, not TBI in general.
73
+ *
74
+ * There should be a better place to put this, but we need this in
75
+ * include/exec/cpu_ldst.h, and not some place linux-user specific.
76
+ */
77
+static inline target_ulong cpu_untagged_addr(CPUState *cs, target_ulong x)
78
+{
79
+ ARMCPU *cpu = ARM_CPU(cs);
80
+ if (cpu->env.tagged_addr_enable) {
81
+ /*
82
+ * TBI is enabled for userspace but not kernelspace addresses.
83
+ * Only clear the tag if bit 55 is clear.
84
+ */
85
+ x &= sextract64(x, 0, 56);
86
+ }
87
+ return x;
88
+}
89
+#endif
90
+
96
+
91
/*
97
/*
92
* Naming convention for isar_feature functions:
98
* Long shifts taking half-sized inputs from top or bottom of the input
93
* Functions which test 32-bit ID registers should have _aa32_ in
99
* vector and producing a double-width result. ESIZE, TYPE are for
94
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
100
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
95
index XXXXXXX..XXXXXXX 100644
101
index XXXXXXX..XXXXXXX 100644
96
--- a/linux-user/syscall.c
102
--- a/target/arm/translate-mve.c
97
+++ b/linux-user/syscall.c
103
+++ b/target/arm/translate-mve.c
98
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
104
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT(VSHRI_U, vshli_u, true)
99
}
105
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
100
}
106
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
101
return -TARGET_EINVAL;
107
102
+ case TARGET_PR_SET_TAGGED_ADDR_CTRL:
108
+DO_2SHIFT(VSRI, vsri, false)
103
+ {
109
+DO_2SHIFT(VSLI, vsli, false)
104
+ abi_ulong valid_mask = TARGET_PR_TAGGED_ADDR_ENABLE;
105
+ CPUARMState *env = cpu_env;
106
+
110
+
107
+ if ((arg2 & ~valid_mask) || arg3 || arg4 || arg5) {
111
#define DO_VSHLL(INSN, FN) \
108
+ return -TARGET_EINVAL;
112
static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
109
+ }
113
{ \
110
+ env->tagged_addr_enable = arg2 & TARGET_PR_TAGGED_ADDR_ENABLE;
111
+ return 0;
112
+ }
113
+ case TARGET_PR_GET_TAGGED_ADDR_CTRL:
114
+ {
115
+ abi_long ret = 0;
116
+ CPUARMState *env = cpu_env;
117
+
118
+ if (arg2 || arg3 || arg4 || arg5) {
119
+ return -TARGET_EINVAL;
120
+ }
121
+ if (env->tagged_addr_enable) {
122
+ ret |= TARGET_PR_TAGGED_ADDR_ENABLE;
123
+ }
124
+ return ret;
125
+ }
126
 #endif /* AARCH64 */
     case PR_GET_SECCOMP:
     case PR_SET_SECCOMP:
--
2.20.1
From: Doug Evans <dje@google.com>

Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Doug Evans <dje@google.com>
Message-id: 20210213002520.1374134-4-dje@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/npcm7xx_emc-test.c | 862 +++++++++++++++++++++++++++++++++
 tests/qtest/meson.build        |   1 +
 2 files changed, 863 insertions(+)
 create mode 100644 tests/qtest/npcm7xx_emc-test.c

Implement the MVE shift-right-and-narrow insns VSHRN and VRSHRN.

do_urshr() is borrowed from sve_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-12-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 10 ++++++++++
 target/arm/mve.decode      | 11 +++++++++++
 target/arm/mve_helper.c    | 40 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 15 ++++++++++++++
 4 files changed, 76 insertions(+)
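A scalar sketch of what the rounding narrow does (illustrative only; the
real helpers operate on whole vectors under the MVE predicate mask, and
the function names below are invented for the example):

#include <stdint.h>

/* Rounding unsigned right shift, as in do_urshr(): add back half of the
 * last bit shifted out. Assumes 1 <= sh <= 64. */
static uint64_t urshr(uint64_t x, unsigned sh)
{
    if (sh < 64) {
        return (x >> sh) + ((x >> (sh - 1)) & 1);
    }
    return x >> 63;
}

/* VRSHRNB/VRSHRNT idea for one element pair: shift a 16-bit source
 * element, then write the narrowed 8-bit result into the bottom (top == 0)
 * or top (top == 1) byte of the 16-bit destination element. */
static uint16_t vrshrn_16to8(uint16_t dest, uint16_t src, unsigned sh, int top)
{
    uint8_t n = (uint8_t)urshr(src, sh);
    return top ? (uint16_t)((dest & 0x00ff) | (n << 8))
               : (uint16_t)((dest & 0xff00) | n);
}
/* e.g. vrshrn_16to8(0, 0x0105, 3, 0) == 0x0021 (0x105 >> 3 = 0x20, plus rounding) */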
diff --git a/tests/qtest/npcm7xx_emc-test.c b/tests/qtest/npcm7xx_emc-test.c
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
16
new file mode 100644
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX
17
--- a/target/arm/helper-mve.h
18
--- /dev/null
18
+++ b/target/arm/helper-mve.h
19
+++ b/tests/qtest/npcm7xx_emc-test.c
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vsriw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
20
@@ -XXX,XX +XXX,XX @@
20
DEF_HELPER_FLAGS_4(mve_vslib, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_4(mve_vslih, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(mve_vsliw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
+
24
+DEF_HELPER_FLAGS_4(mve_vshrnbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(mve_vshrnbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(mve_vshrntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vshrnth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+
29
+DEF_HELPER_FLAGS_4(mve_vrshrnbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(mve_vrshrnbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(mve_vrshrntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(mve_vrshrnth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/mve.decode
36
+++ b/target/arm/mve.decode
37
@@ -XXX,XX +XXX,XX @@ VSRI 111 1 1111 1 . ... ... ... 0 0100 0 1 . 1 ... 0 @2_shr_w
38
VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_b
39
VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_h
40
VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_w
41
+
42
+# Narrowing shifts (which only support b and h sizes)
43
+VSHRNB 111 0 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_b
44
+VSHRNB 111 0 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_h
45
+VSHRNT 111 0 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_b
46
+VSHRNT 111 0 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_h
47
+
48
+VRSHRNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_b
49
+VRSHRNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_h
50
+VRSHRNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_b
51
+VRSHRNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_h
52
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/mve_helper.c
55
+++ b/target/arm/mve_helper.c
56
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_INSERT(vsliw, 4, DO_SHL, SHL_MASK)
57
58
DO_VSHLL_ALL(vshllb, false)
59
DO_VSHLL_ALL(vshllt, true)
60
+
21
+/*
61
+/*
22
+ * QTests for Nuvoton NPCM7xx EMC Modules.
62
+ * Narrowing right shifts, taking a double sized input, shifting it
23
+ *
63
+ * and putting the result in either the top or bottom half of the output.
24
+ * Copyright 2020 Google LLC
64
+ * ESIZE, TYPE are the output, and LESIZE, LTYPE the input.
25
+ *
26
+ * This program is free software; you can redistribute it and/or modify it
27
+ * under the terms of the GNU General Public License as published by the
28
+ * Free Software Foundation; either version 2 of the License, or
29
+ * (at your option) any later version.
30
+ *
31
+ * This program is distributed in the hope that it will be useful, but WITHOUT
32
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
33
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
34
+ * for more details.
35
+ */
65
+ */
36
+
66
+#define DO_VSHRN(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
37
+#include "qemu/osdep.h"
67
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
38
+#include "qemu-common.h"
68
+ void *vm, uint32_t shift) \
39
+#include "libqos/libqos.h"
69
+ { \
40
+#include "qapi/qmp/qdict.h"
70
+ LTYPE *m = vm; \
41
+#include "qapi/qmp/qnum.h"
71
+ TYPE *d = vd; \
42
+#include "qemu/bitops.h"
72
+ uint16_t mask = mve_element_mask(env); \
43
+#include "qemu/iov.h"
73
+ unsigned le; \
44
+
74
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
45
+/* Name of the emc device. */
75
+ TYPE r = FN(m[H##LESIZE(le)], shift); \
46
+#define TYPE_NPCM7XX_EMC "npcm7xx-emc"
76
+ mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
47
+
77
+ } \
48
+/* Timeout for various operations, in seconds. */
78
+ mve_advance_vpt(env); \
49
+#define TIMEOUT_SECONDS 10
50
+
51
+/* Address in memory of the descriptor. */
52
+#define DESC_ADDR (1 << 20) /* 1 MiB */
53
+
54
+/* Address in memory of the data packet. */
55
+#define DATA_ADDR (DESC_ADDR + 4096)
56
+
57
+#define CRC_LENGTH 4
58
+
59
+#define NUM_TX_DESCRIPTORS 3
60
+#define NUM_RX_DESCRIPTORS 2
61
+
62
+/* Size of tx,rx test buffers. */
63
+#define TX_DATA_LEN 64
64
+#define RX_DATA_LEN 64
65
+
66
+#define TX_STEP_COUNT 10000
67
+#define RX_STEP_COUNT 10000
68
+
69
+/* 32-bit register indices. */
70
+typedef enum NPCM7xxPWMRegister {
71
+ /* Control registers. */
72
+ REG_CAMCMR,
73
+ REG_CAMEN,
74
+
75
+ /* There are 16 CAMn[ML] registers. */
76
+ REG_CAMM_BASE,
77
+ REG_CAML_BASE,
78
+
79
+ REG_TXDLSA = 0x22,
80
+ REG_RXDLSA,
81
+ REG_MCMDR,
82
+ REG_MIID,
83
+ REG_MIIDA,
84
+ REG_FFTCR,
85
+ REG_TSDR,
86
+ REG_RSDR,
87
+ REG_DMARFC,
88
+ REG_MIEN,
89
+
90
+ /* Status registers. */
91
+ REG_MISTA,
92
+ REG_MGSTA,
93
+ REG_MPCNT,
94
+ REG_MRPC,
95
+ REG_MRPCC,
96
+ REG_MREPC,
97
+ REG_DMARFS,
98
+ REG_CTXDSA,
99
+ REG_CTXBSA,
100
+ REG_CRXDSA,
101
+ REG_CRXBSA,
102
+
103
+ NPCM7XX_NUM_EMC_REGS,
104
+} NPCM7xxPWMRegister;
105
+
106
+enum { NUM_CAMML_REGS = 16 };
107
+
108
+/* REG_CAMCMR fields */
109
+/* Enable CAM Compare */
110
+#define REG_CAMCMR_ECMP (1 << 4)
111
+/* Accept Unicast Packet */
112
+#define REG_CAMCMR_AUP (1 << 0)
113
+
114
+/* REG_MCMDR fields */
115
+/* Software Reset */
116
+#define REG_MCMDR_SWR (1 << 24)
117
+/* Frame Transmission On */
118
+#define REG_MCMDR_TXON (1 << 8)
119
+/* Accept Long Packet */
120
+#define REG_MCMDR_ALP (1 << 1)
121
+/* Frame Reception On */
122
+#define REG_MCMDR_RXON (1 << 0)
123
+
124
+/* REG_MIEN fields */
125
+/* Enable Transmit Completion Interrupt */
126
+#define REG_MIEN_ENTXCP (1 << 18)
127
+/* Enable Transmit Interrupt */
128
+#define REG_MIEN_ENTXINTR (1 << 16)
129
+/* Enable Receive Good Interrupt */
130
+#define REG_MIEN_ENRXGD (1 << 4)
131
+/* ENable Receive Interrupt */
132
+#define REG_MIEN_ENRXINTR (1 << 0)
133
+
134
+/* REG_MISTA fields */
135
+/* Transmit Bus Error Interrupt */
136
+#define REG_MISTA_TXBERR (1 << 24)
137
+/* Transmit Descriptor Unavailable Interrupt */
138
+#define REG_MISTA_TDU (1 << 23)
139
+/* Transmit Completion Interrupt */
140
+#define REG_MISTA_TXCP (1 << 18)
141
+/* Transmit Interrupt */
142
+#define REG_MISTA_TXINTR (1 << 16)
143
+/* Receive Bus Error Interrupt */
144
+#define REG_MISTA_RXBERR (1 << 11)
145
+/* Receive Descriptor Unavailable Interrupt */
146
+#define REG_MISTA_RDU (1 << 10)
147
+/* DMA Early Notification Interrupt */
148
+#define REG_MISTA_DENI (1 << 9)
149
+/* Maximum Frame Length Interrupt */
150
+#define REG_MISTA_DFOI (1 << 8)
151
+/* Receive Good Interrupt */
152
+#define REG_MISTA_RXGD (1 << 4)
153
+/* Packet Too Long Interrupt */
154
+#define REG_MISTA_PTLE (1 << 3)
155
+/* Receive Interrupt */
156
+#define REG_MISTA_RXINTR (1 << 0)
157
+
158
+typedef struct NPCM7xxEMCTxDesc NPCM7xxEMCTxDesc;
159
+typedef struct NPCM7xxEMCRxDesc NPCM7xxEMCRxDesc;
160
+
161
+struct NPCM7xxEMCTxDesc {
162
+ uint32_t flags;
163
+ uint32_t txbsa;
164
+ uint32_t status_and_length;
165
+ uint32_t ntxdsa;
166
+};
167
+
168
+struct NPCM7xxEMCRxDesc {
169
+ uint32_t status_and_length;
170
+ uint32_t rxbsa;
171
+ uint32_t reserved;
172
+ uint32_t nrxdsa;
173
+};
174
+
175
+/* NPCM7xxEMCTxDesc.flags values */
176
+/* Owner: 0 = cpu, 1 = emc */
177
+#define TX_DESC_FLAG_OWNER_MASK (1 << 31)
178
+/* Transmit interrupt enable */
179
+#define TX_DESC_FLAG_INTEN (1 << 2)
180
+
181
+/* NPCM7xxEMCTxDesc.status_and_length values */
182
+/* Transmission complete */
183
+#define TX_DESC_STATUS_TXCP (1 << 19)
184
+/* Transmit interrupt */
185
+#define TX_DESC_STATUS_TXINTR (1 << 16)
186
+
187
+/* NPCM7xxEMCRxDesc.status_and_length values */
188
+/* Owner: 0b00 = cpu, 0b10 = emc */
189
+#define RX_DESC_STATUS_OWNER_SHIFT 30
190
+#define RX_DESC_STATUS_OWNER_MASK 0xc0000000
191
+/* Frame Reception Complete */
192
+#define RX_DESC_STATUS_RXGD (1 << 20)
193
+/* Packet too long */
194
+#define RX_DESC_STATUS_PTLE (1 << 19)
195
+/* Receive Interrupt */
196
+#define RX_DESC_STATUS_RXINTR (1 << 16)
197
+
198
+#define RX_DESC_PKT_LEN(word) ((uint32_t) (word) & 0xffff)
199
+
200
+typedef struct EMCModule {
201
+ int rx_irq;
202
+ int tx_irq;
203
+ uint64_t base_addr;
204
+} EMCModule;
205
+
206
+typedef struct TestData {
207
+ const EMCModule *module;
208
+} TestData;
209
+
210
+static const EMCModule emc_module_list[] = {
211
+ {
212
+ .rx_irq = 15,
213
+ .tx_irq = 16,
214
+ .base_addr = 0xf0825000
215
+ },
216
+ {
217
+ .rx_irq = 114,
218
+ .tx_irq = 115,
219
+ .base_addr = 0xf0826000
220
+ }
221
+};
222
+
223
+/* Returns the index of the EMC module. */
224
+static int emc_module_index(const EMCModule *mod)
225
+{
226
+ ptrdiff_t diff = mod - emc_module_list;
227
+
228
+ g_assert_true(diff >= 0 && diff < ARRAY_SIZE(emc_module_list));
229
+
230
+ return diff;
231
+}
232
+
233
+static void packet_test_clear(void *sockets)
234
+{
235
+ int *test_sockets = sockets;
236
+
237
+ close(test_sockets[0]);
238
+ g_free(test_sockets);
239
+}
240
+
241
+static int *packet_test_init(int module_num, GString *cmd_line)
242
+{
243
+ int *test_sockets = g_new(int, 2);
244
+ int ret = socketpair(PF_UNIX, SOCK_STREAM, 0, test_sockets);
245
+ g_assert_cmpint(ret, != , -1);
246
+
247
+ /*
248
+ * KISS and use -nic. We specify two nics (both emc{0,1}) because there's
249
+ * currently no way to specify only emc1: The driver implicitly relies on
250
+ * emc[i] == nd_table[i].
251
+ */
252
+ if (module_num == 0) {
253
+ g_string_append_printf(cmd_line,
254
+ " -nic socket,fd=%d,model=" TYPE_NPCM7XX_EMC " "
255
+ " -nic user,model=" TYPE_NPCM7XX_EMC " ",
256
+ test_sockets[1]);
257
+ } else {
258
+ g_string_append_printf(cmd_line,
259
+ " -nic user,model=" TYPE_NPCM7XX_EMC " "
260
+ " -nic socket,fd=%d,model=" TYPE_NPCM7XX_EMC " ",
261
+ test_sockets[1]);
262
+ }
79
+ }
263
+
80
+
264
+ g_test_queue_destroy(packet_test_clear, test_sockets);
81
+#define DO_VSHRN_ALL(OP, FN) \
265
+ return test_sockets;
82
+ DO_VSHRN(OP##bb, false, 1, uint8_t, 2, uint16_t, FN) \
266
+}
83
+ DO_VSHRN(OP##bh, false, 2, uint16_t, 4, uint32_t, FN) \
84
+ DO_VSHRN(OP##tb, true, 1, uint8_t, 2, uint16_t, FN) \
85
+ DO_VSHRN(OP##th, true, 2, uint16_t, 4, uint32_t, FN)
267
+
86
+
268
+static uint32_t emc_read(QTestState *qts, const EMCModule *mod,
87
+static inline uint64_t do_urshr(uint64_t x, unsigned sh)
269
+ NPCM7xxPWMRegister regno)
270
+{
88
+{
271
+ return qtest_readl(qts, mod->base_addr + regno * sizeof(uint32_t));
89
+ if (likely(sh < 64)) {
272
+}
90
+ return (x >> sh) + ((x >> (sh - 1)) & 1);
273
+
91
+ } else if (sh == 64) {
274
+static void emc_write(QTestState *qts, const EMCModule *mod,
92
+ return x >> 63;
275
+ NPCM7xxPWMRegister regno, uint32_t value)
93
+ } else {
276
+{
94
+ return 0;
277
+ qtest_writel(qts, mod->base_addr + regno * sizeof(uint32_t), value);
278
+}
279
+
280
+static void emc_read_tx_desc(QTestState *qts, uint32_t addr,
281
+ NPCM7xxEMCTxDesc *desc)
282
+{
283
+ qtest_memread(qts, addr, desc, sizeof(*desc));
284
+ desc->flags = le32_to_cpu(desc->flags);
285
+ desc->txbsa = le32_to_cpu(desc->txbsa);
286
+ desc->status_and_length = le32_to_cpu(desc->status_and_length);
287
+ desc->ntxdsa = le32_to_cpu(desc->ntxdsa);
288
+}
289
+
290
+static void emc_write_tx_desc(QTestState *qts, const NPCM7xxEMCTxDesc *desc,
291
+ uint32_t addr)
292
+{
293
+ NPCM7xxEMCTxDesc le_desc;
294
+
295
+ le_desc.flags = cpu_to_le32(desc->flags);
296
+ le_desc.txbsa = cpu_to_le32(desc->txbsa);
297
+ le_desc.status_and_length = cpu_to_le32(desc->status_and_length);
298
+ le_desc.ntxdsa = cpu_to_le32(desc->ntxdsa);
299
+ qtest_memwrite(qts, addr, &le_desc, sizeof(le_desc));
300
+}
301
+
302
+static void emc_read_rx_desc(QTestState *qts, uint32_t addr,
303
+ NPCM7xxEMCRxDesc *desc)
304
+{
305
+ qtest_memread(qts, addr, desc, sizeof(*desc));
306
+ desc->status_and_length = le32_to_cpu(desc->status_and_length);
307
+ desc->rxbsa = le32_to_cpu(desc->rxbsa);
308
+ desc->reserved = le32_to_cpu(desc->reserved);
309
+ desc->nrxdsa = le32_to_cpu(desc->nrxdsa);
310
+}
311
+
312
+static void emc_write_rx_desc(QTestState *qts, const NPCM7xxEMCRxDesc *desc,
313
+ uint32_t addr)
314
+{
315
+ NPCM7xxEMCRxDesc le_desc;
316
+
317
+ le_desc.status_and_length = cpu_to_le32(desc->status_and_length);
318
+ le_desc.rxbsa = cpu_to_le32(desc->rxbsa);
319
+ le_desc.reserved = cpu_to_le32(desc->reserved);
320
+ le_desc.nrxdsa = cpu_to_le32(desc->nrxdsa);
321
+ qtest_memwrite(qts, addr, &le_desc, sizeof(le_desc));
322
+}
323
+
324
+/*
325
+ * Reset the EMC module.
326
+ * The module must be reset before, e.g., TXDLSA,RXDLSA are changed.
327
+ */
328
+static bool emc_soft_reset(QTestState *qts, const EMCModule *mod)
329
+{
330
+ uint32_t val;
331
+ uint64_t end_time;
332
+
333
+ emc_write(qts, mod, REG_MCMDR, REG_MCMDR_SWR);
334
+
335
+ /*
336
+ * Wait for device to reset as the linux driver does.
337
+ * During reset the AHB reads 0 for all registers. So first wait for
338
+ * something that resets to non-zero, and then wait for SWR becoming 0.
339
+ */
340
+ end_time = g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
341
+
342
+ do {
343
+ qtest_clock_step(qts, 100);
344
+ val = emc_read(qts, mod, REG_FFTCR);
345
+ } while (val == 0 && g_get_monotonic_time() < end_time);
346
+ if (val != 0) {
347
+ do {
348
+ qtest_clock_step(qts, 100);
349
+ val = emc_read(qts, mod, REG_MCMDR);
350
+ if ((val & REG_MCMDR_SWR) == 0) {
351
+ /*
352
+ * N.B. The CAMs have been reset here, so macaddr matching of
353
+ * incoming packets will not work.
354
+ */
355
+ return true;
356
+ }
357
+ } while (g_get_monotonic_time() < end_time);
358
+ }
359
+
360
+ g_message("%s: Timeout expired", __func__);
361
+ return false;
362
+}
363
+
364
+/* Check emc registers are reset to default value. */
365
+static void test_init(gconstpointer test_data)
366
+{
367
+ const TestData *td = test_data;
368
+ const EMCModule *mod = td->module;
369
+ QTestState *qts = qtest_init("-machine quanta-gsj");
370
+ int i;
371
+
372
+#define CHECK_REG(regno, value) \
373
+ do { \
374
+ g_assert_cmphex(emc_read(qts, mod, (regno)), ==, (value)); \
375
+ } while (0)
376
+
377
+ CHECK_REG(REG_CAMCMR, 0);
378
+ CHECK_REG(REG_CAMEN, 0);
379
+ CHECK_REG(REG_TXDLSA, 0xfffffffc);
380
+ CHECK_REG(REG_RXDLSA, 0xfffffffc);
381
+ CHECK_REG(REG_MCMDR, 0);
382
+ CHECK_REG(REG_MIID, 0);
383
+ CHECK_REG(REG_MIIDA, 0x00900000);
384
+ CHECK_REG(REG_FFTCR, 0x0101);
385
+ CHECK_REG(REG_DMARFC, 0x0800);
386
+ CHECK_REG(REG_MIEN, 0);
387
+ CHECK_REG(REG_MISTA, 0);
388
+ CHECK_REG(REG_MGSTA, 0);
389
+ CHECK_REG(REG_MPCNT, 0x7fff);
390
+ CHECK_REG(REG_MRPC, 0);
391
+ CHECK_REG(REG_MRPCC, 0);
392
+ CHECK_REG(REG_MREPC, 0);
393
+ CHECK_REG(REG_DMARFS, 0);
394
+ CHECK_REG(REG_CTXDSA, 0);
395
+ CHECK_REG(REG_CTXBSA, 0);
396
+ CHECK_REG(REG_CRXDSA, 0);
397
+ CHECK_REG(REG_CRXBSA, 0);
398
+
399
+#undef CHECK_REG
400
+
401
+ for (i = 0; i < NUM_CAMML_REGS; ++i) {
402
+ g_assert_cmpuint(emc_read(qts, mod, REG_CAMM_BASE + i * 2), ==,
403
+ 0);
404
+ g_assert_cmpuint(emc_read(qts, mod, REG_CAML_BASE + i * 2), ==,
405
+ 0);
406
+ }
407
+
408
+ qtest_quit(qts);
409
+}
410
+
411
+static bool emc_wait_irq(QTestState *qts, const EMCModule *mod, int step,
412
+ bool is_tx)
413
+{
414
+ uint64_t end_time =
415
+ g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
416
+
417
+ do {
418
+ if (qtest_get_irq(qts, is_tx ? mod->tx_irq : mod->rx_irq)) {
419
+ return true;
420
+ }
421
+ qtest_clock_step(qts, step);
422
+ } while (g_get_monotonic_time() < end_time);
423
+
424
+ g_message("%s: Timeout expired", __func__);
425
+ return false;
426
+}
427
+
428
+static bool emc_wait_mista(QTestState *qts, const EMCModule *mod, int step,
429
+ uint32_t flag)
430
+{
431
+ uint64_t end_time =
432
+ g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
433
+
434
+ do {
435
+ uint32_t mista = emc_read(qts, mod, REG_MISTA);
436
+ if (mista & flag) {
437
+ return true;
438
+ }
439
+ qtest_clock_step(qts, step);
440
+ } while (g_get_monotonic_time() < end_time);
441
+
442
+ g_message("%s: Timeout expired", __func__);
443
+ return false;
444
+}
445
+
446
+static bool wait_socket_readable(int fd)
447
+{
448
+ fd_set read_fds;
449
+ struct timeval tv;
450
+ int rv;
451
+
452
+ FD_ZERO(&read_fds);
453
+ FD_SET(fd, &read_fds);
454
+ tv.tv_sec = TIMEOUT_SECONDS;
455
+ tv.tv_usec = 0;
456
+ rv = select(fd + 1, &read_fds, NULL, NULL, &tv);
457
+ if (rv == -1) {
458
+ perror("select");
459
+ } else if (rv == 0) {
460
+ g_message("%s: Timeout expired", __func__);
461
+ }
462
+ return rv == 1;
463
+}
464
+
465
+/* Initialize *desc (in host endian format). */
466
+static void init_tx_desc(NPCM7xxEMCTxDesc *desc, size_t count,
467
+ uint32_t desc_addr)
468
+{
469
+ g_assert(count >= 2);
470
+ memset(&desc[0], 0, sizeof(*desc) * count);
471
+ /* Leave the last one alone, owned by the cpu -> stops transmission. */
472
+ for (size_t i = 0; i < count - 1; ++i) {
473
+ desc[i].flags =
474
+ (TX_DESC_FLAG_OWNER_MASK | /* owner = 1: emc */
475
+ TX_DESC_FLAG_INTEN |
476
+ 0 | /* crc append = 0 */
477
+ 0 /* padding enable = 0 */);
478
+ desc[i].status_and_length =
479
+ (0 | /* collision count = 0 */
480
+ 0 | /* SQE = 0 */
481
+ 0 | /* PAU = 0 */
482
+ 0 | /* TXHA = 0 */
483
+ 0 | /* LC = 0 */
484
+ 0 | /* TXABT = 0 */
485
+ 0 | /* NCS = 0 */
486
+ 0 | /* EXDEF = 0 */
487
+ 0 | /* TXCP = 0 */
488
+ 0 | /* DEF = 0 */
489
+ 0 | /* TXINTR = 0 */
490
+ 0 /* length filled in later */);
491
+ desc[i].ntxdsa = desc_addr + (i + 1) * sizeof(*desc);
492
+ }
95
+ }
493
+}
96
+}
494
+
97
+
495
+static void enable_tx(QTestState *qts, const EMCModule *mod,
98
+DO_VSHRN_ALL(vshrn, DO_SHR)
496
+ const NPCM7xxEMCTxDesc *desc, size_t count,
99
+DO_VSHRN_ALL(vrshrn, do_urshr)
497
+ uint32_t desc_addr, uint32_t mien_flags)
100
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
498
+{
101
index XXXXXXX..XXXXXXX 100644
499
+ /* Write the descriptors to guest memory. */
102
--- a/target/arm/translate-mve.c
500
+ for (size_t i = 0; i < count; ++i) {
103
+++ b/target/arm/translate-mve.c
501
+ emc_write_tx_desc(qts, desc + i, desc_addr + i * sizeof(*desc));
104
@@ -XXX,XX +XXX,XX @@ DO_VSHLL(VSHLL_BS, vshllbs)
105
DO_VSHLL(VSHLL_BU, vshllbu)
106
DO_VSHLL(VSHLL_TS, vshllts)
107
DO_VSHLL(VSHLL_TU, vshlltu)
108
+
109
+#define DO_2SHIFT_N(INSN, FN) \
110
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
111
+ { \
112
+ static MVEGenTwoOpShiftFn * const fns[] = { \
113
+ gen_helper_mve_##FN##b, \
114
+ gen_helper_mve_##FN##h, \
115
+ }; \
116
+ return do_2shift(s, a, fns[a->size], false); \
502
+ }
117
+ }
503
+
118
+
504
+ /* Trigger sending the packet. */
119
+DO_2SHIFT_N(VSHRNB, vshrnb)
505
+ /* The module must be reset before changing TXDLSA. */
120
+DO_2SHIFT_N(VSHRNT, vshrnt)
506
+ g_assert(emc_soft_reset(qts, mod));
121
+DO_2SHIFT_N(VRSHRNB, vrshrnb)
507
+ emc_write(qts, mod, REG_TXDLSA, desc_addr);
122
+DO_2SHIFT_N(VRSHRNT, vrshrnt)
508
+ emc_write(qts, mod, REG_CTXDSA, ~0);
509
+ emc_write(qts, mod, REG_MIEN, REG_MIEN_ENTXCP | mien_flags);
510
+ {
511
+ uint32_t mcmdr = emc_read(qts, mod, REG_MCMDR);
512
+ mcmdr |= REG_MCMDR_TXON;
513
+ emc_write(qts, mod, REG_MCMDR, mcmdr);
514
+ }
515
+
516
+ /* Prod the device to send the packet. */
517
+ emc_write(qts, mod, REG_TSDR, 1);
518
+}
519
+
520
+static void emc_send_verify1(QTestState *qts, const EMCModule *mod, int fd,
521
+ bool with_irq, uint32_t desc_addr,
522
+ uint32_t next_desc_addr,
523
+ const char *test_data, int test_size)
524
+{
525
+ NPCM7xxEMCTxDesc result_desc;
526
+ uint32_t expected_mask, expected_value, recv_len;
527
+ int ret;
528
+ char buffer[TX_DATA_LEN];
529
+
530
+ g_assert(wait_socket_readable(fd));
531
+
532
+ /* Read the descriptor back. */
533
+ emc_read_tx_desc(qts, desc_addr, &result_desc);
534
+ /* Descriptor should be owned by cpu now. */
535
+ g_assert((result_desc.flags & TX_DESC_FLAG_OWNER_MASK) == 0);
536
+ /* Test the status bits, ignoring the length field. */
537
+ expected_mask = 0xffff << 16;
538
+ expected_value = TX_DESC_STATUS_TXCP;
539
+ if (with_irq) {
540
+ expected_value |= TX_DESC_STATUS_TXINTR;
541
+ }
542
+ g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
543
+ expected_value);
544
+
545
+ /* Check data sent to the backend. */
546
+ recv_len = ~0;
547
+ ret = qemu_recv(fd, &recv_len, sizeof(recv_len), MSG_DONTWAIT);
548
+ g_assert_cmpint(ret, == , sizeof(recv_len));
549
+
550
+ g_assert(wait_socket_readable(fd));
551
+ memset(buffer, 0xff, sizeof(buffer));
552
+ ret = qemu_recv(fd, buffer, test_size, MSG_DONTWAIT);
553
+ g_assert_cmpmem(buffer, ret, test_data, test_size);
554
+}
555
+
556
+static void emc_send_verify(QTestState *qts, const EMCModule *mod, int fd,
557
+ bool with_irq)
558
+{
559
+ NPCM7xxEMCTxDesc desc[NUM_TX_DESCRIPTORS];
560
+ uint32_t desc_addr = DESC_ADDR;
561
+ static const char test1_data[] = "TEST1";
562
+ static const char test2_data[] = "Testing 1 2 3 ...";
563
+ uint32_t data1_addr = DATA_ADDR;
564
+ uint32_t data2_addr = data1_addr + sizeof(test1_data);
565
+ bool got_tdu;
566
+ uint32_t end_desc_addr;
567
+
568
+ /* Prepare test data buffer. */
569
+ qtest_memwrite(qts, data1_addr, test1_data, sizeof(test1_data));
570
+ qtest_memwrite(qts, data2_addr, test2_data, sizeof(test2_data));
571
+
572
+ init_tx_desc(&desc[0], NUM_TX_DESCRIPTORS, desc_addr);
573
+ desc[0].txbsa = data1_addr;
574
+ desc[0].status_and_length |= sizeof(test1_data);
575
+ desc[1].txbsa = data2_addr;
576
+ desc[1].status_and_length |= sizeof(test2_data);
577
+
578
+ enable_tx(qts, mod, &desc[0], NUM_TX_DESCRIPTORS, desc_addr,
579
+ with_irq ? REG_MIEN_ENTXINTR : 0);
580
+
581
+ /*
582
+ * It's problematic to observe the interrupt for each packet.
583
+ * Instead just wait until all the packets go out.
584
+ */
585
+ got_tdu = false;
586
+ while (!got_tdu) {
587
+ if (with_irq) {
588
+ g_assert_true(emc_wait_irq(qts, mod, TX_STEP_COUNT,
589
+ /*is_tx=*/true));
590
+ } else {
591
+ g_assert_true(emc_wait_mista(qts, mod, TX_STEP_COUNT,
592
+ REG_MISTA_TXINTR));
593
+ }
594
+ got_tdu = !!(emc_read(qts, mod, REG_MISTA) & REG_MISTA_TDU);
595
+ /* If we don't have TDU yet, reset the interrupt. */
596
+ if (!got_tdu) {
597
+ emc_write(qts, mod, REG_MISTA,
598
+ emc_read(qts, mod, REG_MISTA) & 0xffff0000);
599
+ }
600
+ }
601
+
602
+ end_desc_addr = desc_addr + 2 * sizeof(desc[0]);
603
+ g_assert_cmphex(emc_read(qts, mod, REG_CTXDSA), ==, end_desc_addr);
604
+ g_assert_cmphex(emc_read(qts, mod, REG_MISTA), ==,
605
+ REG_MISTA_TXCP | REG_MISTA_TXINTR | REG_MISTA_TDU);
606
+
607
+ emc_send_verify1(qts, mod, fd, with_irq,
608
+ desc_addr, end_desc_addr,
609
+ test1_data, sizeof(test1_data));
610
+ emc_send_verify1(qts, mod, fd, with_irq,
611
+ desc_addr + sizeof(desc[0]), end_desc_addr,
612
+ test2_data, sizeof(test2_data));
613
+}
614
+
615
+/* Initialize *desc (in host endian format). */
616
+static void init_rx_desc(NPCM7xxEMCRxDesc *desc, size_t count,
617
+ uint32_t desc_addr, uint32_t data_addr)
618
+{
619
+ g_assert_true(count >= 2);
620
+ memset(desc, 0, sizeof(*desc) * count);
621
+ desc[0].rxbsa = data_addr;
622
+ desc[0].status_and_length =
623
+ (0b10 << RX_DESC_STATUS_OWNER_SHIFT | /* owner = 10: emc */
624
+ 0 | /* RP = 0 */
625
+ 0 | /* ALIE = 0 */
626
+ 0 | /* RXGD = 0 */
627
+ 0 | /* PTLE = 0 */
628
+ 0 | /* CRCE = 0 */
629
+ 0 | /* RXINTR = 0 */
630
+ 0 /* length (filled in later) */);
631
+ /* Leave the last one alone, owned by the cpu -> stops transmission. */
632
+ desc[0].nrxdsa = desc_addr + sizeof(*desc);
633
+}
634
+
635
+static void enable_rx(QTestState *qts, const EMCModule *mod,
636
+ const NPCM7xxEMCRxDesc *desc, size_t count,
637
+ uint32_t desc_addr, uint32_t mien_flags,
638
+ uint32_t mcmdr_flags)
639
+{
640
+ /*
641
+ * Write the descriptor to guest memory.
642
+ * FWIW, IWBN if the docs said the buffer needs to be at least DMARFC
643
+ * bytes.
644
+ */
645
+ for (size_t i = 0; i < count; ++i) {
646
+ emc_write_rx_desc(qts, desc + i, desc_addr + i * sizeof(*desc));
647
+ }
648
+
649
+ /* Trigger receiving the packet. */
650
+ /* The module must be reset before changing RXDLSA. */
651
+ g_assert(emc_soft_reset(qts, mod));
652
+ emc_write(qts, mod, REG_RXDLSA, desc_addr);
653
+ emc_write(qts, mod, REG_MIEN, REG_MIEN_ENRXGD | mien_flags);
654
+
655
+ /*
656
+ * We don't know what the device's macaddr is, so just accept all
657
+ * unicast packets (AUP).
658
+ */
659
+ emc_write(qts, mod, REG_CAMCMR, REG_CAMCMR_AUP);
660
+ emc_write(qts, mod, REG_CAMEN, 1 << 0);
661
+ {
662
+ uint32_t mcmdr = emc_read(qts, mod, REG_MCMDR);
663
+ mcmdr |= REG_MCMDR_RXON | mcmdr_flags;
664
+ emc_write(qts, mod, REG_MCMDR, mcmdr);
665
+ }
666
+
667
+ /* Prod the device to accept a packet. */
668
+ emc_write(qts, mod, REG_RSDR, 1);
669
+}
670
+
671
+static void emc_recv_verify(QTestState *qts, const EMCModule *mod, int fd,
672
+ bool with_irq)
673
+{
674
+ NPCM7xxEMCRxDesc desc[NUM_RX_DESCRIPTORS];
675
+ uint32_t desc_addr = DESC_ADDR;
676
+ uint32_t data_addr = DATA_ADDR;
677
+ int ret;
678
+ uint32_t expected_mask, expected_value;
679
+ NPCM7xxEMCRxDesc result_desc;
680
+
681
+ /* Prepare test data buffer. */
682
+ const char test[RX_DATA_LEN] = "TEST";
683
+ int len = htonl(sizeof(test));
684
+ const struct iovec iov[] = {
685
+ {
686
+ .iov_base = &len,
687
+ .iov_len = sizeof(len),
688
+ },{
689
+ .iov_base = (char *) test,
690
+ .iov_len = sizeof(test),
691
+ },
692
+ };
693
+
694
+ /*
695
+ * Reset the device BEFORE sending a test packet, otherwise the packet
696
+ * may get swallowed by an active device of an earlier test.
697
+ */
698
+ init_rx_desc(&desc[0], NUM_RX_DESCRIPTORS, desc_addr, data_addr);
699
+ enable_rx(qts, mod, &desc[0], NUM_RX_DESCRIPTORS, desc_addr,
700
+ with_irq ? REG_MIEN_ENRXINTR : 0, 0);
701
+
702
+ /* Send test packet to device's socket. */
703
+ ret = iov_send(fd, iov, 2, 0, sizeof(len) + sizeof(test));
704
+ g_assert_cmpint(ret, == , sizeof(test) + sizeof(len));
705
+
706
+ /* Wait for RX interrupt. */
707
+ if (with_irq) {
708
+ g_assert_true(emc_wait_irq(qts, mod, RX_STEP_COUNT, /*is_tx=*/false));
709
+ } else {
710
+ g_assert_true(emc_wait_mista(qts, mod, RX_STEP_COUNT, REG_MISTA_RXGD));
711
+ }
712
+
713
+ g_assert_cmphex(emc_read(qts, mod, REG_CRXDSA), ==,
714
+ desc_addr + sizeof(desc[0]));
715
+
716
+ expected_mask = 0xffff;
717
+ expected_value = (REG_MISTA_DENI |
718
+ REG_MISTA_RXGD |
719
+ REG_MISTA_RXINTR);
720
+ g_assert_cmphex((emc_read(qts, mod, REG_MISTA) & expected_mask),
721
+ ==, expected_value);
722
+
723
+ /* Read the descriptor back. */
724
+ emc_read_rx_desc(qts, desc_addr, &result_desc);
725
+ /* Descriptor should be owned by cpu now. */
726
+ g_assert((result_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK) == 0);
727
+ /* Test the status bits, ignoring the length field. */
728
+ expected_mask = 0xffff << 16;
729
+ expected_value = RX_DESC_STATUS_RXGD;
730
+ if (with_irq) {
731
+ expected_value |= RX_DESC_STATUS_RXINTR;
732
+ }
733
+ g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
734
+ expected_value);
735
+ g_assert_cmpint(RX_DESC_PKT_LEN(result_desc.status_and_length), ==,
736
+ RX_DATA_LEN + CRC_LENGTH);
737
+
738
+ {
739
+ char buffer[RX_DATA_LEN];
740
+ qtest_memread(qts, data_addr, buffer, sizeof(buffer));
741
+ g_assert_cmpstr(buffer, == , "TEST");
742
+ }
743
+}
744
+
745
+static void emc_test_ptle(QTestState *qts, const EMCModule *mod, int fd)
746
+{
747
+ NPCM7xxEMCRxDesc desc[NUM_RX_DESCRIPTORS];
748
+ uint32_t desc_addr = DESC_ADDR;
749
+ uint32_t data_addr = DATA_ADDR;
750
+ int ret;
751
+ NPCM7xxEMCRxDesc result_desc;
752
+ uint32_t expected_mask, expected_value;
753
+
754
+ /* Prepare test data buffer. */
755
+#define PTLE_DATA_LEN 1600
756
+ char test_data[PTLE_DATA_LEN];
757
+ int len = htonl(sizeof(test_data));
758
+ const struct iovec iov[] = {
759
+ {
760
+ .iov_base = &len,
761
+ .iov_len = sizeof(len),
762
+ },{
763
+ .iov_base = (char *) test_data,
764
+ .iov_len = sizeof(test_data),
765
+ },
766
+ };
767
+ memset(test_data, 42, sizeof(test_data));
768
+
769
+ /*
770
+ * Reset the device BEFORE sending a test packet, otherwise the packet
771
+ * may get swallowed by an active device of an earlier test.
772
+ */
773
+ init_rx_desc(&desc[0], NUM_RX_DESCRIPTORS, desc_addr, data_addr);
774
+ enable_rx(qts, mod, &desc[0], NUM_RX_DESCRIPTORS, desc_addr,
775
+ REG_MIEN_ENRXINTR, REG_MCMDR_ALP);
776
+
777
+ /* Send test packet to device's socket. */
778
+ ret = iov_send(fd, iov, 2, 0, sizeof(len) + sizeof(test_data));
779
+ g_assert_cmpint(ret, == , sizeof(test_data) + sizeof(len));
780
+
781
+ /* Wait for RX interrupt. */
782
+ g_assert_true(emc_wait_irq(qts, mod, RX_STEP_COUNT, /*is_tx=*/false));
783
+
784
+ /* Read the descriptor back. */
785
+ emc_read_rx_desc(qts, desc_addr, &result_desc);
786
+ /* Descriptor should be owned by cpu now. */
787
+ g_assert((result_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK) == 0);
788
+ /* Test the status bits, ignoring the length field. */
789
+ expected_mask = 0xffff << 16;
790
+ expected_value = (RX_DESC_STATUS_RXGD |
791
+ RX_DESC_STATUS_PTLE |
792
+ RX_DESC_STATUS_RXINTR);
793
+ g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
794
+ expected_value);
795
+ g_assert_cmpint(RX_DESC_PKT_LEN(result_desc.status_and_length), ==,
796
+ PTLE_DATA_LEN + CRC_LENGTH);
797
+
798
+ {
799
+ char buffer[PTLE_DATA_LEN];
800
+ qtest_memread(qts, data_addr, buffer, sizeof(buffer));
801
+ g_assert(memcmp(buffer, test_data, PTLE_DATA_LEN) == 0);
802
+ }
803
+}
804
+
805
+static void test_tx(gconstpointer test_data)
806
+{
807
+ const TestData *td = test_data;
808
+ GString *cmd_line = g_string_new("-machine quanta-gsj");
809
+ int *test_sockets = packet_test_init(emc_module_index(td->module),
810
+ cmd_line);
811
+ QTestState *qts = qtest_init(cmd_line->str);
812
+
813
+ /*
814
+ * TODO: For pedantic correctness test_sockets[0] should be closed after
815
+ * the fork and before the exec, but that will require some harness
816
+ * improvements.
817
+ */
818
+ close(test_sockets[1]);
819
+ /* Defensive programming */
820
+ test_sockets[1] = -1;
821
+
822
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
823
+
824
+ emc_send_verify(qts, td->module, test_sockets[0], /*with_irq=*/false);
825
+ emc_send_verify(qts, td->module, test_sockets[0], /*with_irq=*/true);
826
+
827
+ qtest_quit(qts);
828
+}
829
+
830
+static void test_rx(gconstpointer test_data)
831
+{
832
+ const TestData *td = test_data;
833
+ GString *cmd_line = g_string_new("-machine quanta-gsj");
834
+ int *test_sockets = packet_test_init(emc_module_index(td->module),
835
+ cmd_line);
836
+ QTestState *qts = qtest_init(cmd_line->str);
837
+
838
+ /*
839
+ * TODO: For pedantic correctness test_sockets[0] should be closed after
840
+ * the fork and before the exec, but that will require some harness
841
+ * improvements.
842
+ */
843
+ close(test_sockets[1]);
844
+ /* Defensive programming */
845
+ test_sockets[1] = -1;
846
+
847
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
848
+
849
+ emc_recv_verify(qts, td->module, test_sockets[0], /*with_irq=*/false);
850
+ emc_recv_verify(qts, td->module, test_sockets[0], /*with_irq=*/true);
851
+ emc_test_ptle(qts, td->module, test_sockets[0]);
852
+
853
+ qtest_quit(qts);
854
+}
855
+
856
+static void emc_add_test(const char *name, const TestData* td,
857
+ GTestDataFunc fn)
858
+{
859
+ g_autofree char *full_name = g_strdup_printf(
860
+ "npcm7xx_emc/emc[%d]/%s", emc_module_index(td->module), name);
861
+ qtest_add_data_func(full_name, td, fn);
862
+}
863
+#define add_test(name, td) emc_add_test(#name, td, test_##name)
864
+
865
+int main(int argc, char **argv)
866
+{
867
+ TestData test_data_list[ARRAY_SIZE(emc_module_list)];
868
+
869
+ g_test_init(&argc, &argv, NULL);
870
+
871
+ for (int i = 0; i < ARRAY_SIZE(emc_module_list); ++i) {
872
+ TestData *td = &test_data_list[i];
873
+
874
+ td->module = &emc_module_list[i];
875
+
876
+ add_test(init, td);
877
+ add_test(tx, td);
878
+ add_test(rx, td);
879
+ }
880
+
881
+ return g_test_run();
882
+}
883
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
884
index XXXXXXX..XXXXXXX 100644
885
--- a/tests/qtest/meson.build
886
+++ b/tests/qtest/meson.build
887
@@ -XXX,XX +XXX,XX @@ qtests_sparc64 = \
888
889
qtests_npcm7xx = \
890
['npcm7xx_adc-test',
891
+ 'npcm7xx_emc-test',
892
'npcm7xx_gpio-test',
893
'npcm7xx_pwm-test',
894
'npcm7xx_rng-test',
895
--
123
--
896
2.20.1
124
2.20.1
897
125
898
126
1
From: Hao Wu <wuhaotsh@google.com>
1
Implement the MVE saturating shift-right-and-narrow insns
2
2
VQSHRN, VQSHRUN, VQRSHRN and VQRSHRUN.
3
This patch implements the FIFO mode of the SMBus module. In FIFO, the
3
4
user transmits or receives at most 16 bytes at a time. The FIFO mode
4
do_srshr() is borrowed from sve_helper.c.
5
allows the module to transmit large amount of data faster than single
5
6
byte mode.
7
8
Since we only added the device in a patch that is only a few commits
away in the same patch set, we do not increase the VMstate version
number in this special case.
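A sketch of what the 16-byte FIFO granularity means for a user of the
module (purely illustrative; the helper functions below are made-up
stand-ins, and only the FIFO size corresponds to a real define,
NPCM7XX_SMBUS_FIFO_SIZE, from this patch):

#include <stddef.h>
#include <stdint.h>

#define SMBUS_FIFO_SIZE 16              /* NPCM7XX_SMBUS_FIFO_SIZE */

/* Hypothetical stand-ins for the real register accesses. */
static void fifo_write_byte(uint8_t b) { (void)b; }
static void wait_tx_fifo_empty(void) { }

/* Transmit a buffer in chunks of at most 16 bytes per FIFO fill,
 * instead of taking an interrupt for every single byte. */
static void smbus_send(const uint8_t *buf, size_t len)
{
    while (len > 0) {
        size_t chunk = len < SMBUS_FIFO_SIZE ? len : SMBUS_FIFO_SIZE;
        for (size_t i = 0; i < chunk; i++) {
            fifo_write_byte(buf[i]);
        }
        wait_tx_fifo_empty();
        buf += chunk;
        len -= chunk;
    }
}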
11
12
Reviewed-by: Doug Evans<dje@google.com>
13
Reviewed-by: Tyrong Ting<kfting@nuvoton.com>
14
Signed-off-by: Hao Wu <wuhaotsh@google.com>
15
Reviewed-by: Corey Minyard <cminyard@mvista.com>
16
Message-id: 20210210220426.3577804-6-wuhaotsh@google.com
17
Acked-by: Corey Minyard <cminyard@mvista.com>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-13-peter.maydell@linaro.org
19
---
9
---
20
include/hw/i2c/npcm7xx_smbus.h | 25 +++
10
target/arm/helper-mve.h | 30 +++++++++++
21
hw/i2c/npcm7xx_smbus.c | 342 +++++++++++++++++++++++++++++--
11
target/arm/mve.decode | 28 ++++++++++
22
tests/qtest/npcm7xx_smbus-test.c | 149 +++++++++++++-
12
target/arm/mve_helper.c | 104 +++++++++++++++++++++++++++++++++++++
23
hw/i2c/trace-events | 1 +
13
target/arm/translate-mve.c | 12 +++++
24
4 files changed, 501 insertions(+), 16 deletions(-)
14
4 files changed, 174 insertions(+)
25
15
26
diff --git a/include/hw/i2c/npcm7xx_smbus.h b/include/hw/i2c/npcm7xx_smbus.h
16
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
27
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
28
--- a/include/hw/i2c/npcm7xx_smbus.h
18
--- a/target/arm/helper-mve.h
29
+++ b/include/hw/i2c/npcm7xx_smbus.h
19
+++ b/target/arm/helper-mve.h
30
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrshrnbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
*/
21
DEF_HELPER_FLAGS_4(mve_vrshrnbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
#define NPCM7XX_SMBUS_NR_ADDRS 10
22
DEF_HELPER_FLAGS_4(mve_vrshrntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
23
DEF_HELPER_FLAGS_4(mve_vrshrnth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
+/* Size of the FIFO buffer. */
24
+
35
+#define NPCM7XX_SMBUS_FIFO_SIZE 16
25
+DEF_HELPER_FLAGS_4(mve_vqshrnb_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
+
26
+DEF_HELPER_FLAGS_4(mve_vqshrnb_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
typedef enum NPCM7xxSMBusStatus {
27
+DEF_HELPER_FLAGS_4(mve_vqshrnt_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
38
NPCM7XX_SMBUS_STATUS_IDLE,
28
+DEF_HELPER_FLAGS_4(mve_vqshrnt_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
NPCM7XX_SMBUS_STATUS_SENDING,
29
+
40
@@ -XXX,XX +XXX,XX @@ typedef enum NPCM7xxSMBusStatus {
30
+DEF_HELPER_FLAGS_4(mve_vqshrnb_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
* @addr: The SMBus module's own addresses on the I2C bus.
31
+DEF_HELPER_FLAGS_4(mve_vqshrnb_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
42
* @scllt: The SCL low time register.
32
+DEF_HELPER_FLAGS_4(mve_vqshrnt_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
43
* @sclht: The SCL high time register.
33
+DEF_HELPER_FLAGS_4(mve_vqshrnt_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
44
+ * @fif_ctl: The FIFO control register.
34
+
45
+ * @fif_cts: The FIFO control status register.
35
+DEF_HELPER_FLAGS_4(mve_vqshrunbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
46
+ * @fair_per: The fair preriod register.
36
+DEF_HELPER_FLAGS_4(mve_vqshrunbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
47
+ * @txf_ctl: The transmit FIFO control register.
37
+DEF_HELPER_FLAGS_4(mve_vqshruntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
48
+ * @t_out: The SMBus timeout register.
38
+DEF_HELPER_FLAGS_4(mve_vqshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
49
+ * @txf_sts: The transmit FIFO status register.
39
+
50
+ * @rxf_sts: The receive FIFO status register.
40
+DEF_HELPER_FLAGS_4(mve_vqrshrnb_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
51
+ * @rxf_ctl: The receive FIFO control register.
41
+DEF_HELPER_FLAGS_4(mve_vqrshrnb_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
52
+ * @rx_fifo: The FIFO buffer for receiving in FIFO mode.
42
+DEF_HELPER_FLAGS_4(mve_vqrshrnt_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
53
+ * @rx_cur: The current position of rx_fifo.
43
+DEF_HELPER_FLAGS_4(mve_vqrshrnt_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
54
* @status: The current status of the SMBus.
44
+
55
*/
45
+DEF_HELPER_FLAGS_4(mve_vqrshrnb_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
56
typedef struct NPCM7xxSMBusState {
46
+DEF_HELPER_FLAGS_4(mve_vqrshrnb_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
57
@@ -XXX,XX +XXX,XX @@ typedef struct NPCM7xxSMBusState {
47
+DEF_HELPER_FLAGS_4(mve_vqrshrnt_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
58
uint8_t scllt;
48
+DEF_HELPER_FLAGS_4(mve_vqrshrnt_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
59
uint8_t sclht;
49
+
60
50
+DEF_HELPER_FLAGS_4(mve_vqrshrunbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
61
+ uint8_t fif_ctl;
51
+DEF_HELPER_FLAGS_4(mve_vqrshrunbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
62
+ uint8_t fif_cts;
52
+DEF_HELPER_FLAGS_4(mve_vqrshruntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
63
+ uint8_t fair_per;
53
+DEF_HELPER_FLAGS_4(mve_vqrshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
64
+ uint8_t txf_ctl;
54
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
65
+ uint8_t t_out;
55
index XXXXXXX..XXXXXXX 100644
66
+ uint8_t txf_sts;
56
--- a/target/arm/mve.decode
67
+ uint8_t rxf_sts;
57
+++ b/target/arm/mve.decode
68
+ uint8_t rxf_ctl;
58
@@ -XXX,XX +XXX,XX @@ VRSHRNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_b
69
+
59
VRSHRNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_h
70
+ uint8_t rx_fifo[NPCM7XX_SMBUS_FIFO_SIZE];
60
VRSHRNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_b
71
+ uint8_t rx_cur;
61
VRSHRNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_h
72
+
62
+
73
NPCM7xxSMBusStatus status;
63
+VQSHRNB_S 111 0 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 0 @2_shr_b
74
} NPCM7xxSMBusState;
64
+VQSHRNB_S 111 0 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 0 @2_shr_h
75
65
+VQSHRNT_S 111 0 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 0 @2_shr_b
76
diff --git a/hw/i2c/npcm7xx_smbus.c b/hw/i2c/npcm7xx_smbus.c
66
+VQSHRNT_S 111 0 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 0 @2_shr_h
77
index XXXXXXX..XXXXXXX 100644
67
+VQSHRNB_U 111 1 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 0 @2_shr_b
78
--- a/hw/i2c/npcm7xx_smbus.c
68
+VQSHRNB_U 111 1 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 0 @2_shr_h
79
+++ b/hw/i2c/npcm7xx_smbus.c
69
+VQSHRNT_U 111 1 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 0 @2_shr_b
80
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxSMBusBank1Register {
70
+VQSHRNT_U 111 1 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 0 @2_shr_h
81
#define NPCM7XX_ADDR_EN BIT(7)
71
+
82
#define NPCM7XX_ADDR_A(rv) extract8((rv), 0, 6)
72
+VQSHRUNB 111 0 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_b
83
73
+VQSHRUNB 111 0 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_h
84
+/* FIFO Mode Register Fields */
74
+VQSHRUNT 111 0 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
85
+/* FIF_CTL fields */
75
+VQSHRUNT 111 0 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h
86
+#define NPCM7XX_SMBFIF_CTL_FIFO_EN BIT(4)
76
+
87
+#define NPCM7XX_SMBFIF_CTL_FAIR_RDY_IE BIT(2)
77
+VQRSHRNB_S 111 0 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 1 @2_shr_b
88
+#define NPCM7XX_SMBFIF_CTL_FAIR_RDY BIT(1)
78
+VQRSHRNB_S 111 0 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 1 @2_shr_h
89
+#define NPCM7XX_SMBFIF_CTL_FAIR_BUSY BIT(0)
79
+VQRSHRNT_S 111 0 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 1 @2_shr_b
90
+/* FIF_CTS fields */
80
+VQRSHRNT_S 111 0 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 1 @2_shr_h
91
+#define NPCM7XX_SMBFIF_CTS_STR BIT(7)
81
+VQRSHRNB_U 111 1 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 1 @2_shr_b
92
+#define NPCM7XX_SMBFIF_CTS_CLR_FIFO BIT(6)
82
+VQRSHRNB_U 111 1 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 1 @2_shr_h
93
+#define NPCM7XX_SMBFIF_CTS_RFTE_IE BIT(3)
83
+VQRSHRNT_U 111 1 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 1 @2_shr_b
94
+#define NPCM7XX_SMBFIF_CTS_RXF_TXE BIT(1)
84
+VQRSHRNT_U 111 1 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 1 @2_shr_h
95
+/* TXF_CTL fields */
85
+
96
+#define NPCM7XX_SMBTXF_CTL_THR_TXIE BIT(6)
86
+VQRSHRUNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_b
97
+#define NPCM7XX_SMBTXF_CTL_TX_THR(rv) extract8((rv), 0, 5)
87
+VQRSHRUNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_h
98
+/* T_OUT fields */
88
+VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
99
+#define NPCM7XX_SMBT_OUT_ST BIT(7)
89
+VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h
100
+#define NPCM7XX_SMBT_OUT_IE BIT(6)
90
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
101
+#define NPCM7XX_SMBT_OUT_CLKDIV(rv) extract8((rv), 0, 6)
91
index XXXXXXX..XXXXXXX 100644
102
+/* TXF_STS fields */
92
--- a/target/arm/mve_helper.c
103
+#define NPCM7XX_SMBTXF_STS_TX_THST BIT(6)
93
+++ b/target/arm/mve_helper.c
104
+#define NPCM7XX_SMBTXF_STS_TX_BYTES(rv) extract8((rv), 0, 5)
94
@@ -XXX,XX +XXX,XX @@ static inline uint64_t do_urshr(uint64_t x, unsigned sh)
105
+/* RXF_STS fields */
106
+#define NPCM7XX_SMBRXF_STS_RX_THST BIT(6)
107
+#define NPCM7XX_SMBRXF_STS_RX_BYTES(rv) extract8((rv), 0, 5)
108
+/* RXF_CTL fields */
109
+#define NPCM7XX_SMBRXF_CTL_THR_RXIE BIT(6)
110
+#define NPCM7XX_SMBRXF_CTL_LAST BIT(5)
111
+#define NPCM7XX_SMBRXF_CTL_RX_THR(rv) extract8((rv), 0, 5)
112
+
113
#define KEEP_OLD_BIT(o, n, b) (((n) & (~(b))) | ((o) & (b)))
114
#define WRITE_ONE_CLEAR(o, n, b) ((n) & (b) ? (o) & (~(b)) : (o))
115
116
#define NPCM7XX_SMBUS_ENABLED(s) ((s)->ctl2 & NPCM7XX_SMBCTL2_ENABLE)
117
+#define NPCM7XX_SMBUS_FIFO_ENABLED(s) ((s)->fif_ctl & \
118
+ NPCM7XX_SMBFIF_CTL_FIFO_EN)
119
120
/* VERSION fields values, read-only. */
121
#define NPCM7XX_SMBUS_VERSION_NUMBER 1
122
-#define NPCM7XX_SMBUS_VERSION_FIFO_SUPPORTED 0
123
+#define NPCM7XX_SMBUS_VERSION_FIFO_SUPPORTED 1
124
125
/* Reset values */
126
#define NPCM7XX_SMB_ST_INIT_VAL 0x00
127
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxSMBusBank1Register {
128
#define NPCM7XX_SMB_ADDR_INIT_VAL 0x00
129
#define NPCM7XX_SMB_SCLLT_INIT_VAL 0x00
130
#define NPCM7XX_SMB_SCLHT_INIT_VAL 0x00
131
+#define NPCM7XX_SMB_FIF_CTL_INIT_VAL 0x00
132
+#define NPCM7XX_SMB_FIF_CTS_INIT_VAL 0x00
133
+#define NPCM7XX_SMB_FAIR_PER_INIT_VAL 0x00
134
+#define NPCM7XX_SMB_TXF_CTL_INIT_VAL 0x00
135
+#define NPCM7XX_SMB_T_OUT_INIT_VAL 0x3f
136
+#define NPCM7XX_SMB_TXF_STS_INIT_VAL 0x00
137
+#define NPCM7XX_SMB_RXF_STS_INIT_VAL 0x00
138
+#define NPCM7XX_SMB_RXF_CTL_INIT_VAL 0x01
139
140
static uint8_t npcm7xx_smbus_get_version(void)
141
{
142
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_update_irq(NPCM7xxSMBusState *s)
143
(s->ctl1 & NPCM7XX_SMBCTL1_STASTRE &&
144
s->st & NPCM7XX_SMBST_SDAST) ||
145
(s->ctl1 & NPCM7XX_SMBCTL1_EOBINTE &&
146
- s->cst3 & NPCM7XX_SMBCST3_EO_BUSY));
147
+ s->cst3 & NPCM7XX_SMBCST3_EO_BUSY) ||
148
+ (s->rxf_ctl & NPCM7XX_SMBRXF_CTL_THR_RXIE &&
149
+ s->rxf_sts & NPCM7XX_SMBRXF_STS_RX_THST) ||
150
+ (s->txf_ctl & NPCM7XX_SMBTXF_CTL_THR_TXIE &&
151
+ s->txf_sts & NPCM7XX_SMBTXF_STS_TX_THST) ||
152
+ (s->fif_cts & NPCM7XX_SMBFIF_CTS_RFTE_IE &&
153
+ s->fif_cts & NPCM7XX_SMBFIF_CTS_RXF_TXE));
154
155
if (level) {
156
s->cst2 |= NPCM7XX_SMBCST2_INTSTS;
157
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_nack(NPCM7xxSMBusState *s)
158
s->status = NPCM7XX_SMBUS_STATUS_NEGACK;
159
}
160
161
+static void npcm7xx_smbus_clear_buffer(NPCM7xxSMBusState *s)
162
+{
163
+ s->fif_cts &= ~NPCM7XX_SMBFIF_CTS_RXF_TXE;
164
+ s->txf_sts = 0;
165
+ s->rxf_sts = 0;
166
+}
167
+
168
static void npcm7xx_smbus_send_byte(NPCM7xxSMBusState *s, uint8_t value)
169
{
170
int rv = i2c_send(s->bus, value);
171
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_send_byte(NPCM7xxSMBusState *s, uint8_t value)
172
npcm7xx_smbus_nack(s);
173
} else {
174
s->st |= NPCM7XX_SMBST_SDAST;
175
+ if (NPCM7XX_SMBUS_FIFO_ENABLED(s)) {
176
+ s->fif_cts |= NPCM7XX_SMBFIF_CTS_RXF_TXE;
177
+ if (NPCM7XX_SMBTXF_STS_TX_BYTES(s->txf_sts) ==
178
+ NPCM7XX_SMBTXF_CTL_TX_THR(s->txf_ctl)) {
179
+ s->txf_sts = NPCM7XX_SMBTXF_STS_TX_THST;
180
+ } else {
181
+ s->txf_sts = 0;
182
+ }
183
+ }
184
}
185
trace_npcm7xx_smbus_send_byte((DEVICE(s)->canonical_path), value, !rv);
186
npcm7xx_smbus_update_irq(s);
187
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_recv_byte(NPCM7xxSMBusState *s)
188
npcm7xx_smbus_update_irq(s);
189
}
190
191
+static void npcm7xx_smbus_recv_fifo(NPCM7xxSMBusState *s)
192
+{
193
+ uint8_t expected_bytes = NPCM7XX_SMBRXF_CTL_RX_THR(s->rxf_ctl);
194
+ uint8_t received_bytes = NPCM7XX_SMBRXF_STS_RX_BYTES(s->rxf_sts);
195
+ uint8_t pos;
196
+
197
+ if (received_bytes == expected_bytes) {
198
+ return;
199
+ }
200
+
201
+ while (received_bytes < expected_bytes &&
202
+ received_bytes < NPCM7XX_SMBUS_FIFO_SIZE) {
203
+ pos = (s->rx_cur + received_bytes) % NPCM7XX_SMBUS_FIFO_SIZE;
204
+ s->rx_fifo[pos] = i2c_recv(s->bus);
205
+ trace_npcm7xx_smbus_recv_byte((DEVICE(s)->canonical_path),
206
+ s->rx_fifo[pos]);
207
+ ++received_bytes;
208
+ }
209
+
210
+ trace_npcm7xx_smbus_recv_fifo((DEVICE(s)->canonical_path),
211
+ received_bytes, expected_bytes);
212
+ s->rxf_sts = received_bytes;
213
+ if (unlikely(received_bytes < expected_bytes)) {
214
+ qemu_log_mask(LOG_GUEST_ERROR,
215
+ "%s: invalid rx_thr value: 0x%02x\n",
216
+ DEVICE(s)->canonical_path, expected_bytes);
217
+ return;
218
+ }
219
+
220
+ s->rxf_sts |= NPCM7XX_SMBRXF_STS_RX_THST;
221
+ if (s->rxf_ctl & NPCM7XX_SMBRXF_CTL_LAST) {
222
+ trace_npcm7xx_smbus_nack(DEVICE(s)->canonical_path);
223
+ i2c_nack(s->bus);
224
+ s->rxf_ctl &= ~NPCM7XX_SMBRXF_CTL_LAST;
225
+ }
226
+ if (received_bytes == NPCM7XX_SMBUS_FIFO_SIZE) {
227
+ s->st |= NPCM7XX_SMBST_SDAST;
228
+ s->fif_cts |= NPCM7XX_SMBFIF_CTS_RXF_TXE;
229
+ } else if (!(s->rxf_ctl & NPCM7XX_SMBRXF_CTL_THR_RXIE)) {
230
+ s->st |= NPCM7XX_SMBST_SDAST;
231
+ } else {
232
+ s->st &= ~NPCM7XX_SMBST_SDAST;
233
+ }
234
+ npcm7xx_smbus_update_irq(s);
235
+}
236
+
237
+static void npcm7xx_smbus_read_byte_fifo(NPCM7xxSMBusState *s)
238
+{
239
+ uint8_t received_bytes = NPCM7XX_SMBRXF_STS_RX_BYTES(s->rxf_sts);
240
+
241
+ if (received_bytes == 0) {
242
+ npcm7xx_smbus_recv_fifo(s);
243
+ return;
244
+ }
245
+
246
+ s->sda = s->rx_fifo[s->rx_cur];
247
+ s->rx_cur = (s->rx_cur + 1u) % NPCM7XX_SMBUS_FIFO_SIZE;
248
+ --s->rxf_sts;
249
+ npcm7xx_smbus_update_irq(s);
250
+}
251
+
252
static void npcm7xx_smbus_start(NPCM7xxSMBusState *s)
253
{
254
/*
255
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_start(NPCM7xxSMBusState *s)
256
if (available) {
257
s->st |= NPCM7XX_SMBST_MODE | NPCM7XX_SMBST_XMIT | NPCM7XX_SMBST_SDAST;
258
s->cst |= NPCM7XX_SMBCST_BUSY;
259
+ if (NPCM7XX_SMBUS_FIFO_ENABLED(s)) {
260
+ s->fif_cts |= NPCM7XX_SMBFIF_CTS_RXF_TXE;
261
+ }
262
} else {
263
s->st &= ~NPCM7XX_SMBST_MODE;
264
s->cst &= ~NPCM7XX_SMBCST_BUSY;
265
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_send_address(NPCM7xxSMBusState *s, uint8_t value)
266
s->st |= NPCM7XX_SMBST_SDAST;
267
}
268
} else if (recv) {
269
- npcm7xx_smbus_recv_byte(s);
270
+ s->st |= NPCM7XX_SMBST_SDAST;
271
+ if (NPCM7XX_SMBUS_FIFO_ENABLED(s)) {
272
+ npcm7xx_smbus_recv_fifo(s);
273
+ } else {
274
+ npcm7xx_smbus_recv_byte(s);
275
+ }
276
+ } else if (NPCM7XX_SMBUS_FIFO_ENABLED(s)) {
277
+ s->st |= NPCM7XX_SMBST_SDAST;
278
+ s->fif_cts |= NPCM7XX_SMBFIF_CTS_RXF_TXE;
279
}
280
npcm7xx_smbus_update_irq(s);
281
}
282
@@ -XXX,XX +XXX,XX @@ static uint8_t npcm7xx_smbus_read_sda(NPCM7xxSMBusState *s)
283
284
switch (s->status) {
285
case NPCM7XX_SMBUS_STATUS_STOPPING_LAST_RECEIVE:
286
- npcm7xx_smbus_execute_stop(s);
287
+ if (NPCM7XX_SMBUS_FIFO_ENABLED(s)) {
288
+ if (NPCM7XX_SMBRXF_STS_RX_BYTES(s->rxf_sts) <= 1) {
289
+ npcm7xx_smbus_execute_stop(s);
290
+ }
291
+ if (NPCM7XX_SMBRXF_STS_RX_BYTES(s->rxf_sts) == 0) {
292
+ qemu_log_mask(LOG_GUEST_ERROR,
293
+ "%s: read to SDA with an empty rx-fifo buffer, "
294
+ "result undefined: %u\n",
295
+ DEVICE(s)->canonical_path, s->sda);
296
+ break;
297
+ }
298
+ npcm7xx_smbus_read_byte_fifo(s);
299
+ value = s->sda;
300
+ } else {
301
+ npcm7xx_smbus_execute_stop(s);
302
+ }
303
break;
304
305
case NPCM7XX_SMBUS_STATUS_RECEIVING:
306
- npcm7xx_smbus_recv_byte(s);
307
+ if (NPCM7XX_SMBUS_FIFO_ENABLED(s)) {
308
+ npcm7xx_smbus_read_byte_fifo(s);
309
+ value = s->sda;
310
+ } else {
311
+ npcm7xx_smbus_recv_byte(s);
312
+ }
313
break;
314
315
default:
316
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_write_st(NPCM7xxSMBusState *s, uint8_t value)
317
}
318
319
if (value & NPCM7XX_SMBST_STASTR &&
320
- s->status == NPCM7XX_SMBUS_STATUS_RECEIVING) {
321
- npcm7xx_smbus_recv_byte(s);
322
+ s->status == NPCM7XX_SMBUS_STATUS_RECEIVING) {
323
+ if (NPCM7XX_SMBUS_FIFO_ENABLED(s)) {
324
+ npcm7xx_smbus_recv_fifo(s);
325
+ } else {
326
+ npcm7xx_smbus_recv_byte(s);
327
+ }
328
}
329
330
npcm7xx_smbus_update_irq(s);
331
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_write_ctl2(NPCM7xxSMBusState *s, uint8_t value)
332
s->st = 0;
333
s->cst3 = s->cst3 & (~NPCM7XX_SMBCST3_EO_BUSY);
334
s->cst = 0;
335
+ npcm7xx_smbus_clear_buffer(s);
336
}
95
}
337
}
96
}
338
97
339
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_write_ctl3(NPCM7xxSMBusState *s, uint8_t value)
98
+static inline int64_t do_srshr(int64_t x, unsigned sh)
340
NPCM7XX_SMBCTL3_SCL_LVL | NPCM7XX_SMBCTL3_SDA_LVL);
341
}
342
343
+static void npcm7xx_smbus_write_fif_ctl(NPCM7xxSMBusState *s, uint8_t value)
344
+{
99
+{
345
+ uint8_t new_ctl = value;
100
+ if (likely(sh < 64)) {
346
+
101
+ return (x >> sh) + ((x >> (sh - 1)) & 1);
347
+ new_ctl = KEEP_OLD_BIT(s->fif_ctl, new_ctl, NPCM7XX_SMBFIF_CTL_FAIR_RDY);
102
+ } else {
348
+ new_ctl = WRITE_ONE_CLEAR(new_ctl, value, NPCM7XX_SMBFIF_CTL_FAIR_RDY);
103
+ /* Rounding the sign bit always produces 0. */
349
+ new_ctl = KEEP_OLD_BIT(s->fif_ctl, new_ctl, NPCM7XX_SMBFIF_CTL_FAIR_BUSY);
104
+ return 0;
350
+ s->fif_ctl = new_ctl;
351
+}
352
+
353
+static void npcm7xx_smbus_write_fif_cts(NPCM7xxSMBusState *s, uint8_t value)
354
+{
355
+ s->fif_cts = WRITE_ONE_CLEAR(s->fif_cts, value, NPCM7XX_SMBFIF_CTS_STR);
356
+ s->fif_cts = WRITE_ONE_CLEAR(s->fif_cts, value, NPCM7XX_SMBFIF_CTS_RXF_TXE);
357
+ s->fif_cts = KEEP_OLD_BIT(value, s->fif_cts, NPCM7XX_SMBFIF_CTS_RFTE_IE);
358
+
359
+ if (value & NPCM7XX_SMBFIF_CTS_CLR_FIFO) {
360
+ npcm7xx_smbus_clear_buffer(s);
361
+ }
105
+ }
362
+}
106
+}
363
+
107
+
364
+static void npcm7xx_smbus_write_txf_ctl(NPCM7xxSMBusState *s, uint8_t value)
108
DO_VSHRN_ALL(vshrn, DO_SHR)
109
DO_VSHRN_ALL(vrshrn, do_urshr)
110
+
111
+static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max,
112
+ bool *satp)
365
+{
113
+{
366
+ s->txf_ctl = value;
114
+ if (val > max) {
367
+}
115
+ *satp = true;
368
+
116
+ return max;
369
+static void npcm7xx_smbus_write_t_out(NPCM7xxSMBusState *s, uint8_t value)
117
+ } else if (val < min) {
370
+{
118
+ *satp = true;
371
+ uint8_t new_t_out = value;
119
+ return min;
372
+
373
+ if ((value & NPCM7XX_SMBT_OUT_ST) || (!(s->t_out & NPCM7XX_SMBT_OUT_ST))) {
374
+ new_t_out &= ~NPCM7XX_SMBT_OUT_ST;
375
+ } else {
120
+ } else {
376
+ new_t_out |= NPCM7XX_SMBT_OUT_ST;
121
+ return val;
377
+ }
378
+
379
+ s->t_out = new_t_out;
380
+}
381
+
382
+static void npcm7xx_smbus_write_txf_sts(NPCM7xxSMBusState *s, uint8_t value)
383
+{
384
+ s->txf_sts = WRITE_ONE_CLEAR(s->txf_sts, value, NPCM7XX_SMBTXF_STS_TX_THST);
385
+}
386
+
387
+static void npcm7xx_smbus_write_rxf_sts(NPCM7xxSMBusState *s, uint8_t value)
388
+{
389
+ if (value & NPCM7XX_SMBRXF_STS_RX_THST) {
390
+ s->rxf_sts &= ~NPCM7XX_SMBRXF_STS_RX_THST;
391
+ if (s->status == NPCM7XX_SMBUS_STATUS_RECEIVING) {
392
+ npcm7xx_smbus_recv_fifo(s);
393
+ }
394
+ }
122
+ }
395
+}
123
+}
396
+
124
+
397
+static void npcm7xx_smbus_write_rxf_ctl(NPCM7xxSMBusState *s, uint8_t value)
125
+/* Saturating narrowing right shifts */
398
+{
126
+#define DO_VSHRN_SAT(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
399
+ uint8_t new_ctl = value;
127
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
400
+
128
+ void *vm, uint32_t shift) \
401
+ if (!(value & NPCM7XX_SMBRXF_CTL_LAST)) {
129
+ { \
402
+ new_ctl = KEEP_OLD_BIT(s->rxf_ctl, new_ctl, NPCM7XX_SMBRXF_CTL_LAST);
130
+ LTYPE *m = vm; \
131
+ TYPE *d = vd; \
132
+ uint16_t mask = mve_element_mask(env); \
133
+ bool qc = false; \
134
+ unsigned le; \
135
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
136
+ bool sat = false; \
137
+ TYPE r = FN(m[H##LESIZE(le)], shift, &sat); \
138
+ mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
139
+ qc |= sat && (mask & 1 << (TOP * ESIZE)); \
140
+ } \
141
+ if (qc) { \
142
+ env->vfp.qc[0] = qc; \
143
+ } \
144
+ mve_advance_vpt(env); \
403
+ }
145
+ }
404
+ s->rxf_ctl = new_ctl;
146
+
405
+}
147
+#define DO_VSHRN_SAT_UB(BOP, TOP, FN) \
406
+
148
+ DO_VSHRN_SAT(BOP, false, 1, uint8_t, 2, uint16_t, FN) \
407
static uint64_t npcm7xx_smbus_read(void *opaque, hwaddr offset, unsigned size)
149
+ DO_VSHRN_SAT(TOP, true, 1, uint8_t, 2, uint16_t, FN)
408
{
150
+
409
NPCM7xxSMBusState *s = opaque;
151
+#define DO_VSHRN_SAT_UH(BOP, TOP, FN) \
410
@@ -XXX,XX +XXX,XX @@ static uint64_t npcm7xx_smbus_read(void *opaque, hwaddr offset, unsigned size)
152
+ DO_VSHRN_SAT(BOP, false, 2, uint16_t, 4, uint32_t, FN) \
411
default:
153
+ DO_VSHRN_SAT(TOP, true, 2, uint16_t, 4, uint32_t, FN)
412
if (bank) {
154
+
413
/* Bank 1 */
155
+#define DO_VSHRN_SAT_SB(BOP, TOP, FN) \
414
- qemu_log_mask(LOG_GUEST_ERROR,
156
+ DO_VSHRN_SAT(BOP, false, 1, int8_t, 2, int16_t, FN) \
415
- "%s: read from invalid offset 0x%" HWADDR_PRIx "\n",
157
+ DO_VSHRN_SAT(TOP, true, 1, int8_t, 2, int16_t, FN)
416
- DEVICE(s)->canonical_path, offset);
158
+
417
+ switch (offset) {
159
+#define DO_VSHRN_SAT_SH(BOP, TOP, FN) \
418
+ case NPCM7XX_SMB_FIF_CTS:
160
+ DO_VSHRN_SAT(BOP, false, 2, int16_t, 4, int32_t, FN) \
419
+ value = s->fif_cts;
161
+ DO_VSHRN_SAT(TOP, true, 2, int16_t, 4, int32_t, FN)
420
+ break;
162
+
421
+
163
+#define DO_SHRN_SB(N, M, SATP) \
422
+ case NPCM7XX_SMB_FAIR_PER:
164
+ do_sat_bhs((int64_t)(N) >> (M), INT8_MIN, INT8_MAX, SATP)
423
+ value = s->fair_per;
165
+#define DO_SHRN_UB(N, M, SATP) \
424
+ break;
166
+ do_sat_bhs((uint64_t)(N) >> (M), 0, UINT8_MAX, SATP)
425
+
167
+#define DO_SHRUN_B(N, M, SATP) \
426
+ case NPCM7XX_SMB_TXF_CTL:
168
+ do_sat_bhs((int64_t)(N) >> (M), 0, UINT8_MAX, SATP)
427
+ value = s->txf_ctl;
169
+
428
+ break;
170
+#define DO_SHRN_SH(N, M, SATP) \
429
+
171
+ do_sat_bhs((int64_t)(N) >> (M), INT16_MIN, INT16_MAX, SATP)
430
+ case NPCM7XX_SMB_T_OUT:
172
+#define DO_SHRN_UH(N, M, SATP) \
431
+ value = s->t_out;
173
+ do_sat_bhs((uint64_t)(N) >> (M), 0, UINT16_MAX, SATP)
432
+ break;
174
+#define DO_SHRUN_H(N, M, SATP) \
433
+
175
+ do_sat_bhs((int64_t)(N) >> (M), 0, UINT16_MAX, SATP)
434
+ case NPCM7XX_SMB_TXF_STS:
176
+
435
+ value = s->txf_sts;
177
+#define DO_RSHRN_SB(N, M, SATP) \
436
+ break;
178
+ do_sat_bhs(do_srshr(N, M), INT8_MIN, INT8_MAX, SATP)
437
+
179
+#define DO_RSHRN_UB(N, M, SATP) \
438
+ case NPCM7XX_SMB_RXF_STS:
180
+ do_sat_bhs(do_urshr(N, M), 0, UINT8_MAX, SATP)
439
+ value = s->rxf_sts;
181
+#define DO_RSHRUN_B(N, M, SATP) \
440
+ break;
182
+ do_sat_bhs(do_srshr(N, M), 0, UINT8_MAX, SATP)
441
+
183
+
442
+ case NPCM7XX_SMB_RXF_CTL:
184
+#define DO_RSHRN_SH(N, M, SATP) \
443
+ value = s->rxf_ctl;
185
+ do_sat_bhs(do_srshr(N, M), INT16_MIN, INT16_MAX, SATP)
444
+ break;
186
+#define DO_RSHRN_UH(N, M, SATP) \
445
+
187
+ do_sat_bhs(do_urshr(N, M), 0, UINT16_MAX, SATP)
446
+ default:
188
+#define DO_RSHRUN_H(N, M, SATP) \
447
+ qemu_log_mask(LOG_GUEST_ERROR,
189
+ do_sat_bhs(do_srshr(N, M), 0, UINT16_MAX, SATP)
448
+ "%s: read from invalid offset 0x%" HWADDR_PRIx "\n",
190
+
449
+ DEVICE(s)->canonical_path, offset);
191
+DO_VSHRN_SAT_SB(vqshrnb_sb, vqshrnt_sb, DO_SHRN_SB)
450
+ break;
192
+DO_VSHRN_SAT_SH(vqshrnb_sh, vqshrnt_sh, DO_SHRN_SH)
451
+ }
193
+DO_VSHRN_SAT_UB(vqshrnb_ub, vqshrnt_ub, DO_SHRN_UB)
452
} else {
194
+DO_VSHRN_SAT_UH(vqshrnb_uh, vqshrnt_uh, DO_SHRN_UH)
453
/* Bank 0 */
195
+DO_VSHRN_SAT_SB(vqshrunbb, vqshruntb, DO_SHRUN_B)
454
switch (offset) {
196
+DO_VSHRN_SAT_SH(vqshrunbh, vqshrunth, DO_SHRUN_H)
455
@@ -XXX,XX +XXX,XX @@ static uint64_t npcm7xx_smbus_read(void *opaque, hwaddr offset, unsigned size)
197
+
456
value = s->scllt;
198
+DO_VSHRN_SAT_SB(vqrshrnb_sb, vqrshrnt_sb, DO_RSHRN_SB)
457
break;
199
+DO_VSHRN_SAT_SH(vqrshrnb_sh, vqrshrnt_sh, DO_RSHRN_SH)
458
200
+DO_VSHRN_SAT_UB(vqrshrnb_ub, vqrshrnt_ub, DO_RSHRN_UB)
459
+ case NPCM7XX_SMB_FIF_CTL:
201
+DO_VSHRN_SAT_UH(vqrshrnb_uh, vqrshrnt_uh, DO_RSHRN_UH)
460
+ value = s->fif_ctl;
202
+DO_VSHRN_SAT_SB(vqrshrunbb, vqrshruntb, DO_RSHRUN_B)
461
+ break;
203
+DO_VSHRN_SAT_SH(vqrshrunbh, vqrshrunth, DO_RSHRUN_H)
462
+
204
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
463
case NPCM7XX_SMB_SCLHT:
205
index XXXXXXX..XXXXXXX 100644
464
value = s->sclht;
206
--- a/target/arm/translate-mve.c
465
break;
207
+++ b/target/arm/translate-mve.c
466
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_write(void *opaque, hwaddr offset, uint64_t value,
208
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_N(VSHRNB, vshrnb)
467
default:
209
DO_2SHIFT_N(VSHRNT, vshrnt)
468
if (bank) {
210
DO_2SHIFT_N(VRSHRNB, vrshrnb)
469
/* Bank 1 */
211
DO_2SHIFT_N(VRSHRNT, vrshrnt)
470
- qemu_log_mask(LOG_GUEST_ERROR,
212
+DO_2SHIFT_N(VQSHRNB_S, vqshrnb_s)
471
- "%s: write to invalid offset 0x%" HWADDR_PRIx "\n",
213
+DO_2SHIFT_N(VQSHRNT_S, vqshrnt_s)
472
- DEVICE(s)->canonical_path, offset);
214
+DO_2SHIFT_N(VQSHRNB_U, vqshrnb_u)
473
+ switch (offset) {
215
+DO_2SHIFT_N(VQSHRNT_U, vqshrnt_u)
474
+ case NPCM7XX_SMB_FIF_CTS:
216
+DO_2SHIFT_N(VQSHRUNB, vqshrunb)
475
+ npcm7xx_smbus_write_fif_cts(s, value);
217
+DO_2SHIFT_N(VQSHRUNT, vqshrunt)
476
+ break;
218
+DO_2SHIFT_N(VQRSHRNB_S, vqrshrnb_s)
477
+
219
+DO_2SHIFT_N(VQRSHRNT_S, vqrshrnt_s)
478
+ case NPCM7XX_SMB_FAIR_PER:
220
+DO_2SHIFT_N(VQRSHRNB_U, vqrshrnb_u)
479
+ s->fair_per = value;
221
+DO_2SHIFT_N(VQRSHRNT_U, vqrshrnt_u)
480
+ break;
222
+DO_2SHIFT_N(VQRSHRUNB, vqrshrunb)
481
+
223
+DO_2SHIFT_N(VQRSHRUNT, vqrshrunt)
482
+ case NPCM7XX_SMB_TXF_CTL:
483
+ npcm7xx_smbus_write_txf_ctl(s, value);
484
+ break;
485
+
486
+ case NPCM7XX_SMB_T_OUT:
487
+ npcm7xx_smbus_write_t_out(s, value);
488
+ break;
489
+
490
+ case NPCM7XX_SMB_TXF_STS:
491
+ npcm7xx_smbus_write_txf_sts(s, value);
492
+ break;
493
+
494
+ case NPCM7XX_SMB_RXF_STS:
495
+ npcm7xx_smbus_write_rxf_sts(s, value);
496
+ break;
497
+
498
+ case NPCM7XX_SMB_RXF_CTL:
499
+ npcm7xx_smbus_write_rxf_ctl(s, value);
500
+ break;
501
+
502
+ default:
503
+ qemu_log_mask(LOG_GUEST_ERROR,
504
+ "%s: write to invalid offset 0x%" HWADDR_PRIx "\n",
505
+ DEVICE(s)->canonical_path, offset);
506
+ break;
507
+ }
508
} else {
509
/* Bank 0 */
510
switch (offset) {
511
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_write(void *opaque, hwaddr offset, uint64_t value,
512
s->scllt = value;
513
break;
514
515
+ case NPCM7XX_SMB_FIF_CTL:
516
+ npcm7xx_smbus_write_fif_ctl(s, value);
517
+ break;
518
+
519
case NPCM7XX_SMB_SCLHT:
520
s->sclht = value;
521
break;
522
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_enter_reset(Object *obj, ResetType type)
523
s->scllt = NPCM7XX_SMB_SCLLT_INIT_VAL;
524
s->sclht = NPCM7XX_SMB_SCLHT_INIT_VAL;
525
526
+ s->fif_ctl = NPCM7XX_SMB_FIF_CTL_INIT_VAL;
527
+ s->fif_cts = NPCM7XX_SMB_FIF_CTS_INIT_VAL;
528
+ s->fair_per = NPCM7XX_SMB_FAIR_PER_INIT_VAL;
529
+ s->txf_ctl = NPCM7XX_SMB_TXF_CTL_INIT_VAL;
530
+ s->t_out = NPCM7XX_SMB_T_OUT_INIT_VAL;
531
+ s->txf_sts = NPCM7XX_SMB_TXF_STS_INIT_VAL;
532
+ s->rxf_sts = NPCM7XX_SMB_RXF_STS_INIT_VAL;
533
+ s->rxf_ctl = NPCM7XX_SMB_RXF_CTL_INIT_VAL;
534
+
535
+ npcm7xx_smbus_clear_buffer(s);
536
s->status = NPCM7XX_SMBUS_STATUS_IDLE;
537
+ s->rx_cur = 0;
538
}
539
540
static void npcm7xx_smbus_hold_reset(Object *obj)
541
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_npcm7xx_smbus = {
542
VMSTATE_UINT8_ARRAY(addr, NPCM7xxSMBusState, NPCM7XX_SMBUS_NR_ADDRS),
543
VMSTATE_UINT8(scllt, NPCM7xxSMBusState),
544
VMSTATE_UINT8(sclht, NPCM7xxSMBusState),
545
+ VMSTATE_UINT8(fif_ctl, NPCM7xxSMBusState),
546
+ VMSTATE_UINT8(fif_cts, NPCM7xxSMBusState),
547
+ VMSTATE_UINT8(fair_per, NPCM7xxSMBusState),
548
+ VMSTATE_UINT8(txf_ctl, NPCM7xxSMBusState),
549
+ VMSTATE_UINT8(t_out, NPCM7xxSMBusState),
550
+ VMSTATE_UINT8(txf_sts, NPCM7xxSMBusState),
551
+ VMSTATE_UINT8(rxf_sts, NPCM7xxSMBusState),
552
+ VMSTATE_UINT8(rxf_ctl, NPCM7xxSMBusState),
553
+ VMSTATE_UINT8_ARRAY(rx_fifo, NPCM7xxSMBusState,
554
+ NPCM7XX_SMBUS_FIFO_SIZE),
555
+ VMSTATE_UINT8(rx_cur, NPCM7xxSMBusState),
556
VMSTATE_END_OF_LIST(),
557
},
558
};
559
diff --git a/tests/qtest/npcm7xx_smbus-test.c b/tests/qtest/npcm7xx_smbus-test.c
560
index XXXXXXX..XXXXXXX 100644
561
--- a/tests/qtest/npcm7xx_smbus-test.c
562
+++ b/tests/qtest/npcm7xx_smbus-test.c
563
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxSMBusBank1Register {
564
#define ADDR_EN BIT(7)
565
#define ADDR_A(rv) extract8((rv), 0, 6)
566
567
+/* FIF_CTL fields */
568
+#define FIF_CTL_FIFO_EN BIT(4)
569
+
570
+/* FIF_CTS fields */
571
+#define FIF_CTS_CLR_FIFO BIT(6)
572
+#define FIF_CTS_RFTE_IE BIT(3)
573
+#define FIF_CTS_RXF_TXE BIT(1)
574
+
575
+/* TXF_CTL fields */
576
+#define TXF_CTL_THR_TXIE BIT(6)
577
+#define TXF_CTL_TX_THR(rv) extract8((rv), 0, 5)
578
+
579
+/* TXF_STS fields */
580
+#define TXF_STS_TX_THST BIT(6)
581
+#define TXF_STS_TX_BYTES(rv) extract8((rv), 0, 5)
582
+
583
+/* RXF_CTL fields */
584
+#define RXF_CTL_THR_RXIE BIT(6)
585
+#define RXF_CTL_LAST BIT(5)
586
+#define RXF_CTL_RX_THR(rv) extract8((rv), 0, 5)
587
+
588
+/* RXF_STS fields */
589
+#define RXF_STS_RX_THST BIT(6)
590
+#define RXF_STS_RX_BYTES(rv) extract8((rv), 0, 5)
591
+
592
+
593
+static void choose_bank(QTestState *qts, uint64_t base_addr, uint8_t bank)
594
+{
595
+ uint8_t ctl3 = qtest_readb(qts, base_addr + OFFSET_CTL3);
596
+
597
+ if (bank) {
598
+ ctl3 |= CTL3_BNK_SEL;
599
+ } else {
600
+ ctl3 &= ~CTL3_BNK_SEL;
601
+ }
602
+
603
+ qtest_writeb(qts, base_addr + OFFSET_CTL3, ctl3);
604
+}
605
606
static void check_running(QTestState *qts, uint64_t base_addr)
607
{
608
@@ -XXX,XX +XXX,XX @@ static void send_byte(QTestState *qts, uint64_t base_addr, uint8_t byte)
609
qtest_writeb(qts, base_addr + OFFSET_SDA, byte);
610
}
611
612
+static bool check_recv(QTestState *qts, uint64_t base_addr)
613
+{
614
+ uint8_t st, fif_ctl, rxf_ctl, rxf_sts;
615
+ bool fifo;
616
+
617
+ st = qtest_readb(qts, base_addr + OFFSET_ST);
618
+ choose_bank(qts, base_addr, 0);
619
+ fif_ctl = qtest_readb(qts, base_addr + OFFSET_FIF_CTL);
620
+ fifo = fif_ctl & FIF_CTL_FIFO_EN;
621
+ if (!fifo) {
622
+ return st == (ST_MODE | ST_SDAST);
623
+ }
624
+
625
+ choose_bank(qts, base_addr, 1);
626
+ rxf_ctl = qtest_readb(qts, base_addr + OFFSET_RXF_CTL);
627
+ rxf_sts = qtest_readb(qts, base_addr + OFFSET_RXF_STS);
628
+
629
+ if ((rxf_ctl & RXF_CTL_THR_RXIE) && RXF_STS_RX_BYTES(rxf_sts) < 16) {
630
+ return st == ST_MODE;
631
+ } else {
632
+ return st == (ST_MODE | ST_SDAST);
633
+ }
634
+}
635
+
636
static uint8_t recv_byte(QTestState *qts, uint64_t base_addr)
637
{
638
- g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_ST), ==,
639
- ST_MODE | ST_SDAST);
640
+ g_assert_true(check_recv(qts, base_addr));
641
return qtest_readb(qts, base_addr + OFFSET_SDA);
642
}
643
644
@@ -XXX,XX +XXX,XX @@ static void send_address(QTestState *qts, uint64_t base_addr, uint8_t addr,
645
qtest_writeb(qts, base_addr + OFFSET_ST, ST_STASTR);
646
st = qtest_readb(qts, base_addr + OFFSET_ST);
647
if (recv) {
648
- g_assert_cmphex(st, ==, ST_MODE | ST_SDAST);
649
+ g_assert_true(check_recv(qts, base_addr));
650
} else {
651
g_assert_cmphex(st, ==, ST_MODE | ST_XMIT | ST_SDAST);
652
}
653
@@ -XXX,XX +XXX,XX @@ static void send_nack(QTestState *qts, uint64_t base_addr)
654
qtest_writeb(qts, base_addr + OFFSET_CTL1, ctl1);
655
}
656
657
+static void start_fifo_mode(QTestState *qts, uint64_t base_addr)
658
+{
659
+ choose_bank(qts, base_addr, 0);
660
+ qtest_writeb(qts, base_addr + OFFSET_FIF_CTL, FIF_CTL_FIFO_EN);
661
+ g_assert_true(qtest_readb(qts, base_addr + OFFSET_FIF_CTL) &
662
+ FIF_CTL_FIFO_EN);
663
+ choose_bank(qts, base_addr, 1);
664
+ qtest_writeb(qts, base_addr + OFFSET_FIF_CTS,
665
+ FIF_CTS_CLR_FIFO | FIF_CTS_RFTE_IE);
666
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_FIF_CTS), ==,
667
+ FIF_CTS_RFTE_IE);
668
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_TXF_STS), ==, 0);
669
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_RXF_STS), ==, 0);
670
+}
671
+
672
+static void start_recv_fifo(QTestState *qts, uint64_t base_addr, uint8_t bytes)
673
+{
674
+ choose_bank(qts, base_addr, 1);
675
+ qtest_writeb(qts, base_addr + OFFSET_TXF_CTL, 0);
676
+ qtest_writeb(qts, base_addr + OFFSET_RXF_CTL,
677
+ RXF_CTL_THR_RXIE | RXF_CTL_LAST | bytes);
678
+}
679
+
680
/* Check the SMBus's status is set correctly when disabled. */
681
static void test_disable_bus(gconstpointer data)
682
{
683
@@ -XXX,XX +XXX,XX @@ static void test_single_mode(gconstpointer data)
684
qtest_quit(qts);
685
}
686
687
+/* Check the SMBus can send and receive bytes in FIFO mode. */
688
+static void test_fifo_mode(gconstpointer data)
689
+{
690
+ intptr_t index = (intptr_t)data;
691
+ uint64_t base_addr = SMBUS_ADDR(index);
692
+ int irq = SMBUS_IRQ(index);
693
+ uint8_t value = 0x60;
694
+ QTestState *qts = qtest_init("-machine npcm750-evb");
695
+
696
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
697
+ enable_bus(qts, base_addr);
698
+ start_fifo_mode(qts, base_addr);
699
+ g_assert_false(qtest_get_irq(qts, irq));
700
+
701
+ /* Sending */
702
+ start_transfer(qts, base_addr);
703
+ send_address(qts, base_addr, EVB_DEVICE_ADDR, false, true);
704
+ choose_bank(qts, base_addr, 1);
705
+ g_assert_true(qtest_readb(qts, base_addr + OFFSET_FIF_CTS) &
706
+ FIF_CTS_RXF_TXE);
707
+ qtest_writeb(qts, base_addr + OFFSET_TXF_CTL, TXF_CTL_THR_TXIE);
708
+ send_byte(qts, base_addr, TMP105_REG_CONFIG);
709
+ send_byte(qts, base_addr, value);
710
+ g_assert_true(qtest_readb(qts, base_addr + OFFSET_FIF_CTS) &
711
+ FIF_CTS_RXF_TXE);
712
+ g_assert_true(qtest_readb(qts, base_addr + OFFSET_TXF_STS) &
713
+ TXF_STS_TX_THST);
714
+ g_assert_cmpuint(TXF_STS_TX_BYTES(
715
+ qtest_readb(qts, base_addr + OFFSET_TXF_STS)), ==, 0);
716
+ g_assert_true(qtest_get_irq(qts, irq));
717
+ stop_transfer(qts, base_addr);
718
+ check_stopped(qts, base_addr);
719
+
720
+ /* Receiving */
721
+ start_fifo_mode(qts, base_addr);
722
+ start_transfer(qts, base_addr);
723
+ send_address(qts, base_addr, EVB_DEVICE_ADDR, false, true);
724
+ send_byte(qts, base_addr, TMP105_REG_CONFIG);
725
+ start_transfer(qts, base_addr);
726
+ qtest_writeb(qts, base_addr + OFFSET_FIF_CTS, FIF_CTS_RXF_TXE);
727
+ start_recv_fifo(qts, base_addr, 1);
728
+ send_address(qts, base_addr, EVB_DEVICE_ADDR, true, true);
729
+ g_assert_false(qtest_readb(qts, base_addr + OFFSET_FIF_CTS) &
730
+ FIF_CTS_RXF_TXE);
731
+ g_assert_true(qtest_readb(qts, base_addr + OFFSET_RXF_STS) &
732
+ RXF_STS_RX_THST);
733
+ g_assert_cmpuint(RXF_STS_RX_BYTES(
734
+ qtest_readb(qts, base_addr + OFFSET_RXF_STS)), ==, 1);
735
+ send_nack(qts, base_addr);
736
+ stop_transfer(qts, base_addr);
737
+ check_running(qts, base_addr);
738
+ g_assert_cmphex(recv_byte(qts, base_addr), ==, value);
739
+ g_assert_cmpuint(RXF_STS_RX_BYTES(
740
+ qtest_readb(qts, base_addr + OFFSET_RXF_STS)), ==, 0);
741
+ check_stopped(qts, base_addr);
742
+ qtest_quit(qts);
743
+}
744
+
745
static void smbus_add_test(const char *name, int index, GTestDataFunc fn)
746
{
747
g_autofree char *full_name = g_strdup_printf(
748
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
749
750
for (i = 0; i < ARRAY_SIZE(evb_bus_list); ++i) {
751
add_test(single_mode, evb_bus_list[i]);
752
+ add_test(fifo_mode, evb_bus_list[i]);
753
}
754
755
return g_test_run();
756
diff --git a/hw/i2c/trace-events b/hw/i2c/trace-events
757
index XXXXXXX..XXXXXXX 100644
758
--- a/hw/i2c/trace-events
759
+++ b/hw/i2c/trace-events
760
@@ -XXX,XX +XXX,XX @@ npcm7xx_smbus_send_byte(const char *id, uint8_t value, int success) "%s send byt
761
npcm7xx_smbus_recv_byte(const char *id, uint8_t value) "%s recv byte: 0x%02x"
762
npcm7xx_smbus_stop(const char *id) "%s stopping"
763
npcm7xx_smbus_nack(const char *id) "%s nacking"
764
+npcm7xx_smbus_recv_fifo(const char *id, uint8_t received, uint8_t expected) "%s recv fifo: received %u, expected %u"
765
--
224
--
766
2.20.1
225
2.20.1
767
226
768
227
1
From: Hao Wu <wuhaotsh@google.com>
1
Implement the MVE VSHLC insn, which performs a shift left of the
2
entire vector with carry in bits provided from a general purpose
3
register and carry out bits written back to that register.
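As a rough illustration (not part of the patch), the per-element
operation can be sketched in plain C, ignoring predication and ECI
handling; the names used here (vshlc_sketch, q, rdm) are invented for
the example, and shift is the 1..32 count with 0 encoding "shift by 32":

    #include <stdint.h>
    #include <stdio.h>

    /* Whole-vector left shift with carry over four 32-bit elements. */
    static uint32_t vshlc_sketch(uint32_t q[4], uint32_t rdm, unsigned shift)
    {
        for (unsigned e = 0; e < 4; e++) {
            uint32_t carry_out, r;
            if (shift == 0 || shift == 32) {
                /* "shift by 32": the whole element becomes the new carry */
                carry_out = q[e];
                r = rdm;
            } else {
                carry_out = q[e] >> (32 - shift);
                r = (q[e] << shift) | (rdm & ((1u << shift) - 1));
            }
            q[e] = r;
            rdm = carry_out;   /* carry out feeds the next element */
        }
        return rdm;            /* final carry goes back to the GP register */
    }

    int main(void)
    {
        uint32_t q[4] = { 0x80000001, 0, 0, 0 };
        uint32_t rdm = vshlc_sketch(q, 0x3, 4);
        /* prints q0=00000013 q1=00000008 rdm=00000000 */
        printf("q0=%08x q1=%08x rdm=%08x\n",
               (unsigned)q[0], (unsigned)q[1], (unsigned)rdm);
        return 0;
    }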
2
4
3
This patch adds a QTest for NPCM7XX SMBus's single byte mode. It sends a
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
byte to a device on the evaluation board, and verifies that the retrieved value
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
is equivalent to the sent value.
7
Message-id: 20210628135835.6690-14-peter.maydell@linaro.org
8
---
9
target/arm/helper-mve.h | 2 ++
10
target/arm/mve.decode | 2 ++
11
target/arm/mve_helper.c | 38 ++++++++++++++++++++++++++++++++++++++
12
target/arm/translate-mve.c | 30 ++++++++++++++++++++++++++++++
13
4 files changed, 72 insertions(+)
6
14
7
Reviewed-by: Doug Evans <dje@google.com>
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
8
Reviewed-by: Tyrong Ting <kfting@nuvoton.com>
16
index XXXXXXX..XXXXXXX 100644
9
Signed-off-by: Hao Wu <wuhaotsh@google.com>
17
--- a/target/arm/helper-mve.h
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
+++ b/target/arm/helper-mve.h
11
Message-id: 20210210220426.3577804-5-wuhaotsh@google.com
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshrunbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
DEF_HELPER_FLAGS_4(mve_vqrshrunbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
13
---
21
DEF_HELPER_FLAGS_4(mve_vqrshruntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
14
tests/qtest/npcm7xx_smbus-test.c | 352 +++++++++++++++++++++++++++++++
22
DEF_HELPER_FLAGS_4(mve_vqrshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
15
tests/qtest/meson.build | 1 +
16
2 files changed, 353 insertions(+)
17
create mode 100644 tests/qtest/npcm7xx_smbus-test.c
18
19
diff --git a/tests/qtest/npcm7xx_smbus-test.c b/tests/qtest/npcm7xx_smbus-test.c
20
new file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- /dev/null
23
+++ b/tests/qtest/npcm7xx_smbus-test.c
24
@@ -XXX,XX +XXX,XX @@
25
+/*
26
+ * QTests for Nuvoton NPCM7xx SMBus Modules.
27
+ *
28
+ * Copyright 2020 Google LLC
29
+ *
30
+ * This program is free software; you can redistribute it and/or modify it
31
+ * under the terms of the GNU General Public License as published by the
32
+ * Free Software Foundation; either version 2 of the License, or
33
+ * (at your option) any later version.
34
+ *
35
+ * This program is distributed in the hope that it will be useful, but WITHOUT
36
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
37
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
38
+ * for more details.
39
+ */
40
+
23
+
41
+#include "qemu/osdep.h"
24
+DEF_HELPER_FLAGS_4(mve_vshlc, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
42
+#include "qemu/bitops.h"
25
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
43
+#include "libqos/i2c.h"
26
index XXXXXXX..XXXXXXX 100644
44
+#include "libqos/libqtest.h"
27
--- a/target/arm/mve.decode
45
+#include "hw/misc/tmp105_regs.h"
28
+++ b/target/arm/mve.decode
29
@@ -XXX,XX +XXX,XX @@ VQRSHRUNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_b
30
VQRSHRUNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_h
31
VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
32
VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h
46
+
33
+
47
+#define NR_SMBUS_DEVICES 16
34
+VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd
48
+#define SMBUS_ADDR(x) (0xf0080000 + 0x1000 * (x))
35
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
49
+#define SMBUS_IRQ(x) (64 + (x))
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/mve_helper.c
38
+++ b/target/arm/mve_helper.c
39
@@ -XXX,XX +XXX,XX @@ DO_VSHRN_SAT_UB(vqrshrnb_ub, vqrshrnt_ub, DO_RSHRN_UB)
40
DO_VSHRN_SAT_UH(vqrshrnb_uh, vqrshrnt_uh, DO_RSHRN_UH)
41
DO_VSHRN_SAT_SB(vqrshrunbb, vqrshruntb, DO_RSHRUN_B)
42
DO_VSHRN_SAT_SH(vqrshrunbh, vqrshrunth, DO_RSHRUN_H)
50
+
43
+
51
+#define EVB_DEVICE_ADDR 0x48
44
+uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
52
+#define INVALID_DEVICE_ADDR 0x01
45
+ uint32_t shift)
46
+{
47
+ uint32_t *d = vd;
48
+ uint16_t mask = mve_element_mask(env);
49
+ unsigned e;
50
+ uint32_t r;
53
+
51
+
54
+const int evb_bus_list[] = {0, 1, 2, 6};
52
+ /*
55
+
53
+ * For each 32-bit element, we shift it left, bringing in the
56
+/* Offsets */
54
+ * low 'shift' bits of rdm at the bottom. Bits shifted out at
57
+enum CommonRegister {
55
+ * the top become the new rdm, if the predicate mask permits.
58
+ OFFSET_SDA = 0x0,
56
+ * The final rdm value is returned to update the register.
59
+ OFFSET_ST = 0x2,
57
+ * shift == 0 here means "shift by 32 bits".
60
+ OFFSET_CST = 0x4,
58
+ */
61
+ OFFSET_CTL1 = 0x6,
59
+ if (shift == 0) {
62
+ OFFSET_ADDR1 = 0x8,
60
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) {
63
+ OFFSET_CTL2 = 0xa,
61
+ r = rdm;
64
+ OFFSET_ADDR2 = 0xc,
62
+ if (mask & 1) {
65
+ OFFSET_CTL3 = 0xe,
63
+ rdm = d[H4(e)];
66
+ OFFSET_CST2 = 0x18,
64
+ }
67
+ OFFSET_CST3 = 0x19,
65
+ mergemask(&d[H4(e)], r, mask);
68
+};
69
+
70
+enum NPCM7xxSMBusBank0Register {
71
+ OFFSET_ADDR3 = 0x10,
72
+ OFFSET_ADDR7 = 0x11,
73
+ OFFSET_ADDR4 = 0x12,
74
+ OFFSET_ADDR8 = 0x13,
75
+ OFFSET_ADDR5 = 0x14,
76
+ OFFSET_ADDR9 = 0x15,
77
+ OFFSET_ADDR6 = 0x16,
78
+ OFFSET_ADDR10 = 0x17,
79
+ OFFSET_CTL4 = 0x1a,
80
+ OFFSET_CTL5 = 0x1b,
81
+ OFFSET_SCLLT = 0x1c,
82
+ OFFSET_FIF_CTL = 0x1d,
83
+ OFFSET_SCLHT = 0x1e,
84
+};
85
+
86
+enum NPCM7xxSMBusBank1Register {
87
+ OFFSET_FIF_CTS = 0x10,
88
+ OFFSET_FAIR_PER = 0x11,
89
+ OFFSET_TXF_CTL = 0x12,
90
+ OFFSET_T_OUT = 0x14,
91
+ OFFSET_TXF_STS = 0x1a,
92
+ OFFSET_RXF_STS = 0x1c,
93
+ OFFSET_RXF_CTL = 0x1e,
94
+};
95
+
96
+/* ST fields */
97
+#define ST_STP BIT(7)
98
+#define ST_SDAST BIT(6)
99
+#define ST_BER BIT(5)
100
+#define ST_NEGACK BIT(4)
101
+#define ST_STASTR BIT(3)
102
+#define ST_NMATCH BIT(2)
103
+#define ST_MODE BIT(1)
104
+#define ST_XMIT BIT(0)
105
+
106
+/* CST fields */
107
+#define CST_ARPMATCH BIT(7)
108
+#define CST_MATCHAF BIT(6)
109
+#define CST_TGSCL BIT(5)
110
+#define CST_TSDA BIT(4)
111
+#define CST_GCMATCH BIT(3)
112
+#define CST_MATCH BIT(2)
113
+#define CST_BB BIT(1)
114
+#define CST_BUSY BIT(0)
115
+
116
+/* CST2 fields */
117
+#define CST2_INSTTS BIT(7)
118
+#define CST2_MATCH7F BIT(6)
119
+#define CST2_MATCH6F BIT(5)
120
+#define CST2_MATCH5F BIT(4)
121
+#define CST2_MATCH4F BIT(3)
122
+#define CST2_MATCH3F BIT(2)
123
+#define CST2_MATCH2F BIT(1)
124
+#define CST2_MATCH1F BIT(0)
125
+
126
+/* CST3 fields */
127
+#define CST3_EO_BUSY BIT(7)
128
+#define CST3_MATCH10F BIT(2)
129
+#define CST3_MATCH9F BIT(1)
130
+#define CST3_MATCH8F BIT(0)
131
+
132
+/* CTL1 fields */
133
+#define CTL1_STASTRE BIT(7)
134
+#define CTL1_NMINTE BIT(6)
135
+#define CTL1_GCMEN BIT(5)
136
+#define CTL1_ACK BIT(4)
137
+#define CTL1_EOBINTE BIT(3)
138
+#define CTL1_INTEN BIT(2)
139
+#define CTL1_STOP BIT(1)
140
+#define CTL1_START BIT(0)
141
+
142
+/* CTL2 fields */
143
+#define CTL2_SCLFRQ(rv) extract8((rv), 1, 6)
144
+#define CTL2_ENABLE BIT(0)
145
+
146
+/* CTL3 fields */
147
+#define CTL3_SCL_LVL BIT(7)
148
+#define CTL3_SDA_LVL BIT(6)
149
+#define CTL3_BNK_SEL BIT(5)
150
+#define CTL3_400K_MODE BIT(4)
151
+#define CTL3_IDL_START BIT(3)
152
+#define CTL3_ARPMEN BIT(2)
153
+#define CTL3_SCLFRQ(rv) extract8((rv), 0, 2)
154
+
155
+/* ADDR fields */
156
+#define ADDR_EN BIT(7)
157
+#define ADDR_A(rv) extract8((rv), 0, 6)
158
+
159
+
160
+static void check_running(QTestState *qts, uint64_t base_addr)
161
+{
162
+ g_assert_true(qtest_readb(qts, base_addr + OFFSET_CST) & CST_BUSY);
163
+ g_assert_true(qtest_readb(qts, base_addr + OFFSET_CST) & CST_BB);
164
+}
165
+
166
+static void check_stopped(QTestState *qts, uint64_t base_addr)
167
+{
168
+ uint8_t cst3;
169
+
170
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_ST), ==, 0);
171
+ g_assert_false(qtest_readb(qts, base_addr + OFFSET_CST) & CST_BUSY);
172
+ g_assert_false(qtest_readb(qts, base_addr + OFFSET_CST) & CST_BB);
173
+
174
+ cst3 = qtest_readb(qts, base_addr + OFFSET_CST3);
175
+ g_assert_true(cst3 & CST3_EO_BUSY);
176
+ qtest_writeb(qts, base_addr + OFFSET_CST3, cst3);
177
+ cst3 = qtest_readb(qts, base_addr + OFFSET_CST3);
178
+ g_assert_false(cst3 & CST3_EO_BUSY);
179
+}
180
+
181
+static void enable_bus(QTestState *qts, uint64_t base_addr)
182
+{
183
+ uint8_t ctl2 = qtest_readb(qts, base_addr + OFFSET_CTL2);
184
+
185
+ ctl2 |= CTL2_ENABLE;
186
+ qtest_writeb(qts, base_addr + OFFSET_CTL2, ctl2);
187
+ g_assert_true(qtest_readb(qts, base_addr + OFFSET_CTL2) & CTL2_ENABLE);
188
+}
189
+
190
+static void disable_bus(QTestState *qts, uint64_t base_addr)
191
+{
192
+ uint8_t ctl2 = qtest_readb(qts, base_addr + OFFSET_CTL2);
193
+
194
+ ctl2 &= ~CTL2_ENABLE;
195
+ qtest_writeb(qts, base_addr + OFFSET_CTL2, ctl2);
196
+ g_assert_false(qtest_readb(qts, base_addr + OFFSET_CTL2) & CTL2_ENABLE);
197
+}
198
+
199
+static void start_transfer(QTestState *qts, uint64_t base_addr)
200
+{
201
+ uint8_t ctl1;
202
+
203
+ ctl1 = CTL1_START | CTL1_INTEN | CTL1_STASTRE;
204
+ qtest_writeb(qts, base_addr + OFFSET_CTL1, ctl1);
205
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_CTL1), ==,
206
+ CTL1_INTEN | CTL1_STASTRE | CTL1_INTEN);
207
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_ST), ==,
208
+ ST_MODE | ST_XMIT | ST_SDAST);
209
+ check_running(qts, base_addr);
210
+}
211
+
212
+static void stop_transfer(QTestState *qts, uint64_t base_addr)
213
+{
214
+ uint8_t ctl1 = qtest_readb(qts, base_addr + OFFSET_CTL1);
215
+
216
+ ctl1 &= ~(CTL1_START | CTL1_ACK);
217
+ ctl1 |= CTL1_STOP | CTL1_INTEN | CTL1_EOBINTE;
218
+ qtest_writeb(qts, base_addr + OFFSET_CTL1, ctl1);
219
+ ctl1 = qtest_readb(qts, base_addr + OFFSET_CTL1);
220
+ g_assert_false(ctl1 & CTL1_STOP);
221
+}
222
+
223
+static void send_byte(QTestState *qts, uint64_t base_addr, uint8_t byte)
224
+{
225
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_ST), ==,
226
+ ST_MODE | ST_XMIT | ST_SDAST);
227
+ qtest_writeb(qts, base_addr + OFFSET_SDA, byte);
228
+}
229
+
230
+static uint8_t recv_byte(QTestState *qts, uint64_t base_addr)
231
+{
232
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_ST), ==,
233
+ ST_MODE | ST_SDAST);
234
+ return qtest_readb(qts, base_addr + OFFSET_SDA);
235
+}
236
+
237
+static void send_address(QTestState *qts, uint64_t base_addr, uint8_t addr,
238
+ bool recv, bool valid)
239
+{
240
+ uint8_t encoded_addr = (addr << 1) | (recv ? 1 : 0);
241
+ uint8_t st;
242
+
243
+ qtest_writeb(qts, base_addr + OFFSET_SDA, encoded_addr);
244
+ st = qtest_readb(qts, base_addr + OFFSET_ST);
245
+
246
+ if (valid) {
247
+ if (recv) {
248
+ g_assert_cmphex(st, ==, ST_MODE | ST_SDAST | ST_STASTR);
249
+ } else {
250
+ g_assert_cmphex(st, ==, ST_MODE | ST_XMIT | ST_SDAST | ST_STASTR);
251
+ }
252
+
253
+ qtest_writeb(qts, base_addr + OFFSET_ST, ST_STASTR);
254
+ st = qtest_readb(qts, base_addr + OFFSET_ST);
255
+ if (recv) {
256
+ g_assert_cmphex(st, ==, ST_MODE | ST_SDAST);
257
+ } else {
258
+ g_assert_cmphex(st, ==, ST_MODE | ST_XMIT | ST_SDAST);
259
+ }
66
+ }
260
+ } else {
67
+ } else {
261
+ if (recv) {
68
+ uint32_t shiftmask = MAKE_64BIT_MASK(0, shift);
262
+ g_assert_cmphex(st, ==, ST_MODE | ST_NEGACK);
69
+
263
+ } else {
70
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) {
264
+ g_assert_cmphex(st, ==, ST_MODE | ST_XMIT | ST_NEGACK);
71
+ r = (d[H4(e)] << shift) | (rdm & shiftmask);
72
+ if (mask & 1) {
73
+ rdm = d[H4(e)] >> (32 - shift);
74
+ }
75
+ mergemask(&d[H4(e)], r, mask);
265
+ }
76
+ }
266
+ }
77
+ }
78
+ mve_advance_vpt(env);
79
+ return rdm;
267
+}
80
+}
81
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
82
index XXXXXXX..XXXXXXX 100644
83
--- a/target/arm/translate-mve.c
84
+++ b/target/arm/translate-mve.c
85
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_N(VQRSHRNB_U, vqrshrnb_u)
86
DO_2SHIFT_N(VQRSHRNT_U, vqrshrnt_u)
87
DO_2SHIFT_N(VQRSHRUNB, vqrshrunb)
88
DO_2SHIFT_N(VQRSHRUNT, vqrshrunt)
268
+
89
+
269
+static void send_nack(QTestState *qts, uint64_t base_addr)
90
+static bool trans_VSHLC(DisasContext *s, arg_VSHLC *a)
270
+{
91
+{
271
+ uint8_t ctl1 = qtest_readb(qts, base_addr + OFFSET_CTL1);
92
+ /*
93
+ * Whole Vector Left Shift with Carry. The carry is taken
94
+ * from a general purpose register and written back there.
95
+ * An imm of 0 means "shift by 32".
96
+ */
97
+ TCGv_ptr qd;
98
+ TCGv_i32 rdm;
272
+
99
+
273
+ ctl1 &= ~(CTL1_START | CTL1_STOP);
100
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) {
274
+ ctl1 |= CTL1_ACK | CTL1_INTEN;
101
+ return false;
275
+ qtest_writeb(qts, base_addr + OFFSET_CTL1, ctl1);
102
+ }
276
+}
103
+ if (a->rdm == 13 || a->rdm == 15) {
277
+
104
+ /* CONSTRAINED UNPREDICTABLE: we UNDEF */
278
+/* Check the SMBus's status is set correctly when disabled. */
105
+ return false;
279
+static void test_disable_bus(gconstpointer data)
106
+ }
280
+{
107
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
281
+ intptr_t index = (intptr_t)data;
108
+ return true;
282
+ uint64_t base_addr = SMBUS_ADDR(index);
283
+ QTestState *qts = qtest_init("-machine npcm750-evb");
284
+
285
+ disable_bus(qts, base_addr);
286
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_CTL1), ==, 0);
287
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_ST), ==, 0);
288
+ g_assert_false(qtest_readb(qts, base_addr + OFFSET_CST3) & CST3_EO_BUSY);
289
+ g_assert_cmphex(qtest_readb(qts, base_addr + OFFSET_CST), ==, 0);
290
+ qtest_quit(qts);
291
+}
292
+
293
+/* Check the SMBus returns a NACK for an invalid address. */
294
+static void test_invalid_addr(gconstpointer data)
295
+{
296
+ intptr_t index = (intptr_t)data;
297
+ uint64_t base_addr = SMBUS_ADDR(index);
298
+ int irq = SMBUS_IRQ(index);
299
+ QTestState *qts = qtest_init("-machine npcm750-evb");
300
+
301
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
302
+ enable_bus(qts, base_addr);
303
+ g_assert_false(qtest_get_irq(qts, irq));
304
+ start_transfer(qts, base_addr);
305
+ send_address(qts, base_addr, INVALID_DEVICE_ADDR, false, false);
306
+ g_assert_true(qtest_get_irq(qts, irq));
307
+ stop_transfer(qts, base_addr);
308
+ check_running(qts, base_addr);
309
+ qtest_writeb(qts, base_addr + OFFSET_ST, ST_NEGACK);
310
+ g_assert_false(qtest_readb(qts, base_addr + OFFSET_ST) & ST_NEGACK);
311
+ check_stopped(qts, base_addr);
312
+ qtest_quit(qts);
313
+}
314
+
315
+/* Check the SMBus can send and receive bytes to a device in single mode. */
316
+static void test_single_mode(gconstpointer data)
317
+{
318
+ intptr_t index = (intptr_t)data;
319
+ uint64_t base_addr = SMBUS_ADDR(index);
320
+ int irq = SMBUS_IRQ(index);
321
+ uint8_t value = 0x60;
322
+ QTestState *qts = qtest_init("-machine npcm750-evb");
323
+
324
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
325
+ enable_bus(qts, base_addr);
326
+
327
+ /* Sending */
328
+ g_assert_false(qtest_get_irq(qts, irq));
329
+ start_transfer(qts, base_addr);
330
+ g_assert_true(qtest_get_irq(qts, irq));
331
+ send_address(qts, base_addr, EVB_DEVICE_ADDR, false, true);
332
+ send_byte(qts, base_addr, TMP105_REG_CONFIG);
333
+ send_byte(qts, base_addr, value);
334
+ stop_transfer(qts, base_addr);
335
+ check_stopped(qts, base_addr);
336
+
337
+ /* Receiving */
338
+ start_transfer(qts, base_addr);
339
+ send_address(qts, base_addr, EVB_DEVICE_ADDR, false, true);
340
+ send_byte(qts, base_addr, TMP105_REG_CONFIG);
341
+ start_transfer(qts, base_addr);
342
+ send_address(qts, base_addr, EVB_DEVICE_ADDR, true, true);
343
+ send_nack(qts, base_addr);
344
+ stop_transfer(qts, base_addr);
345
+ check_running(qts, base_addr);
346
+ g_assert_cmphex(recv_byte(qts, base_addr), ==, value);
347
+ check_stopped(qts, base_addr);
348
+ qtest_quit(qts);
349
+}
350
+
351
+static void smbus_add_test(const char *name, int index, GTestDataFunc fn)
352
+{
353
+ g_autofree char *full_name = g_strdup_printf(
354
+ "npcm7xx_smbus[%d]/%s", index, name);
355
+ qtest_add_data_func(full_name, (void *)(intptr_t)index, fn);
356
+}
357
+#define add_test(name, td) smbus_add_test(#name, td, test_##name)
358
+
359
+int main(int argc, char **argv)
360
+{
361
+ int i;
362
+
363
+ g_test_init(&argc, &argv, NULL);
364
+ g_test_set_nonfatal_assertions();
365
+
366
+ for (i = 0; i < NR_SMBUS_DEVICES; ++i) {
367
+ add_test(disable_bus, i);
368
+ add_test(invalid_addr, i);
369
+ }
109
+ }
370
+
110
+
371
+ for (i = 0; i < ARRAY_SIZE(evb_bus_list); ++i) {
111
+ qd = mve_qreg_ptr(a->qd);
372
+ add_test(single_mode, evb_bus_list[i]);
112
+ rdm = load_reg(s, a->rdm);
373
+ }
113
+ gen_helper_mve_vshlc(rdm, cpu_env, qd, rdm, tcg_constant_i32(a->imm));
374
+
114
+ store_reg(s, a->rdm, rdm);
375
+ return g_test_run();
115
+ tcg_temp_free_ptr(qd);
116
+ mve_update_eci(s);
117
+ return true;
376
+}
118
+}
377
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
378
index XXXXXXX..XXXXXXX 100644
379
--- a/tests/qtest/meson.build
380
+++ b/tests/qtest/meson.build
381
@@ -XXX,XX +XXX,XX @@ qtests_npcm7xx = \
382
'npcm7xx_gpio-test',
383
'npcm7xx_pwm-test',
384
'npcm7xx_rng-test',
385
+ 'npcm7xx_smbus-test',
386
'npcm7xx_timer-test',
387
'npcm7xx_watchdog_timer-test']
388
qtests_arm = \
389
--
119
--
390
2.20.1
120
2.20.1
391
121
392
122
diff view generated by jsdifflib
1
From: Hao Wu <wuhaotsh@google.com>
1
Implement the MVE VADDLV insn; this is similar to VADDV, except
2
that it accumulates 32-bit elements into a 64-bit accumulator
3
stored in a pair of general-purpose registers.
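Ignoring predication and the beat-wise execution details handled below,
the accumulation itself is straightforward; a minimal C sketch of the
signed case (function and variable names invented for the example)
would be:

    #include <stdint.h>
    #include <stdio.h>

    /* Sum four 32-bit lanes into a 64-bit accumulator (signed variant). */
    static int64_t vaddlv_s_sketch(const int32_t q[4], int64_t ra, int accumulate)
    {
        if (!accumulate) {
            ra = 0;                /* A=0 form starts from zero */
        }
        for (int e = 0; e < 4; e++) {
            ra += (int64_t)q[e];   /* widen each element before adding */
        }
        return ra;                 /* split back into RdaHi:RdaLo by the insn */
    }

    int main(void)
    {
        int32_t q[4] = { INT32_MIN, -1, 2, 3 };
        /* prints -2147483544 */
        printf("%lld\n", (long long)vaddlv_s_sketch(q, 100, 1));
        return 0;
    }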
2
4
3
Add I2C temperature sensors for NPCM750 eval board.
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210628135835.6690-15-peter.maydell@linaro.org
8
---
9
target/arm/helper-mve.h | 3 ++
10
target/arm/mve.decode | 6 +++-
11
target/arm/mve_helper.c | 19 ++++++++++++
12
target/arm/translate-mve.c | 63 ++++++++++++++++++++++++++++++++++++++
13
4 files changed, 90 insertions(+), 1 deletion(-)
4
14
5
Reviewed-by: Doug Evans <dje@google.com>
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
6
Reviewed-by: Tyrong Ting <kfting@nuvoton.com>
7
Signed-off-by: Hao Wu <wuhaotsh@google.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20210210220426.3577804-3-wuhaotsh@google.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/arm/npcm7xx_boards.c | 19 +++++++++++++++++++
13
1 file changed, 19 insertions(+)
14
15
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/npcm7xx_boards.c
17
--- a/target/arm/helper-mve.h
18
+++ b/hw/arm/npcm7xx_boards.c
18
+++ b/target/arm/helper-mve.h
19
@@ -XXX,XX +XXX,XX @@ static NPCM7xxState *npcm7xx_create_soc(MachineState *machine,
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
20
return NPCM7XX(obj);
20
DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
21
DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
22
23
+DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64)
24
+DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64)
25
+
26
DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
27
DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
28
DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
29
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/mve.decode
32
+++ b/target/arm/mve.decode
33
@@ -XXX,XX +XXX,XX @@ VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
34
VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
35
36
# Vector add across vector
37
-VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
38
+{
39
+ VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
40
+ VADDLV 111 u:1 1110 1 ... 1001 ... 0 1111 00 a:1 0 qm:3 0 \
41
+ rdahi=%rdahi rdalo=%rdalo
42
+}
43
44
# Predicate operations
45
%mask_22_13 22:1 13:3
46
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/mve_helper.c
49
+++ b/target/arm/mve_helper.c
50
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvub, 1, uint8_t)
51
DO_VADDV(vaddvuh, 2, uint16_t)
52
DO_VADDV(vaddvuw, 4, uint32_t)
53
54
+#define DO_VADDLV(OP, TYPE, LTYPE) \
55
+ uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
56
+ uint64_t ra) \
57
+ { \
58
+ uint16_t mask = mve_element_mask(env); \
59
+ unsigned e; \
60
+ TYPE *m = vm; \
61
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
62
+ if (mask & 1) { \
63
+ ra += (LTYPE)m[H4(e)]; \
64
+ } \
65
+ } \
66
+ mve_advance_vpt(env); \
67
+ return ra; \
68
+ } \
69
+
70
+DO_VADDLV(vaddlv_s, int32_t, int64_t)
71
+DO_VADDLV(vaddlv_u, uint32_t, uint64_t)
72
+
73
/* Shifts by immediate */
74
#define DO_2SHIFT(OP, ESIZE, TYPE, FN) \
75
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
76
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/translate-mve.c
79
+++ b/target/arm/translate-mve.c
80
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
81
return true;
21
}
82
}
22
83
23
+static I2CBus *npcm7xx_i2c_get_bus(NPCM7xxState *soc, uint32_t num)
84
+static bool trans_VADDLV(DisasContext *s, arg_VADDLV *a)
24
+{
85
+{
25
+ g_assert(num < ARRAY_SIZE(soc->smbus));
86
+ /*
26
+ return I2C_BUS(qdev_get_child_bus(DEVICE(&soc->smbus[num]), "i2c-bus"));
87
+ * Vector Add Long Across Vector: accumulate the 32-bit
88
+ * elements of the vector into a 64-bit result stored in
89
+ * a pair of general-purpose registers.
90
+ * No need to check Qm's bank: it is only 3 bits in decode.
91
+ */
92
+ TCGv_ptr qm;
93
+ TCGv_i64 rda;
94
+ TCGv_i32 rdalo, rdahi;
95
+
96
+ if (!dc_isar_feature(aa32_mve, s)) {
97
+ return false;
98
+ }
99
+ /*
100
+ * rdahi == 13 is UNPREDICTABLE; rdahi == 15 is a related
101
+ * encoding; rdalo always has bit 0 clear so cannot be 13 or 15.
102
+ */
103
+ if (a->rdahi == 13 || a->rdahi == 15) {
104
+ return false;
105
+ }
106
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
107
+ return true;
108
+ }
109
+
110
+ /*
111
+ * This insn is subject to beat-wise execution. Partial execution
112
+ * of an A=0 (no-accumulate) insn which does not execute the first
113
+ * beat must start with the current value of RdaHi:RdaLo, not zero.
114
+ */
115
+ if (a->a || mve_skip_first_beat(s)) {
116
+ /* Accumulate input from RdaHi:RdaLo */
117
+ rda = tcg_temp_new_i64();
118
+ rdalo = load_reg(s, a->rdalo);
119
+ rdahi = load_reg(s, a->rdahi);
120
+ tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
121
+ tcg_temp_free_i32(rdalo);
122
+ tcg_temp_free_i32(rdahi);
123
+ } else {
124
+ /* Accumulate starting at zero */
125
+ rda = tcg_const_i64(0);
126
+ }
127
+
128
+ qm = mve_qreg_ptr(a->qm);
129
+ if (a->u) {
130
+ gen_helper_mve_vaddlv_u(rda, cpu_env, qm, rda);
131
+ } else {
132
+ gen_helper_mve_vaddlv_s(rda, cpu_env, qm, rda);
133
+ }
134
+ tcg_temp_free_ptr(qm);
135
+
136
+ rdalo = tcg_temp_new_i32();
137
+ rdahi = tcg_temp_new_i32();
138
+ tcg_gen_extrl_i64_i32(rdalo, rda);
139
+ tcg_gen_extrh_i64_i32(rdahi, rda);
140
+ store_reg(s, a->rdalo, rdalo);
141
+ store_reg(s, a->rdahi, rdahi);
142
+ tcg_temp_free_i64(rda);
143
+ mve_update_eci(s);
144
+ return true;
27
+}
145
+}
28
+
146
+
29
+static void npcm750_evb_i2c_init(NPCM7xxState *soc)
147
static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
30
+{
31
+ /* lm75 temperature sensor on SVB, tmp105 is compatible */
32
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 0), "tmp105", 0x48);
33
+ /* lm75 temperature sensor on EB, tmp105 is compatible */
34
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 1), "tmp105", 0x48);
35
+ /* tmp100 temperature sensor on EB, tmp105 is compatible */
36
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 2), "tmp105", 0x48);
37
+ /* tmp100 temperature sensor on SVB, tmp105 is compatible */
38
+ i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 6), "tmp105", 0x48);
39
+}
40
+
41
static void npcm750_evb_init(MachineState *machine)
42
{
148
{
43
NPCM7xxState *soc;
149
TCGv_ptr qd;
44
@@ -XXX,XX +XXX,XX @@ static void npcm750_evb_init(MachineState *machine)
45
46
npcm7xx_load_bootrom(machine, soc);
47
npcm7xx_connect_flash(&soc->fiu[0], 0, "w25q256", drive_get(IF_MTD, 0, 0));
48
+ npcm750_evb_i2c_init(soc);
49
npcm7xx_load_kernel(machine, soc);
50
}
51
52
--
150
--
53
2.20.1
151
2.20.1
54
152
55
153
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The MVE extension to v8.1M includes some new shift instructions which
2
2
sit entirely within the non-coprocessor part of the encoding space
3
The real kernel collects _TIF_MTE_ASYNC_FAULT into the current thread's
3
and which operate only on general-purpose registers. They take up
4
state on any kernel entry (interrupt, exception etc), and then delivers
4
the space which was previously UNPREDICTABLE MOVS and ORRS encodings
5
the signal in advance of resuming the thread.
5
with Rm == 13 or 15.
6
6
7
This means that while the signal won't be delivered immediately, it will
7
Implement the long shifts by immediate, which perform shifts on a
8
not be delayed forever -- at minimum it will be delivered after the next
8
pair of general-purpose registers treated as a 64-bit quantity, with
9
clock interrupt.
9
an immediate shift count between 1 and 32.
10
10
11
We don't have a clock interrupt in linux-user, so we issue a cpu_kick
11
Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for
12
to signal a return to the main loop at the end of the current TB.
12
the Rm==13,15 case, we need to explicitly emit code to UNDEF for the
13
13
cases where v8.1M now requires that. (Trying to change MOVS and ORRS
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
is too difficult, because the functions that generate the code are
15
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
shared between a dozen different kinds of arithmetic or logical
16
Message-id: 20210212184902.1251044-29-richard.henderson@linaro.org
16
instruction for all A32, T16 and T32 encodings, and for some insns
17
and some encodings Rm==13,15 are valid.)
18
19
We make the helper functions we need for UQSHLL and SQSHLL take
20
a 32-bit value which the helper casts to int8_t because we'll need
21
these helpers also for the shift-by-register insns, where the shift
22
count might be < 0 or > 32.
23
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
26
Message-id: 20210628135835.6690-16-peter.maydell@linaro.org
18
---
27
---
19
linux-user/aarch64/target_signal.h | 1 +
28
target/arm/helper-mve.h | 3 ++
20
linux-user/aarch64/cpu_loop.c | 11 +++++++++++
29
target/arm/translate.h | 1 +
21
target/arm/mte_helper.c | 10 ++++++++++
30
target/arm/t32.decode | 28 +++++++++++++
22
3 files changed, 22 insertions(+)
31
target/arm/mve_helper.c | 10 +++++
23
32
target/arm/translate.c | 90 +++++++++++++++++++++++++++++++++++++++++
24
diff --git a/linux-user/aarch64/target_signal.h b/linux-user/aarch64/target_signal.h
33
5 files changed, 132 insertions(+)
25
index XXXXXXX..XXXXXXX 100644
34
26
--- a/linux-user/aarch64/target_signal.h
35
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
27
+++ b/linux-user/aarch64/target_signal.h
36
index XXXXXXX..XXXXXXX 100644
28
@@ -XXX,XX +XXX,XX @@ typedef struct target_sigaltstack {
37
--- a/target/arm/helper-mve.h
29
38
+++ b/target/arm/helper-mve.h
30
#include "../generic/signal.h"
39
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshruntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
40
DEF_HELPER_FLAGS_4(mve_vqrshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+#define TARGET_SEGV_MTEAERR 8 /* Asynchronous ARM MTE error */
41
33
#define TARGET_SEGV_MTESERR 9 /* Synchronous ARM MTE exception */
42
DEF_HELPER_FLAGS_4(mve_vshlc, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
34
43
+
35
#define TARGET_ARCH_HAS_SETUP_FRAME
44
+DEF_HELPER_FLAGS_3(mve_sqshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
36
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
45
+DEF_HELPER_FLAGS_3(mve_uqshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
37
index XXXXXXX..XXXXXXX 100644
46
diff --git a/target/arm/translate.h b/target/arm/translate.h
38
--- a/linux-user/aarch64/cpu_loop.c
47
index XXXXXXX..XXXXXXX 100644
39
+++ b/linux-user/aarch64/cpu_loop.c
48
--- a/target/arm/translate.h
40
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
49
+++ b/target/arm/translate.h
41
EXCP_DUMP(env, "qemu: unhandled CPU exception 0x%x - aborting\n", trapnr);
50
@@ -XXX,XX +XXX,XX @@ typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
42
abort();
51
typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
43
}
52
typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
44
+
53
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
45
+ /* Check for MTE asynchronous faults */
54
+typedef void WideShiftImmFn(TCGv_i64, TCGv_i64, int64_t shift);
46
+ if (unlikely(env->cp15.tfsr_el[0])) {
55
47
+ env->cp15.tfsr_el[0] = 0;
56
/**
48
+ info.si_signo = TARGET_SIGSEGV;
57
* arm_tbflags_from_tb:
49
+ info.si_errno = 0;
58
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
50
+ info._sifields._sigfault._addr = 0;
59
index XXXXXXX..XXXXXXX 100644
51
+ info.si_code = TARGET_SEGV_MTEAERR;
60
--- a/target/arm/t32.decode
52
+ queue_signal(env, info.si_signo, QEMU_SI_FAULT, &info);
61
+++ b/target/arm/t32.decode
53
+ }
62
@@ -XXX,XX +XXX,XX @@
54
+
63
&mcr !extern cp opc1 crn crm opc2 rt
55
process_pending_signals(env);
64
&mcrr !extern cp opc1 crm rt rt2
56
/* Exception return on AArch64 always clears the exclusive monitor,
65
57
* so any return to running guest code implies this.
66
+&mve_shl_ri rdalo rdahi shim
58
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
67
+
59
index XXXXXXX..XXXXXXX 100644
68
+# rdahi: bits [3:1] from insn, bit 0 is 1
60
--- a/target/arm/mte_helper.c
69
+# rdalo: bits [3:1] from insn, bit 0 is 0
61
+++ b/target/arm/mte_helper.c
70
+%rdahi_9 9:3 !function=times_2_plus_1
62
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
71
+%rdalo_17 17:3 !function=times_2
63
select = 0;
72
+
64
}
73
# Data-processing (register)
65
env->cp15.tfsr_el[el] |= 1 << select;
74
66
+#ifdef CONFIG_USER_ONLY
75
%imm5_12_6 12:3 6:2
67
+ /*
76
@@ -XXX,XX +XXX,XX @@
68
+ * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
77
@S_xrr_shi ....... .... . rn:4 .... .... .. shty:2 rm:4 \
69
+ * which then sends a SIGSEGV when the thread is next scheduled.
78
&s_rrr_shi shim=%imm5_12_6 s=1 rd=0
70
+ * This cpu will return to the main loop at the end of the TB,
79
71
+ * which is rather sooner than "normal". But the alternative
80
+@mve_shl_ri ....... .... . ... . . ... ... . .. .. .... \
72
+ * is waiting until the next syscall.
81
+ &mve_shl_ri shim=%imm5_12_6 rdalo=%rdalo_17 rdahi=%rdahi_9
73
+ */
82
+
74
+ qemu_cpu_kick(env_cpu(env));
83
{
75
+#endif
84
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
76
break;
85
AND_rrri 1110101 0000 . .... 0 ... .... .... .... @s_rrr_shi
77
86
}
78
default:
87
BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
88
{
89
+ # The v8.1M MVE shift insns overlap in encoding with MOVS/ORRS
90
+ # and are distinguished by having Rm==13 or 15. Those are UNPREDICTABLE
91
+ # cases for MOVS/ORRS. We decode the MVE cases first, ensuring that
92
+ # they explicitly call unallocated_encoding() for cases that must UNDEF
93
+ # (eg "using a new shift insn on a v8.1M CPU without MVE"), and letting
94
+ # the rest fall through (where ORR_rrri and MOV_rxri will end up
95
+ # handling them as r13 and r15 accesses with the same semantics as A32).
96
+ [
97
+ LSLL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 00 1111 @mve_shl_ri
98
+ LSRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 01 1111 @mve_shl_ri
99
+ ASRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 10 1111 @mve_shl_ri
100
+
101
+ UQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 00 1111 @mve_shl_ri
102
+ URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
103
+ SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
104
+ SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
105
+ ]
106
+
107
MOV_rxri 1110101 0010 . 1111 0 ... .... .... .... @s_rxr_shi
108
ORR_rrri 1110101 0010 . .... 0 ... .... .... .... @s_rrr_shi
109
}
110
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/mve_helper.c
113
+++ b/target/arm/mve_helper.c
114
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
115
mve_advance_vpt(env);
116
return rdm;
117
}
118
+
119
+uint64_t HELPER(mve_sqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
120
+{
121
+ return do_sqrshl_d(n, (int8_t)shift, false, &env->QF);
122
+}
123
+
124
+uint64_t HELPER(mve_uqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
125
+{
126
+ return do_uqrshl_d(n, (int8_t)shift, false, &env->QF);
127
+}
128
diff --git a/target/arm/translate.c b/target/arm/translate.c
129
index XXXXXXX..XXXXXXX 100644
130
--- a/target/arm/translate.c
131
+++ b/target/arm/translate.c
132
@@ -XXX,XX +XXX,XX @@ static bool trans_MOVT(DisasContext *s, arg_MOVW *a)
133
return true;
134
}
135
136
+/*
137
+ * v8.1M MVE wide-shifts
138
+ */
139
+static bool do_mve_shl_ri(DisasContext *s, arg_mve_shl_ri *a,
140
+ WideShiftImmFn *fn)
141
+{
142
+ TCGv_i64 rda;
143
+ TCGv_i32 rdalo, rdahi;
144
+
145
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
146
+ /* Decode falls through to ORR/MOV UNPREDICTABLE handling */
147
+ return false;
148
+ }
149
+ if (a->rdahi == 15) {
150
+ /* These are a different encoding (SQSHL/SRSHR/UQSHL/URSHR) */
151
+ return false;
152
+ }
153
+ if (!dc_isar_feature(aa32_mve, s) ||
154
+ !arm_dc_feature(s, ARM_FEATURE_M_MAIN) ||
155
+ a->rdahi == 13) {
156
+ /* RdaHi == 13 is UNPREDICTABLE; we choose to UNDEF */
157
+ unallocated_encoding(s);
158
+ return true;
159
+ }
160
+
161
+ if (a->shim == 0) {
162
+ a->shim = 32;
163
+ }
164
+
165
+ rda = tcg_temp_new_i64();
166
+ rdalo = load_reg(s, a->rdalo);
167
+ rdahi = load_reg(s, a->rdahi);
168
+ tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
169
+
170
+ fn(rda, rda, a->shim);
171
+
172
+ tcg_gen_extrl_i64_i32(rdalo, rda);
173
+ tcg_gen_extrh_i64_i32(rdahi, rda);
174
+ store_reg(s, a->rdalo, rdalo);
175
+ store_reg(s, a->rdahi, rdahi);
176
+ tcg_temp_free_i64(rda);
177
+
178
+ return true;
179
+}
180
+
181
+static bool trans_ASRL_ri(DisasContext *s, arg_mve_shl_ri *a)
182
+{
183
+ return do_mve_shl_ri(s, a, tcg_gen_sari_i64);
184
+}
185
+
186
+static bool trans_LSLL_ri(DisasContext *s, arg_mve_shl_ri *a)
187
+{
188
+ return do_mve_shl_ri(s, a, tcg_gen_shli_i64);
189
+}
190
+
191
+static bool trans_LSRL_ri(DisasContext *s, arg_mve_shl_ri *a)
192
+{
193
+ return do_mve_shl_ri(s, a, tcg_gen_shri_i64);
194
+}
195
+
196
+static void gen_mve_sqshll(TCGv_i64 r, TCGv_i64 n, int64_t shift)
197
+{
198
+ gen_helper_mve_sqshll(r, cpu_env, n, tcg_constant_i32(shift));
199
+}
200
+
201
+static bool trans_SQSHLL_ri(DisasContext *s, arg_mve_shl_ri *a)
202
+{
203
+ return do_mve_shl_ri(s, a, gen_mve_sqshll);
204
+}
205
+
206
+static void gen_mve_uqshll(TCGv_i64 r, TCGv_i64 n, int64_t shift)
207
+{
208
+ gen_helper_mve_uqshll(r, cpu_env, n, tcg_constant_i32(shift));
209
+}
210
+
211
+static bool trans_UQSHLL_ri(DisasContext *s, arg_mve_shl_ri *a)
212
+{
213
+ return do_mve_shl_ri(s, a, gen_mve_uqshll);
214
+}
215
+
216
+static bool trans_SRSHRL_ri(DisasContext *s, arg_mve_shl_ri *a)
217
+{
218
+ return do_mve_shl_ri(s, a, gen_srshr64_i64);
219
+}
220
+
221
+static bool trans_URSHRL_ri(DisasContext *s, arg_mve_shl_ri *a)
222
+{
223
+ return do_mve_shl_ri(s, a, gen_urshr64_i64);
224
+}
225
+
226
/*
227
* Multiply and multiply accumulate
228
*/
79
--
229
--
80
2.20.1
230
2.20.1
81
231
82
232
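
(A standalone sketch, not part of the patch above, of the rdalo/rdahi field
extraction described by the t32.decode comment: the instruction encodes
bits [3:1] of each register number, with bit 0 implied -- 0 for RdaLo,
1 for RdaHi -- which is what the times_2 / times_2_plus_1 decode functions
express.)

    #include <stdint.h>

    /* %rdalo_17 17:3 !function=times_2        -> register 2*f     */
    /* %rdahi_9   9:3 !function=times_2_plus_1 -> register 2*f + 1 */
    static int decode_rdalo(uint32_t insn) { return ((insn >> 17) & 7) * 2; }
    static int decode_rdahi(uint32_t insn) { return ((insn >> 9) & 7) * 2 + 1; }
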
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the MVE long shifts by register, which perform shifts on a
2
2
pair of general-purpose registers treated as a 64-bit quantity, with
3
Verify that addr + size - 1 does not wrap around.
3
the shift count in another general-purpose register, which might be
4
4
either positive or negative.
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Like the long-shifts-by-immediate, these encodings sit in the space
7
Message-id: 20210212184902.1251044-7-richard.henderson@linaro.org
7
that was previously the UNPREDICTABLE MOVS/ORRS with Rm==13,15.
8
Because LSLL_rr and ASRL_rr overlap with both MOV_rxri/ORR_rrri and
9
also with CSEL (as one of the previously-UNPREDICTABLE Rm==13 cases),
10
we have to move the CSEL pattern into the same decodetree group.
11
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20210628135835.6690-17-peter.maydell@linaro.org
9
---
15
---
10
linux-user/qemu.h | 17 ++++++++++++-----
16
target/arm/helper-mve.h | 6 +++
11
1 file changed, 12 insertions(+), 5 deletions(-)
17
target/arm/translate.h | 1 +
12
18
target/arm/t32.decode | 16 +++++--
13
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
19
target/arm/mve_helper.c | 93 +++++++++++++++++++++++++++++++++++++++++
14
index XXXXXXX..XXXXXXX 100644
20
target/arm/translate.c | 69 ++++++++++++++++++++++++++++++
15
--- a/linux-user/qemu.h
21
5 files changed, 182 insertions(+), 3 deletions(-)
16
+++ b/linux-user/qemu.h
22
17
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
23
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
18
#define VERIFY_READ 0
24
index XXXXXXX..XXXXXXX 100644
19
#define VERIFY_WRITE 1 /* implies read access */
25
--- a/target/arm/helper-mve.h
20
26
+++ b/target/arm/helper-mve.h
21
-static inline int access_ok(int type, abi_ulong addr, abi_ulong size)
27
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
+static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
28
23
{
29
DEF_HELPER_FLAGS_4(mve_vshlc, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
24
- return guest_addr_valid(addr) &&
30
25
- (size == 0 || guest_addr_valid(addr + size - 1)) &&
31
+DEF_HELPER_FLAGS_3(mve_sshrl, TCG_CALL_NO_RWG, i64, env, i64, i32)
26
- page_check_range((target_ulong)addr, size,
32
+DEF_HELPER_FLAGS_3(mve_ushll, TCG_CALL_NO_RWG, i64, env, i64, i32)
27
- (type == VERIFY_READ) ? PAGE_READ : (PAGE_READ | PAGE_WRITE)) == 0;
33
DEF_HELPER_FLAGS_3(mve_sqshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
28
+ if (!guest_addr_valid(addr)) {
34
DEF_HELPER_FLAGS_3(mve_uqshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
35
+DEF_HELPER_FLAGS_3(mve_sqrshrl, TCG_CALL_NO_RWG, i64, env, i64, i32)
36
+DEF_HELPER_FLAGS_3(mve_uqrshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
37
+DEF_HELPER_FLAGS_3(mve_sqrshrl48, TCG_CALL_NO_RWG, i64, env, i64, i32)
38
+DEF_HELPER_FLAGS_3(mve_uqrshll48, TCG_CALL_NO_RWG, i64, env, i64, i32)
39
diff --git a/target/arm/translate.h b/target/arm/translate.h
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/translate.h
42
+++ b/target/arm/translate.h
43
@@ -XXX,XX +XXX,XX @@ typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
44
typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
45
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
46
typedef void WideShiftImmFn(TCGv_i64, TCGv_i64, int64_t shift);
47
+typedef void WideShiftFn(TCGv_i64, TCGv_ptr, TCGv_i64, TCGv_i32);
48
49
/**
50
* arm_tbflags_from_tb:
51
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/t32.decode
54
+++ b/target/arm/t32.decode
55
@@ -XXX,XX +XXX,XX @@
56
&mcrr !extern cp opc1 crm rt rt2
57
58
&mve_shl_ri rdalo rdahi shim
59
+&mve_shl_rr rdalo rdahi rm
60
61
# rdahi: bits [3:1] from insn, bit 0 is 1
62
# rdalo: bits [3:1] from insn, bit 0 is 0
63
@@ -XXX,XX +XXX,XX @@
64
65
@mve_shl_ri ....... .... . ... . . ... ... . .. .. .... \
66
&mve_shl_ri shim=%imm5_12_6 rdalo=%rdalo_17 rdahi=%rdahi_9
67
+@mve_shl_rr ....... .... . ... . rm:4 ... . .. .. .... \
68
+ &mve_shl_rr rdalo=%rdalo_17 rdahi=%rdahi_9
69
70
{
71
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
72
@@ -XXX,XX +XXX,XX @@ BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
73
URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
74
SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
75
SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
76
+
77
+ LSLL_rr 1110101 0010 1 ... 0 .... ... 1 0000 1101 @mve_shl_rr
78
+ ASRL_rr 1110101 0010 1 ... 0 .... ... 1 0010 1101 @mve_shl_rr
79
+ UQRSHLL64_rr 1110101 0010 1 ... 1 .... ... 1 0000 1101 @mve_shl_rr
80
+ SQRSHRL64_rr 1110101 0010 1 ... 1 .... ... 1 0010 1101 @mve_shl_rr
81
+ UQRSHLL48_rr 1110101 0010 1 ... 1 .... ... 1 1000 1101 @mve_shl_rr
82
+ SQRSHRL48_rr 1110101 0010 1 ... 1 .... ... 1 1010 1101 @mve_shl_rr
83
]
84
85
MOV_rxri 1110101 0010 . 1111 0 ... .... .... .... @s_rxr_shi
86
ORR_rrri 1110101 0010 . .... 0 ... .... .... .... @s_rrr_shi
87
+
88
+ # v8.1M CSEL and friends
89
+ CSEL 1110101 0010 1 rn:4 10 op:2 rd:4 fcond:4 rm:4
90
}
91
{
92
MVN_rxri 1110101 0011 . 1111 0 ... .... .... .... @s_rxr_shi
93
@@ -XXX,XX +XXX,XX @@ SBC_rrri 1110101 1011 . .... 0 ... .... .... .... @s_rrr_shi
94
}
95
RSB_rrri 1110101 1110 . .... 0 ... .... .... .... @s_rrr_shi
96
97
-# v8.1M CSEL and friends
98
-CSEL 1110101 0010 1 rn:4 10 op:2 rd:4 fcond:4 rm:4
99
-
100
# Data-processing (register-shifted register)
101
102
MOV_rxrr 1111 1010 0 shty:2 s:1 rm:4 1111 rd:4 0000 rs:4 \
103
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
104
index XXXXXXX..XXXXXXX 100644
105
--- a/target/arm/mve_helper.c
106
+++ b/target/arm/mve_helper.c
107
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
108
return rdm;
109
}
110
111
+uint64_t HELPER(mve_sshrl)(CPUARMState *env, uint64_t n, uint32_t shift)
112
+{
113
+ return do_sqrshl_d(n, -(int8_t)shift, false, NULL);
114
+}
115
+
116
+uint64_t HELPER(mve_ushll)(CPUARMState *env, uint64_t n, uint32_t shift)
117
+{
118
+ return do_uqrshl_d(n, (int8_t)shift, false, NULL);
119
+}
120
+
121
uint64_t HELPER(mve_sqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
122
{
123
return do_sqrshl_d(n, (int8_t)shift, false, &env->QF);
124
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mve_uqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
125
{
126
return do_uqrshl_d(n, (int8_t)shift, false, &env->QF);
127
}
128
+
129
+uint64_t HELPER(mve_sqrshrl)(CPUARMState *env, uint64_t n, uint32_t shift)
130
+{
131
+ return do_sqrshl_d(n, -(int8_t)shift, true, &env->QF);
132
+}
133
+
134
+uint64_t HELPER(mve_uqrshll)(CPUARMState *env, uint64_t n, uint32_t shift)
135
+{
136
+ return do_uqrshl_d(n, (int8_t)shift, true, &env->QF);
137
+}
138
+
139
+/* Operate on 64-bit values, but saturate at 48 bits */
140
+static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
141
+ bool round, uint32_t *sat)
142
+{
143
+ if (shift <= -48) {
144
+ /* Rounding the sign bit always produces 0. */
145
+ if (round) {
146
+ return 0;
147
+ }
148
+ return src >> 63;
149
+ } else if (shift < 0) {
150
+ if (round) {
151
+ src >>= -shift - 1;
152
+ return (src >> 1) + (src & 1);
153
+ }
154
+ return src >> -shift;
155
+ } else if (shift < 48) {
156
+ int64_t val = src << shift;
157
+ int64_t extval = sextract64(val, 0, 48);
158
+ if (!sat || val == extval) {
159
+ return extval;
160
+ }
161
+ } else if (!sat || src == 0) {
162
+ return 0;
163
+ }
164
+
165
+ *sat = 1;
166
+ return (1ULL << 47) - (src >= 0);
167
+}
168
+
169
+/* Operate on 64-bit values, but saturate at 48 bits */
170
+static inline uint64_t do_uqrshl48_d(uint64_t src, int64_t shift,
171
+ bool round, uint32_t *sat)
172
+{
173
+ uint64_t val, extval;
174
+
175
+ if (shift <= -(48 + round)) {
176
+ return 0;
177
+ } else if (shift < 0) {
178
+ if (round) {
179
+ val = src >> (-shift - 1);
180
+ val = (val >> 1) + (val & 1);
181
+ } else {
182
+ val = src >> -shift;
183
+ }
184
+ extval = extract64(val, 0, 48);
185
+ if (!sat || val == extval) {
186
+ return extval;
187
+ }
188
+ } else if (shift < 48) {
189
+ uint64_t val = src << shift;
190
+ uint64_t extval = extract64(val, 0, 48);
191
+ if (!sat || val == extval) {
192
+ return extval;
193
+ }
194
+ } else if (!sat || src == 0) {
195
+ return 0;
196
+ }
197
+
198
+ *sat = 1;
199
+ return MAKE_64BIT_MASK(0, 48);
200
+}
201
+
202
+uint64_t HELPER(mve_sqrshrl48)(CPUARMState *env, uint64_t n, uint32_t shift)
203
+{
204
+ return do_sqrshl48_d(n, -(int8_t)shift, true, &env->QF);
205
+}
206
+
207
+uint64_t HELPER(mve_uqrshll48)(CPUARMState *env, uint64_t n, uint32_t shift)
208
+{
209
+ return do_uqrshl48_d(n, (int8_t)shift, true, &env->QF);
210
+}
211
diff --git a/target/arm/translate.c b/target/arm/translate.c
212
index XXXXXXX..XXXXXXX 100644
213
--- a/target/arm/translate.c
214
+++ b/target/arm/translate.c
215
@@ -XXX,XX +XXX,XX @@ static bool trans_URSHRL_ri(DisasContext *s, arg_mve_shl_ri *a)
216
return do_mve_shl_ri(s, a, gen_urshr64_i64);
217
}
218
219
+static bool do_mve_shl_rr(DisasContext *s, arg_mve_shl_rr *a, WideShiftFn *fn)
220
+{
221
+ TCGv_i64 rda;
222
+ TCGv_i32 rdalo, rdahi;
223
+
224
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
225
+ /* Decode falls through to ORR/MOV UNPREDICTABLE handling */
29
+ return false;
226
+ return false;
30
+ }
227
+ }
31
+ if (size != 0 &&
228
+ if (a->rdahi == 15) {
32
+ (addr + size - 1 < addr ||
229
+ /* These are a different encoding (SQSHL/SRSHR/UQSHL/URSHR) */
33
+ !guest_addr_valid(addr + size - 1))) {
34
+ return false;
230
+ return false;
35
+ }
231
+ }
36
+ return page_check_range((target_ulong)addr, size,
232
+ if (!dc_isar_feature(aa32_mve, s) ||
37
+ (type == VERIFY_READ) ? PAGE_READ :
233
+ !arm_dc_feature(s, ARM_FEATURE_M_MAIN) ||
38
+ (PAGE_READ | PAGE_WRITE)) == 0;
234
+ a->rdahi == 13 || a->rm == 13 || a->rm == 15 ||
39
}
235
+ a->rm == a->rdahi || a->rm == a->rdalo) {
40
236
+ /* These rdahi/rdalo/rm cases are UNPREDICTABLE; we choose to UNDEF */
41
/* NOTE __get_user and __put_user use host pointers and don't check access.
237
+ unallocated_encoding(s);
238
+ return true;
239
+ }
240
+
241
+ rda = tcg_temp_new_i64();
242
+ rdalo = load_reg(s, a->rdalo);
243
+ rdahi = load_reg(s, a->rdahi);
244
+ tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
245
+
246
+ /* The helper takes care of the sign-extension of the low 8 bits of Rm */
247
+ fn(rda, cpu_env, rda, cpu_R[a->rm]);
248
+
249
+ tcg_gen_extrl_i64_i32(rdalo, rda);
250
+ tcg_gen_extrh_i64_i32(rdahi, rda);
251
+ store_reg(s, a->rdalo, rdalo);
252
+ store_reg(s, a->rdahi, rdahi);
253
+ tcg_temp_free_i64(rda);
254
+
255
+ return true;
256
+}
257
+
258
+static bool trans_LSLL_rr(DisasContext *s, arg_mve_shl_rr *a)
259
+{
260
+ return do_mve_shl_rr(s, a, gen_helper_mve_ushll);
261
+}
262
+
263
+static bool trans_ASRL_rr(DisasContext *s, arg_mve_shl_rr *a)
264
+{
265
+ return do_mve_shl_rr(s, a, gen_helper_mve_sshrl);
266
+}
267
+
268
+static bool trans_UQRSHLL64_rr(DisasContext *s, arg_mve_shl_rr *a)
269
+{
270
+ return do_mve_shl_rr(s, a, gen_helper_mve_uqrshll);
271
+}
272
+
273
+static bool trans_SQRSHRL64_rr(DisasContext *s, arg_mve_shl_rr *a)
274
+{
275
+ return do_mve_shl_rr(s, a, gen_helper_mve_sqrshrl);
276
+}
277
+
278
+static bool trans_UQRSHLL48_rr(DisasContext *s, arg_mve_shl_rr *a)
279
+{
280
+ return do_mve_shl_rr(s, a, gen_helper_mve_uqrshll48);
281
+}
282
+
283
+static bool trans_SQRSHRL48_rr(DisasContext *s, arg_mve_shl_rr *a)
284
+{
285
+ return do_mve_shl_rr(s, a, gen_helper_mve_sqrshrl48);
286
+}
287
+
288
/*
289
* Multiply and multiply accumulate
290
*/
42
--
291
--
43
2.20.1
292
2.20.1
44
293
45
294
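
(A standalone illustration, not taken from the patch above, of the 48-bit
saturation that do_sqrshl48_d performs for the simple positive-overflow
case: the helper detects that the shifted value no longer sign-extends from
bit 47 and returns the largest positive 48-bit value, setting QF.)

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* like QEMU's sextract64(val, 0, 48); assumes arithmetic right shift */
    static int64_t sextract48(int64_t val)
    {
        return (int64_t)((uint64_t)val << 16) >> 16;
    }

    int main(void)
    {
        int64_t src = (int64_t)1 << 40;     /* fits in 48 bits */
        int64_t val = src << 10;            /* 1 << 50: no longer fits */
        uint32_t qf = 0;
        int64_t res;

        if (val == sextract48(val)) {
            res = val;
        } else {
            res = ((int64_t)1 << 47) - 1;   /* 0x00007fffffffffff */
            qf = 1;
        }
        printf("res=0x%012" PRIx64 " QF=%u\n", (uint64_t)res, qf);
        return 0;
    }
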
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
These constants are only ever used with access_ok, and friends.
4
Rather than translating them to PAGE_* bits, let them equal
5
the PAGE_* bits to begin.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210212184902.1251044-8-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
linux-user/qemu.h | 8 +++-----
13
1 file changed, 3 insertions(+), 5 deletions(-)
14
15
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/linux-user/qemu.h
18
+++ b/linux-user/qemu.h
19
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
20
21
/* user access */
22
23
-#define VERIFY_READ 0
24
-#define VERIFY_WRITE 1 /* implies read access */
25
+#define VERIFY_READ PAGE_READ
26
+#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
27
28
static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
29
{
30
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
31
!guest_addr_valid(addr + size - 1))) {
32
return false;
33
}
34
- return page_check_range((target_ulong)addr, size,
35
- (type == VERIFY_READ) ? PAGE_READ :
36
- (PAGE_READ | PAGE_WRITE)) == 0;
37
+ return page_check_range((target_ulong)addr, size, type) == 0;
38
}
39
40
/* NOTE __get_user and __put_user use host pointers and don't check access.
41
--
42
2.20.1
43
44
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
These constants are only ever used with access_ok, and friends.
4
Rather than translating them to PAGE_* bits, let them equal
5
the PAGE_* bits to begin.
6
7
Reviewed-by: Warner Losh <imp@bsdimp.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210212184902.1251044-9-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
bsd-user/qemu.h | 9 ++++-----
14
1 file changed, 4 insertions(+), 5 deletions(-)
15
16
diff --git a/bsd-user/qemu.h b/bsd-user/qemu.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/bsd-user/qemu.h
19
+++ b/bsd-user/qemu.h
20
@@ -XXX,XX +XXX,XX @@ extern unsigned long x86_stack_size;
21
22
/* user access */
23
24
-#define VERIFY_READ 0
25
-#define VERIFY_WRITE 1 /* implies read access */
26
+#define VERIFY_READ PAGE_READ
27
+#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
28
29
-static inline int access_ok(int type, abi_ulong addr, abi_ulong size)
30
+static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
31
{
32
- return page_check_range((target_ulong)addr, size,
33
- (type == VERIFY_READ) ? PAGE_READ : (PAGE_READ | PAGE_WRITE)) == 0;
34
+ return page_check_range((target_ulong)addr, size, type) == 0;
35
}
36
37
/* NOTE __get_user and __put_user use host pointers and don't check access. */
38
--
39
2.20.1
40
41
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
This is the only use of guest_addr_valid that does not begin
4
with a guest address, but a host address being transformed to
5
a guest address.
6
7
We will shortly adjust guest_addr_valid to handle guest memory
8
tags, and the host address should not be subjected to that.
9
10
Move h2g_valid adjacent to the other h2g macros.
11
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20210212184902.1251044-10-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
include/exec/cpu_ldst.h | 5 ++++-
18
1 file changed, 4 insertions(+), 1 deletion(-)
19
20
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/include/exec/cpu_ldst.h
23
+++ b/include/exec/cpu_ldst.h
24
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
25
#else
26
#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
27
#endif
28
-#define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
29
30
static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
31
{
32
return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
33
}
34
35
+#define h2g_valid(x) \
36
+ (HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS || \
37
+ (uintptr_t)(x) - guest_base <= GUEST_ADDR_MAX)
38
+
39
#define h2g_nocheck(x) ({ \
40
uintptr_t __ret = (uintptr_t)(x) - guest_base; \
41
(abi_ptr)__ret; \
42
--
43
2.20.1
44
45
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
We must always use GUEST_ADDR_MAX, because even 32-bit hosts can
4
use -R <reserved_va> to restrict the memory address of the guest.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210212184902.1251044-11-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/exec/cpu_ldst.h | 9 ++++-----
12
1 file changed, 4 insertions(+), 5 deletions(-)
13
14
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/cpu_ldst.h
17
+++ b/include/exec/cpu_ldst.h
18
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
19
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
20
#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
21
22
-#if HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS
23
-#define guest_addr_valid(x) (1)
24
-#else
25
-#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
26
-#endif
27
+static inline bool guest_addr_valid(abi_ulong x)
28
+{
29
+ return x <= GUEST_ADDR_MAX;
30
+}
31
32
static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
33
{
34
--
35
2.20.1
36
37
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Provide an identity fallback for target that do not
4
use tagged addresses.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210212184902.1251044-12-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/exec/cpu_ldst.h | 7 +++++++
12
1 file changed, 7 insertions(+)
13
14
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/cpu_ldst.h
17
+++ b/include/exec/cpu_ldst.h
18
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
19
#define TARGET_ABI_FMT_ptr "%"PRIx64
20
#endif
21
22
+#ifndef TARGET_TAGGED_ADDRESSES
23
+static inline abi_ptr cpu_untagged_addr(CPUState *cs, abi_ptr x)
24
+{
25
+ return x;
26
+}
27
+#endif
28
+
29
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
30
#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
31
32
--
33
2.20.1
34
35
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
We define target_mmap et al as untagged, so that they can be
4
used from the binary loaders. Explicitly call cpu_untagged_addr
5
for munmap, mprotect, mremap syscall entry points.
6
7
Add a few comments for the syscalls that are exempted by the
8
kernel's tagged-address-abi.rst.
9
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20210212184902.1251044-14-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
linux-user/syscall.c | 11 +++++++++++
16
1 file changed, 11 insertions(+)
17
18
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/linux-user/syscall.c
21
+++ b/linux-user/syscall.c
22
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
23
abi_long mapped_addr;
24
abi_ulong new_alloc_size;
25
26
+ /* brk pointers are always untagged */
27
+
28
DEBUGF_BRK("do_brk(" TARGET_ABI_FMT_lx ") -> ", new_brk);
29
30
if (!new_brk) {
31
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
32
int i,ret;
33
abi_ulong shmlba;
34
35
+ /* shmat pointers are always untagged */
36
+
37
/* find out the length of the shared memory segment */
38
ret = get_errno(shmctl(shmid, IPC_STAT, &shm_info));
39
if (is_error(ret)) {
40
@@ -XXX,XX +XXX,XX @@ static inline abi_long do_shmdt(abi_ulong shmaddr)
41
int i;
42
abi_long rv;
43
44
+ /* shmdt pointers are always untagged */
45
+
46
mmap_lock();
47
48
for (i = 0; i < N_SHM_REGIONS; ++i) {
49
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
50
v5, v6));
51
}
52
#else
53
+ /* mmap pointers are always untagged */
54
ret = get_errno(target_mmap(arg1, arg2, arg3,
55
target_to_host_bitmask(arg4, mmap_flags_tbl),
56
arg5,
57
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
58
return get_errno(ret);
59
#endif
60
case TARGET_NR_munmap:
61
+ arg1 = cpu_untagged_addr(cpu, arg1);
62
return get_errno(target_munmap(arg1, arg2));
63
case TARGET_NR_mprotect:
64
+ arg1 = cpu_untagged_addr(cpu, arg1);
65
{
66
TaskState *ts = cpu->opaque;
67
/* Special hack to detect libc making the stack executable. */
68
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
69
return get_errno(target_mprotect(arg1, arg2, arg3));
70
#ifdef TARGET_NR_mremap
71
case TARGET_NR_mremap:
72
+ arg1 = cpu_untagged_addr(cpu, arg1);
73
+ /* mremap new_addr (arg5) is always untagged */
74
return get_errno(target_mremap(arg1, arg2, arg3, arg4, arg5));
75
#endif
76
/* ??? msync/mlock/munlock are broken for softmmu. */
77
--
78
2.20.1
79
80
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
We're currently open-coding the range check in access_ok;
4
use guest_range_valid when size != 0.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210212184902.1251044-15-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
linux-user/qemu.h | 9 +++------
12
1 file changed, 3 insertions(+), 6 deletions(-)
13
14
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/linux-user/qemu.h
17
+++ b/linux-user/qemu.h
18
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
19
20
static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
21
{
22
- if (!guest_addr_valid(addr)) {
23
- return false;
24
- }
25
- if (size != 0 &&
26
- (addr + size - 1 < addr ||
27
- !guest_addr_valid(addr + size - 1))) {
28
+ if (size == 0
29
+ ? !guest_addr_valid(addr)
30
+ : !guest_range_valid(addr, size)) {
31
return false;
32
}
33
return page_check_range((target_ulong)addr, size, type) == 0;
34
--
35
2.20.1
36
37
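
(A standalone sketch, not from the patch above, of why the open-coded check
being replaced tests "addr + size - 1 < addr" before validating the end
address: with unsigned guest addresses the end can wrap past zero and then
pass a naive upper-bound test. The GUEST_ADDR_MAX value here is an
assumption for the sketch only.)

    #include <stdbool.h>
    #include <stdint.h>

    #define GUEST_ADDR_MAX 0x7fffffffu      /* assumed 31-bit guest space */

    static bool range_ok(uint32_t addr, uint32_t size)
    {
        if (size != 0 &&
            (addr + size - 1 < addr ||          /* end wrapped past zero */
             addr + size - 1 > GUEST_ADDR_MAX)) {
            return false;
        }
        return true;
    }
    /* range_ok(0xfffffff0u, 0x20) is false: the end wraps to 0x0f, so
     * a check on the end address alone would wrongly succeed. */
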
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
The places that use these are better off using untagged
4
addresses, so do not provide a tagged versions. Rename
5
to make it clear about the address type.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210212184902.1251044-16-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/exec/cpu_ldst.h | 4 ++--
13
linux-user/qemu.h | 4 ++--
14
accel/tcg/user-exec.c | 3 ++-
15
linux-user/mmap.c | 14 +++++++-------
16
linux-user/syscall.c | 2 +-
17
5 files changed, 14 insertions(+), 13 deletions(-)
18
19
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/cpu_ldst.h
22
+++ b/include/exec/cpu_ldst.h
23
@@ -XXX,XX +XXX,XX @@ static inline void *g2h(CPUState *cs, abi_ptr x)
24
return g2h_untagged(cpu_untagged_addr(cs, x));
25
}
26
27
-static inline bool guest_addr_valid(abi_ulong x)
28
+static inline bool guest_addr_valid_untagged(abi_ulong x)
29
{
30
return x <= GUEST_ADDR_MAX;
31
}
32
33
-static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
34
+static inline bool guest_range_valid_untagged(abi_ulong start, abi_ulong len)
35
{
36
return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
37
}
38
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
39
index XXXXXXX..XXXXXXX 100644
40
--- a/linux-user/qemu.h
41
+++ b/linux-user/qemu.h
42
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
43
static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
44
{
45
if (size == 0
46
- ? !guest_addr_valid(addr)
47
- : !guest_range_valid(addr, size)) {
48
+ ? !guest_addr_valid_untagged(addr)
49
+ : !guest_range_valid_untagged(addr, size)) {
50
return false;
51
}
52
return page_check_range((target_ulong)addr, size, type) == 0;
53
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/accel/tcg/user-exec.c
56
+++ b/accel/tcg/user-exec.c
57
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
58
g_assert_not_reached();
59
}
60
61
- if (!guest_addr_valid(addr) || page_check_range(addr, 1, flags) < 0) {
62
+ if (!guest_addr_valid_untagged(addr) ||
63
+ page_check_range(addr, 1, flags) < 0) {
64
if (nonfault) {
65
return TLB_INVALID_MASK;
66
} else {
67
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/linux-user/mmap.c
70
+++ b/linux-user/mmap.c
71
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
72
}
73
len = TARGET_PAGE_ALIGN(len);
74
end = start + len;
75
- if (!guest_range_valid(start, len)) {
76
+ if (!guest_range_valid_untagged(start, len)) {
77
return -TARGET_ENOMEM;
78
}
79
if (len == 0) {
80
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
81
* It can fail only on 64-bit host with 32-bit target.
82
* On any other target/host host mmap() handles this error correctly.
83
*/
84
- if (end < start || !guest_range_valid(start, len)) {
85
+ if (end < start || !guest_range_valid_untagged(start, len)) {
86
errno = ENOMEM;
87
goto fail;
88
}
89
@@ -XXX,XX +XXX,XX @@ int target_munmap(abi_ulong start, abi_ulong len)
90
if (start & ~TARGET_PAGE_MASK)
91
return -TARGET_EINVAL;
92
len = TARGET_PAGE_ALIGN(len);
93
- if (len == 0 || !guest_range_valid(start, len)) {
94
+ if (len == 0 || !guest_range_valid_untagged(start, len)) {
95
return -TARGET_EINVAL;
96
}
97
98
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
99
int prot;
100
void *host_addr;
101
102
- if (!guest_range_valid(old_addr, old_size) ||
103
+ if (!guest_range_valid_untagged(old_addr, old_size) ||
104
((flags & MREMAP_FIXED) &&
105
- !guest_range_valid(new_addr, new_size)) ||
106
+ !guest_range_valid_untagged(new_addr, new_size)) ||
107
((flags & MREMAP_MAYMOVE) == 0 &&
108
- !guest_range_valid(old_addr, new_size))) {
109
+ !guest_range_valid_untagged(old_addr, new_size))) {
110
errno = ENOMEM;
111
return -1;
112
}
113
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
114
115
if (host_addr != MAP_FAILED) {
116
/* Check if address fits target address space */
117
- if (!guest_range_valid(h2g(host_addr), new_size)) {
118
+ if (!guest_range_valid_untagged(h2g(host_addr), new_size)) {
119
/* Revert mremap() changes */
120
host_addr = mremap(g2h_untagged(old_addr),
121
new_size, old_size, flags);
122
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
123
index XXXXXXX..XXXXXXX 100644
124
--- a/linux-user/syscall.c
125
+++ b/linux-user/syscall.c
126
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
127
return -TARGET_EINVAL;
128
}
129
}
130
- if (!guest_range_valid(shmaddr, shm_info.shm_segsz)) {
131
+ if (!guest_range_valid_untagged(shmaddr, shm_info.shm_segsz)) {
132
return -TARGET_EINVAL;
133
}
134
135
--
136
2.20.1
137
138
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the MVE shifts by immediate, which perform shifts
2
2
on a single general-purpose register.
3
These functions are not small, except for unlock_user
3
4
without debugging enabled. Move them out of line, and
4
These patterns overlap with the long-shift-by-immediates,
5
add missing braces on the way.
5
so we have to rearrange the grouping a little here.
6
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20210212184902.1251044-18-richard.henderson@linaro.org
11
[PMM: fixed the sense of an ifdef test in qemu.h]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210628135835.6690-18-peter.maydell@linaro.org
13
---
10
---
14
linux-user/qemu.h | 47 +++++++-------------------------------------
11
target/arm/helper-mve.h | 3 ++
15
linux-user/uaccess.c | 46 +++++++++++++++++++++++++++++++++++++++++++
12
target/arm/translate.h | 1 +
16
2 files changed, 53 insertions(+), 40 deletions(-)
13
target/arm/t32.decode | 31 ++++++++++++++-----
17
14
target/arm/mve_helper.c | 10 ++++++
18
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
15
target/arm/translate.c | 68 +++++++++++++++++++++++++++++++++++++++--
19
index XXXXXXX..XXXXXXX 100644
16
5 files changed, 104 insertions(+), 9 deletions(-)
20
--- a/linux-user/qemu.h
17
21
+++ b/linux-user/qemu.h
18
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
22
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
19
index XXXXXXX..XXXXXXX 100644
23
20
--- a/target/arm/helper-mve.h
24
/* Lock an area of guest memory into the host. If copy is true then the
21
+++ b/target/arm/helper-mve.h
25
host area will have the same contents as the guest. */
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_sqrshrl, TCG_CALL_NO_RWG, i64, env, i64, i32)
26
-static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
23
DEF_HELPER_FLAGS_3(mve_uqrshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
27
-{
24
DEF_HELPER_FLAGS_3(mve_sqrshrl48, TCG_CALL_NO_RWG, i64, env, i64, i32)
28
- if (!access_ok_untagged(type, guest_addr, len)) {
25
DEF_HELPER_FLAGS_3(mve_uqrshll48, TCG_CALL_NO_RWG, i64, env, i64, i32)
29
- return NULL;
26
+
30
- }
27
+DEF_HELPER_FLAGS_3(mve_uqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
31
-#ifdef DEBUG_REMAP
28
+DEF_HELPER_FLAGS_3(mve_sqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
32
- {
29
diff --git a/target/arm/translate.h b/target/arm/translate.h
33
- void *addr;
30
index XXXXXXX..XXXXXXX 100644
34
- addr = g_malloc(len);
31
--- a/target/arm/translate.h
35
- if (copy)
32
+++ b/target/arm/translate.h
36
- memcpy(addr, g2h(guest_addr), len);
33
@@ -XXX,XX +XXX,XX @@ typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
37
- else
34
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
38
- memset(addr, 0, len);
35
typedef void WideShiftImmFn(TCGv_i64, TCGv_i64, int64_t shift);
39
- return addr;
36
typedef void WideShiftFn(TCGv_i64, TCGv_ptr, TCGv_i64, TCGv_i32);
40
- }
37
+typedef void ShiftImmFn(TCGv_i32, TCGv_i32, int32_t shift);
41
-#else
38
42
- return g2h_untagged(guest_addr);
39
/**
43
-#endif
40
* arm_tbflags_from_tb:
44
-}
41
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
45
+void *lock_user(int type, abi_ulong guest_addr, long len, int copy);
42
index XXXXXXX..XXXXXXX 100644
46
43
--- a/target/arm/t32.decode
47
/* Unlock an area of guest memory. The first LEN bytes must be
44
+++ b/target/arm/t32.decode
48
flushed back to guest memory. host_ptr = NULL is explicitly
49
allowed and does nothing. */
50
-static inline void unlock_user(void *host_ptr, abi_ulong guest_addr,
51
- long len)
52
-{
53
-
54
-#ifdef DEBUG_REMAP
55
- if (!host_ptr)
56
- return;
57
- if (host_ptr == g2h_untagged(guest_addr))
58
- return;
59
- if (len > 0)
60
- memcpy(g2h_untagged(guest_addr), host_ptr, len);
61
- g_free(host_ptr);
62
+#ifndef DEBUG_REMAP
63
+static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, long len)
64
+{ }
65
+#else
66
+void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
67
#endif
68
-}
69
70
/* Return the length of a string in target memory or -TARGET_EFAULT if
71
access error. */
72
abi_long target_strlen(abi_ulong gaddr);
73
74
/* Like lock_user but for null terminated strings. */
75
-static inline void *lock_user_string(abi_ulong guest_addr)
76
-{
77
- abi_long len;
78
- len = target_strlen(guest_addr);
79
- if (len < 0)
80
- return NULL;
81
- return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
82
-}
83
+void *lock_user_string(abi_ulong guest_addr);
84
85
/* Helper macros for locking/unlocking a target struct. */
86
#define lock_user_struct(type, host_ptr, guest_addr, copy)    \
87
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
88
index XXXXXXX..XXXXXXX 100644
89
--- a/linux-user/uaccess.c
90
+++ b/linux-user/uaccess.c
91
@@ -XXX,XX +XXX,XX @@
45
@@ -XXX,XX +XXX,XX @@
92
46
93
#include "qemu.h"
47
&mve_shl_ri rdalo rdahi shim
94
48
&mve_shl_rr rdalo rdahi rm
95
+void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
49
+&mve_sh_ri rda shim
96
+{
50
97
+ if (!access_ok_untagged(type, guest_addr, len)) {
51
# rdahi: bits [3:1] from insn, bit 0 is 1
98
+ return NULL;
52
# rdalo: bits [3:1] from insn, bit 0 is 0
99
+ }
53
@@ -XXX,XX +XXX,XX @@
100
+#ifdef DEBUG_REMAP
54
&mve_shl_ri shim=%imm5_12_6 rdalo=%rdalo_17 rdahi=%rdahi_9
101
+ {
55
@mve_shl_rr ....... .... . ... . rm:4 ... . .. .. .... \
102
+ void *addr;
56
&mve_shl_rr rdalo=%rdalo_17 rdahi=%rdahi_9
103
+ addr = g_malloc(len);
57
+@mve_sh_ri ....... .... . rda:4 . ... ... . .. .. .... \
104
+ if (copy) {
58
+ &mve_sh_ri shim=%imm5_12_6
105
+ memcpy(addr, g2h(guest_addr), len);
59
106
+ } else {
60
{
107
+ memset(addr, 0, len);
61
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
108
+ }
62
@@ -XXX,XX +XXX,XX @@ BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
109
+ return addr;
63
# the rest fall through (where ORR_rrri and MOV_rxri will end up
110
+ }
64
# handling them as r13 and r15 accesses with the same semantics as A32).
111
+#else
65
[
112
+ return g2h_untagged(guest_addr);
66
- LSLL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 00 1111 @mve_shl_ri
113
+#endif
67
- LSRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 01 1111 @mve_shl_ri
114
+}
68
- ASRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 10 1111 @mve_shl_ri
115
+
69
+ {
116
+#ifdef DEBUG_REMAP
70
+ UQSHL_ri 1110101 0010 1 .... 0 ... 1111 .. 00 1111 @mve_sh_ri
117
+void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
71
+ LSLL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 00 1111 @mve_shl_ri
118
+{
72
+ UQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 00 1111 @mve_shl_ri
119
+ if (!host_ptr) {
73
+ }
74
75
- UQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 00 1111 @mve_shl_ri
76
- URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
77
- SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
78
- SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
79
+ {
80
+ URSHR_ri 1110101 0010 1 .... 0 ... 1111 .. 01 1111 @mve_sh_ri
81
+ LSRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 01 1111 @mve_shl_ri
82
+ URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
83
+ }
84
+
85
+ {
86
+ SRSHR_ri 1110101 0010 1 .... 0 ... 1111 .. 10 1111 @mve_sh_ri
87
+ ASRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 10 1111 @mve_shl_ri
88
+ SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
89
+ }
90
+
91
+ {
92
+ SQSHL_ri 1110101 0010 1 .... 0 ... 1111 .. 11 1111 @mve_sh_ri
93
+ SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
94
+ }
95
96
LSLL_rr 1110101 0010 1 ... 0 .... ... 1 0000 1101 @mve_shl_rr
97
ASRL_rr 1110101 0010 1 ... 0 .... ... 1 0010 1101 @mve_shl_rr
98
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/arm/mve_helper.c
101
+++ b/target/arm/mve_helper.c
102
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mve_uqrshll48)(CPUARMState *env, uint64_t n, uint32_t shift)
103
{
104
return do_uqrshl48_d(n, (int8_t)shift, true, &env->QF);
105
}
106
+
107
+uint32_t HELPER(mve_uqshl)(CPUARMState *env, uint32_t n, uint32_t shift)
108
+{
109
+ return do_uqrshl_bhs(n, (int8_t)shift, 32, false, &env->QF);
110
+}
111
+
112
+uint32_t HELPER(mve_sqshl)(CPUARMState *env, uint32_t n, uint32_t shift)
113
+{
114
+ return do_sqrshl_bhs(n, (int8_t)shift, 32, false, &env->QF);
115
+}
116
diff --git a/target/arm/translate.c b/target/arm/translate.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate.c
119
+++ b/target/arm/translate.c
120
@@ -XXX,XX +XXX,XX @@ static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
121
122
static void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
123
{
124
- TCGv_i32 t = tcg_temp_new_i32();
125
+ TCGv_i32 t;
126
127
+ /* Handle shift by the input size for the benefit of trans_SRSHR_ri */
128
+ if (sh == 32) {
129
+ tcg_gen_movi_i32(d, 0);
120
+ return;
130
+ return;
121
+ }
131
+ }
122
+ if (host_ptr == g2h_untagged(guest_addr)) {
132
+ t = tcg_temp_new_i32();
133
tcg_gen_extract_i32(t, a, sh - 1, 1);
134
tcg_gen_sari_i32(d, a, sh);
135
tcg_gen_add_i32(d, d, t);
136
@@ -XXX,XX +XXX,XX @@ static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
137
138
static void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
139
{
140
- TCGv_i32 t = tcg_temp_new_i32();
141
+ TCGv_i32 t;
142
143
+ /* Handle shift by the input size for the benefit of trans_URSHR_ri */
144
+ if (sh == 32) {
145
+ tcg_gen_extract_i32(d, a, sh - 1, 1);
123
+ return;
146
+ return;
124
+ }
147
+ }
125
+ if (len > 0) {
148
+ t = tcg_temp_new_i32();
126
+ memcpy(g2h_untagged(guest_addr), host_ptr, len);
149
tcg_gen_extract_i32(t, a, sh - 1, 1);
127
+ }
150
tcg_gen_shri_i32(d, a, sh);
128
+ g_free(host_ptr);
151
tcg_gen_add_i32(d, d, t);
129
+}
152
@@ -XXX,XX +XXX,XX @@ static bool trans_SQRSHRL48_rr(DisasContext *s, arg_mve_shl_rr *a)
130
+#endif
153
return do_mve_shl_rr(s, a, gen_helper_mve_sqrshrl48);
131
+
154
}
132
+void *lock_user_string(abi_ulong guest_addr)
155
133
+{
156
+static bool do_mve_sh_ri(DisasContext *s, arg_mve_sh_ri *a, ShiftImmFn *fn)
134
+ abi_long len = target_strlen(guest_addr);
157
+{
135
+ if (len < 0) {
158
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
136
+ return NULL;
159
+ /* Decode falls through to ORR/MOV UNPREDICTABLE handling */
137
+ }
160
+ return false;
138
+ return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
161
+ }
139
+}
162
+ if (!dc_isar_feature(aa32_mve, s) ||
140
+
163
+ !arm_dc_feature(s, ARM_FEATURE_M_MAIN) ||
141
/* copy_from_user() and copy_to_user() are usually used to copy data
164
+ a->rda == 13 || a->rda == 15) {
142
* buffers between the target and host. These internally perform
165
+ /* These rda cases are UNPREDICTABLE; we choose to UNDEF */
143
* locking/unlocking of the memory.
166
+ unallocated_encoding(s);
167
+ return true;
168
+ }
169
+
170
+ if (a->shim == 0) {
171
+ a->shim = 32;
172
+ }
173
+ fn(cpu_R[a->rda], cpu_R[a->rda], a->shim);
174
+
175
+ return true;
176
+}
177
+
178
+static bool trans_URSHR_ri(DisasContext *s, arg_mve_sh_ri *a)
179
+{
180
+ return do_mve_sh_ri(s, a, gen_urshr32_i32);
181
+}
182
+
183
+static bool trans_SRSHR_ri(DisasContext *s, arg_mve_sh_ri *a)
184
+{
185
+ return do_mve_sh_ri(s, a, gen_srshr32_i32);
186
+}
187
+
188
+static void gen_mve_sqshl(TCGv_i32 r, TCGv_i32 n, int32_t shift)
189
+{
190
+ gen_helper_mve_sqshl(r, cpu_env, n, tcg_constant_i32(shift));
191
+}
192
+
193
+static bool trans_SQSHL_ri(DisasContext *s, arg_mve_sh_ri *a)
194
+{
195
+ return do_mve_sh_ri(s, a, gen_mve_sqshl);
196
+}
197
+
198
+static void gen_mve_uqshl(TCGv_i32 r, TCGv_i32 n, int32_t shift)
199
+{
200
+ gen_helper_mve_uqshl(r, cpu_env, n, tcg_constant_i32(shift));
201
+}
202
+
203
+static bool trans_UQSHL_ri(DisasContext *s, arg_mve_sh_ri *a)
204
+{
205
+ return do_mve_sh_ri(s, a, gen_mve_uqshl);
206
+}
207
+
208
/*
209
* Multiply and multiply accumulate
210
*/
144
--
211
--
145
2.20.1
212
2.20.1
146
213
147
214
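
(A standalone sketch, not part of the patch above, of the rounding unsigned
right shift that gen_urshr32_i32 generates, including the special sh == 32
case added for URSHR: the rounding bit -- the last bit shifted out -- is
added back to the truncated result.)

    #include <stdint.h>

    static uint32_t urshr32(uint32_t a, int sh)      /* 1 <= sh <= 32 */
    {
        if (sh == 32) {
            return a >> 31;             /* only the rounding bit survives */
        }
        uint32_t round = (a >> (sh - 1)) & 1;
        return (a >> sh) + round;
    }
    /* urshr32(7, 2) == 2 (7/4 rounded to nearest); urshr32(0x80000000u, 32) == 1 */
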
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
For copy_*_user, only 0 and -TARGET_EFAULT are returned; no need
4
to involve abi_long. Use size_t for lengths. Use bool for the
5
lock_user copy argument. Use ssize_t for target_strlen, because
6
we can't overflow the host memory space.
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20210212184902.1251044-19-richard.henderson@linaro.org
12
[PMM: moved fix for ifdef error to previous commit]
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
linux-user/qemu.h | 12 +++++-------
16
linux-user/uaccess.c | 45 ++++++++++++++++++++++----------------------
17
2 files changed, 28 insertions(+), 29 deletions(-)
18
19
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/linux-user/qemu.h
22
+++ b/linux-user/qemu.h
23
@@ -XXX,XX +XXX,XX @@
24
#include "exec/cpu_ldst.h"
25
26
#undef DEBUG_REMAP
27
-#ifdef DEBUG_REMAP
28
-#endif /* DEBUG_REMAP */
29
30
#include "exec/user/abitypes.h"
31
32
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(CPUState *cpu, int type,
33
* buffers between the target and host. These internally perform
34
* locking/unlocking of the memory.
35
*/
36
-abi_long copy_from_user(void *hptr, abi_ulong gaddr, size_t len);
37
-abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
38
+int copy_from_user(void *hptr, abi_ulong gaddr, size_t len);
39
+int copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
40
41
/* Functions for accessing guest memory. The tget and tput functions
42
read/write single values, byteswapping as necessary. The lock_user function
43
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
44
45
/* Lock an area of guest memory into the host. If copy is true then the
46
host area will have the same contents as the guest. */
47
-void *lock_user(int type, abi_ulong guest_addr, long len, int copy);
+void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy);

/* Unlock an area of guest memory. The first LEN bytes must be
   flushed back to guest memory. host_ptr = NULL is explicitly
   allowed and does nothing. */
#ifndef DEBUG_REMAP
-static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, long len)
+static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len)
{ }
#else
void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);

/* Return the length of a string in target memory or -TARGET_EFAULT if
   access error. */
-abi_long target_strlen(abi_ulong gaddr);
+ssize_t target_strlen(abi_ulong gaddr);

/* Like lock_user but for null terminated strings. */
void *lock_user_string(abi_ulong guest_addr);
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/uaccess.c
+++ b/linux-user/uaccess.c
@@ -XXX,XX +XXX,XX @@

#include "qemu.h"

-void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
+void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy)
{
    if (!access_ok_untagged(type, guest_addr, len)) {
        return NULL;
@@ -XXX,XX +XXX,XX @@ void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
}

#ifdef DEBUG_REMAP
-void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
+void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len);
{
    if (!host_ptr) {
        return;
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
    if (host_ptr == g2h_untagged(guest_addr)) {
        return;
    }
-    if (len > 0) {
+    if (len != 0) {
        memcpy(g2h_untagged(guest_addr), host_ptr, len);
    }
    g_free(host_ptr);
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);

void *lock_user_string(abi_ulong guest_addr)
{
-    abi_long len = target_strlen(guest_addr);
+    ssize_t len = target_strlen(guest_addr);
    if (len < 0) {
        return NULL;
    }
-    return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
+    return lock_user(VERIFY_READ, guest_addr, (size_t)len + 1, 1);
}

/* copy_from_user() and copy_to_user() are usually used to copy data
 * buffers between the target and host. These internally perform
 * locking/unlocking of the memory.
 */
-abi_long copy_from_user(void *hptr, abi_ulong gaddr, size_t len)
+int copy_from_user(void *hptr, abi_ulong gaddr, size_t len)
{
-    abi_long ret = 0;
-    void *ghptr;
+    int ret = 0;
+    void *ghptr = lock_user(VERIFY_READ, gaddr, len, 1);

-    if ((ghptr = lock_user(VERIFY_READ, gaddr, len, 1))) {
+    if (ghptr) {
        memcpy(hptr, ghptr, len);
        unlock_user(ghptr, gaddr, 0);
-    } else
+    } else {
        ret = -TARGET_EFAULT;
-
+    }
    return ret;
}

-
-abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len)
+int copy_to_user(abi_ulong gaddr, void *hptr, size_t len)
{
-    abi_long ret = 0;
-    void *ghptr;
+    int ret = 0;
+    void *ghptr = lock_user(VERIFY_WRITE, gaddr, len, 0);

-    if ((ghptr = lock_user(VERIFY_WRITE, gaddr, len, 0))) {
+    if (ghptr) {
        memcpy(ghptr, hptr, len);
        unlock_user(ghptr, gaddr, len);
-    } else
+    } else {
        ret = -TARGET_EFAULT;
+    }

    return ret;
}

/* Return the length of a string in target memory or -TARGET_EFAULT if
   access error */
-abi_long target_strlen(abi_ulong guest_addr1)
+ssize_t target_strlen(abi_ulong guest_addr1)
{
    uint8_t *ptr;
    abi_ulong guest_addr;
-    int max_len, len;
+    size_t max_len, len;

    guest_addr = guest_addr1;
    for(;;) {
@@ -XXX,XX +XXX,XX @@ abi_long target_strlen(abi_ulong guest_addr1)
        unlock_user(ptr, guest_addr, 0);
        guest_addr += len;
        /* we don't allow wrapping or integer overflow */
-        if (guest_addr == 0 ||
-            (guest_addr - guest_addr1) > 0x7fffffff)
+        if (guest_addr == 0 || (guest_addr - guest_addr1) > 0x7fffffff) {
            return -TARGET_EFAULT;
-        if (len != max_len)
+        }
+        if (len != max_len) {
            break;
+        }
    }
    return guest_addr - guest_addr1;
}
--
2.20.1
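As a companion to the prototype changes above, here is a minimal sketch of the calling convention the new signatures imply, modelled on copy_to_user(); the function name fill_guest_buffer() is invented for illustration, and the snippet assumes the linux-user build environment that provides "qemu.h", lock_user() and friends.

    #include "qemu.h"

    /* Illustrative only: fill a guest buffer, mirroring copy_to_user() above.
     * lock_user() returns NULL if the (untagged) range is not accessible;
     * the len passed to unlock_user() says how many bytes to flush back to
     * guest memory when DEBUG_REMAP is in use, so pass 0 if nothing changed. */
    static int fill_guest_buffer(abi_ulong gaddr, size_t len, uint8_t fill)
    {
        void *p = lock_user(VERIFY_WRITE, gaddr, len, 0); /* copy = false */

        if (!p) {
            return -TARGET_EFAULT;
        }
        memset(p, fill, len);
        unlock_user(p, gaddr, len);   /* flush all len bytes back */
        return 0;
    }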
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

We were fudging TBI1 enabled to speed up the generated code.
Now that we've improved the code generation, remove this.
Also, tidy the comment to reflect the current code.

The pauth test was testing a kernel address (-1) and making
incorrect assumptions about TBI1; stick to userland addresses.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 4 ++--
target/arm/cpu.c | 10 +++-------
tests/tcg/aarch64/pauth-2.c | 1 -
3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool tcma_check(uint32_t desc, int bit55, int ptr_tag)
 */
static inline uint64_t useronly_clean_ptr(uint64_t ptr)
{
-    /* TBI is known to be enabled. */
#ifdef CONFIG_USER_ONLY
-    ptr = sextract64(ptr, 0, 56);
+    /* TBI0 is known to be enabled, while TBI1 is disabled. */
+    ptr &= sextract64(ptr, 0, 56);
#endif
    return ptr;
}
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
        env->vfp.zcr_el[1] = MIN(cpu->sve_max_vq - 1, 3);
    }
    /*
-     * Enable TBI0 and TBI1. While the real kernel only enables TBI0,
-     * turning on both here will produce smaller code and otherwise
-     * make no difference to the user-level emulation.
-     *
-     * In sve_probe_page, we assume that this is set.
-     * Do not modify this without other changes.
+     * Enable TBI0 but not TBI1.
+     * Note that this must match useronly_clean_ptr.
     */
-    env->cp15.tcr_el[1].raw_tcr = (3ULL << 37);
+    env->cp15.tcr_el[1].raw_tcr = (1ULL << 37);
#else
    /* Reset into the highest available EL */
    if (arm_feature(env, ARM_FEATURE_EL3)) {
diff --git a/tests/tcg/aarch64/pauth-2.c b/tests/tcg/aarch64/pauth-2.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/pauth-2.c
+++ b/tests/tcg/aarch64/pauth-2.c
@@ -XXX,XX +XXX,XX @@ void do_test(uint64_t value)
int main()
{
    do_test(0);
-    do_test(-1);
    do_test(0xda004acedeadbeefull);
    return 0;
}
--
2.20.1
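The one-line change to useronly_clean_ptr() above is easy to misread; this standalone sketch (plain host C, not QEMU code) spells out why AND-ing with sextract64(ptr, 0, 56) strips the top byte only when bit 55 is clear, which is exactly the TBI0-enabled / TBI1-disabled behaviour the patch wants. The example addresses are arbitrary.

    /*
     * The mask is ptr sign-extended from bit 55: all-ones in the top byte
     * when bit 55 is set, so the pointer passes through untouched; zeroes
     * there when bit 55 is clear, so the TBI0 tag byte is stripped.
     */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t clean_ptr(uint64_t ptr)
    {
        /* Same operation as sextract64(ptr, 0, 56) in the patch. */
        uint64_t mask = (uint64_t)((int64_t)(ptr << 8) >> 8);
        return ptr & mask;
    }

    int main(void)
    {
        uint64_t user = 0xa5000000deadbeefULL;  /* bit 55 clear: tag stripped */
        uint64_t kern = 0xa5ffffff12345678ULL;  /* bit 55 set: left alone */

        printf("%016" PRIx64 " -> %016" PRIx64 "\n", user, clean_ptr(user));
        printf("%016" PRIx64 " -> %016" PRIx64 "\n", kern, clean_ptr(kern));
        return 0;
    }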
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

These prctl fields are required for the function of MTE.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/aarch64/target_syscall.h | 9 ++++++
linux-user/syscall.c | 43 +++++++++++++++++++++++++++++
2 files changed, 52 insertions(+)

diff --git a/linux-user/aarch64/target_syscall.h b/linux-user/aarch64/target_syscall.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/target_syscall.h
+++ b/linux-user/aarch64/target_syscall.h
@@ -XXX,XX +XXX,XX @@ struct target_pt_regs {
#define TARGET_PR_SET_TAGGED_ADDR_CTRL 55
#define TARGET_PR_GET_TAGGED_ADDR_CTRL 56
# define TARGET_PR_TAGGED_ADDR_ENABLE (1UL << 0)
+/* MTE tag check fault modes */
+# define TARGET_PR_MTE_TCF_SHIFT 1
+# define TARGET_PR_MTE_TCF_NONE (0UL << TARGET_PR_MTE_TCF_SHIFT)
+# define TARGET_PR_MTE_TCF_SYNC (1UL << TARGET_PR_MTE_TCF_SHIFT)
+# define TARGET_PR_MTE_TCF_ASYNC (2UL << TARGET_PR_MTE_TCF_SHIFT)
+# define TARGET_PR_MTE_TCF_MASK (3UL << TARGET_PR_MTE_TCF_SHIFT)
+/* MTE tag inclusion mask */
+# define TARGET_PR_MTE_TAG_SHIFT 3
+# define TARGET_PR_MTE_TAG_MASK (0xffffUL << TARGET_PR_MTE_TAG_SHIFT)

#endif /* AARCH64_TARGET_SYSCALL_H */
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
        {
            abi_ulong valid_mask = TARGET_PR_TAGGED_ADDR_ENABLE;
            CPUARMState *env = cpu_env;
+            ARMCPU *cpu = env_archcpu(env);
+
+            if (cpu_isar_feature(aa64_mte, cpu)) {
+                valid_mask |= TARGET_PR_MTE_TCF_MASK;
+                valid_mask |= TARGET_PR_MTE_TAG_MASK;
+            }

            if ((arg2 & ~valid_mask) || arg3 || arg4 || arg5) {
                return -TARGET_EINVAL;
            }
            env->tagged_addr_enable = arg2 & TARGET_PR_TAGGED_ADDR_ENABLE;
+
+            if (cpu_isar_feature(aa64_mte, cpu)) {
+                switch (arg2 & TARGET_PR_MTE_TCF_MASK) {
+                case TARGET_PR_MTE_TCF_NONE:
+                case TARGET_PR_MTE_TCF_SYNC:
+                case TARGET_PR_MTE_TCF_ASYNC:
+                    break;
+                default:
+                    return -EINVAL;
+                }
+
+                /*
+                 * Write PR_MTE_TCF to SCTLR_EL1[TCF0].
+                 * Note that the syscall values are consistent with hw.
+                 */
+                env->cp15.sctlr_el[1] =
+                    deposit64(env->cp15.sctlr_el[1], 38, 2,
+                              arg2 >> TARGET_PR_MTE_TCF_SHIFT);
+
+                /*
+                 * Write PR_MTE_TAG to GCR_EL1[Exclude].
+                 * Note that the syscall uses an include mask,
+                 * and hardware uses an exclude mask -- invert.
+                 */
+                env->cp15.gcr_el1 =
+                    deposit64(env->cp15.gcr_el1, 0, 16,
+                              ~arg2 >> TARGET_PR_MTE_TAG_SHIFT);
+                arm_rebuild_hflags(env);
+            }
            return 0;
        }
        case TARGET_PR_GET_TAGGED_ADDR_CTRL:
        {
            abi_long ret = 0;
            CPUARMState *env = cpu_env;
+            ARMCPU *cpu = env_archcpu(env);

            if (arg2 || arg3 || arg4 || arg5) {
                return -TARGET_EINVAL;
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
            if (env->tagged_addr_enable) {
                ret |= TARGET_PR_TAGGED_ADDR_ENABLE;
            }
+            if (cpu_isar_feature(aa64_mte, cpu)) {
+                /* See above. */
+                ret |= (extract64(env->cp15.sctlr_el[1], 38, 2)
+                        << TARGET_PR_MTE_TCF_SHIFT);
+                ret = deposit64(ret, TARGET_PR_MTE_TAG_SHIFT, 16,
+                                ~env->cp15.gcr_el1);
+            }
            return ret;
        }
#endif /* AARCH64 */
--
2.20.1
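For context, this is roughly what a guest program would do with the interface being emulated: a minimal sketch using the prctl() values mirrored by the TARGET_PR_* constants above (option 55, enable bit 1UL << 0, sync mode 1UL << 1, tag shift 3). The locally defined constants and the chosen tag mask are illustrative, not part of the patch.

    /* Guest-side sketch: enable tagged addresses plus synchronous MTE tag
     * checking, and allow tags 0-3 in the IRG inclusion mask. */
    #include <stdio.h>
    #include <sys/prctl.h>

    /* Values mirror the TARGET_PR_* constants above (Linux prctl ABI). */
    #define TAGGED_ADDR_CTRL   55
    #define TAGGED_ADDR_ENABLE (1UL << 0)
    #define MTE_TCF_SYNC       (1UL << 1)
    #define MTE_TAG_SHIFT      3

    int main(void)
    {
        unsigned long ctrl = TAGGED_ADDR_ENABLE | MTE_TCF_SYNC |
                             (0xfUL << MTE_TAG_SHIFT);  /* include tags 0-3 */

        /* arg3..arg5 must be zero, as the emulation above also enforces. */
        if (prctl(TAGGED_ADDR_CTRL, ctrl, 0, 0, 0)) {
            perror("PR_SET_TAGGED_ADDR_CTRL");
            return 1;
        }
        printf("tagged addresses + synchronous MTE enabled\n");
        return 0;
    }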
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

A proper syndrome is required to fill in the proper si_code.
Use page_get_flags to determine permission vs translation for user-only.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/aarch64/cpu_loop.c | 24 +++++++++++++++++++++---
target/arm/tlb_helper.c | 15 +++++++++------
2 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/cpu_loop.c
+++ b/linux-user/aarch64/cpu_loop.c
@@ -XXX,XX +XXX,XX @@
#include "cpu_loop-common.h"
#include "qemu/guest-random.h"
#include "hw/semihosting/common-semi.h"
+#include "target/arm/syndrome.h"

#define get_user_code_u32(x, gaddr, env) \
    ({ abi_long __r = get_user_u32((x), (gaddr)); \
@@ -XXX,XX +XXX,XX @@
void cpu_loop(CPUARMState *env)
{
    CPUState *cs = env_cpu(env);
-    int trapnr;
+    int trapnr, ec, fsc;
    abi_long ret;
    target_siginfo_t info;

@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
        case EXCP_DATA_ABORT:
            info.si_signo = TARGET_SIGSEGV;
            info.si_errno = 0;
-            /* XXX: check env->error_code */
-            info.si_code = TARGET_SEGV_MAPERR;
            info._sifields._sigfault._addr = env->exception.vaddress;
+
+            /* We should only arrive here with EC in {DATAABORT, INSNABORT}. */
+            ec = syn_get_ec(env->exception.syndrome);
+            assert(ec == EC_DATAABORT || ec == EC_INSNABORT);
+
+            /* Both EC have the same format for FSC, or close enough. */
+            fsc = extract32(env->exception.syndrome, 0, 6);
+            switch (fsc) {
+            case 0x04 ... 0x07: /* Translation fault, level {0-3} */
+                info.si_code = TARGET_SEGV_MAPERR;
+                break;
+            case 0x09 ... 0x0b: /* Access flag fault, level {1-3} */
+            case 0x0d ... 0x0f: /* Permission fault, level {1-3} */
+                info.si_code = TARGET_SEGV_ACCERR;
+                break;
+            default:
+                g_assert_not_reached();
+            }
+
            queue_signal(env, info.si_signo, QEMU_SI_FAULT, &info);
            break;
        case EXCP_DEBUG:
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tlb_helper.c
+++ b/target/arm/tlb_helper.c
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                      bool probe, uintptr_t retaddr)
{
    ARMCPU *cpu = ARM_CPU(cs);
+    ARMMMUFaultInfo fi = {};

#ifdef CONFIG_USER_ONLY
-    cpu->env.exception.vaddress = address;
-    if (access_type == MMU_INST_FETCH) {
-        cs->exception_index = EXCP_PREFETCH_ABORT;
+    int flags = page_get_flags(useronly_clean_ptr(address));
+    if (flags & PAGE_VALID) {
+        fi.type = ARMFault_Permission;
    } else {
-        cs->exception_index = EXCP_DATA_ABORT;
+        fi.type = ARMFault_Translation;
    }
-    cpu_loop_exit_restore(cs, retaddr);
+
+    /* now we have a real cpu fault */
+    cpu_restore_state(cs, retaddr, true);
+    arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi);
#else
    hwaddr phys_addr;
    target_ulong page_size;
    int prot, ret;
    MemTxAttrs attrs = {};
-    ARMMMUFaultInfo fi = {};
    ARMCacheAttrs cacheattrs = {};

    /*
--
2.20.1
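A quick way to see the MAPERR/ACCERR distinction introduced above from the guest side is a test like the following sketch: a mapped but PROT_NONE page should now raise SIGSEGV with si_code == SEGV_ACCERR, while touching an unmapped address would report SEGV_MAPERR instead. The program is illustrative and not part of the series.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    static void handler(int sig, siginfo_t *si, void *uc)
    {
        printf("SIGSEGV at %p, si_code=%d (%s)\n", si->si_addr, si->si_code,
               si->si_code == SEGV_ACCERR ? "ACCERR" : "MAPERR");
        exit(0);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };

        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        /* A mapped but inaccessible page: expect SEGV_ACCERR (permission). */
        char *p = mmap(NULL, 4096, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            return 1;
        }
        return *(volatile char *)p;
    }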
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/aarch64/target_signal.h | 2 ++
linux-user/aarch64/cpu_loop.c | 3 +++
2 files changed, 5 insertions(+)

diff --git a/linux-user/aarch64/target_signal.h b/linux-user/aarch64/target_signal.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/target_signal.h
+++ b/linux-user/aarch64/target_signal.h
@@ -XXX,XX +XXX,XX @@ typedef struct target_sigaltstack {

#include "../generic/signal.h"

+#define TARGET_SEGV_MTESERR 9 /* Synchronous ARM MTE exception */
+
#define TARGET_ARCH_HAS_SETUP_FRAME
#endif /* AARCH64_TARGET_SIGNAL_H */
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/cpu_loop.c
+++ b/linux-user/aarch64/cpu_loop.c
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
            case 0x0d ... 0x0f: /* Permission fault, level {1-3} */
                info.si_code = TARGET_SEGV_ACCERR;
                break;
+            case 0x11: /* Synchronous Tag Check Fault */
+                info.si_code = TARGET_SEGV_MTESERR;
+                break;
            default:
                g_assert_not_reached();
            }
--
2.20.1
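A guest that wants to catch synchronous tag-check faults would look for the new code in its SIGSEGV handler; a minimal sketch follows, with a fallback define matching the TARGET_SEGV_MTESERR value above for libcs whose headers predate MTE. The tagged accesses themselves are left as a placeholder.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    #ifndef SEGV_MTESERR
    #define SEGV_MTESERR 9   /* matches TARGET_SEGV_MTESERR above */
    #endif

    static void handler(int sig, siginfo_t *si, void *uc)
    {
        if (si->si_code == SEGV_MTESERR) {
            printf("synchronous tag check fault at %p\n", si->si_addr);
        }
        exit(1);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };

        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        /* ... perform tagged-pointer accesses under PR_MTE_TCF_SYNC here ... */
        return 0;
    }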
1
From: Hao Wu <wuhaotsh@google.com>
1
Implement the MVE shifts by register, which perform
2
shifts on a single general-purpose register.
2
3
3
This commit implements the single-byte mode of the SMBus.
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210628135835.6690-19-peter.maydell@linaro.org
7
---
8
target/arm/helper-mve.h | 2 ++
9
target/arm/translate.h | 1 +
10
target/arm/t32.decode | 18 ++++++++++++++----
11
target/arm/mve_helper.c | 10 ++++++++++
12
target/arm/translate.c | 30 ++++++++++++++++++++++++++++++
13
5 files changed, 57 insertions(+), 4 deletions(-)
4
14
5
Each Nuvoton SoC has 16 System Management Bus (SMBus) modules. These buses are
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
6
compliant with the SMBus and I2C protocols.
7
8
This patch implements the single-byte mode of the SMBus. In this mode,
9
the user sends or receives a byte each time. The SMBus device transmits
10
it to the underlying i2c device and sends an interrupt back to the QEMU
11
guest.
12
13
Reviewed-by: Doug Evans <dje@google.com>
14
Reviewed-by: Tyrong Ting <kfting@nuvoton.com>
15
Signed-off-by: Hao Wu <wuhaotsh@google.com>
16
Reviewed-by: Corey Minyard <cminyard@mvista.com>
17
Message-id: 20210210220426.3577804-2-wuhaotsh@google.com
18
Acked-by: Corey Minyard <cminyard@mvista.com>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
docs/system/arm/nuvoton.rst | 2 +-
22
include/hw/arm/npcm7xx.h | 2 +
23
include/hw/i2c/npcm7xx_smbus.h | 88 ++++
24
hw/arm/npcm7xx.c | 68 ++-
25
hw/i2c/npcm7xx_smbus.c | 783 +++++++++++++++++++++++++++++++++
26
hw/i2c/meson.build | 1 +
27
hw/i2c/trace-events | 11 +
28
7 files changed, 938 insertions(+), 17 deletions(-)
29
create mode 100644 include/hw/i2c/npcm7xx_smbus.h
30
create mode 100644 hw/i2c/npcm7xx_smbus.c
31
32
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
33
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
34
--- a/docs/system/arm/nuvoton.rst
17
--- a/target/arm/helper-mve.h
35
+++ b/docs/system/arm/nuvoton.rst
18
+++ b/target/arm/helper-mve.h
36
@@ -XXX,XX +XXX,XX @@ Supported devices
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_uqrshll48, TCG_CALL_NO_RWG, i64, env, i64, i32)
37
* GPIO controller
20
38
* Analog to Digital Converter (ADC)
21
DEF_HELPER_FLAGS_3(mve_uqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
39
* Pulse Width Modulation (PWM)
22
DEF_HELPER_FLAGS_3(mve_sqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
40
+ * SMBus controller (SMBF)
23
+DEF_HELPER_FLAGS_3(mve_uqrshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
41
24
+DEF_HELPER_FLAGS_3(mve_sqrshr, TCG_CALL_NO_RWG, i32, env, i32, i32)
42
Missing devices
25
diff --git a/target/arm/translate.h b/target/arm/translate.h
43
---------------
44
@@ -XXX,XX +XXX,XX @@ Missing devices
45
46
* Ethernet controllers (GMAC and EMC)
47
* USB device (USBD)
48
- * SMBus controller (SMBF)
49
* Peripheral SPI controller (PSPI)
50
* SD/MMC host
51
* PECI interface
52
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
53
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
54
--- a/include/hw/arm/npcm7xx.h
27
--- a/target/arm/translate.h
55
+++ b/include/hw/arm/npcm7xx.h
28
+++ b/target/arm/translate.h
29
@@ -XXX,XX +XXX,XX @@ typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
30
typedef void WideShiftImmFn(TCGv_i64, TCGv_i64, int64_t shift);
31
typedef void WideShiftFn(TCGv_i64, TCGv_ptr, TCGv_i64, TCGv_i32);
32
typedef void ShiftImmFn(TCGv_i32, TCGv_i32, int32_t shift);
33
+typedef void ShiftFn(TCGv_i32, TCGv_ptr, TCGv_i32, TCGv_i32);
34
35
/**
36
* arm_tbflags_from_tb:
37
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/t32.decode
40
+++ b/target/arm/t32.decode
56
@@ -XXX,XX +XXX,XX @@
41
@@ -XXX,XX +XXX,XX @@
57
#include "hw/adc/npcm7xx_adc.h"
42
&mve_shl_ri rdalo rdahi shim
58
#include "hw/cpu/a9mpcore.h"
43
&mve_shl_rr rdalo rdahi rm
59
#include "hw/gpio/npcm7xx_gpio.h"
44
&mve_sh_ri rda shim
60
+#include "hw/i2c/npcm7xx_smbus.h"
45
+&mve_sh_rr rda rm
61
#include "hw/mem/npcm7xx_mc.h"
46
62
#include "hw/misc/npcm7xx_clk.h"
47
# rdahi: bits [3:1] from insn, bit 0 is 1
63
#include "hw/misc/npcm7xx_gcr.h"
48
# rdalo: bits [3:1] from insn, bit 0 is 0
64
@@ -XXX,XX +XXX,XX @@ typedef struct NPCM7xxState {
65
NPCM7xxMCState mc;
66
NPCM7xxRNGState rng;
67
NPCM7xxGPIOState gpio[8];
68
+ NPCM7xxSMBusState smbus[16];
69
EHCISysBusState ehci;
70
OHCISysBusState ohci;
71
NPCM7xxFIUState fiu[2];
72
diff --git a/include/hw/i2c/npcm7xx_smbus.h b/include/hw/i2c/npcm7xx_smbus.h
73
new file mode 100644
74
index XXXXXXX..XXXXXXX
75
--- /dev/null
76
+++ b/include/hw/i2c/npcm7xx_smbus.h
77
@@ -XXX,XX +XXX,XX @@
49
@@ -XXX,XX +XXX,XX @@
78
+/*
50
&mve_shl_rr rdalo=%rdalo_17 rdahi=%rdahi_9
79
+ * Nuvoton NPCM7xx SMBus Module.
51
@mve_sh_ri ....... .... . rda:4 . ... ... . .. .. .... \
80
+ *
52
&mve_sh_ri shim=%imm5_12_6
81
+ * Copyright 2020 Google LLC
53
+@mve_sh_rr ....... .... . rda:4 rm:4 .... .... .... &mve_sh_rr
82
+ *
54
83
+ * This program is free software; you can redistribute it and/or modify it
55
{
84
+ * under the terms of the GNU General Public License as published by the
56
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
85
+ * Free Software Foundation; either version 2 of the License, or
57
@@ -XXX,XX +XXX,XX @@ BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
86
+ * (at your option) any later version.
58
SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
87
+ *
88
+ * This program is distributed in the hope that it will be useful, but WITHOUT
89
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
90
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
91
+ * for more details.
92
+ */
93
+#ifndef NPCM7XX_SMBUS_H
94
+#define NPCM7XX_SMBUS_H
95
+
96
+#include "exec/memory.h"
97
+#include "hw/i2c/i2c.h"
98
+#include "hw/irq.h"
99
+#include "hw/sysbus.h"
100
+
101
+/*
102
+ * Number of addresses this module contains. Do not change this without
103
+ * incrementing the version_id in the vmstate.
104
+ */
105
+#define NPCM7XX_SMBUS_NR_ADDRS 10
106
+
107
+typedef enum NPCM7xxSMBusStatus {
108
+ NPCM7XX_SMBUS_STATUS_IDLE,
109
+ NPCM7XX_SMBUS_STATUS_SENDING,
110
+ NPCM7XX_SMBUS_STATUS_RECEIVING,
111
+ NPCM7XX_SMBUS_STATUS_NEGACK,
112
+ NPCM7XX_SMBUS_STATUS_STOPPING_LAST_RECEIVE,
113
+ NPCM7XX_SMBUS_STATUS_STOPPING_NEGACK,
114
+} NPCM7xxSMBusStatus;
115
+
116
+/*
117
+ * struct NPCM7xxSMBusState - System Management Bus device state.
118
+ * @bus: The underlying I2C Bus.
119
+ * @irq: GIC interrupt line to fire on events (if enabled).
120
+ * @sda: The serial data register.
121
+ * @st: The status register.
122
+ * @cst: The control status register.
123
+ * @cst2: The control status register 2.
124
+ * @cst3: The control status register 3.
125
+ * @ctl1: The control register 1.
126
+ * @ctl2: The control register 2.
127
+ * @ctl3: The control register 3.
128
+ * @ctl4: The control register 4.
129
+ * @ctl5: The control register 5.
130
+ * @addr: The SMBus module's own addresses on the I2C bus.
131
+ * @scllt: The SCL low time register.
132
+ * @sclht: The SCL high time register.
133
+ * @status: The current status of the SMBus.
134
+ */
135
+typedef struct NPCM7xxSMBusState {
136
+ SysBusDevice parent;
137
+
138
+ MemoryRegion iomem;
139
+
140
+ I2CBus *bus;
141
+ qemu_irq irq;
142
+
143
+ uint8_t sda;
144
+ uint8_t st;
145
+ uint8_t cst;
146
+ uint8_t cst2;
147
+ uint8_t cst3;
148
+ uint8_t ctl1;
149
+ uint8_t ctl2;
150
+ uint8_t ctl3;
151
+ uint8_t ctl4;
152
+ uint8_t ctl5;
153
+ uint8_t addr[NPCM7XX_SMBUS_NR_ADDRS];
154
+
155
+ uint8_t scllt;
156
+ uint8_t sclht;
157
+
158
+ NPCM7xxSMBusStatus status;
159
+} NPCM7xxSMBusState;
160
+
161
+#define TYPE_NPCM7XX_SMBUS "npcm7xx-smbus"
162
+#define NPCM7XX_SMBUS(obj) OBJECT_CHECK(NPCM7xxSMBusState, (obj), \
163
+ TYPE_NPCM7XX_SMBUS)
164
+
165
+#endif /* NPCM7XX_SMBUS_H */
166
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
167
index XXXXXXX..XXXXXXX 100644
168
--- a/hw/arm/npcm7xx.c
169
+++ b/hw/arm/npcm7xx.c
170
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
171
NPCM7XX_WDG2_IRQ, /* Timer Module 2 Watchdog */
172
NPCM7XX_EHCI_IRQ = 61,
173
NPCM7XX_OHCI_IRQ = 62,
174
+ NPCM7XX_SMBUS0_IRQ = 64,
175
+ NPCM7XX_SMBUS1_IRQ,
176
+ NPCM7XX_SMBUS2_IRQ,
177
+ NPCM7XX_SMBUS3_IRQ,
178
+ NPCM7XX_SMBUS4_IRQ,
179
+ NPCM7XX_SMBUS5_IRQ,
180
+ NPCM7XX_SMBUS6_IRQ,
181
+ NPCM7XX_SMBUS7_IRQ,
182
+ NPCM7XX_SMBUS8_IRQ,
183
+ NPCM7XX_SMBUS9_IRQ,
184
+ NPCM7XX_SMBUS10_IRQ,
185
+ NPCM7XX_SMBUS11_IRQ,
186
+ NPCM7XX_SMBUS12_IRQ,
187
+ NPCM7XX_SMBUS13_IRQ,
188
+ NPCM7XX_SMBUS14_IRQ,
189
+ NPCM7XX_SMBUS15_IRQ,
190
NPCM7XX_PWM0_IRQ = 93, /* PWM module 0 */
191
NPCM7XX_PWM1_IRQ, /* PWM module 1 */
192
NPCM7XX_GPIO0_IRQ = 116,
193
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_pwm_addr[] = {
194
0xf0104000,
195
};
196
197
+/* Direct memory-mapped access to each SMBus Module. */
198
+static const hwaddr npcm7xx_smbus_addr[] = {
199
+ 0xf0080000,
200
+ 0xf0081000,
201
+ 0xf0082000,
202
+ 0xf0083000,
203
+ 0xf0084000,
204
+ 0xf0085000,
205
+ 0xf0086000,
206
+ 0xf0087000,
207
+ 0xf0088000,
208
+ 0xf0089000,
209
+ 0xf008a000,
210
+ 0xf008b000,
211
+ 0xf008c000,
212
+ 0xf008d000,
213
+ 0xf008e000,
214
+ 0xf008f000,
215
+};
216
+
217
static const struct {
218
hwaddr regs_addr;
219
uint32_t unconnected_pins;
220
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
221
object_initialize_child(obj, "gpio[*]", &s->gpio[i], TYPE_NPCM7XX_GPIO);
222
}
59
}
223
60
224
+ for (i = 0; i < ARRAY_SIZE(s->smbus); i++) {
61
- LSLL_rr 1110101 0010 1 ... 0 .... ... 1 0000 1101 @mve_shl_rr
225
+ object_initialize_child(obj, "smbus[*]", &s->smbus[i],
62
- ASRL_rr 1110101 0010 1 ... 0 .... ... 1 0010 1101 @mve_shl_rr
226
+ TYPE_NPCM7XX_SMBUS);
63
- UQRSHLL64_rr 1110101 0010 1 ... 1 .... ... 1 0000 1101 @mve_shl_rr
64
- SQRSHRL64_rr 1110101 0010 1 ... 1 .... ... 1 0010 1101 @mve_shl_rr
65
+ {
66
+ UQRSHL_rr 1110101 0010 1 .... .... 1111 0000 1101 @mve_sh_rr
67
+ LSLL_rr 1110101 0010 1 ... 0 .... ... 1 0000 1101 @mve_shl_rr
68
+ UQRSHLL64_rr 1110101 0010 1 ... 1 .... ... 1 0000 1101 @mve_shl_rr
227
+ }
69
+ }
228
+
70
+
229
object_initialize_child(obj, "ehci", &s->ehci, TYPE_NPCM7XX_EHCI);
71
+ {
230
object_initialize_child(obj, "ohci", &s->ohci, TYPE_SYSBUS_OHCI);
72
+ SQRSHR_rr 1110101 0010 1 .... .... 1111 0010 1101 @mve_sh_rr
231
73
+ ASRL_rr 1110101 0010 1 ... 0 .... ... 1 0010 1101 @mve_shl_rr
232
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
74
+ SQRSHRL64_rr 1110101 0010 1 ... 1 .... ... 1 0010 1101 @mve_shl_rr
233
npcm7xx_irq(s, NPCM7XX_GPIO0_IRQ + i));
234
}
235
236
+ /* SMBus modules. Cannot fail. */
237
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_smbus_addr) != ARRAY_SIZE(s->smbus));
238
+ for (i = 0; i < ARRAY_SIZE(s->smbus); i++) {
239
+ Object *obj = OBJECT(&s->smbus[i]);
240
+
241
+ sysbus_realize(SYS_BUS_DEVICE(obj), &error_abort);
242
+ sysbus_mmio_map(SYS_BUS_DEVICE(obj), 0, npcm7xx_smbus_addr[i]);
243
+ sysbus_connect_irq(SYS_BUS_DEVICE(obj), 0,
244
+ npcm7xx_irq(s, NPCM7XX_SMBUS0_IRQ + i));
245
+ }
75
+ }
246
+
76
+
247
/* USB Host */
77
UQRSHLL48_rr 1110101 0010 1 ... 1 .... ... 1 1000 1101 @mve_shl_rr
248
object_property_set_bool(OBJECT(&s->ehci), "companion-enable", true,
78
SQRSHRL48_rr 1110101 0010 1 ... 1 .... ... 1 1010 1101 @mve_shl_rr
249
&error_abort);
79
]
250
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
80
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
251
create_unimplemented_device("npcm7xx.pcierc", 0xe1000000, 64 * KiB);
81
index XXXXXXX..XXXXXXX 100644
252
create_unimplemented_device("npcm7xx.kcs", 0xf0007000, 4 * KiB);
82
--- a/target/arm/mve_helper.c
253
create_unimplemented_device("npcm7xx.gfxi", 0xf000e000, 4 * KiB);
83
+++ b/target/arm/mve_helper.c
254
- create_unimplemented_device("npcm7xx.smbus[0]", 0xf0080000, 4 * KiB);
84
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_sqshl)(CPUARMState *env, uint32_t n, uint32_t shift)
255
- create_unimplemented_device("npcm7xx.smbus[1]", 0xf0081000, 4 * KiB);
85
{
256
- create_unimplemented_device("npcm7xx.smbus[2]", 0xf0082000, 4 * KiB);
86
return do_sqrshl_bhs(n, (int8_t)shift, 32, false, &env->QF);
257
- create_unimplemented_device("npcm7xx.smbus[3]", 0xf0083000, 4 * KiB);
87
}
258
- create_unimplemented_device("npcm7xx.smbus[4]", 0xf0084000, 4 * KiB);
259
- create_unimplemented_device("npcm7xx.smbus[5]", 0xf0085000, 4 * KiB);
260
- create_unimplemented_device("npcm7xx.smbus[6]", 0xf0086000, 4 * KiB);
261
- create_unimplemented_device("npcm7xx.smbus[7]", 0xf0087000, 4 * KiB);
262
- create_unimplemented_device("npcm7xx.smbus[8]", 0xf0088000, 4 * KiB);
263
- create_unimplemented_device("npcm7xx.smbus[9]", 0xf0089000, 4 * KiB);
264
- create_unimplemented_device("npcm7xx.smbus[10]", 0xf008a000, 4 * KiB);
265
- create_unimplemented_device("npcm7xx.smbus[11]", 0xf008b000, 4 * KiB);
266
- create_unimplemented_device("npcm7xx.smbus[12]", 0xf008c000, 4 * KiB);
267
- create_unimplemented_device("npcm7xx.smbus[13]", 0xf008d000, 4 * KiB);
268
- create_unimplemented_device("npcm7xx.smbus[14]", 0xf008e000, 4 * KiB);
269
- create_unimplemented_device("npcm7xx.smbus[15]", 0xf008f000, 4 * KiB);
270
create_unimplemented_device("npcm7xx.espi", 0xf009f000, 4 * KiB);
271
create_unimplemented_device("npcm7xx.peci", 0xf0100000, 4 * KiB);
272
create_unimplemented_device("npcm7xx.siox[1]", 0xf0101000, 4 * KiB);
273
diff --git a/hw/i2c/npcm7xx_smbus.c b/hw/i2c/npcm7xx_smbus.c
274
new file mode 100644
275
index XXXXXXX..XXXXXXX
276
--- /dev/null
277
+++ b/hw/i2c/npcm7xx_smbus.c
278
@@ -XXX,XX +XXX,XX @@
279
+/*
280
+ * Nuvoton NPCM7xx SMBus Module.
281
+ *
282
+ * Copyright 2020 Google LLC
283
+ *
284
+ * This program is free software; you can redistribute it and/or modify it
285
+ * under the terms of the GNU General Public License as published by the
286
+ * Free Software Foundation; either version 2 of the License, or
287
+ * (at your option) any later version.
288
+ *
289
+ * This program is distributed in the hope that it will be useful, but WITHOUT
290
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
291
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
292
+ * for more details.
293
+ */
294
+
88
+
295
+#include "qemu/osdep.h"
89
+uint32_t HELPER(mve_uqrshl)(CPUARMState *env, uint32_t n, uint32_t shift)
296
+
297
+#include "hw/i2c/npcm7xx_smbus.h"
298
+#include "migration/vmstate.h"
299
+#include "qemu/bitops.h"
300
+#include "qemu/guest-random.h"
301
+#include "qemu/log.h"
302
+#include "qemu/module.h"
303
+#include "qemu/units.h"
304
+
305
+#include "trace.h"
306
+
307
+enum NPCM7xxSMBusCommonRegister {
308
+ NPCM7XX_SMB_SDA = 0x0,
309
+ NPCM7XX_SMB_ST = 0x2,
310
+ NPCM7XX_SMB_CST = 0x4,
311
+ NPCM7XX_SMB_CTL1 = 0x6,
312
+ NPCM7XX_SMB_ADDR1 = 0x8,
313
+ NPCM7XX_SMB_CTL2 = 0xa,
314
+ NPCM7XX_SMB_ADDR2 = 0xc,
315
+ NPCM7XX_SMB_CTL3 = 0xe,
316
+ NPCM7XX_SMB_CST2 = 0x18,
317
+ NPCM7XX_SMB_CST3 = 0x19,
318
+ NPCM7XX_SMB_VER = 0x1f,
319
+};
320
+
321
+enum NPCM7xxSMBusBank0Register {
322
+ NPCM7XX_SMB_ADDR3 = 0x10,
323
+ NPCM7XX_SMB_ADDR7 = 0x11,
324
+ NPCM7XX_SMB_ADDR4 = 0x12,
325
+ NPCM7XX_SMB_ADDR8 = 0x13,
326
+ NPCM7XX_SMB_ADDR5 = 0x14,
327
+ NPCM7XX_SMB_ADDR9 = 0x15,
328
+ NPCM7XX_SMB_ADDR6 = 0x16,
329
+ NPCM7XX_SMB_ADDR10 = 0x17,
330
+ NPCM7XX_SMB_CTL4 = 0x1a,
331
+ NPCM7XX_SMB_CTL5 = 0x1b,
332
+ NPCM7XX_SMB_SCLLT = 0x1c,
333
+ NPCM7XX_SMB_FIF_CTL = 0x1d,
334
+ NPCM7XX_SMB_SCLHT = 0x1e,
335
+};
336
+
337
+enum NPCM7xxSMBusBank1Register {
338
+ NPCM7XX_SMB_FIF_CTS = 0x10,
339
+ NPCM7XX_SMB_FAIR_PER = 0x11,
340
+ NPCM7XX_SMB_TXF_CTL = 0x12,
341
+ NPCM7XX_SMB_T_OUT = 0x14,
342
+ NPCM7XX_SMB_TXF_STS = 0x1a,
343
+ NPCM7XX_SMB_RXF_STS = 0x1c,
344
+ NPCM7XX_SMB_RXF_CTL = 0x1e,
345
+};
346
+
347
+/* ST fields */
348
+#define NPCM7XX_SMBST_STP BIT(7)
349
+#define NPCM7XX_SMBST_SDAST BIT(6)
350
+#define NPCM7XX_SMBST_BER BIT(5)
351
+#define NPCM7XX_SMBST_NEGACK BIT(4)
352
+#define NPCM7XX_SMBST_STASTR BIT(3)
353
+#define NPCM7XX_SMBST_NMATCH BIT(2)
354
+#define NPCM7XX_SMBST_MODE BIT(1)
355
+#define NPCM7XX_SMBST_XMIT BIT(0)
356
+
357
+/* CST fields */
358
+#define NPCM7XX_SMBCST_ARPMATCH BIT(7)
359
+#define NPCM7XX_SMBCST_MATCHAF BIT(6)
360
+#define NPCM7XX_SMBCST_TGSCL BIT(5)
361
+#define NPCM7XX_SMBCST_TSDA BIT(4)
362
+#define NPCM7XX_SMBCST_GCMATCH BIT(3)
363
+#define NPCM7XX_SMBCST_MATCH BIT(2)
364
+#define NPCM7XX_SMBCST_BB BIT(1)
365
+#define NPCM7XX_SMBCST_BUSY BIT(0)
366
+
367
+/* CST2 fields */
368
+#define NPCM7XX_SMBCST2_INTSTS BIT(7)
369
+#define NPCM7XX_SMBCST2_MATCH7F BIT(6)
370
+#define NPCM7XX_SMBCST2_MATCH6F BIT(5)
371
+#define NPCM7XX_SMBCST2_MATCH5F BIT(4)
372
+#define NPCM7XX_SMBCST2_MATCH4F BIT(3)
373
+#define NPCM7XX_SMBCST2_MATCH3F BIT(2)
374
+#define NPCM7XX_SMBCST2_MATCH2F BIT(1)
375
+#define NPCM7XX_SMBCST2_MATCH1F BIT(0)
376
+
377
+/* CST3 fields */
378
+#define NPCM7XX_SMBCST3_EO_BUSY BIT(7)
379
+#define NPCM7XX_SMBCST3_MATCH10F BIT(2)
380
+#define NPCM7XX_SMBCST3_MATCH9F BIT(1)
381
+#define NPCM7XX_SMBCST3_MATCH8F BIT(0)
382
+
383
+/* CTL1 fields */
384
+#define NPCM7XX_SMBCTL1_STASTRE BIT(7)
385
+#define NPCM7XX_SMBCTL1_NMINTE BIT(6)
386
+#define NPCM7XX_SMBCTL1_GCMEN BIT(5)
387
+#define NPCM7XX_SMBCTL1_ACK BIT(4)
388
+#define NPCM7XX_SMBCTL1_EOBINTE BIT(3)
389
+#define NPCM7XX_SMBCTL1_INTEN BIT(2)
390
+#define NPCM7XX_SMBCTL1_STOP BIT(1)
391
+#define NPCM7XX_SMBCTL1_START BIT(0)
392
+
393
+/* CTL2 fields */
394
+#define NPCM7XX_SMBCTL2_SCLFRQ(rv) extract8((rv), 1, 6)
395
+#define NPCM7XX_SMBCTL2_ENABLE BIT(0)
396
+
397
+/* CTL3 fields */
398
+#define NPCM7XX_SMBCTL3_SCL_LVL BIT(7)
399
+#define NPCM7XX_SMBCTL3_SDA_LVL BIT(6)
400
+#define NPCM7XX_SMBCTL3_BNK_SEL BIT(5)
401
+#define NPCM7XX_SMBCTL3_400K_MODE BIT(4)
402
+#define NPCM7XX_SMBCTL3_IDL_START BIT(3)
403
+#define NPCM7XX_SMBCTL3_ARPMEN BIT(2)
404
+#define NPCM7XX_SMBCTL3_SCLFRQ(rv) extract8((rv), 0, 2)
405
+
406
+/* ADDR fields */
407
+#define NPCM7XX_ADDR_EN BIT(7)
408
+#define NPCM7XX_ADDR_A(rv) extract8((rv), 0, 6)
409
+
410
+#define KEEP_OLD_BIT(o, n, b) (((n) & (~(b))) | ((o) & (b)))
411
+#define WRITE_ONE_CLEAR(o, n, b) ((n) & (b) ? (o) & (~(b)) : (o))
412
+
413
+#define NPCM7XX_SMBUS_ENABLED(s) ((s)->ctl2 & NPCM7XX_SMBCTL2_ENABLE)
414
+
415
+/* VERSION fields values, read-only. */
416
+#define NPCM7XX_SMBUS_VERSION_NUMBER 1
417
+#define NPCM7XX_SMBUS_VERSION_FIFO_SUPPORTED 0
418
+
419
+/* Reset values */
420
+#define NPCM7XX_SMB_ST_INIT_VAL 0x00
421
+#define NPCM7XX_SMB_CST_INIT_VAL 0x10
422
+#define NPCM7XX_SMB_CST2_INIT_VAL 0x00
423
+#define NPCM7XX_SMB_CST3_INIT_VAL 0x00
424
+#define NPCM7XX_SMB_CTL1_INIT_VAL 0x00
425
+#define NPCM7XX_SMB_CTL2_INIT_VAL 0x00
426
+#define NPCM7XX_SMB_CTL3_INIT_VAL 0xc0
427
+#define NPCM7XX_SMB_CTL4_INIT_VAL 0x07
428
+#define NPCM7XX_SMB_CTL5_INIT_VAL 0x00
429
+#define NPCM7XX_SMB_ADDR_INIT_VAL 0x00
430
+#define NPCM7XX_SMB_SCLLT_INIT_VAL 0x00
431
+#define NPCM7XX_SMB_SCLHT_INIT_VAL 0x00
432
+
433
+static uint8_t npcm7xx_smbus_get_version(void)
434
+{
90
+{
435
+ return NPCM7XX_SMBUS_VERSION_FIFO_SUPPORTED << 7 |
91
+ return do_uqrshl_bhs(n, (int8_t)shift, 32, true, &env->QF);
436
+ NPCM7XX_SMBUS_VERSION_NUMBER;
437
+}
92
+}
438
+
93
+
439
+static void npcm7xx_smbus_update_irq(NPCM7xxSMBusState *s)
94
+uint32_t HELPER(mve_sqrshr)(CPUARMState *env, uint32_t n, uint32_t shift)
440
+{
95
+{
441
+ int level;
96
+ return do_sqrshl_bhs(n, -(int8_t)shift, 32, true, &env->QF);
97
+}
98
diff --git a/target/arm/translate.c b/target/arm/translate.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/arm/translate.c
101
+++ b/target/arm/translate.c
102
@@ -XXX,XX +XXX,XX @@ static bool trans_UQSHL_ri(DisasContext *s, arg_mve_sh_ri *a)
103
return do_mve_sh_ri(s, a, gen_mve_uqshl);
104
}
105
106
+static bool do_mve_sh_rr(DisasContext *s, arg_mve_sh_rr *a, ShiftFn *fn)
107
+{
108
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
109
+ /* Decode falls through to ORR/MOV UNPREDICTABLE handling */
110
+ return false;
111
+ }
112
+ if (!dc_isar_feature(aa32_mve, s) ||
113
+ !arm_dc_feature(s, ARM_FEATURE_M_MAIN) ||
114
+ a->rda == 13 || a->rda == 15 || a->rm == 13 || a->rm == 15 ||
115
+ a->rm == a->rda) {
116
+ /* These rda/rm cases are UNPREDICTABLE; we choose to UNDEF */
117
+ unallocated_encoding(s);
118
+ return true;
119
+ }
442
+
120
+
443
+ if (s->ctl1 & NPCM7XX_SMBCTL1_INTEN) {
121
+ /* The helper takes care of the sign-extension of the low 8 bits of Rm */
444
+ level = !!((s->ctl1 & NPCM7XX_SMBCTL1_NMINTE &&
122
+ fn(cpu_R[a->rda], cpu_env, cpu_R[a->rda], cpu_R[a->rm]);
445
+ s->st & NPCM7XX_SMBST_NMATCH) ||
123
+ return true;
446
+ (s->st & NPCM7XX_SMBST_BER) ||
447
+ (s->st & NPCM7XX_SMBST_NEGACK) ||
448
+ (s->st & NPCM7XX_SMBST_SDAST) ||
449
+ (s->ctl1 & NPCM7XX_SMBCTL1_STASTRE &&
450
+ s->st & NPCM7XX_SMBST_SDAST) ||
451
+ (s->ctl1 & NPCM7XX_SMBCTL1_EOBINTE &&
452
+ s->cst3 & NPCM7XX_SMBCST3_EO_BUSY));
453
+
454
+ if (level) {
455
+ s->cst2 |= NPCM7XX_SMBCST2_INTSTS;
456
+ } else {
457
+ s->cst2 &= ~NPCM7XX_SMBCST2_INTSTS;
458
+ }
459
+ qemu_set_irq(s->irq, level);
460
+ }
461
+}
124
+}
462
+
125
+
463
+static void npcm7xx_smbus_nack(NPCM7xxSMBusState *s)
126
+static bool trans_SQRSHR_rr(DisasContext *s, arg_mve_sh_rr *a)
464
+{
127
+{
465
+ s->st &= ~NPCM7XX_SMBST_SDAST;
128
+ return do_mve_sh_rr(s, a, gen_helper_mve_sqrshr);
466
+ s->st |= NPCM7XX_SMBST_NEGACK;
467
+ s->status = NPCM7XX_SMBUS_STATUS_NEGACK;
468
+}
129
+}
469
+
130
+
470
+static void npcm7xx_smbus_send_byte(NPCM7xxSMBusState *s, uint8_t value)
131
+static bool trans_UQRSHL_rr(DisasContext *s, arg_mve_sh_rr *a)
471
+{
132
+{
472
+ int rv = i2c_send(s->bus, value);
133
+ return do_mve_sh_rr(s, a, gen_helper_mve_uqrshl);
473
+
474
+ if (rv) {
475
+ npcm7xx_smbus_nack(s);
476
+ } else {
477
+ s->st |= NPCM7XX_SMBST_SDAST;
478
+ }
479
+ trace_npcm7xx_smbus_send_byte((DEVICE(s)->canonical_path), value, !rv);
480
+ npcm7xx_smbus_update_irq(s);
481
+}
134
+}
482
+
135
+
483
+static void npcm7xx_smbus_recv_byte(NPCM7xxSMBusState *s)
136
/*
484
+{
137
* Multiply and multiply accumulate
485
+ s->sda = i2c_recv(s->bus);
138
*/
486
+ s->st |= NPCM7XX_SMBST_SDAST;
487
+ if (s->st & NPCM7XX_SMBCTL1_ACK) {
488
+ trace_npcm7xx_smbus_nack(DEVICE(s)->canonical_path);
489
+ i2c_nack(s->bus);
490
+ s->st &= NPCM7XX_SMBCTL1_ACK;
491
+ }
492
+ trace_npcm7xx_smbus_recv_byte((DEVICE(s)->canonical_path), s->sda);
493
+ npcm7xx_smbus_update_irq(s);
494
+}
495
+
496
+static void npcm7xx_smbus_start(NPCM7xxSMBusState *s)
497
+{
498
+ /*
499
+ * We can start the bus if one of these is true:
500
+ * 1. The bus is idle (so we can request it)
501
+ * 2. We are the occupier (it's a repeated start condition.)
502
+ */
503
+ int available = !i2c_bus_busy(s->bus) ||
504
+ s->status != NPCM7XX_SMBUS_STATUS_IDLE;
505
+
506
+ if (available) {
507
+ s->st |= NPCM7XX_SMBST_MODE | NPCM7XX_SMBST_XMIT | NPCM7XX_SMBST_SDAST;
508
+ s->cst |= NPCM7XX_SMBCST_BUSY;
509
+ } else {
510
+ s->st &= ~NPCM7XX_SMBST_MODE;
511
+ s->cst &= ~NPCM7XX_SMBCST_BUSY;
512
+ s->st |= NPCM7XX_SMBST_BER;
513
+ }
514
+
515
+ trace_npcm7xx_smbus_start(DEVICE(s)->canonical_path, available);
516
+ s->cst |= NPCM7XX_SMBCST_BB;
517
+ s->status = NPCM7XX_SMBUS_STATUS_IDLE;
518
+ npcm7xx_smbus_update_irq(s);
519
+}
520
+
521
+static void npcm7xx_smbus_send_address(NPCM7xxSMBusState *s, uint8_t value)
522
+{
523
+ int recv;
524
+ int rv;
525
+
526
+ recv = value & BIT(0);
527
+ rv = i2c_start_transfer(s->bus, value >> 1, recv);
528
+ trace_npcm7xx_smbus_send_address(DEVICE(s)->canonical_path,
529
+ value >> 1, recv, !rv);
530
+ if (rv) {
531
+ qemu_log_mask(LOG_GUEST_ERROR,
532
+ "%s: requesting i2c bus for 0x%02x failed: %d\n",
533
+ DEVICE(s)->canonical_path, value, rv);
534
+ /* Failed to start transfer. NACK to reject.*/
535
+ if (recv) {
536
+ s->st &= ~NPCM7XX_SMBST_XMIT;
537
+ } else {
538
+ s->st |= NPCM7XX_SMBST_XMIT;
539
+ }
540
+ npcm7xx_smbus_nack(s);
541
+ npcm7xx_smbus_update_irq(s);
542
+ return;
543
+ }
544
+
545
+ s->st &= ~NPCM7XX_SMBST_NEGACK;
546
+ if (recv) {
547
+ s->status = NPCM7XX_SMBUS_STATUS_RECEIVING;
548
+ s->st &= ~NPCM7XX_SMBST_XMIT;
549
+ } else {
550
+ s->status = NPCM7XX_SMBUS_STATUS_SENDING;
551
+ s->st |= NPCM7XX_SMBST_XMIT;
552
+ }
553
+
554
+ if (s->ctl1 & NPCM7XX_SMBCTL1_STASTRE) {
555
+ s->st |= NPCM7XX_SMBST_STASTR;
556
+ if (!recv) {
557
+ s->st |= NPCM7XX_SMBST_SDAST;
558
+ }
559
+ } else if (recv) {
560
+ npcm7xx_smbus_recv_byte(s);
561
+ }
562
+ npcm7xx_smbus_update_irq(s);
563
+}
564
+
565
+static void npcm7xx_smbus_execute_stop(NPCM7xxSMBusState *s)
566
+{
567
+ i2c_end_transfer(s->bus);
568
+ s->st = 0;
569
+ s->cst = 0;
570
+ s->status = NPCM7XX_SMBUS_STATUS_IDLE;
571
+ s->cst3 |= NPCM7XX_SMBCST3_EO_BUSY;
572
+ trace_npcm7xx_smbus_stop(DEVICE(s)->canonical_path);
573
+ npcm7xx_smbus_update_irq(s);
574
+}
575
+
576
+
577
+static void npcm7xx_smbus_stop(NPCM7xxSMBusState *s)
578
+{
579
+ if (s->st & NPCM7XX_SMBST_MODE) {
580
+ switch (s->status) {
581
+ case NPCM7XX_SMBUS_STATUS_RECEIVING:
582
+ case NPCM7XX_SMBUS_STATUS_STOPPING_LAST_RECEIVE:
583
+ s->status = NPCM7XX_SMBUS_STATUS_STOPPING_LAST_RECEIVE;
584
+ break;
585
+
586
+ case NPCM7XX_SMBUS_STATUS_NEGACK:
587
+ s->status = NPCM7XX_SMBUS_STATUS_STOPPING_NEGACK;
588
+ break;
589
+
590
+ default:
591
+ npcm7xx_smbus_execute_stop(s);
592
+ break;
593
+ }
594
+ }
595
+}
596
+
597
+static uint8_t npcm7xx_smbus_read_sda(NPCM7xxSMBusState *s)
598
+{
599
+ uint8_t value = s->sda;
600
+
601
+ switch (s->status) {
602
+ case NPCM7XX_SMBUS_STATUS_STOPPING_LAST_RECEIVE:
603
+ npcm7xx_smbus_execute_stop(s);
604
+ break;
605
+
606
+ case NPCM7XX_SMBUS_STATUS_RECEIVING:
607
+ npcm7xx_smbus_recv_byte(s);
608
+ break;
609
+
610
+ default:
611
+ /* Do nothing */
612
+ break;
613
+ }
614
+
615
+ return value;
616
+}
617
+
618
+static void npcm7xx_smbus_write_sda(NPCM7xxSMBusState *s, uint8_t value)
619
+{
620
+ s->sda = value;
621
+ if (s->st & NPCM7XX_SMBST_MODE) {
622
+ switch (s->status) {
623
+ case NPCM7XX_SMBUS_STATUS_IDLE:
624
+ npcm7xx_smbus_send_address(s, value);
625
+ break;
626
+ case NPCM7XX_SMBUS_STATUS_SENDING:
627
+ npcm7xx_smbus_send_byte(s, value);
628
+ break;
629
+ default:
630
+ qemu_log_mask(LOG_GUEST_ERROR,
631
+ "%s: write to SDA in invalid status %d: %u\n",
632
+ DEVICE(s)->canonical_path, s->status, value);
633
+ break;
634
+ }
635
+ }
636
+}
637
+
638
+static void npcm7xx_smbus_write_st(NPCM7xxSMBusState *s, uint8_t value)
639
+{
640
+ s->st = WRITE_ONE_CLEAR(s->st, value, NPCM7XX_SMBST_STP);
641
+ s->st = WRITE_ONE_CLEAR(s->st, value, NPCM7XX_SMBST_BER);
642
+ s->st = WRITE_ONE_CLEAR(s->st, value, NPCM7XX_SMBST_STASTR);
643
+ s->st = WRITE_ONE_CLEAR(s->st, value, NPCM7XX_SMBST_NMATCH);
644
+
645
+ if (value & NPCM7XX_SMBST_NEGACK) {
646
+ s->st &= ~NPCM7XX_SMBST_NEGACK;
647
+ if (s->status == NPCM7XX_SMBUS_STATUS_STOPPING_NEGACK) {
648
+ npcm7xx_smbus_execute_stop(s);
649
+ }
650
+ }
651
+
652
+ if (value & NPCM7XX_SMBST_STASTR &&
653
+ s->status == NPCM7XX_SMBUS_STATUS_RECEIVING) {
654
+ npcm7xx_smbus_recv_byte(s);
655
+ }
656
+
657
+ npcm7xx_smbus_update_irq(s);
658
+}
659
+
660
+static void npcm7xx_smbus_write_cst(NPCM7xxSMBusState *s, uint8_t value)
661
+{
662
+ uint8_t new_value = s->cst;
663
+
664
+ s->cst = WRITE_ONE_CLEAR(new_value, value, NPCM7XX_SMBCST_BB);
665
+ npcm7xx_smbus_update_irq(s);
666
+}
667
+
668
+static void npcm7xx_smbus_write_cst3(NPCM7xxSMBusState *s, uint8_t value)
669
+{
670
+ s->cst3 = WRITE_ONE_CLEAR(s->cst3, value, NPCM7XX_SMBCST3_EO_BUSY);
671
+ npcm7xx_smbus_update_irq(s);
672
+}
673
+
674
+static void npcm7xx_smbus_write_ctl1(NPCM7xxSMBusState *s, uint8_t value)
675
+{
676
+ s->ctl1 = KEEP_OLD_BIT(s->ctl1, value,
677
+ NPCM7XX_SMBCTL1_START | NPCM7XX_SMBCTL1_STOP | NPCM7XX_SMBCTL1_ACK);
678
+
679
+ if (value & NPCM7XX_SMBCTL1_START) {
680
+ npcm7xx_smbus_start(s);
681
+ }
682
+
683
+ if (value & NPCM7XX_SMBCTL1_STOP) {
684
+ npcm7xx_smbus_stop(s);
685
+ }
686
+
687
+ npcm7xx_smbus_update_irq(s);
688
+}
689
+
690
+static void npcm7xx_smbus_write_ctl2(NPCM7xxSMBusState *s, uint8_t value)
691
+{
692
+ s->ctl2 = value;
693
+
694
+ if (!NPCM7XX_SMBUS_ENABLED(s)) {
695
+ /* Disable this SMBus module. */
696
+ s->ctl1 = 0;
697
+ s->st = 0;
698
+ s->cst3 = s->cst3 & (~NPCM7XX_SMBCST3_EO_BUSY);
699
+ s->cst = 0;
700
+ }
701
+}
702
+
703
+static void npcm7xx_smbus_write_ctl3(NPCM7xxSMBusState *s, uint8_t value)
704
+{
705
+ uint8_t old_ctl3 = s->ctl3;
706
+
707
+ /* Write to SDA and SCL bits are ignored. */
708
+ s->ctl3 = KEEP_OLD_BIT(old_ctl3, value,
709
+ NPCM7XX_SMBCTL3_SCL_LVL | NPCM7XX_SMBCTL3_SDA_LVL);
710
+}
711
+
712
+static uint64_t npcm7xx_smbus_read(void *opaque, hwaddr offset, unsigned size)
713
+{
714
+ NPCM7xxSMBusState *s = opaque;
715
+ uint64_t value = 0;
716
+ uint8_t bank = s->ctl3 & NPCM7XX_SMBCTL3_BNK_SEL;
717
+
718
+ /* The order of the registers are their order in memory. */
719
+ switch (offset) {
720
+ case NPCM7XX_SMB_SDA:
721
+ value = npcm7xx_smbus_read_sda(s);
722
+ break;
723
+
724
+ case NPCM7XX_SMB_ST:
725
+ value = s->st;
726
+ break;
727
+
728
+ case NPCM7XX_SMB_CST:
729
+ value = s->cst;
730
+ break;
731
+
732
+ case NPCM7XX_SMB_CTL1:
733
+ value = s->ctl1;
734
+ break;
735
+
736
+ case NPCM7XX_SMB_ADDR1:
737
+ value = s->addr[0];
738
+ break;
739
+
740
+ case NPCM7XX_SMB_CTL2:
741
+ value = s->ctl2;
742
+ break;
743
+
744
+ case NPCM7XX_SMB_ADDR2:
745
+ value = s->addr[1];
746
+ break;
747
+
748
+ case NPCM7XX_SMB_CTL3:
749
+ value = s->ctl3;
750
+ break;
751
+
752
+ case NPCM7XX_SMB_CST2:
753
+ value = s->cst2;
754
+ break;
755
+
756
+ case NPCM7XX_SMB_CST3:
757
+ value = s->cst3;
758
+ break;
759
+
760
+ case NPCM7XX_SMB_VER:
761
+ value = npcm7xx_smbus_get_version();
762
+ break;
763
+
764
+ /* This register is either invalid or banked at this point. */
765
+ default:
766
+ if (bank) {
767
+ /* Bank 1 */
768
+ qemu_log_mask(LOG_GUEST_ERROR,
769
+ "%s: read from invalid offset 0x%" HWADDR_PRIx "\n",
770
+ DEVICE(s)->canonical_path, offset);
771
+ } else {
772
+ /* Bank 0 */
773
+ switch (offset) {
774
+ case NPCM7XX_SMB_ADDR3:
775
+ value = s->addr[2];
776
+ break;
777
+
778
+ case NPCM7XX_SMB_ADDR7:
779
+ value = s->addr[6];
780
+ break;
781
+
782
+ case NPCM7XX_SMB_ADDR4:
783
+ value = s->addr[3];
784
+ break;
785
+
786
+ case NPCM7XX_SMB_ADDR8:
787
+ value = s->addr[7];
788
+ break;
789
+
790
+ case NPCM7XX_SMB_ADDR5:
791
+ value = s->addr[4];
792
+ break;
793
+
794
+ case NPCM7XX_SMB_ADDR9:
795
+ value = s->addr[8];
796
+ break;
797
+
798
+ case NPCM7XX_SMB_ADDR6:
799
+ value = s->addr[5];
800
+ break;
801
+
802
+ case NPCM7XX_SMB_ADDR10:
803
+ value = s->addr[9];
804
+ break;
805
+
806
+ case NPCM7XX_SMB_CTL4:
807
+ value = s->ctl4;
808
+ break;
809
+
810
+ case NPCM7XX_SMB_CTL5:
811
+ value = s->ctl5;
812
+ break;
813
+
814
+ case NPCM7XX_SMB_SCLLT:
815
+ value = s->scllt;
816
+ break;
817
+
818
+ case NPCM7XX_SMB_SCLHT:
819
+ value = s->sclht;
820
+ break;
821
+
822
+ default:
823
+ qemu_log_mask(LOG_GUEST_ERROR,
824
+ "%s: read from invalid offset 0x%" HWADDR_PRIx "\n",
825
+ DEVICE(s)->canonical_path, offset);
826
+ break;
827
+ }
828
+ }
829
+ break;
830
+ }
831
+
832
+ trace_npcm7xx_smbus_read(DEVICE(s)->canonical_path, offset, value, size);
833
+
834
+ return value;
835
+}
836
+
837
+static void npcm7xx_smbus_write(void *opaque, hwaddr offset, uint64_t value,
838
+ unsigned size)
839
+{
840
+ NPCM7xxSMBusState *s = opaque;
841
+ uint8_t bank = s->ctl3 & NPCM7XX_SMBCTL3_BNK_SEL;
842
+
843
+ trace_npcm7xx_smbus_write(DEVICE(s)->canonical_path, offset, value, size);
844
+
845
+ /* The order of the registers are their order in memory. */
846
+ switch (offset) {
847
+ case NPCM7XX_SMB_SDA:
848
+ npcm7xx_smbus_write_sda(s, value);
849
+ break;
850
+
851
+ case NPCM7XX_SMB_ST:
852
+ npcm7xx_smbus_write_st(s, value);
853
+ break;
854
+
855
+ case NPCM7XX_SMB_CST:
856
+ npcm7xx_smbus_write_cst(s, value);
857
+ break;
858
+
859
+ case NPCM7XX_SMB_CTL1:
860
+ npcm7xx_smbus_write_ctl1(s, value);
861
+ break;
862
+
863
+ case NPCM7XX_SMB_ADDR1:
864
+ s->addr[0] = value;
865
+ break;
866
+
867
+ case NPCM7XX_SMB_CTL2:
868
+ npcm7xx_smbus_write_ctl2(s, value);
869
+ break;
870
+
871
+ case NPCM7XX_SMB_ADDR2:
872
+ s->addr[1] = value;
873
+ break;
874
+
875
+ case NPCM7XX_SMB_CTL3:
876
+ npcm7xx_smbus_write_ctl3(s, value);
877
+ break;
878
+
879
+ case NPCM7XX_SMB_CST2:
880
+ qemu_log_mask(LOG_GUEST_ERROR,
881
+ "%s: write to read-only reg: offset 0x%" HWADDR_PRIx "\n",
882
+ DEVICE(s)->canonical_path, offset);
883
+ break;
884
+
885
+ case NPCM7XX_SMB_CST3:
886
+ npcm7xx_smbus_write_cst3(s, value);
887
+ break;
888
+
889
+ case NPCM7XX_SMB_VER:
890
+ qemu_log_mask(LOG_GUEST_ERROR,
891
+ "%s: write to read-only reg: offset 0x%" HWADDR_PRIx "\n",
892
+ DEVICE(s)->canonical_path, offset);
893
+ break;
894
+
895
+ /* This register is either invalid or banked at this point. */
896
+ default:
897
+ if (bank) {
898
+ /* Bank 1 */
899
+ qemu_log_mask(LOG_GUEST_ERROR,
900
+ "%s: write to invalid offset 0x%" HWADDR_PRIx "\n",
901
+ DEVICE(s)->canonical_path, offset);
902
+ } else {
903
+ /* Bank 0 */
904
+ switch (offset) {
905
+ case NPCM7XX_SMB_ADDR3:
906
+ s->addr[2] = value;
907
+ break;
908
+
909
+ case NPCM7XX_SMB_ADDR7:
910
+ s->addr[6] = value;
911
+ break;
912
+
913
+ case NPCM7XX_SMB_ADDR4:
914
+ s->addr[3] = value;
915
+ break;
916
+
917
+ case NPCM7XX_SMB_ADDR8:
918
+ s->addr[7] = value;
919
+ break;
920
+
921
+ case NPCM7XX_SMB_ADDR5:
922
+ s->addr[4] = value;
923
+ break;
924
+
925
+ case NPCM7XX_SMB_ADDR9:
926
+ s->addr[8] = value;
927
+ break;
928
+
929
+ case NPCM7XX_SMB_ADDR6:
930
+ s->addr[5] = value;
931
+ break;
932
+
933
+ case NPCM7XX_SMB_ADDR10:
934
+ s->addr[9] = value;
935
+ break;
936
+
937
+ case NPCM7XX_SMB_CTL4:
938
+ s->ctl4 = value;
939
+ break;
940
+
941
+ case NPCM7XX_SMB_CTL5:
942
+ s->ctl5 = value;
943
+ break;
944
+
945
+ case NPCM7XX_SMB_SCLLT:
946
+ s->scllt = value;
947
+ break;
948
+
949
+ case NPCM7XX_SMB_SCLHT:
950
+ s->sclht = value;
951
+ break;
952
+
953
+ default:
954
+ qemu_log_mask(LOG_GUEST_ERROR,
955
+ "%s: write to invalid offset 0x%" HWADDR_PRIx "\n",
956
+ DEVICE(s)->canonical_path, offset);
957
+ break;
958
+ }
959
+ }
960
+ break;
961
+ }
962
+}
963
+
964
+static const MemoryRegionOps npcm7xx_smbus_ops = {
965
+ .read = npcm7xx_smbus_read,
966
+ .write = npcm7xx_smbus_write,
967
+ .endianness = DEVICE_LITTLE_ENDIAN,
968
+ .valid = {
969
+ .min_access_size = 1,
970
+ .max_access_size = 1,
971
+ .unaligned = false,
972
+ },
973
+};
974
+
975
+static void npcm7xx_smbus_enter_reset(Object *obj, ResetType type)
976
+{
977
+ NPCM7xxSMBusState *s = NPCM7XX_SMBUS(obj);
978
+
979
+ s->st = NPCM7XX_SMB_ST_INIT_VAL;
980
+ s->cst = NPCM7XX_SMB_CST_INIT_VAL;
981
+ s->cst2 = NPCM7XX_SMB_CST2_INIT_VAL;
982
+ s->cst3 = NPCM7XX_SMB_CST3_INIT_VAL;
983
+ s->ctl1 = NPCM7XX_SMB_CTL1_INIT_VAL;
984
+ s->ctl2 = NPCM7XX_SMB_CTL2_INIT_VAL;
985
+ s->ctl3 = NPCM7XX_SMB_CTL3_INIT_VAL;
986
+ s->ctl4 = NPCM7XX_SMB_CTL4_INIT_VAL;
987
+ s->ctl5 = NPCM7XX_SMB_CTL5_INIT_VAL;
988
+
989
+ for (int i = 0; i < NPCM7XX_SMBUS_NR_ADDRS; ++i) {
990
+ s->addr[i] = NPCM7XX_SMB_ADDR_INIT_VAL;
991
+ }
992
+ s->scllt = NPCM7XX_SMB_SCLLT_INIT_VAL;
993
+ s->sclht = NPCM7XX_SMB_SCLHT_INIT_VAL;
994
+
995
+ s->status = NPCM7XX_SMBUS_STATUS_IDLE;
996
+}
997
+
998
+static void npcm7xx_smbus_hold_reset(Object *obj)
999
+{
1000
+ NPCM7xxSMBusState *s = NPCM7XX_SMBUS(obj);
1001
+
1002
+ qemu_irq_lower(s->irq);
1003
+}
1004
+
1005
+static void npcm7xx_smbus_init(Object *obj)
1006
+{
1007
+ NPCM7xxSMBusState *s = NPCM7XX_SMBUS(obj);
1008
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1009
+
1010
+ sysbus_init_irq(sbd, &s->irq);
1011
+ memory_region_init_io(&s->iomem, obj, &npcm7xx_smbus_ops, s,
1012
+ "regs", 4 * KiB);
1013
+ sysbus_init_mmio(sbd, &s->iomem);
1014
+
1015
+ s->bus = i2c_init_bus(DEVICE(s), "i2c-bus");
1016
+ s->status = NPCM7XX_SMBUS_STATUS_IDLE;
1017
+}
1018
+
1019
+static const VMStateDescription vmstate_npcm7xx_smbus = {
1020
+ .name = "npcm7xx-smbus",
1021
+ .version_id = 0,
1022
+ .minimum_version_id = 0,
1023
+ .fields = (VMStateField[]) {
1024
+ VMSTATE_UINT8(sda, NPCM7xxSMBusState),
1025
+ VMSTATE_UINT8(st, NPCM7xxSMBusState),
1026
+ VMSTATE_UINT8(cst, NPCM7xxSMBusState),
1027
+ VMSTATE_UINT8(cst2, NPCM7xxSMBusState),
1028
+ VMSTATE_UINT8(cst3, NPCM7xxSMBusState),
1029
+ VMSTATE_UINT8(ctl1, NPCM7xxSMBusState),
1030
+ VMSTATE_UINT8(ctl2, NPCM7xxSMBusState),
1031
+ VMSTATE_UINT8(ctl3, NPCM7xxSMBusState),
1032
+ VMSTATE_UINT8(ctl4, NPCM7xxSMBusState),
1033
+ VMSTATE_UINT8(ctl5, NPCM7xxSMBusState),
1034
+ VMSTATE_UINT8_ARRAY(addr, NPCM7xxSMBusState, NPCM7XX_SMBUS_NR_ADDRS),
1035
+ VMSTATE_UINT8(scllt, NPCM7xxSMBusState),
1036
+ VMSTATE_UINT8(sclht, NPCM7xxSMBusState),
1037
+ VMSTATE_END_OF_LIST(),
1038
+ },
1039
+};
1040
+
1041
+static void npcm7xx_smbus_class_init(ObjectClass *klass, void *data)
1042
+{
1043
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1044
+ DeviceClass *dc = DEVICE_CLASS(klass);
1045
+
1046
+ dc->desc = "NPCM7xx System Management Bus";
1047
+ dc->vmsd = &vmstate_npcm7xx_smbus;
1048
+ rc->phases.enter = npcm7xx_smbus_enter_reset;
1049
+ rc->phases.hold = npcm7xx_smbus_hold_reset;
1050
+}
1051
+
1052
+static const TypeInfo npcm7xx_smbus_types[] = {
1053
+ {
1054
+ .name = TYPE_NPCM7XX_SMBUS,
1055
+ .parent = TYPE_SYS_BUS_DEVICE,
1056
+ .instance_size = sizeof(NPCM7xxSMBusState),
1057
+ .class_init = npcm7xx_smbus_class_init,
1058
+ .instance_init = npcm7xx_smbus_init,
1059
+ },
1060
+};
1061
+DEFINE_TYPES(npcm7xx_smbus_types);
1062
diff --git a/hw/i2c/meson.build b/hw/i2c/meson.build
1063
index XXXXXXX..XXXXXXX 100644
1064
--- a/hw/i2c/meson.build
1065
+++ b/hw/i2c/meson.build
1066
@@ -XXX,XX +XXX,XX @@ i2c_ss.add(when: 'CONFIG_EXYNOS4', if_true: files('exynos4210_i2c.c'))
1067
i2c_ss.add(when: 'CONFIG_IMX_I2C', if_true: files('imx_i2c.c'))
1068
i2c_ss.add(when: 'CONFIG_MPC_I2C', if_true: files('mpc_i2c.c'))
1069
i2c_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('microbit_i2c.c'))
1070
+i2c_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_smbus.c'))
1071
i2c_ss.add(when: 'CONFIG_SMBUS_EEPROM', if_true: files('smbus_eeprom.c'))
1072
i2c_ss.add(when: 'CONFIG_VERSATILE_I2C', if_true: files('versatile_i2c.c'))
1073
i2c_ss.add(when: 'CONFIG_OMAP', if_true: files('omap_i2c.c'))
1074
diff --git a/hw/i2c/trace-events b/hw/i2c/trace-events
1075
index XXXXXXX..XXXXXXX 100644
1076
--- a/hw/i2c/trace-events
1077
+++ b/hw/i2c/trace-events
1078
@@ -XXX,XX +XXX,XX @@ aspeed_i2c_bus_read(uint32_t busid, uint64_t offset, unsigned size, uint64_t val
1079
aspeed_i2c_bus_write(uint32_t busid, uint64_t offset, unsigned size, uint64_t value) "bus[%d]: To 0x%" PRIx64 " of size %u: 0x%" PRIx64
1080
aspeed_i2c_bus_send(const char *mode, int i, int count, uint8_t byte) "%s send %d/%d 0x%02x"
1081
aspeed_i2c_bus_recv(const char *mode, int i, int count, uint8_t byte) "%s recv %d/%d 0x%02x"
1082
+
1083
+# npcm7xx_smbus.c
1084
+
1085
+npcm7xx_smbus_read(const char *id, uint64_t offset, uint64_t value, unsigned size) "%s offset: 0x%04" PRIx64 " value: 0x%02" PRIx64 " size: %u"
1086
+npcm7xx_smbus_write(const char *id, uint64_t offset, uint64_t value, unsigned size) "%s offset: 0x%04" PRIx64 " value: 0x%02" PRIx64 " size: %u"
1087
+npcm7xx_smbus_start(const char *id, int success) "%s starting, success: %d"
1088
+npcm7xx_smbus_send_address(const char *id, uint8_t addr, int recv, int success) "%s sending address: 0x%02x, recv: %d, success: %d"
1089
+npcm7xx_smbus_send_byte(const char *id, uint8_t value, int success) "%s send byte: 0x%02x, success: %d"
1090
+npcm7xx_smbus_recv_byte(const char *id, uint8_t value) "%s recv byte: 0x%02x"
1091
+npcm7xx_smbus_stop(const char *id) "%s stopping"
1092
+npcm7xx_smbus_nack(const char *id) "%s nacking"
--
2.20.1
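As a footnote to the NPCM7xx SMBus model in the series above: its status registers follow the common write-one-to-clear convention, implemented by the WRITE_ONE_CLEAR() and KEEP_OLD_BIT() macros quoted in the patch. The short standalone sketch below (plain host C, with made-up example values) shows the effect of each macro; the bit positions for BER and NEGACK match the NPCM7XX_SMBST_* definitions in the patch.

    #include <assert.h>
    #include <stdint.h>

    #define BIT(n)                   (1u << (n))
    /* Copied from the patch: keep selected old bits / write-1-to-clear. */
    #define KEEP_OLD_BIT(o, n, b)    (((n) & (~(b))) | ((o) & (b)))
    #define WRITE_ONE_CLEAR(o, n, b) ((n) & (b) ? (o) & (~(b)) : (o))

    int main(void)
    {
        uint8_t st = BIT(5) | BIT(4);   /* BER and NEGACK both pending */

        /* Guest writes 0x20: only the bit written as 1 (BER) is cleared. */
        st = WRITE_ONE_CLEAR(st, 0x20, BIT(5));
        assert(st == BIT(4));

        /* Guest writes 0xff to a register whose bit 7 is read-only. */
        uint8_t ctl = KEEP_OLD_BIT(0x00 /* old */, 0xff /* new */, BIT(7));
        assert(ctl == 0x7f);
        return 0;
    }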