[Qemu-devel] [PATCH v4 12/15] cpu: TLB_FLAGS_MASK bit to force memory slow path

Posted by tony.nguyen@bt.com 6 years, 3 months ago

The fast path is taken only when none of the TLB_FLAGS_MASK bits are
set in the TLB entry's address.

TLB_FORCE_SLOW is a new TLB_FLAGS_MASK bit that forces the slow path
and has no other side effects.
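
For illustration (not part of the patch), a minimal standalone sketch
of how the mask gates the fast path; TARGET_PAGE_BITS = 12 and the
helper name use_fast_path() are assumptions chosen for the example:

#include <stdint.h>
#include <stdio.h>

/* Flag layout mirrors the patch; TARGET_PAGE_BITS = 12 is an assumed
 * value for illustration (real targets vary). */
#define TARGET_PAGE_BITS    12
#define TLB_INVALID_MASK    (1 << (TARGET_PAGE_BITS - 1))
#define TLB_NOTDIRTY        (1 << (TARGET_PAGE_BITS - 2))
#define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
#define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))

#define TLB_FLAGS_MASK \
    (TLB_INVALID_MASK  \
     | TLB_NOTDIRTY    \
     | TLB_MMIO        \
     | TLB_RECHECK     \
     | TLB_FORCE_SLOW)

/* The fast path is usable only when no flag bit is set in the TLB
 * entry's address word; TLB_FORCE_SLOW alone is enough to fail this
 * test and divert the access to the slow path. */
static int use_fast_path(uint32_t tlb_addr)
{
    return (tlb_addr & TLB_FLAGS_MASK) == 0;
}

int main(void)
{
    uint32_t entry = 0x1000;            /* page-aligned, no flags set */
    printf("clean entry: %s\n", use_fast_path(entry) ? "fast" : "slow");
    printf("forced slow: %s\n",
           use_fast_path(entry | TLB_FORCE_SLOW) ? "fast" : "slow");
    return 0;
}

Because all of these flags live in the low bits below TARGET_PAGE_BITS,
they never collide with the page-aligned address bits, so a single AND
against TLB_FLAGS_MASK is enough to decide the path.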

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/cpu-all.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 536ea58..e496f99 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -331,12 +331,18 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
 /* Set if TLB entry must have MMU lookup repeated for every access */
 #define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
+/* Set if TLB entry must take the slow path.  */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))

 /* Use this mask to check interception with an alignment mask
  * in a TCG backend.
  */
-#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-                         | TLB_RECHECK)
+#define TLB_FLAGS_MASK \
+    (TLB_INVALID_MASK  \
+     | TLB_NOTDIRTY    \
+     | TLB_MMIO        \
+     | TLB_RECHECK     \
+     | TLB_FORCE_SLOW)

 /**
  * tlb_hit_page: return true if page aligned @addr is a hit against the
--
1.8.3.1