Add an assert() check in tb_invalidate_phys_page_range__locked() to ensure that
both the start and last addresses are within the same target page. Note that
due to performance concerns the check is only enabled when QEMU is configured
with --enable-debug-tcg.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
---
 accel/tcg/tb-maint.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 33ea1aadd1..8cd730dcb0 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1092,6 +1092,10 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
     TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
 #endif /* TARGET_HAS_PRECISE_SMC */
 
+#ifdef CONFIG_DEBUG_TCG
+    assert((last & TARGET_PAGE_MASK) == (start & TARGET_PAGE_MASK));
+#endif
+
     /*
      * We remove all the TBs in the range [start, last].
      * XXX: see if in some cases it could be faster to invalidate all the code
--
2.30.2
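
The assert fires exactly when the inclusive range [start, last] crosses a
target-page boundary. A minimal worked example of the mask arithmetic, with
hypothetical values and assuming 4 KiB target pages (TARGET_PAGE_BITS == 12,
so TARGET_PAGE_MASK == ~0xfff):

    /* Hypothetical values, assuming 4 KiB target pages. */
    tb_page_addr_t start = 0x1ff8;   /* 0x1ff8 & ~0xfff == 0x1000 */
    tb_page_addr_t last  = 0x2007;   /* 0x2007 & ~0xfff == 0x2000 */
    /*
     * 0x1000 != 0x2000: the range spans two pages, so the new assert()
     * fires. A range such as start = 0x2000, last = 0x2fff masks to
     * 0x2000 on both sides and passes the check.
     */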
On 29/6/23 10:25, Mark Cave-Ayland wrote:
> Add an assert() check in tb_invalidate_phys_page_range__locked() to ensure that
> both the start and last addresses are within the same target page. Note that
> due to performance concerns the check is only enabled when QEMU is configured
> with --enable-debug-tcg.
>
> Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> ---
>  accel/tcg/tb-maint.c | 4 ++++
>  1 file changed, 4 insertions(+)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
29.06.2023 11:25, Mark Cave-Ayland wrote:
> Add an assert() check in tb_invalidate_phys_page_range__locked() to ensure that
> both the start and last addresses are within the same target page. Note that
> due to performance concerns the check is only enabled when QEMU is configured
> with --enable-debug-tcg.

Performance concerns? That's two ANDs and one compare - is it really that
performance-critical?

I'm just asking, I dunno.

Thanks,

/mjt
On Fri, 30 Jun 2023, Michael Tokarev wrote:
> 29.06.2023 11:25, Mark Cave-Ayland wrote:
>> Add an assert() check in tb_invalidate_phys_page_range__locked() to ensure that
>> both the start and last addresses are within the same target page. Note that
>> due to performance concerns the check is only enabled when QEMU is configured
>> with --enable-debug-tcg.
>
> Performance concerns? That's two ANDs and one compare - is it really that
> performance-critical?
>
> I'm just asking, I dunno.

If something is called frequently enough, any small computation can add up.
In this case invalidating pages is probably a performance hit already and
hopefully does not happen too often, but then it's a good idea not to make
it worse. As this is not something that should or could normally happen,
and it only checks for programming errors, I think it's a good idea to only
do it when debugging.

Regards,
BALATON Zoltan
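
For what it's worth, QEMU's TCG headers already encapsulate this
debug-build-only assertion pattern in a tcg_debug_assert() macro. A rough
sketch of its definition from include/tcg/tcg.h (the exact spelling may
differ between QEMU versions), followed by how the patch's check could be
written with it:

    /* Sketch of tcg_debug_assert() from include/tcg/tcg.h; details may
     * vary between versions. */
    #ifdef CONFIG_DEBUG_TCG
    # define tcg_debug_assert(X) do { assert(X); } while (0)
    #else
    # define tcg_debug_assert(X) \
        do { if (!(X)) { __builtin_unreachable(); } } while (0)
    #endif

    /* The patch's check as a single line: */
    tcg_debug_assert((last & TARGET_PAGE_MASK) == (start & TARGET_PAGE_MASK));

A side effect of the macro's non-debug arm is that the condition becomes an
optimization hint (__builtin_unreachable()) rather than simply being
dropped.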