[PATCH v3 for-6.0 1/2] tcg: Do not set guard pages on the rx portion of code_gen_buffer

Richard Henderson posted 2 patches 4 years, 10 months ago
Maintainers: Richard Henderson <richard.henderson@linaro.org>
[PATCH v3 for-6.0 1/2] tcg: Do not set guard pages on the rx portion of code_gen_buffer
Posted by Richard Henderson 4 years, 10 months ago
The rw portion of the buffer is the only one in which overruns
can be generated.  Allow the rx portion to be more completely
covered by huge pages.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tcg.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/tcg/tcg.c b/tcg/tcg.c
index de91bb6e9e..88c9e6f8a4 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -828,7 +828,6 @@ void tcg_region_init(void)
     size_t region_size;
     size_t n_regions;
     size_t i;
-    uintptr_t splitwx_diff;
 
     n_regions = tcg_n_regions();
 
@@ -858,8 +857,11 @@ void tcg_region_init(void)
     /* account for that last guard page */
     region.end -= page_size;
 
-    /* set guard pages */
-    splitwx_diff = tcg_splitwx_diff;
+    /*
+     * Set guard pages in the rw buffer, as that's the one into which
+     * buffer overruns could occur.  Do not set guard pages in the rx
+     * buffer -- let that one use hugepages throughout.
+     */
     for (i = 0; i < region.n; i++) {
         void *start, *end;
         int rc;
@@ -867,10 +869,6 @@ void tcg_region_init(void)
         tcg_region_bounds(i, &start, &end);
         rc = qemu_mprotect_none(end, page_size);
         g_assert(!rc);
-        if (splitwx_diff) {
-            rc = qemu_mprotect_none(end + splitwx_diff, page_size);
-            g_assert(!rc);
-        }
     }
 
     tcg_region_trees_init();
-- 
2.25.1
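
A guard page here is just a page at the end of each region whose access
is revoked, so that an overrun out of the rw buffer faults immediately
instead of spilling silently into the next region.  A minimal standalone
sketch of the idea, assuming a POSIX host and plain mmap/mprotect rather
than QEMU's qemu_mprotect_none() wrapper (which is expected to come down
to the same PROT_NONE change on such hosts):

    /*
     * Sketch only: reserve a buffer with one extra page at the end and
     * make that page inaccessible, so any write past the usable area
     * faults at once.
     */
    #include <assert.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t page_size = sysconf(_SC_PAGESIZE);
        size_t usable = 16 * page_size;

        /* Allocate the usable area plus one trailing guard page. */
        char *buf = mmap(NULL, usable + page_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(buf != MAP_FAILED);

        /* Revoke all access to the final page: the guard page. */
        int rc = mprotect(buf + usable, page_size, PROT_NONE);
        assert(rc == 0);

        buf[usable - 1] = 1;    /* last usable byte: fine */
        /* buf[usable] = 1; */  /* first guard byte: would SIGSEGV */

        printf("guard page at %p\n", (void *)(buf + usable));
        munmap(buf, usable + page_size);
        return 0;
    }

With split-wx the code is written only through the rw mapping, so the rx
alias cannot be the target of such an overrun; dropping its guard pages
leaves that mapping unfragmented and free to be backed by huge pages
throughout.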


Re: [PATCH v3 for-6.0 1/2] tcg: Do not set guard pages on the rx portion of code_gen_buffer
Posted by Roman Bolshakov 4 years, 10 months ago
On Sat, Mar 20, 2021 at 10:57:19AM -0600, Richard Henderson wrote:
> The rw portion of the buffer is the only one in which overruns
> can be generated.  Allow the rx portion to be more completely
> covered by huge pages.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  tcg/tcg.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index de91bb6e9e..88c9e6f8a4 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> @@ -828,7 +828,6 @@ void tcg_region_init(void)
>      size_t region_size;
>      size_t n_regions;
>      size_t i;
> -    uintptr_t splitwx_diff;
>  
>      n_regions = tcg_n_regions();
>  
> @@ -858,8 +857,11 @@ void tcg_region_init(void)
>      /* account for that last guard page */
>      region.end -= page_size;
>  
> -    /* set guard pages */
> -    splitwx_diff = tcg_splitwx_diff;
> +    /*
> +     * Set guard pages in the rw buffer, as that's the one into which
> +     * buffer overruns could occur.  Do not set guard pages in the rx
> +     * buffer -- let that one use hugepages throughout.
> +     */
>      for (i = 0; i < region.n; i++) {
>          void *start, *end;
>          int rc;
> @@ -867,10 +869,6 @@ void tcg_region_init(void)
>          tcg_region_bounds(i, &start, &end);
>          rc = qemu_mprotect_none(end, page_size);
>          g_assert(!rc);
> -        if (splitwx_diff) {
> -            rc = qemu_mprotect_none(end + splitwx_diff, page_size);
> -            g_assert(!rc);
> -        }
>      }
>  
>      tcg_region_trees_init();
> -- 
> 2.25.1
> 

Thanks for fixing the issue, Richard,

I have two questions:
 - Should we keep guard pages for rx on all platforms except darwin?
   (that would make it similar to what Philippe proposed in the comments
   to patch 2).
 - What does it mean that rx might be covered by huge pages? (perhaps I'm
   missing some context)

Otherwise,

Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>

BR,
Roman

Re: [PATCH v3 for-6.0 1/2] tcg: Do not set guard pages on the rx portion of code_gen_buffer
Posted by Richard Henderson 4 years, 10 months ago
On 3/22/21 7:56 AM, Roman Bolshakov wrote:
>   - What does it mean that rx might be covered by huge pages? (perhaps I'm
>     missing some context)

Huge pages are where a single level (n-1) page table entry covers the
entire span that would otherwise require m separate level-n page table
entries.

On x86, this is a 2MB huge page in place of 512 4kB pages.  Most modern
CPUs support something similar.

See qemu_madvise(..., QEMU_MADV_HUGEPAGE).
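
A Linux-only sketch of that hint outside of QEMU, assuming
qemu_madvise(..., QEMU_MADV_HUGEPAGE) comes down to a plain
madvise(MADV_HUGEPAGE) on such hosts:

    /*
     * Ask the kernel to back an anonymous mapping with transparent
     * huge pages (2MB on x86_64), so that one PMD-level entry stands
     * in for 512 4kB PTEs.
     */
    #include <assert.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 32 * 1024 * 1024;  /* room for several 2MB pages */

        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(buf != MAP_FAILED);

        /* Hint that this range should use transparent huge pages. */
        if (madvise(buf, len, MADV_HUGEPAGE) != 0) {
            perror("madvise(MADV_HUGEPAGE)");  /* non-fatal: THP may be off */
        }

        printf("hinted %zu bytes at %p for huge pages\n", len, buf);
        munmap(buf, len);
        return 0;
    }

A PROT_NONE guard page punched into the middle of such a range forces
the kernel to split the huge mapping back into 4kB pages there, which is
why leaving the rx view free of guard pages lets it be more completely
covered by huge pages.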


r~