On 07.06.2021 04:43, Penny Zheng wrote:
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1087,6 +1087,9 @@ static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
> nr_mfns, mfn_x(smfn));
> return NULL;
> }
> +
> + spin_lock(&heap_lock);
> +
> pg = mfn_to_page(smfn);
>
> for ( i = 0; i < nr_mfns; i++ )
> @@ -1127,6 +1130,8 @@ static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
> !(memflags & MEMF_no_icache_flush));
> }
>
> + spin_unlock(&heap_lock);
> +
> if ( need_tlbflush )
> filtered_flush_tlb_mask(tlbflush_timestamp);
Besides the need to fold this into the previous patch (as indicated there),
you will also want to pay attention to how alloc_heap_pages() carefully
avoids scrubbing or flushing pages while the heap lock is held. You will
want to follow this practice for your additions.
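
To make this more concrete, here is a rough sketch (only a sketch, not the
actual change - the per-page validation and bookkeeping are omitted, and I'm
assuming the overall structure of alloc_staticmem_pages() as quoted above,
with flush_page_to_ram() being the call at the quoted site) of how the
locked region could be narrowed, following the alloc_heap_pages() pattern of
dropping the lock before any scrubbing or cache maintenance:

    spin_lock(&heap_lock);

    pg = mfn_to_page(smfn);

    for ( i = 0; i < nr_mfns; i++ )
    {
        /* Claim each page and record the TLB flush timestamp while
         * holding the heap lock; no scrubbing or flushing here. */
        /* ... existing per-page setup ... */
    }

    spin_unlock(&heap_lock);

    /* Cache maintenance only after the lock has been dropped, just like
     * alloc_heap_pages() does. */
    for ( i = 0; i < nr_mfns; i++ )
        flush_page_to_ram(mfn_x(smfn) + i,
                          !(memflags & MEMF_no_icache_flush));

    if ( need_tlbflush )
        filtered_flush_tlb_mask(tlbflush_timestamp);

That way the potentially long-running per-page work doesn't extend the time
the heap lock is held.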
Jan