[PATCH for-4.14 v2] x86/tlb: fix assisted flush usage

Roger Pau Monne posted 1 patch 3 years, 10 months ago
Posted by Roger Pau Monne 3 years, 10 months ago
Commit e9aca9470ed86 introduced a regression when avoiding sending
IPIs for certain flush operations. Xen's page fault handler
(spurious_page_fault) relies on blocking interrupts in order to
prevent handling TLB flush IPIs, and thus to prevent other CPUs from
removing page table pages. Switching to assisted flushing avoids such
IPIs, and can thus result in pages belonging to the page tables being
removed (and possibly re-used) while __page_fault_type is being
executed.

Force some of the TLB flushes to use IPIs, thus avoiding the assisted
TLB flush. The selected flushes are the page type change (when
switching from a page table type to a different one, ie: a page that
has been removed as a page table) and page allocation. This sadly has
a negative performance impact on the pvshim, as fewer assisted flushes
can be used.

Introduce a new flag (FLUSH_FORCE_IPI) and helper to force a TLB flush
using an IPI (flush_tlb_mask_sync). Note that the flag is only
meaningfully defined when the hypervisor supports PV or shadow paging
mode, as otherwise hardware assisted paging domains are in charge of
their page tables and won't share page tables with Xen, thus not
influencing the result of page walks performed by the spurious fault
handler.

Just passing this new flag when calling flush_area_mask prevents the
usage of the assisted flush without any other side effects.

Note the flag is not defined on Arm, and the introduced helper is just
a dummy alias to the existing flush_tlb_mask.

Fixes: e9aca9470ed86 ('x86/tlb: use Xen L0 assisted TLB flush when available')
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Add a comment describing the usage of FLUSH_FORCE_IPI (and why no
   modifications to flush_area_mask are required).
 - Use PGT_root_page_table instead of PGT_l4_page_table.
 - Also perform IPI flushes if configured with shadow paging support.
 - Use ifdef instead of if.
---
 xen/arch/x86/mm.c              | 12 +++++++++++-
 xen/common/memory.c            |  2 +-
 xen/common/page_alloc.c        |  2 +-
 xen/include/asm-arm/flushtlb.h |  1 +
 xen/include/asm-x86/flushtlb.h | 18 ++++++++++++++++++
 xen/include/xen/mm.h           |  8 ++++++--
 6 files changed, 38 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index c294092f93..47872dccd0 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2894,7 +2894,17 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                       ((nx & PGT_type_mask) == PGT_writable_page)) )
                 {
                     perfc_incr(need_flush_tlb_flush);
-                    flush_tlb_mask(mask);
+                    if ( (x & PGT_type_mask) &&
+                         (x & PGT_type_mask) <= PGT_root_page_table )
+                        /*
+                         * If page was a page table make sure the flush is
+                         * performed using an IPI in order to avoid changing
+                         * the type of a page table page under the feet of
+                         * spurious_page_fault.
+                         */
+                        flush_tlb_mask_sync(mask);
+                    else
+                        flush_tlb_mask(mask);
                 }
 
                 /* We lose existing type and validity. */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..0d224d6675 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -278,7 +278,7 @@ static void populate_physmap(struct memop_args *a)
 
 out:
     if ( need_tlbflush )
-        filtered_flush_tlb_mask(tlbflush_timestamp);
+        filtered_flush_tlb_mask(tlbflush_timestamp, false);
 
     if ( a->memflags & MEMF_no_icache_flush )
         invalidate_icache();
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 10b7aeca48..765f8d8e78 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1060,7 +1060,7 @@ static struct page_info *alloc_heap_pages(
     }
 
     if ( need_tlbflush )
-        filtered_flush_tlb_mask(tlbflush_timestamp);
+        filtered_flush_tlb_mask(tlbflush_timestamp, true);
 
     return pg;
 }
diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h
index ab1aae5c90..7ae0543885 100644
--- a/xen/include/asm-arm/flushtlb.h
+++ b/xen/include/asm-arm/flushtlb.h
@@ -27,6 +27,7 @@ static inline void page_set_tlbflush_timestamp(struct page_info *page)
 
 /* Flush specified CPUs' TLBs */
 void flush_tlb_mask(const cpumask_t *mask);
+#define flush_tlb_mask_sync flush_tlb_mask
 
 /*
  * Flush a range of VA's hypervisor mappings from the TLB of the local
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 8639427cce..9e12844111 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -126,6 +126,16 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
 #else
 #define FLUSH_HVM_ASID_CORE 0
 #endif
+#if defined(CONFIG_PV) || defined(CONFIG_SHADOW_PAGING)
+/*
+ * Force an IPI to be sent. Note that adding this to the flags passed to
+ * flush_area_mask will prevent using the assisted flush without having any
+ * other side effect.
+ */
+# define FLUSH_FORCE_IPI 0x8000
+#else
+# define FLUSH_FORCE_IPI 0
+#endif
 
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
@@ -148,6 +158,14 @@ void flush_area_mask(const cpumask_t *, const void *va, unsigned int flags);
 /* Flush specified CPUs' TLBs */
 #define flush_tlb_mask(mask)                    \
     flush_mask(mask, FLUSH_TLB)
+/*
+ * Flush specified CPUs' TLBs and force the usage of an IPI to do so. This is
+ * required for certain operations that rely on page tables themselves not
+ * being freed and reused when interrupts are blocked, as the flush IPI won't
+ * be fulfilled until exiting from that critical region.
+ */
+#define flush_tlb_mask_sync(mask)               \
+    flush_mask(mask, FLUSH_TLB | FLUSH_FORCE_IPI)
 #define flush_tlb_one_mask(mask,v)              \
     flush_area_mask(mask, (const void *)(v), FLUSH_TLB|FLUSH_ORDER(0))
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 9b62087be1..f360166f00 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -639,7 +639,8 @@ static inline void accumulate_tlbflush(bool *need_tlbflush,
     }
 }
 
-static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
+static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp,
+                                           bool sync)
 {
     cpumask_t mask;
 
@@ -648,7 +649,10 @@ static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
     if ( !cpumask_empty(&mask) )
     {
         perfc_incr(need_flush_tlb_flush);
-        flush_tlb_mask(&mask);
+        if ( sync )
+            flush_tlb_mask_sync(&mask);
+        else
+            flush_tlb_mask(&mask);
     }
 }
 
-- 
2.26.2


Re: [PATCH for-4.14 v2] x86/tlb: fix assisted flush usage
Posted by Julien Grall 3 years, 10 months ago
Hi Roger,

On 23/06/2020 15:50, Roger Pau Monne wrote:
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 9b62087be1..f360166f00 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -639,7 +639,8 @@ static inline void accumulate_tlbflush(bool *need_tlbflush,
>       }
>   }
>   
> -static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> +static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp,
> +                                           bool sync)

I read the commit message and went through the code, but it is still not 
clear what "sync" means in a non-architectural way.

As an Arm developer, I would assume this means we don't wait for the 
TLB flush to complete. But I am sure this would result in some 
unexpected behavior.

So can you explain in non-x86 words what this really means?

Cheers,

-- 
Julien Grall

Re: [PATCH for-4.14 v2] x86/tlb: fix assisted flush usage
Posted by Roger Pau Monné 3 years, 10 months ago
On Tue, Jun 23, 2020 at 04:08:06PM +0100, Julien Grall wrote:
> Hi Roger,
> 
> On 23/06/2020 15:50, Roger Pau Monne wrote:
> > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> > index 9b62087be1..f360166f00 100644
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -639,7 +639,8 @@ static inline void accumulate_tlbflush(bool *need_tlbflush,
> >       }
> >   }
> > -static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> > +static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp,
> > +                                           bool sync)
> 
> I read the commit message and went through the code, but it is still not
> clear what "sync" means in a non-architectural way.
> 
> As an Arm developer, I would assume this means we don't wait for the TLB
> flush to complete. But I am sure this would result in some unexpected
> behavior.

No, when we return from filtered_flush_tlb_mask the flush has been
performed (either with sync or without), but I understand the
confusion given the parameter name.

> So can you explain in non-x86 words what this really means?

sync (in this context) means to force the usage of an IPI (if built
with PV or shadow paging support) in order to perform the flush.
AFAICT on Arm you always avoid an IPI when performing a flush, and
that's fine because you don't have PV or shadow, and then you don't
require this. Also I'm not sure Arm has the concept of a spurious
page fault.

I could rename to force_ipi (or require_ipi) if that's better?

Roger.

Re: [PATCH for-4.14 v2] x86/tlb: fix assisted flush usage
Posted by Julien Grall 3 years, 10 months ago

On 23/06/2020 16:15, Roger Pau Monné wrote:
> On Tue, Jun 23, 2020 at 04:08:06PM +0100, Julien Grall wrote:
>> Hi Roger,
>>
>> On 23/06/2020 15:50, Roger Pau Monne wrote:
>>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>>> index 9b62087be1..f360166f00 100644
>>> --- a/xen/include/xen/mm.h
>>> +++ b/xen/include/xen/mm.h
>>> @@ -639,7 +639,8 @@ static inline void accumulate_tlbflush(bool *need_tlbflush,
>>>        }
>>>    }
>>> -static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
>>> +static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp,
>>> +                                           bool sync)
>>
>> I read the commit message and went through the code, but it is still not
>> clear what "sync" means in a non-architectural way.
>>
>> As an Arm developer, I would assume this means we don't wait for the TLB
>> flush to complete. But I am sure this would result in some unexpected
>> behavior.
> 
> No, when we return from filtered_flush_tlb_mask the flush has been
> performed (either with sync or without), but I understand the
> confusion given the parameter name.
> 
>> So can you explain in non-x86 words what this really means?
> 
> sync (in this context) means to force the usage of an IPI (if built
> with PV or shadow paging support) in order to perform the flush.

This is compared to what?

> AFAICT on Arm you always avoid an IPI when performing a flush, and
> that's fine because you don't have PV or shadow, and then you don't
> require this.

Arm provides instructions to broadcast TLB flushes, so by the time one 
of those instructions has completed, none of the TLB entries associated 
with the translation exist any more.

I don't think using PV or shadow would change anything here in the way 
we do the flush.

> Also I'm not sure Arm has the concept of a spurious
> page fault.

So if I understand correctly, the HW may raise a fault even if the 
mapping was in the page-tables. Is that correct?

We have a spurious page fault handler for stage-2 (aka EPT on x86), as 
we need to transition through an invalid mapping for certain page-table 
updates (e.g. superpage shattering). We use the same rwlock for both 
the page fault handler and the page-table updates, so there is no way 
the two can run concurrently.

> 
> I could rename to force_ipi (or require_ipi) if that's better?

Hmmm, based on what I wrote above, I don't think this name would be more 
suitable. However, regardless of the name, it is not clear to me when a 
caller should use false or true.

Have you considered a rwlock to synchronize the two?

Cheers,

-- 
Julien Grall

Re: [PATCH for-4.14 v2] x86/tlb: fix assisted flush usage
Posted by Roger Pau Monné 3 years, 10 months ago
On Tue, Jun 23, 2020 at 04:46:29PM +0100, Julien Grall wrote:
> 
> 
> On 23/06/2020 16:15, Roger Pau Monné wrote:
> > On Tue, Jun 23, 2020 at 04:08:06PM +0100, Julien Grall wrote:
> > > Hi Roger,
> > > 
> > > On 23/06/2020 15:50, Roger Pau Monne wrote:
> > > > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> > > > index 9b62087be1..f360166f00 100644
> > > > --- a/xen/include/xen/mm.h
> > > > +++ b/xen/include/xen/mm.h
> > > > @@ -639,7 +639,8 @@ static inline void accumulate_tlbflush(bool *need_tlbflush,
> > > >        }
> > > >    }
> > > > -static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> > > > +static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp,
> > > > +                                           bool sync)
> > > 
> > > I read the commit message and went through the code, but it is still not
> > > clear what "sync" means in a non-architectural way.
> > > 
> > > As an Arm developer, I would assume this means we don't wait for the TLB
> > > flush to complete. But I am sure this would result in some unexpected
> > > behavior.
> > 
> > No, when we return from filtered_flush_tlb_mask the flush has been
> > performed (either with sync or without), but I understand the
> > confusion given the parameter name.
> > 
> > > So can you explain in non-x86 words what this really means?
> > 
> > sync (in this context) means to force the usage of an IPI (if built
> > with PV or shadow paging support) in order to perform the flush.
> 
> This is compared to what?

Using assisted flushes, like you do on Arm, where you don't send an
IPI in order to achieve a TLB flush on a remote pCPU.

> > AFAICT on Arm you always avoid an IPI when performing a flush, and
> > that's fine because you don't have PV or shadow, and then you don't
> > require this.
> 
> Arm provides instructions to broadcast TLB flushes, so by the time one of
> those instructions has completed, none of the TLB entries associated with
> the translation exist any more.
> 
> I don't think using PV or shadow would change anything here in the way we do
> the flush.

TBH, I'm not sure how this applies to Arm. There's no PV or shadow
implementation, so I have no idea whether this would apply or not.

> > Also I'm not sure Arm has the concept of a spurious
> > page fault.
> 
> So if I understand correctly, the HW may raise a fault even if the mapping
> was in the page-tables. Is that correct?

Yes, this can happen when you promote the permission of a page table
entry without doing a TLB flush AFAICT. Ie: you have a read-only page,
which is promoted to writable, but you don't perform a TLB flush and
just rely on getting a page fault that will clear the TLB entry and
retry.

> We have a spurious page fault handler for stage-2 (aka EPT on x86), as we
> need to transition through an invalid mapping for certain page-table updates
> (e.g. superpage shattering). We use the same rwlock for both the page
> fault handler and the page-table updates, so there is no way the two can run
> concurrently.

This is slightly different as it's used by PV page tables, so the
fault is triggered much more often than the fault handler you are
referring to IMO.

> > 
> > I could rename to force_ipi (or require_ipi) if that's better?
> 
> Hmmm, based on what I wrote above, I don't think this name would be more
> suitable. However, regardless the name, it is not clear to me when a caller
> should use false or true.
> 
> Have you considered a rwlock to synchronize the two?

Yes, the performance drop is huge when I tried. I could try to refine,
but I think there's always going to be a performance drop, as you then
require mutual exclusion when modifying the page tables (you take the
lock in write mode). Right now modification of the page tables can be
done concurrently.

FWIW Xen explicitly moved from using a lock to this model in
3203345bb13, apparently due to some deadlock situation. I'm not sure
if that still holds.

Roger.

Re: [PATCH for-4.14 v2] x86/tlb: fix assisted flush usage
Posted by Julien Grall 3 years, 9 months ago
Hi Roger,

On 23/06/2020 17:16, Roger Pau Monné wrote:
> On Tue, Jun 23, 2020 at 04:46:29PM +0100, Julien Grall wrote:
>>
>>
>> On 23/06/2020 16:15, Roger Pau Monné wrote:
>>> On Tue, Jun 23, 2020 at 04:08:06PM +0100, Julien Grall wrote:
>>>> Hi Roger,
>>>>
>>>> On 23/06/2020 15:50, Roger Pau Monne wrote:
>>>>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>>>>> index 9b62087be1..f360166f00 100644
>>>>> --- a/xen/include/xen/mm.h
>>>>> +++ b/xen/include/xen/mm.h
>>>>> @@ -639,7 +639,8 @@ static inline void accumulate_tlbflush(bool *need_tlbflush,
>>>>>         }
>>>>>     }
>>>>> -static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
>>>>> +static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp,
>>>>> +                                           bool sync)
>>>>
>>>> I read the commit message and went through the code, but it is still not
>>>> clear what "sync" means in a non-architectural way.
>>>>
>>>> As an Arm developer, I would assume this means we don't wait for the TLB
>>>> flush to complete. But I am sure this would result in some unexpected
>>>> behavior.
>>>
>>> No, when we return from filtered_flush_tlb_mask the flush has been
>>> performed (either with sync or without), but I understand the
>>> confusion given the parameter name.
>>>
>>>> So can you explain in non-x86 words what this really means?
>>>
>>> sync (in this context) means to force the usage of an IPI (if built
>>> with PV or shadow paging support) in order to perform the flush.
>>
>> This is compared to what?
> 
> Using assisted flushes, like you do on Arm, where you don't send an
> IPI in order to achieve a TLB flush on a remote pCPU.

AFAICT, the assisted flushes only happen when running a nested Xen. Is 
that correct?

> 
>>> AFAICT on Arm you always avoid an IPI when performing a flush, and
>>> that's fine because you don't have PV or shadow, and then you don't
>>> require this.
>>
>> Arm provides instructions to broadcast TLB flushes, so by the time one of
>> those instructions has completed, none of the TLB entries associated with
>> the translation exist any more.
>>
>> I don't think using PV or shadow would change anything here in the way we do
>> the flush.
> 
> TBH, I'm not sure how this applies to Arm. There's no PV or shadow
> implementation, so I have no idea whether this would apply or not.

Yes there is none. However, my point was that if we had to implement 
PV/shadow on Arm then an IPI would definitely be my last choice.

>>>
>>> I could rename to force_ipi (or require_ipi) if that's better?
>>
>> Hmmm, based on what I wrote above, I don't think this name would be more
>> suitable. However, regardless the name, it is not clear to me when a caller
>> should use false or true.
>>
>> Have you considered a rwlock to synchronize the two?
> 
> Yes, the performance drop is huge when I tried. I could try to refine,
> but I think there's always going to be a performance drop, as you then
> require mutual exclusion when modifying the page tables (you take the
> lock in write mode). Right now modification of the page tables can be
> done concurrently.

Fair enough. I will scratch that suggestion then. Thanks for the 
explanation!

So now getting back to filtered_flush_tlb_mask(). AFAICT, the only two 
callers are in common code. Both are used for allocation purposes, so 
may I ask why they need to use different kinds of flush?

> 
> FWIW Xen explicitly moved from using a lock to this model in
> 3203345bb13, apparently due to some deadlock situation. I'm not sure
> if that still holds.

The old classic major change with limited explanation :/.

Cheers,

-- 
Julien Grall

Re: [PATCH for-4.14 v2] x86/tlb: fix assisted flush usage
Posted by Roger Pau Monné 3 years, 9 months ago
On Wed, Jun 24, 2020 at 12:10:45PM +0100, Julien Grall wrote:
> Hi Roger,
> 
> On 23/06/2020 17:16, Roger Pau Monné wrote:
> > On Tue, Jun 23, 2020 at 04:46:29PM +0100, Julien Grall wrote:
> > > 
> > > 
> > > On 23/06/2020 16:15, Roger Pau Monné wrote:
> > > > On Tue, Jun 23, 2020 at 04:08:06PM +0100, Julien Grall wrote:
> > > > > Hi Roger,
> > > > > 
> > > > > On 23/06/2020 15:50, Roger Pau Monne wrote:
> > > > > > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> > > > > > index 9b62087be1..f360166f00 100644
> > > > > > --- a/xen/include/xen/mm.h
> > > > > > +++ b/xen/include/xen/mm.h
> > > > > > @@ -639,7 +639,8 @@ static inline void accumulate_tlbflush(bool *need_tlbflush,
> > > > > >         }
> > > > > >     }
> > > > > > -static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
> > > > > > +static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp,
> > > > > > +                                           bool sync)
> > > > > 
> > > > > I read the commit message and went through the code, but it is still not
> > > > > clear what "sync" means in a non-architectural way.
> > > > > 
> > > > > As an Arm developer, I would assume this means we don't wait for the TLB
> > > > > flush to complete. But I am sure this would result in some unexpected
> > > > > behavior.
> > > > 
> > > > No, when we return from filtered_flush_tlb_mask the flush has been
> > > > performed (either with sync or without), but I understand the
> > > > confusion given the parameter name.
> > > > 
> > > > > So can you explain in non-x86 words what this really means?
> > > > 
> > > > sync (in this context) means to force the usage of an IPI (if built
> > > > with PV or shadow paging support) in order to perform the flush.
> > > 
> > > This is compared to what?
> > 
> > Using assisted flushes, like you do on Arm, where you don't send an
> > IPI in order to achieve a TLB flush on a remote pCPU.
> 
> AFAICT, the assisted flushes only happen when running a nested Xen. Is that
> correct?

ATM yes, we don't have support for the newly introduced AMD INVLPGB
instruction yet, which provides such functionality on bare metal.

> > 
> > > > AFAICT on Arm you always avoid an IPI when performing a flush, and
> > > > that's fine because you don't have PV or shadow, and then you don't
> > > > require this.
> > > 
> > > Arm provides instructions to broadcast TLB flushes, so by the time one of
> > > those instructions has completed, none of the TLB entries associated with
> > > the translation exist any more.
> > > 
> > > I don't think using PV or shadow would change anything here in the way we do
> > > the flush.
> > 
> > TBH, I'm not sure how this applies to Arm. There's no PV or shadow
> > implementation, so I have no idea whether this would apply or not.
> 
> Yes there is none. However, my point was that if we had to implement
> PV/shadow on Arm then an IPI would definitely be my last choice.

Right, this mostly depends on how you perform page table modifications
and whether you have to handle spurious faults like x86 does.

> > > > 
> > > > I could rename to force_ipi (or require_ipi) if that's better?
> > > 
> > > Hmmm, based on what I wrote above, I don't think this name would be more
> > > suitable. However, regardless of the name, it is not clear to me when a caller
> > > should use false or true.
> > > 
> > > Have you considered a rwlock to synchronize the two?
> > 
> > Yes, the performance drop is huge when I tried. I could try to refine,
> > but I think there's always going to be a performance drop, as you then
> > require mutual exclusion when modifying the page tables (you take the
> > lock in write mode). Right now modification of the page tables can be
> > done concurrently.
> 
> Fair enough. I will scratch that suggestion then. Thanks for the
> explanation!
> 
> So now getting back to filtered_flush_tlb_mask(). AFAICT, the only two
> callers are in common code. Both are used for allocation purposes, so may I
> ask why they need to use different kinds of flush?

Looking at it again, this is wrong. I've just realized that
populate_physmap will, depending on the situation, use the
MEMF_no_tlbflush flag, and so it needs to perform the flush by itself
(and that's why filtered_flush_tlb_mask is used).

I guess you will be fine with removing the sync parameter then, and on
x86 forcing filtered_flush_tlb_mask to always use physical IPIs in
order to perform the flush?
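[Editor's note: under that suggestion, the common helper might end up looking roughly like the following fragment. This is a hypothetical sketch, not an actual follow-up patch; it assumes the Arm alias `#define flush_tlb_mask_sync flush_tlb_mask` from this patch stays in place so common code can call the sync variant unconditionally:]

```c
/* Hypothetical sketch: no "sync" parameter; the IPI-forcing variant is
 * used unconditionally. On x86 this always sends an IPI when built with
 * PV/shadow support; on Arm flush_tlb_mask_sync is just an alias for
 * flush_tlb_mask, so behaviour there is unchanged. */
static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
{
    cpumask_t mask;

    cpumask_copy(&mask, &cpu_online_map);
    tlbflush_filter(&mask, tlbflush_timestamp);
    if ( !cpumask_empty(&mask) )
    {
        perfc_incr(need_flush_tlb_flush);
        flush_tlb_mask_sync(&mask);
    }
}
```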

Thanks, Roger.