[PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption

Posted by Shrikanth Hegde 5 days, 20 hours ago
Large user copy_to/from (more than 16 bytes) uses vmx instructions to 
speed things up. Once the copy is done, it makes sense to try to schedule
as soon as possible for preemptible kernels. So do this for 
preempt=full/lazy and rt kernel. 

Not checking for lazy bit here, since it could lead to unnecessary 
context switches.
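
As I understand the Kconfig wiring (a sketch paraphrased here, not taken from
this patch), CONFIG_PREEMPT, CONFIG_PREEMPT_LAZY and CONFIG_PREEMPT_RT all end
up selecting CONFIG_PREEMPTION, and need_resched() tests only TIF_NEED_RESCHED,
not TIF_NEED_RESCHED_LAZY, so:

	/* Sketch of the intended behaviour, not a quote of the tree: */
	if (IS_ENABLED(CONFIG_PREEMPTION) &&	/* full, lazy and rt kernels */
	    need_resched())			/* NEED_RESCHED only; the lazy
						 * bit alone does not arm this */
		set_dec(1);			/* decrementer fires and the irq
						 * exit path reschedules us */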

Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 arch/powerpc/lib/vmx-helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index d491da8d1838..58ed6bd613a6 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
 	 * set and we are preemptible. The hack here is to schedule a
 	 * decrementer to fire here and reschedule for us if necessary.
 	 */
-	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
+	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
 		set_dec(1);
 	return 0;
 }
-- 
2.43.5
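
For context, the hunk sits at the tail of exit_vmx_usercopy(); with the patch
applied the function reads roughly as follows (paraphrased from
arch/powerpc/lib/vmx-helper.c from memory; the elided comment and exact
details may differ):

	int exit_vmx_usercopy(void)
	{
		disable_kernel_altivec();
		pagefault_enable();
		preempt_enable_no_resched();
		/*
		 * ... set and we are preemptible. The hack here is to schedule a
		 * decrementer to fire here and reschedule for us if necessary.
		 */
		if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
			set_dec(1);
		return 0;
	}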
Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
Posted by Sebastian Andrzej Siewior 2 days, 7 hours ago
On 2024-11-17 00:53:06 [+0530], Shrikanth Hegde wrote:
> Large user copy_to/from (more than 16 bytes) uses vmx instructions to 
> speed things up. Once the copy is done, it makes sense to try to schedule
> as soon as possible for preemptible kernels. So do this for 
> preempt=full/lazy and rt kernel. 
> 
> Not checking for lazy bit here, since it could lead to unnecessary 
> context switches.
> 
> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
> ---
>  arch/powerpc/lib/vmx-helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
> index d491da8d1838..58ed6bd613a6 100644
> --- a/arch/powerpc/lib/vmx-helper.c
> +++ b/arch/powerpc/lib/vmx-helper.c
> @@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
>  	 * set and we are preemptible. The hack here is to schedule a
>  	 * decrementer to fire here and reschedule for us if necessary.
>  	 */
> -	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
> +	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
>  		set_dec(1);

Now looking at this again, there is a comment on why preempt_enable() is
bad. An interrupt between preempt_enable_no_resched() and set_dec() is
fine because irq-exit would preempt properly? Regular preemption works
again once copy_to_user() is done? So if you copy 1GiB, you are blocked
for that 1GiB?

>  	return 0;
>  }

Sebastian
Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
Posted by Shrikanth Hegde 1 day, 21 hours ago

On 11/20/24 13:30, Sebastian Andrzej Siewior wrote:
> On 2024-11-17 00:53:06 [+0530], Shrikanth Hegde wrote:
>> Large user copy_to/from (more than 16 bytes) uses vmx instructions to
>> speed things up. Once the copy is done, it makes sense to try to schedule
>> as soon as possible for preemptible kernels. So do this for
>> preempt=full/lazy and rt kernel.
>>
>> Not checking for lazy bit here, since it could lead to unnecessary
>> context switches.
>>
>> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
>> ---
>>   arch/powerpc/lib/vmx-helper.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
>> index d491da8d1838..58ed6bd613a6 100644
>> --- a/arch/powerpc/lib/vmx-helper.c
>> +++ b/arch/powerpc/lib/vmx-helper.c
>> @@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
>>   	 * set and we are preemptible. The hack here is to schedule a
>>   	 * decrementer to fire here and reschedule for us if necessary.
>>   	 */
>> -	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
>> +	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
>>   		set_dec(1);
> 
> Now looking at this again, there is a comment on why preempt_enable() is
> bad. An interrupt between preempt_enable_no_resched() and set_dec() is
> fine because irq-exit would preempt properly?

I think so. AFAIU the comment says the issue lies with the AMR register not being saved across
a context switch. interrupt_exit_kernel_prepare() saves it and restores it using kuap_kernel_restore().
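
To make the ordering concrete, here is a rough sketch of the window being
discussed (a paraphrase, not code copied from the tree):

	preempt_enable_no_resched();	/* drop the preempt count, no schedule */
	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
		set_dec(1);		/* decrementer fires almost immediately */

An interrupt landing anywhere after preempt_enable_no_resched() can preempt us
from its exit path, where interrupt_exit_kernel_prepare()/kuap_kernel_restore()
take care of the AMR state, whereas calling schedule() or preempt_enable()
directly from here would not.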

> Regular preemption works
> again once copy_to_user() is done? So if you copy 1GiB, you are blocked
> for that 1GiB?


Yes, regular preemption would work on exit of copy_to_user(). Since preempt_disable() was done
before the copy starts, I think yes, it would be blocked until the copy is complete.

> 
>>   	return 0;
>>   }
> 
> Sebastian

Nick, mpe: please correct me if I am wrong.
Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
Posted by Ankur Arora 2 days, 18 hours ago
Shrikanth Hegde <sshegde@linux.ibm.com> writes:

> Large user copy_to/from (more than 16 bytes) uses vmx instructions to
> speed things up. Once the copy is done, it makes sense to try to schedule
> as soon as possible for preemptible kernels. So do this for
> preempt=full/lazy and rt kernel.

Note that this check will also fire for PREEMPT_DYNAMIC && preempt=none.
So when power supports PREEMPT_DYNAMIC this will need to change
to preempt_model_*() based checks.

> Not checking for lazy bit here, since it could lead to unnecessary
> context switches.

Maybe:
Not checking for lazy bit here, since we only want to schedule when
a context switch is imminently required.

> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
> ---
>  arch/powerpc/lib/vmx-helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
> index d491da8d1838..58ed6bd613a6 100644
> --- a/arch/powerpc/lib/vmx-helper.c
> +++ b/arch/powerpc/lib/vmx-helper.c
> @@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
>  	 * set and we are preemptible. The hack here is to schedule a
>  	 * decrementer to fire here and reschedule for us if necessary.
>  	 */
> -	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
> +	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
>  		set_dec(1);
>  	return 0;
>  }


--
ankur
Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
Posted by Sebastian Andrzej Siewior 2 days, 7 hours ago
On 2024-11-19 13:08:31 [-0800], Ankur Arora wrote:
> 
> Shrikanth Hegde <sshegde@linux.ibm.com> writes:
> 
> > Large user copy_to/from (more than 16 bytes) uses vmx instructions to
> > speed things up. Once the copy is done, it makes sense to try to schedule
> > as soon as possible for preemptible kernels. So do this for
> > preempt=full/lazy and rt kernel.
> 
> Note that this check will also fire for PREEMPT_DYNAMIC && preempt=none.
> So when power supports PREEMPT_DYNAMIC this will need to change
> to preempt_model_*() based checks.
> 
> > Not checking for lazy bit here, since it could lead to unnecessary
> > context switches.
> 
> Maybe:
> Not checking for lazy bit here, since we only want to schedule when
> a context switch is imminently required.

Isn't this behaviour here exactly what preempt_enable() would do?
If the LAZY bit is set, it is delayed until return to userland or an
explicit schedule(), which is where it gets done. If the LAZY bit has
turned into an actual scheduling request, then it is acted upon.
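
Roughly, the model as I read it (a sketch of the generic lazy-preemption
behaviour, assumed rather than taken from this patch):

	/*
	 * - a wakeup for a fair task sets only TIF_NEED_RESCHED_LAZY
	 * - need_resched(), and therefore this set_dec(1) check, tests only
	 *   TIF_NEED_RESCHED, the same bit preempt_enable() acts on
	 * - the lazy bit is honoured at return to userspace or at an explicit
	 *   schedule(), and gets upgraded to TIF_NEED_RESCHED if the task
	 *   keeps running for too long
	 */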

Sebastian
Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
Posted by Shrikanth Hegde 2 days ago

On 11/20/24 13:33, Sebastian Andrzej Siewior wrote:
> On 2024-11-19 13:08:31 [-0800], Ankur Arora wrote:
>>
>> Shrikanth Hegde <sshegde@linux.ibm.com> writes:
>>

Thanks Ankur and Sebastian for taking a look.

>>> Large user copy_to/from (more than 16 bytes) uses vmx instructions to
>>> speed things up. Once the copy is done, it makes sense to try to schedule
>>> as soon as possible for preemptible kernels. So do this for
>>> preempt=full/lazy and rt kernel.
>>
>> Note that this check will also fire for PREEMPT_DYNAMIC && preempt=none.
>> So when power supports PREEMPT_DYNAMIC this will need to change
>> to preempt_model_*() based checks.

Yes. Both this and the return-to-kernel path need to change when powerpc supports PREEMPT_DYNAMIC.
I have a patch in the works in which I essentially check for the preemption model,
either as below or based on a static key.

-	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
+	if (preempt_model_preemptible() && need_resched())



+mark +valentin

Looking more into how PREEMPT_DYNAMIC works with static keys, I have one query.
This is more about PREEMPT_DYNAMIC than anything to do with LAZY.

I see many places use a static_key based check instead of preempt_model_preemptible(),
such as dynamic_preempt_schedule(). Is that because a static key is faster?

On the other hand, using preempt_model_preemptible() could make the code simpler.
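
For illustration, the two patterns compared here would look roughly like this
(the static key name below is made up for the sketch; only
DEFINE_STATIC_KEY_FALSE()/static_branch_unlikely() are the real generic APIs):

	/* Pattern 1: ask the preemption model at run time. */
	if (preempt_model_preemptible() && need_resched())
		set_dec(1);

	/* Pattern 2: a static key flipped once when the model is chosen at
	 * boot, so the disabled case is only a patched-in nop in the fast
	 * path. sk_dynamic_preempt is a hypothetical name. */
	DEFINE_STATIC_KEY_FALSE(sk_dynamic_preempt);

	if (static_branch_unlikely(&sk_dynamic_preempt) && need_resched())
		set_dec(1);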

>>
>>> Not checking for lazy bit here, since it could lead to unnecessary
>>> context switches.
>>
>> Maybe:
>> Not checking for lazy bit here, since we only want to schedule when
>> a context switch is imminently required.
> 
> Isn't this behaviour here exactly what preempt_enable() would do?
> If the LAZY bit is set, it is delayed until return to userland or an
> explicit schedule(), which is where it gets done. If the LAZY bit has
> turned into an actual scheduling request, then it is acted upon.
> 
> Sebastian