[PATCH bpf 2/3] x86/mm: Disallow vsyscall page read for copy_from_kernel_nofault()

Posted by Hou Tao 1 year, 11 months ago
From: Hou Tao <houtao1@huawei.com>

When trying to use copy_from_kernel_nofault() to read vsyscall page
through a bpf program, the following oops was reported:

  BUG: unable to handle page fault for address: ffffffffff600000
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 3231067 P4D 3231067 PUD 3233067 PMD 3235067 PTE 0
  Oops: 0000 [#1] PREEMPT SMP PTI
  CPU: 1 PID: 20390 Comm: test_progs ...... 6.7.0+ #58
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996) ......
  RIP: 0010:copy_from_kernel_nofault+0x6f/0x110
  ......
  Call Trace:
   <TASK>
   ? copy_from_kernel_nofault+0x6f/0x110
   bpf_probe_read_kernel+0x1d/0x50
   bpf_prog_2061065e56845f08_do_probe_read+0x51/0x8d
   trace_call_bpf+0xc5/0x1c0
   perf_call_bpf_enter.isra.0+0x69/0xb0
   perf_syscall_enter+0x13e/0x200
   syscall_trace_enter+0x188/0x1c0
   do_syscall_64+0xb5/0xe0
   entry_SYSCALL_64_after_hwframe+0x6e/0x76
   </TASK>
  ......
  ---[ end trace 0000000000000000 ]---

The oops happens as follows: a bpf program uses bpf_probe_read_kernel()
to read from the vsyscall page; bpf_probe_read_kernel() invokes
copy_from_kernel_nofault(), which in turn invokes __get_user_asm(). A
page fault exception is triggered accordingly, but handle_page_fault()
considers the vsyscall page address as a userspace address instead of a
kernel space address, so the fixup set up by bpf isn't applied. Because
the exception happens in kernel space and page fault handling is
disabled, page_fault_oops() is invoked and an oops happens.
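
A minimal sketch of the kind of program that triggers this path is shown
below; the attach point, section name and function name are illustrative
and not the exact reproducer:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char LICENSE[] SEC("license") = "GPL";

  /* Attach to any frequently hit tracepoint and probe-read the fixed
   * vsyscall page address (0xffffffffff600000).
   */
  SEC("tracepoint/syscalls/sys_enter_nanosleep")
  int do_probe_read(void *ctx)
  {
          char buf[8];

          bpf_probe_read_kernel(buf, sizeof(buf),
                                (const void *)0xffffffffff600000);
          return 0;
  }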

Fix it by disallowing vsyscall page read for copy_from_kernel_nofault().
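
For reference, is_vsyscall_vaddr() (presumably exposed to this file via
mm_internal.h earlier in the series) boils down to a page-aligned compare
against VSYSCALL_ADDR (0xffffffffff600000), which matches the faulting
address in the oops above. Roughly:

  static inline bool is_vsyscall_vaddr(unsigned long vaddr)
  {
          return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR);
  }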

Originally-from: Thomas Gleixner <tglx@linutronix.de>
Reported-by: syzbot+72aa0161922eba61b50e@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/bpf/CAG48ez06TZft=ATH1qh2c5mpS5BT8UakwNkzi6nvK5_djC-4Nw@mail.gmail.com
Reported-by: xingwei lee <xrivendell7@gmail.com>
Closes: https://lore.kernel.org/bpf/CABOYnLynjBoFZOf3Z4BhaZkc5hx_kHfsjiW+UWLoB=w33LvScw@mail.gmail.com
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 arch/x86/mm/maccess.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
index 6993f026adec9..bb454e0abbfcf 100644
--- a/arch/x86/mm/maccess.c
+++ b/arch/x86/mm/maccess.c
@@ -3,6 +3,8 @@
 #include <linux/uaccess.h>
 #include <linux/kernel.h>
 
+#include "mm_internal.h"
+
 #ifdef CONFIG_X86_64
 bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
 {
@@ -15,6 +17,10 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
 	if (vaddr < TASK_SIZE_MAX + PAGE_SIZE)
 		return false;
 
+	/* vsyscall page is also considered as userspace address. */
+	if (is_vsyscall_vaddr(vaddr))
+		return false;
+
 	/*
 	 * Allow everything during early boot before 'x86_virt_bits'
 	 * is initialized.  Needed for instruction decoding in early
-- 
2.29.2
Re: [PATCH bpf 2/3] x86/mm: Disallow vsyscall page read for copy_from_kernel_nofault()
Posted by Sohil Mehta 1 year, 11 months ago
On 1/18/2024 11:30 PM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
> 
> When trying to use copy_from_kernel_nofault() to read vsyscall page
> through a bpf program, the following oops was reported:
> 
>   BUG: unable to handle page fault for address: ffffffffff600000
>   #PF: supervisor read access in kernel mode
>   #PF: error_code(0x0000) - not-present page
>   PGD 3231067 P4D 3231067 PUD 3233067 PMD 3235067 PTE 0
>   Oops: 0000 [#1] PREEMPT SMP PTI
>   CPU: 1 PID: 20390 Comm: test_progs ...... 6.7.0+ #58
>   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996) ......
>   RIP: 0010:copy_from_kernel_nofault+0x6f/0x110
>   ......
>   Call Trace:
>    <TASK>
>    ? copy_from_kernel_nofault+0x6f/0x110
>    bpf_probe_read_kernel+0x1d/0x50
>    bpf_prog_2061065e56845f08_do_probe_read+0x51/0x8d
>    trace_call_bpf+0xc5/0x1c0
>    perf_call_bpf_enter.isra.0+0x69/0xb0
>    perf_syscall_enter+0x13e/0x200
>    syscall_trace_enter+0x188/0x1c0
>    do_syscall_64+0xb5/0xe0
>    entry_SYSCALL_64_after_hwframe+0x6e/0x76
>    </TASK>
>   ......
>   ---[ end trace 0000000000000000 ]---
> 
> The oops happens as follows: A bpf program uses bpf_probe_read_kernel()
> to read from vsyscall page, bpf_probe_read_kernel() invokes
> copy_from_kernel_nofault() in turn and then invokes __get_user_asm(). A
> page fault exception is triggered accordingly, but handle_page_fault()
> considers the vsyscall page address as a userspace address instead of
> a kernel space address, so the fix-up set-up by bpf isn't applied.

This comment and the one in the code below seem contradictory and
confusing. Do we want the vsyscall page address to be considered as a
userspace address or not?

IIUC, the issue here is that the vsyscall page (in xonly mode) is not
really mapped and therefore running copy_from_kernel_nofault() on this
address is incorrect. This patch fixes this by making
copy_from_kernel_nofault() return an error for a vsyscall address.
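
For reference, the generic copy_from_kernel_nofault() in mm/maccess.c
consults the arch hook before touching the address and bails out with
-ERANGE when the hook refuses; roughly (abridged):

  long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
  {
          if (!copy_from_kernel_nofault_allowed(src, size))
                  return -ERANGE;

          pagefault_disable();
          /* ... the actual nofault copy loops ... */
          pagefault_enable();
          return 0;
  }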


> Because the exception happens in kernel space and page fault handling is
> disabled, page_fault_oops() is invoked and an oops happens.
> 
> Fix it by disallowing vsyscall page read for copy_from_kernel_nofault().
> 

[Maybe I have misunderstood the issue here and the following questions
are not even relevant.]

But, what about vsyscall=emulate? In that mode the page is actually
mapped. Would we want the page read to go through then?

> Originally-from: Thomas Gleixner <tglx@linutronix.de>

Documentation/process/maintainer-tip.rst says to use "Originally-by:"


> diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
> index 6993f026adec9..bb454e0abbfcf 100644
> --- a/arch/x86/mm/maccess.c
> +++ b/arch/x86/mm/maccess.c
> @@ -3,6 +3,8 @@
>  #include <linux/uaccess.h>
>  #include <linux/kernel.h>
>  
> +#include "mm_internal.h"
> +
>  #ifdef CONFIG_X86_64
>  bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
>  {
> @@ -15,6 +17,10 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
>  	if (vaddr < TASK_SIZE_MAX + PAGE_SIZE)
>  		return false;
>  
> +	/* vsyscall page is also considered as userspace address. */

A bit more explanation about why this should happen might be useful.

> +	if (is_vsyscall_vaddr(vaddr))
> +		return false;
> +
>  	/*
>  	 * Allow everything during early boot before 'x86_virt_bits'
>  	 * is initialized.  Needed for instruction decoding in early
Re: [PATCH bpf 2/3] x86/mm: Disallow vsyscall page read for copy_from_kernel_nofault()
Posted by Hou Tao 1 year, 11 months ago
Hi,

On 1/23/2024 8:18 AM, Sohil Mehta wrote:
> On 1/18/2024 11:30 PM, Hou Tao wrote:
>> From: Hou Tao <houtao1@huawei.com>
>>
>> When trying to use copy_from_kernel_nofault() to read vsyscall page
>> through a bpf program, the following oops was reported:
>>
>>   BUG: unable to handle page fault for address: ffffffffff600000
>>   #PF: supervisor read access in kernel mode
>>   #PF: error_code(0x0000) - not-present page
>>   PGD 3231067 P4D 3231067 PUD 3233067 PMD 3235067 PTE 0
>>   Oops: 0000 [#1] PREEMPT SMP PTI
>>   CPU: 1 PID: 20390 Comm: test_progs ...... 6.7.0+ #58
>>   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996) ......
>>   RIP: 0010:copy_from_kernel_nofault+0x6f/0x110
>>   ......
>>   Call Trace:
>>    <TASK>
>>    ? copy_from_kernel_nofault+0x6f/0x110
>>    bpf_probe_read_kernel+0x1d/0x50
>>    bpf_prog_2061065e56845f08_do_probe_read+0x51/0x8d
>>    trace_call_bpf+0xc5/0x1c0
>>    perf_call_bpf_enter.isra.0+0x69/0xb0
>>    perf_syscall_enter+0x13e/0x200
>>    syscall_trace_enter+0x188/0x1c0
>>    do_syscall_64+0xb5/0xe0
>>    entry_SYSCALL_64_after_hwframe+0x6e/0x76
>>    </TASK>
>>   ......
>>   ---[ end trace 0000000000000000 ]---
>>
>> The oops happens as follows: A bpf program uses bpf_probe_read_kernel()
>> to read from vsyscall page, bpf_probe_read_kernel() invokes
>> copy_from_kernel_nofault() in turn and then invokes __get_user_asm(). A
>> page fault exception is triggered accordingly, but handle_page_fault()
>> considers the vsyscall page address as a userspace address instead of
>> a kernel space address, so the fix-up set-up by bpf isn't applied.
> This comment and the one in the code below seem contradictory and
> confusing. Do we want the vsyscall page address to be considered as a
> userspace address or not?

handle_page_fault() already considers the vsyscall page address as a
userspace address, and this patch updates copy_from_kernel_nofault() to
treat the vsyscall page as a userspace address as well.
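
For reference, fault_in_kernel_space() in arch/x86/mm/fault.c already
carves the vsyscall page out of the kernel half of the address space,
roughly:

  static int fault_in_kernel_space(unsigned long address)
  {
          /*
           * On 64-bit systems, the vsyscall page is at an address above
           * TASK_SIZE_MAX, but is not considered part of the kernel
           * address space.
           */
          if (IS_ENABLED(CONFIG_X86_64) && is_vsyscall_vaddr(address))
                  return false;

          return address >= TASK_SIZE_MAX;
  }
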
>
> IIUC, the issue here is that the vsyscall page (in xonly mode) is not
> really mapped and therefore running copy_from_kernel_nofault() on this
> address is incorrect. This patch fixes this by making
> copy_from_kernel_nofault() return an error for a vsyscall address.
>

Yes, but the issue may occur in the vsyscall=none case as well.
fault_in_kernel_space(), invoked by handle_page_fault(), returns false
for the vsyscall address, so the fault is handled by
do_user_addr_fault(). When the SMAP feature is enabled, the invocation
of copy_from_kernel_nofault() will then trigger an oops due to the
following code snippet:

        if (unlikely(cpu_feature_enabled(X86_FEATURE_SMAP) &&
                     !(error_code & X86_PF_USER) &&
                     !(regs->flags & X86_EFLAGS_AC))) {
                /*
                 * No extable entry here.  This was a kernel access to an
                 * invalid pointer.  get_kernel_nofault() will not get here.
                 */
                page_fault_oops(regs, error_code, address);
                return;
        }

>> Because the exception happens in kernel space and page fault handling is
>> disabled, page_fault_oops() is invoked and an oops happens.
>>
>> Fix it by disallowing vsyscall page read for copy_from_kernel_nofault().
>>
> [Maybe I have misunderstood the issue here and following questions are
> not even relevant.]
>
> But, what about vsyscall=emulate? In that mode the page is actually
> mapped. Would we want the page read to go through then?

Er, now that the vsyscall page is considered a userspace address, I
think we should reject reading it through copy_from_kernel_nofault()
even when it is mapped.

>
>> Originally-from: Thomas Gleixner <tglx@linutronix.de>
> Documentation/process/maintainer-tip.rst says to use "Originally-by:"

Thanks for the tip. Will update.
>
>
>> diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
>> index 6993f026adec9..bb454e0abbfcf 100644
>> --- a/arch/x86/mm/maccess.c
>> +++ b/arch/x86/mm/maccess.c
>> @@ -3,6 +3,8 @@
>>  #include <linux/uaccess.h>
>>  #include <linux/kernel.h>
>>  
>> +#include "mm_internal.h"
>> +
>>  #ifdef CONFIG_X86_64
>>  bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
>>  {
>> @@ -15,6 +17,10 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
>>  	if (vaddr < TASK_SIZE_MAX + PAGE_SIZE)
>>  		return false;
>>  
>> +	/* vsyscall page is also considered as userspace address. */
> A bit more explanation about why this should happen might be useful.
>
>> +	if (is_vsyscall_vaddr(vaddr))
>> +		return false;
>> +
>>  	/*
>>  	 * Allow everything during early boot before 'x86_virt_bits'
>>  	 * is initialized.  Needed for instruction decoding in early