[PATCH v1 2/2] kho: fix KASAN support for restored vmalloc regions

Pasha Tatashin posted 2 patches 1 month, 1 week ago
Restored vmalloc regions are currently not properly marked for KASAN,
causing KASAN to treat accesses to these regions as out-of-bounds.

Fix this by properly unpoisoning the restored vmalloc area using
kasan_unpoison_vmalloc(). This requires setting the VM_UNINITIALIZED
flag during the initial area allocation and clearing it after the pages
have been mapped and unpoisoned, using the clear_vm_uninitialized_flag()
helper.

Reported-by: Pratyush Yadav <pratyush@kernel.org>
Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 kernel/liveupdate/kexec_handover.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 410098bae0bf..747a35107c84 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -14,6 +14,7 @@
 #include <linux/cma.h>
 #include <linux/kmemleak.h>
 #include <linux/count_zeros.h>
+#include <linux/kasan.h>
 #include <linux/kexec.h>
 #include <linux/kexec_handover.h>
 #include <linux/kho_radix_tree.h>
@@ -1077,6 +1078,7 @@ EXPORT_SYMBOL_GPL(kho_unpreserve_vmalloc);
 void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
 {
 	struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(preservation->first);
+	kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_PROT_NORMAL;
 	unsigned int align, order, shift, vm_flags;
 	unsigned long total_pages, contig_pages;
 	unsigned long addr, size;
@@ -1128,7 +1130,8 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
 		goto err_free_pages_array;
 
 	area = __get_vm_area_node(total_pages * PAGE_SIZE, align, shift,
-				  vm_flags, VMALLOC_START, VMALLOC_END,
+				  vm_flags | VM_UNINITIALIZED,
+				  VMALLOC_START, VMALLOC_END,
 				  NUMA_NO_NODE, GFP_KERNEL,
 				  __builtin_return_address(0));
 	if (!area)
@@ -1143,6 +1146,13 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
 	area->nr_pages = total_pages;
 	area->pages = pages;
 
+	if (vm_flags & VM_ALLOC)
+		kasan_flags |= KASAN_VMALLOC_VM_ALLOC;
+
+	area->addr = kasan_unpoison_vmalloc(area->addr, total_pages * PAGE_SIZE,
+					    kasan_flags);
+	clear_vm_uninitialized_flag(area);
+
 	return area->addr;
 
 err_free_vm_area:
-- 
2.53.0.414.gf7e9f6c205-goog
Re: [PATCH v1 2/2] kho: fix KASAN support for restored vmalloc regions
Posted by Pratyush Yadav 1 month, 1 week ago
Hi Pasha,

On Wed, Feb 25 2026, Pasha Tatashin wrote:

> Restored vmalloc regions are currently not properly marked for KASAN,
> causing KASAN to treat accesses to these regions as out-of-bounds.
>
> Fix this by properly unpoisoning the restored vmalloc area using
> kasan_unpoison_vmalloc(). This requires setting the VM_UNINITIALIZED
> flag during the initial area allocation and clearing it after the pages
> have been mapped and unpoisoned, using the clear_vm_uninitialized_flag()
> helper.
>
> Reported-by: Pratyush Yadav <pratyush@kernel.org>
> Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  kernel/liveupdate/kexec_handover.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 410098bae0bf..747a35107c84 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -14,6 +14,7 @@
>  #include <linux/cma.h>
>  #include <linux/kmemleak.h>
>  #include <linux/count_zeros.h>
> +#include <linux/kasan.h>
>  #include <linux/kexec.h>
>  #include <linux/kexec_handover.h>
>  #include <linux/kho_radix_tree.h>
> @@ -1077,6 +1078,7 @@ EXPORT_SYMBOL_GPL(kho_unpreserve_vmalloc);
>  void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  {
>  	struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(preservation->first);
> +	kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_PROT_NORMAL;
>  	unsigned int align, order, shift, vm_flags;
>  	unsigned long total_pages, contig_pages;
>  	unsigned long addr, size;
> @@ -1128,7 +1130,8 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  		goto err_free_pages_array;
>  
>  	area = __get_vm_area_node(total_pages * PAGE_SIZE, align, shift,
> -				  vm_flags, VMALLOC_START, VMALLOC_END,
> +				  vm_flags | VM_UNINITIALIZED,
> +				  VMALLOC_START, VMALLOC_END,
>  				  NUMA_NO_NODE, GFP_KERNEL,
>  				  __builtin_return_address(0));
>  	if (!area)
> @@ -1143,6 +1146,13 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  	area->nr_pages = total_pages;
>  	area->pages = pages;
>  
> +	if (vm_flags & VM_ALLOC)
> +		kasan_flags |= KASAN_VMALLOC_VM_ALLOC;
> +
> +	area->addr = kasan_unpoison_vmalloc(area->addr, total_pages * PAGE_SIZE,
> +					    kasan_flags);

Ugh, this is tricky. Say I do vmalloc(sizeof(unsigned long)). After KHO,
this would unpoison the whole page, effectively missing all
out-of-bounds accesses within that page.

We need to either store the buffer size in struct kho_vmalloc, or only
allow preserving PAGE_SIZE-aligned allocations, or just live with this
missed coverage. I kind of prefer the second option, but no strong
opinions.

Anyway, I think this is a clear improvement regardless of this problem.
So,

Reviewed-by: Pratyush Yadav (Google) <pratyush@kernel.org>
Tested-by: Pratyush Yadav (Google) <pratyush@kernel.org>

Thanks for fixing it.

> +	clear_vm_uninitialized_flag(area);
> +
>  	return area->addr;
>  
>  err_free_vm_area:

-- 
Regards,
Pratyush Yadav