From: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
To: hca@linux.ibm.com, christophe.leroy@csgroup.eu, andreyknvl@gmail.com,
	agordeev@linux.ibm.com, akpm@linux-foundation.org
Cc: ryabinin.a.a@gmail.com, glider@google.com, dvyukov@google.com,
	kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
	loongarch@lists.linux.dev, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-um@lists.infradead.org, linux-mm@kvack.org, snovitoll@gmail.com
Subject: [PATCH v3 12/12] kasan: add shadow checks to wrappers and rename kasan_arch_is_ready
Date: Thu, 17 Jul 2025 19:27:32 +0500
Message-Id: <20250717142732.292822-13-snovitoll@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250717142732.292822-1-snovitoll@gmail.com>
References: <20250717142732.292822-1-snovitoll@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This patch completes the conversion:

1. Adding kasan_shadow_initialized() checks to the existing wrapper
   functions
2. Replacing kasan_arch_is_ready() calls with kasan_shadow_initialized()
3. Creating wrapper functions for the internal functions that need shadow
   readiness checks
4. Removing the kasan_arch_is_ready() fallback definition

The two-level approach is now fully implemented:

- kasan_enabled() controls whether KASAN is enabled at all
  (compile-time for most archs)
- kasan_shadow_initialized() tracks shadow memory initialization
  (a static key for ARCH_DEFER_KASAN archs, compile-time for others)

This eliminates all kasan_arch_is_ready() calls from the KASAN
implementation and moves the shadow readiness logic into the wrapper
functions.
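For reference, the two guards this relies on are expected to look roughly
as follows. This is a minimal sketch, not the actual definitions from the
earlier patches in this series: it assumes the ARCH_DEFER_KASAN static-key
plumbing those patches introduce, and the key name kasan_flag_shadow_ready
is illustrative:

#ifdef CONFIG_ARCH_DEFER_KASAN
/*
 * Shadow memory comes up late on these archs (from the arch's
 * kasan_init()), so the readiness check is a runtime static key.
 */
DECLARE_STATIC_KEY_FALSE(kasan_flag_shadow_ready);	/* illustrative name */

static __always_inline bool kasan_shadow_initialized(void)
{
	return static_branch_likely(&kasan_flag_shadow_ready);
}
#else
/* Shadow is set up during early boot; the check folds away at compile time. */
static __always_inline bool kasan_shadow_initialized(void)
{
	return true;
}
#endif

An arch selecting ARCH_DEFER_KASAN would then flip the key once its shadow
mapping is in place, e.g. static_branch_enable(&kasan_flag_shadow_ready)
at the end of its kasan_init().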
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
Changes in v3:
- Addressed Andrey's feedback by moving the shadow checks into the wrappers
- Renamed kasan_arch_is_ready() to kasan_shadow_initialized()
- Added kasan_shadow_initialized() checks to all necessary wrapper functions
- Eliminated all remaining kasan_arch_is_ready() usage per reviewer guidance
---
 include/linux/kasan.h | 36 +++++++++++++++++++++++++++---------
 mm/kasan/common.c     |  9 +++------
 mm/kasan/generic.c    | 12 +++---------
 mm/kasan/kasan.h      | 36 ++++++++++++++++++++++++++----------
 mm/kasan/shadow.c     | 32 +++++++-------------------------
 5 files changed, 66 insertions(+), 59 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 51a8293d1af..292bd741d8d 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -194,7 +194,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *s, void *object,
 static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
 						void *object)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_slab_pre_free(s, object, _RET_IP_);
 	return false;
 }
@@ -229,7 +229,7 @@ static __always_inline bool kasan_slab_free(struct kmem_cache *s,
 					     void *object, bool init,
 					     bool still_accessible)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_slab_free(s, object, init, still_accessible);
 	return false;
 }
@@ -237,7 +237,7 @@ static __always_inline bool kasan_slab_free(struct kmem_cache *s,
 void __kasan_kfree_large(void *ptr, unsigned long ip);
 static __always_inline void kasan_kfree_large(void *ptr)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		__kasan_kfree_large(ptr, _RET_IP_);
 }
 
@@ -302,7 +302,7 @@ bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
 static __always_inline bool kasan_mempool_poison_pages(struct page *page,
 						       unsigned int order)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_mempool_poison_pages(page, order, _RET_IP_);
 	return true;
 }
@@ -356,7 +356,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
  */
 static __always_inline bool kasan_mempool_poison_object(void *ptr)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_mempool_poison_object(ptr, _RET_IP_);
 	return true;
 }
@@ -568,11 +568,29 @@ static inline void kasan_init_hw_tags(void) { }
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 
 void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
-int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
-void kasan_release_vmalloc(unsigned long start, unsigned long end,
+
+int __kasan_populate_vmalloc(unsigned long addr, unsigned long size);
+static inline int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+{
+	if (!kasan_shadow_initialized())
+		return 0;
+	return __kasan_populate_vmalloc(addr, size);
+}
+
+void __kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
 			   unsigned long free_region_end,
 			   unsigned long flags);
+static inline void kasan_release_vmalloc(unsigned long start,
+					 unsigned long end,
+					 unsigned long free_region_start,
+					 unsigned long free_region_end,
+					 unsigned long flags)
+{
+	if (kasan_shadow_initialized())
+		__kasan_release_vmalloc(start, end, free_region_start,
+					free_region_end, flags);
+}
 
 #else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
@@ -598,7 +616,7 @@ static __always_inline void *kasan_unpoison_vmalloc(const void *start,
 						     unsigned long size,
 						     kasan_vmalloc_flags_t flags)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_unpoison_vmalloc(start, size, flags);
 	return (void *)start;
 }
@@ -607,7 +625,7 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size);
 static __always_inline void kasan_poison_vmalloc(const void *start,
 						 unsigned long size)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		__kasan_poison_vmalloc(start, size);
 }
 
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index c3a6446404d..b561734767d 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -259,7 +259,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
 bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
 				unsigned long ip)
 {
-	if (!kasan_arch_is_ready() || is_kfence_address(object))
+	if (is_kfence_address(object))
 		return false;
 	return check_slab_allocation(cache, object, ip);
 }
@@ -267,7 +267,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
 bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
 		       bool still_accessible)
 {
-	if (!kasan_arch_is_ready() || is_kfence_address(object))
+	if (is_kfence_address(object))
 		return false;
 
 	poison_slab_object(cache, object, init, still_accessible);
@@ -291,9 +291,6 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
 
 static inline bool check_page_allocation(void *ptr, unsigned long ip)
 {
-	if (!kasan_arch_is_ready())
-		return false;
-
 	if (ptr != page_address(virt_to_head_page(ptr))) {
 		kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE);
 		return true;
@@ -520,7 +517,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
 		return true;
 	}
 
-	if (is_kfence_address(ptr) || !kasan_arch_is_ready())
+	if (is_kfence_address(ptr))
 		return true;
 
 	slab = folio_slab(folio);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 03b6d322ff6..1d20b925b9d 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -176,7 +176,7 @@ static __always_inline bool check_region_inline(const void *addr,
 						size_t size, bool write,
 						unsigned long ret_ip)
 {
-	if (!kasan_arch_is_ready())
+	if (!kasan_shadow_initialized())
 		return true;
 
 	if (unlikely(size == 0))
@@ -200,13 +200,10 @@ bool kasan_check_range(const void *addr, size_t size, bool write,
 	return check_region_inline(addr, size, write, ret_ip);
 }
 
-bool kasan_byte_accessible(const void *addr)
+bool __kasan_byte_accessible(const void *addr)
 {
 	s8 shadow_byte;
 
-	if (!kasan_arch_is_ready())
-		return true;
-
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(addr));
 
 	return shadow_byte >= 0 && shadow_byte < KASAN_GRANULE_SIZE;
@@ -506,9 +503,6 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)
 
 static void release_free_meta(const void *object, struct kasan_free_meta *meta)
 {
-	if (!kasan_arch_is_ready())
-		return;
-
 	/* Check if free meta is valid. */
 	if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREE_META)
 		return;
@@ -573,7 +567,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 	kasan_save_track(&alloc_meta->alloc_track, flags);
 }
 
-void kasan_save_free_info(struct kmem_cache *cache, void *object)
+void __kasan_save_free_info(struct kmem_cache *cache, void *object)
 {
 	struct kasan_free_meta *free_meta;
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e6..67a0a1095d2 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -398,7 +398,13 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, depot_flags_t depot_flags);
 void kasan_set_track(struct kasan_track *track, depot_stack_handle_t stack);
 void kasan_save_track(struct kasan_track *track, gfp_t flags);
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
-void kasan_save_free_info(struct kmem_cache *cache, void *object);
+
+void __kasan_save_free_info(struct kmem_cache *cache, void *object);
+static inline void kasan_save_free_info(struct kmem_cache *cache, void *object)
+{
+	if (kasan_enabled() && kasan_shadow_initialized())
+		__kasan_save_free_info(cache, object);
+}
 
 #ifdef CONFIG_KASAN_GENERIC
 bool kasan_quarantine_put(struct kmem_cache *cache, void *object);
@@ -499,6 +505,7 @@ static inline bool kasan_byte_accessible(const void *addr)
 
 #else /* CONFIG_KASAN_HW_TAGS */
 
+void __kasan_poison(const void *addr, size_t size, u8 value, bool init);
 /**
  * kasan_poison - mark the memory range as inaccessible
  * @addr: range start address, must be aligned to KASAN_GRANULE_SIZE
@@ -506,7 +513,11 @@ static inline bool kasan_byte_accessible(const void *addr)
  * @value: value that's written to metadata for the range
  * @init: whether to initialize the memory range (only for hardware tag-based)
  */
-void kasan_poison(const void *addr, size_t size, u8 value, bool init);
+static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
+{
+	if (kasan_shadow_initialized())
+		__kasan_poison(addr, size, value, init);
+}
 
 /**
  * kasan_unpoison - mark the memory range as accessible
@@ -521,12 +532,19 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init);
  */
 void kasan_unpoison(const void *addr, size_t size, bool init);
 
-bool kasan_byte_accessible(const void *addr);
+bool __kasan_byte_accessible(const void *addr);
+static inline bool kasan_byte_accessible(const void *addr)
+{
+	if (!kasan_shadow_initialized())
+		return true;
+	return __kasan_byte_accessible(addr);
+}
 
 #endif /* CONFIG_KASAN_HW_TAGS */
 
 #ifdef CONFIG_KASAN_GENERIC
 
+void __kasan_poison_last_granule(const void *address, size_t size);
 /**
  * kasan_poison_last_granule - mark the last granule of the memory range as
  * inaccessible
@@ -536,7 +554,11 @@ bool kasan_byte_accessible(const void *addr);
  * This function is only available for the generic mode, as it's the only mode
  * that has partially poisoned memory granules.
  */
-void kasan_poison_last_granule(const void *address, size_t size);
+static inline void kasan_poison_last_granule(const void *address, size_t size)
+{
+	if (kasan_shadow_initialized())
+		__kasan_poison_last_granule(address, size);
+}
 
 #else /* CONFIG_KASAN_GENERIC */
 
@@ -544,12 +566,6 @@ static inline void kasan_poison_last_granule(const void *address, size_t size) {
 
 #endif /* CONFIG_KASAN_GENERIC */
 
-#ifndef kasan_arch_is_ready
-static inline bool kasan_arch_is_ready(void) { return true; }
-#elif !defined(CONFIG_KASAN_GENERIC) || !defined(CONFIG_KASAN_OUTLINE)
-#error kasan_arch_is_ready only works in KASAN generic outline mode!
-#endif
-
 #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
 
 void kasan_kunit_test_suite_start(void);
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb..90c508cad63 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -121,13 +121,10 @@ void *__hwasan_memcpy(void *dest, const void *src, ssize_t len) __alias(__asan_m
 EXPORT_SYMBOL(__hwasan_memcpy);
 #endif
 
-void kasan_poison(const void *addr, size_t size, u8 value, bool init)
+void __kasan_poison(const void *addr, size_t size, u8 value, bool init)
 {
 	void *shadow_start, *shadow_end;
 
-	if (!kasan_arch_is_ready())
-		return;
-
 	/*
 	 * Perform shadow offset calculation based on untagged address, as
 	 * some of the callers (e.g. kasan_poison_new_object) pass tagged
@@ -145,14 +142,11 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 
 	__memset(shadow_start, value, shadow_end - shadow_start);
 }
-EXPORT_SYMBOL_GPL(kasan_poison);
+EXPORT_SYMBOL_GPL(__kasan_poison);
 
 #ifdef CONFIG_KASAN_GENERIC
-void kasan_poison_last_granule(const void *addr, size_t size)
+void __kasan_poison_last_granule(const void *addr, size_t size)
 {
-	if (!kasan_arch_is_ready())
-		return;
-
 	if (size & KASAN_GRANULE_MASK) {
 		u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size);
 		*shadow = size & KASAN_GRANULE_MASK;
@@ -353,7 +347,7 @@ static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
 	return 0;
 }
 
-static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
+static int __kasan_populate_vmalloc_do(unsigned long start, unsigned long end)
 {
 	unsigned long nr_pages, nr_total = PFN_UP(end - start);
 	struct vmalloc_populate_data data;
@@ -385,14 +379,11 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
 	return ret;
 }
 
-int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+int __kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 {
 	unsigned long shadow_start, shadow_end;
 	int ret;
 
-	if (!kasan_arch_is_ready())
-		return 0;
-
 	if (!is_vmalloc_or_module_addr((void *)addr))
 		return 0;
 
@@ -414,7 +405,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 	shadow_start = PAGE_ALIGN_DOWN(shadow_start);
 	shadow_end = PAGE_ALIGN(shadow_end);
 
-	ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
+	ret = __kasan_populate_vmalloc_do(shadow_start, shadow_end);
 	if (ret)
 		return ret;
 
@@ -551,7 +542,7 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
  * pages entirely covered by the free region, we will not run in to any
  * trouble - any simultaneous allocations will be for disjoint regions.
  */
-void kasan_release_vmalloc(unsigned long start, unsigned long end,
+void __kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
 			   unsigned long free_region_end,
 			   unsigned long flags)
@@ -560,9 +551,6 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
 	unsigned long region_start, region_end;
 	unsigned long size;
 
-	if (!kasan_arch_is_ready())
-		return;
-
 	region_start = ALIGN(start, KASAN_MEMORY_PER_SHADOW_PAGE);
 	region_end = ALIGN_DOWN(end, KASAN_MEMORY_PER_SHADOW_PAGE);
 
@@ -611,9 +599,6 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
 	 * with setting memory tags, so the KASAN_VMALLOC_INIT flag is ignored.
 	 */
 
-	if (!kasan_arch_is_ready())
-		return (void *)start;
-
 	if (!is_vmalloc_or_module_addr(start))
 		return (void *)start;
 
@@ -636,9 +621,6 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
  */
 void __kasan_poison_vmalloc(const void *start, unsigned long size)
 {
-	if (!kasan_arch_is_ready())
-		return;
-
 	if (!is_vmalloc_or_module_addr(start))
 		return;
 
-- 
2.34.1