From: andrey.konovalov@linux.dev
To: Marco Elver, Alexander Potapenko
Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin,
	kasan-dev@googlegroups.com, Evgenii Stepanov, Andrew Morton,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH RFC 07/20] kasan: introduce kasan_mempool_unpoison_pages
Date: Mon, 6 Nov 2023 21:10:16 +0100
Message-Id: <573ab13b08f2e13d8add349c3a3900bcb7d79680.1699297309.git.andreyknvl@google.com>
From: Andrey Konovalov

Introduce and document a new kasan_mempool_unpoison_pages hook to be
used by the mempool code instead of kasan_unpoison_pages.

This hook is not functionally different from kasan_unpoison_pages, but
using it improves the mempool code readability.

Signed-off-by: Andrey Konovalov
---
 include/linux/kasan.h | 25 +++++++++++++++++++++++++
 mm/kasan/common.c     |  6 ++++++
 2 files changed, 31 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index de2a695ad34d..f8ebde384bd7 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -225,6 +225,9 @@ bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
  * This function is similar to kasan_mempool_poison_object() but operates on
  * page allocations.
  *
+ * Before the poisoned allocation can be reused, it must be unpoisoned via
+ * kasan_mempool_unpoison_pages().
+ *
  * Return: true if the allocation can be safely reused; false otherwise.
  */
 static __always_inline bool kasan_mempool_poison_pages(struct page *page,
@@ -235,6 +238,27 @@ static __always_inline bool kasan_mempool_poison_pages(struct page *page,
 	return true;
 }
 
+void __kasan_mempool_unpoison_pages(struct page *page, unsigned int order,
+				    unsigned long ip);
+/**
+ * kasan_mempool_unpoison_pages - Unpoison a mempool page allocation.
+ * @page: Pointer to the page allocation.
+ * @order: Order of the allocation.
+ *
+ * This function is intended for kernel subsystems that cache page allocations
+ * to reuse them instead of freeing them back to page_alloc (e.g. mempool).
+ *
+ * This function unpoisons a page allocation that was previously poisoned by
+ * kasan_mempool_poison_pages() without zeroing the allocation's memory. For
+ * the tag-based modes, this function assigns a new tag to the allocation.
+ */
+static __always_inline void kasan_mempool_unpoison_pages(struct page *page,
+							 unsigned int order)
+{
+	if (kasan_enabled())
+		__kasan_mempool_unpoison_pages(page, order, _RET_IP_);
+}
+
 bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
 /**
  * kasan_mempool_poison_object - Check and poison a mempool slab allocation.
@@ -353,6 +377,7 @@ static inline bool kasan_mempool_poison_pages(struct page *page, unsigned int or
 {
 	return true;
 }
+static inline void kasan_mempool_unpoison_pages(struct page *page, unsigned int order) {}
 static inline bool kasan_mempool_poison_object(void *ptr)
 {
 	return true;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 9ccc78b20cf2..6283f0206ef6 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -439,6 +439,12 @@ bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
 	return true;
 }
 
+void __kasan_mempool_unpoison_pages(struct page *page, unsigned int order,
+				    unsigned long ip)
+{
+	__kasan_unpoison_pages(page, order, false);
+}
+
 bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
 {
 	struct folio *folio;
-- 
2.25.1