From: Miaohe Lin <linmiaohe@huawei.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH] mm/slub: remove unneeded return value of slab_pad_check
Date: Tue, 19 Apr 2022 20:03:52 +0800
Message-ID: <20220419120352.37825-1-linmiaohe@huawei.com>
List-ID: <linux-kernel.vger.kernel.org>

The return value of slab_pad_check() is never used, so make it return
void.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slub.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 6dc703488d30..1f699ddfff7f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1017,7 +1017,7 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
 }
 
 /* Check the pad bytes at the end of a slab page */
-static int slab_pad_check(struct kmem_cache *s, struct slab *slab)
+static void slab_pad_check(struct kmem_cache *s, struct slab *slab)
 {
 	u8 *start;
 	u8 *fault;
@@ -1027,21 +1027,21 @@ static int slab_pad_check(struct kmem_cache *s, struct slab *slab)
 	int remainder;
 
 	if (!(s->flags & SLAB_POISON))
-		return 1;
+		return;
 
 	start = slab_address(slab);
 	length = slab_size(slab);
 	end = start + length;
 	remainder = length % s->size;
 	if (!remainder)
-		return 1;
+		return;
 
 	pad = end - remainder;
 	metadata_access_enable();
 	fault = memchr_inv(kasan_reset_tag(pad), POISON_INUSE, remainder);
 	metadata_access_disable();
 	if (!fault)
-		return 1;
+		return;
 	while (end > fault && end[-1] == POISON_INUSE)
 		end--;
 
@@ -1050,7 +1050,6 @@ static int slab_pad_check(struct kmem_cache *s, struct slab *slab)
 	print_section(KERN_ERR, "Padding ", pad, remainder);
 
 	restore_bytes(s, "slab padding", POISON_INUSE, fault, end);
-	return 0;
 }
 
 static int check_object(struct kmem_cache *s, struct slab *slab,
@@ -1642,8 +1641,7 @@ static inline int free_debug_processing(
 	void *head, void *tail, int bulk_cnt,
 	unsigned long addr) { return 0; }
 
-static inline int slab_pad_check(struct kmem_cache *s, struct slab *slab)
-			{ return 1; }
+static inline void slab_pad_check(struct kmem_cache *s, struct slab *slab) {}
 static inline int check_object(struct kmem_cache *s, struct slab *slab,
 			void *object, u8 val) { return 1; }
 static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
-- 
2.23.0