From: Qi Zheng
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org, yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Qi Zheng, Muchun Song, Alexander Viro
Subject: [PATCH v5 29/45] mbcache: dynamically allocate the mbcache shrinker
Date: Thu, 24 Aug 2023 11:42:48 +0800
Message-Id: <20230824034304.37411-30-zhengqi.arch@bytedance.com>
In-Reply-To: <20230824034304.37411-1-zhengqi.arch@bytedance.com>
References: <20230824034304.37411-1-zhengqi.arch@bytedance.com>

In preparation for implementing lockless slab shrink, use the new APIs to
dynamically allocate the mbcache shrinker, so that it can be freed
asynchronously via RCU. Then we don't need to wait for the RCU read-side
critical section when releasing the struct mb_cache.

Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
CC: Alexander Viro
CC: Christian Brauner
---
 fs/mbcache.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index 2a4b8b549e93..0d1e24e9a5e3 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -37,7 +37,7 @@ struct mb_cache {
 	struct list_head	c_list;
 	/* Number of entries in cache */
 	unsigned long		c_entry_count;
-	struct shrinker		c_shrink;
+	struct shrinker		*c_shrink;
 	/* Work for shrinking when the cache has too many entries */
 	struct work_struct	c_shrink_work;
 };
@@ -293,8 +293,7 @@ EXPORT_SYMBOL(mb_cache_entry_touch);
 static unsigned long mb_cache_count(struct shrinker *shrink,
 				    struct shrink_control *sc)
 {
-	struct mb_cache *cache = container_of(shrink, struct mb_cache,
-					      c_shrink);
+	struct mb_cache *cache = shrink->private_data;
 
 	return cache->c_entry_count;
 }
@@ -333,8 +332,7 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
 static unsigned long mb_cache_scan(struct shrinker *shrink,
 				   struct shrink_control *sc)
 {
-	struct mb_cache *cache = container_of(shrink, struct mb_cache,
-					      c_shrink);
+	struct mb_cache *cache = shrink->private_data;
 	return mb_cache_shrink(cache, sc->nr_to_scan);
 }
 
@@ -377,15 +375,20 @@ struct mb_cache *mb_cache_create(int bucket_bits)
 	for (i = 0; i < bucket_count; i++)
 		INIT_HLIST_BL_HEAD(&cache->c_hash[i]);
 
-	cache->c_shrink.count_objects = mb_cache_count;
-	cache->c_shrink.scan_objects = mb_cache_scan;
-	cache->c_shrink.seeks = DEFAULT_SEEKS;
-	if (register_shrinker(&cache->c_shrink, "mbcache-shrinker")) {
+	cache->c_shrink = shrinker_alloc(0, "mbcache-shrinker");
+	if (!cache->c_shrink) {
 		kfree(cache->c_hash);
 		kfree(cache);
 		goto err_out;
 	}
 
+	cache->c_shrink->count_objects = mb_cache_count;
+	cache->c_shrink->scan_objects = mb_cache_scan;
+	cache->c_shrink->seeks = DEFAULT_SEEKS;
+	cache->c_shrink->private_data = cache;
+
+	shrinker_register(cache->c_shrink);
+
 	INIT_WORK(&cache->c_shrink_work, mb_cache_shrink_worker);
 
 	return cache;
@@ -406,7 +409,7 @@ void mb_cache_destroy(struct mb_cache *cache)
 {
 	struct mb_cache_entry *entry, *next;
 
-	unregister_shrinker(&cache->c_shrink);
+	shrinker_free(cache->c_shrink);
 
 	/*
 	 * We don't bother with any locking. Cache must not be used at this
-- 
2.30.2