From nobody Sun Oct 5 03:40:27 2025
From: Yu Kuai
To: axboe@kernel.dk, akpm@linux-foundation.org, jack@suse.cz,
	bvanassche@acm.org, yang.yang@vivo.com, dlemoal@kernel.org,
	ming.lei@redhat.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
	yangerkun@huawei.com, johnny.chenyi@huawei.com
Subject: [patch v4 1/2] lib/sbitmap: convert shallow_depth from one word to
 the whole sbitmap
Date: Thu, 7 Aug 2025 11:24:12 +0800
Message-Id: <20250807032413.1469456-2-yukuai1@huaweicloud.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20250807032413.1469456-1-yukuai1@huaweicloud.com>
References: <20250807032413.1469456-1-yukuai1@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Yu Kuai

Elevators currently record an internal 'async_depth' to throttle
asynchronous requests, and they calculate shallow_depth based on
sb->shift, on the assumption that a word always holds 1 << sb->shift
available tags. However, this does not hold for the last word, see
__map_depth():

	if (index == sb->map_nr - 1)
		return sb->depth - (index << sb->shift);

As a consequence, more tags than expected can be allocated when the last
word is in use. For example, assume nr_requests=256 with four 64-bit
words; in the worst case, if the user then sets nr_requests=32, the
first word becomes the last word with only 32 usable bits, yet
async_depth is still calculated from the full 64 bits per word, which is
wrong.

On the other hand, due to cgroup QoS, bfq may allow only one request to
be allocated, but setting shallow_depth=1 still allows one request per
word to be allocated.

Fix these problems by making shallow_depth apply to the whole sbitmap
instead of a single word, and convert kyber, mq-deadline and bfq to
follow this. A new helper __map_depth_with_shallow() is introduced to
calculate the available bits in each word.

Signed-off-by: Yu Kuai
---
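A rough illustration of the problem described above, with the same
numbers (not part of the patch; the 50% async limit is only an example
value):

	sbitmap sized for nr_requests=256, sb->shift=6, later resized to 32:
	  old: limit computed from 1 << sb->shift = 64  -> 50% = 32 bits,
	       but the single remaining (last) word only holds 32 bits,
	       so async requests are effectively not throttled at all
	  new: limit computed from nr_requests = 32     -> 50% = 16 bits

	bfq cgroup QoS case with four 64-bit words:
	  old: shallow_depth=1 is applied per word  -> up to 4 requests
	  new: shallow_depth=1 covers the whole map -> exactly 1 request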
 block/bfq-iosched.c     | 35 ++++++++++++--------------
 block/bfq-iosched.h     |  3 +--
 block/kyber-iosched.c   |  9 ++-----
 block/mq-deadline.c     | 16 +-----------
 include/linux/sbitmap.h |  6 ++---
 lib/sbitmap.c           | 56 +++++++++++++++++++++--------------------
 6 files changed, 52 insertions(+), 73 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 0cb1e9873aab..d68da9e92e1e 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -701,17 +701,13 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 {
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
 	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
-	int depth;
-	unsigned limit = data->q->nr_requests;
-	unsigned int act_idx;
+	unsigned int limit, act_idx;
 
 	/* Sync reads have full depth available */
-	if (op_is_sync(opf) && !op_is_write(opf)) {
-		depth = 0;
-	} else {
-		depth = bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];
-		limit = (limit * depth) >> bfqd->full_depth_shift;
-	}
+	if (op_is_sync(opf) && !op_is_write(opf))
+		limit = data->q->nr_requests;
+	else
+		limit = bfqd->async_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];
 
 	for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
 		/* Fast path to check if bfqq is already allocated. */
@@ -725,14 +721,16 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 		 * available requests and thus starve other entities.
 		 */
 		if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) {
-			depth = 1;
+			limit = 1;
 			break;
 		}
 	}
+
 	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
-			__func__, bfqd->wr_busy_queues, op_is_sync(opf), depth);
-	if (depth)
-		data->shallow_depth = depth;
+			__func__, bfqd->wr_busy_queues, op_is_sync(opf), limit);
+
+	if (limit < data->q->nr_requests)
+		data->shallow_depth = limit;
 }
 
 static struct bfq_queue *
@@ -7128,9 +7126,8 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
  */
 static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
 {
-	unsigned int depth = 1U << bt->sb.shift;
+	unsigned int nr_requests = bfqd->queue->nr_requests;
 
-	bfqd->full_depth_shift = bt->sb.shift;
 	/*
 	 * In-word depths if no bfq_queue is being weight-raised:
 	 * leaving 25% of tags only for sync reads.
@@ -7142,13 +7139,13 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
 	 * limit 'something'.
 	 */
 	/* no more than 50% of tags for async I/O */
-	bfqd->word_depths[0][0] = max(depth >> 1, 1U);
+	bfqd->async_depths[0][0] = max(nr_requests >> 1, 1U);
 	/*
 	 * no more than 75% of tags for sync writes (25% extra tags
 	 * w.r.t. async I/O, to prevent async I/O from starving sync
 	 * writes)
 	 */
-	bfqd->word_depths[0][1] = max((depth * 3) >> 2, 1U);
+	bfqd->async_depths[0][1] = max((nr_requests * 3) >> 2, 1U);
 
 	/*
 	 * In-word depths in case some bfq_queue is being weight-
@@ -7158,9 +7155,9 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
 	 * shortage.
 	 */
 	/* no more than ~18% of tags for async I/O */
-	bfqd->word_depths[1][0] = max((depth * 3) >> 4, 1U);
+	bfqd->async_depths[1][0] = max((nr_requests * 3) >> 4, 1U);
 	/* no more than ~37% of tags for sync writes (~20% extra tags) */
-	bfqd->word_depths[1][1] = max((depth * 6) >> 4, 1U);
+	bfqd->async_depths[1][1] = max((nr_requests * 6) >> 4, 1U);
 }
 
 static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx)
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 687a3a7ba784..31217f196f4f 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -813,8 +813,7 @@ struct bfq_data {
 	 * Depth limits used in bfq_limit_depth (see comments on the
	 * function)
 	 */
-	unsigned int word_depths[2][2];
-	unsigned int full_depth_shift;
+	unsigned int async_depths[2][2];
 
 	/*
 	 * Number of independent actuators. This is equal to 1 in
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 4dba8405bd01..bfd9a40bb33d 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -157,10 +157,7 @@ struct kyber_queue_data {
 	 */
 	struct sbitmap_queue domain_tokens[KYBER_NUM_DOMAINS];
 
-	/*
-	 * Async request percentage, converted to per-word depth for
-	 * sbitmap_get_shallow().
-	 */
+	/* Number of allowed async requests. */
 	unsigned int async_depth;
 
 	struct kyber_cpu_latency __percpu *cpu_latency;
@@ -454,10 +451,8 @@ static void kyber_depth_updated(struct blk_mq_hw_ctx *hctx)
 {
 	struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
-	unsigned int shift = tags->bitmap_tags.sb.shift;
-
-	kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
 
+	kqd->async_depth = hctx->queue->nr_requests * KYBER_ASYNC_PERCENT / 100U;
 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, kqd->async_depth);
 }
 
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 2edf1cac06d5..9ab6c6256695 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -487,20 +487,6 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	return rq;
 }
 
-/*
- * 'depth' is a number in the range 1..INT_MAX representing a number of
- * requests. Scale it with a factor (1 << bt->sb.shift) / q->nr_requests since
- * 1..(1 << bt->sb.shift) is the range expected by sbitmap_get_shallow().
- * Values larger than q->nr_requests have the same effect as q->nr_requests.
- */
-static int dd_to_word_depth(struct blk_mq_hw_ctx *hctx, unsigned int qdepth)
-{
-	struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags;
-	const unsigned int nrr = hctx->queue->nr_requests;
-
-	return ((qdepth << bt->sb.shift) + nrr - 1) / nrr;
-}
-
 /*
  * Called by __blk_mq_alloc_request(). The shallow_depth value set by this
  * function is used by __blk_mq_get_tag().
@@ -517,7 +503,7 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 	 * Throttle asynchronous requests and writes such that these requests
 	 * do not block the allocation of synchronous requests.
 	 */
-	data->shallow_depth = dd_to_word_depth(data->hctx, dd->async_depth);
+	data->shallow_depth = dd->async_depth;
 }
 
 /* Called by blk_mq_update_nr_requests(). */
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 189140bf11fc..4adf4b364fcd 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -213,12 +213,12 @@ int sbitmap_get(struct sbitmap *sb);
  * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
  * limiting the depth used from each word.
  * @sb: Bitmap to allocate from.
- * @shallow_depth: The maximum number of bits to allocate from a single word.
+ * @shallow_depth: The maximum number of bits to allocate from the bitmap.
  *
  * This rather specific operation allows for having multiple users with
  * different allocation limits. E.g., there can be a high-priority class that
  * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
- * with a @shallow_depth of (1 << (@sb->shift - 1)). Then, the low-priority
+ * with a @shallow_depth of (sb->depth >> 1). Then, the low-priority
  * class can only allocate half of the total bits in the bitmap, preventing it
  * from starving out the high-priority class.
  *
@@ -478,7 +478,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
  * sbitmap_queue, limiting the depth used from each word, with preemption
  * already disabled.
  * @sbq: Bitmap queue to allocate from.
- * @shallow_depth: The maximum number of bits to allocate from a single word.
+ * @shallow_depth: The maximum number of bits to allocate from the queue.
  * See sbitmap_get_shallow().
  *
  * If you call this, make sure to call sbitmap_queue_min_shallow_depth() after
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index d3412984170c..c07e3cd82e29 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -208,8 +208,28 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
 	return nr;
 }
 
+static unsigned int __map_depth_with_shallow(const struct sbitmap *sb,
+					     int index,
+					     unsigned int shallow_depth)
+{
+	u64 shallow_word_depth;
+	unsigned int word_depth, reminder;
+
+	word_depth = __map_depth(sb, index);
+	if (shallow_depth >= sb->depth)
+		return word_depth;
+
+	shallow_word_depth = word_depth * shallow_depth;
+	reminder = do_div(shallow_word_depth, sb->depth);
+
+	if (reminder >= (index + 1) * word_depth)
+		shallow_word_depth++;
+
+	return (unsigned int)shallow_word_depth;
+}
+
 static int sbitmap_find_bit(struct sbitmap *sb,
-			    unsigned int depth,
+			    unsigned int shallow_depth,
 			    unsigned int index,
 			    unsigned int alloc_hint,
 			    bool wrap)
@@ -218,12 +238,12 @@ static int sbitmap_find_bit(struct sbitmap *sb,
 	int nr = -1;
 
 	for (i = 0; i < sb->map_nr; i++) {
-		nr = sbitmap_find_bit_in_word(&sb->map[index],
-					      min_t(unsigned int,
-						    __map_depth(sb, index),
-						    depth),
-					      alloc_hint, wrap);
+		unsigned int depth = __map_depth_with_shallow(sb, index,
+							      shallow_depth);
 
+		if (depth)
+			nr = sbitmap_find_bit_in_word(&sb->map[index], depth,
+						      alloc_hint, wrap);
 		if (nr != -1) {
 			nr += index << sb->shift;
 			break;
@@ -406,27 +426,9 @@ EXPORT_SYMBOL_GPL(sbitmap_bitmap_show);
 static unsigned int sbq_calc_wake_batch(struct sbitmap_queue *sbq,
 					unsigned int depth)
 {
-	unsigned int wake_batch;
-	unsigned int shallow_depth;
-
-	/*
-	 * Each full word of the bitmap has bits_per_word bits, and there might
-	 * be a partial word. There are depth / bits_per_word full words and
-	 * depth % bits_per_word bits left over. In bitwise arithmetic:
-	 *
-	 * bits_per_word = 1 << shift
-	 * depth / bits_per_word = depth >> shift
-	 * depth % bits_per_word = depth & ((1 << shift) - 1)
-	 *
-	 * Each word can be limited to sbq->min_shallow_depth bits.
-	 */
-	shallow_depth = min(1U << sbq->sb.shift, sbq->min_shallow_depth);
-	depth = ((depth >> sbq->sb.shift) * shallow_depth +
-		 min(depth & ((1U << sbq->sb.shift) - 1), shallow_depth));
-	wake_batch = clamp_t(unsigned int, depth / SBQ_WAIT_QUEUES, 1,
-			     SBQ_WAKE_BATCH);
-
-	return wake_batch;
+	return clamp_t(unsigned int,
+		       min(depth, sbq->min_shallow_depth) / SBQ_WAIT_QUEUES,
+		       1, SBQ_WAKE_BATCH);
 }
 
 int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
-- 
2.39.2
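To see how the new helper spreads a whole-sbitmap shallow_depth across
the words, the following small userspace program mirrors
__map_depth_with_shallow() from the patch above (a sketch only: do_div()
is replaced by plain 64-bit division, and the sbitmap geometry is made
up for illustration):

	#include <stdio.h>

	/* Userspace sketch mirroring __map_depth_with_shallow(): scale the
	 * whole-bitmap shallow_depth into a per-word budget, handing the
	 * remainder out to the lowest-index words first. */
	static unsigned int map_depth_with_shallow(unsigned int depth,
						   unsigned int shift,
						   unsigned int map_nr,
						   unsigned int index,
						   unsigned int shallow_depth)
	{
		/* __map_depth(): the last word may hold fewer than 1 << shift bits */
		unsigned int word_depth = (index == map_nr - 1) ?
					  depth - (index << shift) : 1U << shift;
		unsigned long long scaled;
		unsigned int rem;

		if (shallow_depth >= depth)
			return word_depth;

		scaled = (unsigned long long)word_depth * shallow_depth;
		rem = scaled % depth;	/* do_div() in the kernel version */
		scaled /= depth;

		if (rem >= (index + 1) * word_depth)
			scaled++;

		return (unsigned int)scaled;
	}

	int main(void)
	{
		unsigned int depth = 256, shift = 6, map_nr = 4;
		unsigned int shallow[] = { 30, 1 };

		for (unsigned int s = 0; s < 2; s++) {
			unsigned int total = 0;

			for (unsigned int i = 0; i < map_nr; i++) {
				unsigned int d = map_depth_with_shallow(depth, shift,
									map_nr, i,
									shallow[s]);

				printf("shallow_depth=%2u word %u -> %2u usable bits\n",
				       shallow[s], i, d);
				total += d;
			}
			printf("total: %u\n\n", total);
		}
		return 0;
	}

With these values the per-word budgets sum to exactly shallow_depth
(8+8+7+7 = 30, and 1+0+0+0 = 1 for the bfq QoS case), which is the
behaviour the commit message asks for.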
From nobody Sun Oct 5 03:40:27 2025
From: Yu Kuai
To: axboe@kernel.dk, akpm@linux-foundation.org, jack@suse.cz,
	bvanassche@acm.org, yang.yang@vivo.com, dlemoal@kernel.org,
	ming.lei@redhat.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
	yangerkun@huawei.com, johnny.chenyi@huawei.com
Subject: [patch v4 2/2] lib/sbitmap: make sbitmap_get_shallow() internal
Date: Thu, 7 Aug 2025 11:24:13 +0800
Message-Id: <20250807032413.1469456-3-yukuai1@huaweicloud.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20250807032413.1469456-1-yukuai1@huaweicloud.com>
References: <20250807032413.1469456-1-yukuai1@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Yu Kuai

Make sbitmap_get_shallow() static and move its kernel-doc comment to the
definition, because it is only used inside lib/sbitmap.c.

Signed-off-by: Yu Kuai
Reviewed-by: Damien Le Moal
Reviewed-by: Jan Kara
Reviewed-by: Bart Van Assche
---
 include/linux/sbitmap.h | 17 -----------------
 lib/sbitmap.c           | 18 ++++++++++++++++--
 2 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 4adf4b364fcd..ffb9907c7070 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -209,23 +209,6 @@ void sbitmap_resize(struct sbitmap *sb, unsigned int depth);
  */
 int sbitmap_get(struct sbitmap *sb);
 
-/**
- * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
- * limiting the depth used from each word.
- * @sb: Bitmap to allocate from.
- * @shallow_depth: The maximum number of bits to allocate from the bitmap.
- *
- * This rather specific operation allows for having multiple users with
- * different allocation limits. E.g., there can be a high-priority class that
- * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
- * with a @shallow_depth of (sb->depth >> 1). Then, the low-priority
- * class can only allocate half of the total bits in the bitmap, preventing it
- * from starving out the high-priority class.
- *
- * Return: Non-negative allocated bit number if successful, -1 otherwise.
- */
-int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth);
-
 /**
  * sbitmap_any_bit_set() - Check for a set bit in a &struct sbitmap.
  * @sb: Bitmap to check.
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index c07e3cd82e29..4d188d05db15 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -307,7 +307,22 @@ static int __sbitmap_get_shallow(struct sbitmap *sb,
 	return sbitmap_find_bit(sb, shallow_depth, index, alloc_hint, true);
 }
 
-int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth)
+/**
+ * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
+ * limiting the depth used from each word.
+ * @sb: Bitmap to allocate from.
+ * @shallow_depth: The maximum number of bits to allocate from the bitmap.
+ *
+ * This rather specific operation allows for having multiple users with
+ * different allocation limits. E.g., there can be a high-priority class that
+ * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
+ * with a @shallow_depth of (sb->depth >> 1). Then, the low-priority
+ * class can only allocate half of the total bits in the bitmap, preventing it
+ * from starving out the high-priority class.
+ *
+ * Return: Non-negative allocated bit number if successful, -1 otherwise.
+ */
+static int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth)
 {
 	int nr;
 	unsigned int hint, depth;
@@ -322,7 +337,6 @@ int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth)
 
 	return nr;
 }
-EXPORT_SYMBOL_GPL(sbitmap_get_shallow);
 
 bool sbitmap_any_bit_set(const struct sbitmap *sb)
 {
-- 
2.39.2
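A closing note on the series: with shallow_depth now covering the whole
sbitmap, the simplified sbq_calc_wake_batch() from patch 1 clamps
against the whole-bitmap limit. A few worked values, assuming
SBQ_WAIT_QUEUES and SBQ_WAKE_BATCH keep their usual value of 8 (sketch
only, not part of either patch):

	wake_batch = clamp(min(depth, min_shallow_depth) / SBQ_WAIT_QUEUES,
			   1, SBQ_WAKE_BATCH)

	depth = 256, min_shallow_depth = UINT_MAX -> clamp(32, 1, 8) = 8
	depth = 256, min_shallow_depth = 16       -> clamp( 2, 1, 8) = 2
	depth =   4, min_shallow_depth = UINT_MAX -> clamp( 0, 1, 8) = 1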