From: Sarthak Kukreti
To: sarthakkukreti@google.com, dm-devel@redhat.com, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: Jens Axboe, "Michael S. Tsirkin", Jason Wang, Stefan Hajnoczi, Alasdair Kergon, Mike Snitzer, Christoph Hellwig, Brian Foster, Theodore Ts'o, Andreas Dilger, Bart Van Assche, Daniil Lunev, "Darrick J. Wong"
Subject: [PATCH v2 1/7] block: Introduce provisioning primitives
Date: Thu, 29 Dec 2022 00:12:46 -0800
Message-Id: <20221229081252.452240-2-sarthakkukreti@chromium.org>
In-Reply-To: <20221229081252.452240-1-sarthakkukreti@chromium.org>
References: <20221229081252.452240-1-sarthakkukreti@chromium.org>

Introduce the block request REQ_OP_PROVISION. The intent of this request
is to ask the underlying storage to preallocate disk space for the given
block range.
Block devices that support this capability will export a provision limit
within their request queues.

Signed-off-by: Sarthak Kukreti
---
 block/blk-core.c          |  5 ++++
 block/blk-lib.c           | 53 +++++++++++++++++++++++++++++++++++++++
 block/blk-merge.c         | 18 +++++++++++++
 block/blk-settings.c      | 19 ++++++++++++++
 block/blk-sysfs.c         |  8 ++++++
 block/bounce.c            |  1 +
 include/linux/bio.h       |  6 +++--
 include/linux/blk_types.h |  5 +++-
 include/linux/blkdev.h    | 16 ++++++++++++
 9 files changed, 128 insertions(+), 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9321767470dc..30bcabc7dc01 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -123,6 +123,7 @@ static const char *const blk_op_name[] = {
 	REQ_OP_NAME(WRITE_ZEROES),
 	REQ_OP_NAME(DRV_IN),
 	REQ_OP_NAME(DRV_OUT),
+	REQ_OP_NAME(PROVISION)
 };
 #undef REQ_OP_NAME
 
@@ -785,6 +786,10 @@ void submit_bio_noacct(struct bio *bio)
 		if (!q->limits.max_write_zeroes_sectors)
 			goto not_supported;
 		break;
+	case REQ_OP_PROVISION:
+		if (!q->limits.max_provision_sectors)
+			goto not_supported;
+		break;
 	default:
 		break;
 	}
diff --git a/block/blk-lib.c b/block/blk-lib.c
index e59c3069e835..647b6451660b 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -343,3 +343,56 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
 	return ret;
 }
 EXPORT_SYMBOL(blkdev_issue_secure_erase);
+
+/**
+ * blkdev_issue_provision - provision a block range
+ * @bdev:	blockdev to write
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to provision
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ *
+ * Description:
+ *  Issues a provision request to the block device for the range of sectors.
+ *  For thinly provisioned block devices, this acts as a signal for the
+ *  underlying storage pool to allocate space for this block range.
+ */
+int blkdev_issue_provision(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp)
+{
+	sector_t bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
+	unsigned int max_sectors = bdev_max_provision_sectors(bdev);
+	struct bio *bio = NULL;
+	struct blk_plug plug;
+	int ret = 0;
+
+	if (max_sectors == 0)
+		return -EOPNOTSUPP;
+	if ((sector | nr_sects) & bs_mask)
+		return -EINVAL;
+	if (bdev_read_only(bdev))
+		return -EPERM;
+
+	blk_start_plug(&plug);
+	for (;;) {
+		unsigned int req_sects = min_t(sector_t, nr_sects, max_sectors);
+
+		bio = blk_next_bio(bio, bdev, 0, REQ_OP_PROVISION, gfp);
+		bio->bi_iter.bi_sector = sector;
+		bio->bi_iter.bi_size = req_sects << SECTOR_SHIFT;
+
+		sector += req_sects;
+		nr_sects -= req_sects;
+		if (!nr_sects) {
+			ret = submit_bio_wait(bio);
+			if (ret == -EOPNOTSUPP)
+				ret = 0;
+			bio_put(bio);
+			break;
+		}
+		cond_resched();
+	}
+	blk_finish_plug(&plug);
+
+	return ret;
+}
+EXPORT_SYMBOL(blkdev_issue_provision);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 35a8f75cc45d..3ab35bb2a333 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -158,6 +158,21 @@ static struct bio *bio_split_write_zeroes(struct bio *bio,
 	return bio_split(bio, lim->max_write_zeroes_sectors, GFP_NOIO, bs);
 }
 
+static struct bio *bio_split_provision(struct bio *bio,
+					const struct queue_limits *lim,
+					unsigned *nsegs, struct bio_set *bs)
+{
+	*nsegs = 0;
+
+	if (!lim->max_provision_sectors)
+		return NULL;
+
+	if (bio_sectors(bio) <= lim->max_provision_sectors)
+		return NULL;
+
+	return bio_split(bio, lim->max_provision_sectors, GFP_NOIO, bs);
+}
+
 /*
  * Return the maximum number of sectors from the start of a bio that may be
  * submitted as a single request to a block device.  If enough sectors remain,
@@ -355,6 +370,9 @@ struct bio *__bio_split_to_limits(struct bio *bio,
 	case REQ_OP_WRITE_ZEROES:
 		split = bio_split_write_zeroes(bio, lim, nr_segs, bs);
 		break;
+	case REQ_OP_PROVISION:
+		split = bio_split_provision(bio, lim, nr_segs, bs);
+		break;
 	default:
 		split = bio_split_rw(bio, lim, nr_segs, bs,
 				get_max_io_size(bio, lim) << SECTOR_SHIFT);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 0477c4d527fe..57d88204fbbe 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -58,6 +58,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->zoned = BLK_ZONED_NONE;
 	lim->zone_write_granularity = 0;
 	lim->dma_alignment = 511;
+	lim->max_provision_sectors = 0;
 }
 
 /**
@@ -81,6 +82,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_dev_sectors = UINT_MAX;
 	lim->max_write_zeroes_sectors = UINT_MAX;
 	lim->max_zone_append_sectors = UINT_MAX;
+	lim->max_provision_sectors = UINT_MAX;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
@@ -202,6 +204,20 @@ void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_write_zeroes_sectors);
 
+/**
+ * blk_queue_max_provision_sectors - set max sectors for a single provision
+ *
+ * @q:  the request queue for the device
+ * @max_provision_sectors: maximum number of sectors to provision per command
+ **/
+
+void blk_queue_max_provision_sectors(struct request_queue *q,
+		unsigned int max_provision_sectors)
+{
+	q->limits.max_provision_sectors = max_provision_sectors;
+}
+EXPORT_SYMBOL(blk_queue_max_provision_sectors);
+
 /**
  * blk_queue_max_zone_append_sectors - set max sectors for a single zone append
  * @q:  the request queue for the device
@@ -572,6 +588,9 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->max_segment_size = min_not_zero(t->max_segment_size,
 					   b->max_segment_size);
 
+	t->max_provision_sectors = min_not_zero(t->max_provision_sectors,
+						b->max_provision_sectors);
+
 	t->misaligned |= b->misaligned;
 
 	alignment = queue_limit_alignment_offset(b, start);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 93d9e9c9a6ea..2e678417b302 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -131,6 +131,12 @@ static ssize_t queue_max_discard_segments_show(struct request_queue *q,
 	return queue_var_show(queue_max_discard_segments(q), page);
 }
 
+static ssize_t queue_max_provision_sectors_show(struct request_queue *q,
+		char *page)
+{
+	return queue_var_show(queue_max_provision_sectors(q), (page));
+}
+
 static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(q->limits.max_integrity_segments, page);
@@ -589,6 +595,7 @@ QUEUE_RO_ENTRY(queue_io_min, "minimum_io_size");
 QUEUE_RO_ENTRY(queue_io_opt, "optimal_io_size");
 
 QUEUE_RO_ENTRY(queue_max_discard_segments, "max_discard_segments");
+QUEUE_RO_ENTRY(queue_max_provision_sectors, "max_provision_sectors");
 QUEUE_RO_ENTRY(queue_discard_granularity, "discard_granularity");
 QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
 QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
@@ -638,6 +645,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_max_sectors_entry.attr,
 	&queue_max_segments_entry.attr,
 	&queue_max_discard_segments_entry.attr,
+	&queue_max_provision_sectors_entry.attr,
 	&queue_max_integrity_segments_entry.attr,
 	&queue_max_segment_size_entry.attr,
 	&elv_iosched_entry.attr,
diff --git a/block/bounce.c b/block/bounce.c
index 7cfcb242f9a1..ab9d8723ae64 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -176,6 +176,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src)
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_PROVISION:
 		break;
 	default:
 		bio_for_each_segment(bv, bio_src, iter)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 22078a28d7cb..5025af105b7c 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -55,7 +55,8 @@ static inline bool bio_has_data(struct bio *bio)
 	    bio->bi_iter.bi_size &&
 	    bio_op(bio) != REQ_OP_DISCARD &&
 	    bio_op(bio) != REQ_OP_SECURE_ERASE &&
-	    bio_op(bio) != REQ_OP_WRITE_ZEROES)
+	    bio_op(bio) != REQ_OP_WRITE_ZEROES &&
+	    bio_op(bio) != REQ_OP_PROVISION)
 		return true;
 
 	return false;
@@ -65,7 +66,8 @@ static inline bool bio_no_advance_iter(const struct bio *bio)
 {
 	return bio_op(bio) == REQ_OP_DISCARD ||
 	       bio_op(bio) == REQ_OP_SECURE_ERASE ||
-	       bio_op(bio) == REQ_OP_WRITE_ZEROES;
+	       bio_op(bio) == REQ_OP_WRITE_ZEROES ||
+	       bio_op(bio) == REQ_OP_PROVISION;
 }
 
 static inline void *bio_data(struct bio *bio)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 99be590f952f..27bdf88f541c 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -385,7 +385,10 @@ enum req_op {
 	REQ_OP_DRV_IN		= (__force blk_opf_t)34,
 	REQ_OP_DRV_OUT		= (__force blk_opf_t)35,
 
-	REQ_OP_LAST		= (__force blk_opf_t)36,
+	/* request device to provision block */
+	REQ_OP_PROVISION	= (__force blk_opf_t)37,
+
+	REQ_OP_LAST		= (__force blk_opf_t)38,
 };
 
 enum req_flag_bits {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 301cf1cf4f2f..f1abc7b43e25 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -302,6 +302,7 @@ struct queue_limits {
 	unsigned int		discard_granularity;
 	unsigned int		discard_alignment;
 	unsigned int		zone_write_granularity;
+	unsigned int		max_provision_sectors;
 
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
@@ -918,6 +919,8 @@ extern void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors);
 extern void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
 		unsigned int max_write_same_sectors);
+extern void blk_queue_max_provision_sectors(struct request_queue *q,
+		unsigned int max_provision_sectors);
 extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);
 extern void blk_queue_max_zone_append_sectors(struct request_queue *q,
 		unsigned int max_zone_append_sectors);
@@ -1057,6 +1060,9 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp);
 
+extern int blkdev_issue_provision(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask);
+
 #define BLKDEV_ZERO_NOUNMAP	(1 << 0)  /* do not free blocks */
 #define BLKDEV_ZERO_NOFALLBACK	(1 << 1)  /* don't write explicit zeroes */
 
@@ -1135,6 +1141,11 @@ static inline unsigned short queue_max_discard_segments(const struct request_queue *q)
 	return q->limits.max_discard_segments;
 }
 
+static inline unsigned short queue_max_provision_sectors(const struct request_queue *q)
+{
+	return q->limits.max_provision_sectors;
+}
+
 static inline unsigned int queue_max_segment_size(const struct request_queue *q)
 {
 	return q->limits.max_segment_size;
}
@@ -1271,6 +1282,11 @@ static inline bool bdev_nowait(struct block_device *bdev)
 	return test_bit(QUEUE_FLAG_NOWAIT, &bdev_get_queue(bdev)->queue_flags);
 }
 
+static inline unsigned int bdev_max_provision_sectors(struct block_device *bdev)
+{
+	return bdev_get_queue(bdev)->limits.max_provision_sectors;
+}
+
 static inline enum blk_zoned_model bdev_zoned_model(struct block_device *bdev)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
-- 
2.37.3