From nobody Tue Dec 2 01:05:35 2025 Received: from mail-pl1-f171.google.com (mail-pl1-f171.google.com [209.85.214.171]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 862D7299A84 for ; Sat, 22 Nov 2025 07:40:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.171 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797253; cv=none; b=MTd/n6WE3/VUqU+OjhR6l9WZZGcz5SEyeGB4xHRJPTvuzQDRfxJ7h+E1zM0isig6C3e5nDFkk4eJLiAs4i4nq3KdGgJkBN40IPh4Dt0Zn3qBLrMEpnIbJ811R9IYi7Vgl7K7LOC/77p/0C5VYAn6JnG9eNtV5vavE/8mjiYmn8M= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797253; c=relaxed/simple; bh=0IGd9pECmyyazBtuzTUy3wd9EY0I+2QhQWM58mapvyQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=h9G1onNWpiP5oQsAaKFS8/gyeh12StwXOtc9RJPCWE5IkOXNQSkXWYA1lf85jEmMtCH9bTqmwdaXn+cbv7S1YApfiGa6ZN976cC96n8mI6Vj0NgbsXRKdjM0qWlVyHu2AO96lmlGNHHxsLgLbWo+7RXktbedQ12VzG4ZY25e3m4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org; spf=pass smtp.mailfrom=chromium.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b=LXJlCDLE; arc=none smtp.client-ip=209.85.214.171 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=chromium.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="LXJlCDLE" Received: by mail-pl1-f171.google.com with SMTP id d9443c01a7336-297ef378069so25397145ad.3 for ; Fri, 21 Nov 2025 23:40:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1763797251; x=1764402051; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+CcYw/MVsIvV8b2h3Sq9n03D/rQEvwWc9LbzmpgrKxs=; b=LXJlCDLEfAo82LQsnuXOfSj93vf4kZqUMHsYtQweyOPxluwfsO4SWimJbdyZFeE55y L9qFNUO+u4fE/D2ad5eWlYF5T1QQ2w0Iw0tVsbMMnIKjL4t4ZMjS4NxOyCMFcs5lhXOU pPzfNaC3nOhtTVWT+ysUglEXHoaBeQFd7aL88= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1763797251; x=1764402051; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=+CcYw/MVsIvV8b2h3Sq9n03D/rQEvwWc9LbzmpgrKxs=; b=nvJFgLSiGtKJe95kbIWH8Gj88oI5BrSdYddY9FcCuk8kSMfUWA3CiF0sAjxLIF2MCk /bUa4hXMI+QZosSI2UVJX/c3hnIEbQFgzjCyUg6Ax3WXy9Ffl/LUHTanQIpvVI+TChW2 Ii7Uk0mjeTFVDHkfM5qgWyjsrjibjwtZ58aXzxTDrGzZ/SCn4pdsKKS7G5LcXoPhpJdj 2WFuMOJk+XZxtykhJvoEPTAEjugDi1r0LqmyIOPBeyiaxLo8CjBzv+QJpSap79/cZEoW svrAyg9UN/V2yH5miUdn1IExX1kDqT27xsbU4JarIwbV47knSKNTUfCAVm7kfnKxVSiP pN5w== X-Forwarded-Encrypted: i=1; AJvYcCX14oPtieR32u1Vxb9gsS2Wv4H0dA9D7+KACtzlg+PbI3icqcFJOPFTtPUZo6IVYGMu5HOuMGRkhC8hUN4=@vger.kernel.org X-Gm-Message-State: AOJu0YxmIV7gSXsoxofFU17t6N7lpLYblIWgv96etj7GWujSwLLcVRjT jaggGgByi07GO0HcNXB2aA2csZVn0e7IUzAWBlGUZf9hRsU/QI0+K9IgCDAjEZTYug== X-Gm-Gg: ASbGncuRvg22tAgXT9J//MU9ZSgqP0AgWfPuVDyazusOCO2ldrM5gi06Hoq+5ZiQlWE LSv3cQavc1CxkF+9RJ1EYzKx+kltyksAmVRO/GSbRECaH4aZb+u+ob/DEHFkVoMvP+g37sAE8l9 Dum5woDButwGp74cbRXcuQDVppsQmZrc/y/iL2AlNxsb77uuLnsl41Q9mRFH/Sg7+6yHPUxZ/nH 
GcTYr+mdafKWnlZMVSf5FH3TCqwsdxL8fULTfwqwuRaMKtVFWEAmkXQEqOdve8L4oa9YJg/hWBz ufR6dFqczJrqChF8zlRv+ph2bm8CcwTi5Nr7kbW9kp+2QLGC7EuVObLV5DUfHK+rJWxLsRvqta+ TlkAtcOBrBarJoDyD3N2s6q6tTOOPAL9it5GgKXSvIgLxdEnKpGvdBXpN82TlIcr8Vzbg1fS7CN gl9uNi6ILXxhJ9oMtIIz1uJj/XtW+/oHOBOMk0obqC2QqrEJynhNsc4VxpQPJWDG59l/TMJ6EaC g== X-Google-Smtp-Source: AGHT+IFaSB7EbqaBfaqnc96XqHJ3nI/Cc67lkuc7fRADR0xDZSv+4CDROktulCgD73hI6bhlKxOYWg== X-Received: by 2002:a17:902:e809:b0:295:738f:73fe with SMTP id d9443c01a7336-29b6bf385e6mr72914675ad.30.1763797250758; Fri, 21 Nov 2025 23:40:50 -0800 (PST) Received: from tigerii.tok.corp.google.com ([2a00:79e0:2031:6:948e:149d:963b:f660]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29b5b138628sm77771555ad.31.2025.11.21.23.40.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Nov 2025 23:40:50 -0800 (PST) From: Sergey Senozhatsky To: Andrew Morton , Minchan Kim , Yuwen Chen , Richard Chang Cc: Brian Geffon , Fengyu Lian , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky , Minchan Kim Subject: [PATCHv6 1/6] zram: introduce writeback bio batching Date: Sat, 22 Nov 2025 16:40:24 +0900 Message-ID: <20251122074029.3948921-2-senozhatsky@chromium.org> X-Mailer: git-send-email 2.52.0.460.gd25c4c69ec-goog In-Reply-To: <20251122074029.3948921-1-senozhatsky@chromium.org> References: <20251122074029.3948921-1-senozhatsky@chromium.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
As was stated in a comment [1], a single page writeback IO is not efficient, but it works. It's time to address this throughput limitation, as writeback is getting used more often. Introduce batched (multiple) bio writeback support to take advantage of parallel request processing and better request scheduling. The approach used in this patch doesn't use a dedicated kthread like in [2], or blk-plug like in [3]. A dedicated kthread adds complexity, which can be avoided. Apart from that, not all zram setups use writeback, so having numerous per-device kthreads (on systems that create multiple zram devices) hanging around is not the most optimal thing to do. blk-plug, on the other hand, works best when requests are sequential, which doesn't particularly fit zram writeback IO patterns: zram writeback IO patterns are expected to be random, due to how bdev block reservation/release is handled. The blk-plug approach also works in cycles: idle IO, while zram sets up requests in a batch, is followed by bursts of IO, when zram submits the entire batch. Instead, we use a batch of requests and submit a new bio as soon as one of the in-flight requests completes. For the time being, the writeback batch size (the maximum number of in-flight bio requests) is set to 32 for all devices. A follow-up patch adds a writeback_batch_size device attribute, so the batch size becomes run-time configurable.
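[ Illustration only, not part of the kernel patch: a self-contained userspace sketch of the same submit-as-completions-arrive idea, keeping at most BATCH writes in flight and reusing a request slot as soon as its IO finishes. POSIX AIO merely stands in for bio submission/end_io here; BATCH, NR_PAGES and the file name are invented for the example. ]

/* wb_batch_demo.c - build with: cc -O2 wb_batch_demo.c -o wb_batch_demo -lrt */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BATCH           4               /* max in-flight writes, cf. wb_batch_size */
#define PAGE_SZ         4096
#define NR_PAGES        64

int main(void)
{
        static struct aiocb cb[BATCH];
        static char buf[BATCH][PAGE_SZ];
        int busy[BATCH] = { 0 };
        int submitted = 0, completed = 0;
        int fd, i;

        fd = open("wb-backing.img", O_CREAT | O_TRUNC | O_WRONLY, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        while (completed < NR_PAGES) {
                const struct aiocb *wait_list[BATCH];
                int nr_wait = 0;

                /* Fill every idle request slot with a new "page" write. */
                for (i = 0; i < BATCH && submitted < NR_PAGES; i++) {
                        if (busy[i])
                                continue;
                        memset(buf[i], submitted & 0xff, PAGE_SZ);
                        memset(&cb[i], 0, sizeof(cb[i]));
                        cb[i].aio_fildes = fd;
                        cb[i].aio_buf = buf[i];
                        cb[i].aio_nbytes = PAGE_SZ;
                        cb[i].aio_offset = (off_t)submitted * PAGE_SZ;
                        if (aio_write(&cb[i])) {
                                perror("aio_write");
                                return 1;
                        }
                        busy[i] = 1;
                        submitted++;
                }

                /* Wait until at least one in-flight write completes. */
                for (i = 0; i < BATCH; i++)
                        if (busy[i])
                                wait_list[nr_wait++] = &cb[i];
                if (nr_wait && aio_suspend(wait_list, nr_wait, NULL) && errno != EINTR) {
                        perror("aio_suspend");
                        return 1;
                }

                /* Reap finished slots so they can be reused right away. */
                for (i = 0; i < BATCH; i++) {
                        if (!busy[i] || aio_error(&cb[i]) == EINPROGRESS)
                                continue;
                        if (aio_return(&cb[i]) != PAGE_SZ)
                                fprintf(stderr, "write %d failed\n", i);
                        busy[i] = 0;
                        completed++;
                }
        }

        close(fd);
        return 0;
}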
[1] https://lore.kernel.org/all/20181203024045.153534-6-minchan@kernel.org/ [2] https://lore.kernel.org/all/20250731064949.1690732-1-richardycc@google.= com/ [3] https://lore.kernel.org/all/tencent_78FC2C4FE16BA1EBAF0897DB60FCD675ED0= 5@qq.com/ Signed-off-by: Sergey Senozhatsky Co-developed-by: Yuwen Chen Co-developed-by: Richard Chang Suggested-by: Minchan Kim --- drivers/block/zram/zram_drv.c | 369 +++++++++++++++++++++++++++------- 1 file changed, 301 insertions(+), 68 deletions(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index a43074657531..06ea56f0a00f 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -500,6 +500,26 @@ static ssize_t idle_store(struct device *dev, } =20 #ifdef CONFIG_ZRAM_WRITEBACK +struct zram_wb_ctl { + /* idle list is accessed only by the writeback task, no concurency */ + struct list_head idle_reqs; + /* done list is accessed concurrently, protect by done_lock */ + struct list_head done_reqs; + wait_queue_head_t done_wait; + spinlock_t done_lock; + atomic_t num_inflight; +}; + +struct zram_wb_req { + unsigned long blk_idx; + struct page *page; + struct zram_pp_slot *pps; + struct bio_vec bio_vec; + struct bio bio; + + struct list_head entry; +}; + static ssize_t writeback_limit_enable_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t len) { @@ -734,19 +754,221 @@ static void read_from_bdev_async(struct zram *zram, = struct page *page, submit_bio(bio); } =20 -static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl) +static void release_wb_req(struct zram_wb_req *req) { - unsigned long blk_idx =3D 0; - struct page *page =3D NULL; - struct zram_pp_slot *pps; - struct bio_vec bio_vec; - struct bio bio; + __free_page(req->page); + kfree(req); +} + +static void release_wb_ctl(struct zram_wb_ctl *wb_ctl) +{ + if (!wb_ctl) + return; + + /* We should never have inflight requests at this point */ + WARN_ON(atomic_read(&wb_ctl->num_inflight)); + WARN_ON(!list_empty(&wb_ctl->done_reqs)); + + while (!list_empty(&wb_ctl->idle_reqs)) { + struct zram_wb_req *req; + + req =3D list_first_entry(&wb_ctl->idle_reqs, + struct zram_wb_req, entry); + list_del(&req->entry); + release_wb_req(req); + } + + kfree(wb_ctl); +} + +/* XXX: should be a per-device sysfs attr */ +#define ZRAM_WB_REQ_CNT 32 + +static struct zram_wb_ctl *init_wb_ctl(void) +{ + struct zram_wb_ctl *wb_ctl; + int i; + + wb_ctl =3D kmalloc(sizeof(*wb_ctl), GFP_KERNEL); + if (!wb_ctl) + return NULL; + + INIT_LIST_HEAD(&wb_ctl->idle_reqs); + INIT_LIST_HEAD(&wb_ctl->done_reqs); + atomic_set(&wb_ctl->num_inflight, 0); + init_waitqueue_head(&wb_ctl->done_wait); + spin_lock_init(&wb_ctl->done_lock); + + for (i =3D 0; i < ZRAM_WB_REQ_CNT; i++) { + struct zram_wb_req *req; + + /* + * This is fatal condition only if we couldn't allocate + * any requests at all. Otherwise we just work with the + * requests that we have successfully allocated, so that + * writeback can still proceed, even if there is only one + * request on the idle list. 
+ */ + req =3D kzalloc(sizeof(*req), GFP_KERNEL | __GFP_NOWARN); + if (!req) + break; + + req->page =3D alloc_page(GFP_KERNEL | __GFP_NOWARN); + if (!req->page) { + kfree(req); + break; + } + + list_add(&req->entry, &wb_ctl->idle_reqs); + } + + /* We couldn't allocate any requests, so writeabck is not possible */ + if (list_empty(&wb_ctl->idle_reqs)) + goto release_wb_ctl; + + return wb_ctl; + +release_wb_ctl: + release_wb_ctl(wb_ctl); + return NULL; +} + +static void zram_account_writeback_rollback(struct zram *zram) +{ + spin_lock(&zram->wb_limit_lock); + if (zram->wb_limit_enable) + zram->bd_wb_limit +=3D 1UL << (PAGE_SHIFT - 12); + spin_unlock(&zram->wb_limit_lock); +} + +static void zram_account_writeback_submit(struct zram *zram) +{ + spin_lock(&zram->wb_limit_lock); + if (zram->wb_limit_enable && zram->bd_wb_limit > 0) + zram->bd_wb_limit -=3D 1UL << (PAGE_SHIFT - 12); + spin_unlock(&zram->wb_limit_lock); +} + +static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *= req) +{ + u32 index =3D req->pps->index; + int err; + + err =3D blk_status_to_errno(req->bio.bi_status); + if (err) { + /* + * Failed wb requests should not be accounted in wb_limit + * (if enabled). + */ + zram_account_writeback_rollback(zram); + free_block_bdev(zram, req->blk_idx); + return err; + } + + atomic64_inc(&zram->stats.bd_writes); + zram_slot_lock(zram, index); + /* + * We release slot lock during writeback so slot can change under us: + * slot_free() or slot_free() and zram_write_page(). In both cases + * slot loses ZRAM_PP_SLOT flag. No concurrent post-processing can + * set ZRAM_PP_SLOT on such slots until current post-processing + * finishes. + */ + if (!zram_test_flag(zram, index, ZRAM_PP_SLOT)) { + free_block_bdev(zram, req->blk_idx); + goto out; + } + + zram_free_page(zram, index); + zram_set_flag(zram, index, ZRAM_WB); + zram_set_handle(zram, index, req->blk_idx); + atomic64_inc(&zram->stats.pages_stored); + +out: + zram_slot_unlock(zram, index); + return 0; +} + +static void zram_writeback_endio(struct bio *bio) +{ + struct zram_wb_req *req =3D container_of(bio, struct zram_wb_req, bio); + struct zram_wb_ctl *wb_ctl =3D bio->bi_private; + unsigned long flags; + + spin_lock_irqsave(&wb_ctl->done_lock, flags); + list_add(&req->entry, &wb_ctl->done_reqs); + spin_unlock_irqrestore(&wb_ctl->done_lock, flags); + + wake_up(&wb_ctl->done_wait); +} + +static void zram_submit_wb_request(struct zram *zram, + struct zram_wb_ctl *wb_ctl, + struct zram_wb_req *req) +{ + /* + * wb_limit (if enabled) should be adjusted before submission, + * so that we don't over-submit. 
+ */ + zram_account_writeback_submit(zram); + atomic_inc(&wb_ctl->num_inflight); + req->bio.bi_private =3D wb_ctl; + submit_bio(&req->bio); +} + +static int zram_complete_done_reqs(struct zram *zram, + struct zram_wb_ctl *wb_ctl) +{ + struct zram_wb_req *req; + unsigned long flags; int ret =3D 0, err; - u32 index; =20 - page =3D alloc_page(GFP_KERNEL); - if (!page) - return -ENOMEM; + while (atomic_read(&wb_ctl->num_inflight) > 0) { + spin_lock_irqsave(&wb_ctl->done_lock, flags); + req =3D list_first_entry_or_null(&wb_ctl->done_reqs, + struct zram_wb_req, entry); + if (req) + list_del(&req->entry); + spin_unlock_irqrestore(&wb_ctl->done_lock, flags); + + /* ->num_inflight > 0 doesn't mean we have done requests */ + if (!req) + break; + + err =3D zram_writeback_complete(zram, req); + if (err) + ret =3D err; + + atomic_dec(&wb_ctl->num_inflight); + release_pp_slot(zram, req->pps); + req->pps =3D NULL; + + list_add(&req->entry, &wb_ctl->idle_reqs); + } + + return ret; +} + +static struct zram_wb_req *zram_select_idle_req(struct zram_wb_ctl *wb_ctl) +{ + struct zram_wb_req *req; + + req =3D list_first_entry_or_null(&wb_ctl->idle_reqs, + struct zram_wb_req, entry); + if (req) + list_del(&req->entry); + return req; +} + +static int zram_writeback_slots(struct zram *zram, + struct zram_pp_ctl *ctl, + struct zram_wb_ctl *wb_ctl) +{ + struct zram_wb_req *req =3D NULL; + unsigned long blk_idx =3D 0; + struct zram_pp_slot *pps; + int ret =3D 0, err =3D 0; + u32 index =3D 0; =20 while ((pps =3D select_pp_slot(ctl))) { spin_lock(&zram->wb_limit_lock); @@ -757,6 +979,27 @@ static int zram_writeback_slots(struct zram *zram, str= uct zram_pp_ctl *ctl) } spin_unlock(&zram->wb_limit_lock); =20 + while (!req) { + req =3D zram_select_idle_req(wb_ctl); + if (req) + break; + + wait_event(wb_ctl->done_wait, + !list_empty(&wb_ctl->done_reqs)); + + err =3D zram_complete_done_reqs(zram, wb_ctl); + /* + * BIO errors are not fatal, we continue and simply + * attempt to writeback the remaining objects (pages). + * At the same time we need to signal user-space that + * some writes (at least one, but also could be all of + * them) were not successful and we do so by returning + * the most recent BIO error. + */ + if (err) + ret =3D err; + } + if (!blk_idx) { blk_idx =3D alloc_block_bdev(zram); if (!blk_idx) { @@ -775,67 +1018,47 @@ static int zram_writeback_slots(struct zram *zram, s= truct zram_pp_ctl *ctl) */ if (!zram_test_flag(zram, index, ZRAM_PP_SLOT)) goto next; - if (zram_read_from_zspool(zram, page, index)) + if (zram_read_from_zspool(zram, req->page, index)) goto next; zram_slot_unlock(zram, index); =20 - bio_init(&bio, zram->bdev, &bio_vec, 1, - REQ_OP_WRITE | REQ_SYNC); - bio.bi_iter.bi_sector =3D blk_idx * (PAGE_SIZE >> 9); - __bio_add_page(&bio, page, PAGE_SIZE, 0); - /* - * XXX: A single page IO would be inefficient for write - * but it would be not bad as starter. + * From now on pp-slot is owned by the req, remove it from + * its pp bucket. */ - err =3D submit_bio_wait(&bio); - if (err) { - release_pp_slot(zram, pps); - /* - * BIO errors are not fatal, we continue and simply - * attempt to writeback the remaining objects (pages). - * At the same time we need to signal user-space that - * some writes (at least one, but also could be all of - * them) were not successful and we do so by returning - * the most recent BIO error. 
- */ - ret =3D err; - continue; - } + list_del_init(&pps->entry); =20 - atomic64_inc(&zram->stats.bd_writes); - zram_slot_lock(zram, index); - /* - * Same as above, we release slot lock during writeback so - * slot can change under us: slot_free() or slot_free() and - * reallocation (zram_write_page()). In both cases slot loses - * ZRAM_PP_SLOT flag. No concurrent post-processing can set - * ZRAM_PP_SLOT on such slots until current post-processing - * finishes. - */ - if (!zram_test_flag(zram, index, ZRAM_PP_SLOT)) - goto next; + req->blk_idx =3D blk_idx; + req->pps =3D pps; + bio_init(&req->bio, zram->bdev, &req->bio_vec, 1, REQ_OP_WRITE); + req->bio.bi_iter.bi_sector =3D req->blk_idx * (PAGE_SIZE >> 9); + req->bio.bi_end_io =3D zram_writeback_endio; + __bio_add_page(&req->bio, req->page, PAGE_SIZE, 0); =20 - zram_free_page(zram, index); - zram_set_flag(zram, index, ZRAM_WB); - zram_set_handle(zram, index, blk_idx); + zram_submit_wb_request(zram, wb_ctl, req); blk_idx =3D 0; - atomic64_inc(&zram->stats.pages_stored); - spin_lock(&zram->wb_limit_lock); - if (zram->wb_limit_enable && zram->bd_wb_limit > 0) - zram->bd_wb_limit -=3D 1UL << (PAGE_SHIFT - 12); - spin_unlock(&zram->wb_limit_lock); + req =3D NULL; + cond_resched(); + continue; + next: zram_slot_unlock(zram, index); release_pp_slot(zram, pps); - - cond_resched(); } =20 - if (blk_idx) - free_block_bdev(zram, blk_idx); - if (page) - __free_page(page); + /* + * Selected idle req, but never submitted it due to some error or + * wb limit. + */ + if (req) + release_wb_req(req); + + while (atomic_read(&wb_ctl->num_inflight) > 0) { + wait_event(wb_ctl->done_wait, !list_empty(&wb_ctl->done_reqs)); + err =3D zram_complete_done_reqs(zram, wb_ctl); + if (err) + ret =3D err; + } =20 return ret; } @@ -948,7 +1171,8 @@ static ssize_t writeback_store(struct device *dev, struct zram *zram =3D dev_to_zram(dev); u64 nr_pages =3D zram->disksize >> PAGE_SHIFT; unsigned long lo =3D 0, hi =3D nr_pages; - struct zram_pp_ctl *ctl =3D NULL; + struct zram_pp_ctl *pp_ctl =3D NULL; + struct zram_wb_ctl *wb_ctl =3D NULL; char *args, *param, *val; ssize_t ret =3D len; int err, mode =3D 0; @@ -970,8 +1194,14 @@ static ssize_t writeback_store(struct device *dev, goto release_init_lock; } =20 - ctl =3D init_pp_ctl(); - if (!ctl) { + pp_ctl =3D init_pp_ctl(); + if (!pp_ctl) { + ret =3D -ENOMEM; + goto release_init_lock; + } + + wb_ctl =3D init_wb_ctl(); + if (!wb_ctl) { ret =3D -ENOMEM; goto release_init_lock; } @@ -1000,7 +1230,7 @@ static ssize_t writeback_store(struct device *dev, goto release_init_lock; } =20 - scan_slots_for_writeback(zram, mode, lo, hi, ctl); + scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl); break; } =20 @@ -1011,7 +1241,7 @@ static ssize_t writeback_store(struct device *dev, goto release_init_lock; } =20 - scan_slots_for_writeback(zram, mode, lo, hi, ctl); + scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl); break; } =20 @@ -1022,7 +1252,7 @@ static ssize_t writeback_store(struct device *dev, goto release_init_lock; } =20 - scan_slots_for_writeback(zram, mode, lo, hi, ctl); + scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl); continue; } =20 @@ -1033,17 +1263,18 @@ static ssize_t writeback_store(struct device *dev, goto release_init_lock; } =20 - scan_slots_for_writeback(zram, mode, lo, hi, ctl); + scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl); continue; } } =20 - err =3D zram_writeback_slots(zram, ctl); + err =3D zram_writeback_slots(zram, pp_ctl, wb_ctl); if (err) ret =3D err; =20 release_init_lock: - 
release_pp_ctl(zram, ctl); + release_pp_ctl(zram, pp_ctl); + release_wb_ctl(wb_ctl); atomic_set(&zram->pp_in_progress, 0); up_read(&zram->init_lock); =20 @@ -1112,7 +1343,9 @@ static int read_from_bdev(struct zram *zram, struct p= age *page, return -EIO; } =20 -static void free_block_bdev(struct zram *zram, unsigned long blk_idx) {}; +static void free_block_bdev(struct zram *zram, unsigned long blk_idx) +{ +} #endif =20 #ifdef CONFIG_ZRAM_MEMORY_TRACKING --=20 2.52.0.460.gd25c4c69ec-goog From nobody Tue Dec 2 01:05:35 2025 Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0A34B25A642 for ; Sat, 22 Nov 2025 07:40:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.176 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797255; cv=none; b=fGtjRfpQr6dpVpRJZdDe+sDL3YoPznXRw4BWl34flCb93hUBcHUi3BnSK5qAuAf8tXX/82iNEYGSkzUxTRn2HxqN4LN9uMBsyCWgMrrnmBKYPnUti3QYKl6BA2FBv2J6T0ZLymq3Yv8RYhYIifdk3eOgW3i6K1aNfirap/iFBzs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797255; c=relaxed/simple; bh=RU7LKKkrOdPOazKvr9ZHQjeWATjBMgcJ+HOgz91KiKc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=qSa3L4jTnIQejcEQuNvddceln2Ytp0PdosMyZDQNTDvuqTFjR6VTY9f9JXUvAdH+v7bQXKSzVMtEyU/HZWvQGkkN+mWvO3jDwYfmghjgnSRnkFBb7GS6PiMyx4OBRmdBRY1MFeaHYF69On855wY5FuNOjvAr0TTfv6+MgmP/EfQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org; spf=pass smtp.mailfrom=chromium.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b=nmzPSXIT; arc=none smtp.client-ip=209.85.214.176 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=chromium.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="nmzPSXIT" Received: by mail-pl1-f176.google.com with SMTP id d9443c01a7336-295548467c7so32619885ad.2 for ; Fri, 21 Nov 2025 23:40:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1763797253; x=1764402053; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SX4VpemLPNnHEK+as4wej/2Q8OupmjwjrzAvH21+biQ=; b=nmzPSXIT18UjzkozSVKR5CtqzZrUP3UHTdpPG4QL/sCbgCcdhoJUJuu+55Ef0MeFSd lE72bA9SWOD3riJam/qviTnRSLud2B3xJDaPlzlp0krUHI3iYZ15DRbmxRSpv9OvbZjt eSMZLfPsltdhjWGh/SETa3G8yPxvg0dV9KJxY= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1763797253; x=1764402053; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=SX4VpemLPNnHEK+as4wej/2Q8OupmjwjrzAvH21+biQ=; b=TggKOEbq8szIKJ1sw3ppJQ3ceSXI61RiSps3AxZokyDFqPCCd0QTMrKPi7r21RdNR8 l9H7eTH0QAC3QYfVoFxRpLOnzFL/QB2VPRmoTBJeh1Io4bV2tlY0EJhCeVyv3d+BGTB6 KbJezW11cHV6iuL6QkNcEBkJgwxzhGkrv+9AXRXYPXlTs0VUAjd8SOfBkMnKxZ/Z4vin fC3uVaaVbDlIY2mKgbT7R5qSHxM7EZKWO7X5ypVSA4H2KWwD4AmJj36+jlyNgoc83LDS X+aTMm/bxG9Zv9rNsvaQZYkvsA0GVeD3Gn+NQNOKnBi7J/lzccwQopE4alsRIc6HeD47 kFIA== X-Forwarded-Encrypted: 
i=1; AJvYcCXN9UJXuJUZfoNNW+ofx7/asa9nunKTWdqRBX7cLFVfN0V5wmn0eAerb4KRDOOFWYA6Ervv+yzYmL25MEs=@vger.kernel.org X-Gm-Message-State: AOJu0Yz39WQTXBaI3gXcXmGlC+hboVmorwjnBeT1hsy7vplLHNsIQpGx Gqt7T0lfMlMb4cCZDqfI0qTuft+J8LP/5K+9ysARkG8rZHtw+eDNEXSQT1MMVYdunzYbUyUlZtg ij68= X-Gm-Gg: ASbGnctNFX4jtPBFEo33+moPdeZKaD1NQ9XHp57qGBfd2NVlzntYnK7kGZ9zBGiPFwI GxAicVmGYVfaYTQT1iff3CXTVi0KOc0sbYYyXDFE18SgCNcst6ghy7MFTYr2eDjpfE6C/vxeJH7 QyELKKw11uCtEY3LNPtH9XUujEz2kZHi1Foky2rh00VO4Jr667bpKIKxc4I5xBz3T3jP2kTRubH iQI+n+IOx0Pd3R46zqoAM0aiS9HGNOtqgHMRlPGfUap0ujagND/dtMgSmF+gioWyfBDxRZj+bAg 3OtP9tmuh3lX3TNTR7FwyQ6bTjQJe1ndhdx77MEIvQaR0ZRQp4/FTAe1gP2/88Cu5N9xi4Pzh+T 9djNBpWvWrt7xOanLaX6VSrU1tUR209E9jotq8xlKZliAv/yXpM1anhXgmWC9pqLXm3CR/uf4ww yWxIscxJUmkDmYSfYJbMBQf6698DnnhKR9PUR/KL1vnEBhTxYq3H55tJlvoyPEFp1feCor/wWLW ghk2jT9jbI3 X-Google-Smtp-Source: AGHT+IFnu2Sudoe6jxzn5HtnDaCFVbXUPGcH4YtmUDiSHkmwz2BuN3F8fTnfFOOCqEYv14rgm5x3cw== X-Received: by 2002:a17:903:320a:b0:29a:5ce:b467 with SMTP id d9443c01a7336-29b6bf9e98amr65542735ad.54.1763797253269; Fri, 21 Nov 2025 23:40:53 -0800 (PST) Received: from tigerii.tok.corp.google.com ([2a00:79e0:2031:6:948e:149d:963b:f660]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29b5b138628sm77771555ad.31.2025.11.21.23.40.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Nov 2025 23:40:52 -0800 (PST) From: Sergey Senozhatsky To: Andrew Morton , Minchan Kim , Yuwen Chen , Richard Chang Cc: Brian Geffon , Fengyu Lian , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky Subject: [PATCHv6 2/6] zram: add writeback batch size device attr Date: Sat, 22 Nov 2025 16:40:25 +0900 Message-ID: <20251122074029.3948921-3-senozhatsky@chromium.org> X-Mailer: git-send-email 2.52.0.460.gd25c4c69ec-goog In-Reply-To: <20251122074029.3948921-1-senozhatsky@chromium.org> References: <20251122074029.3948921-1-senozhatsky@chromium.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce writeback_batch_size device attribute so that the maximum number of in-flight writeback bio requests can be configured at run-time per-device. This essentially enables batched bio writeback. 
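[ Usage note; the device name and value below are only examples. With the attribute in place the batch size can be tuned at run time, e.g. "echo 64 > /sys/block/zram0/writeback_batch_size", and read back from the same file. Writes of 0 are rejected and the built-in default remains 32 (set in zram_add()). ]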
Signed-off-by: Sergey Senozhatsky Reviewed-by: Brian Geffon --- drivers/block/zram/zram_drv.c | 46 ++++++++++++++++++++++++++++++----- drivers/block/zram/zram_drv.h | 1 + 2 files changed, 41 insertions(+), 6 deletions(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 06ea56f0a00f..5906ba061165 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -590,6 +590,40 @@ static ssize_t writeback_limit_show(struct device *dev, return sysfs_emit(buf, "%llu\n", val); } =20 +static ssize_t writeback_batch_size_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t len) +{ + struct zram *zram =3D dev_to_zram(dev); + u32 val; + + if (kstrtouint(buf, 10, &val)) + return -EINVAL; + + if (!val) + return -EINVAL; + + down_write(&zram->init_lock); + zram->wb_batch_size =3D val; + up_write(&zram->init_lock); + + return len; +} + +static ssize_t writeback_batch_size_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + u32 val; + struct zram *zram =3D dev_to_zram(dev); + + down_read(&zram->init_lock); + val =3D zram->wb_batch_size; + up_read(&zram->init_lock); + + return sysfs_emit(buf, "%u\n", val); +} + static void reset_bdev(struct zram *zram) { if (!zram->backing_dev) @@ -781,10 +815,7 @@ static void release_wb_ctl(struct zram_wb_ctl *wb_ctl) kfree(wb_ctl); } =20 -/* XXX: should be a per-device sysfs attr */ -#define ZRAM_WB_REQ_CNT 32 - -static struct zram_wb_ctl *init_wb_ctl(void) +static struct zram_wb_ctl *init_wb_ctl(struct zram *zram) { struct zram_wb_ctl *wb_ctl; int i; @@ -799,7 +830,7 @@ static struct zram_wb_ctl *init_wb_ctl(void) init_waitqueue_head(&wb_ctl->done_wait); spin_lock_init(&wb_ctl->done_lock); =20 - for (i =3D 0; i < ZRAM_WB_REQ_CNT; i++) { + for (i =3D 0; i < zram->wb_batch_size; i++) { struct zram_wb_req *req; =20 /* @@ -1200,7 +1231,7 @@ static ssize_t writeback_store(struct device *dev, goto release_init_lock; } =20 - wb_ctl =3D init_wb_ctl(); + wb_ctl =3D init_wb_ctl(zram); if (!wb_ctl) { ret =3D -ENOMEM; goto release_init_lock; @@ -2843,6 +2874,7 @@ static DEVICE_ATTR_RW(backing_dev); static DEVICE_ATTR_WO(writeback); static DEVICE_ATTR_RW(writeback_limit); static DEVICE_ATTR_RW(writeback_limit_enable); +static DEVICE_ATTR_RW(writeback_batch_size); #endif #ifdef CONFIG_ZRAM_MULTI_COMP static DEVICE_ATTR_RW(recomp_algorithm); @@ -2864,6 +2896,7 @@ static struct attribute *zram_disk_attrs[] =3D { &dev_attr_writeback.attr, &dev_attr_writeback_limit.attr, &dev_attr_writeback_limit_enable.attr, + &dev_attr_writeback_batch_size.attr, #endif &dev_attr_io_stat.attr, &dev_attr_mm_stat.attr, @@ -2925,6 +2958,7 @@ static int zram_add(void) =20 init_rwsem(&zram->init_lock); #ifdef CONFIG_ZRAM_WRITEBACK + zram->wb_batch_size =3D 32; spin_lock_init(&zram->wb_limit_lock); #endif =20 diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h index 6cee93f9c0d0..1a647f42c1a4 100644 --- a/drivers/block/zram/zram_drv.h +++ b/drivers/block/zram/zram_drv.h @@ -129,6 +129,7 @@ struct zram { struct file *backing_dev; spinlock_t wb_limit_lock; bool wb_limit_enable; + u32 wb_batch_size; u64 bd_wb_limit; struct block_device *bdev; unsigned long *bitmap; --=20 2.52.0.460.gd25c4c69ec-goog From nobody Tue Dec 2 01:05:35 2025 Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5BE142DC34F for 
; Sat, 22 Nov 2025 07:40:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.176 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797257; cv=none; b=ap5aDKPXHqT0CmSqg17uhi8pw4n8fLPBjhJ9/nKn8NTc1/GUx5a/KwzLeWV9e/oa9KODDDt2p+ZIprPpf8wafWTH/9YdtSwPxJ+GI3mkwK0cpb9EtRCn5kyZ02POPJU1yMp88ZtyHasenoB8Gpxv2HqJT6xTIEQX+F2yiQ5axx8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797257; c=relaxed/simple; bh=0lvT62u7g2hF4e73HEKdSTUKddQUZd7pbA6qKmK7XgE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=h1hOnxEQ7yL5TaV8NWc1VWI6pZqziMOBR/QG1T0IJnTEZeaFZQClBA4upu0GfOAWtvCg7pg3H0L7C6nmwSsLoAUDRzzQSmwphN0yhhUM62dmDSQ0Z7eA2G1w6a+u/2nPIWpfW6Ro2tQKIlekfti6ntI4KGEOoJeW7mqKPoIXlb0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org; spf=pass smtp.mailfrom=chromium.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b=X12BBd45; arc=none smtp.client-ip=209.85.214.176 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=chromium.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="X12BBd45" Received: by mail-pl1-f176.google.com with SMTP id d9443c01a7336-2981f9ce15cso32548255ad.1 for ; Fri, 21 Nov 2025 23:40:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1763797256; x=1764402056; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SP8ZymlO8cuxujreNygTi0dHk6OzbncHB3g2TEGb8AY=; b=X12BBd45p7hMcsDJsvrZh2BPdcYPYdrFQoaOHeBGCq9zJwuuVSpQIPeqapeUqYbNJp ExTl4QxTr7fKaKtO/da0EfpDeH39qiyUf7ckOFOseer6mTbsB9lgP1G2Nj2BHBfs2BUu vqDcUdZe+R03Bw4t9Mp1T4cYNfLxhOU9rs2uI= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1763797256; x=1764402056; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=SP8ZymlO8cuxujreNygTi0dHk6OzbncHB3g2TEGb8AY=; b=bRlsBAqO+DcXCy4eEInzXe4dLeQ8CbI4/Arg42NMciDfmYuKdnLSj/+B8ymDIEcZ/y k4yJvpfifLiMgDCTZ77G8f5ELzTg+z5zN8VRJM1zYrrDRiquu8UIvTQEuSefQOnFSUm1 HTcSx1xXHNe1Z1iNpMa8GfjtQJfHww4fdy5YW2nVUBk/uJOMs69FpSojrGea5rSJ0sQz cFUVyN36RYBwyH/iX+iYqIpj8YcTnaKhlcVNo0ZGtAoZ84ioIEStdH2umd3sBf+RgsGw pZttUzRGwMJ0vt4fRK/04OljsTFd0jO+HfF9fYFlfv9q43gyaUt5wgrNDqzvZzVTEOQq gO3A== X-Forwarded-Encrypted: i=1; AJvYcCU/fwyL1VgjZaXJK9Yh0jSSKcIyeQwSIEEDzu32auvweGiWt3pTzXmZ2afFz0HdZmB1/UnNUD2dG/v3K7c=@vger.kernel.org X-Gm-Message-State: AOJu0YyfAl+o+ILEbDwzW2aF3soNJgazCep9ngXyZthgfIa6LgnyN40q tnwf0nruWrgymrb7KLtUFu9KP+DnsL8fGWHkugvA4dhKJ1kknsHrNt/kQgbAsUD6Lw== X-Gm-Gg: ASbGncteQkh2YdwT3mLG0C752clZVJvxmekBuvyLTiWUBTk1KGz7BQhoXp+IF2zQNRt o6Br7uYstlOKei2xRaJ98aQghAyq8fuTHoHtJtE9Xsghe/WQW2V0j7PadA9zMx3zklIqDKN48aj 0e0CWip2QBnTd047Px3shs/XE8ofORyMiS4x2r63hJbc9B7Lqz9s0QQF65aPsQDKxv+VRYwNzbr LUERWfXAnpZf3W9Royi5+l2FelFwCfFeyDpm69q/GgCcyhF5mRSGFnlY+ieQNkMPPpd4qB9Q+NL XnOV9+Fc8o6OKAAaAslI57NQlwfMzNAD9Y99x0Mza9BM7wA+xycxJM88If/VvWM0K1kTWc2Ty/8 BubY9TTT3oaZBiwGij/5O8b1cjPVGOsBbgk1OFtO5YpxOxNm4uxXGzm5oNd1LZJXJxZjR3OwGdx WAHkyOeOc/dUKDe6pbElIJs0kGjPrTleviu6w/AooXTHV+XJdkQGNHKRVFrr4MKf1gkrHXID79C 
w== X-Google-Smtp-Source: AGHT+IEM8/Y4dIBBOwac1lJfHeplWi/wxjKlMN7X/1wlrBnzk52Ba8uHOaEg9loOMYDOH+d1fhjFKw== X-Received: by 2002:a17:902:f608:b0:295:9cb5:ae12 with SMTP id d9443c01a7336-29b6becc884mr62810705ad.25.1763797255895; Fri, 21 Nov 2025 23:40:55 -0800 (PST) Received: from tigerii.tok.corp.google.com ([2a00:79e0:2031:6:948e:149d:963b:f660]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29b5b138628sm77771555ad.31.2025.11.21.23.40.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Nov 2025 23:40:55 -0800 (PST) From: Sergey Senozhatsky To: Andrew Morton , Minchan Kim , Yuwen Chen , Richard Chang Cc: Brian Geffon , Fengyu Lian , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky Subject: [PATCHv6 3/6] zram: take write lock in wb limit store handlers Date: Sat, 22 Nov 2025 16:40:26 +0900 Message-ID: <20251122074029.3948921-4-senozhatsky@chromium.org> X-Mailer: git-send-email 2.52.0.460.gd25c4c69ec-goog In-Reply-To: <20251122074029.3948921-1-senozhatsky@chromium.org> References: <20251122074029.3948921-1-senozhatsky@chromium.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Write device attrs handlers should take write zram init_lock. While at it, fixup coding styles. Signed-off-by: Sergey Senozhatsky Reviewed-by: Brian Geffon --- drivers/block/zram/zram_drv.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 5906ba061165..8dd733707a40 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -521,7 +521,8 @@ struct zram_wb_req { }; =20 static ssize_t writeback_limit_enable_store(struct device *dev, - struct device_attribute *attr, const char *buf, size_t len) + struct device_attribute *attr, + const char *buf, size_t len) { struct zram *zram =3D dev_to_zram(dev); u64 val; @@ -530,18 +531,19 @@ static ssize_t writeback_limit_enable_store(struct de= vice *dev, if (kstrtoull(buf, 10, &val)) return ret; =20 - down_read(&zram->init_lock); + down_write(&zram->init_lock); spin_lock(&zram->wb_limit_lock); zram->wb_limit_enable =3D val; spin_unlock(&zram->wb_limit_lock); - up_read(&zram->init_lock); + up_write(&zram->init_lock); ret =3D len; =20 return ret; } =20 static ssize_t writeback_limit_enable_show(struct device *dev, - struct device_attribute *attr, char *buf) + struct device_attribute *attr, + char *buf) { bool val; struct zram *zram =3D dev_to_zram(dev); @@ -556,7 +558,8 @@ static ssize_t writeback_limit_enable_show(struct devic= e *dev, } =20 static ssize_t writeback_limit_store(struct device *dev, - struct device_attribute *attr, const char *buf, size_t len) + struct device_attribute *attr, + const char *buf, size_t len) { struct zram *zram =3D dev_to_zram(dev); u64 val; @@ -565,11 +568,11 @@ static ssize_t writeback_limit_store(struct device *d= ev, if (kstrtoull(buf, 10, &val)) return ret; =20 - down_read(&zram->init_lock); + down_write(&zram->init_lock); spin_lock(&zram->wb_limit_lock); zram->bd_wb_limit =3D val; spin_unlock(&zram->wb_limit_lock); - up_read(&zram->init_lock); + up_write(&zram->init_lock); ret =3D len; =20 return ret; --=20 2.52.0.460.gd25c4c69ec-goog From nobody Tue Dec 2 01:05:35 2025 Received: from mail-pj1-f46.google.com (mail-pj1-f46.google.com [209.85.216.46]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2A3F62F2913 for ; Sat, 22 Nov 2025 07:40:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.46 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797260; cv=none; b=MDOm01U2dg8Vb/m5WQFJlAo+DBK4skcAce/acuy59OhCKazTdwokuyeJEAHyIfi9HSM55yVr8nFrm6hisnn792TNj/+GlfhDu7BmBpdVTlXPG/iB4jhF+fdF//sv9BZTDDvWY/IU2BW1n0st/IKL8/FADkAt6GprNkOkhs2Qs5E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797260; c=relaxed/simple; bh=M4NcAMNdRfN7Xt9x5oF1s3PQb07UktRGrwZu7+yxoxs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=eK6hPqE45KU5u708+tcIUwyu192vxMi1vLhwlM4UerT08TI1aNsm3LSeI8IuDY/wC++pqQWsam9B3Q7xLLuQVRlhWviKvyKe14ZIb9HFnnTBOElgd8wbGlu6/D8JxqfGKpLt06Y2U4M9RvXcikURYtNM1ChCyXY9bV0v/A8G8Cg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org; spf=pass smtp.mailfrom=chromium.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b=Nl6y5+B/; arc=none smtp.client-ip=209.85.216.46 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=chromium.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="Nl6y5+B/" Received: by mail-pj1-f46.google.com with SMTP id 98e67ed59e1d1-34585428e33so2752585a91.3 for ; Fri, 21 Nov 2025 23:40:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1763797258; x=1764402058; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=OgbFFpUkfI22QXU79O82s8Mc2xvY4Nv+z0knw2TCbpE=; b=Nl6y5+B/WymHcsX89hKKDKjS9uj4il9DheYH7TTFELBByM7rfVBHJypYdfhH+EQVYd JdWBIMzzjt6sZAnXaBPQkVrc32ZR9WFe9rLWa51LLiE50qQi5J8jXlOB8AY4YQgtjgLF ql7jWA/SeelQ3f4zYX7c0pU7I3yEok2tWWSdY= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1763797258; x=1764402058; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=OgbFFpUkfI22QXU79O82s8Mc2xvY4Nv+z0knw2TCbpE=; b=CYfmC31S8GMYvuAwq4h1bog/cHzBPf6Nq2EZk/6UAUb85VeVnAikm7CoqSUXuMYdnP QsOZ5/MAT/GQEGM62m7NW5cJre/X2azyhbtVIZegQw1OVhGlFDg+EJxTCjj4DXIBtNwy 8n9jzLE66PLzdWDJBv9lc2T6BsgFO6J7CabJKBzzFmX9MhmSZwtsuWmqhiqSuYxbdc1B l8Dm++hcL+qcWMIYSy1196ioLIjrgkyoWi2NS7UmB0Z/37yTdGt4y0ypW7TWKScuf548 x6GvSNSCt9hdDO/V1yOpvvomrvxxHufDvoNp4bJ1bEW+3CP7zWw1nRtPj6x24ezO+/x5 faMQ== X-Forwarded-Encrypted: i=1; AJvYcCWHDWTGmYWhoqJlrTDzCd+wLyX6lX8EgKj6WGlC5fUnrJcZuMTT1/Q7AnzZiuOpqutWmH/Hph2Ie0aoOOw=@vger.kernel.org X-Gm-Message-State: AOJu0YwtCISbuRR1AkQOAmhNHuKB8BCT5WBBliJemm5YM7Ezn0p4P5vj Azo31aTVyG9y9OaxYlwhAeSqDErXPhOv8PK8EVhqVW2FUFFkRzMHugFPlm+wOMDtdA== X-Gm-Gg: ASbGnctmlxi1FgQ2U5IxCfA7jkkne4gA/50ZcvXW84+jwmk9P9olgouF/WJHe3J48O2 s1xq0v1IO49HHYzS8xXyL57x4OsysxR5rfokuHLKjD96TEStDLnaqlSw0qoOFcnp4JpfJnoHvwJ DNoBVuV3cChah/XpvA7EMX8J6iPZChLDwq3I3g/B+ijNtOcL4QaP0vYaQSKGTGx+A1pkmAIjilP ndQ4k4mnBRvFaRYEYoKbpmStzmZPcIXSt8pcXLkvOTMCsgfH1Y0GdVj7mP8MDNX3mZHLpM89d74 OSGUcJeHE5bKkUcleXOBDYdKRB8VxRemPgQVgKazDHB55WMc+tiIRVnE45PI7EKKs5lxI3/Jpwj 
IqUpXkgJNdYnMQAqWyE3LmWMVqL76II6kPV0O7DenI68s0jeJ+Q1aEu5TSUzTP7c75TlRxjRNV6 J//1eWOasa3+p8IjDQHrHXBv1pjfy9ZjDkay0PF9sFYy/VyvCYZ8cuOyFX2aannBgFIu+uh8QcJ A== X-Google-Smtp-Source: AGHT+IHHl8vDZNN4ihdi/Rw6H5hLCSLd9BU7mCbrs0a2xeos0HXBVBYR3WSbjHwRbViXwwzjKXzfdA== X-Received: by 2002:a17:90b:3d90:b0:33e:2d0f:4793 with SMTP id 98e67ed59e1d1-34733e4c735mr5306539a91.11.1763797258406; Fri, 21 Nov 2025 23:40:58 -0800 (PST) Received: from tigerii.tok.corp.google.com ([2a00:79e0:2031:6:948e:149d:963b:f660]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29b5b138628sm77771555ad.31.2025.11.21.23.40.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Nov 2025 23:40:58 -0800 (PST) From: Sergey Senozhatsky To: Andrew Morton , Minchan Kim , Yuwen Chen , Richard Chang Cc: Brian Geffon , Fengyu Lian , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky Subject: [PATCHv6 4/6] zram: drop wb_limit_lock Date: Sat, 22 Nov 2025 16:40:27 +0900 Message-ID: <20251122074029.3948921-5-senozhatsky@chromium.org> X-Mailer: git-send-email 2.52.0.460.gd25c4c69ec-goog In-Reply-To: <20251122074029.3948921-1-senozhatsky@chromium.org> References: <20251122074029.3948921-1-senozhatsky@chromium.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" We don't need wb_limit_lock. Writeback limit setters take an exclusive write zram init_lock, while wb_limit modifications happen only from a single task and under zram read init_lock. No concurrent wb_limit modifications are possible (we permit only one post-processing task at a time). Add lockdep assertions to wb_limit mutators. While at it, fixup coding styles. 
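[ Illustration only, not zram code: a userspace analogue of the invariant described above, with a pthread rwlock standing in for init_lock. The attribute setter takes the lock exclusively, while the single writeback task may modify the counter under the shared lock precisely because it is the only shared-side mutator. ]

/* wb_limit_demo.c - build with: cc -O2 wb_limit_demo.c -o wb_limit_demo -pthread */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t init_lock = PTHREAD_RWLOCK_INITIALIZER;
static unsigned long long wb_limit = 1024;

/* "Attribute store" side: takes the lock exclusively, like down_write(). */
static void *limit_store(void *arg)
{
        (void)arg;
        pthread_rwlock_wrlock(&init_lock);
        wb_limit = 2048;
        pthread_rwlock_unlock(&init_lock);
        return NULL;
}

/*
 * The one and only "writeback task": the shared lock is enough, because
 * no other shared-lock holder ever touches wb_limit. A second shared-side
 * mutator would reintroduce the race and the need for a separate lock.
 */
static void *writeback_task(void *arg)
{
        (void)arg;
        for (int i = 0; i < 100; i++) {
                pthread_rwlock_rdlock(&init_lock);
                if (wb_limit > 0)
                        wb_limit--;
                pthread_rwlock_unlock(&init_lock);
        }
        return NULL;
}

int main(void)
{
        pthread_t store, wb;

        pthread_create(&store, NULL, limit_store, NULL);
        pthread_create(&wb, NULL, writeback_task, NULL);
        pthread_join(store, NULL);
        pthread_join(wb, NULL);
        printf("wb_limit = %llu\n", wb_limit);
        return 0;
}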
Signed-off-by: Sergey Senozhatsky Reviewed-by: Brian Geffon --- drivers/block/zram/zram_drv.c | 22 +++++----------------- drivers/block/zram/zram_drv.h | 1 - 2 files changed, 5 insertions(+), 18 deletions(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 8dd733707a40..806497225603 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -532,9 +532,7 @@ static ssize_t writeback_limit_enable_store(struct devi= ce *dev, return ret; =20 down_write(&zram->init_lock); - spin_lock(&zram->wb_limit_lock); zram->wb_limit_enable =3D val; - spin_unlock(&zram->wb_limit_lock); up_write(&zram->init_lock); ret =3D len; =20 @@ -549,9 +547,7 @@ static ssize_t writeback_limit_enable_show(struct devic= e *dev, struct zram *zram =3D dev_to_zram(dev); =20 down_read(&zram->init_lock); - spin_lock(&zram->wb_limit_lock); val =3D zram->wb_limit_enable; - spin_unlock(&zram->wb_limit_lock); up_read(&zram->init_lock); =20 return sysfs_emit(buf, "%d\n", val); @@ -569,9 +565,7 @@ static ssize_t writeback_limit_store(struct device *dev, return ret; =20 down_write(&zram->init_lock); - spin_lock(&zram->wb_limit_lock); zram->bd_wb_limit =3D val; - spin_unlock(&zram->wb_limit_lock); up_write(&zram->init_lock); ret =3D len; =20 @@ -579,15 +573,13 @@ static ssize_t writeback_limit_store(struct device *d= ev, } =20 static ssize_t writeback_limit_show(struct device *dev, - struct device_attribute *attr, char *buf) + struct device_attribute *attr, char *buf) { u64 val; struct zram *zram =3D dev_to_zram(dev); =20 down_read(&zram->init_lock); - spin_lock(&zram->wb_limit_lock); val =3D zram->bd_wb_limit; - spin_unlock(&zram->wb_limit_lock); up_read(&zram->init_lock); =20 return sysfs_emit(buf, "%llu\n", val); @@ -869,18 +861,18 @@ static struct zram_wb_ctl *init_wb_ctl(struct zram *z= ram) =20 static void zram_account_writeback_rollback(struct zram *zram) { - spin_lock(&zram->wb_limit_lock); + lockdep_assert_held_read(&zram->init_lock); + if (zram->wb_limit_enable) zram->bd_wb_limit +=3D 1UL << (PAGE_SHIFT - 12); - spin_unlock(&zram->wb_limit_lock); } =20 static void zram_account_writeback_submit(struct zram *zram) { - spin_lock(&zram->wb_limit_lock); + lockdep_assert_held_read(&zram->init_lock); + if (zram->wb_limit_enable && zram->bd_wb_limit > 0) zram->bd_wb_limit -=3D 1UL << (PAGE_SHIFT - 12); - spin_unlock(&zram->wb_limit_lock); } =20 static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *= req) @@ -1005,13 +997,10 @@ static int zram_writeback_slots(struct zram *zram, u32 index =3D 0; =20 while ((pps =3D select_pp_slot(ctl))) { - spin_lock(&zram->wb_limit_lock); if (zram->wb_limit_enable && !zram->bd_wb_limit) { - spin_unlock(&zram->wb_limit_lock); ret =3D -EIO; break; } - spin_unlock(&zram->wb_limit_lock); =20 while (!req) { req =3D zram_select_idle_req(wb_ctl); @@ -2962,7 +2951,6 @@ static int zram_add(void) init_rwsem(&zram->init_lock); #ifdef CONFIG_ZRAM_WRITEBACK zram->wb_batch_size =3D 32; - spin_lock_init(&zram->wb_limit_lock); #endif =20 /* gendisk structure */ diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h index 1a647f42c1a4..c6d94501376c 100644 --- a/drivers/block/zram/zram_drv.h +++ b/drivers/block/zram/zram_drv.h @@ -127,7 +127,6 @@ struct zram { bool claim; /* Protected by disk->open_mutex */ #ifdef CONFIG_ZRAM_WRITEBACK struct file *backing_dev; - spinlock_t wb_limit_lock; bool wb_limit_enable; u32 wb_batch_size; u64 bd_wb_limit; --=20 2.52.0.460.gd25c4c69ec-goog From nobody Tue Dec 2 01:05:35 2025 Received: 
from mail-pl1-f178.google.com (mail-pl1-f178.google.com [209.85.214.178]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9D87D2DC34F for ; Sat, 22 Nov 2025 07:41:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.178 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797263; cv=none; b=J2vUuUI8ohFT9Mg3ZxUghXTeXjSOfXFuYwd0KMNZP6EaTa16aPI4blpG/r4KPLCUqlr6frqYIWKyOUv9XBqsq26bnS0AwCsBvMjhmu2qU54SddcGBc9k6R4l4leRFCqOP/wLO53YYe9gsOqo5ZwIdmfI5oT2q/1WXQr/ALw3fBA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797263; c=relaxed/simple; bh=qDK3/KDP2yGyLUfSxb9ErGoYh406S27gFyX0GJy3yrI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=TQTsk/mpARb1YX7X7k9beLGtSeTW8or/ccZ/y6n8UyeKraDEcu3Rc8hN6pnDidP9i9wSxaUOCxP2jqYDunL8sSjeZb3L8tUL28WKTI/Ez6koMizbJpLYxCKmd2EApmNm8l4W1RckiHvM8RaHZPIr5p3RR8pkAdcOVTyt5klnqLs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org; spf=pass smtp.mailfrom=chromium.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b=kKpTt6Zv; arc=none smtp.client-ip=209.85.214.178 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=chromium.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="kKpTt6Zv" Received: by mail-pl1-f178.google.com with SMTP id d9443c01a7336-29516a36affso39237065ad.3 for ; Fri, 21 Nov 2025 23:41:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1763797261; x=1764402061; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=eqXmvB06Z2fB+k4+MBqw/8L077gQXIHuVUXIRxAoKgE=; b=kKpTt6ZvwrNmZa+0BzJ2BlB4gskRWkG8l3MoCNcJBuCpdwHWK1o5xemPEV0HgQFkMb 82v0gKyoGc6Snk2123g/Q2/rDZJgHWP+gZEVWZQCCWH3TsmR41f+EyiEWtppvm7bZaQp 3ZTrTGiAFJJkGU3oM0mwW2fv9LXEk+3LJxnEA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1763797261; x=1764402061; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=eqXmvB06Z2fB+k4+MBqw/8L077gQXIHuVUXIRxAoKgE=; b=FbcBfTQOVErYXrm5wK4lbKjmLTByBph5DSjk0x7Ci69eRAjVx/ZccDCAWWAnFj5PsQ ZQDE6kYPNMRuGMJkwoZShtG8aAo+D40rzYJYB8jax4n/KKx7l97r0iQ2x5L5FOxtZ3a7 2MhMmuIQPGGnjX3136rQKLEfV4vykkHaqGqaBskLePjCxDv50crcMA8MaDnllQC0fDzO QwgruPJCQqvkkaSqgHeltFXg7BsDbaq16lgIhz2ct5geIei1If0NoDx2YNemtxOAnbAW 1y0AvV1epSp6H2OXVTcTqY2zwAW6U1YHRUj2UpXsG9H1+lq+Xs8y8i2WiZCPm81OoTqN Q7PA== X-Forwarded-Encrypted: i=1; AJvYcCWJJrPU6/p4DpY1BWC2VCOmh9NiHcsiZw2w73UN7lhm+xzCdvVp1nNX0fpcwkdduM45KkeV6eqOcvhXQn0=@vger.kernel.org X-Gm-Message-State: AOJu0YwhJ9Kmgoa4g/vchVdrh3FKhxnmFxBaW+FY1YZGHF9mVpNRIUyd lyf77teVjMbylq8YWPAd1WGIjaf8aYl7auP0RPRfmFmj+babRMlxdIzwQMRwFCghJw== X-Gm-Gg: ASbGncsbn8ewWIa2/9dPGGeFtbtwKLe01CY8a4Ua2H/6pjibWQwLD6RYEdhowPRz83H RkVOqCL8tfpAJDqXzQuVxRgHYEvOhqC/2MUnP/B/8Ht0SYaBNT+KtNgTZfROKbXDfeUo1atBgeU 9bpg3Qkx+F5Hypt3/ZoA2FWzZbY2fxnDFCz7Fe7n1d0uJfF8tk+GM7BK0OCRovHQg8yg3b9UeNf 
wgnTOrCkzD2ywN3p3Xr9hLji1oESud6t5jYxiS2y/NgAhA9bZF3ntszn75OLWJAP756DCLtIt9L +eImh6CqFfj3iV1/Q5AwClusMNN3CxXwS2jjOp6yjYM1PhF/BPBv2zKQ52XzrREv52aOUAkKHBi fDKUpcaR0CvCoKoEbyK6Ku/4jIPy/VWm/NKd/50fPpagBrEXhKq7wEKxSYYyHy3oOphLLDHdvXX 0pO/hi/gIfP9b2kCG0PHn4VtoXQUR1GSf8YkI35LcfnI6b7MIiIrOkY0okprHY9d3sXpwco4zM7 bmWwWQ2hey+ X-Google-Smtp-Source: AGHT+IFDKI29A6Uqi041LAbC6kcpz5R1IMRKmCudzuYMIMwbcjwLb7vtZ5IKwHJVMEc42d2kp0CPqw== X-Received: by 2002:a17:90b:3cc3:b0:32e:d600:4fdb with SMTP id 98e67ed59e1d1-34733ef71f2mr5625087a91.18.1763797260926; Fri, 21 Nov 2025 23:41:00 -0800 (PST) Received: from tigerii.tok.corp.google.com ([2a00:79e0:2031:6:948e:149d:963b:f660]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29b5b138628sm77771555ad.31.2025.11.21.23.40.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Nov 2025 23:41:00 -0800 (PST) From: Sergey Senozhatsky To: Andrew Morton , Minchan Kim , Yuwen Chen , Richard Chang Cc: Brian Geffon , Fengyu Lian , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky Subject: [PATCHv6 5/6] zram: rework bdev block allocation Date: Sat, 22 Nov 2025 16:40:28 +0900 Message-ID: <20251122074029.3948921-6-senozhatsky@chromium.org> X-Mailer: git-send-email 2.52.0.460.gd25c4c69ec-goog In-Reply-To: <20251122074029.3948921-1-senozhatsky@chromium.org> References: <20251122074029.3948921-1-senozhatsky@chromium.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
First, writeback bdev ->bitmap bits are set only from one context, as we can have only a single task performing writeback, so we cannot race with anything else. Remove the retry path. Second, we always check the ZRAM_WB flag to distinguish written-back slots, so a 0 bdev block index cannot be confused with a 0 handle. We can use the first bdev block (bit 0) for writeback as well. While at it, give the functions slightly more accurate names: we don't alloc/free anything there, we reserve a block for async writeback or release the block.
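[ Illustration only, not zram code: a minimal userspace sketch of the reservation scheme after this patch, using a plain byte array instead of the kernel bitmap API. A single reserving context needs no test_and_set retry loop, and with ~0UL as the "no block" marker, index 0 becomes a usable block. ]

/* blk_reserve_demo.c - build with: cc -O2 blk_reserve_demo.c -o blk_reserve_demo */
#include <stdio.h>

#define INVALID_BLOCK   (~0UL)          /* cf. INVALID_BDEV_BLOCK in the patch */
#define NR_BLOCKS       8UL

static unsigned char bitmap[NR_BLOCKS]; /* one byte per block, for brevity */

/* Only one context ever reserves blocks, so scan-and-set needs no retry. */
static unsigned long reserve_block(void)
{
        for (unsigned long i = 0; i < NR_BLOCKS; i++) {
                if (!bitmap[i]) {
                        bitmap[i] = 1;
                        return i;       /* index 0 is a perfectly valid block */
                }
        }
        return INVALID_BLOCK;           /* backing device is full */
}

static void release_block(unsigned long blk)
{
        if (blk != INVALID_BLOCK)
                bitmap[blk] = 0;
}

int main(void)
{
        unsigned long blk;

        while ((blk = reserve_block()) != INVALID_BLOCK)
                printf("reserved block %lu\n", blk);
        printf("no free blocks left\n");

        release_block(3);
        printf("re-reserved block %lu\n", reserve_block());
        return 0;
}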
Signed-off-by: Sergey Senozhatsky Reviewed-by: Brian Geffon --- drivers/block/zram/zram_drv.c | 37 +++++++++++++++++------------------ 1 file changed, 18 insertions(+), 19 deletions(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 806497225603..1f7e9e914d34 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -500,6 +500,8 @@ static ssize_t idle_store(struct device *dev, } =20 #ifdef CONFIG_ZRAM_WRITEBACK +#define INVALID_BDEV_BLOCK (~0UL) + struct zram_wb_ctl { /* idle list is accessed only by the writeback task, no concurency */ struct list_head idle_reqs; @@ -746,23 +748,20 @@ static ssize_t backing_dev_store(struct device *dev, return err; } =20 -static unsigned long alloc_block_bdev(struct zram *zram) +static unsigned long zram_reserve_bdev_block(struct zram *zram) { - unsigned long blk_idx =3D 1; -retry: - /* skip 0 bit to confuse zram.handle =3D 0 */ - blk_idx =3D find_next_zero_bit(zram->bitmap, zram->nr_pages, blk_idx); - if (blk_idx =3D=3D zram->nr_pages) - return 0; + unsigned long blk_idx; =20 - if (test_and_set_bit(blk_idx, zram->bitmap)) - goto retry; + blk_idx =3D find_next_zero_bit(zram->bitmap, zram->nr_pages, 0); + if (blk_idx =3D=3D zram->nr_pages) + return INVALID_BDEV_BLOCK; =20 + set_bit(blk_idx, zram->bitmap); atomic64_inc(&zram->stats.bd_count); return blk_idx; } =20 -static void free_block_bdev(struct zram *zram, unsigned long blk_idx) +static void zram_release_bdev_block(struct zram *zram, unsigned long blk_i= dx) { int was_set; =20 @@ -887,7 +886,7 @@ static int zram_writeback_complete(struct zram *zram, s= truct zram_wb_req *req) * (if enabled). */ zram_account_writeback_rollback(zram); - free_block_bdev(zram, req->blk_idx); + zram_release_bdev_block(zram, req->blk_idx); return err; } =20 @@ -901,7 +900,7 @@ static int zram_writeback_complete(struct zram *zram, s= truct zram_wb_req *req) * finishes. 
*/ if (!zram_test_flag(zram, index, ZRAM_PP_SLOT)) { - free_block_bdev(zram, req->blk_idx); + zram_release_bdev_block(zram, req->blk_idx); goto out; } =20 @@ -990,8 +989,8 @@ static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl, struct zram_wb_ctl *wb_ctl) { + unsigned long blk_idx =3D INVALID_BDEV_BLOCK; struct zram_wb_req *req =3D NULL; - unsigned long blk_idx =3D 0; struct zram_pp_slot *pps; int ret =3D 0, err =3D 0; u32 index =3D 0; @@ -1023,9 +1022,9 @@ static int zram_writeback_slots(struct zram *zram, ret =3D err; } =20 - if (!blk_idx) { - blk_idx =3D alloc_block_bdev(zram); - if (!blk_idx) { + if (blk_idx =3D=3D INVALID_BDEV_BLOCK) { + blk_idx =3D zram_reserve_bdev_block(zram); + if (blk_idx =3D=3D INVALID_BDEV_BLOCK) { ret =3D -ENOSPC; break; } @@ -1059,7 +1058,7 @@ static int zram_writeback_slots(struct zram *zram, __bio_add_page(&req->bio, req->page, PAGE_SIZE, 0); =20 zram_submit_wb_request(zram, wb_ctl, req); - blk_idx =3D 0; + blk_idx =3D INVALID_BDEV_BLOCK; req =3D NULL; cond_resched(); continue; @@ -1366,7 +1365,7 @@ static int read_from_bdev(struct zram *zram, struct p= age *page, return -EIO; } =20 -static void free_block_bdev(struct zram *zram, unsigned long blk_idx) +static void zram_release_bdev_block(struct zram *zram, unsigned long blk_i= dx) { } #endif @@ -1890,7 +1889,7 @@ static void zram_free_page(struct zram *zram, size_t = index) =20 if (zram_test_flag(zram, index, ZRAM_WB)) { zram_clear_flag(zram, index, ZRAM_WB); - free_block_bdev(zram, zram_get_handle(zram, index)); + zram_release_bdev_block(zram, zram_get_handle(zram, index)); goto out; } =20 --=20 2.52.0.460.gd25c4c69ec-goog From nobody Tue Dec 2 01:05:35 2025 Received: from mail-pl1-f171.google.com (mail-pl1-f171.google.com [209.85.214.171]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 065512F6919 for ; Sat, 22 Nov 2025 07:41:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.171 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797265; cv=none; b=boWM7mBS5nxjsriV7XyhKGNozgK5gOP4mVsAwk8sj+UlCaq/1L3ODQamxXQ7e7QmCE7SipZvGsZa4ggUFdGxNI/lEEM4XEjlVt/HVb1En2B4qSbFo9K4GJjzuq99NZpAFW/bdqpUpNcMZecBQqG37Vfwayvx+ZHooaN3GQyaXvY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763797265; c=relaxed/simple; bh=qElJlRCsFrs2BiuIGEpQgPbhmv6Qt0jtFelo/Sc3eqA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PIe/EfY/L2u2nlcHoP/75loILovwHW3xL97Cuf2bYvo7I7xZhqmCNyV0VIzV1FnvlkBn1aHFY7XAemwwFVettQT51Kb0dH4GO0HZ+quVAopGJ95U9GX+ilfXczMccwYfBiTCido5QloMYKdnK+smy4PEUtUhet6NyLGc4dfO3w4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org; spf=pass smtp.mailfrom=chromium.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b=DxlxOgIu; arc=none smtp.client-ip=209.85.214.171 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=chromium.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="DxlxOgIu" Received: by mail-pl1-f171.google.com with SMTP id d9443c01a7336-297f35be2ffso42001475ad.2 for ; Fri, 21 Nov 2025 23:41:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=chromium.org; s=google; t=1763797263; x=1764402063; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LukW8/snVEFz9ku8flFGtdBZP5+NLLDt4ByDCK08AXY=; b=DxlxOgIukSzU+b8jpRVFOByvHBNKNIUyBNWRJlwVYmKPMJkv0csXwp9ZF+QfVZtuSA heHumVbvH2br7IPukBcS03iirWgTZosPoo9+aG39eO+/Akw+FEvaYDXwvBt8sOSnzEyB mipvsk73leBSsFImjJvgfEO6yfdnKmNblRpOs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1763797263; x=1764402063; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=LukW8/snVEFz9ku8flFGtdBZP5+NLLDt4ByDCK08AXY=; b=jS86k+7/BZI9thhpQX+DPGi/q4By3QTSkKnF5W3GHSGaT8PR0eWJNi55QsC/WY7ObD zu72wyq7wUB8tLw1UqrWg5r7HG8/pe8ZTjb4qMvD5+baXvs8X0Mf5/62hoygbabTiia4 XlRImuICNKIJDY7mQa/kNQOmtXn/mrmh4puXfzfzkU7gphjIHPG5j3zy0w2UdB20FTCn 6n7ZM70fMJTrgbl4lvosZWbHj3RHBzpFLXKbf1CWu2TdVOC5ge5mPagIauJPO3BGbuHp X1ssSDY7ZijWK4oDTCYayQu0ijSRCTVequYz+aVhHY7nhlGTfgreb/+XGzPjxHvhU0eY xzwQ== X-Forwarded-Encrypted: i=1; AJvYcCW7pbVqex4DzPPuTZ+SQjQ7vLfWQi6K1gqq5EfvX1AhAceIr2WlSUBxtpaMcTr//CwHIzh5BDdynjVYGqI=@vger.kernel.org X-Gm-Message-State: AOJu0YwKL6HDw8PSXIHDdyWlPZ3g4/bG2w3qNn9JcpmItDZG3pkzgqCt BbkwrU/GbcDGWbohCUYLMl2SYBKoD0EaURlDChXKiVkxVsPBgP4G6LOs+MD0ytaNKg== X-Gm-Gg: ASbGncv2JI8zYxAKiTDFeX1Z67p/U8UMmbzp68Xs45COkBdAywIQ0GooWMa2ny9MOA0 lOMF3roCRL3z5vX3Re+PzNte3ttp3/TOlglynfDbB9BwdFgDHIDBaSedYIatDEI6Z0Ju8A/7aeL CC5KpdQdnT3C1tzN5cQh6AKlrbGbJYIDLubV9+vsTaYpeh0yYb7rd/i2orfz1XKPwlECMpmZtkp VOQusySY7FtP990V1VYQruWbrJEuJxxf+uIeezr7MMLzceehUFeBzmi0GPDUKTq1MHO8N47OGJ9 7DzLgGv5phLr8UzU7steETARjaZ2MIaNpvS1x/KsXxOZDpfVECKitAk5yyPLO6MDFBGofIC0b3+ 5n3p8ISkRgXoufHGt50DcIEADH9RYtP1Ub7U/o41UiktnkXKvV9tP1ufZudHVg14Zg1Nrq/IdD6 nqgLjzUSWo/bxosRGHk9enEFI0LT5qHD+NSalvN/P/Dc/jpxkIxdFGk6s+HLY2rygC/IcCTtITs A== X-Google-Smtp-Source: AGHT+IG5fXLGOtdx1TXZEsH8pnjPldKBNeNKvpm0G0lpVW65pfoy/FrInnxHNz+d96LXs8VdsDk4Tg== X-Received: by 2002:a17:903:19d0:b0:295:2276:6704 with SMTP id d9443c01a7336-29b6c6cdf6cmr49815225ad.51.1763797263463; Fri, 21 Nov 2025 23:41:03 -0800 (PST) Received: from tigerii.tok.corp.google.com ([2a00:79e0:2031:6:948e:149d:963b:f660]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29b5b138628sm77771555ad.31.2025.11.21.23.41.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 21 Nov 2025 23:41:03 -0800 (PST) From: Sergey Senozhatsky To: Andrew Morton , Minchan Kim , Yuwen Chen , Richard Chang Cc: Brian Geffon , Fengyu Lian , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky Subject: [PATCHv6 6/6] zram: read slot block idx under slot lock Date: Sat, 22 Nov 2025 16:40:29 +0900 Message-ID: <20251122074029.3948921-7-senozhatsky@chromium.org> X-Mailer: git-send-email 2.52.0.460.gd25c4c69ec-goog In-Reply-To: <20251122074029.3948921-1-senozhatsky@chromium.org> References: <20251122074029.3948921-1-senozhatsky@chromium.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Read slot's block id under slot-lock. 
We release the slot-lock for the bdev read so, technically, the slot can still get freed in the meantime, but at least we will read the bdev block (page) that holds the previously known slot data, not whatever bdev block slot->handle points to at that point, which can be anything. Signed-off-by: Sergey Senozhatsky --- drivers/block/zram/zram_drv.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 1f7e9e914d34..3428f647d0a7 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -1995,14 +1995,14 @@ static int zram_read_page(struct zram *zram, struct= page *page, u32 index, ret =3D zram_read_from_zspool(zram, page, index); zram_slot_unlock(zram, index); } else { + unsigned long blk_idx =3D zram_get_handle(zram, index); + /* * The slot should be unlocked before reading from the backing * device. */ zram_slot_unlock(zram, index); - - ret =3D read_from_bdev(zram, page, zram_get_handle(zram, index), - parent); + ret =3D read_from_bdev(zram, page, blk_idx, parent); } =20 /* Should NEVER happen. Return bio error if it does. */ --=20 2.52.0.460.gd25c4c69ec-goog