From nobody Mon Feb 9 04:07:38 2026
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Yuwen Chen, Richard Chang
Cc: Brian Geffon, Fengyu Lian, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky, Minchan Kim
Subject: [PATCHv3 1/4] zram: introduce writeback bio batching support
Date: Sat, 15 Nov 2025 11:34:44 +0900
Message-ID: <20251115023447.495417-2-senozhatsky@chromium.org>
In-Reply-To: <20251115023447.495417-1-senozhatsky@chromium.org>
References: <20251115023447.495417-1-senozhatsky@chromium.org>

From: Yuwen Chen

Currently, zram writeback supports only a single in-flight bio: it waits
for bio completion before post-processing the next pp-slot.  This works,
in general, but has certain throughput limitations.  Implement batched
(multiple) bio writeback support to take advantage of parallel request
processing and better request scheduling.

For the time being, the writeback batch size (the maximum number of
in-flight bio requests) is set to 32 for all devices.  A follow-up patch
adds a writeback_batch_size device attribute, so that the batch size
becomes run-time configurable.

Please refer to [1] and [2] for benchmarks.
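For reviewers unfamiliar with the interface: the path exercised here is
driven entirely through sysfs.  A minimal illustrative session (device
names and sizes are examples, not part of this patch; requires
CONFIG_ZRAM_WRITEBACK and root):

```sh
# Attach a backing device before setting disksize, then write idle pages back.
echo /dev/sdb1 > /sys/block/zram0/backing_dev
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0 && swapon /dev/zram0

echo all > /sys/block/zram0/idle        # mark current slots idle
echo idle > /sys/block/zram0/writeback  # with this patch: up to 32 in-flight bios
```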
[1] https://lore.kernel.org/linux-block/tencent_B2DC37E3A2AED0E7F179365FCB5D82455B08@qq.com
[2] https://lore.kernel.org/linux-block/tencent_0FBBFC8AE0B97BC63B5D47CE1FF2BABFDA09@qq.com

[senozhatsky: significantly reworked the initial patch so that the
 approach and implementation resemble current zram post-processing code]
Signed-off-by: Yuwen Chen
Signed-off-by: Sergey Senozhatsky
Co-developed-by: Richard Chang
Suggested-by: Minchan Kim
---
 drivers/block/zram/zram_drv.c | 343 +++++++++++++++++++++++++++-------
 1 file changed, 277 insertions(+), 66 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index a43074657531..84e72c3bb280 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -500,6 +500,24 @@ static ssize_t idle_store(struct device *dev,
 }
 
 #ifdef CONFIG_ZRAM_WRITEBACK
+struct zram_wb_ctl {
+	struct list_head idle_reqs;
+	struct list_head inflight_reqs;
+
+	atomic_t num_inflight;
+	struct completion done;
+};
+
+struct zram_wb_req {
+	unsigned long blk_idx;
+	struct page *page;
+	struct zram_pp_slot *pps;
+	struct bio_vec bio_vec;
+	struct bio bio;
+
+	struct list_head entry;
+};
+
 static ssize_t writeback_limit_enable_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
@@ -734,20 +752,207 @@ static void read_from_bdev_async(struct zram *zram, struct page *page,
 	submit_bio(bio);
 }
 
-static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
+static void release_wb_req(struct zram_wb_req *req)
+{
+	__free_page(req->page);
+	kfree(req);
+}
+
+static void release_wb_ctl(struct zram_wb_ctl *wb_ctl)
+{
+	/* We should never have inflight requests at this point */
+	WARN_ON(!list_empty(&wb_ctl->inflight_reqs));
+
+	while (!list_empty(&wb_ctl->idle_reqs)) {
+		struct zram_wb_req *req;
+
+		req = list_first_entry(&wb_ctl->idle_reqs,
+				       struct zram_wb_req, entry);
+		list_del(&req->entry);
+		release_wb_req(req);
+	}
+
+	kfree(wb_ctl);
+}
+
+/* XXX: should be a per-device sysfs attr */
+#define ZRAM_WB_REQ_CNT 32
+
+static struct zram_wb_ctl *init_wb_ctl(void)
+{
+	struct zram_wb_ctl *wb_ctl;
+	int i;
+
+	wb_ctl = kmalloc(sizeof(*wb_ctl), GFP_KERNEL);
+	if (!wb_ctl)
+		return NULL;
+
+	INIT_LIST_HEAD(&wb_ctl->idle_reqs);
+	INIT_LIST_HEAD(&wb_ctl->inflight_reqs);
+	atomic_set(&wb_ctl->num_inflight, 0);
+	init_completion(&wb_ctl->done);
+
+	for (i = 0; i < ZRAM_WB_REQ_CNT; i++) {
+		struct zram_wb_req *req;
+
+		/*
+		 * This is a fatal condition only if we couldn't allocate
+		 * any requests at all.  Otherwise we just work with the
+		 * requests that we have successfully allocated, so that
+		 * writeback can still proceed, even if there is only one
+		 * request on the idle list.
+		 */
+		req = kzalloc(sizeof(*req), GFP_KERNEL | __GFP_NOWARN);
+		if (!req)
+			break;
+
+		req->page = alloc_page(GFP_KERNEL | __GFP_NOWARN);
+		if (!req->page) {
+			kfree(req);
+			break;
+		}
+
+		list_add(&req->entry, &wb_ctl->idle_reqs);
+	}
+
+	/* We couldn't allocate any requests, so writeback is not possible */
+	if (list_empty(&wb_ctl->idle_reqs))
+		goto release_wb_ctl;
+
+	return wb_ctl;
+
+release_wb_ctl:
+	release_wb_ctl(wb_ctl);
+	return NULL;
+}
+
+static void zram_account_writeback_rollback(struct zram *zram)
+{
+	spin_lock(&zram->wb_limit_lock);
+	if (zram->wb_limit_enable)
+		zram->bd_wb_limit += 1UL << (PAGE_SHIFT - 12);
+	spin_unlock(&zram->wb_limit_lock);
+}
+
+static void zram_account_writeback_submit(struct zram *zram)
+{
+	spin_lock(&zram->wb_limit_lock);
+	if (zram->wb_limit_enable && zram->bd_wb_limit > 0)
+		zram->bd_wb_limit -= 1UL << (PAGE_SHIFT - 12);
+	spin_unlock(&zram->wb_limit_lock);
+}
+
+static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *req)
+{
+	u32 index;
+	int err;
+
+	index = req->pps->index;
+	release_pp_slot(zram, req->pps);
+	req->pps = NULL;
+
+	err = blk_status_to_errno(req->bio.bi_status);
+	if (err) {
+		/*
+		 * Failed wb requests should not be accounted in wb_limit
+		 * (if enabled).
+		 */
+		zram_account_writeback_rollback(zram);
+		return err;
+	}
+
+	atomic64_inc(&zram->stats.bd_writes);
+	zram_slot_lock(zram, index);
+	/*
+	 * We release slot lock during writeback so slot can change under us:
+	 * slot_free() or slot_free() and zram_write_page(). In both cases
+	 * slot loses ZRAM_PP_SLOT flag. No concurrent post-processing can
+	 * set ZRAM_PP_SLOT on such slots until current post-processing
+	 * finishes.
+	 */
+	if (!zram_test_flag(zram, index, ZRAM_PP_SLOT))
+		goto out;
+
+	zram_free_page(zram, index);
+	zram_set_flag(zram, index, ZRAM_WB);
+	zram_set_handle(zram, index, req->blk_idx);
+	atomic64_inc(&zram->stats.pages_stored);
+
+out:
+	zram_slot_unlock(zram, index);
+	return 0;
+}
+
+static void zram_writeback_endio(struct bio *bio)
+{
+	struct zram_wb_ctl *wb_ctl = bio->bi_private;
+
+	if (atomic_dec_return(&wb_ctl->num_inflight) == 0)
+		complete(&wb_ctl->done);
+}
+
+static void zram_submit_wb_request(struct zram *zram,
+				   struct zram_wb_ctl *wb_ctl,
+				   struct zram_wb_req *req)
+{
+	/*
+	 * wb_limit (if enabled) should be adjusted before submission,
+	 * so that we don't over-submit.
+	 */
+	zram_account_writeback_submit(zram);
+	atomic_inc(&wb_ctl->num_inflight);
+	list_add_tail(&req->entry, &wb_ctl->inflight_reqs);
+	submit_bio(&req->bio);
+}
+
+static struct zram_wb_req *select_idle_req(struct zram_wb_ctl *wb_ctl)
+{
+	struct zram_wb_req *req;
+
+	req = list_first_entry_or_null(&wb_ctl->idle_reqs,
+				       struct zram_wb_req, entry);
+	if (req)
+		list_del(&req->entry);
+	return req;
+}
+
+static int zram_wb_wait_for_completion(struct zram *zram,
+				       struct zram_wb_ctl *wb_ctl)
+{
+	int ret = 0;
+
+	if (atomic_read(&wb_ctl->num_inflight))
+		wait_for_completion_io(&wb_ctl->done);
+
+	reinit_completion(&wb_ctl->done);
+	while (!list_empty(&wb_ctl->inflight_reqs)) {
+		struct zram_wb_req *req;
+		int err;
+
+		req = list_first_entry(&wb_ctl->inflight_reqs,
+				       struct zram_wb_req, entry);
+		list_move(&req->entry, &wb_ctl->idle_reqs);
+
+		err = zram_writeback_complete(zram, req);
+		if (err)
+			ret = err;
+	}
+
+	return ret;
+}
+
+static int zram_writeback_slots(struct zram *zram,
+				struct zram_pp_ctl *ctl,
+				struct zram_wb_ctl *wb_ctl)
+{
+	struct zram_wb_req *req = NULL;
 	unsigned long blk_idx = 0;
-	struct page *page = NULL;
 	struct zram_pp_slot *pps;
-	struct bio_vec bio_vec;
-	struct bio bio;
+	struct blk_plug io_plug;
 	int ret = 0, err;
-	u32 index;
-
-	page = alloc_page(GFP_KERNEL);
-	if (!page)
-		return -ENOMEM;
+	u32 index = 0;
 
+	blk_start_plug(&io_plug);
 	while ((pps = select_pp_slot(ctl))) {
 		spin_lock(&zram->wb_limit_lock);
 		if (zram->wb_limit_enable && !zram->bd_wb_limit) {
@@ -757,6 +962,26 @@ static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 		}
 		spin_unlock(&zram->wb_limit_lock);
 
+		while (!req) {
+			req = select_idle_req(wb_ctl);
+			if (req)
+				break;
+
+			blk_finish_plug(&io_plug);
+			err = zram_wb_wait_for_completion(zram, wb_ctl);
+			blk_start_plug(&io_plug);
+			/*
+			 * BIO errors are not fatal, we continue and simply
+			 * attempt to writeback the remaining objects (pages).
+			 * At the same time we need to signal user-space that
+			 * some writes (at least one, but also could be all of
+			 * them) were not successful and we do so by returning
+			 * the most recent BIO error.
+			 */
+			if (err)
+				ret = err;
+		}
+
 		if (!blk_idx) {
 			blk_idx = alloc_block_bdev(zram);
 			if (!blk_idx) {
@@ -765,7 +990,6 @@ static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 			}
 		}
 
-		index = pps->index;
 		zram_slot_lock(zram, index);
 		/*
 		 * scan_slots() sets ZRAM_PP_SLOT and releases slot lock, so
@@ -775,67 +999,46 @@ static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 		 */
 		if (!zram_test_flag(zram, index, ZRAM_PP_SLOT))
 			goto next;
-		if (zram_read_from_zspool(zram, page, index))
+		if (zram_read_from_zspool(zram, req->page, index))
 			goto next;
 		zram_slot_unlock(zram, index);
 
-		bio_init(&bio, zram->bdev, &bio_vec, 1,
-			 REQ_OP_WRITE | REQ_SYNC);
-		bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9);
-		__bio_add_page(&bio, page, PAGE_SIZE, 0);
-		/*
-		 * XXX: A single page IO would be inefficient for write
-		 * but it would be not bad as starter.
+		/*
+		 * From now on pp-slot is owned by the req, remove it from
+		 * its pp bucket.
 		 */
-		err = submit_bio_wait(&bio);
-		if (err) {
-			release_pp_slot(zram, pps);
-			/*
-			 * BIO errors are not fatal, we continue and simply
-			 * attempt to writeback the remaining objects (pages).
-			 * At the same time we need to signal user-space that
-			 * some writes (at least one, but also could be all of
-			 * them) were not successful and we do so by returning
-			 * the most recent BIO error.
-			 */
-			ret = err;
-			continue;
-		}
+		list_del_init(&pps->entry);
 
-		atomic64_inc(&zram->stats.bd_writes);
-		zram_slot_lock(zram, index);
-		/*
-		 * Same as above, we release slot lock during writeback so
-		 * slot can change under us: slot_free() or slot_free() and
-		 * reallocation (zram_write_page()). In both cases slot loses
-		 * ZRAM_PP_SLOT flag. No concurrent post-processing can set
-		 * ZRAM_PP_SLOT on such slots until current post-processing
-		 * finishes.
-		 */
-		if (!zram_test_flag(zram, index, ZRAM_PP_SLOT))
-			goto next;
+		req->blk_idx = blk_idx;
+		req->pps = pps;
+		bio_init(&req->bio, zram->bdev, &req->bio_vec, 1, REQ_OP_WRITE);
+		req->bio.bi_iter.bi_sector = req->blk_idx * (PAGE_SIZE >> 9);
+		req->bio.bi_end_io = zram_writeback_endio;
+		req->bio.bi_private = wb_ctl;
+		__bio_add_page(&req->bio, req->page, PAGE_SIZE, 0);
 
-		zram_free_page(zram, index);
-		zram_set_flag(zram, index, ZRAM_WB);
-		zram_set_handle(zram, index, blk_idx);
+		zram_submit_wb_request(zram, wb_ctl, req);
 		blk_idx = 0;
-		atomic64_inc(&zram->stats.pages_stored);
-		spin_lock(&zram->wb_limit_lock);
-		if (zram->wb_limit_enable && zram->bd_wb_limit > 0)
-			zram->bd_wb_limit -= 1UL << (PAGE_SHIFT - 12);
-		spin_unlock(&zram->wb_limit_lock);
+		req = NULL;
+		continue;
+
 next:
 		zram_slot_unlock(zram, index);
 		release_pp_slot(zram, pps);
-		cond_resched();
 	}
 
-	if (blk_idx)
-		free_block_bdev(zram, blk_idx);
-	if (page)
-		__free_page(page);
+	/*
+	 * Selected idle req, but never submitted it due to some error or
+	 * wb limit.
+	 */
+	if (req)
+		release_wb_req(req);
+
+	blk_finish_plug(&io_plug);
+	err = zram_wb_wait_for_completion(zram, wb_ctl);
+	if (err)
+		ret = err;
 
 	return ret;
 }
@@ -948,7 +1151,8 @@ static ssize_t writeback_store(struct device *dev,
 	struct zram *zram = dev_to_zram(dev);
 	u64 nr_pages = zram->disksize >> PAGE_SHIFT;
 	unsigned long lo = 0, hi = nr_pages;
-	struct zram_pp_ctl *ctl = NULL;
+	struct zram_pp_ctl *pp_ctl = NULL;
+	struct zram_wb_ctl *wb_ctl = NULL;
 	char *args, *param, *val;
 	ssize_t ret = len;
 	int err, mode = 0;
@@ -970,8 +1174,14 @@ static ssize_t writeback_store(struct device *dev,
 		goto release_init_lock;
 	}
 
-	ctl = init_pp_ctl();
-	if (!ctl) {
+	pp_ctl = init_pp_ctl();
+	if (!pp_ctl) {
+		ret = -ENOMEM;
+		goto release_init_lock;
+	}
+
+	wb_ctl = init_wb_ctl();
+	if (!wb_ctl) {
 		ret = -ENOMEM;
 		goto release_init_lock;
 	}
@@ -1000,7 +1210,7 @@ static ssize_t writeback_store(struct device *dev,
 			goto release_init_lock;
 		}
 
-		scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+		scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl);
 		break;
 	}
 
@@ -1011,7 +1221,7 @@ static ssize_t writeback_store(struct device *dev,
 			goto release_init_lock;
 		}
 
-		scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+		scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl);
 		break;
 	}
 
@@ -1022,7 +1232,7 @@ static ssize_t writeback_store(struct device *dev,
 			goto release_init_lock;
 		}
 
-		scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+		scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl);
 		continue;
 	}
 
@@ -1033,17 +1243,18 @@ static ssize_t writeback_store(struct device *dev,
 			goto release_init_lock;
 		}
 
-		scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+		scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl);
 		continue;
 	}
 }
 
-	err = zram_writeback_slots(zram, ctl);
+	err = zram_writeback_slots(zram, pp_ctl, wb_ctl);
 	if (err)
 		ret = err;
 
 release_init_lock:
-	release_pp_ctl(zram, ctl);
+	release_pp_ctl(zram, pp_ctl);
+	release_wb_ctl(wb_ctl);
 	atomic_set(&zram->pp_in_progress, 0);
 	up_read(&zram->init_lock);
 
-- 
2.52.0.rc1.455.g30608eb744-goog

From nobody Mon Feb 9 04:07:38 2026
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Yuwen Chen, Richard Chang
Cc: Brian Geffon, Fengyu Lian, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv3 2/4] zram: add writeback batch size device attr
Date: Sat, 15 Nov 2025 11:34:45 +0900
Message-ID: <20251115023447.495417-3-senozhatsky@chromium.org>
In-Reply-To: <20251115023447.495417-1-senozhatsky@chromium.org>
References: <20251115023447.495417-1-senozhatsky@chromium.org>

Introduce a writeback_batch_size device attribute so that the maximum
number of in-flight writeback bio requests can be configured at run time,
per device.  This essentially enables tunable batched bio writeback.
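Under the assumption that the attribute behaves as implemented below (a
write of 0 is clamped to 1), a tuning session might look like the
following (device name is illustrative):

```sh
cat /sys/block/zram0/writeback_batch_size    # default: 32
echo 64 > /sys/block/zram0/writeback_batch_size

echo 0 > /sys/block/zram0/writeback_batch_size
cat /sys/block/zram0/writeback_batch_size    # clamped to 1, i.e. single-bio writeback
```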
Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c | 48 ++++++++++++++++++++++++++++++-----
 drivers/block/zram/zram_drv.h |  1 +
 2 files changed, 43 insertions(+), 6 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 84e72c3bb280..e6fecea2e3bf 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -588,6 +588,42 @@ static ssize_t writeback_limit_show(struct device *dev,
 	return sysfs_emit(buf, "%llu\n", val);
 }
 
+static ssize_t writeback_batch_size_store(struct device *dev,
+					  struct device_attribute *attr,
+					  const char *buf, size_t len)
+{
+	struct zram *zram = dev_to_zram(dev);
+	u32 val;
+	ssize_t ret = -EINVAL;
+
+	if (kstrtouint(buf, 10, &val))
+		return ret;
+
+	if (!val)
+		val = 1;
+
+	down_read(&zram->init_lock);
+	zram->wb_batch_size = val;
+	up_read(&zram->init_lock);
+	ret = len;
+
+	return ret;
+}
+
+static ssize_t writeback_batch_size_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	u32 val;
+	struct zram *zram = dev_to_zram(dev);
+
+	down_read(&zram->init_lock);
+	val = zram->wb_batch_size;
+	up_read(&zram->init_lock);
+
+	return sysfs_emit(buf, "%u\n", val);
+}
+
 static void reset_bdev(struct zram *zram)
 {
 	if (!zram->backing_dev)
@@ -775,10 +811,7 @@ static void release_wb_ctl(struct zram_wb_ctl *wb_ctl)
 	kfree(wb_ctl);
 }
 
-/* XXX: should be a per-device sysfs attr */
-#define ZRAM_WB_REQ_CNT 32
-
-static struct zram_wb_ctl *init_wb_ctl(void)
+static struct zram_wb_ctl *init_wb_ctl(struct zram *zram)
 {
 	struct zram_wb_ctl *wb_ctl;
 	int i;
@@ -792,7 +825,7 @@ static struct zram_wb_ctl *init_wb_ctl(void)
 	atomic_set(&wb_ctl->num_inflight, 0);
 	init_completion(&wb_ctl->done);
 
-	for (i = 0; i < ZRAM_WB_REQ_CNT; i++) {
+	for (i = 0; i < zram->wb_batch_size; i++) {
 		struct zram_wb_req *req;
 
 		/*
@@ -1180,7 +1213,7 @@ static ssize_t writeback_store(struct device *dev,
 		goto release_init_lock;
 	}
 
-	wb_ctl = init_wb_ctl();
+	wb_ctl = init_wb_ctl(zram);
 	if (!wb_ctl) {
 		ret = -ENOMEM;
 		goto release_init_lock;
@@ -2821,6 +2854,7 @@ static DEVICE_ATTR_RW(backing_dev);
 static DEVICE_ATTR_WO(writeback);
 static DEVICE_ATTR_RW(writeback_limit);
 static DEVICE_ATTR_RW(writeback_limit_enable);
+static DEVICE_ATTR_RW(writeback_batch_size);
 #endif
 #ifdef CONFIG_ZRAM_MULTI_COMP
 static DEVICE_ATTR_RW(recomp_algorithm);
@@ -2842,6 +2876,7 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_writeback.attr,
 	&dev_attr_writeback_limit.attr,
 	&dev_attr_writeback_limit_enable.attr,
+	&dev_attr_writeback_batch_size.attr,
 #endif
 	&dev_attr_io_stat.attr,
 	&dev_attr_mm_stat.attr,
@@ -2903,6 +2938,7 @@ static int zram_add(void)
 
 	init_rwsem(&zram->init_lock);
 #ifdef CONFIG_ZRAM_WRITEBACK
+	zram->wb_batch_size = 32;
 	spin_lock_init(&zram->wb_limit_lock);
 #endif
 
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 6cee93f9c0d0..1a647f42c1a4 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -129,6 +129,7 @@ struct zram {
 	struct file *backing_dev;
 	spinlock_t wb_limit_lock;
 	bool wb_limit_enable;
+	u32 wb_batch_size;
 	u64 bd_wb_limit;
 	struct block_device *bdev;
 	unsigned long *bitmap;
-- 
2.52.0.rc1.455.g30608eb744-goog

From nobody Mon Feb 9 04:07:38 2026
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Yuwen Chen, Richard Chang
Cc: Brian Geffon, Fengyu Lian, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv3 3/4] zram: take write lock in wb limit store handlers
Date: Sat, 15 Nov 2025 11:34:46 +0900
Message-ID: <20251115023447.495417-4-senozhatsky@chromium.org>
In-Reply-To: <20251115023447.495417-1-senozhatsky@chromium.org>
References: <20251115023447.495417-1-senozhatsky@chromium.org>

Store handlers for writeback device attributes should take the zram
init_lock for writing.  While at it, fix up the coding style.

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index e6fecea2e3bf..76daf1e53859 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -519,7 +519,8 @@ struct zram_wb_req {
 };
 
 static ssize_t writeback_limit_enable_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t len)
+					    struct device_attribute *attr,
+					    const char *buf, size_t len)
 {
 	struct zram *zram = dev_to_zram(dev);
 	u64 val;
@@ -528,18 +529,19 @@ static ssize_t writeback_limit_enable_store(struct device *dev,
 	if (kstrtoull(buf, 10, &val))
 		return ret;
 
-	down_read(&zram->init_lock);
+	down_write(&zram->init_lock);
 	spin_lock(&zram->wb_limit_lock);
 	zram->wb_limit_enable = val;
 	spin_unlock(&zram->wb_limit_lock);
-	up_read(&zram->init_lock);
+	up_write(&zram->init_lock);
 	ret = len;
 
 	return ret;
 }
 
 static ssize_t writeback_limit_enable_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+					   struct device_attribute *attr,
+					   char *buf)
 {
 	bool val;
 	struct zram *zram = dev_to_zram(dev);
@@ -554,7 +556,8 @@ static ssize_t writeback_limit_enable_show(struct device *dev,
 }
 
 static ssize_t writeback_limit_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t len)
+				     struct device_attribute *attr,
+				     const char *buf, size_t len)
 {
 	struct zram *zram = dev_to_zram(dev);
 	u64 val;
@@ -563,11 +566,11 @@ static ssize_t writeback_limit_store(struct device *dev,
 	if (kstrtoull(buf, 10, &val))
 		return ret;
 
-	down_read(&zram->init_lock);
+	down_write(&zram->init_lock);
 	spin_lock(&zram->wb_limit_lock);
 	zram->bd_wb_limit = val;
 	spin_unlock(&zram->wb_limit_lock);
-	up_read(&zram->init_lock);
+	up_write(&zram->init_lock);
 	ret = len;
 
 	return ret;
-- 
2.52.0.rc1.455.g30608eb744-goog
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Yuwen Chen, Richard Chang
Cc: Brian Geffon, Fengyu Lian, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-block@vger.kernel.org,
	Sergey Senozhatsky
Subject: [PATCHv3 4/4] zram: drop wb_limit_lock
Date: Sat, 15 Nov 2025 11:34:47 +0900
Message-ID: <20251115023447.495417-5-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.52.0.rc1.455.g30608eb744-goog
In-Reply-To: <20251115023447.495417-1-senozhatsky@chromium.org>
References: <20251115023447.495417-1-senozhatsky@chromium.org>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

We don't need wb_limit_lock. Writeback limit setters take an exclusive
write zram init_lock, while wb_limit modifications happen only from a
single task and under zram read init_lock. No concurrent wb_limit
modifications are possible (we permit only one post-processing task at
a time). Add lockdep assertions to wb_limit mutators.
While at it, fix up coding style.

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c | 22 +++++-----------------
 drivers/block/zram/zram_drv.h |  1 -
 2 files changed, 5 insertions(+), 18 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 76daf1e53859..bc268670f852 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -530,9 +530,7 @@ static ssize_t writeback_limit_enable_store(struct device *dev,
 		return ret;
 
 	down_write(&zram->init_lock);
-	spin_lock(&zram->wb_limit_lock);
 	zram->wb_limit_enable = val;
-	spin_unlock(&zram->wb_limit_lock);
 	up_write(&zram->init_lock);
 	ret = len;
 
@@ -547,9 +545,7 @@ static ssize_t writeback_limit_enable_show(struct device *dev,
 	struct zram *zram = dev_to_zram(dev);
 
 	down_read(&zram->init_lock);
-	spin_lock(&zram->wb_limit_lock);
 	val = zram->wb_limit_enable;
-	spin_unlock(&zram->wb_limit_lock);
 	up_read(&zram->init_lock);
 
 	return sysfs_emit(buf, "%d\n", val);
@@ -567,9 +563,7 @@ static ssize_t writeback_limit_store(struct device *dev,
 		return ret;
 
 	down_write(&zram->init_lock);
-	spin_lock(&zram->wb_limit_lock);
 	zram->bd_wb_limit = val;
-	spin_unlock(&zram->wb_limit_lock);
 	up_write(&zram->init_lock);
 	ret = len;
 
@@ -577,15 +571,13 @@ static ssize_t writeback_limit_store(struct device *dev,
 }
 
 static ssize_t writeback_limit_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+				    struct device_attribute *attr, char *buf)
 {
 	u64 val;
 	struct zram *zram = dev_to_zram(dev);
 
 	down_read(&zram->init_lock);
-	spin_lock(&zram->wb_limit_lock);
 	val = zram->bd_wb_limit;
-	spin_unlock(&zram->wb_limit_lock);
 	up_read(&zram->init_lock);
 
 	return sysfs_emit(buf, "%llu\n", val);
@@ -864,18 +856,18 @@ static struct zram_wb_ctl *init_wb_ctl(struct zram *zram)
 
 static void zram_account_writeback_rollback(struct zram *zram)
 {
-	spin_lock(&zram->wb_limit_lock);
+	lockdep_assert_held_read(&zram->init_lock);
+
 	if (zram->wb_limit_enable)
 		zram->bd_wb_limit += 1UL << (PAGE_SHIFT - 12);
-	spin_unlock(&zram->wb_limit_lock);
 }
 
 static void zram_account_writeback_submit(struct zram *zram)
 {
-	spin_lock(&zram->wb_limit_lock);
+	lockdep_assert_held_read(&zram->init_lock);
+
 	if (zram->wb_limit_enable && zram->bd_wb_limit > 0)
 		zram->bd_wb_limit -= 1UL << (PAGE_SHIFT - 12);
-	spin_unlock(&zram->wb_limit_lock);
 }
 
 static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *req)
@@ -990,13 +982,10 @@ static int zram_writeback_slots(struct zram *zram,
 
 	blk_start_plug(&io_plug);
 	while ((pps = select_pp_slot(ctl))) {
-		spin_lock(&zram->wb_limit_lock);
 		if (zram->wb_limit_enable && !zram->bd_wb_limit) {
-			spin_unlock(&zram->wb_limit_lock);
 			ret = -EIO;
 			break;
 		}
-		spin_unlock(&zram->wb_limit_lock);
 
 		while (!req) {
 			req = select_idle_req(wb_ctl);
@@ -2942,7 +2931,6 @@ static int zram_add(void)
 	init_rwsem(&zram->init_lock);
 #ifdef CONFIG_ZRAM_WRITEBACK
 	zram->wb_batch_size = 32;
-	spin_lock_init(&zram->wb_limit_lock);
 #endif
 
 	/* gendisk structure */
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 1a647f42c1a4..c6d94501376c 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -127,7 +127,6 @@ struct zram {
 	bool claim;	/* Protected by disk->open_mutex */
 #ifdef CONFIG_ZRAM_WRITEBACK
 	struct file *backing_dev;
-	spinlock_t wb_limit_lock;
 	bool wb_limit_enable;
 	u32 wb_batch_size;
 	u64 bd_wb_limit;
-- 
2.52.0.rc1.455.g30608eb744-goog