From nobody Mon Feb 9 06:49:35 2026
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Yuwen Chen, Richard Chang
Cc: Brian Geffon, Fengyu Lian, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky, Minchan Kim
Subject: [PATCHv2 1/4] zram: introduce writeback bio batching support
Date: Thu, 13 Nov 2025 17:53:59 +0900
Message-ID: <20251113085402.1811522-2-senozhatsky@chromium.org>
In-Reply-To: <20251113085402.1811522-1-senozhatsky@chromium.org>
References: <20251113085402.1811522-1-senozhatsky@chromium.org>

From: Yuwen Chen

Currently, zram writeback supports only a single bio writeback operation:
it waits for bio completion before post-processing the next pp-slot.
This works, in general, but has certain throughput limitations.
Implement batched (multiple) bio writeback support to take advantage
of parallel request processing and better request scheduling.

For the time being the writeback batch size (the maximum number of
in-flight bio requests) is set to 1, so the behavior is the same as
the previous single-bio writeback.  This is addressed in a follow-up
patch, which adds a writeback_batch_size device attribute.

Please refer to [1] and [2] for benchmarks.
[1] https://lore.kernel.org/linux-block/tencent_B2DC37E3A2AED0E7F179365FCB5D82455B08@qq.com
[2] https://lore.kernel.org/linux-block/tencent_0FBBFC8AE0B97BC63B5D47CE1FF2BABFDA09@qq.com

[senozhatsky: significantly reworked the initial patch so that the approach
 and implementation resemble current zram post-processing code]

Signed-off-by: Yuwen Chen
Signed-off-by: Sergey Senozhatsky
Co-developed-by: Richard Chang
Suggested-by: Minchan Kim
---
 drivers/block/zram/zram_drv.c | 343 +++++++++++++++++++++++++++-------
 1 file changed, 278 insertions(+), 65 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index a43074657531..a0a939fd9d31 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -734,20 +734,226 @@ static void read_from_bdev_async(struct zram *zram, struct page *page,
 	submit_bio(bio);
 }
 
-static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
-{
-	unsigned long blk_idx = 0;
-	struct page *page = NULL;
+struct zram_wb_ctl {
+	struct list_head idle_reqs;
+	struct list_head inflight_reqs;
+
+	atomic_t num_inflight;
+	struct completion done;
+	struct blk_plug plug;
+};
+
+struct zram_wb_req {
+	unsigned long blk_idx;
+	struct page *page;
 	struct zram_pp_slot *pps;
 	struct bio_vec bio_vec;
 	struct bio bio;
-	int ret = 0, err;
+
+	struct list_head entry;
+};
+
+static void release_wb_req(struct zram_wb_req *req)
+{
+	__free_page(req->page);
+	kfree(req);
+}
+
+static void release_wb_ctl(struct zram_wb_ctl *wb_ctl)
+{
+	/* We should never have inflight requests at this point */
+	WARN_ON(!list_empty(&wb_ctl->inflight_reqs));
+
+	while (!list_empty(&wb_ctl->idle_reqs)) {
+		struct zram_wb_req *req;
+
+		req = list_first_entry(&wb_ctl->idle_reqs,
+				       struct zram_wb_req, entry);
+		list_del(&req->entry);
+		release_wb_req(req);
+	}
+
+	kfree(wb_ctl);
+}
+
+/* XXX: should be a per-device sysfs attr */
+#define ZRAM_WB_REQ_CNT 1
+
+static struct zram_wb_ctl *init_wb_ctl(void)
+{
+	struct zram_wb_ctl *wb_ctl;
+	int i;
+
+	wb_ctl = kmalloc(sizeof(*wb_ctl), GFP_KERNEL);
+	if (!wb_ctl)
+		return NULL;
+
+	INIT_LIST_HEAD(&wb_ctl->idle_reqs);
+	INIT_LIST_HEAD(&wb_ctl->inflight_reqs);
+	atomic_set(&wb_ctl->num_inflight, 0);
+	init_completion(&wb_ctl->done);
+
+	for (i = 0; i < ZRAM_WB_REQ_CNT; i++) {
+		struct zram_wb_req *req;
+
+		/*
+		 * This is a fatal condition only if we couldn't allocate
+		 * any requests at all.  Otherwise we just work with the
+		 * requests that we have successfully allocated, so that
+		 * writeback can still proceed, even if there is only one
+		 * request on the idle list.
+		 */
+		req = kzalloc(sizeof(*req), GFP_NOIO | __GFP_NOWARN);
+		if (!req)
+			break;
+
+		req->page = alloc_page(GFP_NOIO | __GFP_NOWARN);
+		if (!req->page) {
+			kfree(req);
+			break;
+		}
+
+		INIT_LIST_HEAD(&req->entry);
+		list_add(&req->entry, &wb_ctl->idle_reqs);
+	}
+
+	/* We couldn't allocate any requests, so writeback is not possible */
+	if (list_empty(&wb_ctl->idle_reqs))
+		goto release_wb_ctl;
+
+	return wb_ctl;
+
+release_wb_ctl:
+	release_wb_ctl(wb_ctl);
+	return NULL;
+}
+
+static void zram_account_writeback_rollback(struct zram *zram)
+{
+	spin_lock(&zram->wb_limit_lock);
+	if (zram->wb_limit_enable)
+		zram->bd_wb_limit += 1UL << (PAGE_SHIFT - 12);
+	spin_unlock(&zram->wb_limit_lock);
+}
+
+static void zram_account_writeback_submit(struct zram *zram)
+{
+	spin_lock(&zram->wb_limit_lock);
+	if (zram->wb_limit_enable && zram->bd_wb_limit > 0)
+		zram->bd_wb_limit -= 1UL << (PAGE_SHIFT - 12);
+	spin_unlock(&zram->wb_limit_lock);
+}
+
+static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *req)
+{
 	u32 index;
+	int err;
 
-	page = alloc_page(GFP_KERNEL);
-	if (!page)
-		return -ENOMEM;
+	index = req->pps->index;
+	release_pp_slot(zram, req->pps);
+	req->pps = NULL;
+
+	err = blk_status_to_errno(req->bio.bi_status);
+	if (err) {
+		/*
+		 * Failed wb requests should not be accounted in wb_limit
+		 * (if enabled).
+		 */
+		zram_account_writeback_rollback(zram);
+		return err;
+	}
 
+	atomic64_inc(&zram->stats.bd_writes);
+	zram_slot_lock(zram, index);
+	/*
+	 * We release slot lock during writeback so slot can change under us:
+	 * slot_free() or slot_free() and zram_write_page().  In both cases
+	 * slot loses ZRAM_PP_SLOT flag.  No concurrent post-processing can
+	 * set ZRAM_PP_SLOT on such slots until current post-processing
+	 * finishes.
+	 */
+	if (!zram_test_flag(zram, index, ZRAM_PP_SLOT))
+		goto out;
+
+	zram_free_page(zram, index);
+	zram_set_flag(zram, index, ZRAM_WB);
+	zram_set_handle(zram, index, req->blk_idx);
+	atomic64_inc(&zram->stats.pages_stored);
+
+out:
+	zram_slot_unlock(zram, index);
+	return 0;
+}
+
+static void zram_writeback_endio(struct bio *bio)
+{
+	struct zram_wb_ctl *wb_ctl = bio->bi_private;
+
+	if (atomic_dec_return(&wb_ctl->num_inflight) == 0)
+		complete(&wb_ctl->done);
+}
+
+static void zram_submit_wb_request(struct zram *zram,
+				   struct zram_wb_ctl *wb_ctl,
+				   struct zram_wb_req *req)
+{
+	/*
+	 * wb_limit (if enabled) should be adjusted before submission,
+	 * so that we don't over-submit.
+	 */
+	zram_account_writeback_submit(zram);
+	atomic_inc(&wb_ctl->num_inflight);
+	list_add_tail(&req->entry, &wb_ctl->inflight_reqs);
+	submit_bio(&req->bio);
+}
+
+static struct zram_wb_req *select_idle_req(struct zram_wb_ctl *wb_ctl)
+{
+	struct zram_wb_req *req;
+
+	req = list_first_entry_or_null(&wb_ctl->idle_reqs,
+				       struct zram_wb_req, entry);
+	if (req)
+		list_del(&req->entry);
+	return req;
+}
+
+static int zram_wb_wait_for_completion(struct zram *zram,
+				       struct zram_wb_ctl *wb_ctl)
+{
+	int ret = 0;
+
+	if (atomic_read(&wb_ctl->num_inflight))
+		wait_for_completion_io(&wb_ctl->done);
+
+	reinit_completion(&wb_ctl->done);
+	while (!list_empty(&wb_ctl->inflight_reqs)) {
+		struct zram_wb_req *req;
+		int err;
+
+		req = list_first_entry(&wb_ctl->inflight_reqs,
+				       struct zram_wb_req, entry);
+		list_move(&req->entry, &wb_ctl->idle_reqs);
+
+		err = zram_writeback_complete(zram, req);
+		if (err)
+			ret = err;
+	}
+
+	return ret;
+}
+
+static int zram_writeback_slots(struct zram *zram,
+				struct zram_pp_ctl *ctl,
+				struct zram_wb_ctl *wb_ctl)
+{
+	struct zram_wb_req *req = NULL;
+	unsigned long blk_idx = 0;
+	struct zram_pp_slot *pps;
+	int ret = 0, err;
+	u32 index = 0;
+
+	blk_start_plug(&wb_ctl->plug);
 	while ((pps = select_pp_slot(ctl))) {
 		spin_lock(&zram->wb_limit_lock);
 		if (zram->wb_limit_enable && !zram->bd_wb_limit) {
@@ -757,6 +963,26 @@ static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 		}
 		spin_unlock(&zram->wb_limit_lock);
 
+		while (!req) {
+			req = select_idle_req(wb_ctl);
+			if (req)
+				break;
+
+			blk_finish_plug(&wb_ctl->plug);
+			err = zram_wb_wait_for_completion(zram, wb_ctl);
+			blk_start_plug(&wb_ctl->plug);
+			/*
+			 * BIO errors are not fatal, we continue and simply
+			 * attempt to writeback the remaining objects (pages).
+			 * At the same time we need to signal user-space that
+			 * some writes (at least one, but also could be all of
+			 * them) were not successful and we do so by returning
+			 * the most recent BIO error.
+			 */
+			if (err)
+				ret = err;
+		}
+
 		if (!blk_idx) {
 			blk_idx = alloc_block_bdev(zram);
 			if (!blk_idx) {
@@ -765,7 +991,6 @@ static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 			}
 		}
 
-		index = pps->index;
 		zram_slot_lock(zram, index);
 		/*
 		 * scan_slots() sets ZRAM_PP_SLOT and relases slot lock, so
@@ -775,67 +1000,47 @@ static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 		 */
 		if (!zram_test_flag(zram, index, ZRAM_PP_SLOT))
 			goto next;
-		if (zram_read_from_zspool(zram, page, index))
+		if (zram_read_from_zspool(zram, req->page, index))
 			goto next;
 		zram_slot_unlock(zram, index);
 
-		bio_init(&bio, zram->bdev, &bio_vec, 1,
-			 REQ_OP_WRITE | REQ_SYNC);
-		bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9);
-		__bio_add_page(&bio, page, PAGE_SIZE, 0);
 		/*
-		 * XXX: A single page IO would be inefficient for write
-		 * but it would be not bad as starter.
+		 * From now on pp-slot is owned by the req, remove it from
+		 * its pps bucket.
 		 */
-		err = submit_bio_wait(&bio);
-		if (err) {
-			release_pp_slot(zram, pps);
-			/*
-			 * BIO errors are not fatal, we continue and simply
-			 * attempt to writeback the remaining objects (pages).
-			 * At the same time we need to signal user-space that
-			 * some writes (at least one, but also could be all of
-			 * them) were not successful and we do so by returning
-			 * the most recent BIO error.
-			 */
-			ret = err;
-			continue;
-		}
+		list_del_init(&pps->entry);
 
-		atomic64_inc(&zram->stats.bd_writes);
-		zram_slot_lock(zram, index);
-		/*
-		 * Same as above, we release slot lock during writeback so
-		 * slot can change under us: slot_free() or slot_free() and
-		 * reallocation (zram_write_page()).  In both cases slot loses
-		 * ZRAM_PP_SLOT flag.
-		 * No concurrent post-processing can set
-		 * ZRAM_PP_SLOT on such slots until current post-processing
-		 * finishes.
-		 */
-		if (!zram_test_flag(zram, index, ZRAM_PP_SLOT))
-			goto next;
+		req->blk_idx = blk_idx;
+		req->pps = pps;
+		bio_init(&req->bio, zram->bdev, &req->bio_vec, 1,
+			 REQ_OP_WRITE | REQ_SYNC);
+		req->bio.bi_iter.bi_sector = req->blk_idx * (PAGE_SIZE >> 9);
+		req->bio.bi_end_io = zram_writeback_endio;
+		req->bio.bi_private = wb_ctl;
+		__bio_add_page(&req->bio, req->page, PAGE_SIZE, 0);
 
-		zram_free_page(zram, index);
-		zram_set_flag(zram, index, ZRAM_WB);
-		zram_set_handle(zram, index, blk_idx);
+		zram_submit_wb_request(zram, wb_ctl, req);
 		blk_idx = 0;
-		atomic64_inc(&zram->stats.pages_stored);
-		spin_lock(&zram->wb_limit_lock);
-		if (zram->wb_limit_enable && zram->bd_wb_limit > 0)
-			zram->bd_wb_limit -= 1UL << (PAGE_SHIFT - 12);
-		spin_unlock(&zram->wb_limit_lock);
+		req = NULL;
+		continue;
+
 next:
 		zram_slot_unlock(zram, index);
 		release_pp_slot(zram, pps);
-		cond_resched();
 	}
 
-	if (blk_idx)
-		free_block_bdev(zram, blk_idx);
-	if (page)
-		__free_page(page);
+	/*
+	 * Selected idle req, but never submitted it due to some error or
+	 * wb limit.
+	 */
+	if (req)
+		release_wb_req(req);
+
+	blk_finish_plug(&wb_ctl->plug);
+	err = zram_wb_wait_for_completion(zram, wb_ctl);
+	if (err)
+		ret = err;
 
 	return ret;
 }
@@ -948,7 +1153,8 @@ static ssize_t writeback_store(struct device *dev,
 	struct zram *zram = dev_to_zram(dev);
 	u64 nr_pages = zram->disksize >> PAGE_SHIFT;
 	unsigned long lo = 0, hi = nr_pages;
-	struct zram_pp_ctl *ctl = NULL;
+	struct zram_pp_ctl *pp_ctl = NULL;
+	struct zram_wb_ctl *wb_ctl = NULL;
 	char *args, *param, *val;
 	ssize_t ret = len;
 	int err, mode = 0;
@@ -970,8 +1176,14 @@ static ssize_t writeback_store(struct device *dev,
 		goto release_init_lock;
 	}
 
-	ctl = init_pp_ctl();
-	if (!ctl) {
+	pp_ctl = init_pp_ctl();
+	if (!pp_ctl) {
+		ret = -ENOMEM;
+		goto release_init_lock;
+	}
+
+	wb_ctl = init_wb_ctl();
+	if (!wb_ctl) {
 		ret = -ENOMEM;
 		goto release_init_lock;
 	}
@@ -1000,7 +1212,7 @@ static ssize_t writeback_store(struct device *dev,
 			goto release_init_lock;
 		}
 
-		scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+		scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl);
 		break;
 	}
 
@@ -1011,7 +1223,7 @@ static ssize_t writeback_store(struct device *dev,
 			goto release_init_lock;
 		}
 
-		scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+		scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl);
 		break;
 	}
 
@@ -1022,7 +1234,7 @@ static ssize_t writeback_store(struct device *dev,
 			goto release_init_lock;
 		}
 
-		scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+		scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl);
 		continue;
 	}
 
@@ -1033,17 +1245,18 @@ static ssize_t writeback_store(struct device *dev,
 			goto release_init_lock;
 		}
 
-		scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+		scan_slots_for_writeback(zram, mode, lo, hi, pp_ctl);
 		continue;
 	}
 	}
 
-	err = zram_writeback_slots(zram, ctl);
+	err = zram_writeback_slots(zram, pp_ctl, wb_ctl);
 	if (err)
 		ret = err;
 
release_init_lock:
-	release_pp_ctl(zram, ctl);
+	release_pp_ctl(zram, pp_ctl);
+	release_wb_ctl(wb_ctl);
 	atomic_set(&zram->pp_in_progress, 0);
 	up_read(&zram->init_lock);
 
-- 
2.51.2.1041.gc1ab5b90ca-goog

From nobody Mon Feb 9 06:49:35 2026
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Yuwen Chen, Richard Chang
Cc: Brian Geffon, Fengyu Lian, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv2 2/4] zram: add writeback batch size device attr
Date: Thu, 13 Nov 2025 17:54:00 +0900
Message-ID: <20251113085402.1811522-3-senozhatsky@chromium.org>
In-Reply-To: <20251113085402.1811522-1-senozhatsky@chromium.org>
References: <20251113085402.1811522-1-senozhatsky@chromium.org>

Introduce a writeback_batch_size device attribute so that the maximum
number of in-flight writeback bio requests can be configured at
run-time, per device.  This essentially enables batched bio writeback.
Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c | 48 ++++++++++++++++++++++++++++++-----
 drivers/block/zram/zram_drv.h |  1 +
 2 files changed, 43 insertions(+), 6 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index a0a939fd9d31..238b997f6891 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -570,6 +570,42 @@ static ssize_t writeback_limit_show(struct device *dev,
 	return sysfs_emit(buf, "%llu\n", val);
 }
 
+static ssize_t writeback_batch_size_store(struct device *dev,
+					  struct device_attribute *attr,
+					  const char *buf, size_t len)
+{
+	struct zram *zram = dev_to_zram(dev);
+	u32 val;
+	ssize_t ret = -EINVAL;
+
+	if (kstrtouint(buf, 10, &val))
+		return ret;
+
+	if (!val)
+		val = 1;
+
+	down_read(&zram->init_lock);
+	zram->wb_batch_size = val;
+	up_read(&zram->init_lock);
+	ret = len;
+
+	return ret;
+}
+
+static ssize_t writeback_batch_size_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	u32 val;
+	struct zram *zram = dev_to_zram(dev);
+
+	down_read(&zram->init_lock);
+	val = zram->wb_batch_size;
+	up_read(&zram->init_lock);
+
+	return sysfs_emit(buf, "%u\n", val);
+}
+
 static void reset_bdev(struct zram *zram)
 {
 	if (!zram->backing_dev)
@@ -776,10 +812,7 @@ static void release_wb_ctl(struct zram_wb_ctl *wb_ctl)
 	kfree(wb_ctl);
 }
 
-/* XXX: should be a per-device sysfs attr */
-#define ZRAM_WB_REQ_CNT 1
-
-static struct zram_wb_ctl *init_wb_ctl(void)
+static struct zram_wb_ctl *init_wb_ctl(struct zram *zram)
 {
 	struct zram_wb_ctl *wb_ctl;
 	int i;
@@ -793,7 +826,7 @@ static struct zram_wb_ctl *init_wb_ctl(void)
 	atomic_set(&wb_ctl->num_inflight, 0);
 	init_completion(&wb_ctl->done);
 
-	for (i = 0; i < ZRAM_WB_REQ_CNT; i++) {
+	for (i = 0; i < zram->wb_batch_size; i++) {
 		struct zram_wb_req *req;
 
 		/*
@@ -1182,7 +1215,7 @@ static ssize_t writeback_store(struct device *dev,
 		goto release_init_lock;
 	}
 
-	wb_ctl = init_wb_ctl();
+	wb_ctl = init_wb_ctl(zram);
 	if (!wb_ctl) {
 		ret = -ENOMEM;
 		goto release_init_lock;
@@ -2823,6 +2856,7 @@ static DEVICE_ATTR_RW(backing_dev);
 static DEVICE_ATTR_WO(writeback);
 static DEVICE_ATTR_RW(writeback_limit);
 static DEVICE_ATTR_RW(writeback_limit_enable);
+static DEVICE_ATTR_RW(writeback_batch_size);
 #endif
 #ifdef CONFIG_ZRAM_MULTI_COMP
 static DEVICE_ATTR_RW(recomp_algorithm);
@@ -2844,6 +2878,7 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_writeback.attr,
 	&dev_attr_writeback_limit.attr,
 	&dev_attr_writeback_limit_enable.attr,
+	&dev_attr_writeback_batch_size.attr,
 #endif
 	&dev_attr_io_stat.attr,
 	&dev_attr_mm_stat.attr,
@@ -2905,6 +2940,7 @@ static int zram_add(void)
 
 	init_rwsem(&zram->init_lock);
 #ifdef CONFIG_ZRAM_WRITEBACK
+	zram->wb_batch_size = 1;
 	spin_lock_init(&zram->wb_limit_lock);
 #endif
 
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 6cee93f9c0d0..1a647f42c1a4 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -129,6 +129,7 @@ struct zram {
 	struct file *backing_dev;
 	spinlock_t wb_limit_lock;
 	bool wb_limit_enable;
+	u32 wb_batch_size;
 	u64 bd_wb_limit;
 	struct block_device *bdev;
 	unsigned long *bitmap;
-- 
2.51.2.1041.gc1ab5b90ca-goog

From nobody Mon Feb 9 06:49:35 2026
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Yuwen Chen, Richard Chang
Cc: Brian Geffon, Fengyu Lian, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv2 3/4] zram: take write lock in wb limit store handlers
Date: Thu, 13 Nov 2025 17:54:01 +0900
Message-ID: <20251113085402.1811522-4-senozhatsky@chromium.org>
In-Reply-To: <20251113085402.1811522-1-senozhatsky@chromium.org>
References: <20251113085402.1811522-1-senozhatsky@chromium.org>

Write device attr handlers should take the zram init_lock for
writing.  While at it, fix up coding style.

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 238b997f6891..6312b0437618 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -501,7 +501,8 @@ static ssize_t idle_store(struct device *dev,
 
 #ifdef CONFIG_ZRAM_WRITEBACK
 static ssize_t writeback_limit_enable_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t len)
+					    struct device_attribute *attr,
+					    const char *buf, size_t len)
 {
 	struct zram *zram = dev_to_zram(dev);
 	u64 val;
@@ -510,18 +511,19 @@ static ssize_t writeback_limit_enable_store(struct device *dev,
 	if (kstrtoull(buf, 10, &val))
 		return ret;
 
-	down_read(&zram->init_lock);
+	down_write(&zram->init_lock);
 	spin_lock(&zram->wb_limit_lock);
 	zram->wb_limit_enable = val;
 	spin_unlock(&zram->wb_limit_lock);
-	up_read(&zram->init_lock);
+	up_write(&zram->init_lock);
 	ret = len;
 
 	return ret;
 }
 
 static ssize_t writeback_limit_enable_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+					   struct device_attribute *attr,
+					   char *buf)
 {
 	bool val;
 	struct zram *zram = dev_to_zram(dev);
@@ -536,7 +538,8 @@ static ssize_t writeback_limit_enable_show(struct device *dev,
 }
 
 static ssize_t writeback_limit_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t len)
const char *buf, size_t len) + struct device_attribute *attr, + const char *buf, size_t len) { struct zram *zram =3D dev_to_zram(dev); u64 val; @@ -545,11 +548,11 @@ static ssize_t writeback_limit_store(struct device *d= ev, if (kstrtoull(buf, 10, &val)) return ret; =20 - down_read(&zram->init_lock); + down_write(&zram->init_lock); spin_lock(&zram->wb_limit_lock); zram->bd_wb_limit =3D val; spin_unlock(&zram->wb_limit_lock); - up_read(&zram->init_lock); + up_write(&zram->init_lock); ret =3D len; =20 return ret; --=20 2.51.2.1041.gc1ab5b90ca-goog From nobody Mon Feb 9 06:49:35 2026 Received: from mail-pl1-f170.google.com (mail-pl1-f170.google.com [209.85.214.170]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E992933E376 for ; Thu, 13 Nov 2025 08:54:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.170 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763024075; cv=none; b=Pf5RIcKiRfFzPQV9BEw6jg5jxQ9ejeDRMhxspe0sVdcsG5DKew2gYGrF8bZvw5OVDYsJuhgw0zcQhv2/o1kgO28XMVe/gFruaypAItyw+aMdHLEkRZvQDGvv/2t8HEYNsu0aRgmw+0EfBxZWHOmyGXpKW9sA40CPam6FT0AEQVA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1763024075; c=relaxed/simple; bh=GKNA5sK22EBgrN4HZDB5/e/zDMa2o07L2CPe8NP9GFE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=KOqWIJM07wub5qS8kLRK6XvyUs5ZmtYXp3FhaP1GIcBdVyvnhv7vXNf9evdVV2dooUwl9paaZORJwxL6r0Zc/zM5c371uYtns8nNeTt2JF/OHN7wb20aJYgmYOQFwrOhAcRRCFg2wC5J1NmxfpzcCRhntO3M94xOvRg2GsOHhjs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=chromium.org; spf=pass smtp.mailfrom=chromium.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b=huOhOgCm; arc=none smtp.client-ip=209.85.214.170 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass 
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Yuwen Chen, Richard Chang
Cc: Brian Geffon, Fengyu Lian, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv2 4/4] zram: drop wb_limit_lock
Date: Thu, 13 Nov 2025 17:54:02 +0900
Message-ID: <20251113085402.1811522-5-senozhatsky@chromium.org>
In-Reply-To: <20251113085402.1811522-1-senozhatsky@chromium.org>
References: <20251113085402.1811522-1-senozhatsky@chromium.org>

We don't need wb_limit_lock: writeback limit setters take zram
init_lock in exclusive write mode, while wb_limit modifications happen
only from a single task that holds zram init_lock in read mode.  No
concurrent wb_limit modifications are possible (we permit only one
post-processing task at a time).
Add lockdep assertions to wb_limit mutators.  While at it, fix up
coding style.

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c | 22 +++++-----------------
 drivers/block/zram/zram_drv.h |  1 -
 2 files changed, 5 insertions(+), 18 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 6312b0437618..28afb010307d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -512,9 +512,7 @@ static ssize_t writeback_limit_enable_store(struct device *dev,
 		return ret;
 
 	down_write(&zram->init_lock);
-	spin_lock(&zram->wb_limit_lock);
 	zram->wb_limit_enable = val;
-	spin_unlock(&zram->wb_limit_lock);
 	up_write(&zram->init_lock);
 	ret = len;
 
@@ -529,9 +527,7 @@ static ssize_t writeback_limit_enable_show(struct device *dev,
 	struct zram *zram = dev_to_zram(dev);
 
 	down_read(&zram->init_lock);
-	spin_lock(&zram->wb_limit_lock);
 	val = zram->wb_limit_enable;
-	spin_unlock(&zram->wb_limit_lock);
 	up_read(&zram->init_lock);
 
 	return sysfs_emit(buf, "%d\n", val);
@@ -549,9 +545,7 @@ static ssize_t writeback_limit_store(struct device *dev,
 		return ret;
 
 	down_write(&zram->init_lock);
-	spin_lock(&zram->wb_limit_lock);
 	zram->bd_wb_limit = val;
-	spin_unlock(&zram->wb_limit_lock);
 	up_write(&zram->init_lock);
 	ret = len;
 
@@ -559,15 +553,13 @@ static ssize_t writeback_limit_store(struct device *dev,
 }
 
 static ssize_t writeback_limit_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+				    struct device_attribute *attr, char *buf)
 {
 	u64 val;
 	struct zram *zram = dev_to_zram(dev);
 
 	down_read(&zram->init_lock);
-	spin_lock(&zram->wb_limit_lock);
 	val = zram->bd_wb_limit;
-	spin_unlock(&zram->wb_limit_lock);
 	up_read(&zram->init_lock);
 
 	return sysfs_emit(buf, "%llu\n", val);
@@ -866,18 +858,18 @@ static struct zram_wb_ctl *init_wb_ctl(struct zram *zram)
 
 static void zram_account_writeback_rollback(struct zram *zram)
 {
-	spin_lock(&zram->wb_limit_lock);
+	lockdep_assert_held_read(&zram->init_lock);
+
 	if (zram->wb_limit_enable)
 		zram->bd_wb_limit += 1UL << (PAGE_SHIFT - 12);
-	spin_unlock(&zram->wb_limit_lock);
 }
 
 static void zram_account_writeback_submit(struct zram *zram)
 {
-	spin_lock(&zram->wb_limit_lock);
+	lockdep_assert_held_read(&zram->init_lock);
+
 	if (zram->wb_limit_enable && zram->bd_wb_limit > 0)
 		zram->bd_wb_limit -= 1UL << (PAGE_SHIFT - 12);
-	spin_unlock(&zram->wb_limit_lock);
 }
 
 static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *req)
@@ -991,13 +983,10 @@ static int zram_writeback_slots(struct zram *zram,
 
 	blk_start_plug(&wb_ctl->plug);
 	while ((pps = select_pp_slot(ctl))) {
-		spin_lock(&zram->wb_limit_lock);
 		if (zram->wb_limit_enable && !zram->bd_wb_limit) {
-			spin_unlock(&zram->wb_limit_lock);
 			ret = -EIO;
 			break;
 		}
-		spin_unlock(&zram->wb_limit_lock);
 
 		while (!req) {
 			req = select_idle_req(wb_ctl);
@@ -2944,7 +2933,6 @@ static int zram_add(void)
 	init_rwsem(&zram->init_lock);
 #ifdef CONFIG_ZRAM_WRITEBACK
 	zram->wb_batch_size = 1;
-	spin_lock_init(&zram->wb_limit_lock);
 #endif
 
 	/* gendisk structure */
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 1a647f42c1a4..c6d94501376c 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -127,7 +127,6 @@ struct zram {
 	bool claim;	/* Protected by disk->open_mutex */
 #ifdef CONFIG_ZRAM_WRITEBACK
 	struct file *backing_dev;
-	spinlock_t wb_limit_lock;
 	bool wb_limit_enable;
 	u32 wb_batch_size;
 	u64 bd_wb_limit;
-- 
2.51.2.1041.gc1ab5b90ca-goog