From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Yu Zhao, Wei Xu, Chris Li, Matthew Wilcox,
 linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 1/3] mm, lru_gen: try to prefetch next page when scanning LRU
Date: Wed, 24 Jan 2024 02:45:50 +0800
Message-ID: <20240123184552.59758-2-ryncsn@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240123184552.59758-1-ryncsn@gmail.com>
References: <20240123184552.59758-1-ryncsn@gmail.com>

Prefetching has long existed for the inactive/active LRU; apply the same
optimization to MGLRU: resolve the next folio on the list and prefetch
its flags before processing the current one.

Test 1: Ramdisk fio read-only test in a 4G memcg on an EPYC 7K62:

  fio -name=mglru --numjobs=16 --directory=/mnt --size=960m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=zipf:0.5 --norandommap \
    --time_based --ramp_time=1m --runtime=6m --group_reporting

Before this patch:
  bw ( MiB/s): min= 7758, max= 9239, per=100.00%, avg=8747.59, stdev=16.51, samples=11488
  iops       : min=1986251, max=2365323, avg=2239380.87, stdev=4225.93, samples=11488
After this patch (+7.2%):
  bw ( MiB/s): min= 8360, max= 9771, per=100.00%, avg=9381.31, stdev=15.67, samples=11488
  iops       : min=2140296, max=2501385, avg=2401613.91, stdev=4010.41, samples=11488

Test 2: Ramdisk fio hybrid test for 30m in a 4G memcg on an EPYC 7K62 (3 times):

  fio --buffered=1 --numjobs=8 --size=960m --directory=/mnt \
    --time_based --ramp_time=1m --runtime=30m \
    --ioengine=io_uring --iodepth=128 --iodepth_batch_submit=32 \
    --iodepth_batch_complete=32 --norandommap \
    --name=mglru-ro --rw=randread --random_distribution=zipf:0.7 \
    --name=mglru-rw --rw=randrw --random_distribution=zipf:0.7

Before this patch:
  READ: 6622.0 MiB/s. Stdev: 22.090722
  WRITE: 1256.3 MiB/s. Stdev: 5.249339
After this patch (+4.6%, +3.3%):
  READ: 6926.6 MiB/s. Stdev: 37.950260
  WRITE: 1297.3 MiB/s. Stdev: 7.408704

Test 3: 30m of MySQL test in a 6G memcg (12 times):

  echo 'set GLOBAL innodb_buffer_pool_size=16106127360;' | \
    mysql -u USER -h localhost --password=PASS

  sysbench /usr/share/sysbench/oltp_read_only.lua \
    --mysql-user=USER --mysql-password=PASS --mysql-db=DB \
    --tables=48 --table-size=2000000 --threads=16 --time=1800 run

Before this patch:
  Avg: 134743.714545 qps. Stdev: 582.242189
After this patch (+0.2%):
  Avg: 135005.779091 qps. Stdev: 295.299027

Test 4: Build the Linux kernel with make -j48 in a 2G memcg with SSD swap
(for memory stress, 18 times):

Before this patch:
  Avg: 1456.768899 s. Stdev: 20.106973
After this patch (+0.0%):
  Avg: 1455.659254 s. Stdev: 15.274481
Test 5: Memtier test in a 4G cgroup using brd as swap (18 times):

  memcached -u nobody -m 16384 -s /tmp/memcached.socket \
    -a 0766 -t 16 -B binary &

  memtier_benchmark -S /tmp/memcached.socket \
    -P memcache_binary -n allkeys \
    --key-minimum=1 --key-maximum=16000000 -d 1024 \
    --ratio=1:0 --key-pattern=P:P -c 1 -t 16 --pipeline 8 -x 3

Before this patch:
  Avg: 50317.984000 Ops/sec. Stdev: 2568.965458
After this patch (-5.7%):
  Avg: 47691.343500 Ops/sec. Stdev: 3925.772473

Prefetching appears helpful in most cases; the memtier test is either
hitting a case where prefetching causes extra cache misses, or is simply
too noisy (note the high stdev).

Signed-off-by: Kairui Song
---
 mm/vmscan.c | 30 ++++++++++++++++++++++++++----
 1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9c854ce6cc..03631cedb3ab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3681,15 +3681,26 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 	/* prevent cold/hot inversion if force_scan is true */
 	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
 		struct list_head *head = &lrugen->folios[old_gen][type][zone];
+		struct folio *prev = NULL;
 
-		while (!list_empty(head)) {
-			struct folio *folio = lru_to_folio(head);
+		if (!list_empty(head))
+			prev = lru_to_folio(head);
+
+		while (prev) {
+			struct folio *folio = prev;
 
 			VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio);
 			VM_WARN_ON_ONCE_FOLIO(folio_test_active(folio), folio);
 			VM_WARN_ON_ONCE_FOLIO(folio_is_file_lru(folio) != type, folio);
 			VM_WARN_ON_ONCE_FOLIO(folio_zonenum(folio) != zone, folio);
 
+			if (unlikely(list_is_first(&folio->lru, head))) {
+				prev = NULL;
+			} else {
+				prev = lru_to_folio(&folio->lru);
+				prefetchw(&prev->flags);
+			}
+
 			new_gen = folio_inc_gen(lruvec, folio, false);
 			list_move_tail(&folio->lru, &lrugen->folios[new_gen][type][zone]);
 
@@ -4341,11 +4352,15 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 	for (i = MAX_NR_ZONES; i > 0; i--) {
 		LIST_HEAD(moved);
 		int skipped_zone = 0;
+		struct folio *prev = NULL;
 		int zone = (sc->reclaim_idx + i) % MAX_NR_ZONES;
 		struct list_head *head = &lrugen->folios[gen][type][zone];
 
-		while (!list_empty(head)) {
-			struct folio *folio = lru_to_folio(head);
+		if (!list_empty(head))
+			prev = lru_to_folio(head);
+
+		while (prev) {
+			struct folio *folio = prev;
 			int delta = folio_nr_pages(folio);
 
 			VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio);
@@ -4355,6 +4370,13 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 
 			scanned += delta;
 
+			if (unlikely(list_is_first(&folio->lru, head))) {
+				prev = NULL;
+			} else {
+				prev = lru_to_folio(&folio->lru);
+				prefetchw(&prev->flags);
+			}
+
 			if (sort_folio(lruvec, folio, sc, tier))
 				sorted += delta;
 			else if (isolate_folio(lruvec, folio, sc)) {
-- 
2.43.0
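The pattern in the patch above, in standalone form: resolve the next list
entry and issue a write prefetch for its flags word before doing the real
work on the current entry, so the cache miss on the next folio overlaps
with the current folio's processing. Below is a minimal userspace sketch;
the node type, walk_tail_to_head() and the use of GCC's
__builtin_prefetch() in place of the kernel's prefetchw() are illustrative
assumptions, not code from the patch.

#include <stdio.h>

struct node {
	struct node *prev, *next;	/* circular list with a sentinel head */
	unsigned long flags;
};

static void walk_tail_to_head(struct node *head)
{
	/* start from the tail, like lru_to_folio(head) does */
	struct node *curr = (head->prev != head) ? head->prev : NULL;

	while (curr) {
		/* resolve the successor first and warm its cache line for
		 * writing, so the miss overlaps with the work on curr */
		struct node *prev = (curr->prev != head) ? curr->prev : NULL;

		if (prev)
			__builtin_prefetch(&prev->flags, 1);

		curr->flags++;		/* stand-in for the per-folio work */
		curr = prev;
	}
}

int main(void)
{
	struct node head, a, b;

	head.next = &a; head.prev = &b;
	a.prev = &head; a.next = &b; a.flags = 0;
	b.prev = &a; b.next = &head; b.flags = 0;

	walk_tail_to_head(&head);
	printf("%lu %lu\n", a.flags, b.flags);	/* prints "1 1" */
	return 0;
}

Capturing the successor at the top of the loop body is also what keeps the
walk correct in the patch: list_move_tail() relocates the current folio to
another list, so the next folio must be resolved before that happens.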
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Yu Zhao, Wei Xu, Chris Li, Matthew Wilcox,
 linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 2/3] mm, lru_gen: batch update counters on aging
Date: Wed, 24 Jan 2024 02:45:51 +0800
Message-ID: <20240123184552.59758-3-ryncsn@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240123184552.59758-1-ryncsn@gmail.com>
References: <20240123184552.59758-1-ryncsn@gmail.com>
When lru_gen ages, it updates mm counters page by page, which adds up to
a noticeable overhead if aging happens frequently or a generation has
many pages being moved. Optimize this by updating the counters in
batches. Although most __mod_*_state helpers have their own caches, the
overhead is still observable.

Test 1: Ramdisk fio test in a 4G memcg on an EPYC 7K62 with:

  fio -name=mglru --numjobs=16 --directory=/mnt --size=960m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=zipf:0.5 --norandommap \
    --time_based --ramp_time=1m --runtime=6m --group_reporting

Before this patch:
  bw ( MiB/s): min= 8360, max= 9771, per=100.00%, avg=9381.31, stdev=15.67, samples=11488
  iops       : min=2140296, max=2501385, avg=2401613.91, stdev=4010.41, samples=11488
After this patch (+0.0%):
  bw ( MiB/s): min= 8299, max= 9847, per=100.00%, avg=9388.23, stdev=16.25, samples=11488
  iops       : min=2124544, max=2521056, avg=2403385.82, stdev=4159.07, samples=11488

Test 2: Ramdisk fio hybrid test for 30m in a 4G memcg on an EPYC 7K62 (3 times):

  fio --buffered=1 --numjobs=8 --size=960m --directory=/mnt \
    --time_based --ramp_time=1m --runtime=30m \
    --ioengine=io_uring --iodepth=128 --iodepth_batch_submit=32 \
    --iodepth_batch_complete=32 --norandommap \
    --name=mglru-ro --rw=randread --random_distribution=zipf:0.7 \
    --name=mglru-rw --rw=randrw --random_distribution=zipf:0.7

Before this patch:
  READ: 6926.6 MiB/s. Stdev: 37.950260
  WRITE: 1297.3 MiB/s. Stdev: 7.408704
After this patch (+0.7%, +0.4%):
  READ: 6973.3 MiB/s. Stdev: 19.601587
  WRITE: 1302.3 MiB/s. Stdev: 4.988877

Test 3: 30m of MySQL test in a 6G memcg (12 times):

  echo 'set GLOBAL innodb_buffer_pool_size=16106127360;' | \
    mysql -u USER -h localhost --password=PASS

  sysbench /usr/share/sysbench/oltp_read_only.lua \
    --mysql-user=USER --mysql-password=PASS --mysql-db=DB \
    --tables=48 --table-size=2000000 --threads=16 --time=1800 run

Before this patch:
  Avg: 135005.779091 qps. Stdev: 295.299027
After this patch (+0.2%):
  Avg: 135310.868182 qps. Stdev: 379.200942

Test 4: Build the Linux kernel with make -j48 in a 2G memcg with SSD swap
(for memory stress, 18 times):

Before this patch:
  Avg: 1455.659254 s. Stdev: 15.274481
After this patch (-0.8%):
  Avg: 1467.813023 s. Stdev: 24.232886

Test 5: Memtier test in a 4G cgroup using brd as swap (20 times):

  memcached -u nobody -m 16384 -s /tmp/memcached.socket \
    -a 0766 -t 16 -B binary &

  memtier_benchmark -S /tmp/memcached.socket \
    -P memcache_binary -n allkeys \
    --key-minimum=1 --key-maximum=16000000 -d 1024 \
    --ratio=1:0 --key-pattern=P:P -c 1 -t 16 --pipeline 8 -x 3

Before this patch:
  Avg: 47691.343500 Ops/sec. Stdev: 3925.772473
After this patch (+1.7%):
  Avg: 48389.282500 Ops/sec. Stdev: 3534.470933
Signed-off-by: Kairui Song
---
 mm/vmscan.c | 68 +++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 55 insertions(+), 13 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 03631cedb3ab..8c701b34d757 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3113,12 +3113,45 @@ static int folio_update_gen(struct folio *folio, int gen)
 	return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 }
 
-/* protect pages accessed multiple times through file descriptors */
-static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
+/*
+ * When the oldest gen is being reclaimed, protected/unreclaimable pages
+ * can be moved in batch. They usually all land on the same gen
+ * (old_gen + 1) via folio_inc_gen, so the batch struct is limited to one
+ * per type/zone LRU list. The batch is applied once scanning of one LRU
+ * list has finished or aborted.
+ */
+struct lru_gen_inc_batch {
+	int delta;
+};
+
+static void lru_gen_inc_batch_done(struct lruvec *lruvec, int gen, int type, int zone,
+				   struct lru_gen_inc_batch *batch)
 {
-	int type = folio_is_file_lru(folio);
+	int delta = batch->delta;
+	int new_gen = (gen + 1) % MAX_NR_GENS;
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
-	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
+	enum lru_list lru = type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
+
+	if (!delta)
+		return;
+
+	WRITE_ONCE(lrugen->nr_pages[gen][type][zone],
+		   lrugen->nr_pages[gen][type][zone] - delta);
+	WRITE_ONCE(lrugen->nr_pages[new_gen][type][zone],
+		   lrugen->nr_pages[new_gen][type][zone] + delta);
+
+	if (!lru_gen_is_active(lruvec, gen) && lru_gen_is_active(lruvec, new_gen)) {
+		__update_lru_size(lruvec, lru, zone, -delta);
+		__update_lru_size(lruvec, lru + LRU_ACTIVE, zone, delta);
+	}
+}
+
+/* protect pages accessed multiple times through file descriptors */
+static int folio_inc_gen(struct folio *folio, int old_gen, bool reclaiming,
+			 struct lru_gen_inc_batch *batch)
+{
+	int new_gen;
+	int delta = folio_nr_pages(folio);
 	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
 
 	VM_WARN_ON_ONCE_FOLIO(!(old_flags & LRU_GEN_MASK), folio);
@@ -3138,7 +3171,8 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
 		new_flags |= BIT(PG_reclaim);
 	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
 
-	lru_gen_update_size(lruvec, folio, old_gen, new_gen);
+	/* new_gen is ensured to be old_gen + 1 here, do a batch update */
+	batch->delta += delta;
 
 	return new_gen;
 }
@@ -3672,6 +3706,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 {
 	int zone;
 	int remaining = MAX_LRU_BATCH;
+	struct lru_gen_inc_batch batch = { };
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
 
@@ -3701,12 +3736,15 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 				prefetchw(&prev->flags);
 			}
 
-			new_gen = folio_inc_gen(lruvec, folio, false);
+			new_gen = folio_inc_gen(folio, old_gen, false, &batch);
 			list_move_tail(&folio->lru, &lrugen->folios[new_gen][type][zone]);
 
-			if (!--remaining)
+			if (!--remaining) {
+				lru_gen_inc_batch_done(lruvec, old_gen, type, zone, &batch);
 				return false;
+			}
 		}
+		lru_gen_inc_batch_done(lruvec, old_gen, type, zone, &batch);
 	}
 done:
 	reset_ctrl_pos(lruvec, type, true);
@@ -4226,7 +4264,7 @@ void lru_gen_soft_reclaim(struct mem_cgroup *memcg, int nid)
 ******************************************************************************/
 
 static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_control *sc,
-		       int tier_idx)
+		       int tier_idx, struct lru_gen_inc_batch *batch)
 {
 	bool success;
 	int gen = folio_lru_gen(folio);
@@ -4236,6 +4274,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	int refs = folio_lru_refs(folio);
 	int tier = lru_tier_from_refs(refs);
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	int old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
 
 	VM_WARN_ON_ONCE_FOLIO(gen >= MAX_NR_GENS, folio);
 
@@ -4259,7 +4298,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* promoted */
-	if (gen != lru_gen_from_seq(lrugen->min_seq[type])) {
+	if (gen != old_gen) {
 		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
 	}
@@ -4268,7 +4307,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	if (tier > tier_idx || refs == BIT(LRU_REFS_WIDTH)) {
 		int hist = lru_hist_from_seq(lrugen->min_seq[type]);
 
-		gen = folio_inc_gen(lruvec, folio, false);
+		gen = folio_inc_gen(folio, old_gen, false, batch);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 
 		WRITE_ONCE(lrugen->protected[hist][type][tier - 1],
@@ -4278,7 +4317,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 
 	/* ineligible */
 	if (zone > sc->reclaim_idx || skip_cma(folio, sc)) {
-		gen = folio_inc_gen(lruvec, folio, false);
+		gen = folio_inc_gen(folio, old_gen, false, batch);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
 	}
@@ -4286,7 +4325,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	/* waiting for writeback */
 	if (folio_test_locked(folio) || folio_test_writeback(folio) ||
 	    (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
-		gen = folio_inc_gen(lruvec, folio, true);
+		gen = folio_inc_gen(folio, old_gen, true, batch);
 		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
 	}
@@ -4353,6 +4392,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 		LIST_HEAD(moved);
 		int skipped_zone = 0;
 		struct folio *prev = NULL;
+		struct lru_gen_inc_batch batch = { };
 		int zone = (sc->reclaim_idx + i) % MAX_NR_ZONES;
 		struct list_head *head = &lrugen->folios[gen][type][zone];
 
@@ -4377,7 +4417,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 				prefetchw(&prev->flags);
 			}
 
-			if (sort_folio(lruvec, folio, sc, tier))
+			if (sort_folio(lruvec, folio, sc, tier, &batch))
 				sorted += delta;
 			else if (isolate_folio(lruvec, folio, sc)) {
 				list_add(&folio->lru, list);
@@ -4391,6 +4431,8 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 				break;
 		}
 
+		lru_gen_inc_batch_done(lruvec, gen, type, zone, &batch);
+
 		if (skipped_zone) {
 			list_splice(&moved, head);
 			__count_zid_vm_events(PGSCAN_SKIP, zone, skipped_zone);
-- 
2.43.0
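Reduced to its essence, the change above swaps per-page updates of shared
counters for a local accumulator that is flushed once per LRU list. A
minimal sketch under assumed names (batch, batch_done, move_one and
nr_old/nr_new are illustrative; the kernel code updates lrugen->nr_pages
and the lruvec LRU sizes instead):

#include <stdatomic.h>
#include <stdio.h>

/* stand-ins for the shared counters of the old gen and of old_gen + 1 */
static _Atomic long nr_old, nr_new;

struct batch {
	long delta;	/* pages moved from the old gen to the new gen */
};

static void batch_done(struct batch *b)
{
	if (!b->delta)
		return;
	/* two shared-counter updates per list instead of two per page */
	atomic_fetch_sub_explicit(&nr_old, b->delta, memory_order_relaxed);
	atomic_fetch_add_explicit(&nr_new, b->delta, memory_order_relaxed);
	b->delta = 0;
}

static void move_one(long nr_pages, struct batch *b)
{
	/* per-page work happens here; only the local delta is touched */
	b->delta += nr_pages;
}

int main(void)
{
	struct batch b = { 0 };

	atomic_store(&nr_old, 512);
	for (int i = 0; i < 512; i++)
		move_one(1, &b);
	batch_done(&b);
	printf("%ld %ld\n", atomic_load(&nr_old), atomic_load(&nr_new));
	return 0;
}

The batching is only valid because every folio handled on this path lands
in the same destination generation (old_gen + 1), so a single delta fully
describes the movement; and the batch must be flushed before leaving the
per-zone list, including on the early-exit path, which is why the patch
calls lru_gen_inc_batch_done() in both places.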
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Yu Zhao, Wei Xu, Chris Li, Matthew Wilcox,
 linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v3 3/3] mm, lru_gen: move pages in bulk when aging
Date: Wed, 24 Jan 2024 02:45:52 +0800
Message-ID: <20240123184552.59758-4-ryncsn@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240123184552.59758-1-ryncsn@gmail.com>
References: <20240123184552.59758-1-ryncsn@gmail.com>
Another source of aging overhead is page movement. In most cases, pages
are moved to the same gen after folio_inc_gen is called, especially the
protected pages, so it's better to move them in bulk.

Bulk moving also benefits LRU ordering. Currently, when MGLRU ages, it
walks the LRU backwards and moves protected pages to the tail of the
newer gen one by one, which actually reverses the order of those pages
in the LRU. Moving them in batches keeps their order, although only
within a small scope, due to the scan limit of MAX_LRU_BATCH pages.

After this commit, a slight performance gain can be seen (with
CONFIG_DEBUG_LIST=n):

Test 1: Ramdisk fio test in a 4G memcg on an EPYC 7K62:

  fio -name=mglru --numjobs=16 --directory=/mnt --size=960m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=zipf:0.5 --norandommap \
    --time_based --ramp_time=1m --runtime=6m --group_reporting

Before:
  bw ( MiB/s): min= 8299, max= 9847, per=100.00%, avg=9388.23, stdev=16.25, samples=11488
  iops       : min=2124544, max=2521056, avg=2403385.82, stdev=4159.07, samples=11488
After (-0.2%):
  bw ( MiB/s): min= 8359, max= 9796, per=100.00%, avg=9367.29, stdev=15.75, samples=11488
  iops       : min=2140113, max=2507928, avg=2398024.65, stdev=4033.07, samples=11488

Test 2: Ramdisk fio hybrid test for 30m in a 4G memcg on an EPYC 7K62 (3 times):

  fio --buffered=1 --numjobs=8 --size=960m --directory=/mnt \
    --time_based --ramp_time=1m --runtime=30m \
    --ioengine=io_uring --iodepth=128 --iodepth_batch_submit=32 \
    --iodepth_batch_complete=32 --norandommap \
    --name=mglru-ro --rw=randread --random_distribution=zipf:0.7 \
    --name=mglru-rw --rw=randrw --random_distribution=zipf:0.7

Before this patch:
  READ: 6973.3 MiB/s. Stdev: 19.601587
  WRITE: 1302.3 MiB/s. Stdev: 4.988877
After this patch (+0.1%, +0.3%):
  READ: 6981.0 MiB/s. Stdev: 15.556349
  WRITE: 1305.7 MiB/s. Stdev: 2.357023

Test 3: 30m of MySQL test in a 6G memcg (12 times):

  echo 'set GLOBAL innodb_buffer_pool_size=16106127360;' | \
    mysql -u USER -h localhost --password=PASS

  sysbench /usr/share/sysbench/oltp_read_only.lua \
    --mysql-user=USER --mysql-password=PASS --mysql-db=DB \
    --tables=48 --table-size=2000000 --threads=16 --time=1800 run

Before this patch:
  Avg: 135310.868182 qps. Stdev: 379.200942
After this patch (-0.3%):
  Avg: 135099.210000 qps. Stdev: 351.488863

Test 4: Build the Linux kernel with make -j48 in a 2G memcg with SSD swap
(for memory stress, 18 times):

Before this patch:
  Avg: 1467.813023 s. Stdev: 24.232886
After this patch (+0.0%):
  Avg: 1464.178154 s. Stdev: 17.992974
Test 5: Memtier test in a 4G cgroup using brd as swap (20 times):

  memcached -u nobody -m 16384 -s /tmp/memcached.socket \
    -a 0766 -t 16 -B binary &

  memtier_benchmark -S /tmp/memcached.socket \
    -P memcache_binary -n allkeys \
    --key-minimum=1 --key-maximum=16000000 -d 1024 \
    --ratio=1:0 --key-pattern=P:P -c 1 -t 16 --pipeline 8 -x 3

Before this patch:
  Avg: 48389.282500 Ops/sec. Stdev: 3534.470933
After this patch (+1.2%):
  Avg: 48959.374118 Ops/sec. Stdev: 3488.559744

Signed-off-by: Kairui Song
---
 mm/vmscan.c | 47 ++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 44 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8c701b34d757..373a70801db9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3122,8 +3122,45 @@ static int folio_update_gen(struct folio *folio, int gen)
  */
 struct lru_gen_inc_batch {
 	int delta;
+	struct folio *head, *tail;
 };
 
+static inline void lru_gen_inc_bulk_done(struct lru_gen_folio *lrugen,
+					 int bulk_gen, bool type, int zone,
+					 struct lru_gen_inc_batch *batch)
+{
+	if (!batch->head)
+		return;
+
+	list_bulk_move_tail(&lrugen->folios[bulk_gen][type][zone],
+			    &batch->head->lru,
+			    &batch->tail->lru);
+
+	batch->head = NULL;
+}
+
+/*
+ * When aging, protected pages will go to the tail of the same higher
+ * gen, so they can be moved in batches. Besides reducing overhead, this
+ * also avoids changing their LRU order, within a small scope.
+ */
+static inline void lru_gen_try_bulk_move(struct lru_gen_folio *lrugen, struct folio *folio,
+					 int bulk_gen, int new_gen, bool type, int zone,
+					 struct lru_gen_inc_batch *batch)
+{
+	/*
+	 * If the folio is not moving to the bulk_gen, it raced with promotion,
+	 * so it needs to go to the head of another LRU and must not be made
+	 * part of the pending bulk run.
+	 */
+	if (bulk_gen != new_gen) {
+		list_move(&folio->lru, &lrugen->folios[new_gen][type][zone]);
+		return;
+	}
+
+	if (!batch->head)
+		batch->tail = folio;
+
+	batch->head = folio;
+}
+
 static void lru_gen_inc_batch_done(struct lruvec *lruvec, int gen, int type, int zone,
 				   struct lru_gen_inc_batch *batch)
 {
@@ -3132,6 +3169,8 @@ static void lru_gen_inc_batch_done(struct lruvec *lruvec, int gen, int type, int
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	enum lru_list lru = type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
 
+	lru_gen_inc_bulk_done(lrugen, new_gen, type, zone, batch);
+
 	if (!delta)
 		return;
 
@@ -3709,6 +3748,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 	struct lru_gen_inc_batch batch = { };
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
+	int bulk_gen = (old_gen + 1) % MAX_NR_GENS;
 
 	if (type == LRU_GEN_ANON && !can_swap)
 		goto done;
@@ -3737,7 +3777,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 			}
 
 			new_gen = folio_inc_gen(folio, old_gen, false, &batch);
-			list_move_tail(&folio->lru, &lrugen->folios[new_gen][type][zone]);
+			lru_gen_try_bulk_move(lrugen, folio, bulk_gen, new_gen, type, zone, &batch);
 
 			if (!--remaining) {
 				lru_gen_inc_batch_done(lruvec, old_gen, type, zone, &batch);
@@ -4275,6 +4315,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	int tier = lru_tier_from_refs(refs);
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
+	int bulk_gen = (old_gen + 1) % MAX_NR_GENS;
 
 	VM_WARN_ON_ONCE_FOLIO(gen >= MAX_NR_GENS, folio);
 
@@ -4308,7 +4349,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 		int hist = lru_hist_from_seq(lrugen->min_seq[type]);
 
 		gen = folio_inc_gen(folio, old_gen, false, batch);
-		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
+		lru_gen_try_bulk_move(lrugen, folio, bulk_gen, gen, type, zone, batch);
 
 		WRITE_ONCE(lrugen->protected[hist][type][tier - 1],
@@ -4318,7 +4359,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	/* ineligible */
 	if (zone > sc->reclaim_idx || skip_cma(folio, sc)) {
 		gen = folio_inc_gen(folio, old_gen, false, batch);
-		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
+		lru_gen_try_bulk_move(lrugen, folio, bulk_gen, gen, type, zone, batch);
 		return true;
 	}
 
-- 
2.43.0
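The splice performed by list_bulk_move_tail() in the patch above can be
pictured as follows. This is a simplified userspace sketch under the same
circular doubly linked list convention as the kernel's struct list_head;
it illustrates the idea rather than copying the kernel implementation:

struct list_head {
	struct list_head *prev, *next;
};

/*
 * Move the contiguous run [first, last] to the tail of the list headed
 * by head: O(1) pointer updates, and the run's internal order is kept.
 */
static void bulk_move_tail(struct list_head *head,
			   struct list_head *first, struct list_head *last)
{
	/* unlink the whole run from its current list */
	first->prev->next = last->next;
	last->next->prev = first->prev;

	/* splice it in just before head, i.e. at the tail */
	head->prev->next = first;
	first->prev = head->prev;
	last->next = head;
	head->prev = last;
}

Because the run moves as one unit, the relative order of the batched
folios is preserved, which is the LRU-ordering benefit the commit message
describes; moving them one by one with list_move_tail() while walking the
list backwards is what reversed their order before this patch.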