From nobody Sun Apr 19 15:59:41 2026
From: linan666@huaweicloud.com
To: song@kernel.org, yukuai@fnnas.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    linan666@huaweicloud.com, yangerkun@huawei.com, yi.zhang@huawei.com
Subject: [PATCH v3 1/8] md/raid1,raid10: clean up of RESYNC_SECTORS
Date: Thu, 16 Apr 2026 11:37:54 +0800
Message-Id: <20260416033801.786415-2-linan666@huaweicloud.com>
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
References: <20260416033801.786415-1-linan666@huaweicloud.com>

From: Li Nan

Move the duplicated RESYNC_SECTORS definition from raid1.c and raid10.c
into the shared raid1-10.c, and simplify the max_sync assignment in
raid10_sync_request(). No functional changes.

Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
---
 drivers/md/raid1-10.c | 1 +
 drivers/md/raid1.c    | 1 -
 drivers/md/raid10.c   | 4 +---
 3 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
index c33099925f23..cda531d0720b 100644
--- a/drivers/md/raid1-10.c
+++ b/drivers/md/raid1-10.c
@@ -2,6 +2,7 @@
 /* Maximum size of each resync request */
 #define RESYNC_BLOCK_SIZE (64*1024)
 #define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE)
+#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
 
 /* when we get a read error on a read-only array, we redirect to another
  * device without failing the first device, or trying to over-write to
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 867db18bc3ba..5a73a9f19e0e 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -136,7 +136,6 @@ static void *r1bio_pool_alloc(gfp_t gfp_flags, struct r1conf *conf)
 }
 
 #define RESYNC_DEPTH 32
-#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
 #define RESYNC_WINDOW (RESYNC_BLOCK_SIZE * RESYNC_DEPTH)
 #define RESYNC_WINDOW_SECTORS (RESYNC_WINDOW >> 9)
 #define CLUSTER_RESYNC_WINDOW (16 * RESYNC_WINDOW)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index b4892c5d571c..90c1036f6ec4 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -113,7 +113,6 @@ static void * r10bio_pool_alloc(gfp_t gfp_flags, void *data)
 	return kzalloc(size, gfp_flags);
 }
 
-#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
 /* amount of memory to reserve for resync requests */
 #define RESYNC_WINDOW (1024*1024)
 /* maximum number of concurrent requests, memory permitting */
@@ -3153,7 +3152,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 	struct bio *biolist = NULL, *bio;
 	sector_t nr_sectors;
 	int i;
-	int max_sync;
+	int max_sync = RESYNC_SECTORS;
 	sector_t sync_blocks;
 	sector_t chunk_mask = conf->geo.chunk_mask;
 	int page_idx = 0;
@@ -3266,7 +3265,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 	 * end_sync_write if we will want to write.
 	 */
 
-	max_sync = RESYNC_PAGES << (PAGE_SHIFT-9);
 	if (!test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
 		/* recovery... the complicated one */
 		int j;
-- 
2.39.2
From: linan666@huaweicloud.com
To: song@kernel.org, yukuai@fnnas.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    linan666@huaweicloud.com, yangerkun@huawei.com, yi.zhang@huawei.com
Subject: [PATCH v3 2/8] md: introduce sync_folio_io for folio support in RAID
Date: Thu, 16 Apr 2026 11:37:55 +0800
Message-Id: <20260416033801.786415-3-linan666@huaweicloud.com>
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
References: <20260416033801.786415-1-linan666@huaweicloud.com>

From: Li Nan

Prepare for folio support in RAID by introducing sync_folio_io(),
matching sync_page_io()'s functionality. The differences are:
 - a new parameter 'off', to prepare for adding a folio to a bio in
   segments, e.g. in fix_recovery_read_error()
 - the return value becomes bool
 - success is checked as 'bio.bi_status == BLK_STS_OK' instead of
   '!bio.bi_status'

sync_page_io() will be removed once full folio support is complete.

Signed-off-by: Li Nan
---
 drivers/md/md.h |  4 +++-
 drivers/md/md.c | 15 +++++++++++----
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/md/md.h b/drivers/md/md.h
index ac84289664cd..914b992a073b 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -924,8 +924,10 @@ void md_write_metadata(struct mddev *mddev, struct md_rdev *rdev,
 			sector_t sector, int size, struct page *page,
 			unsigned int offset);
 extern int md_super_wait(struct mddev *mddev);
-extern int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
+extern bool sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
 		struct page *page, blk_opf_t opf, bool metadata_op);
+extern bool sync_folio_io(struct md_rdev *rdev, sector_t sector, int size,
+		int off, struct folio *folio, blk_opf_t opf, bool metadata_op);
 extern void md_do_sync(struct md_thread *thread);
 extern void md_new_event(void);
 extern void md_allow_write(struct mddev *mddev);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index d9c9fd2839b3..5e83914d5c14 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -1166,8 +1166,8 @@ int md_super_wait(struct mddev *mddev)
 	return 0;
 }
 
-int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
-		 struct page *page, blk_opf_t opf, bool metadata_op)
+bool sync_folio_io(struct md_rdev *rdev, sector_t sector, int size, int off,
+		struct folio *folio, blk_opf_t opf, bool metadata_op)
 {
 	struct bio bio;
 	struct bio_vec bvec;
@@ -1185,11 +1185,18 @@ int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
 		bio.bi_iter.bi_sector = sector + rdev->new_data_offset;
 	else
 		bio.bi_iter.bi_sector = sector + rdev->data_offset;
-	__bio_add_page(&bio, page, size, 0);
+	bio_add_folio_nofail(&bio, folio, size, off);
 
 	submit_bio_wait(&bio);
 
-	return !bio.bi_status;
+	return bio.bi_status == BLK_STS_OK;
+}
+EXPORT_SYMBOL_GPL(sync_folio_io);
+
+bool sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
+		struct page *page, blk_opf_t opf, bool metadata_op)
+{
+	return sync_folio_io(rdev, sector, size, 0, page_folio(page), opf, metadata_op);
 }
 EXPORT_SYMBOL_GPL(sync_page_io);
-- 
2.39.2
From: linan666@huaweicloud.com
To: song@kernel.org, yukuai@fnnas.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    linan666@huaweicloud.com, yangerkun@huawei.com, yi.zhang@huawei.com
Subject: [PATCH v3 3/8] md: introduce safe_folio_put for folio support in RAID
Date: Thu, 16 Apr 2026 11:37:56 +0800
Message-Id: <20260416033801.786415-4-linan666@huaweicloud.com>
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
References: <20260416033801.786415-1-linan666@huaweicloud.com>

From: Li Nan

Introduce safe_folio_put(), the folio counterpart of safe_put_page().
safe_put_page() will be removed after the last reference to it in
RAID5 is removed.

Signed-off-by: Li Nan
---
 drivers/md/md.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/md/md.h b/drivers/md/md.h
index 914b992a073b..7c0c38f09cc3 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -888,6 +888,12 @@ struct md_io_clone {
 		rcu_read_unlock(); \
 	} while (0)
 
+static inline void safe_folio_put(struct folio *folio)
+{
+	if (folio)
+		folio_put(folio);
+}
+
 static inline void safe_put_page(struct page *p)
 {
 	if (p) put_page(p);
-- 
2.39.2
From: linan666@huaweicloud.com
To: song@kernel.org, yukuai@fnnas.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    linan666@huaweicloud.com, yangerkun@huawei.com, yi.zhang@huawei.com
Subject: [PATCH v3 4/8] md/raid1: use folio for tmppage
Date: Thu, 16 Apr 2026 11:37:57 +0800
Message-Id: <20260416033801.786415-5-linan666@huaweicloud.com>
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
References: <20260416033801.786415-1-linan666@huaweicloud.com>

From: Li Nan

Convert tmppage to tmpfolio and use it throughout in raid1.

Signed-off-by: Li Nan
Reviewed-by: Xiao Ni
---
 drivers/md/raid1.h |  2 +-
 drivers/md/raid1.c | 18 ++++++++++--------
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
index c98d43a7ae99..d480b3a8c2c4 100644
--- a/drivers/md/raid1.h
+++ b/drivers/md/raid1.h
@@ -101,7 +101,7 @@ struct r1conf {
 	/* temporary buffer to synchronous IO when attempting to repair
 	 * a read error.
 	 */
-	struct page *tmppage;
+	struct folio *tmpfolio;
 
 	/* When taking over an array from a different personality, we store
 	 * the new thread here until we fully activate the array.
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 5a73a9f19e0e..a72abdc37a2d 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2417,8 +2417,8 @@ static void fix_read_error(struct r1conf *conf, struct r1bio *r1_bio)
 			     rdev->recovery_offset >= sect + s)) &&
 			    rdev_has_badblock(rdev, sect, s) == 0) {
 				atomic_inc(&rdev->nr_pending);
-				if (sync_page_io(rdev, sect, s<<9,
-					 conf->tmppage, REQ_OP_READ, false))
+				if (sync_folio_io(rdev, sect, s<<9, 0,
+					 conf->tmpfolio, REQ_OP_READ, false))
 					success = 1;
 				rdev_dec_pending(rdev, mddev);
 				if (success)
@@ -2447,7 +2447,8 @@ static void fix_read_error(struct r1conf *conf, struct r1bio *r1_bio)
 			    !test_bit(Faulty, &rdev->flags)) {
 				atomic_inc(&rdev->nr_pending);
 				r1_sync_page_io(rdev, sect, s,
-						conf->tmppage, REQ_OP_WRITE);
+						folio_page(conf->tmpfolio, 0),
+						REQ_OP_WRITE);
 				rdev_dec_pending(rdev, mddev);
 			}
 		}
@@ -2461,7 +2462,8 @@ static void fix_read_error(struct r1conf *conf, struct r1bio *r1_bio)
 			    !test_bit(Faulty, &rdev->flags)) {
 				atomic_inc(&rdev->nr_pending);
 				if (r1_sync_page_io(rdev, sect, s,
-						conf->tmppage, REQ_OP_READ)) {
+						folio_page(conf->tmpfolio, 0),
+						REQ_OP_READ)) {
 					atomic_add(s, &rdev->corrected_errors);
 					pr_info("md/raid1:%s: read error corrected (%d sectors at %llu on %pg)\n",
 						mdname(mddev), s,
@@ -3099,8 +3101,8 @@ static struct r1conf *setup_conf(struct mddev *mddev)
 	if (!conf->mirrors)
 		goto abort;
 
-	conf->tmppage = alloc_page(GFP_KERNEL);
-	if (!conf->tmppage)
+	conf->tmpfolio = folio_alloc(GFP_KERNEL, 0);
+	if (!conf->tmpfolio)
 		goto abort;
 
 	r1bio_size = offsetof(struct r1bio, bios[mddev->raid_disks * 2]);
@@ -3175,7 +3177,7 @@ static struct r1conf *setup_conf(struct mddev *mddev)
 	if (conf) {
 		mempool_destroy(conf->r1bio_pool);
 		kfree(conf->mirrors);
-		safe_put_page(conf->tmppage);
+		safe_folio_put(conf->tmpfolio);
 		kfree(conf->nr_pending);
 		kfree(conf->nr_waiting);
 		kfree(conf->nr_queued);
@@ -3290,7 +3292,7 @@ static void raid1_free(struct mddev *mddev, void *priv)
 
 	mempool_destroy(conf->r1bio_pool);
 	kfree(conf->mirrors);
-	safe_put_page(conf->tmppage);
+	safe_folio_put(conf->tmpfolio);
 	kfree(conf->nr_pending);
 	kfree(conf->nr_waiting);
 	kfree(conf->nr_queued);
-- 
2.39.2
From: linan666@huaweicloud.com
To: song@kernel.org, yukuai@fnnas.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
    linan666@huaweicloud.com, yangerkun@huawei.com, yi.zhang@huawei.com
Subject: [PATCH v3 5/8] md/raid10: use folio for tmppage
Date: Thu, 16 Apr 2026 11:37:58 +0800
Message-Id: <20260416033801.786415-6-linan666@huaweicloud.com>
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
References: <20260416033801.786415-1-linan666@huaweicloud.com>

From: Li Nan

Convert tmppage to tmpfolio and use it throughout in raid10.

Signed-off-by: Li Nan
Reviewed-by: Xiao Ni
---
 drivers/md/raid10.h |  2 +-
 drivers/md/raid10.c | 37 +++++++++++++++++++------------------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
index ec79d87fb92f..19f37439a4e2 100644
--- a/drivers/md/raid10.h
+++ b/drivers/md/raid10.h
@@ -89,7 +89,7 @@ struct r10conf {
 
 	mempool_t r10bio_pool;
 	mempool_t r10buf_pool;
-	struct page *tmppage;
+	struct folio *tmpfolio;
 	struct bio_set bio_split;
 
 	/* When taking over an array from a different personality, we store
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 90c1036f6ec4..26f93040cd13 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2581,13 +2581,13 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
 	}
 }
 
-static int r10_sync_page_io(struct md_rdev *rdev, sector_t sector,
-			    int sectors, struct page *page, enum req_op op)
+static int r10_sync_folio_io(struct md_rdev *rdev, sector_t sector,
+			    int sectors, struct folio *folio, enum req_op op)
 {
 	if (rdev_has_badblock(rdev, sector, sectors) &&
 	    (op == REQ_OP_READ || test_bit(WriteErrorSeen, &rdev->flags)))
 		return -1;
-	if (sync_page_io(rdev, sector, sectors << 9, page, op, false))
+	if (sync_folio_io(rdev, sector, sectors << 9, 0, folio, op, false))
 		/* success */
 		return 1;
 	if (op == REQ_OP_WRITE) {
@@ -2650,12 +2650,13 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
 				    r10_bio->devs[sl].addr + sect,
 				    s) == 0) {
 				atomic_inc(&rdev->nr_pending);
-				success = sync_page_io(rdev,
-						       r10_bio->devs[sl].addr +
-						       sect,
-						       s<<9,
-						       conf->tmppage,
-						       REQ_OP_READ, false);
+				success = sync_folio_io(rdev,
+							r10_bio->devs[sl].addr +
+							sect,
+							s<<9,
+							0,
+							conf->tmpfolio,
+							REQ_OP_READ, false);
 				rdev_dec_pending(rdev, mddev);
 				if (success)
 					break;
@@ -2698,10 +2699,10 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
 				continue;
 
 			atomic_inc(&rdev->nr_pending);
-			if (r10_sync_page_io(rdev,
-					     r10_bio->devs[sl].addr +
-					     sect,
-					     s, conf->tmppage, REQ_OP_WRITE)
+			if (r10_sync_folio_io(rdev,
+					      r10_bio->devs[sl].addr +
+					      sect,
+					      s, conf->tmpfolio, REQ_OP_WRITE)
 			    == 0) {
 				/* Well, this device is dead */
 				pr_notice("md/raid10:%s: read correction write failed (%d sectors at %llu on %pg)\n",
@@ -2730,10 +2731,10 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
 				continue;
 
 			atomic_inc(&rdev->nr_pending);
-			switch (r10_sync_page_io(rdev,
+			switch (r10_sync_folio_io(rdev,
					     r10_bio->devs[sl].addr + sect,
-					     s, conf->tmppage, REQ_OP_READ)) {
+					     s, conf->tmpfolio, REQ_OP_READ)) {
 			case 0:
 				/* Well, this device is dead */
 				pr_notice("md/raid10:%s: unable to read back corrected sectors (%d sectors at %llu on %pg)\n",
@@ -3823,7 +3824,7 @@ static void raid10_free_conf(struct r10conf *conf)
 	kfree(conf->mirrors);
 	kfree(conf->mirrors_old);
 	kfree(conf->mirrors_new);
-	safe_put_page(conf->tmppage);
+	safe_folio_put(conf->tmpfolio);
 	bioset_exit(&conf->bio_split);
 	kfree(conf);
 }
@@ -3861,8 +3862,8 @@ static struct r10conf *setup_conf(struct mddev *mddev)
 	if (!conf->mirrors)
 		goto out;
 
-	conf->tmppage = alloc_page(GFP_KERNEL);
-	if (!conf->tmppage)
+	conf->tmpfolio = folio_alloc(GFP_KERNEL, 0);
+	if (!conf->tmpfolio)
 		goto out;
 
 	conf->geo = geo;
-- 
2.39.2
smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1776311444; cv=none; b=MoVDWYso1rFf+6+ohgVbnbrsqrjoU4tLNEAs9K++ykPFhUVmNnpzKPXU5zggZA7OsJCzdDDmHHoBwGrloke6oofTo+5J8o2rbyoxj7RmwelVPCCHDVtV39pSjJyzG5iyOoymV+QaGUkOe3iGJJ/ZU1ypjgTW3i0I0cicXevFPfs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1776311444; c=relaxed/simple; bh=05mtuN7knrpkINVG9TivDSY1ZhwXO8Zi6tzrZAgnXjY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=gqakGk4Xzeyg6tthiX8SgG5hzThoDkhTGahK57pR0z4GEwoJgRmKruzSASe3hMQ9wleZxAO73GbBtU3YcsB+ZqLNJzBKD4twj+gjKTZdLVIKyOz+hKTES5sUU4brhDLV6an3UqAFkZ9AhiG3HBDueD1v3VxVArFNadz1U/EQxbA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com; spf=pass smtp.mailfrom=huaweicloud.com; arc=none smtp.client-ip=45.249.212.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huaweicloud.com Received: from mail.maildlp.com (unknown [172.19.163.177]) by dggsgout12.his.huawei.com (SkyGuard) with ESMTPS id 4fx3tw59WqzKHMKF; Thu, 16 Apr 2026 11:50:16 +0800 (CST) Received: from mail02.huawei.com (unknown [10.116.40.252]) by mail.maildlp.com (Postfix) with ESMTP id 657C740539; Thu, 16 Apr 2026 11:50:27 +0800 (CST) Received: from huaweicloud.com (unknown [10.50.87.129]) by APP3 (Coremail) with SMTP id _Ch0CgCHM72BXOBpn95JAg--.61021S10; Thu, 16 Apr 2026 11:50:27 +0800 (CST) From: linan666@huaweicloud.com To: song@kernel.org, yukuai@fnnas.com Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, linan666@huaweicloud.com, yangerkun@huawei.com, yi.zhang@huawei.com Subject: [PATCH v3 6/8] md/raid1,raid10: use folio for sync path IO Date: Thu, 16 Apr 2026 11:37:59 +0800 Message-Id: <20260416033801.786415-7-linan666@huaweicloud.com> X-Mailer: 
git-send-email 2.39.2
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
References: <20260416033801.786415-1-linan666@huaweicloud.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Li Nan

Convert all IO on the sync path to use folios, and rename page-related
identifiers to match. Since a RESYNC_BLOCK_SIZE (64K) allocation is more
likely to fail than a 4K one, retry with progressively lower orders to
improve allocation reliability.
The rf->folio instances of one r1/10_bio may end up with different orders,
so use the minimum order when setting the r1/10_bio sector count, to avoid
exceeding a folio's size when the folio is later added to an IO.

Clean up:
1. Remove resync_get_all_folio() and invoke folio_get() directly instead.
2. Remove the redundant while(0) loop in md_bio_reset_resync_folio().
3. Remove the bio variable in r1buf_pool_alloc() and r10buf_pool_alloc()
   by referencing r10_bio->devs[j].bio directly.
4. Remove RESYNC_PAGES.
5. Remove resync_fetch_folio() and access 'rf->folio' directly.
6. Remove resync_free_folio() and call folio_put() directly.
7. Clean up the sync IO size calculation in raid1/10_sync_request.

Signed-off-by: Li Nan
---
 drivers/md/md.c       |   2 +-
 drivers/md/raid1-10.c |  80 ++++---------
 drivers/md/raid1.c    | 209 +++++++++++++++-------------------
 drivers/md/raid10.c   | 254 +++++++++++++++++++++---------------------
 4 files changed, 240 insertions(+), 305 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 5e83914d5c14..6554b849ac74 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9440,7 +9440,7 @@ static bool sync_io_within_limit(struct mddev *mddev)
 {
 	/*
 	 * For raid456, sync IO is stripe(4k) per IO, for other levels, it's
-	 * RESYNC_PAGES(64k) per IO.
+	 * RESYNC_BLOCK_SIZE(64k) per IO.
 	 */
 	return atomic_read(&mddev->recovery_active) < (raid_is_456(mddev) ?
8 : 128) * sync_io_depth(mddev); diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c index cda531d0720b..10200b0a3fd2 100644 --- a/drivers/md/raid1-10.c +++ b/drivers/md/raid1-10.c @@ -1,7 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* Maximum size of each resync request */ #define RESYNC_BLOCK_SIZE (64*1024) -#define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE) #define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9) =20 /* when we get a read error on a read-only array, we redirect to another @@ -20,9 +19,9 @@ #define MAX_PLUG_BIO 32 =20 /* for managing resync I/O pages */ -struct resync_pages { +struct resync_folio { void *raid_bio; - struct page *pages[RESYNC_PAGES]; + struct folio *folio; }; =20 struct raid1_plug_cb { @@ -36,77 +35,44 @@ static void rbio_pool_free(void *rbio, void *data) kfree(rbio); } =20 -static inline int resync_alloc_pages(struct resync_pages *rp, - gfp_t gfp_flags) +static inline int resync_alloc_folio(struct resync_folio *rf, + gfp_t gfp_flags, int *order) { - int i; + struct folio *folio; =20 - for (i =3D 0; i < RESYNC_PAGES; i++) { - rp->pages[i] =3D alloc_page(gfp_flags); - if (!rp->pages[i]) - goto out_free; - } + do { + folio =3D folio_alloc(gfp_flags, *order); + if (folio) + break; + } while (--(*order) > 0); =20 + if (!folio) + return -ENOMEM; + + rf->folio =3D folio; return 0; - -out_free: - while (--i >=3D 0) - put_page(rp->pages[i]); - return -ENOMEM; -} - -static inline void resync_free_pages(struct resync_pages *rp) -{ - int i; - - for (i =3D 0; i < RESYNC_PAGES; i++) - put_page(rp->pages[i]); -} - -static inline void resync_get_all_pages(struct resync_pages *rp) -{ - int i; - - for (i =3D 0; i < RESYNC_PAGES; i++) - get_page(rp->pages[i]); -} - -static inline struct page *resync_fetch_page(struct resync_pages *rp, - unsigned idx) -{ - if (WARN_ON_ONCE(idx >=3D RESYNC_PAGES)) - return NULL; - return rp->pages[idx]; } =20 /* - * 'strct resync_pages' stores actual pages used for doing the resync + * 'strct 
resync_folio' stores actual pages used for doing the resync * IO, and it is per-bio, so make .bi_private points to it. */ -static inline struct resync_pages *get_resync_pages(struct bio *bio) +static inline struct resync_folio *get_resync_folio(struct bio *bio) { return bio->bi_private; } =20 /* generally called after bio_reset() for reseting bvec */ -static void md_bio_reset_resync_pages(struct bio *bio, struct resync_pages= *rp, +static void md_bio_reset_resync_folio(struct bio *bio, struct resync_folio= *rf, int size) { - int idx =3D 0; - /* initialize bvec table again */ - do { - struct page *page =3D resync_fetch_page(rp, idx); - int len =3D min_t(int, size, PAGE_SIZE); - - if (WARN_ON(!bio_add_page(bio, page, len, 0))) { - bio->bi_status =3D BLK_STS_RESOURCE; - bio_endio(bio); - return; - } - - size -=3D len; - } while (idx++ < RESYNC_PAGES && size > 0); + if (WARN_ON(!bio_add_folio(bio, rf->folio, + min_t(int, size, RESYNC_BLOCK_SIZE), + 0))) { + bio->bi_status =3D BLK_STS_RESOURCE; + bio_endio(bio); + } } =20 =20 diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index a72abdc37a2d..724fd4f2cc3a 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -120,11 +120,11 @@ static void remove_serial(struct md_rdev *rdev, secto= r_t lo, sector_t hi) =20 /* * for resync bio, r1bio pointer can be retrieved from the per-bio - * 'struct resync_pages'. + * 'struct resync_folio'. 
*/ static inline struct r1bio *get_resync_r1bio(struct bio *bio) { - return get_resync_pages(bio)->raid_bio; + return get_resync_folio(bio)->raid_bio; } =20 static void *r1bio_pool_alloc(gfp_t gfp_flags, struct r1conf *conf) @@ -146,70 +146,69 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void = *data) struct r1conf *conf =3D data; struct r1bio *r1_bio; struct bio *bio; - int need_pages; + int need_folio; int j; - struct resync_pages *rps; + struct resync_folio *rfs; + int order =3D get_order(RESYNC_BLOCK_SIZE); =20 r1_bio =3D r1bio_pool_alloc(gfp_flags, conf); if (!r1_bio) return NULL; =20 - rps =3D kmalloc_array(conf->raid_disks * 2, sizeof(struct resync_pages), + rfs =3D kmalloc_array(conf->raid_disks * 2, sizeof(struct resync_folio), gfp_flags); - if (!rps) + if (!rfs) goto out_free_r1bio; =20 /* * Allocate bios : 1 for reading, n-1 for writing */ for (j =3D conf->raid_disks * 2; j-- ; ) { - bio =3D bio_kmalloc(RESYNC_PAGES, gfp_flags); + bio =3D bio_kmalloc(1, gfp_flags); if (!bio) goto out_free_bio; - bio_init_inline(bio, NULL, RESYNC_PAGES, 0); + bio_init_inline(bio, NULL, 1, 0); r1_bio->bios[j] =3D bio; } /* - * Allocate RESYNC_PAGES data pages and attach them to - * the first bio. + * Allocate data folio and attach it to the first bio. * If this is a user-requested check/repair, allocate - * RESYNC_PAGES for each bio. + * folio for each bio. 
*/ if (test_bit(MD_RECOVERY_REQUESTED, &conf->mddev->recovery)) - need_pages =3D conf->raid_disks * 2; + need_folio =3D conf->raid_disks * 2; else - need_pages =3D 1; + need_folio =3D 1; for (j =3D 0; j < conf->raid_disks * 2; j++) { - struct resync_pages *rp =3D &rps[j]; + struct resync_folio *rf =3D &rfs[j]; =20 - bio =3D r1_bio->bios[j]; - - if (j < need_pages) { - if (resync_alloc_pages(rp, gfp_flags)) - goto out_free_pages; + if (j < need_folio) { + if (resync_alloc_folio(rf, gfp_flags, &order)) + goto out_free_folio; } else { - memcpy(rp, &rps[0], sizeof(*rp)); - resync_get_all_pages(rp); + memcpy(rf, &rfs[0], sizeof(*rf)); + folio_get(rf->folio); } =20 - rp->raid_bio =3D r1_bio; - bio->bi_private =3D rp; + rf->raid_bio =3D r1_bio; + r1_bio->bios[j]->bi_private =3D rf; } =20 + r1_bio->sectors =3D 1 << (order + PAGE_SECTORS_SHIFT); r1_bio->master_bio =3D NULL; =20 return r1_bio; =20 -out_free_pages: +out_free_folio: while (--j >=3D 0) - resync_free_pages(&rps[j]); + folio_put(rfs[j].folio); =20 out_free_bio: while (++j < conf->raid_disks * 2) { bio_uninit(r1_bio->bios[j]); kfree(r1_bio->bios[j]); } - kfree(rps); + kfree(rfs); =20 out_free_r1bio: rbio_pool_free(r1_bio, data); @@ -221,17 +220,17 @@ static void r1buf_pool_free(void *__r1_bio, void *dat= a) struct r1conf *conf =3D data; int i; struct r1bio *r1bio =3D __r1_bio; - struct resync_pages *rp =3D NULL; + struct resync_folio *rf =3D NULL; =20 for (i =3D conf->raid_disks * 2; i--; ) { - rp =3D get_resync_pages(r1bio->bios[i]); - resync_free_pages(rp); + rf =3D get_resync_folio(r1bio->bios[i]); + folio_put(rf->folio); bio_uninit(r1bio->bios[i]); kfree(r1bio->bios[i]); } =20 - /* resync pages array stored in the 1st bio's .bi_private */ - kfree(rp); + /* resync folio stored in the 1st bio's .bi_private */ + kfree(rf); =20 rbio_pool_free(r1bio, data); } @@ -2095,10 +2094,10 @@ static void end_sync_write(struct bio *bio) put_sync_write_buf(r1_bio); } =20 -static int r1_sync_page_io(struct md_rdev *rdev, 
sector_t sector, - int sectors, struct page *page, blk_opf_t rw) +static int r1_sync_folio_io(struct md_rdev *rdev, sector_t sector, int sec= tors, + int off, struct folio *folio, blk_opf_t rw) { - if (sync_page_io(rdev, sector, sectors << 9, page, rw, false)) + if (sync_folio_io(rdev, sector, sectors << 9, off, folio, rw, false)) /* success */ return 1; if (rw =3D=3D REQ_OP_WRITE) { @@ -2129,10 +2128,10 @@ static int fix_sync_read_error(struct r1bio *r1_bio) struct mddev *mddev =3D r1_bio->mddev; struct r1conf *conf =3D mddev->private; struct bio *bio =3D r1_bio->bios[r1_bio->read_disk]; - struct page **pages =3D get_resync_pages(bio)->pages; + struct folio *folio =3D get_resync_folio(bio)->folio; sector_t sect =3D r1_bio->sector; int sectors =3D r1_bio->sectors; - int idx =3D 0; + int off =3D 0; struct md_rdev *rdev; =20 rdev =3D conf->mirrors[r1_bio->read_disk].rdev; @@ -2162,9 +2161,8 @@ static int fix_sync_read_error(struct r1bio *r1_bio) * active, and resync is currently active */ rdev =3D conf->mirrors[d].rdev; - if (sync_page_io(rdev, sect, s<<9, - pages[idx], - REQ_OP_READ, false)) { + if (sync_folio_io(rdev, sect, s<<9, off, folio, + REQ_OP_READ, false)) { success =3D 1; break; } @@ -2197,7 +2195,7 @@ static int fix_sync_read_error(struct r1bio *r1_bio) /* Try next page */ sectors -=3D s; sect +=3D s; - idx++; + off +=3D s << 9; continue; } =20 @@ -2210,8 +2208,7 @@ static int fix_sync_read_error(struct r1bio *r1_bio) if (r1_bio->bios[d]->bi_end_io !=3D end_sync_read) continue; rdev =3D conf->mirrors[d].rdev; - if (r1_sync_page_io(rdev, sect, s, - pages[idx], + if (r1_sync_folio_io(rdev, sect, s, off, folio, REQ_OP_WRITE) =3D=3D 0) { r1_bio->bios[d]->bi_end_io =3D NULL; rdev_dec_pending(rdev, mddev); @@ -2225,14 +2222,13 @@ static int fix_sync_read_error(struct r1bio *r1_bio) if (r1_bio->bios[d]->bi_end_io !=3D end_sync_read) continue; rdev =3D conf->mirrors[d].rdev; - if (r1_sync_page_io(rdev, sect, s, - pages[idx], + if (r1_sync_folio_io(rdev, sect, s, 
off, folio, REQ_OP_READ) !=3D 0) atomic_add(s, &rdev->corrected_errors); } sectors -=3D s; sect +=3D s; - idx ++; + off +=3D s << 9; } set_bit(R1BIO_Uptodate, &r1_bio->state); bio->bi_status =3D 0; @@ -2252,14 +2248,12 @@ static void process_checks(struct r1bio *r1_bio) struct r1conf *conf =3D mddev->private; int primary; int i; - int vcnt; =20 /* Fix variable parts of all bios */ - vcnt =3D (r1_bio->sectors + PAGE_SIZE / 512 - 1) >> (PAGE_SHIFT - 9); for (i =3D 0; i < conf->raid_disks * 2; i++) { blk_status_t status; struct bio *b =3D r1_bio->bios[i]; - struct resync_pages *rp =3D get_resync_pages(b); + struct resync_folio *rf =3D get_resync_folio(b); if (b->bi_end_io !=3D end_sync_read) continue; /* fixup the bio for reuse, but preserve errno */ @@ -2269,11 +2263,11 @@ static void process_checks(struct r1bio *r1_bio) b->bi_iter.bi_sector =3D r1_bio->sector + conf->mirrors[i].rdev->data_offset; b->bi_end_io =3D end_sync_read; - rp->raid_bio =3D r1_bio; - b->bi_private =3D rp; + rf->raid_bio =3D r1_bio; + b->bi_private =3D rf; =20 /* initialize bvec table again */ - md_bio_reset_resync_pages(b, rp, r1_bio->sectors << 9); + md_bio_reset_resync_folio(b, rf, r1_bio->sectors << 9); } for (primary =3D 0; primary < conf->raid_disks * 2; primary++) if (r1_bio->bios[primary]->bi_end_io =3D=3D end_sync_read && @@ -2284,44 +2278,39 @@ static void process_checks(struct r1bio *r1_bio) } r1_bio->read_disk =3D primary; for (i =3D 0; i < conf->raid_disks * 2; i++) { - int j =3D 0; struct bio *pbio =3D r1_bio->bios[primary]; struct bio *sbio =3D r1_bio->bios[i]; blk_status_t status =3D sbio->bi_status; - struct page **ppages =3D get_resync_pages(pbio)->pages; - struct page **spages =3D get_resync_pages(sbio)->pages; - struct bio_vec *bi; - int page_len[RESYNC_PAGES] =3D { 0 }; - struct bvec_iter_all iter_all; + struct folio *pfolio =3D get_resync_folio(pbio)->folio; + struct folio *sfolio =3D get_resync_folio(sbio)->folio; =20 if (sbio->bi_end_io !=3D end_sync_read) continue; /* 
Now we can 'fixup' the error value */ sbio->bi_status =3D 0; =20 - bio_for_each_segment_all(bi, sbio, iter_all) - page_len[j++] =3D bi->bv_len; - - if (!status) { - for (j =3D vcnt; j-- ; ) { - if (memcmp(page_address(ppages[j]), - page_address(spages[j]), - page_len[j])) - break; - } - } else - j =3D 0; - if (j >=3D 0) + /* + * Copy data and submit write in two cases: + * - IO error (non-zero status) + * - Data inconsistency and not a CHECK operation. + */ + if (status) { atomic64_add(r1_bio->sectors, &mddev->resync_mismatches); - if (j < 0 || (test_bit(MD_RECOVERY_CHECK, &mddev->recovery) - && !status)) { - /* No need to write to this device. */ - sbio->bi_end_io =3D NULL; - rdev_dec_pending(conf->mirrors[i].rdev, mddev); + bio_copy_data(sbio, pbio); continue; + } else if (memcmp(folio_address(pfolio), + folio_address(sfolio), + r1_bio->sectors << 9)) { + atomic64_add(r1_bio->sectors, &mddev->resync_mismatches); + if (!test_bit(MD_RECOVERY_CHECK, &mddev->recovery)) { + bio_copy_data(sbio, pbio); + continue; + } } =20 - bio_copy_data(sbio, pbio); + /* No need to write to this device. 
*/ + sbio->bi_end_io =3D NULL; + rdev_dec_pending(conf->mirrors[i].rdev, mddev); } } =20 @@ -2446,9 +2435,8 @@ static void fix_read_error(struct r1conf *conf, struc= t r1bio *r1_bio) if (rdev && !test_bit(Faulty, &rdev->flags)) { atomic_inc(&rdev->nr_pending); - r1_sync_page_io(rdev, sect, s, - folio_page(conf->tmpfolio, 0), - REQ_OP_WRITE); + r1_sync_folio_io(rdev, sect, s, 0, + conf->tmpfolio, REQ_OP_WRITE); rdev_dec_pending(rdev, mddev); } } @@ -2461,9 +2449,8 @@ static void fix_read_error(struct r1conf *conf, struc= t r1bio *r1_bio) if (rdev && !test_bit(Faulty, &rdev->flags)) { atomic_inc(&rdev->nr_pending); - if (r1_sync_page_io(rdev, sect, s, - folio_page(conf->tmpfolio, 0), - REQ_OP_READ)) { + if (r1_sync_folio_io(rdev, sect, s, 0, + conf->tmpfolio, REQ_OP_READ)) { atomic_add(s, &rdev->corrected_errors); pr_info("md/raid1:%s: read error corrected (%d sectors at %llu on %pg= )\n", mdname(mddev), s, @@ -2738,15 +2725,15 @@ static int init_resync(struct r1conf *conf) static struct r1bio *raid1_alloc_init_r1buf(struct r1conf *conf) { struct r1bio *r1bio =3D mempool_alloc(&conf->r1buf_pool, GFP_NOIO); - struct resync_pages *rps; + struct resync_folio *rfs; struct bio *bio; int i; =20 for (i =3D conf->raid_disks * 2; i--; ) { bio =3D r1bio->bios[i]; - rps =3D bio->bi_private; + rfs =3D bio->bi_private; bio_reset(bio, NULL, 0); - bio->bi_private =3D rps; + bio->bi_private =3D rfs; } r1bio->master_bio =3D NULL; return r1bio; @@ -2775,10 +2762,9 @@ static sector_t raid1_sync_request(struct mddev *mdd= ev, sector_t sector_nr, int write_targets =3D 0, read_targets =3D 0; sector_t sync_blocks; bool still_degraded =3D false; - int good_sectors =3D RESYNC_SECTORS; + int good_sectors; int min_bad =3D 0; /* number of sectors that are bad in all devices */ int idx =3D sector_to_idx(sector_nr); - int page_idx =3D 0; =20 if (!mempool_initialized(&conf->r1buf_pool)) if (init_resync(conf)) @@ -2858,8 +2844,11 @@ static sector_t raid1_sync_request(struct mddev *mdd= ev, sector_t 
sector_nr, r1_bio->sector =3D sector_nr; r1_bio->state =3D 0; set_bit(R1BIO_IsSync, &r1_bio->state); - /* make sure good_sectors won't go across barrier unit boundary */ - good_sectors =3D align_to_barrier_unit_end(sector_nr, good_sectors); + /* + * make sure good_sectors won't go across barrier unit boundary. + * r1_bio->sectors <=3D RESYNC_SECTORS. + */ + good_sectors =3D align_to_barrier_unit_end(sector_nr, r1_bio->sectors); =20 for (i =3D 0; i < conf->raid_disks * 2; i++) { struct md_rdev *rdev; @@ -2979,44 +2968,28 @@ static sector_t raid1_sync_request(struct mddev *md= dev, sector_t sector_nr, max_sector =3D mddev->resync_max; /* Don't do IO beyond here */ if (max_sector > sector_nr + good_sectors) max_sector =3D sector_nr + good_sectors; - nr_sectors =3D 0; - sync_blocks =3D 0; do { - struct page *page; - int len =3D PAGE_SIZE; - if (sector_nr + (len>>9) > max_sector) - len =3D (max_sector - sector_nr) << 9; - if (len =3D=3D 0) + nr_sectors =3D max_sector - sector_nr; + if (nr_sectors =3D=3D 0) break; - if (sync_blocks =3D=3D 0) { - if (!md_bitmap_start_sync(mddev, sector_nr, - &sync_blocks, still_degraded) && - !conf->fullsync && - !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) - break; - if ((len >> 9) > sync_blocks) - len =3D sync_blocks<<9; - } + if (!md_bitmap_start_sync(mddev, sector_nr, + &sync_blocks, still_degraded) && + !conf->fullsync && + !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) + break; + if (nr_sectors > sync_blocks) + nr_sectors =3D sync_blocks; =20 for (i =3D 0 ; i < conf->raid_disks * 2; i++) { - struct resync_pages *rp; - bio =3D r1_bio->bios[i]; - rp =3D get_resync_pages(bio); if (bio->bi_end_io) { - page =3D resync_fetch_page(rp, page_idx); + struct resync_folio *rf =3D get_resync_folio(bio); =20 - /* - * won't fail because the vec table is big - * enough to hold all these pages - */ - __bio_add_page(bio, page, len, 0); + bio_add_folio_nofail(bio, rf->folio, nr_sectors << 9, 0); } } - nr_sectors +=3D len>>9; - sector_nr 
+=3D len>>9; - sync_blocks -=3D (len>>9); - } while (++page_idx < RESYNC_PAGES); + sector_nr +=3D nr_sectors; + } while (0); =20 r1_bio->sectors =3D nr_sectors; =20 diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index 26f93040cd13..3638e00fe420 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -96,11 +96,11 @@ static void end_reshape(struct r10conf *conf); =20 /* * for resync bio, r10bio pointer can be retrieved from the per-bio - * 'struct resync_pages'. + * 'struct resync_folio'. */ static inline struct r10bio *get_resync_r10bio(struct bio *bio) { - return get_resync_pages(bio)->raid_bio; + return get_resync_folio(bio)->raid_bio; } =20 static void * r10bio_pool_alloc(gfp_t gfp_flags, void *data) @@ -133,8 +133,9 @@ static void * r10buf_pool_alloc(gfp_t gfp_flags, void *= data) struct r10bio *r10_bio; struct bio *bio; int j; - int nalloc, nalloc_rp; - struct resync_pages *rps; + int nalloc, nalloc_rf; + struct resync_folio *rfs; + int order =3D get_order(RESYNC_BLOCK_SIZE); =20 r10_bio =3D r10bio_pool_alloc(gfp_flags, conf); if (!r10_bio) @@ -148,66 +149,64 @@ static void * r10buf_pool_alloc(gfp_t gfp_flags, void= *data) =20 /* allocate once for all bios */ if (!conf->have_replacement) - nalloc_rp =3D nalloc; + nalloc_rf =3D nalloc; else - nalloc_rp =3D nalloc * 2; - rps =3D kmalloc_array(nalloc_rp, sizeof(struct resync_pages), gfp_flags); - if (!rps) + nalloc_rf =3D nalloc * 2; + rfs =3D kmalloc_array(nalloc_rf, sizeof(struct resync_folio), gfp_flags); + if (!rfs) goto out_free_r10bio; =20 /* * Allocate bios. 
*/ for (j =3D nalloc ; j-- ; ) { - bio =3D bio_kmalloc(RESYNC_PAGES, gfp_flags); + bio =3D bio_kmalloc(1, gfp_flags); if (!bio) goto out_free_bio; - bio_init_inline(bio, NULL, RESYNC_PAGES, 0); + bio_init_inline(bio, NULL, 1, 0); r10_bio->devs[j].bio =3D bio; if (!conf->have_replacement) continue; - bio =3D bio_kmalloc(RESYNC_PAGES, gfp_flags); + bio =3D bio_kmalloc(1, gfp_flags); if (!bio) goto out_free_bio; - bio_init_inline(bio, NULL, RESYNC_PAGES, 0); + bio_init_inline(bio, NULL, 1, 0); r10_bio->devs[j].repl_bio =3D bio; } /* - * Allocate RESYNC_PAGES data pages and attach them - * where needed. + * Allocate data folio and attach it where needed. */ for (j =3D 0; j < nalloc; j++) { struct bio *rbio =3D r10_bio->devs[j].repl_bio; - struct resync_pages *rp, *rp_repl; + struct resync_folio *rf, *rf_repl; =20 - rp =3D &rps[j]; + rf =3D &rfs[j]; if (rbio) - rp_repl =3D &rps[nalloc + j]; - - bio =3D r10_bio->devs[j].bio; + rf_repl =3D &rfs[nalloc + j]; =20 if (!j || test_bit(MD_RECOVERY_SYNC, &conf->mddev->recovery)) { - if (resync_alloc_pages(rp, gfp_flags)) - goto out_free_pages; + if (resync_alloc_folio(rf, gfp_flags, &order)) + goto out_free_folio; } else { - memcpy(rp, &rps[0], sizeof(*rp)); - resync_get_all_pages(rp); + memcpy(rf, &rfs[0], sizeof(*rf)); + folio_get(rf->folio); } =20 - rp->raid_bio =3D r10_bio; - bio->bi_private =3D rp; + rf->raid_bio =3D r10_bio; + r10_bio->devs[j].bio->bi_private =3D rf; if (rbio) { - memcpy(rp_repl, rp, sizeof(*rp)); - rbio->bi_private =3D rp_repl; + memcpy(rf_repl, rf, sizeof(*rf)); + rbio->bi_private =3D rf_repl; } } =20 + r10_bio->sectors =3D 1 << (order + PAGE_SECTORS_SHIFT); return r10_bio; =20 -out_free_pages: +out_free_folio: while (--j >=3D 0) - resync_free_pages(&rps[j]); + folio_put(rfs[j].folio); =20 j =3D 0; out_free_bio: @@ -219,7 +218,7 @@ static void * r10buf_pool_alloc(gfp_t gfp_flags, void *= data) bio_uninit(r10_bio->devs[j].repl_bio); kfree(r10_bio->devs[j].repl_bio); } - kfree(rps); + kfree(rfs); 
out_free_r10bio: rbio_pool_free(r10_bio, conf); return NULL; @@ -230,14 +229,14 @@ static void r10buf_pool_free(void *__r10_bio, void *d= ata) struct r10conf *conf =3D data; struct r10bio *r10bio =3D __r10_bio; int j; - struct resync_pages *rp =3D NULL; + struct resync_folio *rf =3D NULL; =20 for (j =3D conf->copies; j--; ) { struct bio *bio =3D r10bio->devs[j].bio; =20 if (bio) { - rp =3D get_resync_pages(bio); - resync_free_pages(rp); + rf =3D get_resync_folio(bio); + folio_put(rf->folio); bio_uninit(bio); kfree(bio); } @@ -250,7 +249,7 @@ static void r10buf_pool_free(void *__r10_bio, void *dat= a) } =20 /* resync pages array stored in the 1st bio's .bi_private */ - kfree(rp); + kfree(rf); =20 rbio_pool_free(r10bio, conf); } @@ -2342,8 +2341,7 @@ static void sync_request_write(struct mddev *mddev, s= truct r10bio *r10_bio) struct r10conf *conf =3D mddev->private; int i, first; struct bio *tbio, *fbio; - int vcnt; - struct page **tpages, **fpages; + struct folio *tfolio, *ffolio; =20 atomic_set(&r10_bio->remaining, 1); =20 @@ -2359,14 +2357,13 @@ static void sync_request_write(struct mddev *mddev,= struct r10bio *r10_bio) fbio =3D r10_bio->devs[i].bio; fbio->bi_iter.bi_size =3D r10_bio->sectors << 9; fbio->bi_iter.bi_idx =3D 0; - fpages =3D get_resync_pages(fbio)->pages; + ffolio =3D get_resync_folio(fbio)->folio; =20 - vcnt =3D (r10_bio->sectors + (PAGE_SIZE >> 9) - 1) >> (PAGE_SHIFT - 9); /* now find blocks with errors */ for (i=3D0 ; i < conf->copies ; i++) { - int j, d; + int d; struct md_rdev *rdev; - struct resync_pages *rp; + struct resync_folio *rf; =20 tbio =3D r10_bio->devs[i].bio; =20 @@ -2375,31 +2372,23 @@ static void sync_request_write(struct mddev *mddev,= struct r10bio *r10_bio) if (i =3D=3D first) continue; =20 - tpages =3D get_resync_pages(tbio)->pages; + tfolio =3D get_resync_folio(tbio)->folio; d =3D r10_bio->devs[i].devnum; rdev =3D conf->mirrors[d].rdev; if (!r10_bio->devs[i].bio->bi_status) { /* We know that the bi_io_vec layout is the same 
for * both 'first' and 'i', so we just compare them. - * All vec entries are PAGE_SIZE; */ - int sectors =3D r10_bio->sectors; - for (j =3D 0; j < vcnt; j++) { - int len =3D PAGE_SIZE; - if (sectors < (len / 512)) - len =3D sectors * 512; - if (memcmp(page_address(fpages[j]), - page_address(tpages[j]), - len)) - break; - sectors -=3D len/512; + if (memcmp(folio_address(ffolio), + folio_address(tfolio), + r10_bio->sectors << 9)) { + atomic64_add(r10_bio->sectors, + &mddev->resync_mismatches); + if (test_bit(MD_RECOVERY_CHECK, + &mddev->recovery)) + /* Don't fix anything. */ + continue; } - if (j =3D=3D vcnt) - continue; - atomic64_add(r10_bio->sectors, &mddev->resync_mismatches); - if (test_bit(MD_RECOVERY_CHECK, &mddev->recovery)) - /* Don't fix anything. */ - continue; } else if (test_bit(FailFast, &rdev->flags)) { /* Just give up on this device */ md_error(rdev->mddev, rdev); @@ -2410,13 +2399,13 @@ static void sync_request_write(struct mddev *mddev,= struct r10bio *r10_bio) * First we need to fixup bv_offset, bv_len and * bi_vecs, as the read request might have corrupted these */ - rp =3D get_resync_pages(tbio); + rf =3D get_resync_folio(tbio); bio_reset(tbio, conf->mirrors[d].rdev->bdev, REQ_OP_WRITE); =20 - md_bio_reset_resync_pages(tbio, rp, fbio->bi_iter.bi_size); + md_bio_reset_resync_folio(tbio, rf, fbio->bi_iter.bi_size); =20 - rp->raid_bio =3D r10_bio; - tbio->bi_private =3D rp; + rf->raid_bio =3D r10_bio; + tbio->bi_private =3D rf; tbio->bi_iter.bi_sector =3D r10_bio->devs[i].addr; tbio->bi_end_io =3D end_sync_write; =20 @@ -2476,10 +2465,9 @@ static void fix_recovery_read_error(struct r10bio *r= 10_bio) struct bio *bio =3D r10_bio->devs[0].bio; sector_t sect =3D 0; int sectors =3D r10_bio->sectors; - int idx =3D 0; int dr =3D r10_bio->devs[0].devnum; int dw =3D r10_bio->devs[1].devnum; - struct page **pages =3D get_resync_pages(bio)->pages; + struct folio *folio =3D get_resync_folio(bio)->folio; =20 while (sectors) { int s =3D sectors; @@ -2492,19 
+2480,21 @@ static void fix_recovery_read_error(struct r10bio *r10_bio)
 
 		rdev = conf->mirrors[dr].rdev;
 		addr = r10_bio->devs[0].addr + sect;
-		ok = sync_page_io(rdev,
-				  addr,
-				  s << 9,
-				  pages[idx],
-				  REQ_OP_READ, false);
+		ok = sync_folio_io(rdev,
+				   addr,
+				   s << 9,
+				   sect << 9,
+				   folio,
+				   REQ_OP_READ, false);
 		if (ok) {
 			rdev = conf->mirrors[dw].rdev;
 			addr = r10_bio->devs[1].addr + sect;
-			ok = sync_page_io(rdev,
-					  addr,
-					  s << 9,
-					  pages[idx],
-					  REQ_OP_WRITE, false);
+			ok = sync_folio_io(rdev,
+					   addr,
+					   s << 9,
+					   sect << 9,
+					   folio,
+					   REQ_OP_WRITE, false);
 			if (!ok) {
 				set_bit(WriteErrorSeen, &rdev->flags);
 				if (!test_and_set_bit(WantReplacement,
@@ -2539,7 +2529,6 @@ static void fix_recovery_read_error(struct r10bio *r10_bio)
 
 		sectors -= s;
 		sect += s;
-		idx++;
 	}
 }
 
@@ -3050,7 +3039,7 @@ static int init_resync(struct r10conf *conf)
 static struct r10bio *raid10_alloc_init_r10buf(struct r10conf *conf)
 {
 	struct r10bio *r10bio = mempool_alloc(&conf->r10buf_pool, GFP_NOIO);
-	struct resync_pages *rp;
+	struct resync_folio *rf;
 	struct bio *bio;
 	int nalloc;
 	int i;
@@ -3063,14 +3052,14 @@ static struct r10bio *raid10_alloc_init_r10buf(struct r10conf *conf)
 
 	for (i = 0; i < nalloc; i++) {
 		bio = r10bio->devs[i].bio;
-		rp = bio->bi_private;
+		rf = bio->bi_private;
 		bio_reset(bio, NULL, 0);
-		bio->bi_private = rp;
+		bio->bi_private = rf;
 		bio = r10bio->devs[i].repl_bio;
 		if (bio) {
-			rp = bio->bi_private;
+			rf = bio->bi_private;
 			bio_reset(bio, NULL, 0);
-			bio->bi_private = rp;
+			bio->bi_private = rf;
 		}
 	}
 	return r10bio;
@@ -3156,7 +3145,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 	int max_sync = RESYNC_SECTORS;
 	sector_t sync_blocks;
 	sector_t chunk_mask = conf->geo.chunk_mask;
-	int page_idx = 0;
 
 	/*
	 * Allow skipping a full rebuild for incremental assembly
@@ -3376,6 +3364,15 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 						continue;
 					}
 				}
+
+				/*
+				 * The RESYNC_BLOCK_SIZE folio allocation might have
+				 * failed in resync_alloc_folio(). Fall back to a
+				 * smaller sync size if needed.
+				 */
+				if (max_sync > r10_bio->sectors)
+					max_sync = r10_bio->sectors;
+
 				any_working = 1;
 				bio = r10_bio->devs[0].bio;
 				bio->bi_next = biolist;
@@ -3527,7 +3524,15 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 		}
 		if (sync_blocks < max_sync)
 			max_sync = sync_blocks;
 		r10_bio = raid10_alloc_init_r10buf(conf);
+
+		/*
+		 * The RESYNC_BLOCK_SIZE folio allocation might have failed in
+		 * resync_alloc_folio(). Fall back to a smaller sync size if needed.
+		 */
+		if (max_sync > r10_bio->sectors)
+			max_sync = r10_bio->sectors;
+
 		r10_bio->state = 0;
 
 		r10_bio->mddev = mddev;
@@ -3620,29 +3625,25 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 		}
 	}
 
-	nr_sectors = 0;
 	if (sector_nr + max_sync < max_sector)
 		max_sector = sector_nr + max_sync;
 	do {
-		struct page *page;
-		int len = PAGE_SIZE;
-		if (sector_nr + (len>>9) > max_sector)
-			len = (max_sector - sector_nr) << 9;
-		if (len == 0)
+		nr_sectors = max_sector - sector_nr;
+
+		if (nr_sectors == 0)
 			break;
 		for (bio = biolist ; bio ; bio = bio->bi_next) {
-			struct resync_pages *rp = get_resync_pages(bio);
-			page = resync_fetch_page(rp, page_idx);
-			if (WARN_ON(!bio_add_page(bio, page, len, 0))) {
+			struct resync_folio *rf = get_resync_folio(bio);
+
+			if (WARN_ON(!bio_add_folio(bio, rf->folio, nr_sectors << 9, 0))) {
 				bio->bi_status = BLK_STS_RESOURCE;
 				bio_endio(bio);
 				*skipped = 1;
-				return max_sync;
+				return nr_sectors << 9;
 			}
 		}
-		nr_sectors += len>>9;
-		sector_nr += len>>9;
-	} while (++page_idx < RESYNC_PAGES);
+		sector_nr += nr_sectors;
+	} while (0);
 	r10_bio->sectors = nr_sectors;
 
 	if (mddev_is_clustered(mddev) &&
@@ -4560,7 +4561,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
 				int *skipped)
 {
 	/* We simply copy at most one chunk (smallest of old and new)
-	 * at a time, possibly less if that exceeds RESYNC_PAGES,
+	 * at a time, possibly less if that exceeds RESYNC_BLOCK_SIZE,
 	 * or we hit a bad block or something.
 	 * This might mean we pause for normal IO in the middle of
 	 * a chunk, but that is not a problem as mddev->reshape_position
@@ -4600,14 +4601,13 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
 	struct r10bio *r10_bio;
 	sector_t next, safe, last;
 	int max_sectors;
-	int nr_sectors;
 	int s;
 	struct md_rdev *rdev;
 	int need_flush = 0;
 	struct bio *blist;
 	struct bio *bio, *read_bio;
 	int sectors_done = 0;
-	struct page **pages;
+	struct folio *folio;
 
 	if (sector_nr == 0) {
 		/* If restarting in the middle, skip the initial sectors */
@@ -4709,7 +4709,12 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
 	r10_bio->mddev = mddev;
 	r10_bio->sector = sector_nr;
 	set_bit(R10BIO_IsReshape, &r10_bio->state);
-	r10_bio->sectors = last - sector_nr + 1;
+	/*
+	 * The RESYNC_BLOCK_SIZE folio allocation might have failed in
+	 * resync_alloc_folio(). Fall back to a smaller sync
+	 * size if needed.
+	 */
+	r10_bio->sectors = min_t(int, r10_bio->sectors, last - sector_nr + 1);
 	rdev = read_balance(conf, r10_bio, &max_sectors);
 	BUG_ON(!test_bit(R10BIO_Previous, &r10_bio->state));
 
@@ -4723,7 +4728,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
 		return sectors_done;
 	}
 
-	read_bio = bio_alloc_bioset(rdev->bdev, RESYNC_PAGES, REQ_OP_READ,
+	read_bio = bio_alloc_bioset(rdev->bdev, 1, REQ_OP_READ,
 				    GFP_KERNEL, &mddev->bio_set);
 	read_bio->bi_iter.bi_sector = (r10_bio->devs[r10_bio->read_slot].addr
 			       + rdev->data_offset);
@@ -4787,32 +4792,23 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
 		blist = b;
 	}
 
-	/* Now add as many pages as possible to all of these bios. */
+	/* Now add the folio to all of these bios. */
 
-	nr_sectors = 0;
-	pages = get_resync_pages(r10_bio->devs[0].bio)->pages;
-	for (s = 0 ; s < max_sectors; s += PAGE_SIZE >> 9) {
-		struct page *page = pages[s / (PAGE_SIZE >> 9)];
-		int len = (max_sectors - s) << 9;
-		if (len > PAGE_SIZE)
-			len = PAGE_SIZE;
-		for (bio = blist; bio ; bio = bio->bi_next) {
-			if (WARN_ON(!bio_add_page(bio, page, len, 0))) {
-				bio->bi_status = BLK_STS_RESOURCE;
-				bio_endio(bio);
-				return sectors_done;
-			}
+	folio = get_resync_folio(r10_bio->devs[0].bio)->folio;
+	for (bio = blist; bio ; bio = bio->bi_next) {
+		if (WARN_ON(!bio_add_folio(bio, folio, max_sectors, 0))) {
+			bio->bi_status = BLK_STS_RESOURCE;
+			bio_endio(bio);
+			return sectors_done;
 		}
-		sector_nr += len >> 9;
-		nr_sectors += len >> 9;
 	}
-	r10_bio->sectors = nr_sectors;
+	r10_bio->sectors = max_sectors >> 9;
 
 	/* Now submit the read */
 	atomic_inc(&r10_bio->remaining);
 	read_bio->bi_next = NULL;
 	submit_bio_noacct(read_bio);
-	sectors_done += nr_sectors;
+	sectors_done += max_sectors;
 	if (sector_nr <= last)
 		goto read_more;
 
@@ -4914,8 +4910,8 @@ static int handle_reshape_read_error(struct mddev *mddev,
 	struct r10conf *conf = mddev->private;
 	struct r10bio *r10b;
 	int slot = 0;
-	int idx = 0;
-	struct page **pages;
+	int sect = 0;
+	struct folio *folio;
 
 	r10b = kmalloc(struct_size(r10b, devs, conf->copies), GFP_NOIO);
 	if (!r10b) {
@@ -4923,8 +4919,8 @@ static int handle_reshape_read_error(struct mddev *mddev,
 		return -ENOMEM;
 	}
 
-	/* reshape IOs share pages from .devs[0].bio */
-	pages = get_resync_pages(r10_bio->devs[0].bio)->pages;
+	/* reshape IOs share the folio from .devs[0].bio */
+	folio = get_resync_folio(r10_bio->devs[0].bio)->folio;
 
 	r10b->sector = r10_bio->sector;
 	__raid10_find_phys(&conf->prev, r10b);
@@ -4940,19 +4936,19 @@ static int handle_reshape_read_error(struct mddev *mddev,
 		while (!success) {
 			int d = r10b->devs[slot].devnum;
 			struct md_rdev *rdev = conf->mirrors[d].rdev;
-			sector_t addr;
 			if (rdev == NULL ||
 			    test_bit(Faulty, &rdev->flags) ||
 			    !test_bit(In_sync, &rdev->flags))
 				goto failed;
 
-			addr = r10b->devs[slot].addr + idx * PAGE_SIZE;
 			atomic_inc(&rdev->nr_pending);
-			success = sync_page_io(rdev,
-					       addr,
-					       s << 9,
-					       pages[idx],
-					       REQ_OP_READ, false);
+			success = sync_folio_io(rdev,
+						r10b->devs[slot].addr +
+						sect,
+						s << 9,
+						sect << 9,
+						folio,
+						REQ_OP_READ, false);
 			rdev_dec_pending(rdev, mddev);
 			if (success)
 				break;
@@ -4971,7 +4967,7 @@ static int handle_reshape_read_error(struct mddev *mddev,
 			return -EIO;
 		}
 		sectors -= s;
-		idx++;
+		sect += s;
 	}
 	kfree(r10b);
 	return 0;
-- 
2.39.2

From nobody Sun Apr 19 15:59:41 2026
From: linan666@huaweicloud.com
To: song@kernel.org, yukuai@fnnas.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
	linan666@huaweicloud.com, yangerkun@huawei.com, yi.zhang@huawei.com
Subject: [PATCH v3 7/8] md/raid1: fix IO error at logical block size granularity
Date: Thu, 16 Apr 2026 11:38:00 +0800
Message-Id: <20260416033801.786415-8-linan666@huaweicloud.com>
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
References: <20260416033801.786415-1-linan666@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Li Nan

RAID1 currently fixes IO errors at PAGE_SIZE granularity. Fixing at a
smaller granularity can handle more errors, and RAID will support
logical block sizes larger than PAGE_SIZE in the future, where
PAGE_SIZE IO will fail. Switch the IO error fix granularity to the
logical block size.

Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
---
 drivers/md/raid1.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 724fd4f2cc3a..de8c964ca11d 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2116,7 +2116,7 @@ static int fix_sync_read_error(struct r1bio *r1_bio)
 {
 	/* Try some synchronous reads of other devices to get
 	 * good data, much like with normal read errors. Only
-	 * read into the pages we already have so we don't
+	 * read into the block we already have so we don't
 	 * need to re-issue the read request.
 	 * We don't need to freeze the array, because being in an
 	 * active sync request, there is no normal IO, and
@@ -2147,13 +2147,11 @@ static int fix_sync_read_error(struct r1bio *r1_bio)
 	}
 
 	while(sectors) {
-		int s = sectors;
+		int s = min_t(int, sectors, mddev->logical_block_size >> 9);
 		int d = r1_bio->read_disk;
 		int success = 0;
 		int start;
 
-		if (s > (PAGE_SIZE>>9))
-			s = PAGE_SIZE >> 9;
 		do {
 			if (r1_bio->bios[d]->bi_end_io == end_sync_read) {
 				/* No rcu protection needed here devices
@@ -2192,7 +2190,7 @@ static int fix_sync_read_error(struct r1bio *r1_bio)
 		if (abort)
 			return 0;
 
-		/* Try next page */
+		/* Try next block */
 		sectors -= s;
 		sect += s;
 		off += s << 9;
@@ -2390,14 +2388,11 @@ static void fix_read_error(struct r1conf *conf, struct r1bio *r1_bio)
 	}
 
 	while(sectors) {
-		int s = sectors;
+		int s = min_t(int, sectors, mddev->logical_block_size >> 9);
 		int d = read_disk;
 		int success = 0;
 		int start;
 
-		if (s > (PAGE_SIZE>>9))
-			s = PAGE_SIZE >> 9;
-
 		do {
 			rdev = conf->mirrors[d].rdev;
 			if (rdev &&
-- 
2.39.2

From nobody Sun Apr 19 15:59:41 2026
From: linan666@huaweicloud.com
To: song@kernel.org, yukuai@fnnas.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
	linan666@huaweicloud.com, yangerkun@huawei.com, yi.zhang@huawei.com
Subject: [PATCH v3 8/8] md/raid10: fix IO error at logical block size granularity
Date: Thu, 16 Apr 2026 11:38:01 +0800
Message-Id: <20260416033801.786415-9-linan666@huaweicloud.com>
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
References: <20260416033801.786415-1-linan666@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Li Nan

RAID10 currently fixes IO errors at PAGE_SIZE granularity. Fixing at a
smaller granularity can handle more errors, and RAID will support
logical block sizes larger than PAGE_SIZE in the future, where
PAGE_SIZE IO will fail. Switch the IO error fix granularity to the
logical block size.

Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
---
 drivers/md/raid10.c | 17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 3638e00fe420..5b4ffd23211a 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2454,7 +2454,7 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
 static void fix_recovery_read_error(struct r10bio *r10_bio)
 {
 	/* We got a read error during recovery.
-	 * We repeat the read in smaller page-sized sections.
+	 * We repeat the read in smaller, logical-block-sized sections.
 	 * If a read succeeds, write it to the new device or record
 	 * a bad block if we cannot.
 	 * If a read fails, record a bad block on both old and
@@ -2470,14 +2470,11 @@ static void fix_recovery_read_error(struct r10bio *r10_bio)
 	struct folio *folio = get_resync_folio(bio)->folio;
 
 	while (sectors) {
-		int s = sectors;
+		int s = min_t(int, sectors, mddev->logical_block_size >> 9);
 		struct md_rdev *rdev;
 		sector_t addr;
 		int ok;
 
-		if (s > (PAGE_SIZE>>9))
-			s = PAGE_SIZE >> 9;
-
 		rdev = conf->mirrors[dr].rdev;
 		addr = r10_bio->devs[0].addr + sect;
 		ok = sync_folio_io(rdev,
@@ -2621,14 +2618,11 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10bio *r10_bio)
 	}
 
 	while(sectors) {
-		int s = sectors;
+		int s = min_t(int, sectors, mddev->logical_block_size >> 9);
 		int sl = slot;
 		int success = 0;
 		int start;
 
-		if (s > (PAGE_SIZE>>9))
-			s = PAGE_SIZE >> 9;
-
 		do {
 			d = r10_bio->devs[sl].devnum;
 			rdev = conf->mirrors[d].rdev;
@@ -4926,13 +4920,10 @@ static int handle_reshape_read_error(struct mddev *mddev,
 	__raid10_find_phys(&conf->prev, r10b);
 
 	while (sectors) {
-		int s = sectors;
+		int s = min_t(int, sectors, mddev->logical_block_size >> 9);
 		int success = 0;
 		int first_slot = slot;
 
-		if (s > (PAGE_SIZE >> 9))
-			s = PAGE_SIZE >> 9;
-
 		while (!success) {
 			int d = r10b->devs[slot].devnum;
 			struct md_rdev *rdev = conf->mirrors[d].rdev;
-- 
2.39.2