From nobody Fri Dec 19 00:03:22 2025
From: Kemeng Shi
To: hughd@google.com, baolin.wang@linux.alibaba.com, willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 1/7] mm: shmem: correctly pass alloced parameter to shmem_recalc_inode() to avoid WARN_ON()
Date: Fri, 6 Jun 2025 06:10:31 +0800
Message-Id: <20250605221037.7872-2-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20250605221037.7872-1-shikemeng@huaweicloud.com>
References: <20250605221037.7872-1-shikemeng@huaweicloud.com>

As noted in the comments, we need to release the block usage for a swap
entry which was replaced with a poisoned swap entry. However, calling
shmem_recalc_inode(inode, -nr_pages, -nr_pages) does not actually free
any block usage. Calling shmem_recalc_inode(inode, 0, -nr_pages) instead
correctly releases the block usage.

Fixes: 6cec2b95dadf7 ("mm/shmem: fix infinite loop when swap in shmem error at swapoff time")
Signed-off-by: Kemeng Shi
---
 mm/shmem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4b42419ce6b2..e27d19867e03 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2145,7 +2145,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 	 * won't be 0 when inode is released and thus trigger WARN_ON(i_blocks)
 	 * in shmem_evict_inode().
 	 */
-	shmem_recalc_inode(inode, -nr_pages, -nr_pages);
+	shmem_recalc_inode(inode, 0, -nr_pages);
 	swap_free_nr(swap, nr_pages);
 }
 
-- 
2.30.0
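
For reference, the accounting these two calls feed can be modeled outside
the kernel. The sketch below is a simplified userspace model, not kernel
code: struct shmem_acct and the plain i_blocks counter are stand-ins for
the shmem inode state and shmem_inode_unacct_blocks(), and locking is
omitted. It reproduces the freed computation of shmem_recalc_inode() (the
function is visible in patch 4 of this series) and shows why (0, -nr_pages)
releases the blocks while (-nr_pages, -nr_pages) does not:

#include <assert.h>

/*
 * Userspace model of the accounting in shmem_recalc_inode(). The
 * i_blocks field stands in for shmem_inode_unacct_blocks(); locking
 * is omitted. All values are in pages.
 */
struct shmem_acct {
	long alloced;	/* pages charged to the inode */
	long swapped;	/* pages currently out on swap */
	long nrpages;	/* pages resident in the page cache */
	long i_blocks;	/* block usage actually accounted */
};

static void recalc(struct shmem_acct *a, long alloced, long swapped)
{
	long freed;

	a->alloced += alloced;
	a->swapped += swapped;
	freed = a->alloced - a->swapped - a->nrpages;
	if (freed > 0) {
		a->alloced -= freed;
		a->i_blocks -= freed;	/* the actual release of block usage */
	}
}

int main(void)
{
	/* A 4-page swap entry just replaced by a poisoned entry. */
	struct shmem_acct bad = { 4, 4, 0, 4 }, good = bad;

	recalc(&bad, -4, -4);		/* freed = 0 - 0 - 0 = 0: nothing released */
	assert(bad.i_blocks == 4);	/* leftover blocks: WARN_ON(i_blocks) at evict */

	recalc(&good, 0, -4);		/* freed = 4 - 0 - 0 = 4: blocks released */
	assert(good.i_blocks == 0);
	return 0;
}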

From nobody Fri Dec 19 00:03:22 2025
From: Kemeng Shi
To: hughd@google.com, baolin.wang@linux.alibaba.com, willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 2/7] mm: shmem: avoid setting error on split entries in shmem_set_folio_swapin_error()
Date: Fri, 6 Jun 2025 06:10:32 +0800
Message-Id: <20250605221037.7872-3-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20250605221037.7872-1-shikemeng@huaweicloud.com>
References: <20250605221037.7872-1-shikemeng@huaweicloud.com>

When a large entry is split, the first entry split off retains the same
entry value and index as the original large entry, but its order is
reduced. In shmem_set_folio_swapin_error(), if the large entry is split
before xa_cmpxchg_irq(), we may replace the first split entry with the
error entry while using the size of the original large entry for the
release operations. This can lead to a WARN_ON(i_blocks) due to an
incorrect nr_pages used by shmem_recalc_inode(), and to a use-after-free
due to an incorrect nr_pages used by swap_free_nr().

Skip setting the error if entry splitting is detected, to fix the issue.
The bad entry will be replaced with an error entry anyway, as we will
still get an IO error the next time we swap in the bad entry.
Fixes: 12885cbe88ddf ("mm: shmem: split large entry if the swapin folio is not large")
Signed-off-by: Kemeng Shi
---
 mm/shmem.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e27d19867e03..f1062910a4de 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2127,16 +2127,25 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	swp_entry_t swapin_error;
 	void *old;
-	int nr_pages;
+	int nr_pages = folio_nr_pages(folio);
+	int order;
 
 	swapin_error = make_poisoned_swp_entry();
-	old = xa_cmpxchg_irq(&mapping->i_pages, index,
-			     swp_to_radix_entry(swap),
-			     swp_to_radix_entry(swapin_error), 0);
-	if (old != swp_to_radix_entry(swap))
+	xa_lock_irq(&mapping->i_pages);
+	order = xa_get_order(&mapping->i_pages, index);
+	if (nr_pages != (1 << order)) {
+		xa_unlock_irq(&mapping->i_pages);
 		return;
+	}
+	old = __xa_cmpxchg(&mapping->i_pages, index,
+			   swp_to_radix_entry(swap),
+			   swp_to_radix_entry(swapin_error), 0);
+	if (old != swp_to_radix_entry(swap)) {
+		xa_unlock_irq(&mapping->i_pages);
+		return;
+	}
+	xa_unlock_irq(&mapping->i_pages);
 
-	nr_pages = folio_nr_pages(folio);
 	folio_wait_writeback(folio);
 	if (!skip_swapcache)
 		delete_from_swap_cache(folio);
-- 
2.30.0
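
The race closed here can be illustrated with a toy userspace model.
Nothing below is kernel code: struct entry, pages_to_release() and the
hard-coded orders are hypothetical stand-ins for the XArray entry, the
release path and xa_get_order(). The point is only that an entry which
keeps its value and index across a split must have its order re-checked
under the lock before its size is trusted:

#include <assert.h>

/*
 * Toy stand-in for a swap entry in the mapping: after a split, the
 * first piece keeps the same value and index, only the order shrinks.
 */
struct entry {
	unsigned long value;
	int order;
};

/*
 * Stand-in for the fixed path: re-check the current order (as the patch
 * does with xa_get_order() under xa_lock_irq()) before using the size.
 */
static long pages_to_release(const struct entry *e, int order_at_lookup)
{
	if (e->order != order_at_lookup)
		return 0;	/* split detected: skip, handle on next swapin */
	return 1L << e->order;
}

int main(void)
{
	struct entry e = { .value = 0xabcUL, .order = 4 };	/* 16 pages */
	int stale = e.order;

	e.order = 0;	/* concurrent split: value and index unchanged */

	/*
	 * Without the re-check, 16 swap slots would be freed for what is
	 * now a 1-page entry: the WARN_ON/use-after-free in the changelog.
	 */
	assert(pages_to_release(&e, stale) == 0);
	return 0;
}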

From nobody Fri Dec 19 00:03:22 2025
From: Kemeng Shi
To: hughd@google.com, baolin.wang@linux.alibaba.com, willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 3/7] mm: shmem: avoid missing entries in shmem_undo_range() when entries are split concurrently
Date: Fri, 6 Jun 2025 06:10:33 +0800
Message-Id: <20250605221037.7872-4-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20250605221037.7872-1-shikemeng@huaweicloud.com>
References: <20250605221037.7872-1-shikemeng@huaweicloud.com>

If a large swap entry or large folio entry returned from
find_get_entries() is split before it is truncated, only the first split
entry is truncated, leaving the remaining split entries un-truncated.

To address this, introduce a new helper, shmem_find_get_entries(), which
is similar to find_get_entries() but also returns the order of the found
entries. We can then detect entry splitting after the initial search by
comparing the current entry order with the order returned from
shmem_find_get_entries(), and retry finding entries if a split is
detected.

The large swap entry race was introduced in 12885cbe88dd ("mm: shmem:
split large entry if the swapin folio is not large"). The large folio
race appears to be a long-standing issue, possibly related to the
conversion to xarray, the conversion to folios and other changes; as a
result, it is hard to track down the specific commit that directly
introduced it.
Signed-off-by: Kemeng Shi
---
 mm/filemap.c  |  2 +-
 mm/internal.h |  2 ++
 mm/shmem.c    | 81 ++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 70 insertions(+), 15 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 7b90cbeb4a1a..672844b94d3a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2015,7 +2015,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 }
 EXPORT_SYMBOL(__filemap_get_folio);
 
-static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
+struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
 		xa_mark_t mark)
 {
 	struct folio *folio;
diff --git a/mm/internal.h b/mm/internal.h
index 6b8ed2017743..9573b3a9e8c0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -446,6 +446,8 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 	force_page_cache_ra(&ractl, nr_to_read);
 }
 
+struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
+		xa_mark_t mark);
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
diff --git a/mm/shmem.c b/mm/shmem.c
index f1062910a4de..2349673b239b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -949,18 +949,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
  * the number of pages being freed. 0 means entry not found in XArray (0 pages
  * being freed).
  */
-static long shmem_free_swap(struct address_space *mapping,
-			    pgoff_t index, void *radswap)
+static long shmem_free_swap(struct address_space *mapping, pgoff_t index,
+			    int order, void *radswap)
 {
-	int order = xa_get_order(&mapping->i_pages, index);
+	int old_order;
 	void *old;
 
-	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
-	if (old != radswap)
+	xa_lock_irq(&mapping->i_pages);
+	old_order = xa_get_order(&mapping->i_pages, index);
+	/* free swap anyway if input order is -1 */
+	if (order != -1 && old_order != order) {
+		xa_unlock_irq(&mapping->i_pages);
+		return 0;
+	}
+
+	old = __xa_cmpxchg(&mapping->i_pages, index, radswap, NULL, 0);
+	if (old != radswap) {
+		xa_unlock_irq(&mapping->i_pages);
 		return 0;
-	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
+	}
+	xa_unlock_irq(&mapping->i_pages);
 
-	return 1 << order;
+	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << old_order);
+	return 1 << old_order;
 }
 
 /*
@@ -1077,6 +1088,39 @@ static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
 	return folio;
 }
 
+/*
+ * Similar to find_get_entries(), but will return order of found entries
+ */
+static unsigned shmem_find_get_entries(struct address_space *mapping,
+		pgoff_t *start, pgoff_t end, struct folio_batch *fbatch,
+		pgoff_t *indices, int *orders)
+{
+	XA_STATE(xas, &mapping->i_pages, *start);
+	struct folio *folio;
+
+	rcu_read_lock();
+	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
+		indices[fbatch->nr] = xas.xa_index;
+		if (!xa_is_value(folio))
+			orders[fbatch->nr] = folio_order(folio);
+		else
+			orders[fbatch->nr] = xas_get_order(&xas);
+		if (!folio_batch_add(fbatch, folio))
+			break;
+	}
+
+	if (folio_batch_count(fbatch)) {
+		unsigned long nr;
+		int idx = folio_batch_count(fbatch) - 1;
+
+		nr = 1 << orders[idx];
+		*start = round_down(indices[idx] + nr, nr);
+	}
+	rcu_read_unlock();
+
+	return folio_batch_count(fbatch);
+}
+
 /*
  * Remove range of pages and swap entries from page cache, and free them.
  * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
  */
@@ -1090,6 +1134,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	pgoff_t end = (lend + 1) >> PAGE_SHIFT;
 	struct folio_batch fbatch;
 	pgoff_t indices[PAGEVEC_SIZE];
+	int orders[PAGEVEC_SIZE];
 	struct folio *folio;
 	bool same_folio;
 	long nr_swaps_freed = 0;
@@ -1113,7 +1158,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if (unfalloc)
 				continue;
 			nr_swaps_freed += shmem_free_swap(mapping,
-						indices[i], folio);
+						indices[i], -1, folio);
 			continue;
 		}
 
@@ -1166,8 +1211,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end) {
 		cond_resched();
 
-		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
-				indices)) {
+		if (!shmem_find_get_entries(mapping, &index, end - 1, &fbatch,
+				indices, orders)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
 				break;
@@ -1183,9 +1228,13 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 
 			if (unfalloc)
 				continue;
-			swaps_freed = shmem_free_swap(mapping, indices[i], folio);
+			swaps_freed = shmem_free_swap(mapping,
+					indices[i], orders[i], folio);
+			/*
+			 * Swap was replaced by page or was
+			 * split: retry
+			 */
 			if (!swaps_freed) {
-				/* Swap was replaced by page: retry */
 				index = indices[i];
 				break;
 			}
@@ -1196,8 +1245,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			folio_lock(folio);
 
 			if (!unfalloc || !folio_test_uptodate(folio)) {
-				if (folio_mapping(folio) != mapping) {
-					/* Page was replaced by swap: retry */
+				/*
+				 * Page was replaced by swap or was
+				 * split: retry
+				 */
+				if (folio_mapping(folio) != mapping ||
+				    folio_order(folio) != orders[i]) {
 					folio_unlock(folio);
 					index = indices[i];
 					break;
-- 
2.30.0
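
The detect-and-retry pattern this patch adds to shmem_undo_range() can be
sketched the same way. Everything below is a hypothetical userspace
stand-in: the order[] array plays the role of the XArray state and
remove_entry() the role of shmem_free_swap() with its new order argument.
The idea is to record the order at lookup time, refuse the removal on a
mismatch, and retry from the same index with the freshly read order:

#include <stdio.h>

#define NR_INDICES 8

static int order[NR_INDICES];	/* current order of the entry at each index */

/*
 * Stand-in for shmem_free_swap() with the new order argument: refuse to
 * free when the order changed since lookup, so the caller retries.
 */
static int remove_entry(int index, int order_at_lookup)
{
	if (order[index] != order_at_lookup)
		return 0;
	return 1 << order[index];
}

int main(void)
{
	int removed, seen;

	order[0] = 2;		/* one 4-page entry at index 0 */
	seen = order[0];	/* lookup records order 2 */
	order[0] = 0;		/* concurrent split before the removal */

	/*
	 * Retry loop: on mismatch, re-read the order and try again, so the
	 * pieces left behind by the split are not silently skipped.
	 */
	while (!(removed = remove_entry(0, seen)))
		seen = order[0];

	printf("removed %d page(s)\n", removed);	/* 1, not a stale 4 */
	return 0;
}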

From nobody Fri Dec 19 00:03:22 2025
From: Kemeng Shi
To: hughd@google.com, baolin.wang@linux.alibaba.com, willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 4/7] mm: shmem: handle special case of shmem_recalc_inode() in its caller
Date: Fri, 6 Jun 2025 06:10:34 +0800
Message-Id: <20250605221037.7872-5-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20250605221037.7872-1-shikemeng@huaweicloud.com>
References: <20250605221037.7872-1-shikemeng@huaweicloud.com>

The special case in shmem_recalc_inode() is tailored to
shmem_writepage(). By raising swapped before nrpages is lowered directly
within shmem_writepage(), we can simplify shmem_recalc_inode() and
eliminate the need to execute the special-case code for all callers of
shmem_recalc_inode().

Signed-off-by: Kemeng Shi
---
 mm/shmem.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 2349673b239b..9f5e1eccaacb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -443,15 +443,6 @@ static void shmem_recalc_inode(struct inode *inode, long alloced, long swapped)
 	info->swapped += swapped;
 	freed = info->alloced - info->swapped -
 		READ_ONCE(inode->i_mapping->nrpages);
-	/*
-	 * Special case: whereas normally shmem_recalc_inode() is called
-	 * after i_mapping->nrpages has already been adjusted (up or down),
-	 * shmem_writepage() has to raise swapped before nrpages is lowered -
-	 * to stop a racing shmem_recalc_inode() from thinking that a page has
-	 * been freed. Compensate here, to avoid the need for a followup call.
-	 */
-	if (swapped > 0)
-		freed += swapped;
 	if (freed > 0)
 		info->alloced -= freed;
 	spin_unlock(&info->lock);
@@ -1694,9 +1685,16 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		list_add(&info->swaplist, &shmem_swaplist);
 
 	if (!folio_alloc_swap(folio, __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN)) {
-		shmem_recalc_inode(inode, 0, nr_pages);
+		/*
+		 * Raise swapped before nrpages is lowered to stop racing
+		 * shmem_recalc_inode() from thinking that a page has been freed.
+		 */
+		spin_lock(&info->lock);
+		info->swapped += nr_pages;
+		spin_unlock(&info->lock);
 		swap_shmem_alloc(folio->swap, nr_pages);
 		shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
+		shmem_recalc_inode(inode, 0, 0);
 
 		mutex_unlock(&shmem_swaplist_mutex);
 		BUG_ON(folio_mapped(folio));
-- 
2.30.0
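
Why swapped must be raised before nrpages drops can be checked with the
same userspace model as in the note after patch 1. Again this is a
simplified stand-in, not kernel code, with the deltas applied directly by
the caller for brevity:

#include <assert.h>

struct shmem_acct { long alloced, swapped, nrpages, i_blocks; };

/* Same simplified shmem_recalc_inode() arithmetic as before. */
static void recalc(struct shmem_acct *a)
{
	long freed = a->alloced - a->swapped - a->nrpages;

	if (freed > 0) {
		a->alloced -= freed;
		a->i_blocks -= freed;
	}
}

int main(void)
{
	/* One resident page that shmem_writepage() is pushing to swap. */
	struct shmem_acct a = { 1, 0, 1, 1 }, b;

	/* Wrong order: the page leaves the cache before swapped is raised... */
	b = a;
	b.nrpages = 0;			/* shmem_delete_from_page_cache() */
	recalc(&b);			/* ...and a racing recalc sees freed = 1 */
	assert(b.i_blocks == 0);	/* blocks released while the swap slot lives */

	/* Patched order: swapped is raised first, so the race is harmless. */
	a.swapped = 1;
	a.nrpages = 0;
	recalc(&a);			/* freed = 1 - 1 - 0 = 0 */
	assert(a.i_blocks == 1);
	return 0;
}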

From nobody Fri Dec 19 00:03:22 2025
From: Kemeng Shi
To: hughd@google.com, baolin.wang@linux.alibaba.com, willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 5/7] mm: shmem: wrap additional shmem quota related code with CONFIG_TMPFS_QUOTA
Date: Fri, 6 Jun 2025 06:10:35 +0800
Message-Id: <20250605221037.7872-6-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20250605221037.7872-1-shikemeng@huaweicloud.com>
References: <20250605221037.7872-1-shikemeng@huaweicloud.com>

Some code paths and structure members are unreachable when
CONFIG_TMPFS_QUOTA is off. Wrap the additional shmem quota related code
with CONFIG_TMPFS_QUOTA to eliminate these unreachable sections.

Signed-off-by: Kemeng Shi
---
 include/linux/shmem_fs.h |  4 ++++
 mm/shmem.c               | 10 +++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 0b273a7b9f01..4873359a5442 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -47,12 +47,14 @@ struct shmem_inode_info {
 	(FS_IMMUTABLE_FL | FS_APPEND_FL | FS_NODUMP_FL | FS_NOATIME_FL | FS_CASEFOLD_FL)
 #define SHMEM_FL_INHERITED (FS_NODUMP_FL | FS_NOATIME_FL | FS_CASEFOLD_FL)
 
+#ifdef CONFIG_TMPFS_QUOTA
 struct shmem_quota_limits {
 	qsize_t usrquota_bhardlimit;	/* Default user quota block hard limit */
 	qsize_t usrquota_ihardlimit;	/* Default user quota inode hard limit */
 	qsize_t grpquota_bhardlimit;	/* Default group quota block hard limit */
 	qsize_t grpquota_ihardlimit;	/* Default group quota inode hard limit */
 };
+#endif
 
 struct shmem_sb_info {
 	unsigned long max_blocks;	/* How many blocks are allowed */
@@ -72,7 +74,9 @@ struct shmem_sb_info {
 	spinlock_t shrinklist_lock;	/* Protects shrinklist */
 	struct list_head shrinklist;	/* List of shinkable inodes */
 	unsigned long shrinklist_len;	/* Length of shrinklist */
+#ifdef CONFIG_TMPFS_QUOTA
 	struct shmem_quota_limits qlimits; /* Default quota limits */
+#endif
 };
 
 static inline struct shmem_inode_info *SHMEM_I(struct inode *inode)
diff --git a/mm/shmem.c b/mm/shmem.c
index 9f5e1eccaacb..e3e05bbb6db2 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -121,8 +121,10 @@ struct shmem_options {
 	int huge;
 	int seen;
 	bool noswap;
+#ifdef CONFIG_TMPFS_QUOTA
 	unsigned short quota_types;
 	struct shmem_quota_limits qlimits;
+#endif
 #if IS_ENABLED(CONFIG_UNICODE)
 	struct unicode_map *encoding;
 	bool strict_encoding;
@@ -132,7 +134,9 @@ struct shmem_options {
 #define SHMEM_SEEN_HUGE 4
 #define SHMEM_SEEN_INUMS 8
 #define SHMEM_SEEN_NOSWAP 16
+#ifdef CONFIG_TMPFS_QUOTA
 #define SHMEM_SEEN_QUOTA 32
+#endif
 };
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -4549,6 +4553,7 @@ enum shmem_param {
 	Opt_inode32,
 	Opt_inode64,
 	Opt_noswap,
+#ifdef CONFIG_TMPFS_QUOTA
 	Opt_quota,
 	Opt_usrquota,
 	Opt_grpquota,
@@ -4556,6 +4561,7 @@ enum shmem_param {
 	Opt_usrquota_inode_hardlimit,
 	Opt_grpquota_block_hardlimit,
 	Opt_grpquota_inode_hardlimit,
+#endif
 	Opt_casefold_version,
 	Opt_casefold,
 	Opt_strict_encoding,
@@ -4742,6 +4748,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
 		ctx->noswap = true;
 		ctx->seen |= SHMEM_SEEN_NOSWAP;
 		break;
+#ifdef CONFIG_TMPFS_QUOTA
 	case Opt_quota:
 		if (fc->user_ns != &init_user_ns)
 			return invalfc(fc, "Quotas in unprivileged tmpfs mounts are unsupported");
@@ -4796,6 +4803,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
 				       "Group quota inode hardlimit too large.");
 		ctx->qlimits.grpquota_ihardlimit = size;
 		break;
+#endif
 	case Opt_casefold_version:
 		return shmem_parse_opt_casefold(fc, param, false);
 	case Opt_casefold:
@@ -4899,13 +4907,13 @@ static int shmem_reconfigure(struct fs_context *fc)
 		goto out;
 	}
 
+#ifdef CONFIG_TMPFS_QUOTA
 	if (ctx->seen & SHMEM_SEEN_QUOTA &&
 	    !sb_any_quota_loaded(fc->root->d_sb)) {
 		err = "Cannot enable quota on remount";
 		goto out;
 	}
 
-#ifdef CONFIG_TMPFS_QUOTA
 #define CHANGED_LIMIT(name)						\
 	(ctx->qlimits.name## hardlimit &&				\
 	 (ctx->qlimits.name## hardlimit != sbinfo->qlimits.name## hardlimit))
-- 
2.30.0

From nobody Fri Dec 19 00:03:22 2025
From: Kemeng Shi
To: hughd@google.com, baolin.wang@linux.alibaba.com, willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 6/7] mm: shmem: simplify error flow in thpsize_shmem_enabled_store()
Date: Fri, 6 Jun 2025 06:10:36 +0800
Message-Id: <20250605221037.7872-7-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20250605221037.7872-1-shikemeng@huaweicloud.com>
References: <20250605221037.7872-1-shikemeng@huaweicloud.com>

Simplify the error flow in thpsize_shmem_enabled_store(): return early
on invalid input and on start_stop_khugepaged() failure, instead of
carrying the result through ret.

Signed-off-by: Kemeng Shi
---
 mm/shmem.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e3e05bbb6db2..c6ea45d542d2 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -5601,7 +5601,7 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
 					   const char *buf, size_t count)
 {
 	int order = to_thpsize(kobj)->order;
-	ssize_t ret = count;
+	int err;
 
 	if (sysfs_streq(buf, "always")) {
 		spin_lock(&huge_shmem_orders_lock);
@@ -5644,16 +5644,14 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
 		clear_bit(order, &huge_shmem_orders_madvise);
 		spin_unlock(&huge_shmem_orders_lock);
 	} else {
-		ret = -EINVAL;
+		return -EINVAL;
 	}
 
-	if (ret > 0) {
-		int err = start_stop_khugepaged();
+	err = start_stop_khugepaged();
+	if (err)
+		return err;
 
-		if (err)
-			ret = err;
-	}
-	return ret;
+	return count;
 }
 
 struct kobj_attribute thpsize_shmem_enabled_attr =
-- 
2.30.0

From nobody Fri Dec 19 00:03:22 2025
From: Kemeng Shi
To: hughd@google.com, baolin.wang@linux.alibaba.com, willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 7/7] mm: shmem: eliminate unneeded page counting in shmem_unuse_swap_entries()
Date: Fri, 6 Jun 2025 06:10:37 +0800
Message-Id: <20250605221037.7872-8-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20250605221037.7872-1-shikemeng@huaweicloud.com>
References: <20250605221037.7872-1-shikemeng@huaweicloud.com>

The caller of shmem_unuse_swap_entries() does not use the count of pages
swapped in, so eliminate the unneeded page counting in
shmem_unuse_swap_entries().
Signed-off-by: Kemeng Shi
---
 mm/shmem.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index c6ea45d542d2..c83baabc169d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1480,14 +1480,13 @@ static unsigned int shmem_find_swap_entries(struct address_space *mapping,
 }
 
 /*
- * Move the swapped pages for an inode to page cache. Returns the count
- * of pages swapped in, or the error in case of failure.
+ * Move the swapped pages for an inode to page cache. Returns 0 on success,
+ * or an error in case of failure.
 */
 static int shmem_unuse_swap_entries(struct inode *inode,
 		struct folio_batch *fbatch, pgoff_t *indices)
 {
 	int i = 0;
-	int ret = 0;
 	int error = 0;
 	struct address_space *mapping = inode->i_mapping;
 
@@ -1499,13 +1498,11 @@ static int shmem_unuse_swap_entries(struct inode *inode,
 		if (error == 0) {
 			folio_unlock(folio);
 			folio_put(folio);
-			ret++;
 		}
 		if (error == -ENOMEM)
-			break;
-		error = 0;
+			return error;
 	}
-	return error ? error : ret;
+	return 0;
 }
 
 /*
@@ -1517,24 +1514,20 @@ static int shmem_unuse_inode(struct inode *inode, unsigned int type)
 	pgoff_t start = 0;
 	struct folio_batch fbatch;
 	pgoff_t indices[PAGEVEC_SIZE];
-	int ret = 0;
+	int ret;
 
 	do {
 		folio_batch_init(&fbatch);
 		if (!shmem_find_swap_entries(mapping, start, &fbatch,
-					     indices, type)) {
-			ret = 0;
-			break;
-		}
+					     indices, type))
			return 0;
 
 		ret = shmem_unuse_swap_entries(inode, &fbatch, indices);
 		if (ret < 0)
-			break;
+			return ret;
 
 		start = indices[folio_batch_count(&fbatch) - 1];
 	} while (true);
-
-	return ret;
 }
 
 /*
-- 
2.30.0