From: David Stevens
To: linux-mm@kvack.org
Cc: Andrew Morton, "Matthew Wilcox (Oracle)", Suleiman Souhlal,
	linux-kernel@vger.kernel.org, David Stevens, stable@vger.kernel.org
Subject: [PATCH v2] mm/shmem: Fix race in shmem_undo_range w/THP
Date: Tue, 18 Apr 2023 17:40:31 +0900
Message-ID: <20230418084031.3439795-1-stevensd@google.com>

From: David Stevens

Split folios during the second loop of shmem_undo_range. It's not
sufficient to only split folios when dealing with partial pages, since
it's possible for a THP to be faulted in after that point. Calling
truncate_inode_folio in that situation can result in throwing away data
outside of the range being targeted.
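The problematic interleaving is roughly the following (an illustrative
timeline inferred from the description above, not taken from the patch
itself):

  truncating task                       faulting task
  ---------------                       -------------
  shmem_undo_range()
    first loop: splits any THPs that
    straddle lstart or lend
                                        page fault maps a fresh THP
                                        that spans lstart or lend
    second loop: finds the new THP
      truncate_inode_folio()
        -> frees the whole THP,
           including data outside
           [lstart, lend]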
Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
Cc: stable@vger.kernel.org
Signed-off-by: David Stevens
---
v1 -> v2:
 - Actually drop pages after splitting a THP

 mm/shmem.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 9218c955f482..226c94a257b1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1033,7 +1033,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				}
 				VM_BUG_ON_FOLIO(folio_test_writeback(folio),
 						folio);
-				truncate_inode_folio(mapping, folio);
+
+				if (!folio_test_large(folio)) {
+					truncate_inode_folio(mapping, folio);
+				} else if (truncate_inode_partial_folio(folio, lstart, lend)) {
+					/*
+					 * If we split a page, reset the loop so that we
+					 * pick up the new sub pages. Otherwise the THP
+					 * was entirely dropped or the target range was
+					 * zeroed, so just continue the loop as is.
+					 */
+					if (!folio_test_large(folio)) {
+						folio_unlock(folio);
+						index = start;
+						break;
+					}
+				}
 			}
 			folio_unlock(folio);
 		}
-- 
2.40.0.634.g4ca3ef3211-goog