From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 08/24] mm/swap: check readahead policy per entry
Date: Mon, 20 Nov 2023 03:47:24 +0800
Message-ID: <20231119194740.94101-9-ryncsn@gmail.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>
Reply-To: Kairui Song
From: Kairui Song

Currently, VMA readahead is globally disabled when any rotating disk is
used as a swap backend. So when multiple swap devices are enabled, if a
slower hard disk is set as a low-priority fallback and a high-performance
SSD is used as the high-priority swap device, VMA readahead is still
disabled globally, and the SSD swap device's performance drops by a lot.

Check the readahead policy per entry to avoid this problem.

Signed-off-by: Kairui Song
---
 mm/swap_state.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index ff6756f2e8e4..fb78f7f18ed7 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -321,9 +321,9 @@ static inline bool swap_use_no_readahead(struct swap_info_struct *si, swp_entry_
 	return data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1;
 }
 
-static inline bool swap_use_vma_readahead(void)
+static inline bool swap_use_vma_readahead(struct swap_info_struct *si)
 {
-	return READ_ONCE(enable_vma_readahead) && !atomic_read(&nr_rotate_swap);
+	return data_race(si->flags & SWP_SOLIDSTATE) && READ_ONCE(enable_vma_readahead);
 }
 
 /*
@@ -341,7 +341,7 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
 
 	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
 	if (!IS_ERR(folio)) {
-		bool vma_ra = swap_use_vma_readahead();
+		bool vma_ra = swap_use_vma_readahead(swp_swap_info(entry));
 		bool readahead;
 
 		/*
@@ -920,16 +920,18 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
 struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 			      struct vm_fault *vmf, bool *swapcached)
 {
+	struct swap_info_struct *si;
 	struct mempolicy *mpol;
 	struct page *page;
 	pgoff_t ilx;
 	bool cached;
 
+	si = swp_swap_info(entry);
 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-	if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
+	if (swap_use_no_readahead(si, entry)) {
 		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
 		cached = false;
-	} else if (swap_use_vma_readahead()) {
+	} else if (swap_use_vma_readahead(si)) {
 		page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
 		cached = true;
 	} else {
-- 
2.42.0
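
For readers following along outside the kernel tree, a minimal userspace
sketch of the policy change above: the old path disables VMA readahead for
every device once any rotating disk is in the swap mix, while the new path
decides per device (per swap entry in the actual patch) from that device's
own flags. The struct and flag names below only mimic the kernel's
swap_info_struct / SWP_SOLIDSTATE and are assumptions of this simplified
model, not the real kernel API.

	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified stand-in for the SSD bit in swap_info_struct->flags. */
	#define SWP_SOLIDSTATE	0x1

	struct swap_device {
		const char *name;
		unsigned int flags;	/* SWP_SOLIDSTATE set for SSDs */
	};

	/* Old behaviour: one rotating device disables VMA readahead globally. */
	static bool vma_readahead_global(const struct swap_device *devs, int n)
	{
		for (int i = 0; i < n; i++)
			if (!(devs[i].flags & SWP_SOLIDSTATE))
				return false;
		return true;
	}

	/* New behaviour: decide per device, as the patch does per entry. */
	static bool vma_readahead_per_device(const struct swap_device *dev)
	{
		return dev->flags & SWP_SOLIDSTATE;
	}

	int main(void)
	{
		struct swap_device devs[] = {
			{ "fast SSD (high prio)", SWP_SOLIDSTATE },
			{ "slow HDD (fallback)",  0 },
		};
		int n = 2;

		printf("global policy: %s\n",
		       vma_readahead_global(devs, n) ? "vma readahead" : "disabled");
		for (int i = 0; i < n; i++)
			printf("per-device, %s: %s\n", devs[i].name,
			       vma_readahead_per_device(&devs[i]) ?
			       "vma readahead" : "disabled");
		return 0;
	}

With one SSD and one HDD configured, the global policy reports "disabled"
for everything, while the per-device check still enables VMA readahead for
the SSD, which is the behavioural difference the patch targets.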