From: Yin Fengwei
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    akpm@linux-foundation.org, yuzhao@google.com, willy@infradead.org,
    hughd@google.com, yosryahmed@google.com, ryan.roberts@arm.com,
    david@redhat.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v2 1/3] mm: add functions folio_in_range() and folio_within_vma()
Date: Wed, 9 Aug 2023 14:11:03 +0800
Message-Id: <20230809061105.3369958-2-fengwei.yin@intel.com>
In-Reply-To: <20230809061105.3369958-1-fengwei.yin@intel.com>
References: <20230809061105.3369958-1-fengwei.yin@intel.com>

folio_in_range() checks whether a folio is mapped to a specific VMA and
whether the folio's mapping address falls within the given range. A
helper, folio_within_vma(), builds on folio_in_range() to check whether
a folio lies entirely within a VMA.
Signed-off-by: Yin Fengwei
---
 mm/internal.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/mm/internal.h b/mm/internal.h
index 154da4f0d557..5d1b71010fd2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -585,6 +585,41 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
 				   bool write, int *locked);
 extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
 			       unsigned long bytes);
+
+static inline bool
+folio_in_range(struct folio *folio, struct vm_area_struct *vma,
+		unsigned long start, unsigned long end)
+{
+	pgoff_t pgoff, addr;
+	unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+
+	VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio);
+	if (start > end)
+		return false;
+
+	if (start < vma->vm_start)
+		start = vma->vm_start;
+
+	if (end > vma->vm_end)
+		end = vma->vm_end;
+
+	pgoff = folio_pgoff(folio);
+
+	/* if folio start address is not in vma range */
+	if (!in_range(pgoff, vma->vm_pgoff, vma_pglen))
+		return false;
+
+	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+
+	return !(addr < start || end - addr < folio_size(folio));
+}
+
+static inline bool
+folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
+{
+	return folio_in_range(folio, vma, vma->vm_start, vma->vm_end);
+}
+
 /*
  * mlock_vma_folio() and munlock_vma_folio():
  * should be called with vma's mmap_lock held for read or write,
-- 
2.39.2