From nobody Thu Dec 18 07:50:39 2025
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Mike Kravetz, David Hildenbrand, Matthew Wilcox, Andrew Morton, peterx@redhat.com, Yu Zhao, Ryan Roberts, Yang Shi, Hugh Dickins, Kirill A. Shutemov
Subject: [PATCH RFC v2 1/3] mm: Add TAIL_MAPPING_REUSED_MAX
Date: Mon, 14 Aug 2023 14:44:09 -0400
Message-ID: <20230814184411.330496-2-peterx@redhat.com>
In-Reply-To: <20230814184411.330496-1-peterx@redhat.com>
References: <20230814184411.330496-1-peterx@redhat.com>

Tail pages have a sanity check on the ->mapping field, but not on all of
them: only tail pages with index > 2 are checked, because the ->mapping
field of the first two tail pages is reused for other purposes.

Define a macro for "the maximum index of tail pages whose ->mapping field
is reused" next to the folio definition, because when we grow the number
of folio tail pages we will want to bump this along with it.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm_types.h | 9 +++++++++
 mm/huge_memory.c         | 6 +++---
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 291c05cacd48..3f2b0d46f5d6 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -248,6 +248,15 @@ static inline struct page *encoded_page_ptr(struct encoded_page *page)
 	return (struct page *)(~ENCODE_PAGE_BITS & (unsigned long)page);
 }
 
+/*
+ * This macro defines the maximum number of tail pages (of a folio) that
+ * can have their page->mapping field reused (offset 12 on 32bit, 24 on 64bit).
+ *
+ * When a tail page's mapping field is reused, it is exempted from the
+ * ->mapping poisoning and checks.  Also see the macro TAIL_MAPPING.
+ */
+#define TAIL_MAPPING_REUSED_MAX  (2)
+
 /**
  * struct folio - Represents a contiguous set of bytes.
  * @flags: Identical to the page flags.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0b709d2c46c6..72f244e16dcb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2444,9 +2444,9 @@ static void __split_huge_page_tail(struct page *head, int tail,
 			 (1L << PG_dirty) |
 			 LRU_GEN_MASK | LRU_REFS_MASK));
 
-	/* ->mapping in first and second tail page is replaced by other uses */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
+	/* ->mapping in tail pages <= TAIL_MAPPING_REUSED_MAX is reused */
+	VM_BUG_ON_PAGE(tail > TAIL_MAPPING_REUSED_MAX &&
+		       page_tail->mapping != TAIL_MAPPING, page_tail);
 	page_tail->mapping = head->mapping;
 	page_tail->index = head->index + tail;
 
-- 
2.41.0
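As a side note, the ->mapping offsets quoted in the TAIL_MAPPING_REUSED_MAX
comment above (12 bytes on 32 bit, 24 bytes on 64 bit) could be
double-checked at build time with something along these lines.  This is
only an illustrative sketch, not part of the patch, and it assumes
offsetof() and static_assert() are usable at that point in mm_types.h:

	/* Sketch only, not part of the patch: verify the ->mapping offsets
	 * quoted in the TAIL_MAPPING_REUSED_MAX comment at build time. */
	#ifdef CONFIG_64BIT
	static_assert(offsetof(struct page, mapping) == 24);
	#else
	static_assert(offsetof(struct page, mapping) == 12);
	#endif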
From nobody Thu Dec 18 07:50:39 2025
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Mike Kravetz, David Hildenbrand, Matthew Wilcox, Andrew Morton, peterx@redhat.com, Yu Zhao, Ryan Roberts, Yang Shi, Hugh Dickins, Kirill A. Shutemov
Subject: [PATCH RFC v2 2/3] mm: Reorg and declare free spaces in struct folio tails
Date: Mon, 14 Aug 2023 14:44:10 -0400
Message-ID: <20230814184411.330496-3-peterx@redhat.com>
In-Reply-To: <20230814184411.330496-1-peterx@redhat.com>
References: <20230814184411.330496-1-peterx@redhat.com>

It's not 100% clear what the free spaces are in the folio tail pages.
Currently we only define fields in tail pages 1-2, and even those are not
fully occupied.  Add fields that make the free space explicit, and
reorganize them a bit so that 32/64-bit alignment is easy to reason about.

Here _free_1_0 is a constant 2-byte hole on any system; making it explicit
lets people know it can be reused at any time.  _free_1_1 is special and
needs some attention: it shifts tail page 1's fields, starting from
_entire_mapcount, 4 bytes later.  I don't expect this to change real
performance much - if it does, it might be better to put _entire_mapcount
and _nr_pages_mapped on the same 8-byte alignment, assuming _pincount is
used more rarely in practice.  In any case the movement shouldn't change
much on x86 or anything else with 64-byte cachelines.  This is the major
reason why I kept this change separate from the upcoming documentation
update patch - it may need some attention, and if something unwanted
happens (which I don't expect) we will quickly know what went wrong.

_free_1_2 / _free_2_1 just call out extra free space elsewhere and, like
_free_1_0, shouldn't affect a thing.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm_types.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3f2b0d46f5d6..829f5adfded1 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -329,11 +329,21 @@ struct folio {
 			/* public: */
 			unsigned char _folio_dtor;
 			unsigned char _folio_order;
+			/* private: 2 bytes can be reused later */
+			unsigned char _free_1_0[2];
+#ifdef CONFIG_64BIT
+			/* 4 bytes can be reused later (64 bits only) */
+			unsigned char _free_1_1[4];
+#endif
+			/* public: */
 			atomic_t _entire_mapcount;
 			atomic_t _nr_pages_mapped;
 			atomic_t _pincount;
 #ifdef CONFIG_64BIT
 			unsigned int _folio_nr_pages;
+			/* private: 4 bytes can be reused later (64 bits only) */
+			unsigned char _free_1_2[4];
+			/* public: */
 #endif
 	/* private: the union with struct page is transitional */
 		};
@@ -355,6 +365,8 @@ struct folio {
 			unsigned long _head_2a;
 	/* public: */
 			struct list_head _deferred_list;
+			/* private: 8 more free bytes for either 32/64 bits */
+			unsigned char _free_2_1[8];
 	/* private: the union with struct page is transitional */
 		};
 		struct page __page_2;
-- 
2.41.0
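To make the alignment reasoning above concrete, here is a small
stand-alone model (plain user-space C, not kernel code; field widths
mirror a 64-bit build, and the struct is only an approximation of tail
page 1 after this patch) showing how the explicit _free_1_0/_free_1_1
bytes land the three 32-bit counters on a fresh word boundary:

	#include <assert.h>
	#include <stddef.h>
	#include <stdint.h>

	/* Rough user-space model of folio tail page 1 after this patch
	 * (64-bit field widths); atomic_t is modelled as a 32-bit counter. */
	struct tail1_model {
		unsigned long _flags_1;
		unsigned long _head_1;
		unsigned char _folio_dtor;
		unsigned char _folio_order;
		unsigned char _free_1_0[2];	/* explicit 2-byte hole */
		unsigned char _free_1_1[4];	/* 64-bit only: shifts counters 4 bytes later */
		int32_t _entire_mapcount;
		int32_t _nr_pages_mapped;
		int32_t _pincount;
		unsigned int _folio_nr_pages;
		unsigned char _free_1_2[4];	/* remaining free bytes on 64-bit */
	};

	int main(void)
	{
		/* WORD 2 of the tail page starts right after the flags/head copies. */
		assert(offsetof(struct tail1_model, _folio_dtor) == 2 * sizeof(unsigned long));
		/* With the filler bytes, the 32-bit counters start at the
		 * beginning of WORD 3 rather than halfway into WORD 2. */
		assert(offsetof(struct tail1_model, _entire_mapcount) % sizeof(unsigned long) == 0);
		return 0;
	}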
From nobody Thu Dec 18 07:50:39 2025
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Mike Kravetz, David Hildenbrand, Matthew Wilcox, Andrew Morton, peterx@redhat.com, Yu Zhao, Ryan Roberts, Yang Shi, Hugh Dickins, Kirill A. Shutemov
Subject: [PATCH RFC v2 3/3] mm: Proper document tail pages fields for folio
Date: Mon, 14 Aug 2023 14:44:11 -0400
Message-ID: <20230814184411.330496-4-peterx@redhat.com>
In-Reply-To: <20230814184411.330496-1-peterx@redhat.com>
References: <20230814184411.330496-1-peterx@redhat.com>

Tail page struct reuse is over-complicated.  Not only do we have implicit
uses of tail page fields (mapcounts, or private for THP swap support,
etc.) that we _may_ still use in the page structs without the relationship
to the folio definitions being obvious, we also have separate 32/64-bit
layouts for struct page, so it's unclear what we can and cannot use when
trying to find a new spot in the folio struct.  We also have tricks like
page->mapping, where we can reuse only tail pages 1-2 but nothing beyond
tail page 2.  It is all mostly hidden, until someone starts to read into a
VM_BUG_ON_PAGE() in __split_huge_page_tail().

It's also unclear how many fields we can reuse in a tail page.  The real
answer is (with help from Matthew): we have 7 WORDs guaranteed on 64 bits
and 8 WORDs on 32 bits.  Nothing more than that is guaranteed to even
exist.

Let's document clearly what we can and cannot use when extending the folio
by reusing tail page fields, with an explanation for each of them.
Hopefully the doc update will make it easier for:

  (1) Any reader to know exactly what field is where and for what, and how
      the folio tail pages relate to the struct page definitions,

  (2) Any potential new field to be added to a large folio, so it's clear
      which fields can still be reused.

This assumes WORD is defined as sizeof(void *) on all archs, just like the
other comment we already have in struct page.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm_types.h | 41 ++++++++++++++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 829f5adfded1..9c744f70ae84 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -322,11 +322,40 @@ struct folio {
 		};
 		struct page page;
 	};
+	/*
+	 * Some of the tail page fields may not be reused by the folio
+	 * object because they are already used by the struct page.  On
+	 * 32 bits there are at least 8 WORDs while on 64 bits there are
+	 * at least 7 WORDs:
+	 *
+	 * |--------+-------------+-------------------|
+	 * | index  | 32 bits     | 64 bits           |
+	 * |--------+-------------+-------------------|
+	 * |   0    | flags       | flags             |
+	 * |   1    | head        | head              |
+	 * |   2    | FREE        | FREE              |
+	 * |   3    | FREE [1]    | FREE [1]          |
+	 * |   4    | FREE        | FREE              |
+	 * |   5    | FREE        | private [2]       |
+	 * |   6    | mapcnt      | mapcnt+refcnt [3] |
+	 * |   7    | refcnt [3]  |                   |
+	 * |--------+-------------+-------------------|
+	 *
+	 * [1] The "mapping" field.  It is free to use, but needs some
+	 *     caution due to poisoning, see TAIL_MAPPING_REUSED_MAX.
+	 *
+	 * [2] The "private" field, used when THP_SWAP is on (but disabled
+	 *     on 32 bits, so this index is FREE on 32bit or hugetlb folios).
+	 *     May need to be fixed eventually.
+	 *
+	 * [3] The "refcount" field must be zero for all tail pages.  See
+	 *     e.g. the page_ref_count() check in has_unmovable_pages().
+	 */
 	union {
 		struct {
 			unsigned long _flags_1;
 			unsigned long _head_1;
-			/* public: */
+			/* public: WORD 2 */
 			unsigned char _folio_dtor;
 			unsigned char _folio_order;
 			/* private: 2 bytes can be reused later */
@@ -335,7 +364,7 @@ struct folio {
 			/* 4 bytes can be reused later (64 bits only) */
 			unsigned char _free_1_1[4];
 #endif
-			/* public: */
+			/* public: WORD 3 */
 			atomic_t _entire_mapcount;
 			atomic_t _nr_pages_mapped;
 			atomic_t _pincount;
@@ -350,20 +379,20 @@ struct folio {
 		struct page __page_1;
 	};
 	union {
-		struct {
+		struct {	/* hugetlb folios */
 			unsigned long _flags_2;
 			unsigned long _head_2;
-			/* public: */
+			/* public: WORD 2 */
 			void *_hugetlb_subpool;
 			void *_hugetlb_cgroup;
 			void *_hugetlb_cgroup_rsvd;
 			void *_hugetlb_hwpoison;
 	/* private: the union with struct page is transitional */
 		};
-		struct {
+		struct {	/* non-hugetlb folios */
 			unsigned long _flags_2a;
 			unsigned long _head_2a;
-			/* public: */
+			/* public: WORD 2-3 */
 			struct list_head _deferred_list;
 			/* private: 8 more free bytes for either 32/64 bits */
 			unsigned char _free_2_1[8];
-- 
2.41.0
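If a build-time guard for the WORD budget described in the new comment
were wanted, one hypothetical way to phrase it is sketched below.  The
TAIL1_WORD() helper and TAIL_WORDS_GUARANTEED are invented for this sketch
(they are not in the patch); the 7-word/8-word figures come straight from
the commit message, and the asserts assume the struct folio layout as of
this series:

	/* Hypothetical sketch, not part of the patch: map a struct folio field
	 * in tail page 1 back to the WORD index used in the table above, and
	 * make sure anything reused stays within the guaranteed WORDs. */
	#ifdef CONFIG_64BIT
	#define TAIL_WORDS_GUARANTEED	7
	#else
	#define TAIL_WORDS_GUARANTEED	8
	#endif

	#define TAIL1_WORD(field) \
		((offsetof(struct folio, field) - sizeof(struct page)) / sizeof(void *))

	/* The flags/head copies occupy WORDs 0 and 1 of tail page 1... */
	static_assert(TAIL1_WORD(_flags_1) == 0);
	static_assert(TAIL1_WORD(_head_1) == 1);
	/* ...and any reused field must fall inside the guaranteed WORDs. */
	static_assert(TAIL1_WORD(_pincount) < TAIL_WORDS_GUARANTEED);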