On 2/2/26 16:56, Kiryl Shutsemau wrote:
> With the upcoming changes to HVO, a single page of tail struct pages
> will be shared across all huge pages of the same order on a node. Since
> huge pages on the same node may belong to different zones, the zone
> information stored in shared tail page flags would be incorrect.
>
> Always fetch zone information from the head page, which has unique and
> correct zone flags for each compound page.
>
> Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
> Acked-by: Zi Yan <ziy@nvidia.com>
> ---
> include/linux/mmzone.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index be8ce40b5638..192143b5cdc0 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1219,6 +1219,7 @@ static inline enum zone_type memdesc_zonenum(memdesc_flags_t flags)
>
> static inline enum zone_type page_zonenum(const struct page *page)
> {
> + page = compound_head(page);
> return memdesc_zonenum(page->flags);
We end up calling page_zonenum() without holding a reference.
Given that _compound_head() does a READ_ONCE(), this should work even if
we see concurrent page freeing etc.
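For reference, that lookup is roughly (paraphrasing _compound_head() from
include/linux/page-flags.h; the exact shape may differ between kernel versions):

static __always_inline unsigned long _compound_head(const struct page *page)
{
        /* Tail pages store head | 1 in compound_head; read it exactly once. */
        unsigned long head = READ_ONCE(page->compound_head);

        if (unlikely(head & 1))
                return head - 1;
        return (unsigned long)page_fixed_fake_head(page);
}

So even racing with freeing we get a single consistent value rather than a
torn read.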
However, this change implies that we now perform a compound page lookup
for every PageHighMem() [meh] and every page_zone() [quite a few users in
the buddy allocator, including pageblock access and page freeing].
That's a nasty compromise for making HVO better? :)
We should likely limit that special casing to kernels that really require
it (HVO).
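E.g. something along these lines (just a sketch; whether to key it off
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP as below, or off a runtime static key,
is an open question):

static inline enum zone_type page_zonenum(const struct page *page)
{
        /*
         * Only kernels that can share tail struct pages (HVO) need to
         * look at the head page; everyone else keeps reading the flags
         * of the page itself.
         */
        if (IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP))
                page = compound_head(page);
        return memdesc_zonenum(page->flags);
}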
--
Cheers,
David