In page_cache_ra_order(), the maximal order of the page cache to be
allocated shouldn't be larger than MAX_PAGECACHE_ORDER. Otherwise,
it's possible the large page cache can't be supported by xarray when
the corresponding xarray entry is split.
For example, HPAGE_PMD_ORDER is 13 on ARM64 when the base page size
is 64KB. The PMD-sized page cache can't be supported by xarray.
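
To make that concrete, here is a minimal userspace sketch of the
arithmetic (illustration only, not part of the patch; the shift values
are hard-coded for arm64 with a 64KB base page size, where they would
normally come from PAGE_SHIFT and PMD_SHIFT):

  #include <stdio.h>

  int main(void)
  {
          unsigned int page_shift = 16;   /* 64KB base page */
          unsigned int pmd_shift = 29;    /* a PMD maps 512MB with 64KB pages */

          /* HPAGE_PMD_ORDER = log2(PMD_SIZE / PAGE_SIZE) */
          unsigned int hpage_pmd_order = pmd_shift - page_shift;

          printf("HPAGE_PMD_ORDER = %u\n", hpage_pmd_order);     /* prints 13 */
          return 0;
  }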
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
mm/readahead.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index c1b23989d9ca..817b2a352d78 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -503,11 +503,11 @@ void page_cache_ra_order(struct readahead_control *ractl,
limit = min(limit, index + ra->size - 1);
- if (new_order < MAX_PAGECACHE_ORDER) {
+ if (new_order < MAX_PAGECACHE_ORDER)
new_order += 2;
- new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
- new_order = min_t(unsigned int, new_order, ilog2(ra->size));
- }
+
+ new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
+ new_order = min_t(unsigned int, new_order, ilog2(ra->size));
/* See comment in page_cache_ra_unbounded() */
nofs = memalloc_nofs_save();
--
2.45.1
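
[For completeness, the order clamping in page_cache_ra_order() with the
patch applied can be modelled in a few lines of userspace C. This is a
sketch only: the value 11 used for MAX_PAGECACHE_ORDER is an assumed
illustration for an arm64/64K configuration, where the order supported
by the xarray split code is smaller than HPAGE_PMD_ORDER (13).

  #include <stdio.h>

  /* Assumed value for illustration; smaller than HPAGE_PMD_ORDER (13) */
  #define MAX_PAGECACHE_ORDER     11

  static unsigned int ilog2_ul(unsigned long n)
  {
          unsigned int order = 0;

          while (n >>= 1)
                  order++;
          return order;
  }

  /* Mirrors the order clamping in page_cache_ra_order() after the patch */
  static unsigned int clamp_order(unsigned int new_order, unsigned long ra_size)
  {
          if (new_order < MAX_PAGECACHE_ORDER)
                  new_order += 2;

          if (new_order > MAX_PAGECACHE_ORDER)
                  new_order = MAX_PAGECACHE_ORDER;
          if (new_order > ilog2_ul(ra_size))
                  new_order = ilog2_ul(ra_size);

          return new_order;
  }

  int main(void)
  {
          /*
           * An incoming order of 13 is now clamped to MAX_PAGECACHE_ORDER;
           * before the patch it skipped the block and was left unclamped.
           */
          printf("new_order = %u\n", clamp_order(13, 1UL << 20));  /* prints 11 */
          return 0;
  }
]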
On 25.06.24 11:06, Gavin Shan wrote:
> In page_cache_ra_order(), the maximal order of the page cache to be
> allocated shouldn't be larger than MAX_PAGECACHE_ORDER. Otherwise,
> it's possible the large page cache can't be supported by xarray when
> the corresponding xarray entry is split.
>
> For example, HPAGE_PMD_ORDER is 13 on ARM64 when the base page size
> is 64KB. The PMD-sized page cache can't be supported by xarray.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
Heh, you came up with this yourself concurrently :) so feel free to drop
that.
Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
> mm/readahead.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/readahead.c b/mm/readahead.c
> index c1b23989d9ca..817b2a352d78 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -503,11 +503,11 @@ void page_cache_ra_order(struct readahead_control *ractl,
>
> limit = min(limit, index + ra->size - 1);
>
> - if (new_order < MAX_PAGECACHE_ORDER) {
> + if (new_order < MAX_PAGECACHE_ORDER)
> new_order += 2;
> - new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
> - new_order = min_t(unsigned int, new_order, ilog2(ra->size));
> - }
> +
> + new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
> + new_order = min_t(unsigned int, new_order, ilog2(ra->size));
>
> /* See comment in page_cache_ra_unbounded() */
> nofs = memalloc_nofs_save();
--
Cheers,
David / dhildenb
On 6/26/24 4:45 AM, David Hildenbrand wrote:
> On 25.06.24 11:06, Gavin Shan wrote:
>> In page_cache_ra_order(), the maximal order of the page cache to be
>> allocated shouldn't be larger than MAX_PAGECACHE_ORDER. Otherwise,
>> it's possible the large page cache can't be supported by xarray when
>> the corresponding xarray entry is split.
>>
>> For example, HPAGE_PMD_ORDER is 13 on ARM64 when the base page size
>> is 64KB. The PMD-sized page cache can't be supported by xarray.
>>
>> Suggested-by: David Hildenbrand <david@redhat.com>
>
> Heh, you came up with this yourself concurrently :) so feel free to drop that.
>
> Acked-by: David Hildenbrand <david@redhat.com>
>
David, thanks for your follow-up and reviews. I will drop that tag in the next respin :)
Thanks,
Gavin
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>> mm/readahead.c | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/readahead.c b/mm/readahead.c
>> index c1b23989d9ca..817b2a352d78 100644
>> --- a/mm/readahead.c
>> +++ b/mm/readahead.c
>> @@ -503,11 +503,11 @@ void page_cache_ra_order(struct readahead_control *ractl,
>> limit = min(limit, index + ra->size - 1);
>> - if (new_order < MAX_PAGECACHE_ORDER) {
>> + if (new_order < MAX_PAGECACHE_ORDER)
>> new_order += 2;
>> - new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
>> - new_order = min_t(unsigned int, new_order, ilog2(ra->size));
>> - }
>> +
>> + new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
>> + new_order = min_t(unsigned int, new_order, ilog2(ra->size));
>> /* See comment in page_cache_ra_unbounded() */
>> nofs = memalloc_nofs_save();
>