Hi all,

This is v2 of the VMA count patch I previously posted at:

https://lore.kernel.org/r/20250903232437.1454293-1-kaleshsingh@google.com/

I've split it into multiple patches to address the feedback.

The main changes in v2 are:

- Use a capacity-based check for the VMA count limit, per Lorenzo.
- Rename map_count to vma_count, per David.
- Add assertions for exceeding the limit, per Pedro.
- Add tests for max_vma_count, per Liam.
- Emit a trace event on failure due to insufficient capacity, for
  observability.

Tested on x86_64 and arm64:

- Build test:
  - allyesconfig for the rename

- Selftests:
    cd tools/testing/selftests/mm && \
    make && \
    ./run_vmtests.sh -t max_vma_count

  (With trace_max_vma_count_exceeded enabled)

- vma tests:
    cd tools/testing/vma && \
    make && \
    ./vma

Thanks,
Kalesh

Kalesh Singh (7):
  mm: fix off-by-one error in VMA count limit checks
  mm/selftests: add max_vma_count tests
  mm: introduce vma_count_remaining()
  mm: rename mm_struct::map_count to vma_count
  mm: harden vma_count against direct modification
  mm: add assertion for VMA count limit
  mm/tracing: introduce max_vma_count_exceeded trace event

 fs/binfmt_elf.c                                |   2 +-
 fs/coredump.c                                  |   2 +-
 include/linux/mm.h                             |  35 +-
 include/linux/mm_types.h                       |   5 +-
 include/trace/events/vma.h                     |  32 +
 kernel/fork.c                                  |   2 +-
 mm/debug.c                                     |   2 +-
 mm/internal.h                                  |   1 +
 mm/mmap.c                                      |  28 +-
 mm/mremap.c                                    |  13 +-
 mm/nommu.c                                     |   8 +-
 mm/util.c                                      |   1 -
 mm/vma.c                                       |  88 ++-
 tools/testing/selftests/mm/Makefile            |   1 +
 .../selftests/mm/max_vma_count_tests.c         | 709 ++++++++++++++++++
 tools/testing/selftests/mm/run_vmtests.sh      |   5 +
 tools/testing/vma/vma.c                        |  32 +-
 tools/testing/vma/vma_internal.h               |  44 +-
 18 files changed, 949 insertions(+), 61 deletions(-)
 create mode 100644 include/trace/events/vma.h
 create mode 100644 tools/testing/selftests/mm/max_vma_count_tests.c

base-commit: f83ec76bf285bea5727f478a68b894f5543ca76e
--
2.51.0.384.g4c02a37b29-goog
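For context on the capacity-based check mentioned above: rather than having
each caller compare mm->map_count against sysctl_max_map_count directly
(the pattern that hid the off-by-one fixed in 1/7), the series funnels the
limit through a single helper that reports how much headroom remains. A
minimal sketch of the idea, assuming the post-rename field name vma_count;
the body below is a plausible reconstruction, not the exact code from 3/7:

    /*
     * Sketch of a capacity-based VMA limit check. vma_count_remaining()
     * is the helper introduced in 3/7; this body is illustrative, not
     * the exact kernel implementation.
     */
    static inline int vma_count_remaining(const struct mm_struct *mm)
    {
            const int max_count = sysctl_max_map_count;
            const int count = mm->vma_count;

            /* Report remaining headroom, clamped at zero. */
            return (max_count > count) ? (max_count - count) : 0;
    }

A caller that is about to create several VMAs (e.g. a split plus a new
mapping) can then ask for the headroom it actually needs up front, along
the lines of "if (vma_count_remaining(mm) < nr_needed) return -ENOMEM;",
instead of re-deriving the comparison at every site.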
On Mon, 15 Sep 2025 09:36:31 -0700 Kalesh Singh <kaleshsingh@google.com> wrote:

> Hi all,
>
> This is v2 of the VMA count patch I previously posted at:
>
> https://lore.kernel.org/r/20250903232437.1454293-1-kaleshsingh@google.com/
>
> I've split it into multiple patches to address the feedback.
>
> The main changes in v2 are:
>
> - Use a capacity-based check for the VMA count limit, per Lorenzo.
> - Rename map_count to vma_count, per David.
> - Add assertions for exceeding the limit, per Pedro.
> - Add tests for max_vma_count, per Liam.
> - Emit a trace event on failure due to insufficient capacity, for
>   observability.
>
> Tested on x86_64 and arm64:
>
> - Build test:
>   - allyesconfig for the rename
>
> - Selftests:
>     cd tools/testing/selftests/mm && \
>     make && \
>     ./run_vmtests.sh -t max_vma_count
>
>   (With trace_max_vma_count_exceeded enabled)
>
> - vma tests:
>     cd tools/testing/vma && \
>     make && \
>     ./vma

fwiw, there's nothing in the above which is usable in a [0/N] overview.

While useful, the "what changed since the previous version" info isn't a
suitable thing to carry in the permanent kernel record - it's short-term
transient stuff, not helpful to someone who is looking at the patchset in
2029.

Similarly, the "how it was tested" material is also useful, but it becomes
irrelevant as soon as the code hits linux-next and mainline.

Anyhow, this -rc cycle has been quite the firehose in MM and I'm feeling a
need to slow things down for additional stabilization and so people
hopefully get additional bandwidth to digest the material we've added this
far. So I think I'll just cherrypick [1/7] for now. A great flood of
positive review activity would probably make me revisit that ;)
On Mon, Sep 15, 2025 at 03:34:01PM -0700, Andrew Morton wrote:
> Anyhow, this -rc cycle has been quite the firehose in MM and I'm
> feeling a need to slow things down for additional stabilization and so
> people hopefully get additional bandwidth to digest the material we've
> added this far. So I think I'll just cherrypick [1/7] for now. A
> great flood of positive review activity would probably make me revisit
> that ;)

Kalesh - I do intend to look at this series when I have a chance. My
review workload has been insane so it's hard to keep up at the moment.

Andrew - This cycle has been crazy; speaking from the point of view of
somebody doing a lot of review, it's been very, very exhausting from this
side too, and this kind of work can feel a little... thankless...
sometimes :)

I feel like we maybe need a way to ask people to slow down, sometimes at
least.

Perhaps being less accepting of patches during the merge window is one
aspect, as the merge window leading up to this cycle was almost the same
review load as when the cycle started.

Anyway, TL;DR: I think we need to be mindful of reviewer sanity as a
factor in all this too :)

(I am speaking at Kernel Recipes then going on a very-badly-needed 2.5
week vacation afterwards over the merge window, so I hope to stave off
burnout that way. Be good if I could keep mails upon return to 3 digits,
but I have my doubts :P)

Cheers, Lorenzo
On Tue, 16 Sep 2025 11:12:03 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:

> On Mon, Sep 15, 2025 at 03:34:01PM -0700, Andrew Morton wrote:
> > Anyhow, this -rc cycle has been quite the firehose in MM and I'm
> > feeling a need to slow things down for additional stabilization and so
> > people hopefully get additional bandwidth to digest the material we've
> > added this far. So I think I'll just cherrypick [1/7] for now. A
> > great flood of positive review activity would probably make me revisit
> > that ;)
>
> Kalesh - I do intend to look at this series when I have a chance. My
> review workload has been insane so it's hard to keep up at the moment.
>
> Andrew - This cycle has been crazy; speaking from the point of view of
> somebody doing a lot of review, it's been very, very exhausting from this
> side too, and this kind of work can feel a little... thankless...
> sometimes :)

I hear you. I'm shedding most everything now, to give us a couple of
weeks to digest.

> I feel like we maybe need a way to ask people to slow down, sometimes at
> least.

Yup, I'm sending submitters private emails explaining the situation.
Maybe they should be public emails, I find it a hard call.

> Perhaps being less accepting of patches during the merge window is one
> aspect, as the merge window leading up to this cycle was almost the same
> review load as when the cycle started.

I'm having trouble understanding what you said here?

> Anyway, TL;DR: I think we need to be mindful of reviewer sanity as a
> factor in all this too :)
>
> (I am speaking at Kernel Recipes then going on a very-badly-needed 2.5
> week vacation afterwards over the merge window, so I hope to stave off
> burnout that way. Be good if I could keep mails upon return to 3 digits,
> but I have my doubts :P)

I'd blow that in three days ;)
On Tue, Sep 16, 2025 at 07:16:45PM -0700, Andrew Morton wrote:
> On Tue, 16 Sep 2025 11:12:03 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> > On Mon, Sep 15, 2025 at 03:34:01PM -0700, Andrew Morton wrote:
> > > Anyhow, this -rc cycle has been quite the firehose in MM and I'm
> > > feeling a need to slow things down for additional stabilization and
> > > so people hopefully get additional bandwidth to digest the material
> > > we've added this far. So I think I'll just cherrypick [1/7] for now.
> > > A great flood of positive review activity would probably make me
> > > revisit that ;)
> >
> > Kalesh - I do intend to look at this series when I have a chance. My
> > review workload has been insane so it's hard to keep up at the moment.
> >
> > Andrew - This cycle has been crazy; speaking from the point of view of
> > somebody doing a lot of review, it's been very, very exhausting from
> > this side too, and this kind of work can feel a little... thankless...
> > sometimes :)
>
> I hear you. I'm shedding most everything now, to give us a couple of
> weeks to digest.

Thanks, much appreciated! :)

> > I feel like we maybe need a way to ask people to slow down, sometimes
> > at least.
>
> Yup, I'm sending submitters private emails explaining the situation.

And again, much appreciated :)

> Maybe they should be public emails, I find it a hard call.

Yeah it can be hard to get that balance right. Maybe public is better
when we're deeper in the rc and there's a general load problem?

> > Perhaps being less accepting of patches during the merge window is one
> > aspect, as the merge window leading up to this cycle was almost the
> > same review load as when the cycle started.
>
> I'm having trouble understanding what you said here?

Sorry, what I mean to say is that in mm we're pretty open to taking stuff
in the merge window, esp. now we have mm-new.

And last merge window my review load felt similar to during a cycle, which
was kind of crazy.

So I wonder if we should be less accommodating and simply say 'sorry, it's
the merge window, no submissions accepted'?

Of course I'm being a bit selfish here as I'm on holiday in the next merge
window and hope forlornly to reduce the mail I come back to :P

> > Anyway, TL;DR: I think we need to be mindful of reviewer sanity as a
> > factor in all this too :)
> >
> > (I am speaking at Kernel Recipes then going on a very-badly-needed 2.5
> > week vacation afterwards over the merge window, so I hope to stave off
> > burnout that way. Be good if I could keep mails upon return to 3
> > digits, but I have my doubts :P)
>
> I'd blow that in three days ;)

Haha yeah I bet :)

Cheers, Lorenzo
On Wed, 17 Sep 2025 06:36:34 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:

> > > Perhaps being less accepting of patches during the merge window is
> > > one aspect, as the merge window leading up to this cycle was almost
> > > the same review load as when the cycle started.
> >
> > I'm having trouble understanding what you said here?
>
> Sorry, what I mean to say is that in mm we're pretty open to taking stuff
> in the merge window, esp. now we have mm-new.
>
> And last merge window my review load felt similar to during a cycle,
> which was kind of crazy.
>
> So I wonder if we should be less accommodating and simply say 'sorry,
> it's the merge window, no submissions accepted'?

hm, I always have a lot of emails piled up by the time mm-stable gets
merged upstream. That ~1 week between "we merged" and "-rc1" is a nice
time to go through that material and add it to mm-new. I think it smooths
things out. I mean, this is peak time for people to be considering the
new material?

(ot, that backlog is always >400 emails and a lot of that gets tossed out
anyway - either it's just too old so I request a refresh, or there was a
new version, or review was unpromising, etc).
On Wed, Sep 17, 2025 at 04:32:31PM -0700, Andrew Morton wrote:
> On Wed, 17 Sep 2025 06:36:34 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> > > > Perhaps being less accepting of patches during the merge window is
> > > > one aspect, as the merge window leading up to this cycle was almost
> > > > the same review load as when the cycle started.
> > >
> > > I'm having trouble understanding what you said here?
> >
> > Sorry, what I mean to say is that in mm we're pretty open to taking
> > stuff in the merge window, esp. now we have mm-new.
> >
> > And last merge window my review load felt similar to during a cycle,
> > which was kind of crazy.
> >
> > So I wonder if we should be less accommodating and simply say 'sorry,
> > it's the merge window, no submissions accepted'?
>
> hm, I always have a lot of emails piled up by the time mm-stable gets
> merged upstream. That ~1 week between "we merged" and "-rc1" is a nice
> time to go through that material and add it to mm-new. I think it
> smooths things out. I mean, this is peak time for people to be
> considering the new material?

I'm confused - why is the merge window a good time to consider new
material?

People have the entirety of the cycle to submit new material, and they do
so. And equally, people are sending new revisions of old code, engaging
in discussion from old series, etc. during the merge window also.

What happens is you essentially have reviewers work 9 weeks instead of 7
for a cycle without much of a let-up (+ so no break from it), based on
workload from the past cycle/merge window.

I mean I can only ask that perhaps we consider not doing this in mm (I
gather many other subsystems equally have a kinda 'freeze' during this
time).

> (ot, that backlog is always >400 emails and a lot of that gets tossed
> out anyway - either it's just too old so I request a refresh, or there
> was a new version, or review was unpromising, etc).

Right, which actually makes everything a lot more uncertain from a
reviewer's point of view, as we don't definitely have a solid git base,
mm-new isn't synced with Linus's tree very much during this time, etc.
Which makes the merge window actually even worse for this stuff.

From the point of view of avoiding burnout, it'd be good to manage
expectations a bit on this. Personally I remember literally saying to
David during the last merge window 'well, I hoped I could go off and work
on my own stuff instead of just review, guess not then' :)

THP is a particularly busy area at the moment, which is part of all this.
On 18.09.25 12:29, Lorenzo Stoakes wrote:
> On Wed, Sep 17, 2025 at 04:32:31PM -0700, Andrew Morton wrote:
>> On Wed, 17 Sep 2025 06:36:34 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>>
>>>>> Perhaps being less accepting of patches during the merge window is
>>>>> one aspect, as the merge window leading up to this cycle was almost
>>>>> the same review load as when the cycle started.
>>>>
>>>> I'm having trouble understanding what you said here?
>>>
>>> Sorry, what I mean to say is that in mm we're pretty open to taking
>>> stuff in the merge window, esp. now we have mm-new.
>>>
>>> And last merge window my review load felt similar to during a cycle,
>>> which was kind of crazy.
>>>
>>> So I wonder if we should be less accommodating and simply say 'sorry,
>>> it's the merge window, no submissions accepted'?
>>
>> hm, I always have a lot of emails piled up by the time mm-stable gets
>> merged upstream. That ~1 week between "we merged" and "-rc1" is a nice
>> time to go through that material and add it to mm-new. I think it
>> smooths things out. I mean, this is peak time for people to be
>> considering the new material?
>
> I'm confused - why is the merge window a good time to consider new
> material?
>
> People have the entirety of the cycle to submit new material, and they
> do so.

My view is that if you are sending a cleanup/feature during the merge
window you cannot expect a fast reply, and you should not keep sending new
versions in that timeframe expecting that all the people you CCed who
should have a look actually did have a look.

--
Cheers

David / dhildenb
On Thu, Sep 18, 2025 at 02:07:09PM +0200, David Hildenbrand wrote:
> On 18.09.25 12:29, Lorenzo Stoakes wrote:
> > On Wed, Sep 17, 2025 at 04:32:31PM -0700, Andrew Morton wrote:
> > > On Wed, 17 Sep 2025 06:36:34 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
> > >
> > > > > > Perhaps being less accepting of patches during the merge window
> > > > > > is one aspect, as the merge window leading up to this cycle was
> > > > > > almost the same review load as when the cycle started.
> > > > >
> > > > > I'm having trouble understanding what you said here?
> > > >
> > > > Sorry, what I mean to say is that in mm we're pretty open to taking
> > > > stuff in the merge window, esp. now we have mm-new.
> > > >
> > > > And last merge window my review load felt similar to during a
> > > > cycle, which was kind of crazy.
> > > >
> > > > So I wonder if we should be less accommodating and simply say
> > > > 'sorry, it's the merge window, no submissions accepted'?
> > >
> > > hm, I always have a lot of emails piled up by the time mm-stable gets
> > > merged upstream. That ~1 week between "we merged" and "-rc1" is a
> > > nice time to go through that material and add it to mm-new. I think
> > > it smooths things out. I mean, this is peak time for people to be
> > > considering the new material?
> >
> > I'm confused - why is the merge window a good time to consider new
> > material?
> >
> > People have the entirety of the cycle to submit new material, and they
> > do so.
>
> My view is that if you are sending a cleanup/feature during the merge
> window you cannot expect a fast reply, and you should not keep sending
> new versions in that timeframe expecting that all the people you CCed
> who should have a look actually did have a look.

Yes exactly.

The problem is all the conversations, respins, etc. _do_ carry on as
normal, and often land in mm-new, queued up for mm-unstable etc. unless we
happen to notice them.

So it makes it impossible for us to just ignore things until the next
cycle (or we need to go through every thread that happened afterwards).

And people know that, so they just keep on submitting as normal. That was
_really_ palpable last merge window.

I mean I'm cheating by going on vacation for this merge window ;) but that
obviously means I'll have 2 weeks of review to check when I get back + the
1st week of the cycle to go too.

I think in some subsystems new series/respins are actively unwelcome
during the merge window. I wonder if we should be the same?

> --
> Cheers
>
> David / dhildenb

Cheers, Lorenzo
On Thu, 18 Sep 2025 13:49:33 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:

> > > I'm confused - why is the merge window a good time to consider new
> > > material?
> > >
> > > People have the entirety of the cycle to submit new material, and
> > > they do so.
> >
> > My view is that if you are sending a cleanup/feature during the merge
> > window you cannot expect a fast reply, and you should not keep sending
> > new versions in that timeframe expecting that all the people you CCed
> > who should have a look actually did have a look.
>
> Yes exactly.
>
> The problem is all the conversations, respins, etc. _do_ carry on as
> normal, and often land in mm-new, queued up for mm-unstable etc. unless
> we happen to notice them.
>
> So it makes it impossible for us to just ignore things until the next
> cycle (or we need to go through every thread that happened afterwards).
>
> And people know that, so they just keep on submitting as normal. That
> was _really_ palpable last merge window.

Well, what else do we have to do during the merge window? The previous
cycle's batch is merged up and there may be some fallout from that, but it
isn't very time-consuming.

If you're proposing that we start to use that period as a break for sanity
purposes then OK, didn't see that one coming, don't know how widespread
this desire is.

But perhaps a better time for this quiet period is during -rc6 and -rc7,
when the rate of merging is throttled right back. Or perhaps from -rc7 to
mid-merge-window.
On Tue, Sep 16, 2025 at 3:12 AM Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> On Mon, Sep 15, 2025 at 03:34:01PM -0700, Andrew Morton wrote:
> > Anyhow, this -rc cycle has been quite the firehose in MM and I'm
> > feeling a need to slow things down for additional stabilization and so
> > people hopefully get additional bandwidth to digest the material we've
> > added this far. So I think I'll just cherrypick [1/7] for now. A
> > great flood of positive review activity would probably make me revisit
> > that ;)
>
> Kalesh - I do intend to look at this series when I have a chance. My
> review workload has been insane so it's hard to keep up at the moment.
>
> Andrew - This cycle has been crazy; speaking from the point of view of
> somebody doing a lot of review, it's been very, very exhausting from this
> side too, and this kind of work can feel a little... thankless...
> sometimes :)
>
> I feel like we maybe need a way to ask people to slow down, sometimes at
> least.
>
> Perhaps being less accepting of patches during the merge window is one
> aspect, as the merge window leading up to this cycle was almost the same
> review load as when the cycle started.
>
> Anyway, TL;DR: I think we need to be mindful of reviewer sanity as a
> factor in all this too :)
>
> (I am speaking at Kernel Recipes then going on a very-badly-needed 2.5
> week vacation afterwards over the merge window, so I hope to stave off
> burnout that way. Be good if I could keep mails upon return to 3 digits,
> but I have my doubts :P)

Hi Lorenzo,

Absolutely, take care of yourself. We all appreciate the amount of work
you put into reviewing :)

Have a good talk and enjoy your vacation. Don't worry about the backlog.

-- Kalesh

> Cheers, Lorenzo
On Mon, Sep 15, 2025 at 3:34 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 15 Sep 2025 09:36:31 -0700 Kalesh Singh <kaleshsingh@google.com> wrote:
>
> > Hi all,
> >
> > This is v2 of the VMA count patch I previously posted at:
> >
> > https://lore.kernel.org/r/20250903232437.1454293-1-kaleshsingh@google.com/
> >
> > I've split it into multiple patches to address the feedback.
> >
> > The main changes in v2 are:
> >
> > - Use a capacity-based check for the VMA count limit, per Lorenzo.
> > - Rename map_count to vma_count, per David.
> > - Add assertions for exceeding the limit, per Pedro.
> > - Add tests for max_vma_count, per Liam.
> > - Emit a trace event on failure due to insufficient capacity, for
> >   observability.
> >
> > Tested on x86_64 and arm64:
> >
> > - Build test:
> >   - allyesconfig for the rename
> >
> > - Selftests:
> >     cd tools/testing/selftests/mm && \
> >     make && \
> >     ./run_vmtests.sh -t max_vma_count
> >
> >   (With trace_max_vma_count_exceeded enabled)
> >
> > - vma tests:
> >     cd tools/testing/vma && \
> >     make && \
> >     ./vma
>
> fwiw, there's nothing in the above which is usable in a [0/N] overview.
>
> While useful, the "what changed since the previous version" info isn't a
> suitable thing to carry in the permanent kernel record - it's short-term
> transient stuff, not helpful to someone who is looking at the patchset
> in 2029.
>
> Similarly, the "how it was tested" material is also useful, but it
> becomes irrelevant as soon as the code hits linux-next and mainline.

Hi Andrew,

Thanks for the feedback. Do you mean that the cover letter was not needed
in this case, or that it lacked enough context?

> Anyhow, this -rc cycle has been quite the firehose in MM and I'm feeling
> a need to slow things down for additional stabilization and so people
> hopefully get additional bandwidth to digest the material we've added
> this far. So I think I'll just cherrypick [1/7] for now. A great flood
> of positive review activity would probably make me revisit that ;)

I understand; yes, 1/7 is all we need for now, since it prevents an
unrecoverable situation where we get over the limit and cannot recover,
as munmap() will then always fail.

Thanks,
Kalesh
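To spell out the failure mode Kalesh describes (a hypothetical user-space
illustration, not code from the series): munmap() of a range in the middle
of a mapping must split one VMA into two, which transiently requires an
extra VMA slot. With the off-by-one check fixed by 1/7, a process already
at the limit could otherwise never unmap its way back under it:

    /*
     * Hypothetical illustration of the trap fixed by 1/7: a partial
     * munmap() needs a VMA split (one extra VMA, transiently), so at
     * the max_map_count limit it fails with ENOMEM and the process
     * cannot reduce its own VMA count.
     */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            long pg = sysconf(_SC_PAGESIZE);
            char *p = mmap(NULL, 3 * pg, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED)
                    return 1;

            /*
             * Unmapping only the middle page turns one VMA into two, so
             * the kernel must allocate a VMA for the tail. At the limit,
             * this is the call that can fail with ENOMEM.
             */
            if (munmap(p + pg, pg) != 0)
                    perror("munmap (split at the VMA limit)");

            return 0;
    }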
On Mon, 15 Sep 2025 16:10:55 -0700 Kalesh Singh <kaleshsingh@google.com> wrote:

> > fwiw, there's nothing in the above which is usable in a [0/N] overview.
> >
> > While useful, the "what changed since the previous version" info isn't
> > a suitable thing to carry in the permanent kernel record - it's
> > short-term transient stuff, not helpful to someone who is looking at
> > the patchset in 2029.
> >
> > Similarly, the "how it was tested" material is also useful, but it
> > becomes irrelevant as soon as the code hits linux-next and mainline.
>
> Hi Andrew,
>
> Thanks for the feedback. Do you mean that the cover letter was not
> needed in this case, or that it lacked enough context?

The latter. As I've split up the series, please put together some words to
describe the remaining 6 patches if/when resending.

Thanks.
On Mon, Sep 15, 2025 at 5:05 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 15 Sep 2025 16:10:55 -0700 Kalesh Singh <kaleshsingh@google.com> wrote:
>
> > > fwiw, there's nothing in the above which is usable in a [0/N]
> > > overview.
> > >
> > > While useful, the "what changed since the previous version" info
> > > isn't a suitable thing to carry in the permanent kernel record - it's
> > > short-term transient stuff, not helpful to someone who is looking at
> > > the patchset in 2029.
> > >
> > > Similarly, the "how it was tested" material is also useful, but it
> > > becomes irrelevant as soon as the code hits linux-next and mainline.
> >
> > Hi Andrew,
> >
> > Thanks for the feedback. Do you mean that the cover letter was not
> > needed in this case, or that it lacked enough context?
>
> The latter. As I've split up the series, please put together some words
> to describe the remaining 6 patches if/when resending.

Hi Andrew,

Thanks for clarifying. I'll make sure to fix that when resending.

--Kalesh

> Thanks.