Function vmap_pages_pte_range() enters lazy MMU mode,
but fails to leave it if an error is encountered.
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/r/202506132017.T1l1l6ME-lkp@intel.com/
Fixes: 44562c71e2cf ("mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes")
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
mm/vmalloc.c | 22 +++++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ab986dd09b6a..6dbcdceecae1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -514,6 +514,7 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
+	int err = 0;
 	pte_t *pte;
 
 	/*
@@ -530,12 +531,18 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	do {
 		struct page *page = pages[*nr];
 
-		if (WARN_ON(!pte_none(ptep_get(pte))))
-			return -EBUSY;
-		if (WARN_ON(!page))
-			return -ENOMEM;
-		if (WARN_ON(!pfn_valid(page_to_pfn(page))))
-			return -EINVAL;
+		if (WARN_ON(!pte_none(ptep_get(pte)))) {
+			err = -EBUSY;
+			break;
+		}
+		if (WARN_ON(!page)) {
+			err = -ENOMEM;
+			break;
+		}
+		if (WARN_ON(!pfn_valid(page_to_pfn(page)))) {
+			err = -EINVAL;
+			break;
+		}
 
 		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
 		(*nr)++;
@@ -543,7 +550,8 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 
 	arch_leave_lazy_mmu_mode();
 	*mask |= PGTBL_PTE_MODIFIED;
-	return 0;
+
+	return err;
 }
 
 static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
--
2.48.1
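
For reference, here is a minimal sketch of the control-flow rule the fix enforces: every path out of the PTE loop must still reach arch_leave_lazy_mmu_mode(). This is a standalone userspace model with stubbed-out helpers; the names mirror the kernel API, but nothing below is the actual mm/vmalloc.c code.

/* Userspace model of the fixed flow; the helpers are stubs for illustration. */
#include <stdio.h>

static void arch_enter_lazy_mmu_mode(void) { puts("enter lazy MMU mode"); }
static void arch_leave_lazy_mmu_mode(void) { puts("leave lazy MMU mode"); }

/* Pretend the third entry fails validation, as if a WARN_ON() check fired. */
static int map_one_entry(int i)
{
	return (i == 2) ? -22 /* -EINVAL */ : 0;
}

static int vmap_range_model(int nr_entries)
{
	int err = 0;
	int i;

	arch_enter_lazy_mmu_mode();

	for (i = 0; i < nr_entries; i++) {
		err = map_one_entry(i);
		if (err)
			break;	/* break instead of return: leave must still run */
	}

	arch_leave_lazy_mmu_mode();	/* reached on success and on error */

	return err;
}

int main(void)
{
	printf("vmap_range_model() returned %d\n", vmap_range_model(5));
	return 0;
}

The model prints the enter/leave pair even though entry 2 fails, which is exactly the property the early returns in the old code violated.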
On 23/06/2025 08:57, Alexander Gordeev wrote:
> Function vmap_pages_pte_range() enters lazy MMU mode,
> but fails to leave it if an error is encountered.
>
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
> Closes: https://lore.kernel.org/r/202506132017.T1l1l6ME-lkp@intel.com/
> Fixes: 44562c71e2cf ("mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes")
> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>

Ouch, sorry about that! The patch looks good to me, so:

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

I wonder if an additional Fixes: tag should be added for commit 2ba3e6947aed
("mm/vmalloc: track which page-table levels were modified")? That's the one
that added the "*mask |= PGTBL_PTE_MODIFIED;" line, which would also have
been skipped if an error occurred before this patch.

Thanks,
Ryan
On Mon, Jun 23, 2025 at 01:37:11PM +0100, Ryan Roberts wrote:
> On 23/06/2025 08:57, Alexander Gordeev wrote:
> > Function vmap_pages_pte_range() enters lazy MMU mode,
> > but fails to leave it if an error is encountered.
> > [...]
>
> Ouch, sorry about that! The patch looks good to me, so:
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
> I wonder if an additional Fixes: tag should be added for commit 2ba3e6947aed
> ("mm/vmalloc: track which page-table levels were modified")? That's the one
> that added the "*mask |= PGTBL_PTE_MODIFIED;" line, which would also have
> been skipped if an error occurred before this patch.

Good catch! I think that commit certainly needs to be referenced with a
Fixes: tag, and I even wonder whether your commit should be mentioned at all?

> Thanks,
> Ryan

Thanks!
On 23/06/2025 14:03, Alexander Gordeev wrote:
> On Mon, Jun 23, 2025 at 01:37:11PM +0100, Ryan Roberts wrote:
>> I wonder if an additional Fixes: tag should be added for commit 2ba3e6947aed
>> ("mm/vmalloc: track which page-table levels were modified")? That's the one
>> that added the "*mask |= PGTBL_PTE_MODIFIED;" line, which would also have
>> been skipped if an error occurred before this patch.
>
> Good catch! I think that commit certainly needs to be referenced with a
> Fixes: tag, and I even wonder whether your commit should be mentioned at all?

Well, I would certainly argue that my patch is broken as is. So I'm happy to
have 2 Fixes: tags. But I'm not really sure what the rules are here...
On Mon, Jun 23, 2025 at 02:31:48PM +0100, Ryan Roberts wrote:
> On 23/06/2025 14:03, Alexander Gordeev wrote:
> > Good catch! I think that commit certainly needs to be referenced with a
> > Fixes: tag, and I even wonder whether your commit should be mentioned at all?
>
> Well, I would certainly argue that my patch is broken as is. So I'm happy to
> have 2 Fixes: tags. But I'm not really sure what the rules are here...

I would only list the older commit 2ba3e6947aed ("mm/vmalloc: track
which page-table levels were modified"). The static checker warning
came later, but it's not really the important bit. It's just one bug.

We'll have to hand-edit the commit if we want to backport it, so that's
a separate issue.

regards,
dan carpenter
On 23/06/2025 14:53, Dan Carpenter wrote:
> On Mon, Jun 23, 2025 at 02:31:48PM +0100, Ryan Roberts wrote:
>> Well, I would certainly argue that my patch is broken as is. So I'm happy to
>> have 2 Fixes: tags. But I'm not really sure what the rules are here...
>
> I would only list the older commit 2ba3e6947aed ("mm/vmalloc: track
> which page-table levels were modified"). The static checker warning
> came later, but it's not really the important bit. It's just one bug.

Given smatch caught the locking bug, I wonder if it could be taught to look
for lazy_mmu issues in general? i.e. unbalanced enter/leave, nesting and read
hazards. I think Alexander previously found a read hazard, so I wouldn't be
surprised if there are more.

> We'll have to hand-edit the commit if we want to backport it, so that's
> a separate issue.
>
> regards,
> dan carpenter
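
Purely as an illustration of the shape such a check might look for (this is not smatch code, and the helper names are stand-ins for the real kernel calls), the unbalanced pattern is an exit between the enter/leave pair:

#include <stdio.h>

static int lazy_mmu_depth;	/* models the balance a checker would track */

static void arch_enter_lazy_mmu_mode(void) { lazy_mmu_depth++; }
static void arch_leave_lazy_mmu_mode(void) { lazy_mmu_depth--; }

/* The buggy shape: an early return between enter and leave. */
static int unbalanced(int fail)
{
	arch_enter_lazy_mmu_mode();
	if (fail)
		return -1;	/* exits with lazy MMU mode still "entered" */
	arch_leave_lazy_mmu_mode();
	return 0;
}

int main(void)
{
	unbalanced(1);
	printf("depth after error path: %d (nonzero means unbalanced)\n",
	       lazy_mmu_depth);
	return 0;
}

A checker that tracks enter/leave like lock acquire/release would flag any function whose return paths leave that depth inconsistent, which is essentially the bug fixed by this patch.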