[PATCHv4, REBASED 1/4] x86/mm/ident_map: Fix virtual address wrap to zero

Posted by Kirill A. Shutemov 1 year, 3 months ago
The calculation of the 'next' virtual address doesn't protect against
wrapping to zero, which can result in page table corruption and a hang.
The problematic case is possible only if the user sets a high
x86_mapping_info::offset.

The wrapping to zero only occurs if the top PGD entry is accessed.
There are no such users in the upstream. Only hibernate_64.c uses
x86_mapping_info::offset, and it operates on the direct mapping range,
which is not the top PGD entry.

Replace the manual 'next' calculation with p?d_addr_end(), which handles
wrapping correctly.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/mm/ident_map.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 437e96fb4977..5872f3ee863c 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -101,9 +101,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 		pmd_t *pmd;
 		bool use_gbpage;
 
-		next = (addr & PUD_MASK) + PUD_SIZE;
-		if (next > end)
-			next = end;
+		next = pud_addr_end(addr, end);
 
 		/* if this is already a gbpage, this portion is already mapped */
 		if (pud_leaf(*pud))
@@ -154,10 +152,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
 		p4d_t *p4d = p4d_page + p4d_index(addr);
 		pud_t *pud;
 
-		next = (addr & P4D_MASK) + P4D_SIZE;
-		if (next > end)
-			next = end;
-
+		next = p4d_addr_end(addr, end);
 		if (p4d_present(*p4d)) {
 			pud = pud_offset(p4d, 0);
 			result = ident_pud_init(info, pud, addr, next);
@@ -199,10 +194,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 		pgd_t *pgd = pgd_page + pgd_index(addr);
 		p4d_t *p4d;
 
-		next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-		if (next > end)
-			next = end;
-
+		next = pgd_addr_end(addr, end);
 		if (pgd_present(*pgd)) {
 			p4d = p4d_offset(pgd, 0);
 			result = ident_p4d_init(info, p4d, addr, next);
-- 
2.45.2
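
To illustrate the failure mode the patch guards against, here is a minimal
standalone sketch (userspace C, not kernel code) of the arithmetic being
removed. PGDIR_SHIFT = 39 assumes 4-level paging, and the addresses are
hypothetical values inside the top PGD entry; with them, the unsigned
addition overflows, 'next' becomes zero, and the 'next > end' clamp can no
longer catch it:

#include <stdio.h>

#define PGDIR_SHIFT     39
#define PGDIR_SIZE      (1UL << PGDIR_SHIFT)
#define PGDIR_MASK      (~(PGDIR_SIZE - 1))

int main(void)
{
        /* A virtual address in the last PGD entry, as a large enough
         * x86_mapping_info::offset could produce. */
        unsigned long addr = 0xffffff8000000000UL;
        unsigned long end  = 0xffffffffc0000000UL;

        /* Old calculation: the addition overflows 64 bits, 'next' wraps to 0... */
        unsigned long next = (addr & PGDIR_MASK) + PGDIR_SIZE;

        /* ...so the clamp never fires (0 > end is false) and the mapping
         * loop would continue from virtual address 0 instead of stopping. */
        if (next > end)
                next = end;

        printf("next = %#lx\n", next);  /* prints next = 0 */
        return 0;
}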
Re: [PATCHv4, REBASED 1/4] x86/mm/ident_map: Fix virtual address wrap to zero
Posted by Borislav Petkov 1 year, 3 months ago
On Wed, Oct 16, 2024 at 02:14:55PM +0300, Kirill A. Shutemov wrote:
> Calculation of 'next' virtual address doesn't protect against wrapping
> to zero. It can result in page table corruption and hang. The
> problematic case is possible if user sets high x86_mapping_info::offset.
> 
> The wrapping to zero only occurs if the top PGD entry is accessed.
> There are no such users in the upstream. Only hibernate_64.c uses
> x86_mapping_info::offset, and it operates on the direct mapping range,
> which is not the top PGD entry.
> 
> Replace manual 'next' calculation with p?d_addr_end() which handles
> wrapping correctly.

So this is a fix for a theoretical issue as it cannot happen currently?

Can we call that out in the commit message so that the stable AI doesn't pick
it up?

And which commit is it fixing?

aece27851d44 ("x86, 64bit, mm: Add generic kernel/ident mapping helper")
perhaps?

Always add Fixes: tags when a patch is fixing something - you know that.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Re: [PATCHv4, REBASED 1/4] x86/mm/ident_map: Fix virtual address wrap to zero
Posted by Kirill A. Shutemov 1 year, 3 months ago
On Wed, Oct 30, 2024 at 12:47:12PM +0100, Borislav Petkov wrote:
> On Wed, Oct 16, 2024 at 02:14:55PM +0300, Kirill A. Shutemov wrote:
> > Calculation of 'next' virtual address doesn't protect against wrapping
> > to zero. It can result in page table corruption and hang. The
> > problematic case is possible if user sets high x86_mapping_info::offset.
> > 
> > The wrapping to zero only occurs if the top PGD entry is accessed.
> > There are no such users in the upstream. Only hibernate_64.c uses
> > x86_mapping_info::offset, and it operates on the direct mapping range,
> > which is not the top PGD entry.
> > 
> > Replace manual 'next' calculation with p?d_addr_end() which handles
> > wrapping correctly.
> 
> So this is a fix for a theoretical issue as it cannot happen currently?

Right.

> Can we call that out in the commit message so that the stable AI doesn't pick
> it up?

Do we have magic words for that?

I tried to express that in the second paragraph: "no such users in the
upstream".

> And which commit is it fixing?
> 
> aece27851d44 ("x86, 64bit, mm: Add generic kernel/ident mapping helper")
> perhaps?

This one is closer:

e4630fdd4763 ("x86/power/64: Always create temporary identity mapping correctly")

It adds x86_mapping_info::offset.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCHv4, REBASED 1/4] x86/mm/ident_map: Fix virtual address wrap to zero
Posted by Borislav Petkov 1 year, 3 months ago
On Thu, Oct 31, 2024 at 12:11:52PM +0200, Kirill A. Shutemov wrote:
> Do we have magic words for that?

No clue.

> I tried to express that in the second paragraph: "no such users in the
> upstream".

Right, so perhaps better to spell it out explicitly:

"Backporter's note:

This fixes a theoretical issue only and there's no need to backport it to
stable."

at the end of the commit message.

> > And which commit is it fixing?
> > 
> > aece27851d44 ("x86, 64bit, mm: Add generic kernel/ident mapping helper")
> > perhaps?
> 
> This one is closer:
> 
> e4630fdd4763 ("x86/power/64: Always create temporary identity mapping correctly")
> 
> It adds x86_mapping_info::offset.

But aece27851d44 has the faulty check...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Re: [PATCHv4, REBASED 1/4] x86/mm/ident_map: Fix virtual address wrap to zero
Posted by Kirill A. Shutemov 1 year, 3 months ago
On Thu, Oct 31, 2024 at 02:59:16PM +0100, Borislav Petkov wrote:
> On Thu, Oct 31, 2024 at 12:11:52PM +0200, Kirill A. Shutemov wrote:
> > Do we have magic words for that?
> 
> No clue.
> 
> > I tried to express that in the second paragraph: "no such users in the
> > upstream".
> 
> Right, so perhaps better to spell it out explicitly:
> 
> "Backporter's note:
> 
> This fixes a theoretical issue only and there's no need to backport it to
> stable."
> 
> at the end of the commit message.

Okay.

> 
> > > And which commit is it fixing?
> > > 
> > > aece27851d44 ("x86, 64bit, mm: Add generic kernel/ident mapping helper")
> > > perhaps?
> > 
> > This one is closer:
> > 
> > e4630fdd4763 ("x86/power/64: Always create temporary identity mapping correctly")
> > 
> > It adds x86_mapping_info::offset.
> 
> But aece27851d44 has the faulty check...

It cannot be triggered without 'offset'.

I'll put both.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
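
To see why 'offset' matters here: kernel_ident_mapping_init() walks the
virtual range 'pstart + info->offset' .. 'pend + info->offset', so with a
zero offset (a true identity mapping) the walk stays within the physical
address range and in practice never reaches the top PGD entry. A rough
standalone sketch of the index arithmetic, assuming 4-level paging
(PGDIR_SHIFT = 39, 512 PGD entries) and using the 4-level __PAGE_OFFSET
base as hibernate's offset; the last offset value is hypothetical:

#include <stdio.h>

#define PGDIR_SHIFT     39
#define PTRS_PER_PGD    512UL

/* Which PGD entry a virtual address falls into (4-level paging). */
static unsigned long pgd_index(unsigned long addr)
{
        return (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
}

int main(void)
{
        unsigned long pstart = 0x100000UL;      /* some physical address */

        /* No offset: identity mapping, far away from PGD entry 511. */
        printf("plain:     pgd %lu\n", pgd_index(pstart));

        /* hibernate_64.c style: offset into the direct-mapping entries. */
        printf("hibernate: pgd %lu\n", pgd_index(pstart + 0xffff888000000000UL));

        /* A hypothetical huge offset lands in the top entry (511), where
         * the old 'next' calculation wrapped to zero. */
        printf("huge:      pgd %lu\n", pgd_index(pstart + 0xffffff8000000000UL));

        return 0;
}

This is why both commits are relevant: e4630fdd4763 introduced 'offset',
which is what can push the walk into the top PGD entry, while aece27851d44
introduced the 'next' calculation that fails to handle the wrap.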
[tip: x86/core] x86/mm/ident_map: Fix theoretical virtual address overflow to zero
Posted by tip-bot2 for Kirill A. Shutemov 10 months, 3 weeks ago
The following commit has been merged into the x86/core branch of tip:

Commit-ID:     f666c92090a41ac5524dade63ff96b3adcf8c2ab
Gitweb:        https://git.kernel.org/tip/f666c92090a41ac5524dade63ff96b3adcf8c2ab
Author:        Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
AuthorDate:    Wed, 16 Oct 2024 14:14:55 +03:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 19 Mar 2025 11:12:29 +01:00

x86/mm/ident_map: Fix theoretical virtual address overflow to zero

The current calculation of the 'next' virtual address in the
page table initialization functions in arch/x86/mm/ident_map.c
doesn't protect against wrapping to zero.

This is a theoretical issue that cannot happen currently,
the problematic case is possible only if the user sets a
high enough x86_mapping_info::offset value - which no
current code in the upstream kernel does.

( The wrapping to zero only occurs if the top PGD entry is accessed.
  There are no such users upstream. Only hibernate_64.c uses
  x86_mapping_info::offset, and it operates on the direct mapping
  range, which is not the top PGD entry. )

Should such an overflow happen, it can result in page table
corruption and a hang.

To future-proof this code, replace the manual 'next' calculation
with p?d_addr_end() which handles wrapping correctly.

[ Backporter's note: there's no need to backport this patch. ]

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
---
 arch/x86/mm/ident_map.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 5ab7bd2..bd5d101 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -101,9 +101,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 		pmd_t *pmd;
 		bool use_gbpage;
 
-		next = (addr & PUD_MASK) + PUD_SIZE;
-		if (next > end)
-			next = end;
+		next = pud_addr_end(addr, end);
 
 		/* if this is already a gbpage, this portion is already mapped */
 		if (pud_leaf(*pud))
@@ -154,10 +152,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
 		p4d_t *p4d = p4d_page + p4d_index(addr);
 		pud_t *pud;
 
-		next = (addr & P4D_MASK) + P4D_SIZE;
-		if (next > end)
-			next = end;
-
+		next = p4d_addr_end(addr, end);
 		if (p4d_present(*p4d)) {
 			pud = pud_offset(p4d, 0);
 			result = ident_pud_init(info, pud, addr, next);
@@ -199,10 +194,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 		pgd_t *pgd = pgd_page + pgd_index(addr);
 		p4d_t *p4d;
 
-		next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-		if (next > end)
-			next = end;
-
+		next = pgd_addr_end(addr, end);
 		if (pgd_present(*pgd)) {
 			p4d = p4d_offset(pgd, 0);
 			result = ident_p4d_init(info, p4d, addr, next);
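
For reference, the p?d_addr_end() helpers the fix switches to follow the
pattern of the generic definitions in include/linux/pgtable.h. A standalone
sketch of the PGD-level variant (again assuming 4-level constants) shows why
it is wrap-safe: subtracting one from both sides makes a boundary that
wrapped to zero compare as already past 'end', so the walk is clamped to
'end' instead of restarting at virtual address zero:

#include <stdio.h>

#define PGDIR_SHIFT     39                      /* assumes 4-level paging */
#define PGDIR_SIZE      (1UL << PGDIR_SHIFT)
#define PGDIR_MASK      (~(PGDIR_SIZE - 1))

static unsigned long pgd_addr_end(unsigned long addr, unsigned long end)
{
        unsigned long boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;

        /* boundary == 0 -> boundary - 1 == ULONG_MAX -> never "< end - 1". */
        return (boundary - 1 < end - 1) ? boundary : end;
}

int main(void)
{
        unsigned long addr = 0xffffff8000000000UL;      /* top PGD entry */
        unsigned long end  = 0xffffffffc0000000UL;

        printf("next = %#lx\n", pgd_addr_end(addr, end));  /* clamped to end */
        return 0;
}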
[tip: x86/mm] x86/mm/ident_map: Fix theoretical virtual address overflow to zero
Posted by tip-bot2 for Kirill A. Shutemov 11 months ago
The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     4f10ec03fe1ed12479134be33ddf006382744651
Gitweb:        https://git.kernel.org/tip/4f10ec03fe1ed12479134be33ddf006382744651
Author:        Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
AuthorDate:    Wed, 16 Oct 2024 14:14:55 +03:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 10 Mar 2025 20:31:30 +01:00

x86/mm/ident_map: Fix theoretical virtual address overflow to zero

The current calculation of the 'next' virtual address in the
page table initialization functions in arch/x86/mm/ident_map.c
doesn't protect against wrapping to zero.

This is a theoretical issue that cannot happen currently,
the problematic case is possible only if the user sets a
high enough x86_mapping_info::offset value - which no
current code in the upstream kernel does.

( The wrapping to zero only occurs if the top PGD entry is accessed.
  There are no such users upstream. Only hibernate_64.c uses
  x86_mapping_info::offset, and it operates on the direct mapping
  range, which is not the top PGD entry. )

Should such an overflow happen, it can result in page table
corruption and a hang.

To future-proof this code, replace the manual 'next' calculation
with p?d_addr_end() which handles wrapping correctly.

[ Backporter's note: there's no need to backport this patch. ]

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
---
 arch/x86/mm/ident_map.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 5ab7bd2..bd5d101 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -101,9 +101,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 		pmd_t *pmd;
 		bool use_gbpage;
 
-		next = (addr & PUD_MASK) + PUD_SIZE;
-		if (next > end)
-			next = end;
+		next = pud_addr_end(addr, end);
 
 		/* if this is already a gbpage, this portion is already mapped */
 		if (pud_leaf(*pud))
@@ -154,10 +152,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
 		p4d_t *p4d = p4d_page + p4d_index(addr);
 		pud_t *pud;
 
-		next = (addr & P4D_MASK) + P4D_SIZE;
-		if (next > end)
-			next = end;
-
+		next = p4d_addr_end(addr, end);
 		if (p4d_present(*p4d)) {
 			pud = pud_offset(p4d, 0);
 			result = ident_pud_init(info, pud, addr, next);
@@ -199,10 +194,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 		pgd_t *pgd = pgd_page + pgd_index(addr);
 		p4d_t *p4d;
 
-		next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-		if (next > end)
-			next = end;
-
+		next = pgd_addr_end(addr, end);
 		if (pgd_present(*pgd)) {
 			p4d = p4d_offset(pgd, 0);
 			result = ident_p4d_init(info, p4d, addr, next);