From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton, Jason Gunthorpe, Peter Xu, Oscar Salvador, Michael Ellerman, Nicholas Piggin
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH v3 15/16] powerpc/mm: Remove hugepd leftovers
Date: Sun, 26 May 2024 11:22:35 +0200
X-Mailer: git-send-email 2.44.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

All targets have now opted out of CONFIG_ARCH_HAS_HUGEPD so remove
left over code.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/hugetlb.h          |   7 -
 arch/powerpc/include/asm/page.h             |   6 -
 arch/powerpc/include/asm/pgtable-be-types.h |  10 -
 arch/powerpc/include/asm/pgtable-types.h    |   9 -
 arch/powerpc/mm/hugetlbpage.c               | 412 --------------------
 arch/powerpc/mm/init-common.c               |   8 +-
 arch/powerpc/mm/pgtable.c                   |  27 +-
 7 files changed, 3 insertions(+), 476 deletions(-)

diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index e959c26c0b52..18a3028ac3b6 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -30,13 +30,6 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
 }
 #define is_hugepage_only_range is_hugepage_only_range
 
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-#define __HAVE_ARCH_HUGETLB_FREE_PGD_RANGE
-void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
-			    unsigned long end, unsigned long floor,
-			    unsigned long ceiling);
-#endif
-
 #define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 		     pte_t pte, unsigned long sz);
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index c0af246a64ff..83d0a4fc5f75 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -269,12 +269,6 @@ static inline const void *pfn_to_kaddr(unsigned long pfn)
 #define is_kernel_addr(x)	((x) >= TASK_SIZE)
 #endif
 
-/*
- * Some number of bits at the level of the page table that points to
- * a hugepte are used to encode the size. This masks those bits.
- */
-#define HUGEPD_SHIFT_MASK	0x3f
-
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_PPC_BOOK3S_64
diff --git a/arch/powerpc/include/asm/pgtable-be-types.h b/arch/powerpc/include/asm/pgtable-be-types.h
index 82633200b500..6bd8f89b25dc 100644
--- a/arch/powerpc/include/asm/pgtable-be-types.h
+++ b/arch/powerpc/include/asm/pgtable-be-types.h
@@ -101,14 +101,4 @@ static inline bool pmd_xchg(pmd_t *pmdp, pmd_t old, pmd_t new)
 	return pmd_raw(old) == prev;
 }
 
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-typedef struct { __be64 pdbe; } hugepd_t;
-#define __hugepd(x) ((hugepd_t) { cpu_to_be64(x) })
-
-static inline unsigned long hpd_val(hugepd_t x)
-{
-	return be64_to_cpu(x.pdbe);
-}
-#endif
-
 #endif /* _ASM_POWERPC_PGTABLE_BE_TYPES_H */
diff --git a/arch/powerpc/include/asm/pgtable-types.h b/arch/powerpc/include/asm/pgtable-types.h
index db965d98e0ae..7b3d4c592a10 100644
--- a/arch/powerpc/include/asm/pgtable-types.h
+++ b/arch/powerpc/include/asm/pgtable-types.h
@@ -87,13 +87,4 @@ static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
 }
 #endif
 
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-typedef struct { unsigned long pd; } hugepd_t;
-#define __hugepd(x) ((hugepd_t) { (x) })
-static inline unsigned long hpd_val(hugepd_t x)
-{
-	return x.pd;
-}
-#endif
-
 #endif /* _ASM_POWERPC_PGTABLE_TYPES_H */
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 1fe2843f5b12..76846c6014e4 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -28,8 +28,6 @@
 
 bool hugetlb_disabled = false;
 
-#define hugepd_none(hpd)	(hpd_val(hpd) == 0)
-
 #define PTE_T_ORDER	(__builtin_ffs(sizeof(pte_basic_t)) - \
 			 __builtin_ffs(sizeof(void *)))
 
@@ -42,156 +40,6 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long s
 	return __find_linux_pte(mm->pgd, addr, NULL, NULL);
 }
 
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
-			   unsigned long address, unsigned int pdshift,
-			   unsigned int pshift, spinlock_t *ptl)
-{
-	struct kmem_cache *cachep;
-	pte_t *new;
-	int i;
-	int num_hugepd;
-
-	if (pshift >= pdshift) {
-		cachep = PGT_CACHE(PTE_T_ORDER);
-		num_hugepd = 1 << (pshift - pdshift);
-	} else {
-		cachep = PGT_CACHE(pdshift - pshift);
-		num_hugepd = 1;
-	}
-
-	if (!cachep) {
-		WARN_ONCE(1, "No page table cache created for hugetlb tables");
-		return -ENOMEM;
-	}
-
-	new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));
-
-	BUG_ON(pshift > HUGEPD_SHIFT_MASK);
-	BUG_ON((unsigned long)new & HUGEPD_SHIFT_MASK);
-
-	if (!new)
-		return -ENOMEM;
-
-	/*
-	 * Make sure other cpus find the hugepd set only after a
-	 * properly initialized page table is visible to them.
-	 * For more details look for comment in __pte_alloc().
-	 */
-	smp_wmb();
-
-	spin_lock(ptl);
-	/*
-	 * We have multiple higher-level entries that point to the same
-	 * actual pte location. Fill in each as we go and backtrack on error.
-	 * We need all of these so the DTLB pgtable walk code can find the
-	 * right higher-level entry without knowing if it's a hugepage or not.
-	 */
-	for (i = 0; i < num_hugepd; i++, hpdp++) {
-		if (unlikely(!hugepd_none(*hpdp)))
-			break;
-		hugepd_populate(hpdp, new, pshift);
-	}
-	/* If we bailed from the for loop early, an error occurred, clean up */
-	if (i < num_hugepd) {
-		for (i = i - 1 ; i >= 0; i--, hpdp--)
-			*hpdp = __hugepd(0);
-		kmem_cache_free(cachep, new);
-	} else {
-		kmemleak_ignore(new);
-	}
-	spin_unlock(ptl);
-	return 0;
-}
-
-/*
- * At this point we do the placement change only for BOOK3S 64. This would
- * possibly work on other subarchs.
- */
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
-		      unsigned long addr, unsigned long sz)
-{
-	pgd_t *pg;
-	p4d_t *p4;
-	pud_t *pu;
-	pmd_t *pm;
-	hugepd_t *hpdp = NULL;
-	unsigned pshift = __ffs(sz);
-	unsigned pdshift = PGDIR_SHIFT;
-	spinlock_t *ptl;
-
-	addr &= ~(sz-1);
-	pg = pgd_offset(mm, addr);
-	p4 = p4d_offset(pg, addr);
-
-#ifdef CONFIG_PPC_BOOK3S_64
-	if (pshift == PGDIR_SHIFT)
-		/* 16GB huge page */
-		return (pte_t *) p4;
-	else if (pshift > PUD_SHIFT) {
-		/*
-		 * We need to use hugepd table
-		 */
-		ptl = &mm->page_table_lock;
-		hpdp = (hugepd_t *)p4;
-	} else {
-		pdshift = PUD_SHIFT;
-		pu = pud_alloc(mm, p4, addr);
-		if (!pu)
-			return NULL;
-		if (pshift == PUD_SHIFT)
-			return (pte_t *)pu;
-		else if (pshift > PMD_SHIFT) {
-			ptl = pud_lockptr(mm, pu);
-			hpdp = (hugepd_t *)pu;
-		} else {
-			pdshift = PMD_SHIFT;
-			pm = pmd_alloc(mm, pu, addr);
-			if (!pm)
-				return NULL;
-			if (pshift == PMD_SHIFT)
-				/* 16MB hugepage */
-				return (pte_t *)pm;
-			else {
-				ptl = pmd_lockptr(mm, pm);
-				hpdp = (hugepd_t *)pm;
-			}
-		}
-	}
-#else
-	if (pshift >= PGDIR_SHIFT) {
-		ptl = &mm->page_table_lock;
-		hpdp = (hugepd_t *)p4;
-	} else {
-		pdshift = PUD_SHIFT;
-		pu = pud_alloc(mm, p4, addr);
-		if (!pu)
-			return NULL;
-		if (pshift >= PUD_SHIFT) {
-			ptl = pud_lockptr(mm, pu);
-			hpdp = (hugepd_t *)pu;
-		} else {
-			pdshift = PMD_SHIFT;
-			pm = pmd_alloc(mm, pu, addr);
-			if (!pm)
-				return NULL;
-			ptl = pmd_lockptr(mm, pm);
-			hpdp = (hugepd_t *)pm;
-		}
-	}
-#endif
-	if (!hpdp)
-		return NULL;
-
-	BUG_ON(!hugepd_none(*hpdp) && !hugepd_ok(*hpdp));
-
-	if (hugepd_none(*hpdp) && __hugepte_alloc(mm, hpdp, addr,
-						  pdshift, pshift, ptl))
-		return NULL;
-
-	return hugepte_offset(*hpdp, addr, pdshift);
-}
-#else
 pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, unsigned long sz)
 {
@@ -286,266 +134,6 @@ int __init alloc_bootmem_huge_page(struct hstate *h, int nid)
 	return __alloc_bootmem_huge_page(h, nid);
 }
 
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-#ifndef CONFIG_PPC_BOOK3S_64
-#define HUGEPD_FREELIST_SIZE \
-	((PAGE_SIZE - sizeof(struct hugepd_freelist)) / sizeof(pte_t))
-
-struct hugepd_freelist {
-	struct rcu_head	rcu;
-	unsigned int index;
-	void *ptes[];
-};
-
-static DEFINE_PER_CPU(struct hugepd_freelist *, hugepd_freelist_cur);
-
-static void hugepd_free_rcu_callback(struct rcu_head *head)
-{
-	struct hugepd_freelist *batch =
-		container_of(head, struct hugepd_freelist, rcu);
-	unsigned int i;
-
-	for (i = 0; i < batch->index; i++)
-		kmem_cache_free(PGT_CACHE(PTE_T_ORDER), batch->ptes[i]);
-
-	free_page((unsigned long)batch);
-}
-
-static void hugepd_free(struct mmu_gather *tlb, void *hugepte)
-{
-	struct hugepd_freelist **batchp;
-
-	batchp = &get_cpu_var(hugepd_freelist_cur);
-
-	if (atomic_read(&tlb->mm->mm_users) < 2 ||
-	    mm_is_thread_local(tlb->mm)) {
-		kmem_cache_free(PGT_CACHE(PTE_T_ORDER), hugepte);
-		put_cpu_var(hugepd_freelist_cur);
-		return;
-	}
-
-	if (*batchp == NULL) {
-		*batchp = (struct hugepd_freelist *)__get_free_page(GFP_ATOMIC);
-		(*batchp)->index = 0;
-	}
-
-	(*batchp)->ptes[(*batchp)->index++] = hugepte;
-	if ((*batchp)->index == HUGEPD_FREELIST_SIZE) {
-		call_rcu(&(*batchp)->rcu, hugepd_free_rcu_callback);
-		*batchp = NULL;
-	}
-	put_cpu_var(hugepd_freelist_cur);
-}
-#else
-static inline void hugepd_free(struct mmu_gather *tlb, void *hugepte) {}
-#endif
-
-/* Return true when the entry to be freed maps more than the area being freed */
-static bool range_is_outside_limits(unsigned long start, unsigned long end,
-				    unsigned long floor, unsigned long ceiling,
-				    unsigned long mask)
-{
-	if ((start & mask) < floor)
-		return true;
-	if (ceiling) {
-		ceiling &= mask;
-		if (!ceiling)
-			return true;
-	}
-	return end - 1 > ceiling - 1;
-}
-
-static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshift,
-			      unsigned long start, unsigned long end,
-			      unsigned long floor, unsigned long ceiling)
-{
-	pte_t *hugepte = hugepd_page(*hpdp);
-	int i;
-
-	unsigned long pdmask = ~((1UL << pdshift) - 1);
-	unsigned int num_hugepd = 1;
-	unsigned int shift = hugepd_shift(*hpdp);
-
-	/* Note: On fsl the hpdp may be the first of several */
-	if (shift > pdshift)
-		num_hugepd = 1 << (shift - pdshift);
-
-	if (range_is_outside_limits(start, end, floor, ceiling, pdmask))
-		return;
-
-	for (i = 0; i < num_hugepd; i++, hpdp++)
-		*hpdp = __hugepd(0);
-
-	if (shift >= pdshift)
-		hugepd_free(tlb, hugepte);
-	else
-		pgtable_free_tlb(tlb, hugepte,
-				 get_hugepd_cache_index(pdshift - shift));
-}
-
-static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
-				   unsigned long addr, unsigned long end,
-				   unsigned long floor, unsigned long ceiling)
-{
-	pgtable_t token = pmd_pgtable(*pmd);
-
-	if (range_is_outside_limits(addr, end, floor, ceiling, PMD_MASK))
-		return;
-
-	pmd_clear(pmd);
-	pte_free_tlb(tlb, token, addr);
-	mm_dec_nr_ptes(tlb->mm);
-}
-
-static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
-				   unsigned long addr, unsigned long end,
-				   unsigned long floor, unsigned long ceiling)
-{
-	pmd_t *pmd;
-	unsigned long next;
-	unsigned long start;
-
-	start = addr;
-	do {
-		unsigned long more;
-
-		pmd = pmd_offset(pud, addr);
-		next = pmd_addr_end(addr, end);
-		if (!is_hugepd(__hugepd(pmd_val(*pmd)))) {
-			if (pmd_none_or_clear_bad(pmd))
-				continue;
-
-			/*
-			 * if it is not hugepd pointer, we should already find
-			 * it cleared.
-			 */
-			WARN_ON(!IS_ENABLED(CONFIG_PPC_8xx));
-
-			hugetlb_free_pte_range(tlb, pmd, addr, end, floor, ceiling);
-
-			continue;
-		}
-		/*
-		 * Increment next by the size of the huge mapping since
-		 * there may be more than one entry at this level for a
-		 * single hugepage, but all of them point to
-		 * the same kmem cache that holds the hugepte.
-		 */
-		more = addr + (1UL << hugepd_shift(*(hugepd_t *)pmd));
-		if (more > next)
-			next = more;
-
-		free_hugepd_range(tlb, (hugepd_t *)pmd, PMD_SHIFT,
-				  addr, next, floor, ceiling);
-	} while (addr = next, addr != end);
-
-	if (range_is_outside_limits(start, end, floor, ceiling, PUD_MASK))
-		return;
-
-	pmd = pmd_offset(pud, start & PUD_MASK);
-	pud_clear(pud);
-	pmd_free_tlb(tlb, pmd, start & PUD_MASK);
-	mm_dec_nr_pmds(tlb->mm);
-}
-
-static void hugetlb_free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
-				   unsigned long addr, unsigned long end,
-				   unsigned long floor, unsigned long ceiling)
-{
-	pud_t *pud;
-	unsigned long next;
-	unsigned long start;
-
-	start = addr;
-	do {
-		pud = pud_offset(p4d, addr);
-		next = pud_addr_end(addr, end);
-		if (!is_hugepd(__hugepd(pud_val(*pud)))) {
-			if (pud_none_or_clear_bad(pud))
-				continue;
-			hugetlb_free_pmd_range(tlb, pud, addr, next, floor,
-					       ceiling);
-		} else {
-			unsigned long more;
-			/*
-			 * Increment next by the size of the huge mapping since
-			 * there may be more than one entry at this level for a
-			 * single hugepage, but all of them point to
-			 * the same kmem cache that holds the hugepte.
-			 */
-			more = addr + (1UL << hugepd_shift(*(hugepd_t *)pud));
-			if (more > next)
-				next = more;
-
-			free_hugepd_range(tlb, (hugepd_t *)pud, PUD_SHIFT,
-					  addr, next, floor, ceiling);
-		}
-	} while (addr = next, addr != end);
-
-	if (range_is_outside_limits(start, end, floor, ceiling, PGDIR_MASK))
-		return;
-
-	pud = pud_offset(p4d, start & PGDIR_MASK);
-	p4d_clear(p4d);
-	pud_free_tlb(tlb, pud, start & PGDIR_MASK);
-	mm_dec_nr_puds(tlb->mm);
-}
-
-/*
- * This function frees user-level page tables of a process.
- */
-void hugetlb_free_pgd_range(struct mmu_gather *tlb,
-			    unsigned long addr, unsigned long end,
-			    unsigned long floor, unsigned long ceiling)
-{
-	pgd_t *pgd;
-	p4d_t *p4d;
-	unsigned long next;
-
-	/*
-	 * Because there are a number of different possible pagetable
-	 * layouts for hugepage ranges, we limit knowledge of how
-	 * things should be laid out to the allocation path
-	 * (huge_pte_alloc(), above). Everything else works out the
-	 * structure as it goes from information in the hugepd
-	 * pointers. That means that we can't here use the
-	 * optimization used in the normal page free_pgd_range(), of
-	 * checking whether we're actually covering a large enough
-	 * range to have to do anything at the top level of the walk
-	 * instead of at the bottom.
-	 *
-	 * To make sense of this, you should probably go read the big
-	 * block comment at the top of the normal free_pgd_range(),
-	 * too.
-	 */
-
-	do {
-		next = pgd_addr_end(addr, end);
-		pgd = pgd_offset(tlb->mm, addr);
-		p4d = p4d_offset(pgd, addr);
-		if (!is_hugepd(__hugepd(pgd_val(*pgd)))) {
-			if (p4d_none_or_clear_bad(p4d))
-				continue;
-			hugetlb_free_pud_range(tlb, p4d, addr, next, floor, ceiling);
-		} else {
-			unsigned long more;
-			/*
-			 * Increment next by the size of the huge mapping since
-			 * there may be more than one entry at the pgd level
-			 * for a single hugepage, but all of them point to the
-			 * same kmem cache that holds the hugepte.
-			 */
-			more = addr + (1UL << hugepd_shift(*(hugepd_t *)pgd));
-			if (more > next)
-				next = more;
-
-			free_hugepd_range(tlb, (hugepd_t *)p4d, PGDIR_SHIFT,
-					  addr, next, floor, ceiling);
-		}
-	} while (addr = next, addr != end);
-}
-#endif
-
 bool __init arch_hugetlb_valid_size(unsigned long size)
 {
 	int shift = __ffs(size);
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index d3a7726ecf51..024e95c62a2d 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -120,12 +120,8 @@ void pgtable_cache_add(unsigned int shift)
 	/* When batching pgtable pointers for RCU freeing, we store
 	 * the index size in the low bits. Table alignment must be
 	 * big enough to fit it.
-	 *
-	 * Likewise, hugeapge pagetable pointers contain a (different)
-	 * shift value in the low bits. All tables must be aligned so
-	 * as to leave enough 0 bits in the address to contain it.
-	 */
-	unsigned long minalign = max(MAX_PGTABLE_INDEX_SIZE + 1,
-				     HUGEPD_SHIFT_MASK + 1);
+	 */
+	unsigned long minalign = MAX_PGTABLE_INDEX_SIZE + 1;
 	struct kmem_cache *new = NULL;
 
 	/* It would be nice if this was a BUILD_BUG_ON(), but at the
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 6c0ab45353cb..786282ebd89a 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -409,11 +409,10 @@ unsigned long vmalloc_to_phys(void *va)
 EXPORT_SYMBOL_GPL(vmalloc_to_phys);
 
 /*
- * We have 4 cases for pgds and pmds:
+ * We have 3 cases for pgds and pmds:
  * (1) invalid (all zeroes)
  * (2) pointer to next table, as normal; bottom 6 bits == 0
  * (3) leaf pte for huge page _PAGE_PTE set
- * (4) hugepd pointer, _PAGE_PTE = 0 and bits [2..6] indicate size of table
  *
  * So long as we atomically load page table pointers we are safe against teardown,
  * we can follow the address down to the page and take a ref on it.
@@ -430,7 +429,6 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 #endif
 	pmd_t pmd, *pmdp;
 	pte_t *ret_pte;
-	hugepd_t *hpdp = NULL;
 	unsigned pdshift;
 
 	if (hpage_shift)
@@ -463,11 +461,6 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 		goto out;
 	}
 
-	if (is_hugepd(__hugepd(p4d_val(p4d)))) {
-		hpdp = (hugepd_t *)&p4d;
-		goto out_huge;
-	}
-
 	/*
 	 * Even if we end up with an unmap, the pgtable will not
 	 * be freed, because we do an rcu free and here we are
@@ -485,11 +478,6 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 		goto out;
 	}
 
-	if (is_hugepd(__hugepd(pud_val(pud)))) {
-		hpdp = (hugepd_t *)&pud;
-		goto out_huge;
-	}
-
 	pdshift = PMD_SHIFT;
 	pmdp = pmd_offset(&pud, ea);
 #else
@@ -527,21 +515,8 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 		goto out;
 	}
 
-	if (is_hugepd(__hugepd(pmd_val(pmd)))) {
-		hpdp = (hugepd_t *)&pmd;
-		goto out_huge;
-	}
-
 	return pte_offset_kernel(&pmd, ea);
 
-out_huge:
-	if (!hpdp)
-		return NULL;
-
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-	ret_pte = hugepte_offset(*hpdp, ea, pdshift);
-	pdshift = hugepd_shift(*hpdp);
-#endif
 out:
 	if (hpage_shift)
 		*hpage_shift = pdshift;
-- 
2.44.0