Subject: [PATCH v3 1/2] x86/mm: Handle alloc failure in phys_*_init()
From: Em Sharnoff
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-mm@kvack.org
Cc: Ingo Molnar, "H. Peter Anvin", Dave Hansen, Andy Lutomirski,
    Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
    "Edgecombe, Rick P", Oleg Vasilev, Arthur Petukhovsky,
    Stefan Radig, Misha Sakhnov
Date: Tue, 10 Jun 2025 11:16:36 +0100

During memory hotplug, allocation failures in phys_*_init() aren't
handled, resulting in a null pointer dereference when they occur.

To handle that, change phys_pud_init() and similar functions to return
allocation errors via ERR_PTR() and check for that in arch_add_memory().

Signed-off-by: Em Sharnoff
---
Changelog:
- v2: Switch from special-casing zero values to using ERR_PTR()
- v3: Fix -Wint-conversion errors
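As a quick illustration of the convention the patch adopts, here is a
minimal hypothetical sketch (map_range_sketch() and map_caller_sketch()
are invented names, not functions from this patch): the mapping helpers
keep returning the last physical address mapped as an unsigned long,
and on allocation failure they encode -ENOMEM into that same return
value with ERR_PTR():

#include <linux/err.h>
#include <linux/gfp.h>

/* Hypothetical stand-in for phys_pmd_init() and friends. */
static unsigned long map_range_sketch(unsigned long paddr,
                                      unsigned long paddr_end)
{
        void *table = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);

        if (!table)     /* allocation failed: smuggle the errno out */
                return (unsigned long)ERR_PTR(-ENOMEM);

        /* ... install entries covering paddr..paddr_end ... */
        return paddr_end;       /* success: last physical address mapped */
}

/* Caller side: cast back to a pointer to test for and decode the error. */
static int map_caller_sketch(void)
{
        unsigned long last = map_range_sketch(0, 1UL << 21);

        if (IS_ERR((void *)last))
                return PTR_ERR((void *)last);
        return 0;
}

The encoding is unambiguous on x86-64 because error pointers occupy
only the topmost 4095 values of the address space, a range that a
last-mapped physical address can never reach.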
---
 arch/x86/mm/init.c    |  6 ++++-
 arch/x86/mm/init_64.c | 54 +++++++++++++++++++++++++++++++++++++++----
 2 files changed, 55 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bfa444a7dbb0..a2665b6fe376 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -533,6 +533,7 @@ bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
  * Setup the direct mapping of the physical memory at PAGE_OFFSET.
  * This runs before bootmem is initialized and gets pages directly from
  * the physical memory. To access them they are temporarily mapped.
+ * Allocation errors are returned with ERR_PTR.
  */
 unsigned long __ref init_memory_mapping(unsigned long start,
                                         unsigned long end, pgprot_t prot)
@@ -547,10 +548,13 @@ unsigned long __ref init_memory_mapping(unsigned long start,
         memset(mr, 0, sizeof(mr));
         nr_range = split_mem_range(mr, 0, start, end);
 
-        for (i = 0; i < nr_range; i++)
+        for (i = 0; i < nr_range; i++) {
                 ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
                                                    mr[i].page_size_mask,
                                                    prot);
+                if (IS_ERR((void *)ret))
+                        return ret;
+        }
 
         add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
 
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 7c4f6f591f2b..712006afcd6c 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -502,7 +502,8 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 /*
  * Create PMD level page table mapping for physical addresses. The virtual
  * and physical address have to be aligned at this level.
- * It returns the last physical address mapped.
+ * It returns the last physical address mapped. Allocation errors are
+ * returned with ERR_PTR.
  */
 static unsigned long __meminit
 phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
@@ -572,7 +573,14 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
         }
 
         pte = alloc_low_page();
+        if (!pte)
+                return (unsigned long)ERR_PTR(-ENOMEM);
         paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
+        /*
+         * phys_{pmd,pud,p4d}_init return allocation errors via ERR_PTR.
+         * phys_pte_init makes no allocations, so should not error.
+         */
+        BUG_ON(IS_ERR((void *)paddr_last));
 
         spin_lock(&init_mm.page_table_lock);
         pmd_populate_kernel_init(&init_mm, pmd, pte, init);
@@ -586,7 +594,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
  * and physical address do not have to be aligned at this level. KASLR can
  * randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
+ * It returns the last physical address mapped. Allocation errors are
+ * returned with ERR_PTR.
  */
 static unsigned long __meminit
 phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
@@ -623,6 +632,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
                                                            paddr_end,
                                                            page_size_mask,
                                                            prot, init);
+                        if (IS_ERR((void *)paddr_last))
+                                return paddr_last;
                         continue;
                 }
                 /*
@@ -658,12 +669,22 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
                 }
 
                 pmd = alloc_low_page();
+                if (!pmd)
+                        return (unsigned long)ERR_PTR(-ENOMEM);
                 paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
                                            page_size_mask, prot, init);
 
+                /*
+                 * We might have IS_ERR(paddr_last) if allocation failed, but we should
+                 * still update pud before bailing, so that subsequent retries can pick
+                 * up on progress (here and in phys_pmd_init) without leaking pmd.
+                 */
                 spin_lock(&init_mm.page_table_lock);
                 pud_populate_init(&init_mm, pud, pmd, init);
                 spin_unlock(&init_mm.page_table_lock);
+
+                if (IS_ERR((void *)paddr_last))
+                        return paddr_last;
         }
 
         update_page_count(PG_LEVEL_1G, pages);
@@ -707,16 +728,26 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
                         pud = pud_offset(p4d, 0);
                         paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
                                                    page_size_mask, prot, init);
+                        if (IS_ERR((void *)paddr_last))
+                                return paddr_last;
                         continue;
                 }
 
                 pud = alloc_low_page();
+                if (!pud)
+                        return (unsigned long)ERR_PTR(-ENOMEM);
                 paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
                                            page_size_mask, prot, init);
 
                 spin_lock(&init_mm.page_table_lock);
                 p4d_populate_init(&init_mm, p4d, pud, init);
                 spin_unlock(&init_mm.page_table_lock);
+
+                /*
+                 * Bail only after updating p4d to keep progress from pud across retries.
+                 */
+                if (IS_ERR((void *)paddr_last))
+                        return paddr_last;
         }
 
         return paddr_last;
@@ -748,10 +779,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
                                                    __pa(vaddr_end),
                                                    page_size_mask,
                                                    prot, init);
+                        if (IS_ERR((void *)paddr_last))
+                                return paddr_last;
                         continue;
                 }
 
                 p4d = alloc_low_page();
+                if (!p4d)
+                        return (unsigned long)ERR_PTR(-ENOMEM);
                 paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
                                            page_size_mask, prot, init);
 
@@ -763,6 +798,13 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
                                            (pud_t *) p4d, init);
 
                 spin_unlock(&init_mm.page_table_lock);
+
+                /*
+                 * Bail only after updating pgd/p4d to keep progress from p4d across retries.
+                 */
+                if (IS_ERR((void *)paddr_last))
+                        return paddr_last;
+
                 pgd_changed = true;
         }
 
@@ -777,7 +819,8 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
  * Create page table mapping for the physical memory for specific physical
  * addresses. Note that it can only be used to populate non-present entries.
  * The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down. It returns the last physical address mapped. Allocation errors are
+ * returned with ERR_PTR.
  */
 unsigned long __meminit
 kernel_physical_mapping_init(unsigned long paddr_start,
@@ -980,8 +1023,11 @@ int arch_add_memory(int nid, u64 start, u64 size,
 {
         unsigned long start_pfn = start >> PAGE_SHIFT;
         unsigned long nr_pages = size >> PAGE_SHIFT;
+        unsigned long ret = 0;
 
-        init_memory_mapping(start, start + size, params->pgprot);
+        ret = init_memory_mapping(start, start + size, params->pgprot);
+        if (IS_ERR((void *)ret))
+                return (int)PTR_ERR((void *)ret);
 
         return add_pages(nid, start_pfn, nr_pages, params);
 }
-- 
2.39.5
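One detail of the series worth spelling out: the unsigned long/ERR_PTR
round-trip above works because error pointers occupy only the top
MAX_ERRNO (4095) values of the address space. A small userspace
recomputation of that check (IS_ERR_VALUE simplified from
include/linux/err.h, with the unlikely() annotation dropped; the test
values are illustrative, not from the patch):

#include <stdio.h>

#define MAX_ERRNO       4095
/* Simplified from include/linux/err.h. */
#define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

int main(void)
{
        /* What phys_*_init() returns on failure: ERR_PTR(-ENOMEM). */
        unsigned long err = (unsigned long)-12; /* 0xfffffffffffffff4 on 64-bit */
        /* An illustrative "last physical address mapped" (4 GiB boundary). */
        unsigned long paddr = 0x100000000UL;

        printf("err   -> IS_ERR: %d\n", IS_ERR_VALUE(err));     /* prints 1 */
        printf("paddr -> IS_ERR: %d\n", IS_ERR_VALUE(paddr));   /* prints 0 */
        return 0;
}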
Subject: [PATCH v3 2/2] x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot
From: Em Sharnoff
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-mm@kvack.org
Cc: Ingo Molnar, "H. Peter Anvin", Dave Hansen, Andy Lutomirski,
    Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
    "Edgecombe, Rick P", Oleg Vasilev, Arthur Petukhovsky,
    Stefan Radig, Misha Sakhnov
Date: Tue, 10 Jun 2025 11:17:36 +0100
Message-ID: <92894a9b-3088-4cf7-83bb-ea7382a35d82@neon.tech>

Currently alloc_low_pages() uses GFP_ATOMIC for its post-boot
allocations. GFP_KERNEL is more appropriate, since all callers run in
process context.

From Ingo M. [1]

> There's no real reason why it should be GFP_ATOMIC AFAICS, other than
> some historic inertia that nobody bothered to fix.

and previously Mike R. [2]

> The few callers that effectively use page allocator for the direct map
> updates are gart_iommu_init() and memory hotplug. Neither of them
> happen in an atomic context so there is no reason to use GFP_ATOMIC
> for these allocations.
>
> Replace GFP_ATOMIC with GFP_KERNEL to avoid using atomic reserves for
> allocations that do not require that.

[1]: https://lore.kernel.org/all/aEE6_S2a-1tk1dtI@gmail.com/
[2]: https://lore.kernel.org/all/20211111110241.25968-5-rppt@kernel.org/

Signed-off-by: Em Sharnoff
---
Changelog:
- v2: Add this patch
- v3: No changes
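To make the difference concrete, a minimal sketch
(alloc_table_sketch() is an invented name, not part of the patch; the
flag semantics are standard): GFP_ATOMIC must not sleep and may dip
into the emergency reserves kept for atomic contexts, while GFP_KERNEL
may block and trigger reclaim, which is fine in process context:

#include <linux/gfp.h>

/* Invented illustration of the post-patch behaviour. */
static void *alloc_table_sketch(unsigned int order)
{
        /*
         * GFP_KERNEL: may sleep and reclaim, never raids the atomic
         * reserves. If memory is truly exhausted this returns NULL,
         * which patch 1 now propagates upward as -ENOMEM.
         */
        return (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
}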
---
 arch/x86/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index a2665b6fe376..3a25cd9e9076 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -131,7 +131,7 @@ __ref void *alloc_low_pages(unsigned int num)
                 unsigned int order;
 
                 order = get_order((unsigned long)num << PAGE_SHIFT);
-                return (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
+                return (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
         }
 
         if ((pgt_buf_end + num) > pgt_buf_top || !can_use_brk_pgt) {
-- 
2.39.5
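For reference on the hunk above, get_order() converts the request size
into a page-allocation order. A quick userspace recomputation of its
rounding behaviour (order_for_pages() is a hypothetical helper assuming
PAGE_SHIFT == 12, i.e. 4 KiB pages; the real get_order() lives in
include/asm-generic/getorder.h):

#include <stdio.h>

#define PAGE_SHIFT 12   /* assumed: 4 KiB pages */

/* Smallest order whose block of 2^order pages covers num pages. */
static int order_for_pages(unsigned int num)
{
        unsigned long bytes = (unsigned long)num << PAGE_SHIFT;
        int order = 0;

        while ((1UL << (order + PAGE_SHIFT)) < bytes)
                order++;
        return order;
}

int main(void)
{
        printf("%d\n", order_for_pages(1)); /* 0: one page */
        printf("%d\n", order_for_pages(4)); /* 2: four pages */
        printf("%d\n", order_for_pages(5)); /* 3: rounds up to eight pages */
        return 0;
}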