Date: Fri, 20 Mar 2026 18:23:46 +0000
In-Reply-To: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
References: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
Message-ID: <20260320-page_alloc-unmapped-v2-22-28bf1bd54f41@google.com>
Subject: [PATCH v2 22/22] mm/secretmem: Use __GFP_UNMAPPED when available
From: Brendan Jackman
To: Borislav Petkov, Dave Hansen, Peter Zijlstra, Andrew Morton,
 David Hildenbrand, Vlastimil Babka, Wei Xu, Johannes Weiner, Zi Yan,
 Lorenzo Stoakes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org,
 rppt@kernel.org, Sumit Garg, derkling@google.com, reijiw@google.com,
 Will Deacon, rientjes@google.com, "Kalyazin, Nikita", patrick.roy@linux.dev,
 "Itazuri, Takahiro", Andy Lutomirski, David Kaplan, Thomas Gleixner,
 Brendan Jackman, Yosry Ahmed

This is the simplest possible way to adopt __GFP_UNMAPPED. Use it to
allocate pages when it's available, meaning the
set_direct_map_invalid_noflush() call is no longer needed.

Signed-off-by: Brendan Jackman
---
 mm/secretmem.c | 87 +++++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 74 insertions(+), 13 deletions(-)

diff --git a/mm/secretmem.c b/mm/secretmem.c
index 5f57ac4720d32..9fef91237358a 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -6,6 +6,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -47,13 +48,78 @@ bool secretmem_active(void)
 	return !!atomic_read(&secretmem_users);
 }
 
+/*
+ * If it's supported, allocate using __GFP_UNMAPPED. This lets the page
+ * allocator amortize TLB flushes and avoids direct map fragmentation.
+ */
+#ifdef CONFIG_PAGE_ALLOC_UNMAPPED
+static inline struct folio *secretmem_folio_alloc(gfp_t gfp, unsigned int order)
+{
+	int err;
+
+	/* Required for __GFP_UNMAPPED|__GFP_ZERO. */
+	err = mermap_mm_prepare(current->mm);
+	if (err)
+		return ERR_PTR(err);
+
+	return folio_alloc(gfp | __GFP_UNMAPPED, order);
+}
+
+static inline void secretmem_vma_close(struct vm_area_struct *area)
+{
+	/*
+	 * Because the folio was allocated with __GFP_UNMAPPED|__GFP_ZERO, a TLB
+	 * shootdown is required for the mermap in order to prevent CPU attacks
+	 * from leaking the content. This is the simplest possible way to
+	 * achieve that, but obviously it's inefficient - it should really be
+	 * amortized against the normal flushing that happened during the VMA
+	 * teardown.
+	 */
+	flush_tlb_mm(area->vm_mm);
+}
+
+/* Used __GFP_UNMAPPED so no need to restore direct map or flush TLB. */
+static inline void secretmem_folio_restore(struct folio *folio) { }
+static inline void secretmem_folio_flush(struct folio *folio) { }
+
+#else
+static inline struct folio *secretmem_folio_alloc(gfp_t gfp, unsigned int order)
+{
+	struct folio *folio;
+	int err;
+
+	folio = folio_alloc(gfp, order);
+	if (!folio)
+		return NULL;
+
+	err = set_direct_map_invalid_noflush(folio_page(folio, 0));
+	if (err) {
+		folio_put(folio);
+		return ERR_PTR(err);
+	}
+
+	return folio;
+}
+
+static inline void secretmem_folio_restore(struct folio *folio)
+{
+	set_direct_map_default_noflush(folio_page(folio, 0));
+}
+
+static inline void secretmem_folio_flush(struct folio *folio)
+{
+	unsigned long addr = (unsigned long)folio_address(folio);
+
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+}
+#endif
+
 static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	struct inode *inode = file_inode(vmf->vma->vm_file);
 	pgoff_t offset = vmf->pgoff;
 	gfp_t gfp = vmf->gfp_mask;
-	unsigned long addr;
 	struct folio *folio;
 	vm_fault_t ret;
 	int err;
@@ -66,16 +132,9 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 retry:
 	folio = filemap_lock_folio(mapping, offset);
 	if (IS_ERR(folio)) {
-		folio = folio_alloc(gfp | __GFP_ZERO, 0);
-		if (!folio) {
-			ret = VM_FAULT_OOM;
-			goto out;
-		}
-
-		err = set_direct_map_invalid_noflush(folio_page(folio, 0));
-		if (err) {
-			folio_put(folio);
-			ret = vmf_error(err);
+		folio = secretmem_folio_alloc(gfp | __GFP_ZERO, 0);
+		if (IS_ERR_OR_NULL(folio)) {
+			ret = folio ? vmf_error(PTR_ERR(folio)) : VM_FAULT_OOM;
 			goto out;
 		}
 
@@ -96,8 +155,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 			goto out;
 		}
 
-		addr = (unsigned long)folio_address(folio);
-		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+		secretmem_folio_flush(folio);
 	}
 
 	vmf->page = folio_file_page(folio, vmf->pgoff);
@@ -110,6 +168,9 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 
 static const struct vm_operations_struct secretmem_vm_ops = {
 	.fault = secretmem_fault,
+#ifdef CONFIG_PAGE_ALLOC_UNMAPPED
+	.close = secretmem_vma_close,
+#endif
 };
 
 static int secretmem_release(struct inode *inode, struct file *file)
-- 
2.51.2