From: Rick Edgecombe
Subject: [PATCH 01/10] mm: Add helper for freeing decrypted memory
Date: Tue, 17 Oct 2023 13:24:56 -0700
Message-Id: <20231017202505.340906-2-rick.p.edgecombe@intel.com>
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>

When freeing decrypted memory to the page allocator, the memory needs to
be manually re-encrypted beforehand. If this step is skipped, the next
user of those pages will have their contents inadvertently exposed to the
untrusted host, or the guest may crash if the page is used in a way
disallowed by HW (i.e. for executable code or as a page table).

Unfortunately, there are many instances of patterns like:

	set_memory_encrypted(pages);
	free_pages(pages);

...or...

	if (set_memory_decrypted(addr, 1))
		free_pages(pages);

This is a problem because set_memory_encrypted() and
set_memory_decrypted() can be failed by the untrusted host in such a way
that an error is returned and the resulting memory is shared.

To aid in a tree-wide cleanup of these callers, add a
free_decrypted_pages() function that will first try to encrypt the pages
before returning them. If it is not successful, have it leak the pages
and warn about this. This is preferable to returning shared pages to the
allocator or panicking.

In some cases the code paths for freeing decrypted memory handle both
encrypted and decrypted pages. In this case, rely on set_memory() to
handle being asked to convert memory to the state it is already in.

Going forward, rely on cross-arch callers to find and use
free_decrypted_pages() instead of resorting to more heavy-handed
solutions like terminating the guest when nasty VMM behavior is observed.

To make s390's arch set_memory_XXcrypted() definitions available in
linux/set_memory.h, add an include for s390's asm version of
set_memory.h.

Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Alexander Gordeev
Cc: Christian Borntraeger
Cc: Sven Schnelle
Cc: linux-s390@vger.kernel.org
Suggested-by: Dave Hansen
Signed-off-by: Rick Edgecombe
---
 arch/s390/include/asm/set_memory.h |  1 +
 include/linux/set_memory.h         | 13 +++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h
index 06fbabe2f66c..09d36ebd64b5 100644
--- a/arch/s390/include/asm/set_memory.h
+++ b/arch/s390/include/asm/set_memory.h
@@ -3,6 +3,7 @@
 #define _ASMS390_SET_MEMORY_H
 
 #include
+#include
 
 extern struct mutex cpa_mutex;
 
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index 95ac8398ee72..a898b14b6b1f 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -5,6 +5,8 @@
 #ifndef _LINUX_SET_MEMORY_H_
 #define _LINUX_SET_MEMORY_H_
 
+#include
+
 #ifdef CONFIG_ARCH_HAS_SET_MEMORY
 #include
 #else
@@ -78,4 +80,15 @@ static inline int set_memory_decrypted(unsigned long addr, int numpages)
 }
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
+static inline void free_decrypted_pages(unsigned long addr, int order)
+{
+	int ret = set_memory_encrypted(addr, 1 << order);
+
+	if (ret) {
+		WARN_ONCE(1, "Failed to re-encrypt memory before freeing, leaking pages!\n");
+		return;
+	}
+	free_pages(addr, order);
+}
+
 #endif	/* _LINUX_SET_MEMORY_H_ */
-- 
2.34.1
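Illustration (not part of the series): a minimal before/after sketch of the
caller pattern this helper targets, assuming free_decrypted_pages() lands as
proposed above. The my_buf buffer and my_driver_init_*() functions are
hypothetical.

	#include <linux/errno.h>
	#include <linux/gfp.h>
	#include <linux/set_memory.h>

	static unsigned long my_buf;

	/* Before: a failed conversion is "handled" by handing a possibly
	 * still-shared page straight back to the page allocator, where the
	 * next user's data would be visible to the host.
	 */
	static int my_driver_init_before(void)
	{
		my_buf = __get_free_pages(GFP_KERNEL | __GFP_ZERO, 0);
		if (!my_buf)
			return -ENOMEM;

		if (set_memory_decrypted(my_buf, 1)) {
			free_pages(my_buf, 0);	/* page may still be shared */
			my_buf = 0;
			return -EIO;
		}
		return 0;
	}

	/* After: free_decrypted_pages() re-encrypts first, and leaks the
	 * page with a warning if that fails.
	 */
	static int my_driver_init_after(void)
	{
		my_buf = __get_free_pages(GFP_KERNEL | __GFP_ZERO, 0);
		if (!my_buf)
			return -ENOMEM;

		if (set_memory_decrypted(my_buf, 1)) {
			free_decrypted_pages(my_buf, 0);
			my_buf = 0;
			return -EIO;
		}
		return 0;
	}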
From: Rick Edgecombe
Subject: [PATCH 02/10] x86/mm/cpa: Reject incorrect encryption change requests
Date: Tue, 17 Oct 2023 13:24:57 -0700
Message-Id: <20231017202505.340906-3-rick.p.edgecombe@intel.com>
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>

Kernel memory is "encrypted" by default. Some callers may "decrypt" it
in order to share it with things outside the kernel like a device or an
untrusted VMM.

There is nothing to stop set_memory_encrypted() from being passed memory
that is already "encrypted" (aka. "private" on TDX). In fact, some
callers do this because ... $REASONS. Unfortunately, part of the TDX
decrypted=>encrypted transition is truly one way*. It can't handle being
asked to encrypt an already encrypted page.

Allow __set_memory_enc_pgtable() to detect already-encrypted memory
before it hits the TDX code.

 * The one-way part is "page acceptance"

[commit log written by Dave Hansen]
Signed-off-by: Rick Edgecombe
---
 arch/x86/mm/pat/set_memory.c | 41 +++++++++++++++++++++++++++++++++++-
 1 file changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index bda9f129835e..1238b0db3e33 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2122,6 +2122,21 @@ int set_memory_global(unsigned long addr, int numpages)
 				    __pgprot(_PAGE_GLOBAL), 0);
 }
 
+static bool kernel_vaddr_encrypted(unsigned long addr, bool enc)
+{
+	unsigned int level;
+	pte_t *pte;
+
+	pte = lookup_address(addr, &level);
+	if (!pte)
+		return false;
+
+	if (enc)
+		return pte_val(*pte) == cc_mkenc(pte_val(*pte));
+
+	return pte_val(*pte) == cc_mkdec(pte_val(*pte));
+}
+
 /*
  * __set_memory_enc_pgtable() is used for the hypervisors that get
  * informed about "encryption" status via page tables.
@@ -2130,7 +2145,7 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 {
 	pgprot_t empty = __pgprot(0);
 	struct cpa_data cpa;
-	int ret;
+	int ret, numpages_in_state = 0;
 
 	/* Should not be working on unaligned addresses */
 	if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
@@ -2143,6 +2158,30 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	cpa.mask_clr = enc ? pgprot_decrypted(empty) : pgprot_encrypted(empty);
 	cpa.pgd = init_mm.pgd;
 
+	/*
+	 * If any page is already in the right state, bail with an error
+	 * because the code doesn't handle it. This is likely because
+	 * something has gone wrong and isn't worth optimizing for.
+	 *
+	 * If all the memory pages are already in the desired state, return
+	 * success.
+	 *
+	 * kernel_vaddr_encrypted() does not synchronize against huge page
+	 * splits, so take pgd_lock. A caller doing strange things could
+	 * get a new PMD mid-level PTE confused with a huge PMD entry. Just
+	 * lock to tie up loose ends.
+	 */
+	spin_lock(&pgd_lock);
+	for (int i = 0; i < numpages; i++) {
+		if (kernel_vaddr_encrypted(addr + (PAGE_SIZE * i), enc))
+			numpages_in_state++;
+	}
+	spin_unlock(&pgd_lock);
+	if (numpages_in_state == numpages)
+		return 0;
+	else if (numpages_in_state)
+		return 1;
+
 	/* Must avoid aliasing mappings in the highmem code */
 	kmap_flush_unused();
 	vm_unmap_aliases();
-- 
2.34.1
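Illustration (not part of the patch): a hypothetical sequence showing the
return-value semantics the check above gives callers; only the commented
behavior comes from this patch, the surrounding flow and names are made up,
and free_decrypted_pages() is the helper from patch 01.

	#include <linux/gfp.h>
	#include <linux/set_memory.h>

	static void demo_enc_semantics(void)
	{
		unsigned long addr = __get_free_pages(GFP_KERNEL, 1);	/* two pages */

		if (!addr)
			return;

		if (set_memory_decrypted(addr, 2))	/* both pages -> shared */
			goto out;

		set_memory_encrypted(addr, 2);	/* both pages -> private, returns 0 */

		/*
		 * Re-encrypting a range that is already fully private now
		 * returns 0 early instead of reaching TDX's one-way page
		 * acceptance, so repeated conversions are harmless.
		 */
		set_memory_encrypted(addr, 2);

		/*
		 * A request spanning a mix of private and shared pages is
		 * rejected with a nonzero return before any conversion is
		 * attempted.
		 */
	out:
		free_decrypted_pages(addr, 1);
	}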
charset="utf-8" On TDX it is possible for the untrusted host to cause set_memory_encrypted() or set_memory_decrypted() to fail such that an error is returned and the resulting memory is shared. Callers need to take care to handle these errors to avoid returning decrypted (shared) memory to the page allocator, which could lead to functional or security issues. Kvmclock could free decrypted/shared pages if set_memory_decrypted() fails. Use the recently added free_decrypted_pages() to avoid this. Cc: Paolo Bonzini Cc: Wanpeng Li Cc: Vitaly Kuznetsov Cc: kvm@vger.kernel.org Signed-off-by: Rick Edgecombe Reviewed-by: Kuppuswamy Sathyanarayanan --- arch/x86/kernel/kvmclock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c index fb8f52149be9..587b159c4e53 100644 --- a/arch/x86/kernel/kvmclock.c +++ b/arch/x86/kernel/kvmclock.c @@ -227,7 +227,7 @@ static void __init kvmclock_init_mem(void) r =3D set_memory_decrypted((unsigned long) hvclock_mem, 1UL << order); if (r) { - __free_pages(p, order); + free_decrypted_pages((unsigned long)hvclock_mem, order); hvclock_mem =3D NULL; pr_warn("kvmclock: set_memory_decrypted() failed. Disabling\n"); return; --=20 2.34.1 From nobody Fri Jan 2 00:09:23 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3FCB1C46CA1 for ; Tue, 17 Oct 2023 20:25:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344435AbjJQUZr (ORCPT ); Tue, 17 Oct 2023 16:25:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55530 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344358AbjJQUZe (ORCPT ); Tue, 17 Oct 2023 16:25:34 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.7]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B9B9CF0; Tue, 17 Oct 2023 13:25:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697574333; x=1729110333; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3Ad9kgdhE7qzJyM/s24lZCoX+Aafx5OUit4Dj7g+sn8=; b=TfJWvC4iQYYjFbNMIfKU/VKXObTvIG5n5AyBser9uxYsyxd+0aZLsS3k ZRgM2B9t+bq6GLCHfGPDTxRvnqp945SqwNbH2UvPojLPG7sJ/0k1kyRr0 jBokC6Pc0EMMr3BC4Sa+tezyldQeVOclTMpK+zoILFz8+BUWLdZt2qWS4 bmamddXzSNOPmiXUYvoHBh6a9uHc4zu4TIQt7GuNKxhEaFvhu/x7YK39Y 5r08uCByUuZUTfqLW2/uSWU+wIRBhgfesrYAM2WIrSUoevCiZ71UISiyA nHtNUxfNJI+uzs0Lind/zRfdb+7Y36+7HJV351AKEDbGZk6euue8o2oGs Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10866"; a="7429526" X-IronPort-AV: E=Sophos;i="6.03,233,1694761200"; d="scan'208";a="7429526" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmvoesa101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Oct 2023 13:25:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10866"; a="900040448" X-IronPort-AV: E=Sophos;i="6.03,233,1694761200"; d="scan'208";a="900040448" Received: from rtdinh-mobl1.amr.corp.intel.com (HELO rpedgeco-desk4.intel.com) ([10.212.150.155]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Oct 2023 13:23:29 -0700 From: Rick Edgecombe To: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, luto@kernel.org, peterz@infradead.org, kirill.shutemov@linux.intel.com, 
From: Rick Edgecombe
Subject: [PATCH 04/10] swiotlb: Use free_decrypted_pages()
Date: Tue, 17 Oct 2023 13:24:59 -0700
Message-Id: <20231017202505.340906-5-rick.p.edgecombe@intel.com>
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>

On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to
take care to handle these errors to avoid returning decrypted (shared)
memory to the page allocator, which could lead to functional or security
issues.

Swiotlb could free decrypted/shared pages if set_memory_decrypted()
fails. Use the recently added free_decrypted_pages() to avoid this.

In swiotlb_exit(), check for set_memory_encrypted() errors manually,
because the pages are not necessarily going to the page allocator.

Cc: Christoph Hellwig
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: iommu@lists.linux.dev
Signed-off-by: Rick Edgecombe
---
 kernel/dma/swiotlb.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 394494a6b1f3..ad06786c4f98 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -524,6 +524,7 @@ void __init swiotlb_exit(void)
 	unsigned long tbl_vaddr;
 	size_t tbl_size, slots_size;
 	unsigned int area_order;
+	int ret;
 
 	if (swiotlb_force_bounce)
 		return;
@@ -536,17 +537,19 @@ void __init swiotlb_exit(void)
 	tbl_size = PAGE_ALIGN(mem->end - mem->start);
 	slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
 
-	set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
+	ret = set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
 	if (mem->late_alloc) {
 		area_order = get_order(array_size(sizeof(*mem->areas),
 			mem->nareas));
 		free_pages((unsigned long)mem->areas, area_order);
-		free_pages(tbl_vaddr, get_order(tbl_size));
+		if (!ret)
+			free_pages(tbl_vaddr, get_order(tbl_size));
 		free_pages((unsigned long)mem->slots, get_order(slots_size));
 	} else {
 		memblock_free_late(__pa(mem->areas),
 			array_size(sizeof(*mem->areas), mem->nareas));
-		memblock_free_late(mem->start, tbl_size);
+		if (!ret)
+			memblock_free_late(mem->start, tbl_size);
 		memblock_free_late(__pa(mem->slots), slots_size);
 	}
 
@@ -581,7 +584,7 @@ static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
 	return page;
 
 error:
-	__free_pages(page, order);
+	free_decrypted_pages((unsigned long)vaddr, order);
 	return NULL;
 }
 
-- 
2.34.1
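Illustration (hypothetical helper, not part of the patch): the rule
swiotlb_exit() follows above for buffers that go back to memblock rather
than the page allocator, where free_decrypted_pages() cannot be used
directly: only release the range if it was successfully converted back to
private, otherwise leak it with a warning.

	#include <linux/memblock.h>
	#include <linux/set_memory.h>

	static void my_free_shared_memblock(unsigned long vaddr, phys_addr_t pa,
					    size_t size)
	{
		/* Try to make the range private again before giving it back */
		if (set_memory_encrypted(vaddr, size >> PAGE_SHIFT)) {
			WARN_ONCE(1, "Failed to re-encrypt memory before freeing, leaking pages!\n");
			return;
		}
		memblock_free_late(pa, size);
	}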
From: Rick Edgecombe
Subject: [PATCH 05/10] ptp: Use free_decrypted_pages()
Date: Tue, 17 Oct 2023 13:25:00 -0700
Message-Id: <20231017202505.340906-6-rick.p.edgecombe@intel.com>
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>

On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to
take care to handle these errors to avoid returning decrypted (shared)
memory to the page allocator, which could lead to functional or security
issues.

Ptp could free decrypted/shared pages if set_memory_decrypted() fails.
Use the recently added free_decrypted_pages() to avoid this.

Cc: Richard Cochran
Cc: netdev@vger.kernel.org
Signed-off-by: Rick Edgecombe
---
 drivers/ptp/ptp_kvm_x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/ptp/ptp_kvm_x86.c b/drivers/ptp/ptp_kvm_x86.c
index 902844cc1a17..203af060013d 100644
--- a/drivers/ptp/ptp_kvm_x86.c
+++ b/drivers/ptp/ptp_kvm_x86.c
@@ -36,7 +36,7 @@ int kvm_arch_ptp_init(void)
 	clock_pair = page_address(p);
 	ret = set_memory_decrypted((unsigned long)clock_pair, 1);
 	if (ret) {
-		__free_page(p);
+		free_decrypted_pages((unsigned long)clock_pair, 0);
 		clock_pair = NULL;
 		goto nofree;
 	}
-- 
2.34.1
From: Rick Edgecombe
Subject: [PATCH 06/10] dma: Use free_decrypted_pages()
Date: Tue, 17 Oct 2023 13:25:01 -0700
Message-Id: <20231017202505.340906-7-rick.p.edgecombe@intel.com>
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>

On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to
take care to handle these errors to avoid returning decrypted (shared)
memory to the page allocator, which could lead to functional or security
issues.

DMA could free decrypted/shared pages if set_memory_decrypted() fails.
Use the recently added free_decrypted_pages() to avoid this.

Several paths also result in properly encrypted pages being freed through
the same freeing function. Rely on free_decrypted_pages() to not leak the
memory in these cases.

Cc: Christoph Hellwig
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: iommu@lists.linux.dev
Signed-off-by: Rick Edgecombe
---
 include/linux/dma-map-ops.h | 3 ++-
 kernel/dma/contiguous.c     | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f2fc203fb8a1..b0800cbbc357 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 
 struct cma;
 
@@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
 static inline void dma_free_contiguous(struct device *dev, struct page *page,
 		size_t size)
 {
-	__free_pages(page, get_order(size));
+	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
 }
 #endif /* CONFIG_DMA_CMA*/
 
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index f005c66f378c..e962f1f6434e 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -429,7 +429,7 @@ void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
 	}
 
 	/* not in any cma, free from buddy */
-	__free_pages(page, get_order(size));
+	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
 }
 
 /*
-- 
2.34.1
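Illustration (hypothetical wrapper, not part of the series): the proposed
free_decrypted_pages() takes a kernel virtual address, while the DMA paths
above hold a struct page, so the conversion goes through page_address().
That works for these buddy allocations because they have a linear mapping.

	#include <linux/mm.h>
	#include <linux/set_memory.h>

	/* Keep the struct page -> vaddr conversion in one place. */
	static inline void free_decrypted_page_struct(struct page *page, int order)
	{
		free_decrypted_pages((unsigned long)page_address(page), order);
	}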
From: Rick Edgecombe
Subject: [RFC 07/10] hv: Use free_decrypted_pages()
Date: Tue, 17 Oct 2023 13:25:02 -0700
Message-Id: <20231017202505.340906-8-rick.p.edgecombe@intel.com>
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>

On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to
take care to handle these errors to avoid returning decrypted (shared)
memory to the page allocator, which could lead to functional or security
issues.

Hyperv could free decrypted/shared pages if set_memory_decrypted()
fails. Use the recently added free_decrypted_pages() to avoid this.

Only compile tested.

Cc: "K. Y. Srinivasan"
Cc: Haiyang Zhang
Cc: Wei Liu
Cc: Dexuan Cui
Cc: linux-hyperv@vger.kernel.org
Signed-off-by: Rick Edgecombe
---
 drivers/hv/channel.c    |  7 ++++---
 drivers/hv/connection.c | 13 +++++++++----
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 56f7e06c673e..1ad8f7fabe06 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -153,9 +153,10 @@ void vmbus_free_ring(struct vmbus_channel *channel)
 	hv_ringbuffer_cleanup(&channel->inbound);
 
 	if (channel->ringbuffer_page) {
-		__free_pages(channel->ringbuffer_page,
-			     get_order(channel->ringbuffer_pagecount
-				       << PAGE_SHIFT));
+		int order = get_order(channel->ringbuffer_pagecount << PAGE_SHIFT);
+		unsigned long addr = (unsigned long)page_address(channel->ringbuffer_page);
+
+		free_decrypted_pages(addr, order);
 		channel->ringbuffer_page = NULL;
 	}
 }
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index 3cabeeabb1ca..cffad9b139d3 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -315,6 +315,7 @@ int vmbus_connect(void)
 
 void vmbus_disconnect(void)
 {
+	int ret;
 	/*
 	 * First send the unload request to the host.
 	 */
@@ -337,11 +338,15 @@ void vmbus_disconnect(void)
 		vmbus_connection.int_page = NULL;
 	}
 
-	set_memory_encrypted((unsigned long)vmbus_connection.monitor_pages[0], 1);
-	set_memory_encrypted((unsigned long)vmbus_connection.monitor_pages[1], 1);
+	ret = set_memory_encrypted((unsigned long)vmbus_connection.monitor_pages[0], 1);
+	ret |= set_memory_encrypted((unsigned long)vmbus_connection.monitor_pages[1], 1);
 
-	hv_free_hyperv_page(vmbus_connection.monitor_pages[0]);
-	hv_free_hyperv_page(vmbus_connection.monitor_pages[1]);
+	if (!ret) {
+		hv_free_hyperv_page(vmbus_connection.monitor_pages[0]);
+		hv_free_hyperv_page(vmbus_connection.monitor_pages[1]);
+	} else {
+		WARN_ONCE(1, "Failed to re-encrypt memory before freeing, leaking pages!\n");
+	}
 	vmbus_connection.monitor_pages[0] = NULL;
 	vmbus_connection.monitor_pages[1] = NULL;
 }
-- 
2.34.1
Srinivasan" , Haiyang Zhang , Wei Liu , linux-hyperv@vger.kernel.org Subject: [RFC 08/10] hv: Track decrypted status in vmbus_gpadl Date: Tue, 17 Oct 2023 13:25:03 -0700 Message-Id: <20231017202505.340906-9-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com> References: <20231017202505.340906-1-rick.p.edgecombe@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" On TDX it is possible for the untrusted host to cause set_memory_encrypted() or set_memory_decrypted() to fail such that an error is returned and the resulting memory is shared. Callers need to take care to handle these errors to avoid returning decrypted (shared) memory to the page allocator, which could lead to functional or security issues. In order to make sure caller's of vmbus_establish_gpadl() and vmbus_teardown_gpadl() don't return decrypted/shared pages to allocators, add a field in struct vmbus_gpadl to keep track of the decryption status of the buffer's. This will allow the callers to know if they should free or leak the pages. Only compile tested. Cc: "K. Y. Srinivasan" Cc: Haiyang Zhang Cc: Wei Liu Cc: Dexuan Cui Cc: linux-hyperv@vger.kernel.org Signed-off-by: Rick Edgecombe --- drivers/hv/channel.c | 11 ++++++++--- include/linux/hyperv.h | 1 + 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index 1ad8f7fabe06..0a7dcbb48140 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -479,6 +479,7 @@ static int __vmbus_establish_gpadl(struct vmbus_channel= *channel, ret =3D set_memory_decrypted((unsigned long)kbuffer, PFN_UP(size)); if (ret) { + gpadl->decrypted =3D false; dev_warn(&channel->device_obj->device, "Failed to set host visibility for new GPADL %d.\n", ret); @@ -551,6 +552,7 @@ static int __vmbus_establish_gpadl(struct vmbus_channel= *channel, gpadl->gpadl_handle =3D gpadlmsg->gpadl; gpadl->buffer =3D kbuffer; gpadl->size =3D size; + gpadl->decrypted =3D true; =20 =20 cleanup: @@ -564,9 +566,10 @@ static int __vmbus_establish_gpadl(struct vmbus_channe= l *channel, =20 kfree(msginfo); =20 - if (ret) - set_memory_encrypted((unsigned long)kbuffer, - PFN_UP(size)); + if (ret) { + if (set_memory_encrypted((unsigned long)kbuffer, PFN_UP(size))) + gpadl->decrypted =3D false; + } =20 return ret; } @@ -887,6 +890,8 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel,= struct vmbus_gpadl *gpad if (ret) pr_warn("Fail to set mem host visibility in GPADL teardown %d.\n", ret); =20 + gpadl->decrypted =3D ret; + return ret; } EXPORT_SYMBOL_GPL(vmbus_teardown_gpadl); diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h index 2b00faf98017..5bac136c268c 100644 --- a/include/linux/hyperv.h +++ b/include/linux/hyperv.h @@ -812,6 +812,7 @@ struct vmbus_gpadl { u32 gpadl_handle; u32 size; void *buffer; + bool decrypted; }; =20 struct vmbus_channel { --=20 2.34.1 From nobody Fri Jan 2 00:09:23 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B7EDCDB482 for ; Tue, 17 Oct 2023 20:26:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344451AbjJQU0C (ORCPT ); Tue, 17 Oct 2023 16:26:02 -0400 Received: from lindbergh.monkeyblade.net 
From: Rick Edgecombe
Subject: [RFC 09/10] hv_netvsc: Don't free decrypted memory
Date: Tue, 17 Oct 2023 13:25:04 -0700
Message-Id: <20231017202505.340906-10-rick.p.edgecombe@intel.com>
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>

On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to
take care to handle these errors to avoid returning decrypted (shared)
memory to the page allocator, which could lead to functional or security
issues.

hv_netvsc could free decrypted/shared pages if set_memory_decrypted()
fails. Check the decrypted field in the gpadl before freeing in order
to not leak the memory.

Only compile tested.

Cc: "K. Y. Srinivasan"
Cc: Haiyang Zhang
Cc: Wei Liu
Cc: Dexuan Cui
Cc: linux-hyperv@vger.kernel.org
Signed-off-by: Rick Edgecombe
---
 drivers/net/hyperv/netvsc.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 82e9796c8f5e..70b7f91fb96b 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -154,8 +154,11 @@ static void free_netvsc_device(struct rcu_head *head)
 	int i;
 
 	kfree(nvdev->extension);
-	vfree(nvdev->recv_buf);
-	vfree(nvdev->send_buf);
+
+	if (!nvdev->recv_buf_gpadl_handle.decrypted)
+		vfree(nvdev->recv_buf);
+	if (!nvdev->send_buf_gpadl_handle.decrypted)
+		vfree(nvdev->send_buf);
 	bitmap_free(nvdev->send_section_map);
 
 	for (i = 0; i < VRSS_CHANNEL_MAX; i++) {
-- 
2.34.1
Srinivasan" , Haiyang Zhang , Wei Liu , linux-hyperv@vger.kernel.org Subject: [RFC 10/10] uio_hv_generic: Don't free decrypted memory Date: Tue, 17 Oct 2023 13:25:05 -0700 Message-Id: <20231017202505.340906-11-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com> References: <20231017202505.340906-1-rick.p.edgecombe@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" On TDX it is possible for the untrusted host to cause set_memory_encrypted() or set_memory_decrypted() to fail such that an error is returned and the resulting memory is shared. Callers need to take care to handle these errors to avoid returning decrypted (shared) memory to the page allocator, which could lead to functional or security issues. uio_hv_generic could free decrypted/shared pages if set_memory_decrypted() fails. Check the decrypted field in the gpadl before freeing in order to not leak the memory. Only compile tested. Cc: "K. Y. Srinivasan" Cc: Haiyang Zhang Cc: Wei Liu Cc: Dexuan Cui Cc: linux-hyperv@vger.kernel.org Signed-off-by: Rick Edgecombe --- drivers/uio/uio_hv_generic.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c index 20d9762331bd..6be3462b109f 100644 --- a/drivers/uio/uio_hv_generic.c +++ b/drivers/uio/uio_hv_generic.c @@ -181,12 +181,14 @@ hv_uio_cleanup(struct hv_device *dev, struct hv_uio_p= rivate_data *pdata) { if (pdata->send_gpadl.gpadl_handle) { vmbus_teardown_gpadl(dev->channel, &pdata->send_gpadl); - vfree(pdata->send_buf); + if (!pdata->send_gpadl.decrypted) + vfree(pdata->send_buf); } =20 if (pdata->recv_gpadl.gpadl_handle) { vmbus_teardown_gpadl(dev->channel, &pdata->recv_gpadl); - vfree(pdata->recv_buf); + if (!pdata->recv_gpadl.decrypted) + vfree(pdata->recv_buf); } } =20 @@ -295,7 +297,8 @@ hv_uio_probe(struct hv_device *dev, ret =3D vmbus_establish_gpadl(channel, pdata->recv_buf, RECV_BUFFER_SIZE, &pdata->recv_gpadl); if (ret) { - vfree(pdata->recv_buf); + if (!pdata->recv_gpadl.decrypted) + vfree(pdata->recv_buf); goto fail_close; } =20 @@ -317,7 +320,8 @@ hv_uio_probe(struct hv_device *dev, ret =3D vmbus_establish_gpadl(channel, pdata->send_buf, SEND_BUFFER_SIZE, &pdata->send_gpadl); if (ret) { - vfree(pdata->send_buf); + if (!pdata->send_gpadl.decrypted) + vfree(pdata->send_buf); goto fail_close; } =20 --=20 2.34.1