From nobody Fri Dec 26 11:21:33 2025
From: mhkelley58@gmail.com
X-Google-Original-From: mhklinux@outlook.com
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	kirill.shutemov@linux.intel.com, haiyangz@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org, urezki@gmail.com,
	hch@infradead.org, lstoakes@gmail.com, thomas.lendacky@amd.com,
	ardb@kernel.org, jroedel@suse.de, seanjc@google.com,
	rick.p.edgecombe@intel.com, sathyanarayanan.kuppuswamy@linux.intel.com,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
	linux-hyperv@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 1/3] x86/hyperv: Use slow_virt_to_phys() in page transition hypervisor callback
Date: Fri, 5 Jan 2024 10:30:23 -0800
Message-Id: <20240105183025.225972-2-mhklinux@outlook.com>
In-Reply-To: <20240105183025.225972-1-mhklinux@outlook.com>
References: <20240105183025.225972-1-mhklinux@outlook.com>
Reply-To: mhklinux@outlook.com

From: Michael Kelley <mhklinux@outlook.com>

In preparation for temporarily marking pages not present during a
transition between encrypted and decrypted, use slow_virt_to_phys()
in the hypervisor callback. As long as the PFN is correct,
slow_virt_to_phys() works even if the leaf PTE is not present.
The existing functions that depend on vmalloc_to_page() all require
that the leaf PTE be marked present, so they don't work.

Update the comments for slow_virt_to_phys() to note this broader
usage and the requirement to work even if the PTE is not marked
present.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
---
 arch/x86/hyperv/ivm.c        |  9 ++++++++-
 arch/x86/mm/pat/set_memory.c | 13 +++++++++----
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 02e55237d919..8ba18635e338 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -524,7 +524,14 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
 		return false;
 
 	for (i = 0, pfn = 0; i < pagecount; i++) {
-		pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE);
+		/*
+		 * Use slow_virt_to_phys() because the PRESENT bit has been
+		 * temporarily cleared in the PTEs. slow_virt_to_phys() works
+		 * without the PRESENT bit while virt_to_hvpfn() or similar
+		 * does not.
+		 */
+		pfn_array[pfn] = slow_virt_to_phys((void *)kbuffer +
+				 i * HV_HYP_PAGE_SIZE) >> HV_HYP_PAGE_SHIFT;
 		pfn++;
 
 		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index bda9f129835e..8e19796e7ce5 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -755,10 +755,15 @@ pmd_t *lookup_pmd_address(unsigned long address)
  * areas on 32-bit NUMA systems. The percpu areas can
  * end up in this kind of memory, for instance.
  *
- * This could be optimized, but it is only intended to be
- * used at initialization time, and keeping it
- * unoptimized should increase the testing coverage for
- * the more obscure platforms.
+ * It is also used in callbacks for CoCo VM page transitions between private
+ * and shared because it works when the PRESENT bit is not set in the leaf
+ * PTE. In such cases, the state of the PTEs, including the PFN, is otherwise
+ * known to be valid, so the returned physical address is correct. The similar
+ * function vmalloc_to_pfn() can't be used because it requires the PRESENT bit.
+ *
+ * This could be optimized, but it is only used in paths that are not perf
+ * sensitive, and keeping it unoptimized should increase the testing coverage
+ * for the more obscure platforms.
  */
 phys_addr_t slow_virt_to_phys(void *__virt_addr)
 {
-- 
2.25.1
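
To make the contrast concrete, here is a minimal, hypothetical sketch
(not part of the patch) of converting a kernel virtual address to a
Hyper-V page frame number while the leaf PTE may have the PRESENT bit
cleared. The helper name example_virt_to_hvpfn() is invented for
illustration; slow_virt_to_phys() and HV_HYP_PAGE_SHIFT are the kernel
symbols the patch itself uses:

    #include <asm/pgtable_types.h>	/* slow_virt_to_phys() */
    #include <asm/hyperv-tlfs.h>	/* HV_HYP_PAGE_SHIFT */

    /* Hypothetical helper, for illustration only. */
    static u64 example_virt_to_hvpfn(void *addr)
    {
    	/*
    	 * slow_virt_to_phys() walks the page tables directly and reads
    	 * the PFN out of the leaf PTE, so it works even when the
    	 * PRESENT bit has been temporarily cleared. By contrast,
    	 * virt_to_hvpfn() relies on vmalloc_to_page() for vmalloc
    	 * addresses, which requires a present leaf PTE.
    	 */
    	return slow_virt_to_phys(addr) >> HV_HYP_PAGE_SHIFT;
    }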
From nobody Fri Dec 26 11:21:33 2025
From: mhkelley58@gmail.com
X-Google-Original-From: mhklinux@outlook.com
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	kirill.shutemov@linux.intel.com, haiyangz@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org, urezki@gmail.com,
	hch@infradead.org, lstoakes@gmail.com, thomas.lendacky@amd.com,
	ardb@kernel.org, jroedel@suse.de, seanjc@google.com,
	rick.p.edgecombe@intel.com, sathyanarayanan.kuppuswamy@linux.intel.com,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
	linux-hyperv@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 2/3] x86/mm: Regularize set_memory_p() parameters and make non-static
Date: Fri, 5 Jan 2024 10:30:24 -0800
Message-Id: <20240105183025.225972-3-mhklinux@outlook.com>
In-Reply-To: <20240105183025.225972-1-mhklinux@outlook.com>
References: <20240105183025.225972-1-mhklinux@outlook.com>
Reply-To: mhklinux@outlook.com

From: Michael Kelley <mhklinux@outlook.com>

set_memory_p() is currently static. It has parameters that don't
match set_memory_p() under arch/powerpc and that aren't congruent
with the other set_memory_* functions. There's no good reason for
the difference.

Fix this by making the parameters consistent, and update the one
existing call site. Make the function non-static and add it to
include/asm/set_memory.h so that it is completely parallel to
set_memory_np() and is usable in other modules.

No functional change.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/set_memory.h |  1 +
 arch/x86/mm/pat/set_memory.c      | 12 ++++++------
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index a5e89641bd2d..9aee31862b4a 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -47,6 +47,7 @@ int set_memory_uc(unsigned long addr, int numpages);
 int set_memory_wc(unsigned long addr, int numpages);
 int set_memory_wb(unsigned long addr, int numpages);
 int set_memory_np(unsigned long addr, int numpages);
+int set_memory_p(unsigned long addr, int numpages);
 int set_memory_4k(unsigned long addr, int numpages);
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 8e19796e7ce5..05d42395a462 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2046,17 +2046,12 @@ int set_mce_nospec(unsigned long pfn)
 	return rc;
 }
 
-static int set_memory_p(unsigned long *addr, int numpages)
-{
-	return change_page_attr_set(addr, numpages, __pgprot(_PAGE_PRESENT), 0);
-}
-
 /* Restore full speculative operation to the pfn. */
 int clear_mce_nospec(unsigned long pfn)
 {
 	unsigned long addr = (unsigned long) pfn_to_kaddr(pfn);
 
-	return set_memory_p(&addr, 1);
+	return set_memory_p(addr, 1);
 }
 EXPORT_SYMBOL_GPL(clear_mce_nospec);
 #endif /* CONFIG_X86_64 */
@@ -2109,6 +2104,11 @@ int set_memory_np_noalias(unsigned long addr, int numpages)
 				       CPA_NO_CHECK_ALIAS, NULL);
 }
 
+int set_memory_p(unsigned long addr, int numpages)
+{
+	return change_page_attr_set(&addr, numpages, __pgprot(_PAGE_PRESENT), 0);
+}
+
 int set_memory_4k(unsigned long addr, int numpages)
 {
 	return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
-- 
2.25.1
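
As a usage illustration of the regularized interface, here is a
minimal, hypothetical caller; example_with_np() and its arguments are
invented, while set_memory_np() and set_memory_p() are the real
functions. The address is now passed by value, matching set_memory_np()
and the powerpc prototype, rather than by pointer:

    #include <asm/set_memory.h>

    /* Hypothetical caller, for illustration only. */
    static int example_with_np(void *buf, int numpages)
    {
    	unsigned long addr = (unsigned long)buf;
    	int ret;

    	/* Clear the PRESENT bit across the range... */
    	ret = set_memory_np(addr, numpages);
    	if (ret)
    		return ret;

    	/* ...do work while the pages are not present... */

    	/* ...then set PRESENT again, with the same calling style. */
    	return set_memory_p(addr, numpages);
    }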
From nobody Fri Dec 26 11:21:33 2025
From: mhkelley58@gmail.com
X-Google-Original-From: mhklinux@outlook.com
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	kirill.shutemov@linux.intel.com, haiyangz@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org,
	peterz@infradead.org, akpm@linux-foundation.org, urezki@gmail.com,
	hch@infradead.org, lstoakes@gmail.com, thomas.lendacky@amd.com,
	ardb@kernel.org, jroedel@suse.de, seanjc@google.com,
	rick.p.edgecombe@intel.com, sathyanarayanan.kuppuswamy@linux.intel.com,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
	linux-hyperv@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 3/3] x86/hyperv: Make encrypted/decrypted changes safe for load_unaligned_zeropad()
Date: Fri, 5 Jan 2024 10:30:25 -0800
Message-Id: <20240105183025.225972-4-mhklinux@outlook.com>
In-Reply-To: <20240105183025.225972-1-mhklinux@outlook.com>
References: <20240105183025.225972-1-mhklinux@outlook.com>
Reply-To: mhklinux@outlook.com

From: Michael Kelley <mhklinux@outlook.com>

In a CoCo VM, when transitioning memory from encrypted to decrypted, or
vice versa, the caller of set_memory_encrypted() or
set_memory_decrypted() is responsible for ensuring the memory isn't in
use and isn't referenced while the transition is in progress. The
transition has multiple steps, and the memory is in an inconsistent
state until all steps are complete. A reference while the state is
inconsistent could result in an exception that can't be cleanly fixed
up.

However, the kernel load_unaligned_zeropad() mechanism could cause a
stray reference that can't be prevented by the caller of
set_memory_encrypted() or set_memory_decrypted(), so there's specific
code to handle this case. But a CoCo VM running on Hyper-V may be
configured to run with a paravisor, with the #VC or #VE exception
routed to the paravisor. There's no architectural way to forward the
exceptions back to the guest kernel, and in such a case, the
load_unaligned_zeropad() specific code doesn't work.

To avoid this problem, mark pages as "not present" while a transition
is in progress. If load_unaligned_zeropad() causes a stray reference, a
normal page fault is generated instead of #VC or #VE, and the
page-fault-based fixup handlers for load_unaligned_zeropad() resolve
the reference. When the encrypted/decrypted transition is complete,
mark the pages as "present" again.
Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/hyperv/ivm.c | 49 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 46 insertions(+), 3 deletions(-)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 8ba18635e338..5ad39256a5d2 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -15,6 +15,7 @@
 #include <asm/io.h>
 #include <asm/coco.h>
 #include <asm/mem_encrypt.h>
+#include <asm/set_memory.h>
 #include <asm/mshyperv.h>
 #include <asm/hypervisor.h>
 #include <asm/mtrr.h>
@@ -502,6 +503,31 @@ static int hv_mark_gpa_visibility(u16 count, const u64 pfn[],
 	return -EFAULT;
 }
 
+/*
+ * When transitioning memory between encrypted and decrypted, the caller
+ * of set_memory_encrypted() or set_memory_decrypted() is responsible for
+ * ensuring that the memory isn't in use and isn't referenced while the
+ * transition is in progress. The transition has multiple steps, and the
+ * memory is in an inconsistent state until all steps are complete. A
+ * reference while the state is inconsistent could result in an exception
+ * that can't be cleanly fixed up.
+ *
+ * But the Linux kernel load_unaligned_zeropad() mechanism could cause a
+ * stray reference that can't be prevented by the caller, so Linux has
+ * specific code to handle this case. But when the #VC and #VE exceptions
+ * are routed to a paravisor, the specific code doesn't work. To avoid this
+ * problem, mark the pages as "not present" while the transition is in
+ * progress. If load_unaligned_zeropad() causes a stray reference, a normal
+ * page fault is generated instead of #VC or #VE, and the page-fault-based
+ * handlers for load_unaligned_zeropad() resolve the reference. When the
+ * transition is complete, hv_vtom_set_host_visibility() marks the pages
+ * as "present" again.
+ */
+static bool hv_vtom_clear_present(unsigned long kbuffer, int pagecount, bool enc)
+{
+	return !set_memory_np(kbuffer, pagecount);
+}
+
 /*
  * hv_vtom_set_host_visibility - Set specified memory visible to host.
  *
@@ -521,7 +547,7 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
 
 	pfn_array = kmalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL);
 	if (!pfn_array)
-		return false;
+		goto err_set_memory_p;
 
 	for (i = 0, pfn = 0; i < pagecount; i++) {
 		/*
@@ -545,14 +571,30 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
 		}
 	}
 
- err_free_pfn_array:
+err_free_pfn_array:
 	kfree(pfn_array);
+
+err_set_memory_p:
+	/*
+	 * Set the PTE PRESENT bits again to revert what hv_vtom_clear_present()
+	 * did. Do this even if there is an error earlier in this function in
+	 * order to avoid leaving the memory range in a "broken" state. Setting
+	 * the PRESENT bits shouldn't fail, but return an error if it does.
+	 */
+	if (set_memory_p(kbuffer, pagecount))
+		result = false;
+
 	return result;
 }
 
 static bool hv_vtom_tlb_flush_required(bool private)
 {
-	return true;
+	/*
+	 * Since hv_vtom_clear_present() marks the PTEs as "not present"
+	 * and flushes the TLB, they can't be in the TLB. That makes the
+	 * flush controlled by this function redundant, so return "false".
+	 */
+	return false;
 }
 
 static bool hv_vtom_cache_flush_required(void)
@@ -615,6 +657,7 @@ void __init hv_vtom_init(void)
 	x86_platform.hyper.is_private_mmio = hv_is_private_mmio;
 	x86_platform.guest.enc_cache_flush_required = hv_vtom_cache_flush_required;
 	x86_platform.guest.enc_tlb_flush_required = hv_vtom_tlb_flush_required;
+	x86_platform.guest.enc_status_change_prepare = hv_vtom_clear_present;
 	x86_platform.guest.enc_status_change_finish = hv_vtom_set_host_visibility;
 
 	/* Set WB as the default cache mode. */
-- 
2.25.1
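
To show how the two callbacks bracket a transition, here is a
simplified, hypothetical sketch of the generic
set_memory_encrypted()/set_memory_decrypted() flow; the function
example_enc_status_change() and its error handling are invented for
illustration, while the x86_platform.guest callback names and their
bool return convention are the ones this patch hooks:

    /* Hypothetical sketch of the callback sequence, for illustration. */
    static int example_enc_status_change(unsigned long addr, int numpages,
    				     bool enc)
    {
    	/*
    	 * "prepare" runs first. With this patch, the Hyper-V vTOM
    	 * callback is hv_vtom_clear_present(), which marks the pages
    	 * not present so that a stray load_unaligned_zeropad()
    	 * reference takes a normal page fault instead of #VC/#VE.
    	 */
    	if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
    		return -EIO;

    	/* ...the PTE encryption attributes are changed here... */

    	/*
    	 * "finish" runs last. hv_vtom_set_host_visibility() tells the
    	 * hypervisor about the new visibility and, with this patch,
    	 * marks the pages present again via set_memory_p().
    	 */
    	if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc))
    		return -EIO;

    	return 0;
    }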