From: mhkelley58@gmail.com
Reply-To: mhklinux@outlook.com
Subject: [PATCH v2 1/8] x86/coco: Use slow_virt_to_phys() in page transition hypervisor callbacks
Date: Tue, 21 Nov 2023 13:20:09 -0800
Message-Id: <20231121212016.1154303-2-mhklinux@outlook.com>
In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com>

From: Michael Kelley

In preparation for temporarily marking pages not present during a
transition between encrypted and decrypted, use slow_virt_to_phys()
in the hypervisor callbacks. As long as the PFN is correct,
slow_virt_to_phys() works even if the leaf PTE is not present. The
existing functions that depend on vmalloc_to_page() all require that
the leaf PTE be marked present, so they don't work.

Update the comments for slow_virt_to_phys() to note this broader usage
and the requirement to work even if the PTE is not marked present.

Signed-off-by: Michael Kelley
---
 arch/x86/hyperv/ivm.c        |  9 ++++++++-
 arch/x86/kernel/sev.c        |  8 +++++++-
 arch/x86/mm/pat/set_memory.c | 13 +++++++++----
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 02e55237d919..8ba18635e338 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -524,7 +524,14 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
 		return false;
 
 	for (i = 0, pfn = 0; i < pagecount; i++) {
-		pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE);
+		/*
+		 * Use slow_virt_to_phys() because the PRESENT bit has been
+		 * temporarily cleared in the PTEs. slow_virt_to_phys() works
+		 * without the PRESENT bit while virt_to_hvpfn() or similar
+		 * does not.
+		 */
+		pfn_array[pfn] = slow_virt_to_phys((void *)kbuffer +
+				i * HV_HYP_PAGE_SIZE) >> HV_HYP_PAGE_SHIFT;
 		pfn++;
 
 		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 70472eebe719..7eac92c07a58 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -811,7 +811,13 @@ static unsigned long __set_pages_state(struct snp_psc_desc *data, unsigned long
 	hdr->end_entry = i;
 
 	if (is_vmalloc_addr((void *)vaddr)) {
-		pfn = vmalloc_to_pfn((void *)vaddr);
+		/*
+		 * Use slow_virt_to_phys() because the PRESENT bit has been
+		 * temporarily cleared in the PTEs. slow_virt_to_phys() works
+		 * without the PRESENT bit while vmalloc_to_pfn() or similar
+		 * does not.
+		 */
+		pfn = slow_virt_to_phys((void *)vaddr) >> PAGE_SHIFT;
 		use_large_entry = false;
 	} else {
 		pfn = __pa(vaddr) >> PAGE_SHIFT;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index bda9f129835e..8e19796e7ce5 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -755,10 +755,15 @@ pmd_t *lookup_pmd_address(unsigned long address)
  * areas on 32-bit NUMA systems. The percpu areas can
  * end up in this kind of memory, for instance.
  *
- * This could be optimized, but it is only intended to be
- * used at initialization time, and keeping it
- * unoptimized should increase the testing coverage for
- * the more obscure platforms.
+ * It is also used in callbacks for CoCo VM page transitions between private
+ * and shared because it works when the PRESENT bit is not set in the leaf
+ * PTE. In such cases, the state of the PTEs, including the PFN, is otherwise
+ * known to be valid, so the returned physical address is correct. The similar
+ * function vmalloc_to_pfn() can't be used because it requires the PRESENT bit.
+ *
+ * This could be optimized, but it is only used in paths that are not perf
+ * sensitive, and keeping it unoptimized should increase the testing coverage
+ * for the more obscure platforms.
  */
 phys_addr_t slow_virt_to_phys(void *__virt_addr)
 {
-- 
2.25.1
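Sketched in isolation, the substitution the preceding patch makes in each
hypervisor callback is the one below (a simplified illustration of the diff
above, not additional code to apply):

	/* Before: requires the leaf PTE to be marked present */
	pfn = vmalloc_to_pfn((void *)vaddr);

	/* After: derives the PFN from the PTE contents even when _PAGE_PRESENT is clear */
	pfn = slow_virt_to_phys((void *)vaddr) >> PAGE_SHIFT;
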
From: mhkelley58@gmail.com
Reply-To: mhklinux@outlook.com
Subject: [PATCH v2 2/8] x86/mm: Don't do a TLB flush if changing a PTE that isn't marked present
Date: Tue, 21 Nov 2023 13:20:10 -0800
Message-Id: <20231121212016.1154303-3-mhklinux@outlook.com>
In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com>

From: Michael Kelley

The core function __change_page_attr() currently sets up a TLB flush if
a PTE is changed. But if the old value of the PTE doesn't include the
PRESENT flag, the PTE won't be in the TLB, so a flush isn't needed.
Avoid an unnecessary TLB flush by conditioning the flush on the old PTE
value including PRESENT.

This change improves the performance of functions like set_memory_p()
by avoiding the flush if the memory range was previously all not
present.
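In essence, the flush request becomes conditional on the old PTE having been
present; a sketch of the resulting logic in __change_page_attr() (simplified
from the diff that follows, not a standalone function):

	if (pte_val(old_pte) != pte_val(new_pte)) {
		set_pte_atomic(kpte, new_pte);

		/* A PTE whose old value was not present cannot be cached in the TLB */
		if (pte_present(old_pte))
			cpa->flags |= CPA_FLUSHTLB;
	}
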
Signed-off-by: Michael Kelley
---
 arch/x86/mm/pat/set_memory.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 8e19796e7ce5..d7ef8d312a47 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1636,7 +1636,10 @@ static int __change_page_attr(struct cpa_data *cpa, int primary)
 	 */
 	if (pte_val(old_pte) != pte_val(new_pte)) {
 		set_pte_atomic(kpte, new_pte);
-		cpa->flags |= CPA_FLUSHTLB;
+
+		/* If old_pte isn't present, it's not in the TLB */
+		if (pte_present(old_pte))
+			cpa->flags |= CPA_FLUSHTLB;
 	}
 	cpa->numpages = 1;
 	return 0;
-- 
2.25.1
From: mhkelley58@gmail.com
Reply-To: mhklinux@outlook.com
Subject: [PATCH v2 3/8] x86/mm: Remove "static" from vmap_pages_range()
Date: Tue, 21 Nov 2023 13:20:11 -0800
Message-Id: <20231121212016.1154303-4-mhklinux@outlook.com>
In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com>

From: Michael Kelley

The mm subsystem currently provides no mechanism to map memory pages to
a specified virtual address range. A virtual address range can be
allocated using get_vm_area(), but the only function available for
mapping memory pages to a caller-specified address in that range is
ioremap_page_range(), which is inappropriate for system memory.

Fix this by allowing vmap_pages_range() to be used by callers outside
of vmalloc.c.

Signed-off-by: Michael Kelley
---
 include/linux/vmalloc.h | 2 ++
 mm/vmalloc.c            | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index c720be70c8dd..ee12f5226a45 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -233,6 +233,8 @@ static inline bool is_vm_area_hugepages(const void *addr)
 
 #ifdef CONFIG_MMU
 void vunmap_range(unsigned long addr, unsigned long end);
+int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
+		struct page **pages, unsigned int page_shift);
 static inline void set_vm_flush_reset_perms(void *addr)
 {
 	struct vm_struct *vm = find_vm_area(addr);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d12a17fc0c17..b2a72bd317c6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -625,7 +625,7 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
  * RETURNS:
  * 0 on success, -errno on failure.
  */
-static int vmap_pages_range(unsigned long addr, unsigned long end,
+int vmap_pages_range(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift)
 {
 	int err;
-- 
2.25.1
From: mhkelley58@gmail.com
Reply-To: mhklinux@outlook.com
Subject: [PATCH v2 4/8] x86/sev: Enable PVALIDATE for PFNs without a valid virtual address
Date: Tue, 21 Nov 2023 13:20:12 -0800
Message-Id: <20231121212016.1154303-5-mhklinux@outlook.com>
In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com>

From: Michael Kelley

For SEV-SNP, the PVALIDATE instruction requires a valid virtual address
that it translates to the PFN that it operates on. Per the spec, it
translates the virtual address as if it were doing a single byte read.

In transitioning a page between encrypted and decrypted, the direct map
virtual address of the page may be temporarily marked invalid (i.e.,
PRESENT is cleared in the PTE) to prevent interference from
load_unaligned_zeropad(). In such a case, the PVALIDATE that is required
for the encrypted<->decrypted transition fails due to an invalid virtual
address.

Fix this by providing a temporary virtual address that is mapped to the
target PFN just before executing PVALIDATE. Have PVALIDATE use this temp
virtual address instead of the direct map virtual address. Unmap the
temp virtual address after PVALIDATE completes.

The temp virtual address must be aligned on a 2 Mbyte boundary to meet
PVALIDATE requirements for operating on 2 Meg large pages, though the
temp mapping need only be a 4K mapping. Also, the temp virtual address
must be preceded by a 4K invalid page so it can't be accessed by
load_unaligned_zeropad().

This mechanism is used only for pages transitioning between encrypted
and decrypted. When PVALIDATE is done for initial page acceptance, a
temp virtual address is not provided, and PVALIDATE uses the direct map
virtual address.
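A condensed sketch of the mechanism, pieced together from the pvalidate_pfn()
and snp_set_memory_*() changes in the diff below (error handling, the
2M-to-4K retry loop, and the boot/compressed stub are omitted; this is an
illustration, not the literal patch code):

	/*
	 * Reserve a 2M + 4K region so that a PMD_SIZE-aligned scratch address
	 * exists with an unmapped guard page below it, map the target PFN at
	 * that address, and run PVALIDATE against it instead of the direct map
	 * address, which may be temporarily not present.
	 */
	struct vm_struct *area = get_vm_area(PAGE_SIZE * (PTRS_PER_PMD + 1), 0);
	unsigned long temp_vaddr = ALIGN((unsigned long)(area->addr + PAGE_SIZE),
					 PMD_SIZE);
	struct page *page = pfn_to_page(pfn);

	vmap_pages_range(temp_vaddr, temp_vaddr + PAGE_SIZE,
			 PAGE_KERNEL, &page, PAGE_SHIFT);
	rc = pvalidate(temp_vaddr, RMP_PG_SIZE_4K, validate);
	vunmap_range(temp_vaddr, temp_vaddr + PAGE_SIZE);
	free_vm_area(area);
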
Signed-off-by: Michael Kelley --- arch/x86/boot/compressed/sev.c | 2 +- arch/x86/kernel/sev-shared.c | 57 +++++++++++++++++++++++++++------- arch/x86/kernel/sev.c | 32 ++++++++++++------- 3 files changed, 67 insertions(+), 24 deletions(-) diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c index 454acd7a2daf..4d4a3fc0b725 100644 --- a/arch/x86/boot/compressed/sev.c +++ b/arch/x86/boot/compressed/sev.c @@ -224,7 +224,7 @@ static phys_addr_t __snp_accept_memory(struct snp_psc_d= esc *desc, if (vmgexit_psc(boot_ghcb, desc)) sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC); =20 - pvalidate_pages(desc); + pvalidate_pages(desc, 0); =20 return pa; } diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c index ccb0915e84e1..fc45fdcf3892 100644 --- a/arch/x86/kernel/sev-shared.c +++ b/arch/x86/kernel/sev-shared.c @@ -1071,35 +1071,70 @@ static void __init setup_cpuid_table(const struct c= c_blob_sev_info *cc_info) } } =20 -static void pvalidate_pages(struct snp_psc_desc *desc) +#ifdef __BOOT_COMPRESSED +static int pvalidate_pfn(unsigned long vaddr, unsigned int size, + unsigned long pfn, bool validate, int *rc2) +{ + return 0; +} +#else +static int pvalidate_pfn(unsigned long vaddr, unsigned int size, + unsigned long pfn, bool validate, int *rc2) +{ + int rc; + struct page *page =3D pfn_to_page(pfn); + + *rc2 =3D vmap_pages_range(vaddr, vaddr + PAGE_SIZE, + PAGE_KERNEL, &page, PAGE_SHIFT); + rc =3D pvalidate(vaddr, size, validate); + vunmap_range(vaddr, vaddr + PAGE_SIZE); + + return rc; +} +#endif + +static void pvalidate_pages(struct snp_psc_desc *desc, unsigned long vaddr) { struct psc_entry *e; - unsigned long vaddr; + unsigned long pfn; unsigned int size; unsigned int i; bool validate; - int rc; + int rc, rc2 =3D 0; =20 for (i =3D 0; i <=3D desc->hdr.end_entry; i++) { e =3D &desc->entries[i]; =20 - vaddr =3D (unsigned long)pfn_to_kaddr(e->gfn); - size =3D e->pagesize ? 
RMP_PG_SIZE_2M : RMP_PG_SIZE_4K; + size =3D e->pagesize; validate =3D e->operation =3D=3D SNP_PAGE_STATE_PRIVATE; + pfn =3D e->gfn; =20 - rc =3D pvalidate(vaddr, size, validate); - if (rc =3D=3D PVALIDATE_FAIL_SIZEMISMATCH && size =3D=3D RMP_PG_SIZE_2M)= { - unsigned long vaddr_end =3D vaddr + PMD_SIZE; + if (vaddr) { + rc =3D pvalidate_pfn(vaddr, size, pfn, validate, &rc2); + } else { + vaddr =3D (unsigned long)pfn_to_kaddr(pfn); + rc =3D pvalidate(vaddr, size, validate); + } =20 - for (; vaddr < vaddr_end; vaddr +=3D PAGE_SIZE) { - rc =3D pvalidate(vaddr, RMP_PG_SIZE_4K, validate); + if (rc =3D=3D PVALIDATE_FAIL_SIZEMISMATCH && size =3D=3D RMP_PG_SIZE_2M)= { + unsigned long last_pfn =3D pfn + PTRS_PER_PMD - 1; + + for (; pfn <=3D last_pfn; pfn++) { + if (vaddr) { + rc =3D pvalidate_pfn(vaddr, RMP_PG_SIZE_4K, + pfn, validate, &rc2); + } else { + vaddr =3D (unsigned long)pfn_to_kaddr(pfn); + rc =3D pvalidate(vaddr, RMP_PG_SIZE_4K, validate); + } if (rc) break; } } =20 if (rc) { - WARN(1, "Failed to validate address 0x%lx ret %d", vaddr, rc); + WARN(1, "Failed to validate address 0x%lx ret %d ret2 %d", + vaddr, rc, rc2); sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE); } } diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 7eac92c07a58..08b2e2a0d67d 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -790,7 +790,7 @@ void __init snp_prep_memory(unsigned long paddr, unsign= ed int sz, enum psc_op op } =20 static unsigned long __set_pages_state(struct snp_psc_desc *data, unsigned= long vaddr, - unsigned long vaddr_end, int op) + unsigned long vaddr_end, int op, unsigned long temp_vaddr) { struct ghcb_state state; bool use_large_entry; @@ -842,7 +842,7 @@ static unsigned long __set_pages_state(struct snp_psc_d= esc *data, unsigned long =20 /* Page validation must be rescinded before changing to shared */ if (op =3D=3D SNP_PAGE_STATE_SHARED) - pvalidate_pages(data); + pvalidate_pages(data, temp_vaddr); =20 local_irq_save(flags); =20 @@ -862,12 +862,13 @@ static unsigned long __set_pages_state(struct snp_psc= _desc *data, unsigned long =20 /* Page validation must be performed after changing to private */ if (op =3D=3D SNP_PAGE_STATE_PRIVATE) - pvalidate_pages(data); + pvalidate_pages(data, temp_vaddr); =20 return vaddr; } =20 -static void set_pages_state(unsigned long vaddr, unsigned long npages, int= op) +static void set_pages_state(unsigned long vaddr, unsigned long npages, + int op, unsigned long temp_vaddr) { struct snp_psc_desc desc; unsigned long vaddr_end; @@ -880,23 +881,30 @@ static void set_pages_state(unsigned long vaddr, unsi= gned long npages, int op) vaddr_end =3D vaddr + (npages << PAGE_SHIFT); =20 while (vaddr < vaddr_end) - vaddr =3D __set_pages_state(&desc, vaddr, vaddr_end, op); + vaddr =3D __set_pages_state(&desc, vaddr, vaddr_end, + op, temp_vaddr); } =20 void snp_set_memory_shared(unsigned long vaddr, unsigned long npages) { - if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) - return; + struct vm_struct *area; + unsigned long temp_vaddr; =20 - set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED); + area =3D get_vm_area(PAGE_SIZE * (PTRS_PER_PMD + 1), 0); + temp_vaddr =3D ALIGN((unsigned long)(area->addr + PAGE_SIZE), PMD_SIZE); + set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED, temp_vaddr); + free_vm_area(area); } =20 void snp_set_memory_private(unsigned long vaddr, unsigned long npages) { - if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) - return; + struct vm_struct *area; + unsigned long temp_vaddr; =20 - set_pages_state(vaddr, npages, 
SNP_PAGE_STATE_PRIVATE);
+	area = get_vm_area(PAGE_SIZE * (PTRS_PER_PMD + 1), 0);
+	temp_vaddr = ALIGN((unsigned long)(area->addr + PAGE_SIZE), PMD_SIZE);
+	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE, temp_vaddr);
+	free_vm_area(area);
 }
 
 void snp_accept_memory(phys_addr_t start, phys_addr_t end)
@@ -909,7 +917,7 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
 	vaddr = (unsigned long)__va(start);
 	npages = (end - start) >> PAGE_SHIFT;
 
-	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
+	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE, 0);
 }
 
 static int snp_set_vmsa(void *va, bool vmsa)
-- 
2.25.1
From: mhkelley58@gmail.com
Reply-To: mhklinux@outlook.com
Subject: [PATCH v2 5/8] x86/mm: Mark CoCo VM pages not present while changing encrypted state
Date: Tue, 21 Nov 2023 13:20:13 -0800
Message-Id: <20231121212016.1154303-6-mhklinux@outlook.com>
In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com>

From: Michael Kelley

In a CoCo VM when a page transitions from encrypted to decrypted, or vice
versa, attributes in the PTE must be updated *and* the hypervisor must be
notified of the change. Because there are two separate steps, there's a
window where the settings are inconsistent. Normally the code that
initiates the transition (via set_memory_decrypted() or
set_memory_encrypted()) ensures that the memory is not being accessed
during a transition, so the window of inconsistency is not a problem.
However, the load_unaligned_zeropad() function can read arbitrary memory
pages at arbitrary times, which could access a transitioning page during
the window. In such a case, CoCo VM specific exceptions are taken
(depending on the CoCo architecture in use). Current code in those
exception handlers recovers and does "fixup" on the result returned by
load_unaligned_zeropad().

Unfortunately, this exception handling can't work in paravisor scenarios
(TDX Partitioning and SEV-SNP in vTOM mode) if the exceptions are routed
to the paravisor. The paravisor can't do the load_unaligned_zeropad()
fixup, so the exceptions would need to be forwarded from the paravisor to
the Linux guest, but there are no architectural specs for how to do that.

Fortunately, there's a simpler way to solve the problem by changing the
core transition code in __set_memory_enc_pgtable() to do the following:

1. Remove aliasing mappings
2. Flush the data cache if needed
3. Remove the PRESENT bit from the PTEs of all transitioning pages
4. Notify the hypervisor of the impending encryption status change
5. Set/clear the encryption attribute as appropriate
6. Flush the TLB so the changed encryption attribute isn't visible
7. Notify the hypervisor after the encryption status change
8. Add back the PRESENT bit, making the changed attribute visible

With this approach, load_unaligned_zeropad() just takes its normal
page-fault-based fixup path if it touches a page that is transitioning.
As a result, load_unaligned_zeropad() and CoCo VM page transitioning are
completely decoupled.
CoCo VM page transitions can proceed without needing to handle architecture-specific exceptions and fix things up. This decoupling reduces the complexity due to separate TDX and SEV-SNP fixup paths, and gives more freedom to revise and introduce new capabilities in future versions of the TDX and SEV-SNP architectures. Paravisor scenarios work properly without needing to forward exceptions. Because Step 3 always does a TLB flush, the separate TLB flush callback is no longer required and is removed. Signed-off-by: Michael Kelley --- arch/x86/coco/tdx/tdx.c | 20 -------------- arch/x86/hyperv/ivm.c | 6 ----- arch/x86/include/asm/x86_init.h | 2 -- arch/x86/kernel/x86_init.c | 2 -- arch/x86/mm/mem_encrypt_amd.c | 6 ----- arch/x86/mm/pat/set_memory.c | 48 ++++++++++++++++++++++++--------- 6 files changed, 35 insertions(+), 49 deletions(-) diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c index 1b5d17a9f70d..39ead21bcba6 100644 --- a/arch/x86/coco/tdx/tdx.c +++ b/arch/x86/coco/tdx/tdx.c @@ -697,24 +697,6 @@ bool tdx_handle_virt_exception(struct pt_regs *regs, s= truct ve_info *ve) return true; } =20 -static bool tdx_tlb_flush_required(bool private) -{ - /* - * TDX guest is responsible for flushing TLB on private->shared - * transition. VMM is responsible for flushing on shared->private. - * - * The VMM _can't_ flush private addresses as it can't generate PAs - * with the guest's HKID. Shared memory isn't subject to integrity - * checking, i.e. the VMM doesn't need to flush for its own protection. - * - * There's no need to flush when converting from shared to private, - * as flushing is the VMM's responsibility in this case, e.g. it must - * flush to avoid integrity failures in the face of a buggy or - * malicious guest. - */ - return !private; -} - static bool tdx_cache_flush_required(void) { /* @@ -876,9 +858,7 @@ void __init tdx_early_init(void) */ x86_platform.guest.enc_status_change_prepare =3D tdx_enc_status_change_pr= epare; x86_platform.guest.enc_status_change_finish =3D tdx_enc_status_change_fi= nish; - x86_platform.guest.enc_cache_flush_required =3D tdx_cache_flush_required; - x86_platform.guest.enc_tlb_flush_required =3D tdx_tlb_flush_required; =20 /* * TDX intercepts the RDMSR to read the X2APIC ID in the parallel diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 8ba18635e338..4005c573e00c 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -550,11 +550,6 @@ static bool hv_vtom_set_host_visibility(unsigned long = kbuffer, int pagecount, bo return result; } =20 -static bool hv_vtom_tlb_flush_required(bool private) -{ - return true; -} - static bool hv_vtom_cache_flush_required(void) { return false; @@ -614,7 +609,6 @@ void __init hv_vtom_init(void) =20 x86_platform.hyper.is_private_mmio =3D hv_is_private_mmio; x86_platform.guest.enc_cache_flush_required =3D hv_vtom_cache_flush_requi= red; - x86_platform.guest.enc_tlb_flush_required =3D hv_vtom_tlb_flush_required; x86_platform.guest.enc_status_change_finish =3D hv_vtom_set_host_visibili= ty; =20 /* Set WB as the default cache mode. 
*/ diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_ini= t.h index c878616a18b8..5b3a9a214815 100644 --- a/arch/x86/include/asm/x86_init.h +++ b/arch/x86/include/asm/x86_init.h @@ -146,13 +146,11 @@ struct x86_init_acpi { * * @enc_status_change_prepare Notify HV before the encryption status of a = range is changed * @enc_status_change_finish Notify HV after the encryption status of a ra= nge is changed - * @enc_tlb_flush_required Returns true if a TLB flush is needed before ch= anging page encryption status * @enc_cache_flush_required Returns true if a cache flush is needed befor= e changing page encryption status */ struct x86_guest { bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool e= nc); bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool en= c); - bool (*enc_tlb_flush_required)(bool enc); bool (*enc_cache_flush_required)(void); }; =20 diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c index a37ebd3b4773..1c0d23a2b6cf 100644 --- a/arch/x86/kernel/x86_init.c +++ b/arch/x86/kernel/x86_init.c @@ -133,7 +133,6 @@ static void default_nmi_init(void) { }; =20 static bool enc_status_change_prepare_noop(unsigned long vaddr, int npages= , bool enc) { return true; } static bool enc_status_change_finish_noop(unsigned long vaddr, int npages,= bool enc) { return true; } -static bool enc_tlb_flush_required_noop(bool enc) { return false; } static bool enc_cache_flush_required_noop(void) { return false; } static bool is_private_mmio_noop(u64 addr) {return false; } =20 @@ -156,7 +155,6 @@ struct x86_platform_ops x86_platform __ro_after_init = =3D { .guest =3D { .enc_status_change_prepare =3D enc_status_change_prepare_noop, .enc_status_change_finish =3D enc_status_change_finish_noop, - .enc_tlb_flush_required =3D enc_tlb_flush_required_noop, .enc_cache_flush_required =3D enc_cache_flush_required_noop, }, }; diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c index a68f2dda0948..652cc61b89b6 100644 --- a/arch/x86/mm/mem_encrypt_amd.c +++ b/arch/x86/mm/mem_encrypt_amd.c @@ -242,11 +242,6 @@ static unsigned long pg_level_to_pfn(int level, pte_t = *kpte, pgprot_t *ret_prot) return pfn; } =20 -static bool amd_enc_tlb_flush_required(bool enc) -{ - return true; -} - static bool amd_enc_cache_flush_required(void) { return !cpu_feature_enabled(X86_FEATURE_SME_COHERENT); @@ -464,7 +459,6 @@ void __init sme_early_init(void) =20 x86_platform.guest.enc_status_change_prepare =3D amd_enc_status_change_pr= epare; x86_platform.guest.enc_status_change_finish =3D amd_enc_status_change_fi= nish; - x86_platform.guest.enc_tlb_flush_required =3D amd_enc_tlb_flush_requir= ed; x86_platform.guest.enc_cache_flush_required =3D amd_enc_cache_flush_requ= ired; =20 /* diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index d7ef8d312a47..b125035608d5 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -2019,6 +2019,11 @@ int set_memory_wb(unsigned long addr, int numpages) } EXPORT_SYMBOL(set_memory_wb); =20 +static int set_memory_p(unsigned long *addr, int numpages) +{ + return change_page_attr_set(addr, numpages, __pgprot(_PAGE_PRESENT), 0); +} + /* Prevent speculative access to a page by marking it not-present */ #ifdef CONFIG_X86_64 int set_mce_nospec(unsigned long pfn) @@ -2049,11 +2054,6 @@ int set_mce_nospec(unsigned long pfn) return rc; } =20 -static int set_memory_p(unsigned long *addr, int numpages) -{ - return change_page_attr_set(addr, numpages, __pgprot(_PAGE_PRESENT), 
0); -} - /* Restore full speculative operation to the pfn. */ int clear_mce_nospec(unsigned long pfn) { @@ -2144,6 +2144,23 @@ static int __set_memory_enc_pgtable(unsigned long ad= dr, int numpages, bool enc) if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr)) addr &=3D PAGE_MASK; =20 + /* + * The caller must ensure that the memory being transitioned between + * encrypted and decrypted is not being accessed. But if + * load_unaligned_zeropad() touches the "next" page, it may generate a + * read access the caller has no control over. To ensure such accesses + * cause a normal page fault for the load_unaligned_zeropad() handler, + * mark the pages not present until the transition is complete. We + * don't want a #VE or #VC fault due to a mismatch in the memory + * encryption status, since paravisor configurations can't cleanly do + * the load_unaligned_zeropad() handling in the paravisor. + * + * set_memory_np() flushes the TLB. + */ + ret =3D set_memory_np(addr, numpages); + if (ret) + return ret; + memset(&cpa, 0, sizeof(cpa)); cpa.vaddr =3D &addr; cpa.numpages =3D numpages; @@ -2156,14 +2173,16 @@ static int __set_memory_enc_pgtable(unsigned long a= ddr, int numpages, bool enc) vm_unmap_aliases(); =20 /* Flush the caches as needed before changing the encryption attribute. */ - if (x86_platform.guest.enc_tlb_flush_required(enc)) - cpa_flush(&cpa, x86_platform.guest.enc_cache_flush_required()); + if (x86_platform.guest.enc_cache_flush_required()) + cpa_flush(&cpa, 1); =20 /* Notify hypervisor that we are about to set/clr encryption attribute. */ if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc)) return -EIO; =20 ret =3D __change_page_attr_set_clr(&cpa, 1); + if (ret) + return ret; =20 /* * After changing the encryption attribute, we need to flush TLBs again @@ -2174,13 +2193,16 @@ static int __set_memory_enc_pgtable(unsigned long a= ddr, int numpages, bool enc) */ cpa_flush(&cpa, 0); =20 - /* Notify hypervisor that we have successfully set/clr encryption attribu= te. */ - if (!ret) { - if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc)) - ret =3D -EIO; - } + /* Notify hypervisor that we have successfully set/clr encryption attr. */ + if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc)) + return -EIO; =20 - return ret; + /* + * Now that the hypervisor is sync'ed with the page table changes + * made here, add back _PAGE_PRESENT. set_memory_p() does not flush + * the TLB. 
+	 */
+	return set_memory_p(&addr, numpages);
 }
 
 static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
-- 
2.25.1
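To make the new ordering concrete, here is a condensed sketch of
__set_memory_enc_pgtable() as modified by the preceding patch (steps 1-2 of
the list in the commit message, the cpa setup, and some error handling are
elided; the diff above is the authoritative code):

	/* Step 3: clear _PAGE_PRESENT on the range; set_memory_np() flushes the TLB */
	ret = set_memory_np(addr, numpages);
	if (ret)
		return ret;

	/* Step 4: tell the hypervisor the encryption status is about to change */
	if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
		return -EIO;

	/* Step 5: flip the encryption attribute in the (not-present) PTEs */
	ret = __change_page_attr_set_clr(&cpa, 1);
	if (ret)
		return ret;

	/* Step 6: flush the TLB so the old encryption attribute isn't visible */
	cpa_flush(&cpa, 0);

	/* Step 7: tell the hypervisor the change is complete */
	if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc))
		return -EIO;

	/* Step 8: add _PAGE_PRESENT back, making the new attribute visible */
	return set_memory_p(&addr, numpages);
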
From: mhkelley58@gmail.com
Reply-To: mhklinux@outlook.com
Subject: [PATCH v2 6/8] x86/mm: Merge CoCo prepare and finish hypervisor callbacks
Date: Tue, 21 Nov 2023 13:20:14 -0800
Message-Id: <20231121212016.1154303-7-mhklinux@outlook.com>
In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com>

From: Michael Kelley

With CoCo VM pages being marked not present when changing between
encrypted and decrypted, the order of updating the guest PTEs and
notifying the hypervisor doesn't matter. As such, only a single
hypervisor callback is needed, rather than one before and one after the
PTE update. Simplify the code by eliminating the extra hypervisor
callback and merging the TDX and SEV-SNP code that handles the before
and after cases.

Eliminating the additional callback also allows optimizing PTE
manipulation when changing between encrypted and decrypted. The initial
marking of the PTE as not present and the changes to the
encryption-related protection flags can be done as a single operation.

Signed-off-by: Michael Kelley
---
 arch/x86/coco/tdx/tdx.c         | 46 +--------------------------
 arch/x86/include/asm/sev.h      |  6 ++---
 arch/x86/include/asm/x86_init.h |  2 --
 arch/x86/kernel/sev.c           | 17 +++---------
 arch/x86/kernel/x86_init.c      |  2 --
 arch/x86/mm/mem_encrypt_amd.c   | 17 ++----------
 arch/x86/mm/pat/set_memory.c    | 31 ++++++++++------------
 7 files changed, 22 insertions(+), 99 deletions(-)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 39ead21bcba6..12fbc6824fb3 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -779,30 +779,6 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
 	return true;
 }
 
-static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
-					  bool enc)
-{
-	/*
-	 * Only handle shared->private conversion here.
-	 * See the comment in tdx_early_init().
-	 */
-	if (enc)
-		return tdx_enc_status_changed(vaddr, numpages, enc);
-	return true;
-}
-
-static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
-					 bool enc)
-{
-	/*
-	 * Only handle private->shared conversion here.
-	 * See the comment in tdx_early_init().
- */ - if (!enc) - return tdx_enc_status_changed(vaddr, numpages, enc); - return true; -} - void __init tdx_early_init(void) { struct tdx_module_args args =3D { @@ -837,27 +813,7 @@ void __init tdx_early_init(void) */ physical_mask &=3D cc_mask - 1; =20 - /* - * The kernel mapping should match the TDX metadata for the page. - * load_unaligned_zeropad() can touch memory *adjacent* to that which is - * owned by the caller and can catch even _momentary_ mismatches. Bad - * things happen on mismatch: - * - * - Private mapping =3D> Shared Page =3D=3D Guest shutdown - * - Shared mapping =3D> Private Page =3D=3D Recoverable #VE - * - * guest.enc_status_change_prepare() converts the page from - * shared=3D>private before the mapping becomes private. - * - * guest.enc_status_change_finish() converts the page from - * private=3D>shared after the mapping becomes private. - * - * In both cases there is a temporary shared mapping to a private page, - * which can result in a #VE. But, there is never a private mapping to - * a shared page. - */ - x86_platform.guest.enc_status_change_prepare =3D tdx_enc_status_change_pr= epare; - x86_platform.guest.enc_status_change_finish =3D tdx_enc_status_change_fi= nish; + x86_platform.guest.enc_status_change_finish =3D tdx_enc_status_changed; x86_platform.guest.enc_cache_flush_required =3D tdx_cache_flush_required; =20 /* diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h index 5b4a1ce3d368..1183b470d090 100644 --- a/arch/x86/include/asm/sev.h +++ b/arch/x86/include/asm/sev.h @@ -204,8 +204,7 @@ void __init early_snp_set_memory_private(unsigned long = vaddr, unsigned long padd void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long= paddr, unsigned long npages); void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc= _op op); -void snp_set_memory_shared(unsigned long vaddr, unsigned long npages); -void snp_set_memory_private(unsigned long vaddr, unsigned long npages); +void snp_set_memory(unsigned long vaddr, unsigned long npages, bool enc); void snp_set_wakeup_secondary_cpu(void); bool snp_init(struct boot_params *bp); void __init __noreturn snp_abort(void); @@ -228,8 +227,7 @@ early_snp_set_memory_private(unsigned long vaddr, unsig= ned long paddr, unsigned static inline void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsi= gned long npages) { } static inline void __init snp_prep_memory(unsigned long paddr, unsigned in= t sz, enum psc_op op) { } -static inline void snp_set_memory_shared(unsigned long vaddr, unsigned lon= g npages) { } -static inline void snp_set_memory_private(unsigned long vaddr, unsigned lo= ng npages) { } +static inline void snp_set_memory(unsigned long vaddr, unsigned long npage= s, bool enc) { } static inline void snp_set_wakeup_secondary_cpu(void) { } static inline bool snp_init(struct boot_params *bp) { return false; } static inline void snp_abort(void) { } diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_ini= t.h index 5b3a9a214815..3e15c4c9ab49 100644 --- a/arch/x86/include/asm/x86_init.h +++ b/arch/x86/include/asm/x86_init.h @@ -144,12 +144,10 @@ struct x86_init_acpi { /** * struct x86_guest - Functions used by misc guest incarnations like SEV, = TDX, etc. 
* - * @enc_status_change_prepare Notify HV before the encryption status of a = range is changed * @enc_status_change_finish Notify HV after the encryption status of a ra= nge is changed * @enc_cache_flush_required Returns true if a cache flush is needed befor= e changing page encryption status */ struct x86_guest { - bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool e= nc); bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool en= c); bool (*enc_cache_flush_required)(void); }; diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 08b2e2a0d67d..9569fd6e968a 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -885,25 +885,16 @@ static void set_pages_state(unsigned long vaddr, unsi= gned long npages, op, temp_vaddr); } =20 -void snp_set_memory_shared(unsigned long vaddr, unsigned long npages) +void snp_set_memory(unsigned long vaddr, unsigned long npages, bool enc) { struct vm_struct *area; unsigned long temp_vaddr; + enum psc_op op; =20 area =3D get_vm_area(PAGE_SIZE * (PTRS_PER_PMD + 1), 0); temp_vaddr =3D ALIGN((unsigned long)(area->addr + PAGE_SIZE), PMD_SIZE); - set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED, temp_vaddr); - free_vm_area(area); -} - -void snp_set_memory_private(unsigned long vaddr, unsigned long npages) -{ - struct vm_struct *area; - unsigned long temp_vaddr; - - area =3D get_vm_area(PAGE_SIZE * (PTRS_PER_PMD + 1), 0); - temp_vaddr =3D ALIGN((unsigned long)(area->addr + PAGE_SIZE), PMD_SIZE); - set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE, temp_vaddr); + op =3D enc ? SNP_PAGE_STATE_PRIVATE : SNP_PAGE_STATE_SHARED; + set_pages_state(vaddr, npages, op, temp_vaddr); free_vm_area(area); } =20 diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c index 1c0d23a2b6cf..cf5179bb1857 100644 --- a/arch/x86/kernel/x86_init.c +++ b/arch/x86/kernel/x86_init.c @@ -131,7 +131,6 @@ struct x86_cpuinit_ops x86_cpuinit =3D { =20 static void default_nmi_init(void) { }; =20 -static bool enc_status_change_prepare_noop(unsigned long vaddr, int npages= , bool enc) { return true; } static bool enc_status_change_finish_noop(unsigned long vaddr, int npages,= bool enc) { return true; } static bool enc_cache_flush_required_noop(void) { return false; } static bool is_private_mmio_noop(u64 addr) {return false; } @@ -153,7 +152,6 @@ struct x86_platform_ops x86_platform __ro_after_init = =3D { .hyper.is_private_mmio =3D is_private_mmio_noop, =20 .guest =3D { - .enc_status_change_prepare =3D enc_status_change_prepare_noop, .enc_status_change_finish =3D enc_status_change_finish_noop, .enc_cache_flush_required =3D enc_cache_flush_required_noop, }, diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c index 652cc61b89b6..90753a27eb53 100644 --- a/arch/x86/mm/mem_encrypt_amd.c +++ b/arch/x86/mm/mem_encrypt_amd.c @@ -277,18 +277,6 @@ static void enc_dec_hypercall(unsigned long vaddr, uns= igned long size, bool enc) #endif } =20 -static bool amd_enc_status_change_prepare(unsigned long vaddr, int npages,= bool enc) -{ - /* - * To maintain the security guarantees of SEV-SNP guests, make sure - * to invalidate the memory before encryption attribute is cleared. 
- */ - if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !enc) - snp_set_memory_shared(vaddr, npages); - - return true; -} - /* Return true unconditionally: return value doesn't matter for the SEV si= de */ static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, = bool enc) { @@ -296,8 +284,8 @@ static bool amd_enc_status_change_finish(unsigned long = vaddr, int npages, bool e * After memory is mapped encrypted in the page table, validate it * so that it is consistent with the page table updates. */ - if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && enc) - snp_set_memory_private(vaddr, npages); + if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + snp_set_memory(vaddr, npages, enc); =20 if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc); @@ -457,7 +445,6 @@ void __init sme_early_init(void) /* Update the protection map with memory encryption mask */ add_encrypt_protection_map(); =20 - x86_platform.guest.enc_status_change_prepare =3D amd_enc_status_change_pr= epare; x86_platform.guest.enc_status_change_finish =3D amd_enc_status_change_fi= nish; x86_platform.guest.enc_cache_flush_required =3D amd_enc_cache_flush_requ= ired; =20 diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index b125035608d5..c13178f37b13 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -2144,6 +2144,10 @@ static int __set_memory_enc_pgtable(unsigned long ad= dr, int numpages, bool enc) if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr)) addr &=3D PAGE_MASK; =20 + memset(&cpa, 0, sizeof(cpa)); + cpa.vaddr =3D &addr; + cpa.numpages =3D numpages; + /* * The caller must ensure that the memory being transitioned between * encrypted and decrypted is not being accessed. But if @@ -2155,17 +2159,12 @@ static int __set_memory_enc_pgtable(unsigned long a= ddr, int numpages, bool enc) * encryption status, since paravisor configurations can't cleanly do * the load_unaligned_zeropad() handling in the paravisor. * - * set_memory_np() flushes the TLB. + * There's no requirement to do so, but for efficiency we can clear + * _PAGE_PRESENT and set/clr encryption attr as a single operation. */ - ret =3D set_memory_np(addr, numpages); - if (ret) - return ret; - - memset(&cpa, 0, sizeof(cpa)); - cpa.vaddr =3D &addr; - cpa.numpages =3D numpages; cpa.mask_set =3D enc ? pgprot_encrypted(empty) : pgprot_decrypted(empty); - cpa.mask_clr =3D enc ? pgprot_decrypted(empty) : pgprot_encrypted(empty); + cpa.mask_clr =3D enc ? pgprot_decrypted(__pgprot(_PAGE_PRESENT)) : + pgprot_encrypted(__pgprot(_PAGE_PRESENT)); cpa.pgd =3D init_mm.pgd; =20 /* Must avoid aliasing mappings in the highmem code */ @@ -2176,20 +2175,16 @@ static int __set_memory_enc_pgtable(unsigned long a= ddr, int numpages, bool enc) if (x86_platform.guest.enc_cache_flush_required()) cpa_flush(&cpa, 1); =20 - /* Notify hypervisor that we are about to set/clr encryption attribute. */ - if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc)) - return -EIO; - ret =3D __change_page_attr_set_clr(&cpa, 1); if (ret) return ret; =20 /* - * After changing the encryption attribute, we need to flush TLBs again - * in case any speculative TLB caching occurred (but no need to flush - * caches again). We could just use cpa_flush_all(), but in case TLB - * flushing gets optimized in the cpa_flush() path use the same logic - * as above. 
+ * After clearing _PAGE_PRESENT and changing the encryption attribute, + * we need to flush TLBs to ensure no further accesses to the memory can + * be made with the old encryption attribute (but no need to flush caches + * again). We could just use cpa_flush_all(), but in case TLB flushing + * gets optimized in the cpa_flush() path use the same logic as above. */ cpa_flush(&cpa, 0); =20 --=20 2.25.1 From nobody Tue Dec 16 20:16:10 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8CAABC61D90 for ; Tue, 21 Nov 2023 21:20:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234728AbjKUVVB (ORCPT ); Tue, 21 Nov 2023 16:21:01 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43396 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234583AbjKUVUu (ORCPT ); Tue, 21 Nov 2023 16:20:50 -0500 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9752CD50; Tue, 21 Nov 2023 13:20:42 -0800 (PST) Received: by mail-pl1-x636.google.com with SMTP id d9443c01a7336-1cc5fa0e4d5so51073425ad.0; Tue, 21 Nov 2023 13:20:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700601642; x=1701206442; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SKZ1/19bZDjXmVD1CEbmEpv81ZSrP5Qqz0tPf+dknk4=; b=G4t36tCHg0fwkGnEShyrhGWezWsXVxN2qVmH10CL3JfF2Q7uESiu9nvcM9EMkRYYIM g94Mz4SwhQligb30bqks2ddGGNw5+QOtQrS9zOb8ssvy00WkU7vrh8UUK57qcBSJQTUn TmAT3O5n7Ig2LH7Q9OdHoUzifBgqIm3I4bR9Gm1b5P5Kfukql6TGt6fZbs55aAf8gBtx Fp4T5uYbSndgbh+3ueIcvUfypwPZVB4Dt2z7j3wZpKSof3JwUcXdis0LaF5SA+XaUl6u GVcVqYbeQEDcpaymbwJR92CP492vBTim00yrpL/2Fvbyg35VrU88YOOM5rp53cvNYuAs 35zQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700601642; x=1701206442; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:to:from:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=SKZ1/19bZDjXmVD1CEbmEpv81ZSrP5Qqz0tPf+dknk4=; b=o1CywgdMEeafBO4z/mUJaVTntSseBDuDd+3V+wcK2+W2ZpbxaMYmN5WPVgTK9BMFqj RLviAI1hVgpSFcKvQaPa40qnqCo5E4Aec0Hm8J3jbSbvT6+Q0t0qBHCTUZSYx/M6RTa5 iA6lsQPyYtSiZzXlP5o6v9iD1GmewyzJHKXSzAm9NPSks28FCEqD+9ArgV8LL29hrWbV AChDYUlP6ECndSfQBLsbRUEJCwMO3pGmflaf/xTWXR2XDSTT445IPEOy82EChH97z0Mw NOCUInXGWsaFN43sEB8Z4887FiWAchsYtHtnosUK/ps6v9E5+QO1TcUCijA1ugo/niLd llQw== X-Gm-Message-State: AOJu0Yx81zTC7iuCvZqeLgjJlgrVkc0EEMTQG5X1A7PbAHBYjE4pJw3F yO+zLvYiCjt7haCc4choGh4= X-Google-Smtp-Source: AGHT+IGz0GBc2L6fKdHOp277El3AvOBOL/M7tgvndLmzeYqXyjHTZ5visODxoyvzOD8ddZlnIN7psg== X-Received: by 2002:a17:903:2281:b0:1cf:6315:8698 with SMTP id b1-20020a170903228100b001cf63158698mr401033plh.42.1700601641967; Tue, 21 Nov 2023 13:20:41 -0800 (PST) Received: from localhost.localdomain (c-73-254-87-52.hsd1.wa.comcast.net. 
[73.254.87.52]) by smtp.gmail.com with ESMTPSA id j2-20020a170902758200b001bf52834696sm8281924pll.207.2023.11.21.13.20.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Nov 2023 13:20:41 -0800 (PST) From: mhkelley58@gmail.com X-Google-Original-From: mhklinux@outlook.com To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, kirill.shutemov@linux.intel.com, kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org, peterz@infradead.org, akpm@linux-foundation.org, urezki@gmail.com, hch@infradead.org, lstoakes@gmail.com, thomas.lendacky@amd.com, ardb@kernel.org, jroedel@suse.de, seanjc@google.com, rick.p.edgecombe@intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev, linux-hyperv@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 7/8] x86/mm: Remove unnecessary call layer for __set_memory_enc_pgtable() Date: Tue, 21 Nov 2023 13:20:15 -0800 Message-Id: <20231121212016.1154303-8-mhklinux@outlook.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com> References: <20231121212016.1154303-1-mhklinux@outlook.com> Reply-To: mhklinux@outlook.com MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Michael Kelley __set_memory_enc_pgtable() is only called from __set_memory_enc_dec() after doing a simple validation check. Prior to commit 812b0597fb40, __set_memory_enc_dec() did more complex checking, but now the code can be simplified by collapsing the two functions. No functional change. Signed-off-by: Michael Kelley --- arch/x86/mm/pat/set_memory.c | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-) diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index c13178f37b13..7365c86a7ff0 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -2131,15 +2131,18 @@ int set_memory_global(unsigned long addr, int numpa= ges) } =20 /* - * __set_memory_enc_pgtable() is used for the hypervisors that get + * __set_memory_enc_dec() is used for the hypervisors that get * informed about "encryption" status via page tables. 
*/ -static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool= enc) +static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) { pgprot_t empty =3D __pgprot(0); struct cpa_data cpa; int ret; =20 + if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT)) + return 0; + /* Should not be working on unaligned addresses */ if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr)) addr &=3D PAGE_MASK; @@ -2200,14 +2203,6 @@ static int __set_memory_enc_pgtable(unsigned long ad= dr, int numpages, bool enc) return set_memory_p(&addr, numpages); } =20 -static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) -{ - if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) - return __set_memory_enc_pgtable(addr, numpages, enc); - - return 0; -} - int set_memory_encrypted(unsigned long addr, int numpages) { return __set_memory_enc_dec(addr, numpages, true); --=20 2.25.1 From nobody Tue Dec 16 20:16:10 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4D515C61D85 for ; Tue, 21 Nov 2023 21:21:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234741AbjKUVVE (ORCPT ); Tue, 21 Nov 2023 16:21:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43314 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234587AbjKUVUu (ORCPT ); Tue, 21 Nov 2023 16:20:50 -0500 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B81C7D76; Tue, 21 Nov 2023 13:20:43 -0800 (PST) Received: by mail-pl1-x62d.google.com with SMTP id d9443c01a7336-1cc5b705769so53531915ad.0; Tue, 21 Nov 2023 13:20:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700601643; x=1701206443; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:to:from:from:to:cc:subject:date :message-id:reply-to; bh=r8UP31zz2tzNyQ1XI1JgNxsSlBbVYVcKRejtDKCckFA=; b=EQvFRD4xS0Od02YOZ89FSXUma5aS1JF8NLEsj3EHY2PokkTlrckc/oaQGvN+T+GqIz oUbBAQmU9m9bdG5GULvBeGmc6ddF8O+8dhT9wiUOrikXEd6vGDCk5Z6qT+oXzonIeOBj sOmUMfxnplAyO80Wlrx/LVHjJBNH+ab2kZSmll2Qo3gIlRYBFdNnnbh1yfLiC2JxDgE5 dE7UpZ5X9IOf/ISlPzN8MthizhHvgM+MZMeAYGzmJ0eg/dDcgtwBh7hLmO4CHcW7Ads0 sSYWh9MgW/CnXlhPB03cNXdABx5fmbDglcPoD1p6zruaw3ieE38hwDQLGPopCnWxHose QMTg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700601643; x=1701206443; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:to:from:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=r8UP31zz2tzNyQ1XI1JgNxsSlBbVYVcKRejtDKCckFA=; b=stjgxx6USQfk8Ap7GZ8JXFAId15KpZAxbEGFul8xgAbgGYyERhrq1DfeHB8smYJrKR s48wbyhlm2yr4tuOlAwVyX9l2A7Jj7DhKOYdKSpZSTb5HmjNyyyAfe/5ntNHhvNQtApA WRQRn4YysUeFuzhc9hCV3h1Az9H93LYAnsDkewEkglwdoTaBpCOdKG6wMAPhTR7nohhQ bw94a7c1ym2kI98sOw7q0oYFvFCIR1tWla4SzoaRN2j8Et2yQ+GjKWKCbIxrGWeEiI5v 3TcwmZf7tqtY3QRRqe3QLvEzgkL/OgSYjJXySmL6VsS9PBxlDBAbdXmLek+zaMUlpL4K Vfng== X-Gm-Message-State: AOJu0Yxvf3FnvzfvVXeqj6uecIVACvPFi75j4wNyPMFkU+yXHyqi7BKp a/QEEzMKydsIH6ohs4Z2zLc= X-Google-Smtp-Source: AGHT+IGw2VRIF1KG5D53DbB1/O4g2GnYpBfD7Y44LMA28ENuVB/Zgq+24fLWa4HnCMkuyOoJyBsYkg== X-Received: by 2002:a17:903:1107:b0:1cf:73ff:b196 with SMTP id 
n7-20020a170903110700b001cf73ffb196mr470013plh.8.1700601643193; Tue, 21 Nov 2023 13:20:43 -0800 (PST) Received: from localhost.localdomain (c-73-254-87-52.hsd1.wa.comcast.net. [73.254.87.52]) by smtp.gmail.com with ESMTPSA id j2-20020a170902758200b001bf52834696sm8281924pll.207.2023.11.21.13.20.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Nov 2023 13:20:42 -0800 (PST) From: mhkelley58@gmail.com X-Google-Original-From: mhklinux@outlook.com To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, kirill.shutemov@linux.intel.com, kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org, peterz@infradead.org, akpm@linux-foundation.org, urezki@gmail.com, hch@infradead.org, lstoakes@gmail.com, thomas.lendacky@amd.com, ardb@kernel.org, jroedel@suse.de, seanjc@google.com, rick.p.edgecombe@intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev, linux-hyperv@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 8/8] x86/mm: Add comments about errors in set_memory_decrypted()/encrypted() Date: Tue, 21 Nov 2023 13:20:16 -0800 Message-Id: <20231121212016.1154303-9-mhklinux@outlook.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com> References: <20231121212016.1154303-1-mhklinux@outlook.com> Reply-To: mhklinux@outlook.com MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Michael Kelley The functions set_memory_decrypted()/encrypted() may leave the input memory range in an inconsistent state if an error occurs. Add comments describing the situation and what callers must be aware of. Also add comments in __set_memory_enc_dec() with more details on the issues and why further investment in error handling is not likely to be useful. No functional change. Suggested-by: Rick Edgecombe Signed-off-by: Michael Kelley --- arch/x86/mm/pat/set_memory.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 7365c86a7ff0..f519e5ca543b 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -2133,6 +2133,24 @@ int set_memory_global(unsigned long addr, int numpag= es) /* * __set_memory_enc_dec() is used for the hypervisors that get * informed about "encryption" status via page tables. + * + * If an error occurs in making the transition between encrypted and + * decrypted, the transitioned memory is left in an indeterminate state. + * The encryption status in the guest page tables may not match the + * hypervisor's view of the encryption status, making the memory unusable. + * If the memory consists of multiple pages, different pages may be in + * different indeterminate states. + * + * It is difficult to recover from errors such that we can ensure + * consistency between the page tables and the hypervisor's view of the encrypti= on + * state. It may not be possible to back out of changes, particularly if t= he + * failure occurs in communicating with the hypervisor. Given this limitat= ion, + * further work on the error handling is not likely to meaningfully improve + * the reliability or usability of the system.
+ * + * Any errors are likely to soon render the VM inoperable, but we return + * an error rather than panicking so that the caller can decide how best + * to shut down cleanly. */ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) { @@ -2203,6 +2221,14 @@ static int __set_memory_enc_dec(unsigned long addr, = int numpages, bool enc) return set_memory_p(&addr, numpages); } =20 +/* + * If set_memory_encrypted()/decrypted() returns an error, the input memory + * range is left in an indeterminate state. The encryption status of pages + * may be inconsistent, so the memory is unusable. The caller should not = try + * to do further operations on the memory, or return it to the free list. + * The memory must be leaked, and the caller should take steps to shut down + * the system as cleanly as possible, since something is seriously wrong. + */ int set_memory_encrypted(unsigned long addr, int numpages) { return __set_memory_enc_dec(addr, numpages, true); --=20 2.25.1
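To illustrate the calling convention that the comments added in patch 8/8 describe, here is a minimal, hypothetical caller-side sketch that flips a buffer to shared (decrypted) and back, and that deliberately leaks the buffer whenever a transition fails. Only set_memory_decrypted()/set_memory_encrypted(), alloc_pages_exact() and free_pages_exact() are existing kernel interfaces; the alloc_shared_buffer()/free_shared_buffer() helper names are illustrative, and the buffer size is assumed to be a multiple of PAGE_SIZE.

#include <linux/gfp.h>
#include <linux/set_memory.h>

/*
 * Hypothetical sketch, not part of the patch series: on any error from the
 * set_memory_*() helpers the pages are in an indeterminate encryption
 * state and must not be returned to the page allocator.  Assumes @size is
 * a multiple of PAGE_SIZE.
 */
static void *alloc_shared_buffer(size_t size)
{
	void *buf = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);

	if (!buf)
		return NULL;

	if (set_memory_decrypted((unsigned long)buf, size >> PAGE_SHIFT)) {
		/* Encryption state unknown: leak the pages, do not free them. */
		return NULL;
	}
	return buf;
}

static void free_shared_buffer(void *buf, size_t size)
{
	if (!buf)
		return;

	if (set_memory_encrypted((unsigned long)buf, size >> PAGE_SHIFT)) {
		/* Re-encryption failed: leak the buffer rather than freeing it. */
		return;
	}
	free_pages_exact(buf, size);
}

Leaking is the lesser evil here: a page whose guest page-table mapping and hypervisor-side state disagree would be unusable, or fatal, for whatever code next received it from the allocator, which is exactly the situation the added comments warn about.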