From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: dave.hansen@intel.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de
Cc: decui@microsoft.com, rick.p.edgecombe@intel.com,
    sathyanarayanan.kuppuswamy@linux.intel.com, seanjc@google.com,
    thomas.lendacky@amd.com, x86@kernel.org, linux-kernel@vger.kernel.org,
    "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, stable@vger.kernel.org
Subject: [PATCHv2 1/3] x86/mm: Allow guest.enc_status_change_prepare() to fail
Date: Fri, 26 May 2023 15:02:23 +0300
Message-Id: <20230526120225.31936-2-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20230526120225.31936-1-kirill.shutemov@linux.intel.com>
References: <20230526120225.31936-1-kirill.shutemov@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

TDX code is going to provide guest.enc_status_change_prepare() that is
able to fail: TDX will use the call to convert the GPA range from shared
to private, and this conversion can fail.

Add a way to return an error from the callback.
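For illustration only (this is not part of the patch; the TDX side lands
later in the series, and tdx_enc_status_changed() is an assumed helper
name), a prepare hook that makes use of the new return value could look
roughly like this:

	/* Sketch, not part of this patch: report conversion failure to the caller. */
	static bool tdx_enc_status_change_prepare(unsigned long vaddr, int npages,
						  bool enc)
	{
		/*
		 * Converting the range to private has to happen before the
		 * page table update, and the conversion itself can fail.
		 */
		if (enc && !tdx_enc_status_changed(vaddr, npages, enc))
			return false;

		return true;
	}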
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/include/asm/x86_init.h | 2 +-
 arch/x86/kernel/x86_init.c      | 2 +-
 arch/x86/mm/mem_encrypt_amd.c   | 4 +++-
 arch/x86/mm/pat/set_memory.c    | 3 ++-
 4 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 88085f369ff6..1ca9701917c5 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -150,7 +150,7 @@ struct x86_init_acpi {
  * @enc_cache_flush_required	Returns true if a cache flush is needed before changing page encryption status
  */
 struct x86_guest {
-	void (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
+	bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
 	bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
 	bool (*enc_tlb_flush_required)(bool enc);
 	bool (*enc_cache_flush_required)(void);
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index d82f4fa2f1bf..f230d4d7d8eb 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -130,7 +130,7 @@ struct x86_cpuinit_ops x86_cpuinit = {
 
 static void default_nmi_init(void) { };
 
-static void enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { }
+static bool enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { return true; }
 static bool enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return false; }
 static bool enc_tlb_flush_required_noop(bool enc) { return false; }
 static bool enc_cache_flush_required_noop(void) { return false; }
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index e0b51c09109f..4f95c449a406 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -319,7 +319,7 @@ static void enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
 #endif
 }
 
-static void amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
+static bool amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
 {
 	/*
 	 * To maintain the security guarantees of SEV-SNP guests, make sure
@@ -327,6 +327,8 @@ static void amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool
 	 */
 	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !enc)
 		snp_set_memory_shared(vaddr, npages);
+
+	return true;
 }
 
 /* Return true unconditionally: return value doesn't matter for the SEV side */
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 7159cf787613..b8f48ebe753c 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2151,7 +2151,8 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	cpa_flush(&cpa, x86_platform.guest.enc_cache_flush_required());
 
 	/* Notify hypervisor that we are about to set/clr encryption attribute. */
-	x86_platform.guest.enc_status_change_prepare(addr, numpages, enc);
+	if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
+		return -EIO;
 
 	ret = __change_page_attr_set_clr(&cpa, 1);
 
-- 
2.39.3
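With the prepare callback able to fail, __set_memory_enc_pgtable() now
returns -EIO to callers of set_memory_encrypted()/set_memory_decrypted()
when the hypervisor notification fails. A caller-side sketch (the
function and variable names are placeholders, not code from this patch):

	static void example_free_shared_buffer(void *buffer, int npages, size_t size)
	{
		/*
		 * Sketch only: if converting the buffer back to private fails,
		 * its encryption status is unknown and the pages must not be
		 * handed back to the page allocator.
		 */
		if (set_memory_encrypted((unsigned long)buffer, npages)) {
			pr_warn("failed to restore encryption state, leaking pages\n");
			return;
		}

		free_pages((unsigned long)buffer, get_order(size));
	}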