From nobody Thu May 7 16:11:59 2026
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org
Cc: ak@linux.intel.com, dan.j.williams@intel.com, david@redhat.com, hpa@zytor.com, linux-kernel@vger.kernel.org, sathyanarayanan.kuppuswamy@linux.intel.com, seanjc@google.com, thomas.lendacky@amd.com, x86@kernel.org
Subject: [PATCHv3 1/3] x86/tdx: Fix early #VE handling
Date: Wed, 25 May 2022 01:10:10 +0300
Message-Id: <20220524221012.62332-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220524221012.62332-1-kirill.shutemov@linux.intel.com>
References: <20220524221012.62332-1-kirill.shutemov@linux.intel.com>

Move RIP in tdx_early_handle_ve() after handling the exception. Failing
to do so leads to an infinite loop of exceptions.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Fixes: 32e72854fa5f ("x86/tdx: Port I/O: Add early boot support")
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/coco/tdx/tdx.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 03deb4d6920d..faae53f8d559 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -447,13 +447,17 @@ static bool handle_io(struct pt_regs *regs, u32 exit_qual)
 __init bool tdx_early_handle_ve(struct pt_regs *regs)
 {
 	struct ve_info ve;
+	bool ret;
 
 	tdx_get_ve_info(&ve);
 
 	if (ve.exit_reason != EXIT_REASON_IO_INSTRUCTION)
 		return false;
 
-	return handle_io(regs, ve.exit_qual);
+	ret = handle_io(regs, ve.exit_qual);
+	if (ret)
+		regs->ip += ve.instr_len;
+	return ret;
 }
 
 void tdx_get_ve_info(struct ve_info *ve)
-- 
2.35.1

From nobody Thu May 7 16:11:59 2026
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org
Cc: ak@linux.intel.com, dan.j.williams@intel.com, david@redhat.com, hpa@zytor.com, linux-kernel@vger.kernel.org, sathyanarayanan.kuppuswamy@linux.intel.com, seanjc@google.com, thomas.lendacky@amd.com, x86@kernel.org
Subject: [PATCHv3 2/3] x86/tdx: Clarify RIP adjustments in #VE handler
Date: Wed, 25 May 2022 01:10:11 +0300
Message-Id: <20220524221012.62332-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220524221012.62332-1-kirill.shutemov@linux.intel.com>
References: <20220524221012.62332-1-kirill.shutemov@linux.intel.com>

After successful #VE handling, tdx_handle_virt_exception() has to move
RIP to the next instruction.
The handler needs to know the length of the instruction.

If the #VE happened due to instruction execution, the GET_VEINFO TDX
module call provides information about the instruction in R10,
including its length.

For a #VE due to an EPT violation, the information in R10 is not usable
and the kernel has to decode the instruction manually to find out its
length.

Restructure the code to make it explicit that the instruction length
depends on the type of #VE: each exit-reason handler now returns the
instruction length on success, or -errno on failure.

Suggested-by: Dave Hansen
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/coco/tdx/tdx.c | 149 +++++++++++++++++++++++++---------------
 1 file changed, 94 insertions(+), 55 deletions(-)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index faae53f8d559..94e447e7f103 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -124,6 +124,22 @@ static u64 get_cc_mask(void)
 	return BIT_ULL(gpa_width - 1);
 }
 
+static int ve_instr_len(struct ve_info *ve)
+{
+	/*
+	 * If the #VE happened due to instruction execution, GET_VEINFO
+	 * provides info on the instruction.
+	 *
+	 * For #VE due to EPT violation, info provided by GET_VEINFO is not
+	 * usable and the kernel has to decode the instruction manually to
+	 * find out its length. Catch such cases.
+	 */
+	if (WARN_ON_ONCE(ve->exit_reason == EXIT_REASON_EPT_VIOLATION))
+		return 0;
+
+	return ve->instr_len;
+}
+
 static u64 __cpuidle __halt(const bool irq_disabled, const bool do_sti)
 {
 	struct tdx_hypercall_args args = {
@@ -147,7 +163,7 @@ static u64 __cpuidle __halt(const bool irq_disabled, const bool do_sti)
 	return __tdx_hypercall(&args, do_sti ? TDX_HCALL_ISSUE_STI : 0);
 }
 
-static bool handle_halt(void)
+static int handle_halt(struct ve_info *ve)
 {
 	/*
 	 * Since non safe halt is mainly used in CPU offlining
@@ -158,9 +174,9 @@ static bool handle_halt(void)
 	const bool do_sti = false;
 
 	if (__halt(irq_disabled, do_sti))
-		return false;
+		return -EIO;
 
-	return true;
+	return ve_instr_len(ve);
 }
 
 void __cpuidle tdx_safe_halt(void)
@@ -180,7 +196,7 @@ void __cpuidle tdx_safe_halt(void)
 	WARN_ONCE(1, "HLT instruction emulation failed\n");
 }
 
-static bool read_msr(struct pt_regs *regs)
+static int read_msr(struct pt_regs *regs, struct ve_info *ve)
 {
 	struct tdx_hypercall_args args = {
 		.r10 = TDX_HYPERCALL_STANDARD,
@@ -194,14 +210,14 @@ static bool read_msr(struct pt_regs *regs)
 	 * (GHCI), section titled "TDG.VP.VMCALL".
 	 */
 	if (__tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT))
-		return false;
+		return -EIO;
 
 	regs->ax = lower_32_bits(args.r11);
 	regs->dx = upper_32_bits(args.r11);
-	return true;
+	return ve_instr_len(ve);
 }
 
-static bool write_msr(struct pt_regs *regs)
+static int write_msr(struct pt_regs *regs, struct ve_info *ve)
 {
 	struct tdx_hypercall_args args = {
 		.r10 = TDX_HYPERCALL_STANDARD,
@@ -215,10 +231,13 @@ static bool write_msr(struct pt_regs *regs)
 	 * can be found in TDX Guest-Host-Communication Interface
 	 * (GHCI) section titled "TDG.VP.VMCALL".
 	 */
-	return !__tdx_hypercall(&args, 0);
+	if (__tdx_hypercall(&args, 0))
+		return -EIO;
+
+	return ve_instr_len(ve);
 }
 
-static bool handle_cpuid(struct pt_regs *regs)
+static int handle_cpuid(struct pt_regs *regs, struct ve_info *ve)
 {
 	struct tdx_hypercall_args args = {
 		.r10 = TDX_HYPERCALL_STANDARD,
@@ -236,7 +255,7 @@ static bool handle_cpuid(struct pt_regs *regs)
 	 */
 	if (regs->ax < 0x40000000 || regs->ax > 0x4FFFFFFF) {
 		regs->ax = regs->bx = regs->cx = regs->dx = 0;
-		return true;
+		return ve_instr_len(ve);
 	}
 
 	/*
@@ -245,7 +264,7 @@ static bool handle_cpuid(struct pt_regs *regs)
 	 * (GHCI), section titled "VP.VMCALL".
 	 */
	if (__tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT))
-		return false;
+		return -EIO;
 
 	/*
 	 * As per TDX GHCI CPUID ABI, r12-r15 registers contain contents of
@@ -257,7 +276,7 @@ static bool handle_cpuid(struct pt_regs *regs)
 	regs->cx = args.r14;
 	regs->dx = args.r15;
 
-	return true;
+	return ve_instr_len(ve);
 }
 
 static bool mmio_read(int size, unsigned long addr, unsigned long *val)
@@ -283,7 +302,7 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
 			       EPT_WRITE, addr, val);
 }
 
-static bool handle_mmio(struct pt_regs *regs, struct ve_info *ve)
+static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
 {
 	char buffer[MAX_INSN_SIZE];
 	unsigned long *reg, val;
@@ -294,34 +313,36 @@ static bool handle_mmio(struct pt_regs *regs, struct ve_info *ve)
 
 	/* Only in-kernel MMIO is supported */
 	if (WARN_ON_ONCE(user_mode(regs)))
-		return false;
+		return -EFAULT;
 
 	if (copy_from_kernel_nofault(buffer, (void *)regs->ip, MAX_INSN_SIZE))
-		return false;
+		return -EFAULT;
 
 	if (insn_decode(&insn, buffer, MAX_INSN_SIZE, INSN_MODE_64))
-		return false;
+		return -EINVAL;
 
 	mmio = insn_decode_mmio(&insn, &size);
 	if (WARN_ON_ONCE(mmio == MMIO_DECODE_FAILED))
-		return false;
+		return -EINVAL;
 
 	if (mmio != MMIO_WRITE_IMM && mmio != MMIO_MOVS) {
 		reg = insn_get_modrm_reg_ptr(&insn, regs);
 		if (!reg)
-			return false;
+			return -EINVAL;
 	}
 
-	ve->instr_len = insn.length;
-
 	/* Handle writes first */
 	switch (mmio) {
 	case MMIO_WRITE:
 		memcpy(&val, reg, size);
-		return mmio_write(size, ve->gpa, val);
+		if (!mmio_write(size, ve->gpa, val))
+			return -EIO;
+		return insn.length;
 	case MMIO_WRITE_IMM:
 		val = insn.immediate.value;
-		return mmio_write(size, ve->gpa, val);
+		if (!mmio_write(size, ve->gpa, val))
+			return -EIO;
+		return insn.length;
 	case MMIO_READ:
 	case MMIO_READ_ZERO_EXTEND:
 	case MMIO_READ_SIGN_EXTEND:
@@ -334,15 +355,15 @@ static bool handle_mmio(struct pt_regs *regs, struct ve_info *ve)
 		 * decoded or handled properly. It was likely not using io.h
 		 * helpers or accessed MMIO accidentally.
 		 */
-		return false;
+		return -EINVAL;
 	default:
 		WARN_ONCE(1, "Unknown insn_decode_mmio() decode value?");
-		return false;
+		return -EINVAL;
 	}
 
 	/* Handle reads */
 	if (!mmio_read(size, ve->gpa, &val))
-		return false;
+		return -EIO;
 
 	switch (mmio) {
 	case MMIO_READ:
@@ -364,13 +385,13 @@ static bool handle_mmio(struct pt_regs *regs, struct ve_info *ve)
 	default:
 		/* All other cases has to be covered with the first switch() */
 		WARN_ON_ONCE(1);
-		return false;
+		return -EINVAL;
 	}
 
 	if (extend_size)
 		memset(reg, extend_val, extend_size);
 	memcpy(reg, &val, size);
-	return true;
+	return insn.length;
 }
 
 static bool handle_in(struct pt_regs *regs, int size, int port)
@@ -421,13 +442,14 @@ static bool handle_out(struct pt_regs *regs, int size, int port)
  *
  * Return True on success or False on failure.
  */
-static bool handle_io(struct pt_regs *regs, u32 exit_qual)
+static int handle_io(struct pt_regs *regs, struct ve_info *ve)
 {
+	u32 exit_qual = ve->exit_qual;
 	int size, port;
-	bool in;
+	bool in, ret;
 
 	if (VE_IS_IO_STRING(exit_qual))
-		return false;
+		return -EIO;
 
 	in = VE_IS_IO_IN(exit_qual);
 	size = VE_GET_IO_SIZE(exit_qual);
@@ -435,9 +457,13 @@ static bool handle_io(struct pt_regs *regs, u32 exit_qual)
 
 
 	if (in)
-		return handle_in(regs, size, port);
+		ret = handle_in(regs, size, port);
 	else
-		return handle_out(regs, size, port);
+		ret = handle_out(regs, size, port);
+	if (!ret)
+		return -EIO;
+
+	return ve_instr_len(ve);
 }
 
 /*
@@ -447,17 +473,19 @@ static bool handle_io(struct pt_regs *regs, u32 exit_qual)
 __init bool tdx_early_handle_ve(struct pt_regs *regs)
 {
 	struct ve_info ve;
-	bool ret;
+	int insn_len;
 
 	tdx_get_ve_info(&ve);
 
 	if (ve.exit_reason != EXIT_REASON_IO_INSTRUCTION)
 		return false;
 
-	ret = handle_io(regs, ve.exit_qual);
-	if (ret)
-		regs->ip += ve.instr_len;
-	return ret;
+	insn_len = handle_io(regs, &ve);
+	if (insn_len < 0)
+		return false;
+
+	regs->ip += insn_len;
+	return true;
 }
 
 void tdx_get_ve_info(struct ve_info *ve)
@@ -490,54 +518,65 @@ void tdx_get_ve_info(struct ve_info *ve)
 	ve->instr_info  = upper_32_bits(out.r10);
 }
 
-/* Handle the user initiated #VE */
-static bool virt_exception_user(struct pt_regs *regs, struct ve_info *ve)
+/*
+ * Handle the user initiated #VE.
+ *
+ * On success, returns the number of bytes RIP should be incremented (>= 0)
+ * or -errno on error.
+ */
+static int virt_exception_user(struct pt_regs *regs, struct ve_info *ve)
 {
 	switch (ve->exit_reason) {
 	case EXIT_REASON_CPUID:
-		return handle_cpuid(regs);
+		return handle_cpuid(regs, ve);
 	default:
 		pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
-		return false;
+		return -EIO;
 	}
 }
 
-/* Handle the kernel #VE */
-static bool virt_exception_kernel(struct pt_regs *regs, struct ve_info *ve)
+/*
+ * Handle the kernel #VE.
+ *
+ * On success, returns the number of bytes RIP should be incremented (>= 0)
+ * or -errno on error.
+ */
+static int virt_exception_kernel(struct pt_regs *regs, struct ve_info *ve)
 {
 	switch (ve->exit_reason) {
 	case EXIT_REASON_HLT:
-		return handle_halt();
+		return handle_halt(ve);
 	case EXIT_REASON_MSR_READ:
-		return read_msr(regs);
+		return read_msr(regs, ve);
 	case EXIT_REASON_MSR_WRITE:
-		return write_msr(regs);
+		return write_msr(regs, ve);
 	case EXIT_REASON_CPUID:
-		return handle_cpuid(regs);
+		return handle_cpuid(regs, ve);
 	case EXIT_REASON_EPT_VIOLATION:
 		return handle_mmio(regs, ve);
 	case EXIT_REASON_IO_INSTRUCTION:
-		return handle_io(regs, ve->exit_qual);
+		return handle_io(regs, ve);
 	default:
 		pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
-		return false;
+		return -EIO;
 	}
 }
 
 bool tdx_handle_virt_exception(struct pt_regs *regs, struct ve_info *ve)
 {
-	bool ret;
+	int insn_len;
 
 	if (user_mode(regs))
-		ret = virt_exception_user(regs, ve);
+		insn_len = virt_exception_user(regs, ve);
 	else
-		ret = virt_exception_kernel(regs, ve);
+		insn_len = virt_exception_kernel(regs, ve);
+	if (insn_len < 0)
+		return false;
 
 	/* After successful #VE handling, move the IP */
-	if (ret)
-		regs->ip += ve->instr_len;
+	regs->ip += insn_len;
 
-	return ret;
+	return true;
 }
 
 static bool tdx_tlb_flush_required(bool private)
-- 
2.35.1
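[Editor's note: the calling convention introduced by this patch — each exit-reason handler returns the instruction length (>= 0) on success or -errno on failure, and only the dispatcher moves RIP — can be sketched outside the kernel. The following stand-alone C model uses hypothetical names (`struct regs`, `handle_ok`, `handle_fail`, `handle_ve`); it is an illustration of the convention, not kernel code.]

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for pt_regs: only the instruction pointer. */
struct regs {
	unsigned long ip;
};

/* A handler that successfully "emulates" a 2-byte instruction:
 * it returns the instruction length so the dispatcher can skip it. */
static int handle_ok(void)
{
	return 2;
}

/* A handler that fails: it returns -errno and RIP must not move. */
static int handle_fail(void)
{
	return -EIO;
}

/* Dispatcher modeled on tdx_handle_virt_exception(): advance RIP
 * only when the handler reports success (a non-negative length). */
static bool handle_ve(struct regs *regs, int (*handler)(void))
{
	int insn_len = handler();

	if (insn_len < 0)
		return false;

	regs->ip += insn_len;	/* skip the emulated instruction */
	return true;
}
```

The point of the convention is that a failed handler can no longer leave RIP half-updated: the length and the success/failure signal travel in one return value, so the "move RIP" decision lives in exactly one place.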
From nobody Thu May 7 16:11:59 2026
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org
Cc: ak@linux.intel.com, dan.j.williams@intel.com, david@redhat.com, hpa@zytor.com, linux-kernel@vger.kernel.org, sathyanarayanan.kuppuswamy@linux.intel.com, seanjc@google.com, thomas.lendacky@amd.com, x86@kernel.org
Subject: [PATCHv3 3/3] x86/tdx: Handle load_unaligned_zeropad() page-cross to a shared page
Date: Wed, 25 May 2022 01:10:12 +0300
Message-Id: <20220524221012.62332-4-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220524221012.62332-1-kirill.shutemov@linux.intel.com>
References: <20220524221012.62332-1-kirill.shutemov@linux.intel.com>

load_unaligned_zeropad() can lead to unwanted loads across page
boundaries. The unwanted loads are typically harmless, but they might
hit totally unrelated or even unmapped memory. load_unaligned_zeropad()
relies on exception fixup (#PF, #GP and now #VE) to recover from these
unwanted loads.

In TDX guests, the second page can be a shared page and the VMM may
configure it to trigger #VE. The kernel assumes that a #VE on a shared
page is an MMIO access and tries to decode the instruction to handle it.
For load_unaligned_zeropad() this leads to confusion, as the load is
not an MMIO access.

Fix it by detecting unaligned MMIO accesses (which covers the
page-crossing case) and failing them. load_unaligned_zeropad() will
recover using exception fixups.

The issue was found by code analysis; it was not triggered during
testing.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/coco/tdx/tdx.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 94e447e7f103..4e566ed67db8 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -331,6 +331,17 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
 		return -EINVAL;
 	}
 
+	/*
+	 * MMIO accesses are supposed to be naturally aligned and therefore
+	 * never cross a page boundary. An unaligned access indicates a bug
+	 * or a load_unaligned_zeropad() that stepped into an unmapped shared page.
+	 *
+	 * In both cases fail the #VE handling. load_unaligned_zeropad() will
+	 * recover using exception fixups.
+	 */
+	if ((unsigned long)insn_get_addr_ref(&insn, regs) % size)
+		return -EFAULT;
+
 	/* Handle writes first */
 	switch (mmio) {
 	case MMIO_WRITE:
-- 
2.35.1
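
[Editor's note: the alignment check above rejects the page-crossing case implicitly, because an access of power-of-two size that is naturally aligned (`addr % size == 0`) can never straddle a page boundary. A small stand-alone C demonstration of that property — the helper names `is_aligned` and `crosses_page` are hypothetical, not kernel code:]

```c
#include <stdbool.h>

#define PAGE_SIZE 4096UL

/* Natural alignment: the address is a multiple of the access size. */
static bool is_aligned(unsigned long addr, unsigned long size)
{
	return (addr % size) == 0;
}

/* An access [addr, addr + size) crosses a page boundary when its
 * first and last byte land on different pages. */
static bool crosses_page(unsigned long addr, unsigned long size)
{
	return (addr / PAGE_SIZE) != ((addr + size - 1) / PAGE_SIZE);
}
```

For a power-of-two size that is at most PAGE_SIZE, `is_aligned()` implies `!crosses_page()`: an aligned access fits entirely inside one size-granule, and page boundaries are themselves multiples of every smaller power of two. So failing unaligned accesses is sufficient to keep load_unaligned_zeropad()'s page-crossing loads out of the MMIO path.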