From: "Teddy Astie" <teddy.astie@vates.tech>
Subject: [RFC PATCH 14/16] sev/emulate: Handle some non-emulable HVM paths
Date: Fri, 16 May 2025 10:24:40 +0000
To: xen-devel@lists.xenproject.org
Cc: "Teddy Astie", "Jan Beulich", "Andrew Cooper", "Roger Pau Monné",
 "Andrei Semenov"

From: Andrei Semenov

Some code paths are not emulable under SEV, or need special handling.

Signed-off-by: Andrei Semenov
Signed-off-by: Teddy Astie
---
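[ Reviewer note, illustration only -- not part of the change itself.
  The recurring pattern in this patch: under SEV the guest's memory,
  including its page tables, is encrypted, so Xen cannot walk them to
  translate a linear address or to fetch instruction bytes.  Paths that
  would need such a walk either fall back to the guest-physical frame
  latched at VM exit (hvio->mmio_gpfn) or bail out with
  X86EMUL_UNHANDLEABLE / HVMTRANS_unhandleable.  A commented sketch of
  the linear_read() fallback below, assuming the is_sev_domain() helper
  introduced earlier in this series:

      if ( is_sev_domain(current->domain) )
      {
          if ( hvio->mmio_gpfn )
          {
              paddr_t gpa;

              /*
               * Keep the page offset of the linear address; take the
               * frame from the gpfn latched at VM exit.
               */
              gpa = pfn_to_paddr(hvio->mmio_gpfn) | (addr & ~PAGE_MASK);
              rc = hvm_copy_from_guest_phys(p_data, gpa, bytes);
          }
          else
              /* Nothing latched to fall back on: refuse to emulate. */
              return X86EMUL_UNHANDLEABLE;
      }
      else
          /* Non-SEV guests keep the existing linear-address path. */
          rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec,
                                          &pfinfo);
]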
 xen/arch/x86/hvm/emulate.c | 137 ++++++++++++++++++++++++++++++++-----
 xen/arch/x86/hvm/hvm.c     |  13 ++++
 2 files changed, 133 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 6ed8e03475..7ac3be2d59 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -689,6 +690,9 @@ static void *hvmemul_map_linear_addr(
         goto unhandleable;
     }
 
+    if ( is_sev_domain(curr->domain) && (nr_frames > 1) )
+        goto unhandleable;
+
     for ( i = 0; i < nr_frames; i++ )
     {
         enum hvm_translation_result res;
@@ -703,8 +707,16 @@
         /* Error checking. Confirm that the current slot is clean. */
         ASSERT(mfn_x(*mfn) == 0);
 
-        res = hvm_translate_get_page(curr, addr, true, pfec,
+        if ( is_sev_domain(curr->domain) )
+        {
+            struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io;
+            unsigned long gpa = pfn_to_paddr(hvio->mmio_gpfn) | (addr & ~PAGE_MASK);
+            res = hvm_translate_get_page(curr, gpa, false, pfec,
                                          &pfinfo, &page, &gfn, &p2mt);
+        }
+        else
+            res = hvm_translate_get_page(curr, addr, true, pfec,
+                                         &pfinfo, &page, &gfn, &p2mt);
 
         switch ( res )
         {
@@ -1173,6 +1185,7 @@ static int hvmemul_linear_mmio_access(
                                          dir, buffer_offset);
     paddr_t gpa;
     unsigned long one_rep = 1;
+    unsigned int chunk;
     int rc;
 
     if ( cache == NULL )
@@ -1183,21 +1196,50 @@
         ASSERT_UNREACHABLE();
         return X86EMUL_UNHANDLEABLE;
     }
+
+    chunk = min_t(unsigned int, size, PAGE_SIZE - offset);
 
     if ( known_gpfn )
         gpa = pfn_to_paddr(hvio->mmio_gpfn) | offset;
     else
     {
-        rc = hvmemul_linear_to_phys(gla, &gpa, size, &one_rep, pfec,
+        if ( is_sev_domain(current->domain) )
+            gpa = pfn_to_paddr(hvio->mmio_gpfn) | offset;
+        else
+        {
+            rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
+                                        hvmemul_ctxt);
+            if ( rc != X86EMUL_OKAY )
+                return rc;
+        }
+
+        latch_linear_to_phys(hvio, gla, gpa, dir == IOREQ_WRITE);
+    }
+
+    for ( ;; )
+    {
+        rc = hvmemul_phys_mmio_access(cache, gpa, chunk, dir, buffer, buffer_offset);
+        if ( rc != X86EMUL_OKAY )
+            break;
+
+        gla += chunk;
+        buffer_offset += chunk;
+        size -= chunk;
+
+        if ( size == 0 )
+            break;
+
+        if ( is_sev_domain(current->domain) )
+            return X86EMUL_UNHANDLEABLE;
+
+        chunk = min_t(unsigned int, size, PAGE_SIZE);
+        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
                                     hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
             return rc;
-
-        latch_linear_to_phys(hvio, gla, gpa, dir == IOREQ_WRITE);
     }
 
-    return hvmemul_phys_mmio_access(cache, gpa, size, dir, buffer,
-                                    buffer_offset);
+    return rc;
 }
 
 static inline int hvmemul_linear_mmio_read(
@@ -1254,6 +1296,9 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
     {
         unsigned int part1 = PAGE_SIZE - offset;
 
+        if ( is_sev_domain(current->domain) )
+            return X86EMUL_UNHANDLEABLE;
+
         /* Split the access at the page boundary. */
         rc = linear_read(addr, part1, p_data, pfec, hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
@@ -1278,11 +1323,25 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
      * upon replay) the RAM access for anything that's ahead of or past MMIO,
      * i.e. in RAM.
      */
-    cache = hvmemul_find_mmio_cache(hvio, start, IOREQ_READ, ~0);
-    if ( !cache ||
-         addr + bytes <= start + cache->skip ||
-         addr >= start + cache->size )
-        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
+    cache = hvmemul_find_mmio_cache(hvio, start, IOREQ_READ, ~0);
+    if ( !cache ||
+         addr + bytes <= start + cache->skip ||
+         addr >= start + cache->size )
+    {
+        if ( is_sev_domain(current->domain) )
+        {
+            if ( hvio->mmio_gpfn )
+            {
+                paddr_t gpa;
+                gpa = pfn_to_paddr(hvio->mmio_gpfn) | (addr & ~PAGE_MASK);
+                rc = hvm_copy_from_guest_phys(p_data, gpa, bytes);
+            }
+            else
+                return X86EMUL_UNHANDLEABLE;
+        }
+        else
+            rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
+    }
 
     switch ( rc )
     {
@@ -1325,6 +1384,9 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
     {
         unsigned int part1 = PAGE_SIZE - offset;
 
+        if ( is_sev_domain(current->domain) )
+            return X86EMUL_UNHANDLEABLE;
+
         /* Split the access at the page boundary. */
         rc = linear_write(addr, part1, p_data, pfec, hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
@@ -1340,9 +1402,23 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
     /* See commentary in linear_read(). */
     cache = hvmemul_find_mmio_cache(hvio, start, IOREQ_WRITE, ~0);
     if ( !cache ||
-         addr + bytes <= start + cache->skip ||
-         addr >= start + cache->size )
-        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
+         addr + bytes <= start + cache->skip ||
+         addr >= start + cache->size )
+    {
+        if ( is_sev_domain(current->domain) )
+        {
+            if ( hvio->mmio_gpfn )
+            {
+                paddr_t gpa;
+                gpa = pfn_to_paddr(hvio->mmio_gpfn) | (addr & ~PAGE_MASK);
+                rc = hvm_copy_to_guest_phys(gpa, p_data, bytes, current);
+            }
+            else
+                return X86EMUL_UNHANDLEABLE;
+        }
+        else
+            rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
+    }
 
     switch ( rc )
     {
@@ -1430,7 +1506,12 @@ int cf_check hvmemul_insn_fetch(
     if ( !bytes ||
          unlikely((insn_off + bytes) > hvmemul_ctxt->insn_buf_bytes) )
     {
-        int rc = __hvmemul_read(x86_seg_cs, offset, p_data, bytes,
+        int rc;
+
+        if ( is_sev_domain(current->domain) )
+            return X86EMUL_UNHANDLEABLE;
+
+        rc = __hvmemul_read(x86_seg_cs, offset, p_data, bytes,
                                 hvm_access_insn_fetch, hvmemul_ctxt);
 
         if ( rc == X86EMUL_OKAY && bytes )
@@ -1485,6 +1566,7 @@ static int cf_check hvmemul_write(
     if ( !known_gla(addr, bytes, pfec) )
     {
         mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
+
         if ( IS_ERR(mapping) )
             return ~PTR_ERR(mapping);
     }
@@ -1719,6 +1801,9 @@ static int cf_check hvmemul_cmpxchg(
     int rc;
     void *mapping = NULL;
 
+    if ( is_sev_domain(current->domain) )
+        return X86EMUL_UNHANDLEABLE;
+
     rc = hvmemul_virtual_to_linear(
         seg, offset, bytes, NULL, hvm_access_write, hvmemul_ctxt, &addr);
     if ( rc != X86EMUL_OKAY )
@@ -1821,6 +1906,9 @@ static int cf_check hvmemul_rep_ins(
     p2m_type_t p2mt;
     int rc;
 
+    if ( is_sev_domain(current->domain) )
+        return X86EMUL_UNHANDLEABLE;
+
     rc = hvmemul_virtual_to_linear(
         dst_seg, dst_offset, bytes_per_rep, reps, hvm_access_write,
         hvmemul_ctxt, &addr);
@@ -1899,6 +1987,9 @@ static int cf_check hvmemul_rep_outs(
     p2m_type_t p2mt;
     int rc;
 
+    if ( is_sev_domain(current->domain) )
+        return X86EMUL_UNHANDLEABLE;
+
     if ( unlikely(hvmemul_ctxt->set_context) )
         return hvmemul_rep_outs_set_context(dst_port, bytes_per_rep, reps);
 
@@ -1944,6 +2035,9 @@ static int cf_check hvmemul_rep_movs(
     int rc, df = !!(ctxt->regs->eflags & X86_EFLAGS_DF);
     char *buf;
 
+    if ( is_sev_domain(current->domain) )
+        return X86EMUL_UNHANDLEABLE;
+
     rc = hvmemul_virtual_to_linear(
         src_seg, src_offset, bytes_per_rep, reps, hvm_access_read,
         hvmemul_ctxt, &saddr);
@@ -2109,9 +2203,13 @@ static int cf_check hvmemul_rep_stos(
     paddr_t gpa;
     p2m_type_t p2mt;
     bool df = ctxt->regs->eflags & X86_EFLAGS_DF;
-    int rc = hvmemul_virtual_to_linear(seg, offset, bytes_per_rep, reps,
-                                       hvm_access_write, hvmemul_ctxt, &addr);
+    int rc;
+
+    if ( is_sev_domain(current->domain) )
+        return X86EMUL_UNHANDLEABLE;
 
+    rc = hvmemul_virtual_to_linear(seg, offset, bytes_per_rep, reps,
+                                   hvm_access_write, hvmemul_ctxt, &addr);
     if ( rc != X86EMUL_OKAY )
         return rc;
 
@@ -2770,6 +2868,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     struct vcpu *curr = current;
     uint32_t new_intr_shadow;
     struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io;
+    int rc;
 
     /*
@@ -2983,6 +3082,9 @@ void hvm_emulate_init_per_insn(
     unsigned int pfec = PFEC_page_present | PFEC_insn_fetch;
     unsigned long addr;
 
+    if ( is_sev_domain(current->domain) )
+        goto out;
+
     if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
         pfec |= PFEC_user_mode;
 
@@ -3000,6 +3102,7 @@ void hvm_emulate_init_per_insn(
                                        sizeof(hvmemul_ctxt->insn_buf) : 0;
     }
 
+out:
     hvmemul_ctxt->is_mem_access = false;
 }
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e1bcf8e086..d3060329fb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -56,6 +56,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -3477,6 +3478,9 @@ enum hvm_translation_result hvm_copy_to_guest_linear(
     unsigned long addr, const void *buf, unsigned int size, uint32_t pfec,
     pagefault_info_t *pfinfo)
 {
+    if ( is_sev_domain(current->domain) )
+        return HVMTRANS_unhandleable;
+
     return __hvm_copy((void *)buf /* HVMCOPY_to_guest doesn't modify */,
                       addr, size, current, HVMCOPY_to_guest | HVMCOPY_linear,
                       PFEC_page_present | PFEC_write_access | pfec, pfinfo);
@@ -3486,6 +3490,9 @@ enum hvm_translation_result hvm_copy_from_guest_linear(
     void *buf, unsigned long addr, unsigned int size, uint32_t pfec,
     pagefault_info_t *pfinfo)
 {
+    if ( is_sev_domain(current->domain) )
+        return HVMTRANS_unhandleable;
+
     return __hvm_copy(buf, addr, size, current,
                       HVMCOPY_from_guest | HVMCOPY_linear,
                       PFEC_page_present | pfec, pfinfo);
@@ -3495,6 +3502,9 @@ enum hvm_translation_result hvm_copy_from_vcpu_linear(
     void *buf, unsigned long addr, unsigned int size, struct vcpu *v,
     unsigned int pfec)
 {
+    if ( is_sev_domain(v->domain) )
+        return HVMTRANS_unhandleable;
+
     return __hvm_copy(buf, addr, size, v,
                       HVMCOPY_from_guest | HVMCOPY_linear,
                       PFEC_page_present | pfec, NULL);
@@ -3522,6 +3532,9 @@ unsigned int clear_user_hvm(void *to, unsigned int len)
 {
     int rc;
 
+    if ( is_sev_domain(current->domain) )
+        return HVMTRANS_unhandleable;
+
     if ( current->hcall_compat && is_compat_arg_xlat_range(to, len) )
     {
         memset(to, 0x00, len);
-- 
2.49.0


Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech