From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Stefano Stabellini, Julien Grall, Bertrand Marquis,
    Michal Orzel, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
    Jan Beulich, Roger Pau Monné, Xenia Ragiadakou
Subject: [XEN PATCH v3 14/16] ioreq: make arch_vcpu_ioreq_completion() an optional callback
Date: Mon, 3 Jun 2024 14:34:53 +0300

In most cases the arch_vcpu_ioreq_completion() routine is just an empty
stub, except when handling VIO_realmode_completion, which only happens for
HVM domains running on a VT-x machine. When VT-x is disabled in the build
configuration, both the x86 and Arm versions of the routine become empty
stubs.

To get rid of these useless stubs, make the call to the arch-specific ioreq
completion handler optional, invoking it only when a handler is present,
and drop the Arm and generic x86 handlers. The actual handling of
VIO_realmode_completion can then be done by the VMX code.
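For illustration only, a minimal standalone sketch of the optional-callback
pattern this patch introduces: a function pointer that is NULL by default and
is only invoked by common code when an implementation has registered itself.
The names used here (arch_completion_hook, realmode_hook, handle_completion)
are made up for the example; the patch itself uses arch_vcpu_ioreq_completion
and realmode_vcpu_ioreq_completion, registered from start_vmx().

  /* Standalone sketch of the optional arch-hook pattern (made-up names). */
  #include <stdbool.h>
  #include <stdio.h>

  enum vio_completion {
      VIO_no_completion,
      VIO_mmio_completion,
      VIO_realmode_completion,
  };

  /* NULL by default: no arch-specific completion handling is registered. */
  static bool (*arch_completion_hook)(enum vio_completion completion);

  /* A VT-x-only implementation, registered at start-up in the real code. */
  static bool realmode_hook(enum vio_completion completion)
  {
      printf("realmode completion handled (%d)\n", completion);
      return true;
  }

  /* Common code calls the hook only if an implementation is present. */
  static bool handle_completion(enum vio_completion completion)
  {
      if ( arch_completion_hook )
          return arch_completion_hook(completion);
      return true;
  }

  int main(void)
  {
      handle_completion(VIO_realmode_completion); /* no hook registered yet */
      arch_completion_hook = realmode_hook;       /* analogous to start_vmx() */
      handle_completion(VIO_realmode_completion); /* now dispatches to the hook */
      return 0;
  }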
Signed-off-by: Sergiy Kibrik
---
 xen/arch/arm/ioreq.c       |  6 ------
 xen/arch/x86/hvm/ioreq.c   | 23 -----------------------
 xen/arch/x86/hvm/vmx/vmx.c | 16 ++++++++++++++++
 xen/common/ioreq.c         |  5 ++++-
 xen/include/xen/ioreq.h    |  2 +-
 5 files changed, 21 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index 5df755b48b..2e829d2e7f 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -135,12 +135,6 @@ bool arch_ioreq_complete_mmio(void)
     return false;
 }
 
-bool arch_vcpu_ioreq_completion(enum vio_completion completion)
-{
-    ASSERT_UNREACHABLE();
-    return true;
-}
-
 /*
  * The "legacy" mechanism of mapping magic pages for the IOREQ servers
  * is x86 specific, so the following hooks don't need to be implemented on Arm:
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 4eb7a70182..088650e007 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -29,29 +29,6 @@ bool arch_ioreq_complete_mmio(void)
     return handle_mmio();
 }
 
-bool arch_vcpu_ioreq_completion(enum vio_completion completion)
-{
-    switch ( completion )
-    {
-    case VIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
-
-    default:
-        ASSERT_UNREACHABLE();
-        break;
-    }
-
-    return true;
-}
-
 static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f16faa6a61..7187d1819c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2749,6 +2750,20 @@ static void cf_check vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
     vmx_vmcs_exit(v);
 }
 
+bool realmode_vcpu_ioreq_completion(enum vio_completion completion)
+{
+    struct hvm_emulate_ctxt ctxt;
+
+    if ( completion != VIO_realmode_completion )
+        ASSERT_UNREACHABLE();
+
+    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+    vmx_realmode_emulate_one(&ctxt);
+    hvm_emulate_writeback(&ctxt);
+
+    return true;
+}
+
 static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
     .name = "VMX",
     .cpu_up_prepare = vmx_cpu_up_prepare,
@@ -3070,6 +3085,7 @@ const struct hvm_function_table * __init start_vmx(void)
     lbr_tsx_fixup_check();
     ler_to_fixup_check();
 
+    arch_vcpu_ioreq_completion = realmode_vcpu_ioreq_completion;
     return &vmx_function_table;
 }
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 1257a3d972..94fde97ece 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -33,6 +33,8 @@
 #include
 #include
 
+bool (*arch_vcpu_ioreq_completion)(enum vio_completion completion) = NULL;
+
 void ioreq_request_mapcache_invalidate(const struct domain *d)
 {
     struct vcpu *v = current;
@@ -244,7 +246,8 @@ bool vcpu_ioreq_handle_completion(struct vcpu *v)
         break;
 
     default:
-        res = arch_vcpu_ioreq_completion(completion);
+        if ( arch_vcpu_ioreq_completion )
+            res = arch_vcpu_ioreq_completion(completion);
         break;
     }
 
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index cd399adf17..880214ec41 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -111,7 +111,7 @@ void ioreq_domain_init(struct domain *d);
 int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
 
 bool arch_ioreq_complete_mmio(void);
-bool arch_vcpu_ioreq_completion(enum vio_completion completion);
+extern bool (*arch_vcpu_ioreq_completion)(enum vio_completion completion);
 int arch_ioreq_server_map_pages(struct ioreq_server *s);
 void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
 void arch_ioreq_server_enable(struct ioreq_server *s);
-- 
2.25.1