From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Julien Grall
Subject: [PATCH V4 15/24] xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
Date: Tue, 12 Jan 2021 23:52:23 +0200
Message-Id: <1610488352-18494-16-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds proper handling of the return value of
vcpu_ioreq_handle_completion(), which involves using a loop in
leave_hypervisor_to_guest().

The reason to use an unbounded loop here is that the vCPU shouldn't
continue until the I/O has completed.

The IOREQ code uses wait_on_xen_event_channel(). Yet, this can still
"exit" early if an event has been received. But this doesn't mean the
I/O has completed (it can just be a spurious wake-up). So we need to
check whether the I/O has completed and wait again if it hasn't (the
vCPU will be blocked again until an event is received). This loop makes
sure that all the vCPU work is done before we return to the guest.

The call chain is as follows (a simplified sketch of the wait_for_io()
step is shown after it):
    check_for_vcpu_work -> vcpu_ioreq_handle_completion -> wait_for_io
    -> wait_on_xen_event_channel
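The sketch below is illustrative only and is not part of this patch; the
names (struct ioreq_vcpu, the ioreq_evtchn field, the STATE_IOREQ_* /
STATE_IORESP_* constants) follow the common IOREQ code, but the real
wait_for_io() is more involved (it also validates state transitions and
crashes the domain if the emulator misbehaves):

    static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
    {
        for ( ;; )
        {
            switch ( p->state )
            {
            case STATE_IOREQ_NONE:
                /* Nothing outstanding: the I/O has already been handled. */
                return true;

            case STATE_IORESP_READY:
                /* The emulator has produced a response: I/O is complete. */
                return true;

            default:
                /*
                 * The request is still in flight, so block on the event
                 * channel. A wake-up here may be spurious, which is why
                 * the state is re-checked at the top of the loop.
                 */
                wait_on_xen_event_channel(sv->ioreq_evtchn,
                                          p->state == STATE_IORESP_READY ||
                                          p->state == STATE_IOREQ_NONE);
                break;
            }
        }
    }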
The worst that can happen here is that the vCPU never runs again (the
I/O never completes). But, in the Xen case, if the I/O never completes
then it most likely means that something went horribly wrong with the
Device Emulator, and it is most likely not safe to continue. So letting
the vCPU spin forever if the I/O never completes is a safer action than
letting it continue and leaving the guest in an unclear state, and is
the best we can do for now.

Please note, using this loop we will not spin forever on a pCPU,
preventing any other vCPUs from being scheduled. On every iteration we
call check_for_pcpu_work(), which processes pending softirqs. In case of
failure, the guest will crash and the vCPU will be unscheduled. In the
normal case, if rescheduling is necessary (it might be requested by a
timer or by a caller in check_for_vcpu_work(), where wait_for_io() is a
preemption point), the vCPU will be rescheduled to give way to someone
else.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Stefano Stabellini

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, changes were derived from (+ new explanation):
     arm/ioreq: Introduce arch specific bits for IOREQ/DM features

Changes V2 -> V3:
   - update patch description

Changes V3 -> V4:
   - update patch description and comment in code
---
 xen/arch/arm/traps.c | 38 +++++++++++++++++++++++++++++++++-----
 1 file changed, 33 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 036b13f..4a83e1e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2257,18 +2257,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
     local_irq_enable();
-    vcpu_ioreq_handle_completion(v);
+    handled = vcpu_ioreq_handle_completion(v);
     local_irq_disable();
+
+    if ( !handled )
+        return true;
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2279,6 +2284,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }
 
 /*
@@ -2291,8 +2298,29 @@ void leave_hypervisor_to_guest(void)
 {
     local_irq_disable();
 
-    check_for_vcpu_work();
-    check_for_pcpu_work();
+    /*
+     * The reason to use an unbounded loop here is that the vCPU shouldn't
+     * continue until the I/O has completed.
+     *
+     * The worst that can happen here is that the vCPU never runs again
+     * (the I/O never completes). But, in the Xen case, if the I/O never
+     * completes then it most likely means that something went horribly
+     * wrong with the Device Emulator. And it is most likely not safe
+     * to continue. So letting the vCPU spin forever if the I/O never
+     * completes is a safer action than letting it continue and leaving
+     * the guest in an unclear state, and is the best we can do for now.
+     *
+     * Please note, using this loop we will not spin forever on a pCPU,
+     * preventing any other vCPUs from being scheduled. On every iteration
+     * we will call check_for_pcpu_work() that will process pending
+     * softirqs. In case of failure, the guest will crash and the vCPU
+     * will be unscheduled. In the normal case, if rescheduling is necessary
+     * (e.g. requested by a timer or by a caller in check_for_vcpu_work()),
+     * the vCPU will be rescheduled to give way to someone else.
+     */
+    do {
+        check_for_pcpu_work();
+    } while ( check_for_vcpu_work() );
 
     vgic_sync_to_lrs();
 
-- 
2.7.4