From: Jane Malalane
To: Xen-devel
CC: Jane Malalane, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH v4] x86/hvm: Widen condition for is_hvm_pv_evtchn_domain() and report fix in CPUID
Date: Fri, 10 Jun 2022 12:07:04 +0100
Message-ID: <20220610110704.29039-1-jane.malalane@citrix.com>

Have is_hvm_pv_evtchn_domain() return true for vector callbacks for
evtchn delivery set up on a per-vCPU basis via
HVMOP_set_evtchn_upcall_vector.

Assume that if vCPU0 uses HVMOP_set_evtchn_upcall_vector, all remaining
vCPUs will too, and thus remove is_hvm_pv_evtchn_vcpu() and replace its
sole caller with is_hvm_pv_evtchn_domain().
is_hvm_pv_evtchn_domain() returning true is a condition for setting up
physical IRQ to event channel mappings. Therefore, also add a CPUID bit
so that guests know whether the check in is_hvm_pv_evtchn_domain() will
fail when using HVMOP_set_evtchn_upcall_vector. This matters for guests
that route PIRQs over event channels, since is_hvm_pv_evtchn_domain()
is a condition in physdev_map_pirq().

The naming of the CPUID bit is quite generic about upcall support being
available. That's done so that the define name doesn't become overly
long.

A guest that doesn't care about physical interrupts routed over event
channels can just test for the availability of the hypercall directly
(HVMOP_set_evtchn_upcall_vector) without checking the CPUID bit.

Signed-off-by: Jane Malalane
Reviewed-by: Roger Pau Monné
---
CC: Jan Beulich
CC: Andrew Cooper
CC: "Roger Pau Monné"
CC: Wei Liu

v4:
 * Remove is_hvm_pv_evtchn_vcpu and replace sole caller.

v3:
 * Improve commit message and title.

v2:
 * Since the naming of the CPUID bit is quite generic, better explain
   when it should be checked for, in code comments and commit message.
---
 xen/arch/x86/hvm/irq.c              | 2 +-
 xen/arch/x86/include/asm/domain.h   | 9 +++++++--
 xen/arch/x86/traps.c                | 6 ++++++
 xen/include/public/arch-x86/cpuid.h | 5 +++++
 4 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 5a7f39b54f..19252448cb 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -325,7 +325,7 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
 
         vlapic_set_irq(vcpu_vlapic(v), vector, 0);
     }
-    else if ( is_hvm_pv_evtchn_vcpu(v) )
+    else if ( is_hvm_pv_evtchn_domain(v->domain) )
         vcpu_kick(v);
     else if ( v->vcpu_id == 0 )
         hvm_set_callback_irq_level(v);
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index 35898d725f..dcd221cc6f 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -14,9 +14,14 @@
 
 #define has_32bit_shinfo(d) ((d)->arch.has_32bit_shinfo)
 
+/*
+ * Set to true if either the global vector-type callback or per-vCPU
+ * LAPIC vectors are used. Assume all vCPUs will use
+ * HVMOP_set_evtchn_upcall_vector as long as the initial vCPU does.
+ */
 #define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
-        (d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector)
-#define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
+        ((d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector || \
+         (d)->vcpu[0]->arch.hvm.evtchn_upcall_vector))
 #define is_domain_direct_mapped(d) ((void)(d), 0)
 
 #define VCPU_TRAP_NONE 0
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 25bffe47d7..1a7f9df067 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1152,6 +1152,12 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
             res->a |= XEN_HVM_CPUID_DOMID_PRESENT;
         res->c = d->domain_id;
 
+        /*
+         * Per-vCPU event channel upcalls are implemented and work
+         * correctly with PIRQs routed over event channels.
+         */
+        res->a |= XEN_HVM_CPUID_UPCALL_VECTOR;
+
         break;
 
     case 5: /* PV-specific parameters */
diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index f2b2b3632c..c49eefeaf8 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -109,6 +109,11 @@
  * field from 8 to 15 bits, allowing to target APIC IDs up 32768.
  */
 #define XEN_HVM_CPUID_EXT_DEST_ID (1u << 5)
+/*
+ * Per-vCPU event channel upcalls work correctly with physical IRQs
+ * bound to event channels.
+ */
+#define XEN_HVM_CPUID_UPCALL_VECTOR (1u << 6)
 
 /*
  * Leaf 6 (0x40000x05)
-- 
2.11.0