From: Jane Malalane
To: LKML
CC: Jane Malalane, Juergen Gross, Boris Ostrovsky, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Stefano Stabellini, Oleksandr Tyshchenko,
    Jan Beulich, Maximilian Heyne, xen-devel@lists.xenproject.org
Subject: [PATCH v4] x86/xen: Add support for HVMOP_set_evtchn_upcall_vector
Date: Fri, 29 Jul 2022 08:04:16 +0100
Message-ID: <20220729070416.23306-1-jane.malalane@citrix.com>

Implement support for the HVMOP_set_evtchn_upcall_vector hypercall in
order to set the per-vCPU event channel vector callback on Linux, and
use it in preference to HVM_PARAM_CALLBACK_IRQ.

If the per-vCPU vector setup succeeds on the BSP, use the same method
for the APs. If not, fall back to the global vector-type callback.

Also register callback_irq at per-vCPU event channel setup, to trick
the toolstack into thinking the domain is enlightened.

Suggested-by: Roger Pau Monné
Signed-off-by: Jane Malalane
Reviewed-by: Boris Ostrovsky
---
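[ Editor's note, not part of the patch: the "toolstack trick" above relies
on the encoding of HVM_PARAM_CALLBACK_IRQ. Below is a minimal sketch of
that encoding; the type values follow Xen's public ABI (simplified), and
the CB_TYPE_* / callback_via_vector names are hypothetical. ]

/*
 * HVM_PARAM_CALLBACK_IRQ: the top byte of the parameter selects how Xen
 * notifies the guest of pending event channels; the low bits identify
 * the GSI, PCI INTx line, or IDT vector (hypothetical names, simplified
 * from Xen's public ABI).
 */
#include <stdint.h>

#define CB_TYPE_GSI       0ULL  /* val[55:0] is a delivery GSI            */
#define CB_TYPE_PCI_INTX  1ULL  /* val[55:0] encodes a PCI INTx line      */
#define CB_TYPE_VECTOR    2ULL  /* val[7:0] is one IDT vector, all vCPUs  */

static inline uint64_t callback_via_vector(uint8_t vector)
{
        /* What the global-callback path (xen_setup_callback_vector) sets. */
        return (CB_TYPE_VECTOR << 56) | vector;
}

/*
 * In the per-vCPU case the kernel instead writes the bare value 1
 * (type GSI, GSI 1): any non-zero "callback via" is enough for the
 * toolstack to treat the domain as enlightened, while actual delivery
 * happens through the vectors registered per vCPU.
 */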
Peter Anvin" CC: Stefano Stabellini CC: Oleksandr Tyshchenko CC: Jan Beulich CC: Maximilian Heyne CC: xen-devel@lists.xenproject.org v4: * amend code comment v3: * comment style * add comment on toolstack trick * remove unnecessary variable and function call * surround x86-specific code with #ifdef v2: * remove no_vector_callback * make xen_have_vector_callback a bool * rename xen_ack_upcall to xen_percpu_upcall * fail to bring CPU up on init instead of crashing kernel * add and use xen_set_upcall_vector where suitable * xen_setup_upcall_vector -> xen_init_setup_upcall_vector for clarity --- arch/x86/include/asm/xen/cpuid.h | 2 ++ arch/x86/include/asm/xen/events.h | 3 ++- arch/x86/xen/enlighten.c | 2 +- arch/x86/xen/enlighten_hvm.c | 24 ++++++++++++----- arch/x86/xen/suspend_hvm.c | 10 ++++++- drivers/xen/events/events_base.c | 53 +++++++++++++++++++++++++++++++++-= ---- include/xen/hvm.h | 2 ++ include/xen/interface/hvm/hvm_op.h | 19 ++++++++++++++ 8 files changed, 100 insertions(+), 15 deletions(-) diff --git a/arch/x86/include/asm/xen/cpuid.h b/arch/x86/include/asm/xen/cp= uid.h index 78e667a31d6c..6daa9b0c8d11 100644 --- a/arch/x86/include/asm/xen/cpuid.h +++ b/arch/x86/include/asm/xen/cpuid.h @@ -107,6 +107,8 @@ * ID field from 8 to 15 bits, allowing to target APIC IDs up 32768. */ #define XEN_HVM_CPUID_EXT_DEST_ID (1u << 5) +/* Per-vCPU event channel upcalls */ +#define XEN_HVM_CPUID_UPCALL_VECTOR (1u << 6) =20 /* * Leaf 6 (0x40000x05) diff --git a/arch/x86/include/asm/xen/events.h b/arch/x86/include/asm/xen/e= vents.h index 068d9b067c83..62bdceb594f1 100644 --- a/arch/x86/include/asm/xen/events.h +++ b/arch/x86/include/asm/xen/events.h @@ -23,7 +23,7 @@ static inline int xen_irqs_disabled(struct pt_regs *regs) /* No need for a barrier -- XCHG is a barrier on x86. 
 #define xchg_xen_ulong(ptr, val) xchg((ptr), (val))
 
-extern int xen_have_vector_callback;
+extern bool xen_have_vector_callback;
 
 /*
  * Events delivered via platform PCI interrupts are always
@@ -34,4 +34,5 @@ static inline bool xen_support_evtchn_rebind(void)
        return (!xen_hvm_domain() || xen_have_vector_callback);
 }
 
+extern bool xen_percpu_upcall;
 #endif /* _ASM_X86_XEN_EVENTS_H */
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 30c6e986a6cd..b8db2148c07d 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -51,7 +51,7 @@ EXPORT_SYMBOL_GPL(xen_start_info);
 
 struct shared_info xen_dummy_shared_info;
 
-__read_mostly int xen_have_vector_callback;
+__read_mostly bool xen_have_vector_callback = true;
 EXPORT_SYMBOL_GPL(xen_have_vector_callback);
 
 /*
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 8b71b1dd7639..198d3cd3e9a5 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -7,6 +7,8 @@
 
 #include <xen/features.h>
 #include <xen/events.h>
+#include <xen/hvm.h>
+#include <xen/interface/hvm/hvm_op.h>
 #include <xen/interface/memory.h>
 
 #include <asm/apic.h>
@@ -30,6 +32,9 @@
 
 static unsigned long shared_info_pfn;
 
+__ro_after_init bool xen_percpu_upcall;
+EXPORT_SYMBOL_GPL(xen_percpu_upcall);
+
 void xen_hvm_init_shared_info(void)
 {
        struct xen_add_to_physmap xatp;
@@ -125,6 +130,9 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_xen_hvm_callback)
 {
        struct pt_regs *old_regs = set_irq_regs(regs);
 
+       if (xen_percpu_upcall)
+               ack_APIC_irq();
+
        inc_irq_stat(irq_hv_callback_count);
 
        xen_hvm_evtchn_do_upcall();
@@ -168,6 +176,15 @@ static int xen_cpu_up_prepare_hvm(unsigned int cpu)
        if (!xen_have_vector_callback)
                return 0;
 
+       if (xen_percpu_upcall) {
+               rc = xen_set_upcall_vector(cpu);
+               if (rc) {
+                       WARN(1, "HVMOP_set_evtchn_upcall_vector"
+                            " for CPU %d failed: %d\n", cpu, rc);
+                       return rc;
+               }
+       }
+
        if (xen_feature(XENFEAT_hvm_safe_pvclock))
                xen_setup_timer(cpu);
 
@@ -188,8 +205,6 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
        return 0;
 }
 
-static bool no_vector_callback __initdata;
-
 static void __init xen_hvm_guest_init(void)
 {
        if (xen_pv_domain())
@@ -211,9 +226,6 @@ static void __init xen_hvm_guest_init(void)
 
        xen_panic_handler_init();
 
-       if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
-               xen_have_vector_callback = 1;
-
        xen_hvm_smp_init();
        WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_hvm, xen_cpu_dead_hvm));
        xen_unplug_emulated_devices();
@@ -239,7 +251,7 @@ early_param("xen_nopv", xen_parse_nopv);
 
 static __init int xen_parse_no_vector_callback(char *arg)
 {
-       no_vector_callback = true;
+       xen_have_vector_callback = false;
        return 0;
 }
 early_param("xen_no_vector_callback", xen_parse_no_vector_callback);
diff --git a/arch/x86/xen/suspend_hvm.c b/arch/x86/xen/suspend_hvm.c
index 9d548b0c772f..0c4f7554b7cc 100644
--- a/arch/x86/xen/suspend_hvm.c
+++ b/arch/x86/xen/suspend_hvm.c
@@ -5,6 +5,7 @@
 #include <xen/hvm.h>
 #include <xen/features.h>
 #include <xen/interface/features.h>
+#include <xen/events.h>
 
 #include "xen-ops.h"
 
@@ -14,6 +15,13 @@ void xen_hvm_post_suspend(int suspend_cancelled)
                xen_hvm_init_shared_info();
                xen_vcpu_restore();
        }
-       xen_setup_callback_vector();
+       if (xen_percpu_upcall) {
+               unsigned int cpu;
+
+               for_each_online_cpu(cpu)
+                       BUG_ON(xen_set_upcall_vector(cpu));
+       } else {
+               xen_setup_callback_vector();
+       }
        xen_unplug_emulated_devices();
 }
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 46d9295d9a6e..206d4b466e44 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -45,6 +45,7 @@
 #include <asm/irq.h>
 #include <asm/io_apic.h>
 #include <asm/i8259.h>
+#include <asm/xen/cpuid.h>
 #include <asm/xen/pci.h>
 #endif
 #include <asm/sync_bitops.h>
@@ -2183,6 +2184,7 @@ static struct irq_chip xen_percpu_chip __read_mostly = {
        .irq_ack        = ack_dynirq,
 };
 
+#ifdef CONFIG_X86
 #ifdef CONFIG_XEN_PVHVM
 /* Vector callbacks are better than PCI interrupts to receive event
  * channel notifications because we can receive vector callbacks on any
@@ -2195,11 +2197,48 @@ void xen_setup_callback_vector(void)
        callback_via = HVM_CALLBACK_VECTOR(HYPERVISOR_CALLBACK_VECTOR);
        if (xen_set_callback_via(callback_via)) {
                pr_err("Request for Xen HVM callback vector failed\n");
-               xen_have_vector_callback = 0;
+               xen_have_vector_callback = false;
        }
 }
 
+/*
+ * Set up per-vCPU vector-type callbacks. If this setup is unavailable,
+ * fall back to the global vector-type callback.
+ */
+static __init void xen_init_setup_upcall_vector(void)
+{
+       if (!xen_have_vector_callback)
+               return;
+
+       if ((cpuid_eax(xen_cpuid_base() + 4) & XEN_HVM_CPUID_UPCALL_VECTOR) &&
+           !xen_set_upcall_vector(0))
+               xen_percpu_upcall = true;
+       else if (xen_feature(XENFEAT_hvm_callback_vector))
+               xen_setup_callback_vector();
+       else
+               xen_have_vector_callback = false;
+}
+
+int xen_set_upcall_vector(unsigned int cpu)
+{
+       int rc;
+       xen_hvm_evtchn_upcall_vector_t op = {
+               .vector = HYPERVISOR_CALLBACK_VECTOR,
+               .vcpu = per_cpu(xen_vcpu_id, cpu),
+       };
+
+       rc = HYPERVISOR_hvm_op(HVMOP_set_evtchn_upcall_vector, &op);
+       if (rc)
+               return rc;
+
+       /* Trick the toolstack into thinking we are enlightened. */
+       if (!cpu)
+               rc = xen_set_callback_via(1);
+
+       return rc;
+}
+
 static __init void xen_alloc_callback_vector(void)
 {
        if (!xen_have_vector_callback)
@@ -2210,8 +2249,11 @@ static __init void xen_alloc_callback_vector(void)
 }
 #else
 void xen_setup_callback_vector(void) {}
+static inline void xen_init_setup_upcall_vector(void) {}
+int xen_set_upcall_vector(unsigned int cpu) { return 0; }
 static inline void xen_alloc_callback_vector(void) {}
-#endif
+#endif /* CONFIG_XEN_PVHVM */
+#endif /* CONFIG_X86 */
 
 bool xen_fifo_events = true;
 module_param_named(fifo_events, xen_fifo_events, bool, 0);
@@ -2271,10 +2313,9 @@ void __init xen_init_IRQ(void)
                if (xen_initial_domain())
                        pci_xen_initial_domain();
        }
-       if (xen_feature(XENFEAT_hvm_callback_vector)) {
-               xen_setup_callback_vector();
-               xen_alloc_callback_vector();
-       }
+       xen_init_setup_upcall_vector();
+       xen_alloc_callback_vector();
+
 
        if (xen_hvm_domain()) {
                native_init_IRQ();
diff --git a/include/xen/hvm.h b/include/xen/hvm.h
index b7fd7fc9ad41..8da7a6747058 100644
--- a/include/xen/hvm.h
+++ b/include/xen/hvm.h
@@ -60,4 +60,6 @@ static inline int hvm_get_parameter(int idx, uint64_t *value)
 
 void xen_setup_callback_vector(void);
 
+int xen_set_upcall_vector(unsigned int cpu);
+
 #endif /* XEN_HVM_H__ */
diff --git a/include/xen/interface/hvm/hvm_op.h b/include/xen/interface/hvm/hvm_op.h
index f3097e79bb03..03134bf3cec1 100644
--- a/include/xen/interface/hvm/hvm_op.h
+++ b/include/xen/interface/hvm/hvm_op.h
@@ -46,4 +46,23 @@ struct xen_hvm_get_mem_type {
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_hvm_get_mem_type);
 
+#if defined(__i386__) || defined(__x86_64__)
+
+/*
+ * HVMOP_set_evtchn_upcall_vector: Set a <vector> that should be used for event
+ *                                 channel upcalls on the specified <vcpu>. If set,
+ *                                 this vector will be used in preference to the
+ *                                 domain global callback via (see
+ *                                 HVM_PARAM_CALLBACK_IRQ).
+ */
+#define HVMOP_set_evtchn_upcall_vector 23
+struct xen_hvm_evtchn_upcall_vector {
+       uint32_t vcpu;
+       uint8_t vector;
+};
+typedef struct xen_hvm_evtchn_upcall_vector xen_hvm_evtchn_upcall_vector_t;
+DEFINE_GUEST_HANDLE_STRUCT(xen_hvm_evtchn_upcall_vector_t);
+
+#endif /* defined(__i386__) || defined(__x86_64__) */
+
 #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
-- 
2.11.0
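[ Editor's note, not part of the patch: a minimal usage sketch of the
interface added to hvm_op.h above. It relies only on primitives the patch
itself uses (HYPERVISOR_hvm_op, the per-CPU xen_vcpu_id map); the function
name sketch_set_upcall is hypothetical. ]

#include <linux/percpu.h>
#include <xen/xen-ops.h>                /* per-CPU xen_vcpu_id          */
#include <xen/interface/hvm/hvm_op.h>   /* interface added by the patch */
#include <asm/xen/hypercall.h>          /* HYPERVISOR_hvm_op            */

/* Ask Xen to deliver event-channel upcalls for @cpu through @vector. */
static int sketch_set_upcall(unsigned int cpu, uint8_t vector)
{
        struct xen_hvm_evtchn_upcall_vector op = {
                .vcpu   = per_cpu(xen_vcpu_id, cpu), /* Xen's vCPU id, not
                                                        Linux's CPU number */
                .vector = vector,       /* must be a guest-owned IDT vector;
                                           the patch reuses
                                           HYPERVISOR_CALLBACK_VECTOR */
        };

        /* 0 on success; a negative errno if the hypervisor lacks support. */
        return HYPERVISOR_hvm_op(HVMOP_set_evtchn_upcall_vector, &op);
}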