From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Peter Zijlstra
Subject: [PATCH v3 1/2] x86/xen: remove xen_have_vcpu_info_placement flag
Date: Thu, 28 Oct 2021 09:27:47 +0200
Message-Id: <20211028072748.29862-2-jgross@suse.com>
In-Reply-To: <20211028072748.29862-1-jgross@suse.com>
References: <20211028072748.29862-1-jgross@suse.com>

The flag xen_have_vcpu_info_placement was needed to support Xen
hypervisors older than version 3.4, which didn't support the
VCPUOP_register_vcpu_info hypercall. Today the Linux kernel requires
at least Xen 4.0 to run, so xen_have_vcpu_info_placement can be
dropped. (In theory the flag also guarded against the
VCPUOP_register_vcpu_info hypercall failing for reasons other than
being unsupported, but the only such failures would be parameter
errors, which ought not to be made anyway.)

This allows some functions to return void, as they can never fail.

Signed-off-by: Juergen Gross
Acked-by: Peter Zijlstra (Intel)
Reviewed-by: Boris Ostrovsky
---
 arch/x86/xen/enlighten.c     | 97 +++++++++--------------------------
 arch/x86/xen/enlighten_hvm.c |  6 +--
 arch/x86/xen/enlighten_pv.c  | 28 ++---------
 arch/x86/xen/smp.c           | 24 ---------
 arch/x86/xen/xen-ops.h       |  4 +-
 5 files changed, 33 insertions(+), 126 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 95d970359e17..006b4a814fac 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -84,21 +84,6 @@ EXPORT_SYMBOL(xen_start_flags);
  */
 struct shared_info *HYPERVISOR_shared_info = &xen_dummy_shared_info;
 
-/*
- * Flag to determine whether vcpu info placement is available on all
- * VCPUs. We assume it is to start with, and then set it to zero on
- * the first failure. This is because it can succeed on some VCPUs
- * and not others, since it can involve hypervisor memory allocation,
- * or because the guest failed to guarantee all the appropriate
- * constraints on all VCPUs (ie buffer can't cross a page boundary).
- *
- * Note that any particular CPU may be using a placed vcpu structure,
- * but we can only optimise if the all are.
- *
- * 0: not available, 1: available
- */
-int xen_have_vcpu_info_placement = 1;
-
 static int xen_cpu_up_online(unsigned int cpu)
 {
 	xen_init_lock_cpu(cpu);
@@ -124,10 +109,8 @@ int xen_cpuhp_setup(int (*cpu_up_prepare_cb)(unsigned int),
 	return rc >= 0 ? 0 : rc;
 }
 
-static int xen_vcpu_setup_restore(int cpu)
+static void xen_vcpu_setup_restore(int cpu)
 {
-	int rc = 0;
-
 	/* Any per_cpu(xen_vcpu) is stale, so reset it */
 	xen_vcpu_info_reset(cpu);
 
@@ -136,11 +119,8 @@ static int xen_vcpu_setup_restore(int cpu)
 	 * be handled by hotplug.
 	 */
 	if (xen_pv_domain() ||
-	    (xen_hvm_domain() && cpu_online(cpu))) {
-		rc = xen_vcpu_setup(cpu);
-	}
-
-	return rc;
+	    (xen_hvm_domain() && cpu_online(cpu)))
+		xen_vcpu_setup(cpu);
 }
 
 /*
@@ -150,7 +130,7 @@ static int xen_vcpu_setup_restore(int cpu)
  */
 void xen_vcpu_restore(void)
 {
-	int cpu, rc;
+	int cpu;
 
 	for_each_possible_cpu(cpu) {
 		bool other_cpu = (cpu != smp_processor_id());
@@ -170,20 +150,9 @@ void xen_vcpu_restore(void)
 		if (xen_pv_domain() || xen_feature(XENFEAT_hvm_safe_pvclock))
 			xen_setup_runstate_info(cpu);
 
-		rc = xen_vcpu_setup_restore(cpu);
-		if (rc)
-			pr_emerg_once("vcpu restore failed for cpu=%d err=%d. "
-					"System will hang.\n", cpu, rc);
-		/*
-		 * In case xen_vcpu_setup_restore() fails, do not bring up the
-		 * VCPU. This helps us avoid the resulting OOPS when the VCPU
-		 * accesses pvclock_vcpu_time via xen_vcpu (which is NULL.)
-		 * Note that this does not improve the situation much -- now the
-		 * VM hangs instead of OOPSing -- with the VCPUs that did not
-		 * fail, spinning in stop_machine(), waiting for the failed
-		 * VCPUs to come up.
-		 */
-		if (other_cpu && is_up && (rc == 0) &&
+		xen_vcpu_setup_restore(cpu);
+
+		if (other_cpu && is_up &&
 		    HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL))
 			BUG();
 	}
@@ -200,7 +169,7 @@ void xen_vcpu_info_reset(int cpu)
 	}
 }
 
-int xen_vcpu_setup(int cpu)
+void xen_vcpu_setup(int cpu)
 {
 	struct vcpu_register_vcpu_info info;
 	int err;
@@ -221,44 +190,26 @@ int xen_vcpu_setup(int cpu)
 	 */
 	if (xen_hvm_domain()) {
 		if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
-			return 0;
+			return;
 	}
 
-	if (xen_have_vcpu_info_placement) {
-		vcpup = &per_cpu(xen_vcpu_info, cpu);
-		info.mfn = arbitrary_virt_to_mfn(vcpup);
-		info.offset = offset_in_page(vcpup);
+	vcpup = &per_cpu(xen_vcpu_info, cpu);
+	info.mfn = arbitrary_virt_to_mfn(vcpup);
+	info.offset = offset_in_page(vcpup);
 
-		/*
-		 * Check to see if the hypervisor will put the vcpu_info
-		 * structure where we want it, which allows direct access via
-		 * a percpu-variable.
-		 * N.B. This hypercall can _only_ be called once per CPU.
-		 * Subsequent calls will error out with -EINVAL. This is due to
-		 * the fact that hypervisor has no unregister variant and this
-		 * hypercall does not allow to over-write info.mfn and
-		 * info.offset.
-		 */
-		err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info,
-					 xen_vcpu_nr(cpu), &info);
-
-		if (err) {
-			pr_warn_once("register_vcpu_info failed: cpu=%d err=%d\n",
-				     cpu, err);
-			xen_have_vcpu_info_placement = 0;
-		} else {
-			/*
-			 * This cpu is using the registered vcpu info, even if
-			 * later ones fail to.
-			 */
-			per_cpu(xen_vcpu, cpu) = vcpup;
-		}
-	}
-
-	if (!xen_have_vcpu_info_placement)
-		xen_vcpu_info_reset(cpu);
+	/*
+	 * N.B. This hypercall can _only_ be called once per CPU.
+	 * Subsequent calls will error out with -EINVAL. This is due to
+	 * the fact that hypervisor has no unregister variant and this
+	 * hypercall does not allow to over-write info.mfn and
+	 * info.offset.
+	 */
+	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
+				 &info);
+	if (err)
+		panic("register_vcpu_info failed: cpu=%d err=%d\n", cpu, err);
 
-	return ((per_cpu(xen_vcpu, cpu) == NULL) ? -ENODEV : 0);
+	per_cpu(xen_vcpu, cpu) = vcpup;
 }
 
 void __init xen_banner(void)
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index e68ea5f4ad1c..42300941ec29 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -163,9 +163,9 @@ static int xen_cpu_up_prepare_hvm(unsigned int cpu)
 		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);
 	else
 		per_cpu(xen_vcpu_id, cpu) = cpu;
-	rc = xen_vcpu_setup(cpu);
-	if (rc || !xen_have_vector_callback)
-		return rc;
+	xen_vcpu_setup(cpu);
+	if (!xen_have_vector_callback)
+		return 0;
 
 	if (xen_feature(XENFEAT_hvm_safe_pvclock))
 		xen_setup_timer(cpu);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index a7b7d674f500..2635a00be42d 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -993,31 +993,13 @@ void __init xen_setup_vcpu_info_placement(void)
 	for_each_possible_cpu(cpu) {
 		/* Set up direct vCPU id mapping for PV guests. */
 		per_cpu(xen_vcpu_id, cpu) = cpu;
-
-		/*
-		 * xen_vcpu_setup(cpu) can fail -- in which case it
-		 * falls back to the shared_info version for cpus
-		 * where xen_vcpu_nr(cpu) < MAX_VIRT_CPUS.
-		 *
-		 * xen_cpu_up_prepare_pv() handles the rest by failing
-		 * them in hotplug.
-		 */
-		(void) xen_vcpu_setup(cpu);
+		xen_vcpu_setup(cpu);
 	}
 
-	/*
-	 * xen_vcpu_setup managed to place the vcpu_info within the
-	 * percpu area for all cpus, so make use of it.
-	 */
-	if (xen_have_vcpu_info_placement) {
-		pv_ops.irq.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
-		pv_ops.irq.irq_disable =
-			__PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
-		pv_ops.irq.irq_enable =
-			__PV_IS_CALLEE_SAVE(xen_irq_enable_direct);
-		pv_ops.mmu.read_cr2 =
-			__PV_IS_CALLEE_SAVE(xen_read_cr2_direct);
-	}
+	pv_ops.irq.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
+	pv_ops.irq.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
+	pv_ops.irq.irq_enable = __PV_IS_CALLEE_SAVE(xen_irq_enable_direct);
+	pv_ops.mmu.read_cr2 = __PV_IS_CALLEE_SAVE(xen_read_cr2_direct);
 }
 
 static const struct pv_info xen_info __initconst = {
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index c1b2f764b29a..bafa61b1482f 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -121,34 +121,10 @@ int xen_smp_intr_init(unsigned int cpu)
 
 void __init xen_smp_cpus_done(unsigned int max_cpus)
 {
-	int cpu, rc, count = 0;
-
 	if (xen_hvm_domain())
 		native_smp_cpus_done(max_cpus);
 	else
 		calculate_max_logical_packages();
-
-	if (xen_have_vcpu_info_placement)
-		return;
-
-	for_each_online_cpu(cpu) {
-		if (xen_vcpu_nr(cpu) < MAX_VIRT_CPUS)
-			continue;
-
-		rc = remove_cpu(cpu);
-
-		if (rc == 0) {
-			/*
-			 * Reset vcpu_info so this cpu cannot be onlined again.
-			 */
-			xen_vcpu_info_reset(cpu);
-			count++;
-		} else {
-			pr_warn("%s: failed to bring CPU %d down, error %d\n",
-				__func__, cpu, rc);
-		}
-	}
-	WARN(count, "%s: brought %d CPUs offline\n", __func__, count);
 }
 
 void xen_smp_send_reschedule(int cpu)
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 8bc8b72a205d..fd0fec6e92f4 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -76,9 +76,7 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
 
 bool xen_vcpu_stolen(int vcpu);
 
-extern int xen_have_vcpu_info_placement;
-
-int xen_vcpu_setup(int cpu);
+void xen_vcpu_setup(int cpu);
 void xen_vcpu_info_reset(int cpu);
 void xen_setup_vcpu_info_placement(void);
 
-- 
2.26.2
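
The core of the patch above is an error-handling simplification: a
"register, and fall back on failure" path becomes "register, and panic
on failure", which is what lets the setup functions return void. A
minimal standalone C sketch of that before/after shape (hypothetical
names throughout; this is not the kernel code itself):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for VCPUOP_register_vcpu_info: with pre-3.4
 * hypervisors gone, it can only fail on caller parameter errors. */
static int register_vcpu_info(int cpu)
{
	return 0;	/* success */
}

/* After the patch: no fallback flag, no error code to propagate.
 * A failure can only be a caller bug, so it is fatal (models panic()). */
static void vcpu_setup(int cpu)
{
	int err = register_vcpu_info(cpu);

	if (err) {
		fprintf(stderr, "register_vcpu_info failed: cpu=%d err=%d\n",
			cpu, err);
		abort();
	}
}

int main(void)
{
	for (int cpu = 0; cpu < 4; cpu++)
		vcpu_setup(cpu);
	puts("vcpu_info registered for all CPUs");
	return 0;
}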
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Deep Shah, "VMware, Inc.", Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Dave Hansen, "H. Peter Anvin", Boris Ostrovsky,
 Stefano Stabellini, Peter Zijlstra
Subject: [PATCH v3 2/2] x86/xen: switch initial pvops IRQ functions to dummy ones
Date: Thu, 28 Oct 2021 09:27:48 +0200
Message-Id: <20211028072748.29862-3-jgross@suse.com>
In-Reply-To: <20211028072748.29862-1-jgross@suse.com>
References: <20211028072748.29862-1-jgross@suse.com>

The initial pvops functions handling irq flags will only ever be
called before interrupts are enabled, so switch them to dummy
functions:

- xen_save_fl() can always return 0
- xen_irq_disable() is a nop
- xen_irq_enable() can BUG()

Add some generic paravirt functions for that purpose.
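In other words, until interrupts are first enabled the three callbacks
have fixed, trivial semantics. A standalone C sketch of what they
collapse to (hypothetical names; the real stubs are paravirt_ret0,
paravirt_nop and paravirt_BUG):

#include <stdlib.h>

/* Interrupts are still off, so "save flags" can report exactly that. */
static unsigned long early_save_fl(void)
{
	return 0;
}

/* Disabling what is already disabled needs no work at all. */
static void early_irq_disable(void)
{
}

/* Enabling interrupts this early would be a bug; models BUG(). */
static void early_irq_enable(void)
{
	abort();
}

int main(void)
{
	early_irq_disable();		/* harmless no-op */
	(void)early_irq_enable;		/* never called this early */
	return (int)early_save_fl();	/* always 0 */
}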
Signed-off-by: Juergen Gross
Acked-by: Peter Zijlstra (Intel)
Reviewed-by: Boris Ostrovsky
---
V3:
- make paravirt_BUG() noinstr
---
 arch/x86/include/asm/paravirt_types.h |  2 +
 arch/x86/kernel/paravirt.c            | 13 +++++-
 arch/x86/xen/enlighten.c              | 19 +--------
 arch/x86/xen/irq.c                    | 61 ++-------------------------
 4 files changed, 20 insertions(+), 75 deletions(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index d9d6b0203ec4..fc1151e77569 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -577,7 +577,9 @@ void paravirt_leave_lazy_mmu(void);
 void paravirt_flush_lazy_mmu(void);
 
 void _paravirt_nop(void);
+void paravirt_BUG(void);
 u64 _paravirt_ident_64(u64);
+unsigned long paravirt_ret0(void);
 
 #define paravirt_nop	((void *)_paravirt_nop)
 
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 04cafc057bed..b44814dfe83f 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -46,6 +46,17 @@ asm (".pushsection .entry.text, \"ax\"\n"
      ".type _paravirt_nop, @function\n\t"
      ".popsection");
 
+/* stub always returning 0. */
+asm (".pushsection .entry.text, \"ax\"\n"
+     ".global paravirt_ret0\n"
+     "paravirt_ret0:\n\t"
+     "xor %" _ASM_AX ", %" _ASM_AX ";\n\t"
+     "ret\n\t"
+     ".size paravirt_ret0, . - paravirt_ret0\n\t"
+     ".type paravirt_ret0, @function\n\t"
+     ".popsection");
+
+
 void __init default_banner(void)
 {
 	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
@@ -53,7 +64,7 @@ void __init default_banner(void)
 }
 
 /* Undefined instruction for dealing with missing ops pointers. */
-static void paravirt_BUG(void)
+noinstr void paravirt_BUG(void)
 {
 	BUG();
 }
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 006b4a814fac..30c6e986a6cd 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -31,25 +31,10 @@ EXPORT_SYMBOL_GPL(hypercall_page);
  * Pointer to the xen_vcpu_info structure or
  * &HYPERVISOR_shared_info->vcpu_info[cpu]. See xen_hvm_init_shared_info
  * and xen_vcpu_setup for details. By default it points to share_info->vcpu_info
- * but if the hypervisor supports VCPUOP_register_vcpu_info then it can point
- * to xen_vcpu_info. The pointer is used in __xen_evtchn_do_upcall to
- * acknowledge pending events.
- * Also more subtly it is used by the patched version of irq enable/disable
- * e.g. xen_irq_enable_direct and xen_iret in PV mode.
- *
- * The desire to be able to do those mask/unmask operations as a single
- * instruction by using the per-cpu offset held in %gs is the real reason
- * vcpu info is in a per-cpu pointer and the original reason for this
- * hypercall.
- *
+ * but during boot it is switched to point to xen_vcpu_info.
+ * The pointer is used in __xen_evtchn_do_upcall to acknowledge pending events.
  */
 DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
-
-/*
- * Per CPU pages used if hypervisor supports VCPUOP_register_vcpu_info
- * hypercall. This can be used both in PV and PVHVM mode. The structure
- * overrides the default per_cpu(xen_vcpu, cpu) value.
- */
 DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
 
 /* Linux <-> Xen vCPU id mapping */
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index dfa091d79c2e..ae8537583102 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -24,60 +24,6 @@ void xen_force_evtchn_callback(void)
 	(void)HYPERVISOR_xen_version(0, NULL);
 }
 
-asmlinkage __visible unsigned long xen_save_fl(void)
-{
-	struct vcpu_info *vcpu;
-	unsigned long flags;
-
-	vcpu = this_cpu_read(xen_vcpu);
-
-	/* flag has opposite sense of mask */
-	flags = !vcpu->evtchn_upcall_mask;
-
-	/* convert to IF type flag
-	   -0 -> 0x00000000
-	   -1 -> 0xffffffff
-	*/
-	return (-flags) & X86_EFLAGS_IF;
-}
-PV_CALLEE_SAVE_REGS_THUNK(xen_save_fl);
-
-asmlinkage __visible void xen_irq_disable(void)
-{
-	/* There's a one instruction preempt window here. We need to
-	   make sure we're don't switch CPUs between getting the vcpu
-	   pointer and updating the mask. */
-	preempt_disable();
-	this_cpu_read(xen_vcpu)->evtchn_upcall_mask = 1;
-	preempt_enable_no_resched();
-}
-PV_CALLEE_SAVE_REGS_THUNK(xen_irq_disable);
-
-asmlinkage __visible void xen_irq_enable(void)
-{
-	struct vcpu_info *vcpu;
-
-	/*
-	 * We may be preempted as soon as vcpu->evtchn_upcall_mask is
-	 * cleared, so disable preemption to ensure we check for
-	 * events on the VCPU we are still running on.
-	 */
-	preempt_disable();
-
-	vcpu = this_cpu_read(xen_vcpu);
-	vcpu->evtchn_upcall_mask = 0;
-
-	/* Doesn't matter if we get preempted here, because any
-	   pending event will get dealt with anyway. */
-
-	barrier(); /* unmask then check (avoid races) */
-	if (unlikely(vcpu->evtchn_upcall_pending))
-		xen_force_evtchn_callback();
-
-	preempt_enable();
-}
-PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
-
 static void xen_safe_halt(void)
 {
 	/* Blocking includes an implicit local_irq_enable(). */
@@ -95,9 +41,10 @@ static void xen_halt(void)
 }
 
 static const struct pv_irq_ops xen_irq_ops __initconst = {
-	.save_fl = PV_CALLEE_SAVE(xen_save_fl),
-	.irq_disable = PV_CALLEE_SAVE(xen_irq_disable),
-	.irq_enable = PV_CALLEE_SAVE(xen_irq_enable),
+	/* Initial interrupt flag handling only called while interrupts off. */
+	.save_fl = __PV_IS_CALLEE_SAVE(paravirt_ret0),
+	.irq_disable = __PV_IS_CALLEE_SAVE(paravirt_nop),
+	.irq_enable = __PV_IS_CALLEE_SAVE(paravirt_BUG),
 
 	.safe_halt = xen_safe_halt,
 	.halt = xen_halt,
-- 
2.26.2
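
Taken together, the series leaves a two-stage setup: the __initconst
xen_irq_ops table starts out with the dummy callbacks (patch 2), and
xen_setup_vcpu_info_placement() later switches unconditionally to the
direct, per-CPU-based variants once every vcpu_info is registered
(patch 1). Condensed from the hunks above (a paraphrase for
orientation, not a literal excerpt):

/* Early boot (xen_irq_ops, patch 2): interrupts are off, so the
 * flag-handling callbacks are dummies. */
.save_fl     = __PV_IS_CALLEE_SAVE(paravirt_ret0),
.irq_disable = __PV_IS_CALLEE_SAVE(paravirt_nop),
.irq_enable  = __PV_IS_CALLEE_SAVE(paravirt_BUG),

/* Later (xen_setup_vcpu_info_placement(), patch 1): every CPU's
 * vcpu_info is registered, so install the direct implementations. */
pv_ops.irq.save_fl     = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
pv_ops.irq.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
pv_ops.irq.irq_enable  = __PV_IS_CALLEE_SAVE(xen_irq_enable_direct);
pv_ops.mmu.read_cr2    = __PV_IS_CALLEE_SAVE(xen_read_cr2_direct);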