From nobody Sun Apr 28 15:36:41 2024
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 28 Feb 2020 10:33:33 +0100
Message-ID: <20200228093334.36586-2-roger.pau@citrix.com>
In-Reply-To: <20200228093334.36586-1-roger.pau@citrix.com>
References: <20200228093334.36586-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v4 1/2] x86/smp: use a dedicated CPU mask in send_IPI_mask
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Some callers of send_IPI_mask pass the scratch cpumask as the mask
parameter of send_IPI_mask, so the scratch cpumask cannot be used by
the function. The following trace has been obtained with a debug patch
and shows one of those callers:

(XEN) scratch CPU mask already in use by arch/x86/mm.c#_get_page_type+0x1f9/0x1abf
(XEN) Xen BUG at smp.c:45
[...]
(XEN) Xen call trace:
(XEN)    [] R scratch_cpumask+0xd3/0xf9
(XEN)    [] F send_IPI_mask+0x72/0x1ca
(XEN)    [] F flush_area_mask+0x10c/0x16c
(XEN)    [] F arch/x86/mm.c#_get_page_type+0x3ff/0x1abf
(XEN)    [] F get_page_type+0xe/0x2c
(XEN)    [] F pv_set_gdt+0xa1/0x2aa
(XEN)    [] F arch_set_info_guest+0x1196/0x16ba
(XEN)    [] F default_initialise_vcpu+0xc7/0xd4
(XEN)    [] F arch_initialise_vcpu+0x61/0xcd
(XEN)    [] F do_vcpu_op+0x219/0x690
(XEN)    [] F pv_hypercall+0x2f6/0x593
(XEN)    [] F lstar_enter+0x112/0x120

_get_page_type uses the scratch cpumask to call flush_tlb_mask, which
in turn calls send_IPI_mask. Fix this by using a dedicated per-CPU
cpumask in send_IPI_mask.
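As a minimal sketch of the aliasing at play (not part of the patch;
plain uint64_t bitmasks stand in for cpumask_t, and send_ipi is a
hypothetical stand-in for send_IPI_mask):

#include <stdint.h>
#include <stdio.h>

/* One "scratch mask" shared by everything running on this CPU. */
static uint64_t scratch_mask;

/* Stand-in for send_IPI_mask() before the fix: it borrows the same
 * scratch mask to drop the local CPU (bit 0) from the target set. */
static void send_ipi(uint64_t *targets)
{
    uint64_t *tmp = &scratch_mask;  /* aliases the caller's mask */

    *tmp = *targets & ~UINT64_C(1);
    /* ...IPIs would be sent to the CPUs in *tmp here... */
}

int main(void)
{
    /* Stand-in for the _get_page_type() path: fills the scratch mask... */
    uint64_t *mask = &scratch_mask;

    *mask = 0xff;                   /* CPUs 0-7 need a TLB flush */

    /* ...then calls into the flush path, which clobbers it. */
    send_ipi(mask);
    printf("mask is now %#jx\n", (uintmax_t)*mask); /* 0xfe, not 0xff */
    return 0;
}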
Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/smp.c     | 4 +++-
 xen/arch/x86/smpboot.c | 9 ++++++++-
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 0461812cf6..072638f0f6 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -59,6 +59,8 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
     apic_write(APIC_ICR, cfg);
 }
 
+DECLARE_PER_CPU(cpumask_var_t, send_ipi_cpumask);
+
 /*
  * send_IPI_mask(cpumask, vector): sends @vector IPI to CPUs in @cpumask,
  * excluding the local CPU. @cpumask may be empty.
@@ -67,7 +69,7 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
 void send_IPI_mask(const cpumask_t *mask, int vector)
 {
     bool cpus_locked = false;
-    cpumask_t *scratch = this_cpu(scratch_cpumask);
+    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
 
     if ( in_irq() || in_mce_handler() || in_nmi_handler() )
     {
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index ad49f2dcd7..6c548b0b53 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -57,6 +57,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
 static cpumask_t scratch_cpu0mask;
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, send_ipi_cpumask);
+static cpumask_t send_ipi_cpu0mask;
+
 cpumask_t cpu_online_map __read_mostly;
 EXPORT_SYMBOL(cpu_online_map);
 
@@ -930,6 +933,8 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
         FREE_CPUMASK_VAR(per_cpu(cpu_core_mask, cpu));
         if ( per_cpu(scratch_cpumask, cpu) != &scratch_cpu0mask )
             FREE_CPUMASK_VAR(per_cpu(scratch_cpumask, cpu));
+        if ( per_cpu(send_ipi_cpumask, cpu) != &send_ipi_cpu0mask )
+            FREE_CPUMASK_VAR(per_cpu(send_ipi_cpumask, cpu));
     }
 
     cleanup_cpu_root_pgt(cpu);
@@ -1034,7 +1039,8 @@ static int cpu_smpboot_alloc(unsigned int cpu)
 
     if ( !(cond_zalloc_cpumask_var(&per_cpu(cpu_sibling_mask, cpu)) &&
            cond_zalloc_cpumask_var(&per_cpu(cpu_core_mask, cpu)) &&
-           cond_alloc_cpumask_var(&per_cpu(scratch_cpumask, cpu))) )
+           cond_alloc_cpumask_var(&per_cpu(scratch_cpumask, cpu)) &&
+           cond_alloc_cpumask_var(&per_cpu(send_ipi_cpumask, cpu))) )
         goto out;
 
     rc = 0;
@@ -1175,6 +1181,7 @@ void __init smp_prepare_boot_cpu(void)
     cpumask_set_cpu(cpu, &cpu_present_map);
 #if NR_CPUS > 2 * BITS_PER_LONG
     per_cpu(scratch_cpumask, cpu) = &scratch_cpu0mask;
+    per_cpu(send_ipi_cpumask, cpu) = &send_ipi_cpu0mask;
 #endif
 
     get_cpu_info()->use_pv_cr3 = false;
-- 
2.25.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Sun Apr 28 15:36:41 2024
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 28 Feb 2020 10:33:34 +0100
Message-ID: <20200228093334.36586-3-roger.pau@citrix.com>
In-Reply-To: <20200228093334.36586-1-roger.pau@citrix.com>
References: <20200228093334.36586-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v4 2/2] x86: add accessors for scratch cpu mask
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Current usage of the per-CPU scratch cpumask is dangerous since
there's no way to figure out if the mask is already being used except
for manual code inspection of all the callers and possible call
paths.

This is unsafe and not reliable, so introduce a minimal get/put
infrastructure to prevent nested usage of the scratch mask and usage
in interrupt context.

Move the definition of scratch_cpumask to smp.c in order to place the
declaration and the accessors as close as possible.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Fix commit message.
 - Split the cpumask taken section into two in _clear_irq_vector.
 - Add an empty statement in do_mmuext_op to avoid a break.
 - Change the logic used to release the scratch cpumask in
   __do_update_va_mapping.
 - Add a %ps print to scratch_cpumask helper.
 - Remove printing the current IP, as that would be done by BUG anyway.
 - Pass the cpumask to put_scratch_cpumask and zap the pointer.

Changes since v1:
 - Use __builtin_return_address(0) instead of __func__.
 - Move declaration of scratch_cpumask and scratch_cpumask accessor to
   smp.c.
 - Do not allow usage in #MC or #NMI context.
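For context, the calling pattern the accessors enforce looks like the
sketch below; get_scratch_cpumask()/put_scratch_cpumask() are the names
introduced by this patch, while the surrounding function is a
hypothetical caller:

/* Hypothetical caller, sketching the intended get/put discipline. */
static void flush_dirty_cpus(struct domain *d)
{
    cpumask_t *mask = get_scratch_cpumask();  /* BUGs on nested use */

    cpumask_and(mask, d->dirty_cpumask, &cpu_online_map);
    if ( !cpumask_empty(mask) )
        flush_tlb_mask(mask);

    put_scratch_cpumask(mask);  /* releases the mask and zaps the pointer */
}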
---
 xen/arch/x86/io_apic.c    |  6 ++++--
 xen/arch/x86/irq.c        | 14 ++++++++++----
 xen/arch/x86/mm.c         | 40 +++++++++++++++++++++++++++------------
 xen/arch/x86/msi.c        |  4 +++-
 xen/arch/x86/smp.c        | 25 ++++++++++++++++++++++++
 xen/arch/x86/smpboot.c    |  1 -
 xen/include/asm-x86/smp.h | 14 ++++++++++++++
 7 files changed, 84 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index e98e08e9c8..0bb994f0ba 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2236,10 +2236,11 @@ int io_apic_set_pci_routing (int ioapic, int pin, int irq, int edge_level, int a
     entry.vector = vector;
 
     if (cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS)) {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask = get_scratch_cpumask();
 
         cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
         SET_DEST(entry, logical, cpu_mask_to_apicid(mask));
+        put_scratch_cpumask(mask);
     } else {
         printk(XENLOG_ERR "IRQ%d: no target CPU (%*pb vs %*pb)\n",
                irq, CPUMASK_PR(desc->arch.cpu_mask), CPUMASK_PR(TARGET_CPUS));
@@ -2433,10 +2434,11 @@ int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
 
         if ( cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS) )
         {
-            cpumask_t *mask = this_cpu(scratch_cpumask);
+            cpumask_t *mask = get_scratch_cpumask();
 
             cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
             SET_DEST(rte, logical, cpu_mask_to_apicid(mask));
+            put_scratch_cpumask(mask);
         }
         else
         {
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index cc2eb8e925..19488dae21 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -196,7 +196,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
 {
     unsigned int cpu, old_vector, irq = desc->irq;
     unsigned int vector = desc->arch.vector;
-    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
+    cpumask_t *tmp_mask = get_scratch_cpumask();
 
     BUG_ON(!valid_irq_vector(vector));
 
@@ -208,6 +208,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
         ASSERT(per_cpu(vector_irq, cpu)[vector] == irq);
         per_cpu(vector_irq, cpu)[vector] = ~irq;
     }
+    put_scratch_cpumask(tmp_mask);
 
     desc->arch.vector = IRQ_VECTOR_UNASSIGNED;
     cpumask_clear(desc->arch.cpu_mask);
@@ -227,8 +228,9 @@ static void _clear_irq_vector(struct irq_desc *desc)
 
     /* If we were in motion, also clear desc->arch.old_vector */
     old_vector = desc->arch.old_vector;
-    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
 
+    tmp_mask = get_scratch_cpumask();
+    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
     for_each_cpu(cpu, tmp_mask)
     {
         ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
@@ -236,6 +238,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
         per_cpu(vector_irq, cpu)[old_vector] = ~irq;
     }
 
+    put_scratch_cpumask(tmp_mask);
     release_old_vec(desc);
 
     desc->arch.move_in_progress = 0;
@@ -1152,10 +1155,11 @@ static void irq_guest_eoi_timer_fn(void *data)
         break;
 
     case ACKTYPE_EOI:
-        cpu_eoi_map = this_cpu(scratch_cpumask);
+        cpu_eoi_map = get_scratch_cpumask();
         cpumask_copy(cpu_eoi_map, action->cpu_eoi_map);
         spin_unlock_irq(&desc->lock);
         on_selected_cpus(cpu_eoi_map, set_eoi_ready, desc, 0);
+        put_scratch_cpumask(cpu_eoi_map);
         return;
     }
 
@@ -2531,12 +2535,12 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
     unsigned int irq;
     static int warned;
     struct irq_desc *desc;
+    cpumask_t *affinity = get_scratch_cpumask();
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
         bool break_affinity = false, set_affinity = true;
         unsigned int vector;
-        cpumask_t *affinity = this_cpu(scratch_cpumask);
 
         if ( irq == 2 )
             continue;
@@ -2640,6 +2644,8 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
                    irq, CPUMASK_PR(affinity));
     }
 
+    put_scratch_cpumask(affinity);
+
     /* That doesn't seem sufficient. Give it 1ms. */
     local_irq_enable();
     mdelay(1);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 70b87c4830..b3b09a0219 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1262,7 +1262,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
          (l1e_owner == pg_owner) )
     {
         struct vcpu *v;
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask = get_scratch_cpumask();
 
         cpumask_clear(mask);
 
@@ -1279,6 +1279,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 
         if ( !cpumask_empty(mask) )
             flush_tlb_mask(mask);
+        put_scratch_cpumask(mask);
     }
 #endif /* CONFIG_PV_LDT_PAGING */
     put_page(page);
@@ -2903,7 +2904,7 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                  * vital that no other CPUs are left with mappings of a frame
                  * which is about to become writeable to the guest.
                  */
-                cpumask_t *mask = this_cpu(scratch_cpumask);
+                cpumask_t *mask = get_scratch_cpumask();
 
                 BUG_ON(in_irq());
                 cpumask_copy(mask, d->dirty_cpumask);
@@ -2919,6 +2920,7 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                     perfc_incr(need_flush_tlb_flush);
                     flush_tlb_mask(mask);
                 }
+                put_scratch_cpumask(mask);
 
                 /* We lose existing type and validity. */
                 nx &= ~(PGT_type_mask | PGT_validated);
@@ -3635,7 +3637,7 @@ long do_mmuext_op(
         case MMUEXT_TLB_FLUSH_MULTI:
         case MMUEXT_INVLPG_MULTI:
         {
-            cpumask_t *mask = this_cpu(scratch_cpumask);
+            cpumask_t *mask = get_scratch_cpumask();
 
             if ( unlikely(currd != pg_owner) )
                 rc = -EPERM;
@@ -3645,12 +3647,13 @@ long do_mmuext_op(
                                                    mask)) )
                 rc = -EINVAL;
             if ( unlikely(rc) )
-                break;
-
-            if ( op.cmd == MMUEXT_TLB_FLUSH_MULTI )
+                ;
+            else if ( op.cmd == MMUEXT_TLB_FLUSH_MULTI )
                 flush_tlb_mask(mask);
             else if ( __addr_ok(op.arg1.linear_addr) )
                 flush_tlb_one_mask(mask, op.arg1.linear_addr);
+            put_scratch_cpumask(mask);
+
             break;
         }
 
@@ -3683,7 +3686,7 @@ long do_mmuext_op(
             else if ( likely(cache_flush_permitted(currd)) )
             {
                 unsigned int cpu;
-                cpumask_t *mask = this_cpu(scratch_cpumask);
+                cpumask_t *mask = get_scratch_cpumask();
 
                 cpumask_clear(mask);
                 for_each_online_cpu(cpu)
@@ -3691,6 +3694,7 @@ long do_mmuext_op(
                                              per_cpu(cpu_sibling_mask, cpu)) )
                         __cpumask_set_cpu(cpu, mask);
                 flush_mask(mask, FLUSH_CACHE);
+                put_scratch_cpumask(mask);
             }
             else
                 rc = -EINVAL;
@@ -4156,12 +4160,13 @@ long do_mmu_update(
              * Force other vCPU-s of the affected guest to pick up L4 entry
              * changes (if any).
              */
-            unsigned int cpu = smp_processor_id();
-            cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
+            cpumask_t *mask = get_scratch_cpumask();
 
-            cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
+            cpumask_andnot(mask, pt_owner->dirty_cpumask,
+                           cpumask_of(smp_processor_id()));
             if ( !cpumask_empty(mask) )
                 flush_mask(mask, FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL);
+            put_scratch_cpumask(mask);
         }
 
         perfc_add(num_page_updates, i);
@@ -4353,7 +4358,7 @@ static int __do_update_va_mapping(
             mask = d->dirty_cpumask;
             break;
         default:
-            mask = this_cpu(scratch_cpumask);
+            mask = get_scratch_cpumask();
             rc = vcpumask_to_pcpumask(d, const_guest_handle_from_ptr(bmap_ptr,
                                                                      void),
                                       mask);
@@ -4373,7 +4378,7 @@ static int __do_update_va_mapping(
             mask = d->dirty_cpumask;
             break;
         default:
-            mask = this_cpu(scratch_cpumask);
+            mask = get_scratch_cpumask();
             rc = vcpumask_to_pcpumask(d, const_guest_handle_from_ptr(bmap_ptr,
                                                                      void),
                                       mask);
@@ -4384,6 +4389,17 @@ static int __do_update_va_mapping(
         break;
     }
 
+    switch ( flags & ~UVMF_FLUSHTYPE_MASK )
+    {
+    case UVMF_LOCAL:
+    case UVMF_ALL:
+        break;
+
+    default:
+        put_scratch_cpumask(mask);
+    }
+
+    return rc;
 }
 
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 161ee60dbe..6d198f8665 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -159,13 +159,15 @@ void msi_compose_msg(unsigned vector, const cpumask_t *cpu_mask, struct msi_msg
 
     if ( cpu_mask )
     {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask;
 
         if ( !cpumask_intersects(cpu_mask, &cpu_online_map) )
             return;
 
+        mask = get_scratch_cpumask();
         cpumask_and(mask, cpu_mask, &cpu_online_map);
         msg->dest32 = cpu_mask_to_apicid(mask);
+        put_scratch_cpumask(mask);
     }
 
     msg->address_hi = MSI_ADDR_BASE_HI;
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 072638f0f6..945dbabefe 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -25,6 +25,31 @@
 #include <...>
 #include <...>
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
+
+#ifndef NDEBUG
+cpumask_t *scratch_cpumask(bool use)
+{
+    static DEFINE_PER_CPU(void *, scratch_cpumask_use);
+
+    /*
+     * Due to reentrancy scratch cpumask cannot be used in IRQ, #MC or #NMI
+     * context.
+     */
+    BUG_ON(in_irq() || in_mce_handler() || in_nmi_handler());
+
+    if ( use && unlikely(this_cpu(scratch_cpumask_use)) )
+    {
+        printk("scratch CPU mask already in use by %ps (%p)\n",
+               this_cpu(scratch_cpumask_use), this_cpu(scratch_cpumask_use));
+        BUG();
+    }
+    this_cpu(scratch_cpumask_use) = use ? __builtin_return_address(0) : NULL;
+
+    return use ? this_cpu(scratch_cpumask) : NULL;
+}
+#endif
+
 /* Helper functions to prepare APIC register values. */
 static unsigned int prepare_ICR(unsigned int shortcut, int vector)
 {
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 6c548b0b53..e26b61a8b4 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -54,7 +54,6 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_mask);
 /* representing HT and core siblings of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 
-DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
 static cpumask_t scratch_cpu0mask;
 
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, send_ipi_cpumask);
diff --git a/xen/include/asm-x86/smp.h b/xen/include/asm-x86/smp.h
index 92d69a5ea0..d2f0bb0b4f 100644
--- a/xen/include/asm-x86/smp.h
+++ b/xen/include/asm-x86/smp.h
@@ -23,6 +23,20 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
 DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 DECLARE_PER_CPU(cpumask_var_t, scratch_cpumask);
 
+#ifndef NDEBUG
+/* Not to be called directly, use {get/put}_scratch_cpumask(). */
+cpumask_t *scratch_cpumask(bool use);
+#define get_scratch_cpumask() scratch_cpumask(true)
+#define put_scratch_cpumask(m) do {             \
+    BUG_ON((m) != this_cpu(scratch_cpumask));   \
+    scratch_cpumask(false);                     \
+    (m) = NULL;                                 \
+} while ( false )
+#else
+#define get_scratch_cpumask() this_cpu(scratch_cpumask)
+#define put_scratch_cpumask(m)
+#endif
+
 /*
  * Do we, for platform reasons, need to actually keep CPUs online when we
  * would otherwise prefer them to be off?
-- 
2.25.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
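
The guard technique in the patch above also generalizes outside Xen. A
minimal self-contained sketch, assuming a single-threaded caller,
hypothetical names (get_scratch/put_scratch), and GCC/Clang's
__builtin_return_address as used in the patch:

#include <assert.h>
#include <stddef.h>

static unsigned long scratch;      /* the shared scratch buffer */
static const void *scratch_user;   /* return address of current holder */

static unsigned long *get_scratch(void)
{
    /* Refuse nested use; record who took the buffer for diagnostics. */
    assert(scratch_user == NULL);
    scratch_user = __builtin_return_address(0);
    return &scratch;
}

static void put_scratch(unsigned long **p)
{
    assert(*p == &scratch);        /* only the real buffer may be put */
    scratch_user = NULL;
    *p = NULL;                     /* zap the caller's pointer */
}

int main(void)
{
    unsigned long *m = get_scratch();

    *m = 0xff;
    put_scratch(&m);               /* m is NULL from here on */
    return 0;
}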