From nobody Mon May 6 19:29:07 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH 1/6] xen/arm: Support detection of CPU features in other ID registers
Date: Thu, 5 Nov 2020 18:55:58 +0000
Message-Id: <20201105185603.24149-2-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>

The current Arm boot_cpu_feature64() and
boot_cpu_feature32() macros are hardcoded to only detect features in
ID_AA64PFR{0,1}_EL1 and ID_PFR{0,1} respectively. This patch replaces
these macros with a new macro, boot_cpu_feature(), which takes an
explicit ID register name as an argument.

While we're here, cull cpu_feature64() and cpu_feature32() as they have
no callers (we only ever use the boot CPU features), and update the
printk() messages in setup.c to use the new macro.

Signed-off-by: Ash Wilding
Acked-by: Julien Grall
---
 xen/arch/arm/setup.c             |  8 +++---
 xen/include/asm-arm/cpufeature.h | 44 +++++++++++++++-----------------
 2 files changed, 24 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7fcff9af2a..5121f06fc5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -134,16 +134,16 @@ static void __init processor_id(void)
            cpu_has_gicv3 ? " GICv3-SysReg" : "");
 
     /* Warn user if we find unknown floating-point features */
-    if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
+    if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
         printk(XENLOG_WARNING "WARNING: Unknown Floating-point ID:%d, "
                "this may result in corruption on the platform\n",
-               boot_cpu_feature64(fp));
+               boot_cpu_feature(pfr64, fp));
 
     /* Warn user if we find unknown AdvancedSIMD features */
-    if ( cpu_has_simd && (boot_cpu_feature64(simd) >= 2) )
+    if ( cpu_has_simd && (boot_cpu_feature(pfr64, simd) >= 2) )
         printk(XENLOG_WARNING "WARNING: Unknown AdvancedSIMD ID:%d, "
                "this may result in corruption on the platform\n",
-               boot_cpu_feature64(simd));
+               boot_cpu_feature(pfr64, simd));
 
     printk("  Debug Features: %016"PRIx64" %016"PRIx64"\n",
            boot_cpu_data.dbg64.bits[0], boot_cpu_data.dbg64.bits[1]);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 10878ead8a..f9281ea343 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -1,39 +1,35 @@
 #ifndef __ASM_ARM_CPUFEATURE_H
 #define __ASM_ARM_CPUFEATURE_H
 
+#define boot_cpu_feature(idreg, feat) (boot_cpu_data.idreg.feat)
+
 #ifdef CONFIG_ARM_64
-#define cpu_feature64(c, feat)    ((c)->pfr64.feat)
-#define boot_cpu_feature64(feat)  (boot_cpu_data.pfr64.feat)
-
-#define cpu_has_el0_32    (boot_cpu_feature64(el0) == 2)
-#define cpu_has_el0_64    (boot_cpu_feature64(el0) >= 1)
-#define cpu_has_el1_32    (boot_cpu_feature64(el1) == 2)
-#define cpu_has_el1_64    (boot_cpu_feature64(el1) >= 1)
-#define cpu_has_el2_32    (boot_cpu_feature64(el2) == 2)
-#define cpu_has_el2_64    (boot_cpu_feature64(el2) >= 1)
-#define cpu_has_el3_32    (boot_cpu_feature64(el3) == 2)
-#define cpu_has_el3_64    (boot_cpu_feature64(el3) >= 1)
-#define cpu_has_fp        (boot_cpu_feature64(fp) < 8)
-#define cpu_has_simd      (boot_cpu_feature64(simd) < 8)
-#define cpu_has_gicv3     (boot_cpu_feature64(gic) == 1)
+#define cpu_has_el0_32    (boot_cpu_feature(pfr64, el0) == 2)
+#define cpu_has_el0_64    (boot_cpu_feature(pfr64, el0) >= 1)
+#define cpu_has_el1_32    (boot_cpu_feature(pfr64, el1) == 2)
+#define cpu_has_el1_64    (boot_cpu_feature(pfr64, el1) >= 1)
+#define cpu_has_el2_32    (boot_cpu_feature(pfr64, el2) == 2)
+#define cpu_has_el2_64    (boot_cpu_feature(pfr64, el2) >= 1)
+#define cpu_has_el3_32    (boot_cpu_feature(pfr64, el3) == 2)
+#define cpu_has_el3_64    (boot_cpu_feature(pfr64, el3) >= 1)
+#define cpu_has_fp        (boot_cpu_feature(pfr64, fp) < 8)
+#define cpu_has_simd      (boot_cpu_feature(pfr64, simd) < 8)
+#define cpu_has_gicv3     (boot_cpu_feature(pfr64, gic) == 1)
 #endif
 
-#define cpu_feature32(c, feat)    ((c)->pfr32.feat)
-#define boot_cpu_feature32(feat)  (boot_cpu_data.pfr32.feat)
-
-#define cpu_has_arm       (boot_cpu_feature32(arm) == 1)
-#define cpu_has_thumb     (boot_cpu_feature32(thumb) >= 1)
-#define cpu_has_thumb2    (boot_cpu_feature32(thumb) >= 3)
-#define cpu_has_jazelle   (boot_cpu_feature32(jazelle) > 0)
-#define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
+#define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
+#define cpu_has_thumb     (boot_cpu_feature(pfr32, thumb) >= 1)
+#define cpu_has_thumb2    (boot_cpu_feature(pfr32, thumb) >= 3)
+#define cpu_has_jazelle   (boot_cpu_feature(pfr32, jazelle) > 0)
+#define cpu_has_thumbee   (boot_cpu_feature(pfr32, thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
 #ifdef CONFIG_ARM_32
-#define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
+#define cpu_has_gentimer  (boot_cpu_feature(pfr32, gentimer) == 1)
 #else
 #define cpu_has_gentimer  (1)
 #endif
-#define cpu_has_security  (boot_cpu_feature32(security) > 0)
+#define cpu_has_security  (boot_cpu_feature(pfr32, security) > 0)
 
 #define ARM64_WORKAROUND_CLEAN_CACHE    0
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE 1
-- 
2.24.3 (Apple Git-128)
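As an illustration of the pattern the patch above introduces (a sketch
only; the example_* names below are hypothetical stand-ins for Xen's
real struct cpuinfo_arm and boot_cpu_data, not the actual definitions),
the new macro relies on token pasting so that any ID-register bitfield
hung off the boot CPU's cpuinfo becomes reachable through one spelling:

#include <stdint.h>

/* Hypothetical stand-in for (part of) Xen's struct cpuinfo_arm. */
struct example_cpuinfo {
    union {
        uint64_t bits[2];
        struct {
            unsigned long el0 : 4;    /* ID_AA64PFR0_EL1.EL0 */
            unsigned long el1 : 4;    /* ID_AA64PFR0_EL1.EL1 */
            /* ... remaining 4-bit ID fields ... */
        };
    } pfr64;
};

static struct example_cpuinfo example_boot_cpu_data;

/*
 * Same shape as the patch's boot_cpu_feature(): the first argument
 * names the union member, the second the bitfield within it, so
 * example_boot_cpu_feature(pfr64, el0) expands to
 * example_boot_cpu_data.pfr64.el0.
 */
#define example_boot_cpu_feature(idreg, feat) \
    (example_boot_cpu_data.idreg.feat)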
From nobody Mon May 6 19:29:07 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH 2/6] xen/arm: Add detection of Armv8.1-LSE atomic instructions
Date: Thu, 5 Nov 2020 18:55:59 +0000
Message-Id: <20201105185603.24149-3-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>

Use the new infrastructure for detecting CPU features in other ID
registers to detect the presence of Armv8.1-LSE atomic instructions,
as reported by ID_AA64ISAR0_EL1.Atomic.

While we're here, print detection of these instructions in setup.c's
processor_id().

Signed-off-by: Ash Wilding
---
 xen/arch/arm/setup.c             |  5 +++--
 xen/include/asm-arm/cpufeature.h | 10 +++++++++-
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5121f06fc5..138e1957c5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -128,10 +128,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_lse_atomics ? " LSE-Atomics" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index f9281ea343..2366926e82 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -15,6 +15,7 @@
 #define cpu_has_fp        (boot_cpu_feature(pfr64, fp) < 8)
 #define cpu_has_simd      (boot_cpu_feature(pfr64, simd) < 8)
 #define cpu_has_gicv3     (boot_cpu_feature(pfr64, gic) == 1)
+#define cpu_has_lse_atomics (boot_cpu_feature(isa64, atomic) == 2)
 #endif
 
 #define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
@@ -187,8 +188,15 @@ struct cpuinfo_arm {
         };
     } mm64;
 
-    struct {
+    union {
         uint64_t bits[2];
+        struct {
+            unsigned long __res0 : 20;
+            unsigned long atomic : 4;
+            unsigned long __res1 : 40;
+
+            unsigned long __res2 : 64;
+        };
     } isa64;
 
 #endif
-- 
2.24.3 (Apple Git-128)
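For reference, the Atomic field that the new isa64 bitfield above
decodes lives in ID_AA64ISAR0_EL1 bits [23:20], and the architected
value 2 means the LSE instructions (CAS, SWP, LDADD, ...) are
implemented. A standalone sketch of the same check written against the
raw register (illustration only, AArch64-only; not code from the
patch):

#include <stdbool.h>
#include <stdint.h>

static inline bool example_has_lse_atomics(void)
{
    uint64_t isar0;

    /* Read the AArch64 instruction set attribute register 0. */
    asm volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r" (isar0));

    /* Atomic is bits [23:20]; 2 means LSE atomics are implemented. */
    return ((isar0 >> 20) & 0xf) == 2;
}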
From nobody Mon May 6 19:29:07 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH 3/6] xen/arm: Add ARM64_HAS_LSE_ATOMICS hwcap
Date: Thu, 5 Nov 2020 18:56:00 +0000
Message-Id: <20201105185603.24149-4-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>

This patch introduces the ARM64_HAS_LSE_ATOMICS hwcap. While doing
this, CONFIG_ARM64_LSE_ATOMICS is added to control whether the hwcap
is actually detected and set at runtime. Without this Kconfig being
set we will always fall back on LL/SC atomics using Armv8.0 exclusive
accesses.

Note this patch does not actually add the ALTERNATIVE() switching
based on the hwcap being detected and set; that comes later in the
series.
Signed-off-by: Ash Wilding
---
 xen/arch/arm/Kconfig             | 11 +++++++++++
 xen/arch/arm/Makefile            |  1 +
 xen/arch/arm/lse.c               | 13 +++++++++++++
 xen/include/asm-arm/cpufeature.h |  3 ++-
 4 files changed, 27 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/lse.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..febc41e492 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -78,6 +78,17 @@ config SBSA_VUART_CONSOLE
 	  Allows a guest to use SBSA Generic UART as a console.
 	  The SBSA Generic UART implements a subset of ARM PL011 UART.
 
+config ARM64_LSE_ATOMICS
+	bool "Armv8.1-LSE Atomics"
+	depends on ARM_64 && HAS_ALTERNATIVE
+	default y
+	---help---
+	  When set, dynamically patch Xen at runtime to use Armv8.1-LSE
+	  atomics when supported by the system.
+
+	  When unset, or when Armv8.1-LSE atomics are not supported by the
+	  system, fall back on LL/SC atomics using Armv8.0 exclusive accesses.
+
 config ARM_SSBD
 	bool "Speculative Store Bypass Disable" if EXPERT
 	depends on HAS_ALTERNATIVE
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e68bb..cadd0ad253 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -63,6 +63,7 @@ obj-y += vsmc.o
 obj-y += vpsci.o
 obj-y += vuart.o
 extra-y += $(TARGET_SUBARCH)/head.o
+obj-$(CONFIG_ARM64_LSE_ATOMICS) += lse.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/lse.c b/xen/arch/arm/lse.c
new file mode 100644
index 0000000000..8274dac671
--- /dev/null
+++ b/xen/arch/arm/lse.c
@@ -0,0 +1,13 @@
+
+#include
+#include
+
+static int __init update_lse_caps(void)
+{
+    if ( cpu_has_lse_atomics )
+        cpus_set_cap(ARM64_HAS_LSE_ATOMICS);
+
+    return 0;
+}
+
+__initcall(update_lse_caps);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 2366926e82..48c172ee29 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -42,8 +42,9 @@
 #define ARM_SSBD 7
 #define ARM_SMCCC_1_1 8
 #define ARM64_WORKAROUND_AT_SPECULATE 9
+#define ARM64_HAS_LSE_ATOMICS 10
 
-#define ARM_NCAPS 10
+#define ARM_NCAPS 11
 
 #ifndef __ASSEMBLY__
 
-- 
2.24.3 (Apple Git-128)
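For context on what "adding a hwcap" means mechanically: a capability
number like ARM64_HAS_LSE_ATOMICS indexes into a bitmap that
cpus_set_cap() sets and that alternatives patching later consults. A
simplified sketch of that shape (the example_* helpers below are
illustrative stand-ins, not Xen's actual cpus_set_cap() and
cpus_have_cap() implementations):

#include <stdbool.h>

#define EXAMPLE_NCAPS 11

/* One bit per capability, packed into unsigned longs. */
static unsigned long example_cpu_hwcaps[(EXAMPLE_NCAPS + 63) / 64];

static inline void example_cpus_set_cap(unsigned int num)
{
    example_cpu_hwcaps[num / 64] |= 1UL << (num % 64);
}

static inline bool example_cpus_have_cap(unsigned int num)
{
    return example_cpu_hwcaps[num / 64] & (1UL << (num % 64));
}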
From nobody Mon May 6 19:29:07 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH 4/6] xen/arm64: Port Linux LL/SC and LSE atomics helpers to Xen
Date: Thu, 5 Nov 2020 18:56:01 +0000
Message-Id: <20201105185603.24149-5-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>

This patch ports Linux's arm64 LL/SC and LSE atomics helpers to Xen,
using Linux 5.10-rc2 (last commit 3cea11cd5) as a basis. The opening
comment of each header file details the changes made to that file
while porting it to Xen.

!! NB: This patch breaks arm32 builds until the next patch in the
series ports Linux's 32-bit LL/SC helpers. The patches have been split
in this way to aid review and discussion.
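Before diving into the diff, it may help to see the two instruction
sequences being selected between. The pair below is an illustrative
sketch modelled on the helpers in this patch (not code taken verbatim
from it; AArch64 GCC/Clang inline asm): the Armv8.0 LL/SC version
retries on a load-exclusive/store-exclusive pair, while the Armv8.1
LSE version is a single far-atomic instruction.

/* LL/SC: loop until the exclusive store succeeds. */
static inline void example_atomic_add_llsc(int i, int *counter)
{
    unsigned long tmp;
    int result;

    asm volatile("1: ldxr    %w0, %2\n"
                 "   add     %w0, %w0, %w3\n"
                 "   stxr    %w1, %w0, %2\n"
                 "   cbnz    %w1, 1b"
                 : "=&r" (result), "=&r" (tmp), "+Q" (*counter)
                 : "Ir" (i));
}

/* LSE: one STADD, no retry loop. */
static inline void example_atomic_add_lse(int i, int *counter)
{
    asm volatile("stadd %w[i], %[v]"
                 : [i] "+r" (i), [v] "+Q" (*counter));
}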
Signed-off-by: Ash Wilding
---
 xen/include/asm-arm/arm64/atomic.h       | 242 +++++------
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 236 +++++++++++
 xen/include/asm-arm/arm64/atomic_lse.h   | 251 +++++++++++
 xen/include/asm-arm/arm64/cmpxchg.h      | 505 ++++++++++++++++-------
 xen/include/asm-arm/arm64/lse.h          |  53 +++
 xen/include/asm-arm/arm64/system.h       |   2 +-
 xen/include/asm-arm/atomic.h             |  15 +-
 xen/include/xen/compiler.h               |   3 +
 8 files changed, 1021 insertions(+), 286 deletions(-)
 create mode 100644 xen/include/asm-arm/arm64/atomic_ll_sc.h
 create mode 100644 xen/include/asm-arm/arm64/atomic_lse.h
 create mode 100644 xen/include/asm-arm/arm64/lse.h

diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
index 2d42567866..5632ff7b13 100644
--- a/xen/include/asm-arm/arm64/atomic.h
+++ b/xen/include/asm-arm/arm64/atomic.h
@@ -1,148 +1,124 @@
+
 /*
- * Based on arch/arm64/include/asm/atomic.h
- * which in turn is
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ *  - Rename header include guard to reflect Xen directory structure
+ *  - Drop redundant includes and redirect others to Xen equivalents
+ *  - Rename declarations from arch_atomic_() to atomic_()
+ *  - Drop atomic64_t helper declarations
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ARCH_ARM_ARM64_ATOMIC
-#define __ARCH_ARM_ARM64_ATOMIC
+#ifndef __ASM_ARM_ARM64_ATOMIC_H
+#define __ASM_ARM_ARM64_ATOMIC_H
 
-/*
- * AArch64 UP and SMP safe atomic ops. We use load exclusive and
- * store exclusive to ensure that these are atomic. We may loop
- * to ensure that the update happens.
- */
-static inline void atomic_add(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_add\n"
-"1: ldxr    %w0, %2\n"
-"   add     %w0, %w0, %w3\n"
-"   stxr    %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (i));
-}
+#include
+#include
 
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_add_return\n"
-"1: ldxr    %w0, %2\n"
-"   add     %w0, %w0, %w3\n"
-"   stlxr   %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (i)
-    : "memory");
-
-    smp_mb();
-    return result;
-}
+#include "lse.h"
+#include "cmpxchg.h"
 
-static inline void atomic_sub(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_sub\n"
-"1: ldxr    %w0, %2\n"
-"   sub     %w0, %w0, %w3\n"
-"   stxr    %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (i));
+#define ATOMIC_OP(op) \
+static inline void op(int i, atomic_t *v) \
+{ \
+    __lse_ll_sc_body(op, i, v); \
 }
 
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_sub_return\n"
-"1: ldxr    %w0, %2\n"
-"   sub     %w0, %w0, %w3\n"
-"   stlxr   %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (i)
-    : "memory");
-
-    smp_mb();
-    return result;
-}
+ATOMIC_OP(atomic_andnot)
+ATOMIC_OP(atomic_or)
+ATOMIC_OP(atomic_xor)
+ATOMIC_OP(atomic_add)
+ATOMIC_OP(atomic_and)
+ATOMIC_OP(atomic_sub)
 
-static inline void atomic_and(int m, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_and\n"
-"1: ldxr    %w0, %2\n"
-"   and     %w0, %w0, %w3\n"
-"   stxr    %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (m));
-}
+#undef ATOMIC_OP
 
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
-{
-    unsigned long tmp;
-    int oldval;
-
-    smp_mb();
-
-    asm volatile("// atomic_cmpxchg\n"
-"1: ldxr    %w1, %2\n"
-"   cmp     %w1, %w3\n"
-"   b.ne    2f\n"
-"   stxr    %w0, %w4, %2\n"
-"   cbnz    %w0, 1b\n"
-"2:"
-    : "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter)
-    : "Ir" (old), "r" (new)
-    : "cc");
-
-    smp_mb();
-    return oldval;
+#define ATOMIC_FETCH_OP(name, op) \
+static inline int op##name(int i, atomic_t *v) \
+{ \
+    return __lse_ll_sc_body(op##name, i, v); \
 }
 
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
-{
-    int c, old;
-
-    c = atomic_read(v);
-    while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
-        c = old;
-    return c;
-}
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
+#define ATOMIC_FETCH_OPS(op) \
+    ATOMIC_FETCH_OP(_relaxed, op) \
+    ATOMIC_FETCH_OP(_acquire, op) \
+    ATOMIC_FETCH_OP(_release, op) \
+    ATOMIC_FETCH_OP(        , op)
+
+ATOMIC_FETCH_OPS(atomic_fetch_andnot)
+ATOMIC_FETCH_OPS(atomic_fetch_or)
+ATOMIC_FETCH_OPS(atomic_fetch_xor)
+ATOMIC_FETCH_OPS(atomic_fetch_add)
+ATOMIC_FETCH_OPS(atomic_fetch_and)
+ATOMIC_FETCH_OPS(atomic_fetch_sub)
+ATOMIC_FETCH_OPS(atomic_add_return)
+ATOMIC_FETCH_OPS(atomic_sub_return)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_FETCH_OPS
+#define atomic_read(v)   __READ_ONCE((v)->counter)
+#define atomic_set(v, i) __WRITE_ONCE(((v)->counter), (i))
+
+#define atomic_add_return_relaxed       atomic_add_return_relaxed
+#define atomic_add_return_acquire       atomic_add_return_acquire
+#define atomic_add_return_release       atomic_add_return_release
+#define atomic_add_return               atomic_add_return
+
+#define atomic_sub_return_relaxed       atomic_sub_return_relaxed
+#define atomic_sub_return_acquire       atomic_sub_return_acquire
+#define atomic_sub_return_release       atomic_sub_return_release
+#define atomic_sub_return               atomic_sub_return
+
+#define atomic_fetch_add_relaxed        atomic_fetch_add_relaxed
+#define atomic_fetch_add_acquire        atomic_fetch_add_acquire
+#define atomic_fetch_add_release        atomic_fetch_add_release
+#define atomic_fetch_add                atomic_fetch_add
+
+#define atomic_fetch_sub_relaxed        atomic_fetch_sub_relaxed
+#define atomic_fetch_sub_acquire        atomic_fetch_sub_acquire
+#define atomic_fetch_sub_release        atomic_fetch_sub_release
+#define atomic_fetch_sub                atomic_fetch_sub
+
+#define atomic_fetch_and_relaxed        atomic_fetch_and_relaxed
+#define atomic_fetch_and_acquire        atomic_fetch_and_acquire
+#define atomic_fetch_and_release        atomic_fetch_and_release
+#define atomic_fetch_and                atomic_fetch_and
+
+#define atomic_fetch_andnot_relaxed     atomic_fetch_andnot_relaxed
+#define atomic_fetch_andnot_acquire     atomic_fetch_andnot_acquire
+#define atomic_fetch_andnot_release     atomic_fetch_andnot_release
+#define atomic_fetch_andnot             atomic_fetch_andnot
+
+#define atomic_fetch_or_relaxed         atomic_fetch_or_relaxed
+#define atomic_fetch_or_acquire         atomic_fetch_or_acquire
+#define atomic_fetch_or_release         atomic_fetch_or_release
+#define atomic_fetch_or                 atomic_fetch_or
+
+#define atomic_fetch_xor_relaxed        atomic_fetch_xor_relaxed
+#define atomic_fetch_xor_acquire        atomic_fetch_xor_acquire
+#define atomic_fetch_xor_release        atomic_fetch_xor_release
+#define atomic_fetch_xor                atomic_fetch_xor
+
+#define atomic_xchg_relaxed(v, new) \
+    xchg_relaxed(&((v)->counter), (new))
+#define atomic_xchg_acquire(v, new) \
+    xchg_acquire(&((v)->counter), (new))
+#define atomic_xchg_release(v, new) \
+    xchg_release(&((v)->counter), (new))
+#define atomic_xchg(v, new) \
+    xchg(&((v)->counter), (new))
+
+#define atomic_cmpxchg_relaxed(v, old, new) \
+    cmpxchg_relaxed(&((v)->counter), (old), (new))
+#define atomic_cmpxchg_acquire(v, old, new) \
+    cmpxchg_acquire(&((v)->counter), (old), (new))
+#define atomic_cmpxchg_release(v, old, new) \
+    cmpxchg_release(&((v)->counter), (old), (new))
+
+#define atomic_andnot atomic_andnot
+
+#endif /* __ASM_ARM_ARM64_ATOMIC_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/atomic_ll_sc.h b/xen/include/asm-arm/arm64/atomic_ll_sc.h
new file mode 100644
index 0000000000..dbcb0e9fe7
--- /dev/null
+++ b/xen/include/asm-arm/arm64/atomic_ll_sc.h
@@ -0,0 +1,236 @@
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ *  - Rename header include guard to reflect Xen directory structure
+ *  - Redirect includes to Xen equivalents
+ *  - Drop atomic64_t helper definitions
+ *
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+
+#ifndef __ASM_ARM_ARM64_ATOMIC_LL_SC_H
+#define __ASM_ARM_ARM64_ATOMIC_LL_SC_H
+
+#include
+
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+#define __LL_SC_FALLBACK(asm_ops) \
+"   b   3f\n" \
+"   .subsection 1\n" \
+"3:\n" \
+asm_ops "\n" \
+"   b   4f\n" \
+"   .previous\n" \
+"4:\n"
+#else
+#define __LL_SC_FALLBACK(asm_ops) asm_ops
+#endif
+
+#ifndef CONFIG_CC_HAS_K_CONSTRAINT
+#define K
+#endif
+
+/*
+ * AArch64 UP and SMP safe atomic ops. We use load exclusive and
+ * store exclusive to ensure that these are atomic. We may loop
+ * to ensure that the update happens.
+ */
+
+#define ATOMIC_OP(op, asm_op, constraint) \
+static inline void \
+__ll_sc_atomic_##op(int i, atomic_t *v) \
+{ \
+    unsigned long tmp; \
+    int result; \
+ \
+    asm volatile("// atomic_" #op "\n" \
+    __LL_SC_FALLBACK( \
+"   prfm    pstl1strm, %2\n" \
+"1: ldxr    %w0, %2\n" \
+"   " #asm_op " %w0, %w0, %w3\n" \
+"   stxr    %w1, %w0, %2\n" \
+"   cbnz    %w1, 1b\n") \
+    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
+    : __stringify(constraint) "r" (i)); \
+}
+
+#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
+static inline int \
+__ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \
+{ \
+    unsigned long tmp; \
+    int result; \
+ \
+    asm volatile("// atomic_" #op "_return" #name "\n" \
+    __LL_SC_FALLBACK( \
+"   prfm    pstl1strm, %2\n" \
+"1: ld" #acq "xr    %w0, %2\n" \
+"   " #asm_op " %w0, %w0, %w3\n" \
+"   st" #rel "xr    %w1, %w0, %2\n" \
+"   cbnz    %w1, 1b\n" \
+"   " #mb ) \
+    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
+    : __stringify(constraint) "r" (i) \
+    : cl); \
+ \
+    return result; \
+}
+
+#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint) \
+static inline int \
+__ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \
+{ \
+    unsigned long tmp; \
+    int val, result; \
+ \
+    asm volatile("// atomic_fetch_" #op #name "\n" \
+    __LL_SC_FALLBACK( \
+"   prfm    pstl1strm, %3\n" \
+"1: ld" #acq "xr    %w0, %3\n" \
+"   " #asm_op " %w1, %w0, %w4\n" \
+"   st" #rel "xr    %w2, %w1, %3\n" \
+"   cbnz    %w2, 1b\n" \
+"   " #mb ) \
+    : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
+    : __stringify(constraint) "r" (i) \
+    : cl); \
+ \
+    return result; \
+}
+
+#define ATOMIC_OPS(...) \
+    ATOMIC_OP(__VA_ARGS__) \
+    ATOMIC_OP_RETURN(        , dmb ish,  , l, "memory", __VA_ARGS__)\
+    ATOMIC_OP_RETURN(_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+    ATOMIC_OP_RETURN(_acquire,        , a,  , "memory", __VA_ARGS__)\
+    ATOMIC_OP_RETURN(_release,        ,  , l, "memory", __VA_ARGS__)\
+    ATOMIC_FETCH_OP (        , dmb ish,  , l, "memory", __VA_ARGS__)\
+    ATOMIC_FETCH_OP (_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+    ATOMIC_FETCH_OP (_acquire,        , a,  , "memory", __VA_ARGS__)\
+    ATOMIC_FETCH_OP (_release,        ,  , l, "memory", __VA_ARGS__)
+
+ATOMIC_OPS(add, add, I)
+ATOMIC_OPS(sub, sub, J)
+
+#undef ATOMIC_OPS
+#define ATOMIC_OPS(...) \
+    ATOMIC_OP(__VA_ARGS__) \
+    ATOMIC_FETCH_OP (        , dmb ish,  , l, "memory", __VA_ARGS__)\
+    ATOMIC_FETCH_OP (_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+    ATOMIC_FETCH_OP (_acquire,        , a,  , "memory", __VA_ARGS__)\
+    ATOMIC_FETCH_OP (_release,        ,  , l, "memory", __VA_ARGS__)
+
+ATOMIC_OPS(and, and, K)
+ATOMIC_OPS(or, orr, K)
+ATOMIC_OPS(xor, eor, K)
+/*
+ * GAS converts the mysterious and undocumented BIC (immediate) alias to
+ * an AND (immediate) instruction with the immediate inverted. We don't
+ * have a constraint for this, so fall back to register.
+ */
+ATOMIC_OPS(andnot, bic, )
+
+#undef ATOMIC_OPS
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint) \
+static inline u##sz \
+__ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \
+                                 unsigned long old, \
+                                 u##sz new) \
+{ \
+    unsigned long tmp; \
+    u##sz oldval; \
+ \
+    /* \
+     * Sub-word sizes require explicit casting so that the compare \
+     * part of the cmpxchg doesn't end up interpreting non-zero \
+     * upper bits of the register containing "old". \
+     */ \
+    if (sz < 32) \
+        old = (u##sz)old; \
+ \
+    asm volatile( \
+    __LL_SC_FALLBACK( \
+    "   prfm    pstl1strm, %[v]\n" \
+    "1: ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n" \
+    "   eor     %" #w "[tmp], %" #w "[oldval], %" #w "[old]\n" \
+    "   cbnz    %" #w "[tmp], 2f\n" \
+    "   st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n" \
+    "   cbnz    %w[tmp], 1b\n" \
+    "   " #mb "\n" \
+    "2:") \
+    : [tmp] "=&r" (tmp), [oldval] "=&r" (oldval), \
+      [v] "+Q" (*(u##sz *)ptr) \
+    : [old] __stringify(constraint) "r" (old), [new] "r" (new) \
+    : cl); \
+ \
+    return oldval; \
+}
+
+/*
+ * Earlier versions of GCC (no later than 8.1.0) appear to incorrectly
+ * handle the 'K' constraint for the value 4294967295 - thus we use no
+ * constraint for 32 bit operations.
+ */
+__CMPXCHG_CASE(w, b,     ,  8,        ,  ,  ,         , K)
+__CMPXCHG_CASE(w, h,     , 16,        ,  ,  ,         , K)
+__CMPXCHG_CASE(w,  ,     , 32,        ,  ,  ,         , K)
+__CMPXCHG_CASE( ,  ,     , 64,        ,  ,  ,         , L)
+__CMPXCHG_CASE(w, b, acq_,  8,        , a,  , "memory", K)
+__CMPXCHG_CASE(w, h, acq_, 16,        , a,  , "memory", K)
+__CMPXCHG_CASE(w,  , acq_, 32,        , a,  , "memory", K)
+__CMPXCHG_CASE( ,  , acq_, 64,        , a,  , "memory", L)
+__CMPXCHG_CASE(w, b, rel_,  8,        ,  , l, "memory", K)
+__CMPXCHG_CASE(w, h, rel_, 16,        ,  , l, "memory", K)
+__CMPXCHG_CASE(w,  , rel_, 32,        ,  , l, "memory", K)
+__CMPXCHG_CASE( ,  , rel_, 64,        ,  , l, "memory", L)
+__CMPXCHG_CASE(w, b,  mb_,  8, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE(w, h,  mb_, 16, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE(w,  ,  mb_, 32, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE( ,  ,  mb_, 64, dmb ish,  , l, "memory", L)
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name, mb, rel, cl) \
+static inline long \
+__ll_sc__cmpxchg_double##name(unsigned long old1, \
+                              unsigned long old2, \
+                              unsigned long new1, \
+                              unsigned long new2, \
+                              volatile void *ptr) \
+{ \
+    unsigned long tmp, ret; \
+ \
+    asm volatile("// __cmpxchg_double" #name "\n" \
+    __LL_SC_FALLBACK( \
+    "   prfm    pstl1strm, %2\n" \
+    "1: ldxp    %0, %1, %2\n" \
+    "   eor     %0, %0, %3\n" \
+    "   eor     %1, %1, %4\n" \
+    "   orr     %1, %0, %1\n" \
+    "   cbnz    %1, 2f\n" \
+    "   st" #rel "xp    %w0, %5, %6, %2\n" \
+    "   cbnz    %w0, 1b\n" \
+    "   " #mb "\n" \
+    "2:") \
+    : "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr) \
+    : "r" (old1), "r" (old2), "r" (new1), "r" (new2) \
+    : cl); \
+ \
+    return ret; \
+}
+
+__CMPXCHG_DBL(   ,        ,  ,         )
+__CMPXCHG_DBL(_mb, dmb ish, l, "memory")
+
+#undef __CMPXCHG_DBL
+#undef K
+
+#endif /* __ASM_ARM_ARM64_ATOMIC_LL_SC_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/atomic_lse.h b/xen/include/asm-arm/arm64/atomic_lse.h
new file mode 100644
index 0000000000..0d579f3262
--- /dev/null
+++ b/xen/include/asm-arm/arm64/atomic_lse.h
@@ -0,0 +1,251 @@
+
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ *  - Rename header include guard to reflect Xen directory structure
+ *  - Drop atomic64_t helper definitions
+ *  - Switch __always_inline qualifier to always_inline
+ *
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+
+#ifndef __ASM_ARM_ARM64_ATOMIC_LSE_H
+#define __ASM_ARM_ARM64_ATOMIC_LSE_H
+
+#define ATOMIC_OP(op, asm_op) \
+static inline void __lse_atomic_##op(int i, atomic_t *v) \
+{ \
+    asm volatile( \
+    __LSE_PREAMBLE \
+"   " #asm_op " %w[i], %[v]\n" \
+    : [i] "+r" (i), [v] "+Q" (v->counter) \
+    : "r" (v)); \
+}
+
+ATOMIC_OP(andnot, stclr)
+ATOMIC_OP(or, stset)
+ATOMIC_OP(xor, steor)
+ATOMIC_OP(add, stadd)
+
+#undef ATOMIC_OP
+
+#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \
+static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \
+{ \
+    asm volatile( \
+    __LSE_PREAMBLE \
+"   " #asm_op #mb " %w[i], %w[i], %[v]" \
+    : [i] "+r" (i), [v] "+Q" (v->counter) \
+    : "r" (v) \
+    : cl); \
+ \
+    return i; \
+}
+
+#define ATOMIC_FETCH_OPS(op, asm_op) \
+    ATOMIC_FETCH_OP(_relaxed,   , op, asm_op) \
+    ATOMIC_FETCH_OP(_acquire,  a, op, asm_op, "memory") \
+    ATOMIC_FETCH_OP(_release,  l, op, asm_op, "memory") \
+    ATOMIC_FETCH_OP(        , al, op, asm_op, "memory")
+
+ATOMIC_FETCH_OPS(andnot, ldclr)
+ATOMIC_FETCH_OPS(or, ldset)
+ATOMIC_FETCH_OPS(xor, ldeor)
+ATOMIC_FETCH_OPS(add, ldadd)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_FETCH_OPS
+
+#define ATOMIC_OP_ADD_RETURN(name, mb, cl...) \
+static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \
+{ \
+    u32 tmp; \
+ \
+    asm volatile( \
+    __LSE_PREAMBLE \
+    "   ldadd" #mb " %w[i], %w[tmp], %[v]\n" \
+    "   add %w[i], %w[i], %w[tmp]" \
+    : [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
+    : "r" (v) \
+    : cl); \
+ \
+    return i; \
+}
+
+ATOMIC_OP_ADD_RETURN(_relaxed,   )
+ATOMIC_OP_ADD_RETURN(_acquire,  a, "memory")
+ATOMIC_OP_ADD_RETURN(_release,  l, "memory")
+ATOMIC_OP_ADD_RETURN(        , al, "memory")
+
+#undef ATOMIC_OP_ADD_RETURN
+
+static inline void __lse_atomic_and(int i, atomic_t *v)
+{
+    asm volatile(
+    __LSE_PREAMBLE
+    "   mvn %w[i], %w[i]\n"
+    "   stclr   %w[i], %[v]"
+    : [i] "+&r" (i), [v] "+Q" (v->counter)
+    : "r" (v));
+}
+
+#define ATOMIC_FETCH_OP_AND(name, mb, cl...) \
+static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v) \
+{ \
+    asm volatile( \
+    __LSE_PREAMBLE \
+    "   mvn %w[i], %w[i]\n" \
+    "   ldclr" #mb " %w[i], %w[i], %[v]" \
+    : [i] "+&r" (i), [v] "+Q" (v->counter) \
+    : "r" (v) \
+    : cl); \
+ \
+    return i; \
+}
+
+ATOMIC_FETCH_OP_AND(_relaxed,   )
+ATOMIC_FETCH_OP_AND(_acquire,  a, "memory")
+ATOMIC_FETCH_OP_AND(_release,  l, "memory")
+ATOMIC_FETCH_OP_AND(        , al, "memory")
+
+#undef ATOMIC_FETCH_OP_AND
+
+static inline void __lse_atomic_sub(int i, atomic_t *v)
+{
+    asm volatile(
+    __LSE_PREAMBLE
+    "   neg %w[i], %w[i]\n"
+    "   stadd   %w[i], %[v]"
+    : [i] "+&r" (i), [v] "+Q" (v->counter)
+    : "r" (v));
+}
+
+#define ATOMIC_OP_SUB_RETURN(name, mb, cl...) \
+static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \
+{ \
+    u32 tmp; \
+ \
+    asm volatile( \
+    __LSE_PREAMBLE \
+    "   neg %w[i], %w[i]\n" \
+    "   ldadd" #mb " %w[i], %w[tmp], %[v]\n" \
+    "   add %w[i], %w[i], %w[tmp]" \
+    : [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
+    : "r" (v) \
+    : cl); \
+ \
+    return i; \
+}
+
+ATOMIC_OP_SUB_RETURN(_relaxed,   )
+ATOMIC_OP_SUB_RETURN(_acquire,  a, "memory")
+ATOMIC_OP_SUB_RETURN(_release,  l, "memory")
+ATOMIC_OP_SUB_RETURN(        , al, "memory")
+
+#undef ATOMIC_OP_SUB_RETURN
+
+#define ATOMIC_FETCH_OP_SUB(name, mb, cl...) \
+static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \
+{ \
+    asm volatile( \
+    __LSE_PREAMBLE \
+    "   neg %w[i], %w[i]\n" \
+    "   ldadd" #mb " %w[i], %w[i], %[v]" \
+    : [i] "+&r" (i), [v] "+Q" (v->counter) \
+    : "r" (v) \
+    : cl); \
+ \
+    return i; \
+}
+
+ATOMIC_FETCH_OP_SUB(_relaxed,   )
+ATOMIC_FETCH_OP_SUB(_acquire,  a, "memory")
+ATOMIC_FETCH_OP_SUB(_release,  l, "memory")
+ATOMIC_FETCH_OP_SUB(        , al, "memory")
+
+#undef ATOMIC_FETCH_OP_SUB
+
+#define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...) \
+static always_inline u##sz \
+__lse__cmpxchg_case_##name##sz(volatile void *ptr, \
+                               u##sz old, \
+                               u##sz new) \
+{ \
+    register unsigned long x0 asm ("x0") = (unsigned long)ptr; \
+    register u##sz x1 asm ("x1") = old; \
+    register u##sz x2 asm ("x2") = new; \
+    unsigned long tmp; \
+ \
+    asm volatile( \
+    __LSE_PREAMBLE \
+    "   mov %" #w "[tmp], %" #w "[old]\n" \
+    "   cas" #mb #sfx "\t%" #w "[tmp], %" #w "[new], %[v]\n" \
+    "   mov %" #w "[ret], %" #w "[tmp]" \
+    : [ret] "+r" (x0), [v] "+Q" (*(unsigned long *)ptr), \
+      [tmp] "=&r" (tmp) \
+    : [old] "r" (x1), [new] "r" (x2) \
+    : cl); \
+ \
+    return x0; \
+}
+
+__CMPXCHG_CASE(w, b,     ,  8,   )
+__CMPXCHG_CASE(w, h,     , 16,   )
+__CMPXCHG_CASE(w,  ,     , 32,   )
+__CMPXCHG_CASE(x,  ,     , 64,   )
+__CMPXCHG_CASE(w, b, acq_,  8,  a, "memory")
+__CMPXCHG_CASE(w, h, acq_, 16,  a, "memory")
+__CMPXCHG_CASE(w,  , acq_, 32,  a, "memory")
+__CMPXCHG_CASE(x,  , acq_, 64,  a, "memory")
+__CMPXCHG_CASE(w, b, rel_,  8,  l, "memory")
+__CMPXCHG_CASE(w, h, rel_, 16,  l, "memory")
+__CMPXCHG_CASE(w,  , rel_, 32,  l, "memory")
+__CMPXCHG_CASE(x,  , rel_, 64,  l, "memory")
+__CMPXCHG_CASE(w, b,  mb_,  8, al, "memory")
+__CMPXCHG_CASE(w, h,  mb_, 16, al, "memory")
+__CMPXCHG_CASE(w,  ,  mb_, 32, al, "memory")
+__CMPXCHG_CASE(x,  ,  mb_, 64, al, "memory")
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name, mb, cl...) \
+static always_inline long \
+__lse__cmpxchg_double##name(unsigned long old1, \
+                            unsigned long old2, \
+                            unsigned long new1, \
+                            unsigned long new2, \
+                            volatile void *ptr) \
+{ \
+    unsigned long oldval1 = old1; \
+    unsigned long oldval2 = old2; \
+    register unsigned long x0 asm ("x0") = old1; \
+    register unsigned long x1 asm ("x1") = old2; \
+    register unsigned long x2 asm ("x2") = new1; \
+    register unsigned long x3 asm ("x3") = new2; \
+    register unsigned long x4 asm ("x4") = (unsigned long)ptr; \
+ \
+    asm volatile( \
+    __LSE_PREAMBLE \
+    "   casp" #mb "\t%[old1], %[old2], %[new1], %[new2], %[v]\n"\
+    "   eor %[old1], %[old1], %[oldval1]\n" \
+    "   eor %[old2], %[old2], %[oldval2]\n" \
+    "   orr %[old1], %[old1], %[old2]" \
+    : [old1] "+&r" (x0), [old2] "+&r" (x1), \
+      [v] "+Q" (*(unsigned long *)ptr) \
+    : [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4), \
+      [oldval1] "r" (oldval1), [oldval2] "r" (oldval2) \
+    : cl); \
+ \
+    return x0; \
+}
+
+__CMPXCHG_DBL(   ,   )
+__CMPXCHG_DBL(_mb, al, "memory")
+
+#undef __CMPXCHG_DBL
+
+#endif /* __ASM_ARM_ARM64_ATOMIC_LSE_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
index 10e4edc022..4ee8291d3e 100644
--- a/xen/include/asm-arm/arm64/cmpxchg.h
+++ b/xen/include/asm-arm/arm64/cmpxchg.h
@@ -1,136 +1,363 @@
-#ifndef __ASM_ARM64_CMPXCHG_H
-#define __ASM_ARM64_CMPXCHG_H
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ *  - Rename header include guard to reflect Xen directory structure
+ *  - Drop redundant includes and redirect others to Xen equivalents
+ *  - Rename definitions from arch_xchg_() to xchg_()
+ *  - Switch __always_inline qualifier to always_inline
+ *  - Switch usage of BUILD_BUG() to returning __bad_cmpxchg()
+ *  - Pull in original Xen arm64 cmpxchg.h definitions of
+ *    cmpxchg_timeout*() and cmpxchg64_timeout*() as these are not
+ *    provided by Linux and are required for Xen's guest atomics
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only + */ +#ifndef __ASM_ARM_ARM64_CMPXCHG_H +#define __ASM_ARM_ARM64_CMPXCHG_H =20 -extern void __bad_xchg(volatile void *, int); - -static inline unsigned long __xchg(unsigned long x, volatile void *ptr, in= t size) -{ - unsigned long ret, tmp; - - switch (size) { - case 1: - asm volatile("// __xchg1\n" - "1: ldxrb %w0, %2\n" - " stlxrb %w1, %w3, %2\n" - " cbnz %w1, 1b\n" - : "=3D&r" (ret), "=3D&r" (tmp), "+Q" (*(u8 *)ptr) - : "r" (x) - : "memory"); - break; - case 2: - asm volatile("// __xchg2\n" - "1: ldxrh %w0, %2\n" - " stlxrh %w1, %w3, %2\n" - " cbnz %w1, 1b\n" - : "=3D&r" (ret), "=3D&r" (tmp), "+Q" (*(u16 *)ptr) - : "r" (x) - : "memory"); - break; - case 4: - asm volatile("// __xchg4\n" - "1: ldxr %w0, %2\n" - " stlxr %w1, %w3, %2\n" - " cbnz %w1, 1b\n" - : "=3D&r" (ret), "=3D&r" (tmp), "+Q" (*(u32 *)ptr) - : "r" (x) - : "memory"); - break; - case 8: - asm volatile("// __xchg8\n" - "1: ldxr %0, %2\n" - " stlxr %w1, %3, %2\n" - " cbnz %w1, 1b\n" - : "=3D&r" (ret), "=3D&r" (tmp), "+Q" (*(u64 *)ptr) - : "r" (x) - : "memory"); - break; - default: - __bad_xchg(ptr, size), ret =3D 0; - break; - } - - smp_mb(); - return ret; -} - -#define xchg(ptr,x) \ -({ \ - __typeof__(*(ptr)) __ret; \ - __ret =3D (__typeof__(*(ptr))) \ - __xchg((unsigned long)(x), (ptr), sizeof(*(ptr))); \ - __ret; \ -}) +#include +#include "lse.h" =20 extern unsigned long __bad_cmpxchg(volatile void *ptr, int size); =20 -#define __CMPXCHG_CASE(w, sz, name) \ -static inline bool __cmpxchg_case_##name(volatile void *ptr, \ - unsigned long *old, \ - unsigned long new, \ - bool timeout, \ - unsigned int max_try) \ +/* + * We need separate acquire parameters for ll/sc and lse, since the full + * barrier case is generated as release+dmb for the former and + * acquire+release for the latter. 
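[For illustration only, not part of the patch: the memory orderings exposed by the four wrappers above. The variable word and the function ordering_demo() are hypothetical.]

    static unsigned long word;

    static void ordering_demo(void)
    {
        unsigned long old;

        old = xchg_relaxed(&word, 1); /* no ordering guarantee */
        old = xchg_acquire(&word, 2); /* later accesses stay after the swap */
        old = xchg_release(&word, 3); /* earlier accesses stay before it */
        old = xchg(&word, 4);         /* fully ordered */
        (void)old;
    }
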
+
+#define __CMPXCHG_CASE(name, sz) \
+static inline u##sz __cmpxchg_case_##name##sz(volatile void *ptr, \
+					      u##sz old, \
+					      u##sz new) \
+{ \
+	return __lse_ll_sc_body(_cmpxchg_case_##name##sz, \
+				ptr, old, new); \
+}
+
+__CMPXCHG_CASE(    ,  8)
+__CMPXCHG_CASE(    , 16)
+__CMPXCHG_CASE(    , 32)
+__CMPXCHG_CASE(    , 64)
+__CMPXCHG_CASE(acq_,  8)
+__CMPXCHG_CASE(acq_, 16)
+__CMPXCHG_CASE(acq_, 32)
+__CMPXCHG_CASE(acq_, 64)
+__CMPXCHG_CASE(rel_,  8)
+__CMPXCHG_CASE(rel_, 16)
+__CMPXCHG_CASE(rel_, 32)
+__CMPXCHG_CASE(rel_, 64)
+__CMPXCHG_CASE(mb_,  8)
+__CMPXCHG_CASE(mb_, 16)
+__CMPXCHG_CASE(mb_, 32)
+__CMPXCHG_CASE(mb_, 64)
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name) \
+static inline long __cmpxchg_double##name(unsigned long old1, \
+					  unsigned long old2, \
+					  unsigned long new1, \
+					  unsigned long new2, \
+					  volatile void *ptr) \
+{ \
+	return __lse_ll_sc_body(_cmpxchg_double##name, \
+				old1, old2, new1, new2, ptr); \
+}
+
+__CMPXCHG_DBL(   )
+__CMPXCHG_DBL(_mb)
+
+#undef __CMPXCHG_DBL
+
+#define __CMPXCHG_GEN(sfx) \
+static always_inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
+						  unsigned long old, \
+						  unsigned long new, \
+						  int size) \
+{ \
+	switch (size) { \
+	case 1: \
+		return __cmpxchg_case##sfx##_8(ptr, old, new); \
+	case 2: \
+		return __cmpxchg_case##sfx##_16(ptr, old, new); \
+	case 4: \
+		return __cmpxchg_case##sfx##_32(ptr, old, new); \
+	case 8: \
+		return __cmpxchg_case##sfx##_64(ptr, old, new); \
+	default: \
+		return __bad_cmpxchg(ptr, size); \
+	} \
 \
-		if (!res) \
-			break; \
-	} while (!timeout || ((--max_try) > 0)); \
+	unreachable(); \
+}
+
+__CMPXCHG_GEN()
+__CMPXCHG_GEN(_acq)
+__CMPXCHG_GEN(_rel)
+__CMPXCHG_GEN(_mb)
+
+#undef __CMPXCHG_GEN
+
+#define __cmpxchg_wrapper(sfx, ptr, o, n) \
+({ \
+	__typeof__(*(ptr)) __ret; \
+	__ret = (__typeof__(*(ptr))) \
+		__cmpxchg##sfx((ptr), (unsigned long)(o), \
+			       (unsigned long)(n), sizeof(*(ptr))); \
+	__ret; \
+})
+
+/* cmpxchg */
+#define cmpxchg_relaxed(...)	__cmpxchg_wrapper(    , __VA_ARGS__)
+#define cmpxchg_acquire(...)	__cmpxchg_wrapper(_acq, __VA_ARGS__)
+#define cmpxchg_release(...)	__cmpxchg_wrapper(_rel, __VA_ARGS__)
+#define cmpxchg(...)		__cmpxchg_wrapper( _mb, __VA_ARGS__)
+#define cmpxchg_local		cmpxchg_relaxed
+
+/* cmpxchg64 */
+#define cmpxchg64_relaxed	cmpxchg_relaxed
+#define cmpxchg64_acquire	cmpxchg_acquire
+#define cmpxchg64_release	cmpxchg_release
+#define cmpxchg64		cmpxchg
+#define cmpxchg64_local		cmpxchg_local
+
+/* cmpxchg_double */
+#define system_has_cmpxchg_double()	1
+
+#define __cmpxchg_double_check(ptr1, ptr2) \
+({ \
+	if (sizeof(*(ptr1)) != 8) \
+		return __bad_cmpxchg(ptr1, sizeof(*(ptr1))); \
+	VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1); \
+})
+
+#define cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \
+({ \
+	int __ret; \
+	__cmpxchg_double_check(ptr1, ptr2); \
+	__ret = !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2), \
+				     (unsigned long)(n1), (unsigned long)(n2), \
+				     ptr1); \
+	__ret; \
+})
+
+#define cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \
+({ \
+	int __ret; \
+	__cmpxchg_double_check(ptr1, ptr2); \
+	__ret = !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2), \
+				  (unsigned long)(n1), (unsigned long)(n2), \
+				  ptr1); \
+	__ret; \
+})
+
+#define __CMPWAIT_CASE(w, sfx, sz) \
+static inline void __cmpwait_case_##sz(volatile void *ptr, \
+				       unsigned long val) \
+{ \
+	unsigned long tmp; \
 \
-	*old = oldval; \
+	asm volatile( \
+	"	sevl\n" \
+	"	wfe\n" \
+	"	ldxr" #sfx "\t%" #w "[tmp], %[v]\n" \
+	"	eor	%" #w "[tmp], %" #w "[tmp], %" #w "[val]\n" \
+	"	cbnz	%" #w "[tmp], 1f\n" \
+	"	wfe\n" \
+	"1:" \
+	: [tmp] "=&r" (tmp), [v] "+Q" (*(unsigned long *)ptr) \
+	: [val] "r" (val)); \
+}
+
+__CMPWAIT_CASE(w, b, 8);
+__CMPWAIT_CASE(w, h, 16);
+__CMPWAIT_CASE(w,  , 32);
+__CMPWAIT_CASE( ,  , 64);
+
+#undef __CMPWAIT_CASE
+
+#define __CMPWAIT_GEN(sfx) \
+static always_inline void __cmpwait##sfx(volatile void *ptr, \
+					 unsigned long val, \
+					 int size) \
+{ \
+	switch (size) { \
+	case 1: \
+		return __cmpwait_case##sfx##_8(ptr, (u8)val); \
+	case 2: \
+		return __cmpwait_case##sfx##_16(ptr, (u16)val); \
+	case 4: \
+		return __cmpwait_case##sfx##_32(ptr, val); \
+	case 8: \
+		return __cmpwait_case##sfx##_64(ptr, val); \
+	default: \
+		__bad_cmpxchg(ptr, size); \
+	} \
 \
-	return !res; \
+	unreachable(); \
+}
+
+__CMPWAIT_GEN()
+
+#undef __CMPWAIT_GEN
+
+#define __cmpwait_relaxed(ptr, val) \
+	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
+
+/*
+ * This code is from the original Xen arm64 cmpxchg.h, from before the
+ * Linux 5.10-rc2 atomics helpers were ported over. The only changes
+ * here are renaming the macros and functions to explicitly use
+ * "timeout" in their names so that they don't clash with the above.
+ *
+ * We need this here for guest atomics (the only user of the timeout
+ * variants).
+ */
+
+#define __CMPXCHG_TIMEOUT_CASE(w, sz, name) \
+static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr, \
+                                                 unsigned long *old, \
+                                                 unsigned long new, \
+                                                 bool timeout, \
+                                                 unsigned int max_try) \
+{ \
+    unsigned long oldval; \
+    unsigned long res; \
+ \
+    do { \
+        asm volatile("// __cmpxchg_timeout_case_" #name "\n" \
+        "	ldxr" #sz "	%" #w "1, %2\n" \
+        "	mov	%w0, #0\n" \
+        "	cmp	%" #w "1, %" #w "3\n" \
+        "	b.ne	1f\n" \
+        "	stxr" #sz "	%w0, %" #w "4, %2\n" \
+        "1:\n" \
+        : "=&r" (res), "=&r" (oldval), \
+          "+Q" (*(unsigned long *)ptr) \
+        : "Ir" (*old), "r" (new) \
+        : "cc"); \
+ \
+        if (!res) \
+            break; \
+    } while (!timeout || ((--max_try) > 0)); \
+ \
+    *old = oldval; \
+ \
+    return !res; \
 }
 
-__CMPXCHG_CASE(w, b, 1)
-__CMPXCHG_CASE(w, h, 2)
-__CMPXCHG_CASE(w,  , 4)
-__CMPXCHG_CASE( ,  , 8)
+__CMPXCHG_TIMEOUT_CASE(w, b, 1)
+__CMPXCHG_TIMEOUT_CASE(w, h, 2)
+__CMPXCHG_TIMEOUT_CASE(w,  , 4)
+__CMPXCHG_TIMEOUT_CASE( ,  , 8)
 
 static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
-                                        unsigned long new, int size,
-                                        bool timeout, unsigned int max_try)
+                                        unsigned long new, int size,
+                                        bool timeout, unsigned int max_try)
 {
-	switch (size) {
-	case 1:
-		return __cmpxchg_case_1(ptr, old, new, timeout, max_try);
-	case 2:
-		return __cmpxchg_case_2(ptr, old, new, timeout, max_try);
-	case 4:
-		return __cmpxchg_case_4(ptr, old, new, timeout, max_try);
-	case 8:
-		return __cmpxchg_case_8(ptr, old, new, timeout, max_try);
-	default:
-		return __bad_cmpxchg(ptr, size);
-	}
+    switch (size) {
+    case 1:
+        return __cmpxchg_timeout_case_1(ptr, old, new, timeout, max_try);
+    case 2:
+        return __cmpxchg_timeout_case_2(ptr, old, new, timeout, max_try);
+    case 4:
+        return __cmpxchg_timeout_case_4(ptr, old, new, timeout, max_try);
+    case 8:
+        return __cmpxchg_timeout_case_8(ptr, old, new, timeout, max_try);
+    default:
+        return __bad_cmpxchg(ptr, size);
+    }
 
-	ASSERT_UNREACHABLE();
-}
-
-static always_inline unsigned long __cmpxchg(volatile void *ptr,
-                                             unsigned long old,
-                                             unsigned long new,
-                                             int size)
-{
-	smp_mb();
-	if (!__int_cmpxchg(ptr, &old, new, size, false, 0))
-		ASSERT_UNREACHABLE();
-	smp_mb();
-
-	return old;
+    ASSERT_UNREACHABLE();
 }
 
 /*
@@ -144,40 +371,22 @@ static always_inline unsigned long __cmpxchg(volatile void *ptr,
  * timeout) and false if the update has failed.
  */
 static always_inline bool __cmpxchg_timeout(volatile void *ptr,
-                                            unsigned long *old,
-                                            unsigned long new,
-                                            int size,
-                                            unsigned int max_try)
+                                            unsigned long *old,
+                                            unsigned long new,
+                                            int size,
+                                            unsigned int max_try)
 {
-	bool ret;
+    bool ret;
 
-	smp_mb();
-	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
-	smp_mb();
+    smp_mb();
+    ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+    smp_mb();
 
-	return ret;
+    return ret;
 }
 
-#define cmpxchg(ptr, o, n) \
-({ \
-	__typeof__(*(ptr)) __ret; \
-	__ret = (__typeof__(*(ptr))) \
-		__cmpxchg((ptr), (unsigned long)(o), (unsigned long)(n), \
-			  sizeof(*(ptr))); \
-	__ret; \
-})
+#define __cmpxchg64_timeout(ptr, old, new, max_try) \
+    __cmpxchg_timeout(ptr, old, new, 8, max_try)
 
-#define cmpxchg64(ptr, o, n) cmpxchg(ptr, o, n)
 
-#define __cmpxchg64_timeout(ptr, old, new, max_try) \
-	__cmpxchg_timeout(ptr, old, new, 8, max_try)
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
+#endif /* __ASM_ARM_ARM64_CMPXCHG_H */
diff --git a/xen/include/asm-arm/arm64/lse.h b/xen/include/asm-arm/arm64/lse.h
new file mode 100644
index 0000000000..e26245a74b
--- /dev/null
+++ b/xen/include/asm-arm/arm64/lse.h
@@ -0,0 +1,53 @@
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ *  - Rename header include guard to reflect Xen directory structure
+ *  - Drop redundant includes and redirect others to Xen equivalents
+ *  - Modify hwcap check to use cpus_have_cap()
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+#ifndef __ASM_ARM_ARM64_LSE_H
+#define __ASM_ARM_ARM64_LSE_H
+
+#include "atomic_ll_sc.h"
+
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+
+#define __LSE_PREAMBLE ".arch_extension lse\n"
+
+#include
+#include
+#include
+
+#include
+
+#include "atomic_lse.h"
+
+static inline bool system_uses_lse_atomics(void)
+{
+    return cpus_have_cap(ARM64_HAS_LSE_ATOMICS);
+}
+
+#define __lse_ll_sc_body(op, ...) \
+({ \
+    system_uses_lse_atomics() ? \
+        __lse_##op(__VA_ARGS__) : \
+        __ll_sc_##op(__VA_ARGS__); \
+})
+
+/* In-line patching at runtime */
+#define ARM64_LSE_ATOMIC_INSN(llsc, lse) \
+    ALTERNATIVE(llsc, __LSE_PREAMBLE lse, ARM64_HAS_LSE_ATOMICS)
+
+#else /* CONFIG_ARM64_LSE_ATOMICS */
+
+static inline bool system_uses_lse_atomics(void) { return false; }
+
+#define __lse_ll_sc_body(op, ...)		__ll_sc_##op(__VA_ARGS__)
+
+#define ARM64_LSE_ATOMIC_INSN(llsc, lse)	llsc
+
+#endif /* CONFIG_ARM64_LSE_ATOMICS */
+#endif /* __ASM_ARM_ARM64_LSE_H */
\ No newline at end of file
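[A sketch to make the dispatch concrete, mirroring how the __CMPXCHG_CASE() wrappers in cmpxchg.h consume __lse_ll_sc_body(); cmpxchg32_sketch() is a hypothetical name, not part of the patch.]

    static inline u32 cmpxchg32_sketch(volatile void *ptr, u32 old, u32 new)
    {
        /*
         * With CONFIG_ARM64_LSE_ATOMICS this tests
         * cpus_have_cap(ARM64_HAS_LSE_ATOMICS) at runtime and calls
         * __lse__cmpxchg_case_mb_32() or __ll_sc__cmpxchg_case_mb_32();
         * without it, it compiles straight to the LL/SC variant.
         */
        return __lse_ll_sc_body(_cmpxchg_case_mb_32, ptr, old, new);
    }
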
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index 2e36573ac6..dfbbe4b87d 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM64_SYSTEM_H
 #define __ASM_ARM64_SYSTEM_H
 
-#include
+#include
 
 /* Uses uimm4 as a bitmask to select the clearing of one or more of
  * the DAIF exception mask bits:
diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index ac2798d095..866f54d03c 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -2,8 +2,6 @@
 #define __ARCH_ARM_ATOMIC__
 
 #include
-#include
-#include
 
 #define build_atomic_read(name, size, width, type) \
 static inline type name(const volatile type *addr) \
@@ -220,10 +218,19 @@ static inline int atomic_add_negative(int i, atomic_t *v)
 
 static inline int atomic_add_unless(atomic_t *v, int a, int u)
 {
-    return __atomic_add_unless(v, a, u);
+    int c, old;
+
+    c = atomic_read(v);
+    while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
+        c = old;
+
+    return c;
 }
 
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+    return cmpxchg(&((v)->counter), (old), (new));
+}
 
 #endif /* __ARCH_ARM_ATOMIC__ */
 /*
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index c0e0ee9f27..aa0546bfe8 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -138,4 +138,7 @@
 # define CLANG_DISABLE_WARN_GCC_COMPAT_END
 #endif
 
+#define __READ_ONCE(x)      (*(volatile typeof(x) *)&(x))
+#define __WRITE_ONCE(x, v)  (*(volatile typeof(x) *)&(x) = (v))
+
 #endif /* __LINUX_COMPILER_H */
-- 
2.24.3 (Apple Git-128)
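[A short sketch of why the patch adds __READ_ONCE()/__WRITE_ONCE() to compiler.h; the flag variable and both functions are hypothetical, for illustration only.]

    static int flag;

    static void wait_for_flag(void)
    {
        /*
         * The volatile cast in __READ_ONCE() forces a fresh load on each
         * iteration; a plain read could legally be hoisted out of the
         * loop by the compiler, spinning on a stale value forever.
         */
        while ( !__READ_ONCE(flag) )
            cpu_relax();
    }

    static void set_flag(void)
    {
        __WRITE_ONCE(flag, 1);
    }
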
From nobody Mon May  6 19:29:07 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com, Ash Wilding <ash.j.wilding@gmail.com>
Subject: [RFC PATCH 5/6] xen/arm32: Port Linux LL/SC atomics helpers to Xen
Date: Thu,  5 Nov 2020 18:56:02 +0000
Message-Id: <20201105185603.24149-6-ash.j.wilding@gmail.com>
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>

This patch ports Linux's arm32 LL/SC atomics helpers to Xen.

The opening comment of each header file details the changes made to
that file while porting it to Xen.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/atomic.h  | 261 ++++++++++--------
 xen/include/asm-arm/arm32/cmpxchg.h | 403 ++++++++++++++++++----------
 xen/include/asm-arm/arm32/system.h  |   2 +-
 3 files changed, 413 insertions(+), 253 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index 2832a72792..544a4ba492 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -1,124 +1,118 @@
 /*
- * arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- * Copyright (C) 1996 Russell King.
- * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Summary of changes:
+ *  - Drop redundant includes and redirect others to Xen equivalents
+ *  - Rename header include guard to reflect Xen directory structure
+ *  - Drop atomic64_t helper declarations
+ *  - Drop pre-Armv6 support
+ *  - Redirect READ_ONCE/WRITE_ONCE to __* equivalents in compiler.h
+ *  - Add explicit atomic_add_return() and atomic_sub_return() as
+ *    Linux doesn't define these for arm32. Here we just sandwich
+ *    the atomic_<op>_return_relaxed() calls with smp_mb()s.
  *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ARCH_ARM_ARM32_ATOMIC__
-#define __ARCH_ARM_ARM32_ATOMIC__
+#ifndef __ASM_ARM_ARM32_ATOMIC_H
+#define __ASM_ARM_ARM32_ATOMIC_H
+
+#include
+#include
+#include
+#include "system.h"
+#include "cmpxchg.h"
+
+/*
+ * On ARM, ordinary assignment (str instruction) doesn't clear the local
+ * strex/ldrex monitor on some implementations. The reason we can use it for
+ * atomic_set() is the clrex or dummy strex done on every exception return.
+ */
+#define atomic_read(v)	__READ_ONCE((v)->counter)
+#define atomic_set(v,i)	__WRITE_ONCE(((v)->counter), (i))
 
 /*
  * ARMv6 UP and SMP safe atomic ops. We use load exclusive and
  * store exclusive to ensure that these are atomic. We may loop
  * to ensure that the update happens.
  */
-static inline void atomic_add(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
 
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_add\n"
-"1:	ldrex	%0, [%3]\n"
-"	add	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
+#define ATOMIC_OP(op, c_op, asm_op) \
+static inline void atomic_##op(int i, atomic_t *v) \
+{ \
+	unsigned long tmp; \
+	int result; \
+ \
+	prefetchw(&v->counter); \
+	__asm__ __volatile__("@ atomic_" #op "\n" \
+"1:	ldrex	%0, [%3]\n" \
+"	" #asm_op "	%0, %0, %4\n" \
+"	strex	%1, %0, [%3]\n" \
+"	teq	%1, #0\n" \
+"	bne	1b" \
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter) \
+	: "r" (&v->counter), "Ir" (i) \
+	: "cc"); \
+} \
+
+#define ATOMIC_OP_RETURN(op, c_op, asm_op) \
+static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \
+{ \
+	unsigned long tmp; \
+	int result; \
+ \
+	prefetchw(&v->counter); \
+ \
+	__asm__ __volatile__("@ atomic_" #op "_return\n" \
+"1:	ldrex	%0, [%3]\n" \
+"	" #asm_op "	%0, %0, %4\n" \
+"	strex	%1, %0, [%3]\n" \
+"	teq	%1, #0\n" \
+"	bne	1b" \
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter) \
+	: "r" (&v->counter), "Ir" (i) \
+	: "cc"); \
+ \
+	return result; \
 }
 
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__("@ atomic_add_return\n"
-"1:	ldrex	%0, [%3]\n"
-"	add	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
-
-	smp_mb();
-
-	return result;
-}
-
-static inline void atomic_sub(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_sub\n"
-"1:	ldrex	%0, [%3]\n"
-"	sub	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
+#define ATOMIC_FETCH_OP(op, c_op, asm_op) \
+static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
+{ \
+	unsigned long tmp; \
+	int result, val; \
+ \
+	prefetchw(&v->counter); \
+ \
+	__asm__ __volatile__("@ atomic_fetch_" #op "\n" \
+"1:	ldrex	%0, [%4]\n" \
+"	" #asm_op "	%1, %0, %5\n" \
+"	strex	%2, %1, [%4]\n" \
+"	teq	%2, #0\n" \
+"	bne	1b" \
+	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Qo" (v->counter) \
+	: "r" (&v->counter), "Ir" (i) \
+	: "cc"); \
+ \
+	return result; \
 }
 
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
+#define atomic_add_return_relaxed	atomic_add_return_relaxed
+#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
+#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
+#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
 
-	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__("@ atomic_sub_return\n"
-"1:	ldrex	%0, [%3]\n"
-"	sub	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
-
-	smp_mb();
-
-	return result;
-}
-
-static inline void atomic_and(int m, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_and\n"
-"1:	ldrex	%0, [%3]\n"
-"	and	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (m)
-	: "cc");
-}
+#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
+#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
+#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
+#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
 
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
+static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
 {
 	int oldval;
 	unsigned long res;
 
-	smp_mb();
 	prefetchw(&ptr->counter);
 
 	do {
@@ -132,12 +126,11 @@ static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
 		: "cc");
 	} while (res);
 
-	smp_mb();
-
 	return oldval;
 }
+#define atomic_cmpxchg_relaxed		atomic_cmpxchg_relaxed
 
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int oldval, newval;
 	unsigned long tmp;
@@ -163,13 +156,61 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 
 	return oldval;
 }
+#define atomic_fetch_add_unless		atomic_fetch_add_unless
+
+#define ATOMIC_OPS(op, c_op, asm_op) \
+	ATOMIC_OP(op, c_op, asm_op) \
+	ATOMIC_OP_RETURN(op, c_op, asm_op) \
+	ATOMIC_FETCH_OP(op, c_op, asm_op)
+
+ATOMIC_OPS(add, +=, add)
+ATOMIC_OPS(sub, -=, sub)
+
+#define atomic_andnot atomic_andnot
+
+#undef ATOMIC_OPS
+#define ATOMIC_OPS(op, c_op, asm_op) \
+	ATOMIC_OP(op, c_op, asm_op) \
+	ATOMIC_FETCH_OP(op, c_op, asm_op)
+
+ATOMIC_OPS(and, &=, and)
+ATOMIC_OPS(andnot, &= ~, bic)
+ATOMIC_OPS(or, |=, orr)
+ATOMIC_OPS(xor, ^=, eor)
+
+#undef ATOMIC_OPS
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 
-#endif /* __ARCH_ARM_ARM32_ATOMIC__ */
 /*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
+ * Linux doesn't define strict atomic_add_return() or atomic_sub_return()
+ * for /arch/arm -- Let's manually define these for Xen.
  */
+
+static inline int atomic_add_return(int i, atomic_t *v)
+{
+	int ret;
+
+	smp_mb();
+	ret = atomic_add_return_relaxed(i, v);
+	smp_mb();
+
+	return ret;
+}
+
+static inline int atomic_sub_return(int i, atomic_t *v)
+{
+	int ret;
+
+	smp_mb();
+	ret = atomic_sub_return_relaxed(i, v);
+	smp_mb();
+
+	return ret;
+}
+
+
+#endif /* __ASM_ARM_ARM32_ATOMIC_H */
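[Sketch of a typical consumer of the ported arm32 helpers, illustrative only; the users counter and get_user_ref() are hypothetical names, not part of the patch.]

    static atomic_t users = ATOMIC_INIT(0);

    static bool get_user_ref(void)
    {
        /*
         * atomic_fetch_add_unless() returns the old value and only
         * performs the add when the counter is not already -1, so a
         * "dying" object marked with -1 can never be re-referenced.
         */
        return atomic_fetch_add_unless(&users, 1, -1) != -1;
    }
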
"Ir" (old), "r" (new) + : "memory", "cc"); + } while (res); + break; + case 4: + do { + asm volatile("@ __cmpxchg4\n" + " ldrex %1, [%2]\n" + " mov %0, #0\n" + " teq %1, %3\n" + " strexeq %0, %4, [%2]\n" + : "=3D&r" (res), "=3D&r" (oldval) + : "r" (ptr), "Ir" (old), "r" (new) + : "memory", "cc"); + } while (res); + break; + + default: + __bad_cmpxchg(ptr, size); + oldval =3D 0; + } + + return oldval; +} + +#define cmpxchg_relaxed(ptr,o,n) ({ \ + (__typeof__(*(ptr)))__cmpxchg((ptr), \ + (unsigned long)(o), \ + (unsigned long)(n), \ + sizeof(*(ptr))); \ +}) + +static inline unsigned long long __cmpxchg64(volatile unsigned long long *= ptr, + unsigned long long old, + unsigned long long new) +{ + unsigned long long oldval; + unsigned long res; + + prefetchw((const void *)ptr); + + __asm__ __volatile__( +"1: ldrexd %1, %H1, [%3]\n" +" teq %1, %4\n" +" teqeq %H1, %H4\n" +" bne 2f\n" +" strexd %0, %5, %H5, [%3]\n" +" teq %0, #0\n" +" bne 1b\n" +"2:" + : "=3D&r" (res), "=3D&r" (oldval), "+Qo" (*ptr) + : "r" (ptr), "r" (old), "r" (new) + : "cc"); + + return oldval; +} + +#define cmpxchg64_relaxed(ptr, o, n) ({ \ + (__typeof__(*(ptr)))__cmpxchg64((ptr), \ + (unsigned long long)(o), \ + (unsigned long long)(n)); \ +}) + + +/* + * Linux doesn't provide strict versions of xchg(), cmpxchg(), and cmpxchg= 64(), + * so manually define these for Xen as smp_mb() wrappers around the relaxed + * variants. + */ + +#define xchg(ptr, x) ({ \ + long ret; \ + smp_mb(); \ + ret =3D xchg_relaxed(ptr, x); \ + smp_mb(); \ + ret; \ +}) + +#define cmpxchg(ptr, o, n) ({ \ + long ret; \ + smp_mb(); \ + ret =3D cmpxchg_relaxed(ptr, o, n); \ + smp_mb(); \ + ret; \ +}) + +#define cmpxchg64(ptr, o, n) ({ \ + long long ret; \ + smp_mb(); \ + ret =3D cmpxchg64_relaxed(ptr, o, n); \ + smp_mb(); \ + ret; \ +}) =20 /* - * Atomic compare and exchange. Compare OLD with MEM, if identical, - * store NEW in MEM. Return the initial value in MEM. Success is - * indicated by comparing RETURN with OLD. + * This code is from the original Xen arm32 cmpxchg.h, from before the + * Linux 5.10-rc2 atomics helpers were ported over. The only changes + * here are renaming the macros and functions to explicitly use + * "timeout" in their names so that they don't clash with the above. + * + * We need this here for guest atomics (the only user of the timeout + * variants). 
*/ =20 -extern unsigned long __bad_cmpxchg(volatile void *ptr, int size); - -#define __CMPXCHG_CASE(sz, name) \ -static inline bool __cmpxchg_case_##name(volatile void *ptr, \ - unsigned long *old, \ - unsigned long new, \ - bool timeout, \ - unsigned int max_try) \ -{ \ - unsigned long oldval; \ - unsigned long res; \ - \ - do { \ - asm volatile("@ __cmpxchg_case_" #name "\n" \ - " ldrex" #sz " %1, [%2]\n" \ - " mov %0, #0\n" \ - " teq %1, %3\n" \ - " strex" #sz "eq %0, %4, [%2]\n" \ - : "=3D&r" (res), "=3D&r" (oldval) \ - : "r" (ptr), "Ir" (*old), "r" (new) \ - : "memory", "cc"); \ - \ - if (!res) \ - break; \ - } while (!timeout || ((--max_try) > 0)); \ - \ - *old =3D oldval; \ - \ - return !res; \ +#define __CMPXCHG_TIMEOUT_CASE(sz, name) = \ +static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr, = \ + unsigned long *old, \ + unsigned long new, \ + bool timeout, \ + unsigned int max_try) \ +{ \ + unsigned long oldval; \ + unsigned long res; \ + \ + do { \ + asm volatile("@ __cmpxchg_timeout_case_" #name "\n" = \ + " ldrex" #sz " %1, [%2]\n" \ + " mov %0, #0\n" \ + " teq %1, %3\n" \ + " strex" #sz "eq %0, %4, [%2]\n" \ + : "=3D&r" (res), "=3D&r" (oldval) = \ + : "r" (ptr), "Ir" (*old), "r" (new) \ + : "memory", "cc"); \ + \ + if (!res) \ + break; \ + } while (!timeout || ((--max_try) > 0)); \ + \ + *old =3D oldval; \ + \ + return !res; \ } =20 -__CMPXCHG_CASE(b, 1) -__CMPXCHG_CASE(h, 2) -__CMPXCHG_CASE( , 4) +__CMPXCHG_TIMEOUT_CASE(b, 1) +__CMPXCHG_TIMEOUT_CASE(h, 2) +__CMPXCHG_TIMEOUT_CASE( , 4) =20 -static inline bool __cmpxchg_case_8(volatile uint64_t *ptr, - uint64_t *old, - uint64_t new, - bool timeout, - unsigned int max_try) +static inline bool __cmpxchg_timeout_case_8(volatile uint64_t *ptr, + uint64_t *old, + uint64_t new, + bool timeout, + unsigned int max_try) { - uint64_t oldval; - uint64_t res; - - do { - asm volatile( - " ldrexd %1, %H1, [%3]\n" - " teq %1, %4\n" - " teqeq %H1, %H4\n" - " movne %0, #0\n" - " movne %H0, #0\n" - " bne 2f\n" - " strexd %0, %5, %H5, [%3]\n" - "2:" - : "=3D&r" (res), "=3D&r" (oldval), "+Qo" (*ptr) - : "r" (ptr), "r" (*old), "r" (new) - : "memory", "cc"); - if (!res) - break; - } while (!timeout || ((--max_try) > 0)); - - *old =3D oldval; - - return !res; + uint64_t oldval; + uint64_t res; + + do { + asm volatile( + " ldrexd %1, %H1, [%3]\n" + " teq %1, %4\n" + " teqeq %H1, %H4\n" + " movne %0, #0\n" + " movne %H0, #0\n" + " bne 2f\n" + " strexd %0, %5, %H5, [%3]\n" + "2:" + : "=3D&r" (res), "=3D&r" (oldval), "+Qo" (*ptr) + : "r" (ptr), "r" (*old), "r" (new) + : "memory", "cc"); + if (!res) + break; + } while (!timeout || ((--max_try) > 0)); + + *old =3D oldval; + + return !res; } =20 static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long = *old, - unsigned long new, int size, - bool timeout, unsigned int max_try) + unsigned long new, int size, + bool timeout, unsigned int max_try) { - prefetchw((const void *)ptr); + prefetchw((const void *)ptr); =20 - switch (size) { - case 1: - return __cmpxchg_case_1(ptr, old, new, timeout, max_try); - case 2: - return __cmpxchg_case_2(ptr, old, new, timeout, max_try); - case 4: - return __cmpxchg_case_4(ptr, old, new, timeout, max_try); - default: - return __bad_cmpxchg(ptr, size); - } + switch (size) { + case 1: + return __cmpxchg_timeout_case_1(ptr, old, new, timeout, ma= x_try); + case 2: + return __cmpxchg_timeout_case_2(ptr, old, new, timeout, ma= x_try); + case 4: + return __cmpxchg_timeout_case_4(ptr, old, new, timeout, ma= x_try); + default: + __bad_cmpxchg(ptr, 
size); + return false; + } =20 - ASSERT_UNREACHABLE(); -} - -static always_inline unsigned long __cmpxchg(volatile void *ptr, - unsigned long old, - unsigned long new, - int size) -{ - smp_mb(); - if (!__int_cmpxchg(ptr, &old, new, size, false, 0)) - ASSERT_UNREACHABLE(); - smp_mb(); - - return old; + ASSERT_UNREACHABLE(); } =20 /* @@ -162,18 +307,18 @@ static always_inline unsigned long __cmpxchg(volatile= void *ptr, * timeout) and false if the update has failed. */ static always_inline bool __cmpxchg_timeout(volatile void *ptr, - unsigned long *old, - unsigned long new, - int size, - unsigned int max_try) + unsigned long *old, + unsigned long new, + int size, + unsigned int max_try) { - bool ret; + bool ret; =20 - smp_mb(); - ret =3D __int_cmpxchg(ptr, old, new, size, true, max_try); - smp_mb(); + smp_mb(); + ret =3D __int_cmpxchg(ptr, old, new, size, true, max_try); + smp_mb(); =20 - return ret; + return ret; } =20 /* @@ -187,43 +332,17 @@ static always_inline bool __cmpxchg_timeout(volatile = void *ptr, * timeout) and false if the update has failed. */ static always_inline bool __cmpxchg64_timeout(volatile uint64_t *ptr, - uint64_t *old, - uint64_t new, - unsigned int max_try) + uint64_t *old, + uint64_t new, + unsigned int max_try) { - bool ret; + bool ret; =20 - smp_mb(); - ret =3D __cmpxchg_case_8(ptr, old, new, true, max_try); - smp_mb(); + smp_mb(); + ret =3D __cmpxchg_timeout_case_8(ptr, old, new, true, max_try); + smp_mb(); =20 - return ret; + return ret; } =20 -#define cmpxchg(ptr,o,n) \ - ((__typeof__(*(ptr)))__cmpxchg((ptr), \ - (unsigned long)(o), \ - (unsigned long)(n), \ - sizeof(*(ptr)))) - -static inline uint64_t cmpxchg64(volatile uint64_t *ptr, - uint64_t old, - uint64_t new) -{ - smp_mb(); - if (!__cmpxchg_case_8(ptr, &old, new, false, 0)) - ASSERT_UNREACHABLE(); - smp_mb(); - - return old; -} - -#endif -/* - * Local variables: - * mode: C - * c-file-style: "BSD" - * c-basic-offset: 8 - * indent-tabs-mode: t - * End: - */ +#endif /* __ASM_ARM_ARM32_CMPXCHG_H */ diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32= /system.h index ab57abfbc5..88798d11db 100644 --- a/xen/include/asm-arm/arm32/system.h +++ b/xen/include/asm-arm/arm32/system.h @@ -2,7 +2,7 @@ #ifndef __ASM_ARM32_SYSTEM_H #define __ASM_ARM32_SYSTEM_H =20 -#include +#include =20 #define local_irq_disable() asm volatile ( "cpsid i @ local_irq_disable\n"= : : : "cc" ) #define local_irq_enable() asm volatile ( "cpsie i @ local_irq_enable\n" = : : : "cc" ) --=20 2.24.3 (Apple Git-128) From nobody Mon May 6 19:29:07 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1604602656; cv=none; d=zohomail.com; s=zohoarc; b=crwii8ggIjJZxwaXOkJrMamZMhnsI9b8ckYZHioTKX05pjdxqDTjfYy1zkwNhcVSRD9uJE0z+aCmfBfeEMYas315X57e4egyTmA6f9DRTF5aNVsN2N9lU5SUNiJYbU33GdvAPJRMjCzjE87qe/9dlTbaxwfS07hKRDtUOTkT96A= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1604602656; 
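[For reviewers: a sketch of how the retained *_timeout() helpers might sit on a guest-atomics emulation path. The caller and the size/retry constants are made up; this is not code from the series.]

    static bool emulate_guest_cas32(volatile void *ptr, uint32_t old,
                                    uint32_t new)
    {
        unsigned long prev = old;

        /* Bounded number of LL/SC attempts so a guest cannot livelock us. */
        if ( !__cmpxchg_timeout(ptr, &prev, new, 4, 16) )
            return false; /* contended or value changed; caller decides */

        return true;
    }
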
From nobody Mon May  6 19:29:07 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com, Ash Wilding <ash.j.wilding@gmail.com>
Subject: [RFC PATCH 6/6] xen/arm: Remove dependency on gcc builtin __sync_fetch_and_add()
Date: Thu,  5 Nov 2020 18:56:03 +0000
Message-Id: <20201105185603.24149-7-ash.j.wilding@gmail.com>
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>

Now that we have explicit implementations of LL/SC and LSE atomics
helpers after porting Linux's versions to Xen, we can drop the
reference to gcc's builtin __sync_fetch_and_add().

This requires some fudging using container_of(), because the users of
__sync_fetch_and_add(), namely xen/spinlock.c, expect the pointer to
point directly at the u32 being modified, while the atomics helpers
expect the pointer to be to an atomic_t and then access that
atomic_t's counter member.

NOTE: spinlock.c uses u32 for the value being added, while the atomics
helpers use int for their counter member. This shouldn't actually
matter, because we do the addition in assembly and the compiler isn't
smart enough to detect signed integer overflow in inline assembly,
but I thought it worth calling out in the commit message.
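[The container_of() step described above, shown in isolation. A sketch under the assumption that counter is atomic_t's only member, so the conversion is effectively a type-pun; fetch_and_add_sketch() is a hypothetical name.]

    static inline int fetch_and_add_sketch(u32 *word, u32 val)
    {
        /* &a->counter == (int *)word, since counter is the sole member. */
        atomic_t *a = container_of((int *)word, atomic_t, counter);

        /* Delegate to the atomics helpers via the enclosing atomic_t. */
        return atomic_fetch_add(val, a);
    }
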
Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/atomic.h |  2 +-
 xen/include/asm-arm/system.h       | 10 +++++++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index 544a4ba492..5cf13cc8fa 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -200,6 +200,7 @@ static inline int atomic_add_return(int i, atomic_t *v)
 
 	return ret;
 }
+#define atomic_fetch_add(i, v) atomic_add_return(i, v)
 
 static inline int atomic_sub_return(int i, atomic_t *v)
 {
@@ -212,5 +213,4 @@ static inline int atomic_sub_return(int i, atomic_t *v)
 	return ret;
 }
 
-
 #endif /* __ASM_ARM_ARM32_ATOMIC_H */
diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 65d5c8e423..86c50915d9 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -3,6 +3,7 @@
 #define __ASM_SYSTEM_H
 
 #include
+#include
 #include
 
 #define sev()           asm volatile("sev" : : : "memory")
@@ -58,7 +59,14 @@ static inline int local_abort_is_enabled(void)
     return !(flags & PSR_ABT_MASK);
 }
 
-#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+#define arch_fetch_and_add(ptr, x) ({                                   \
+    int ret;                                                            \
+                                                                        \
+    atomic_t *tmp = container_of((int *)(ptr), atomic_t, counter);      \
+    ret = atomic_fetch_add(x, tmp);                                     \
+                                                                        \
+    ret;                                                                \
+})
 
 extern struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next);
 
-- 
2.24.3 (Apple Git-128)
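[Finally, the sort of caller this keeps working, modelled loosely on the ticket-lock usage of arch_fetch_and_add() in xen/common/spinlock.c; simplified and illustrative only, with a hypothetical ticket word and increment.]

    static u32 ticket_head_tail;

    static u32 take_ticket_sketch(void)
    {
        /* Bump the ticket word atomically; callers consume the result. */
        return arch_fetch_and_add(&ticket_head_tail, 1U << 16);
    }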