From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 01/15] xen/arm: Support detection of CPU features in other ID registers
Date: Wed, 11 Nov 2020 21:51:49 +0000
Message-Id: <20201111215203.80336-2-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

The current Arm boot_cpu_feature64() and boot_cpu_feature32() macros
are hardcoded to only detect features in ID_AA64PFR{0,1}_EL1 and
ID_PFR{0,1} respectively.

This patch replaces these macros with a new macro, boot_cpu_feature(),
which takes an explicit ID register name as an argument.

While we're here, cull cpu_feature64() and cpu_feature32() as they
have no callers (we only ever use the boot CPU features), and update
the printk() messages in setup.c to use the new macro.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/setup.c             |  8 +++---
 xen/include/asm-arm/cpufeature.h | 44 +++++++++++++++-----------------
 2 files changed, 24 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7fcff9af2a..5121f06fc5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -134,16 +134,16 @@ static void __init processor_id(void)
            cpu_has_gicv3 ? " GICv3-SysReg" : "");
 
     /* Warn user if we find unknown floating-point features */
-    if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
+    if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
         printk(XENLOG_WARNING "WARNING: Unknown Floating-point ID:%d, "
                "this may result in corruption on the platform\n",
-               boot_cpu_feature64(fp));
+               boot_cpu_feature(pfr64, fp));
 
     /* Warn user if we find unknown AdvancedSIMD features */
-    if ( cpu_has_simd && (boot_cpu_feature64(simd) >= 2) )
+    if ( cpu_has_simd && (boot_cpu_feature(pfr64, simd) >= 2) )
         printk(XENLOG_WARNING "WARNING: Unknown AdvancedSIMD ID:%d, "
                "this may result in corruption on the platform\n",
-               boot_cpu_feature64(simd));
+               boot_cpu_feature(pfr64, simd));
 
     printk("  Debug Features: %016"PRIx64" %016"PRIx64"\n",
            boot_cpu_data.dbg64.bits[0], boot_cpu_data.dbg64.bits[1]);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 10878ead8a..f9281ea343 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -1,39 +1,35 @@
 #ifndef __ASM_ARM_CPUFEATURE_H
 #define __ASM_ARM_CPUFEATURE_H
 
+#define boot_cpu_feature(idreg, feat) (boot_cpu_data.idreg.feat)
+
 #ifdef CONFIG_ARM_64
-#define cpu_feature64(c, feat)    ((c)->pfr64.feat)
-#define boot_cpu_feature64(feat)  (boot_cpu_data.pfr64.feat)
-
-#define cpu_has_el0_32    (boot_cpu_feature64(el0) == 2)
-#define cpu_has_el0_64    (boot_cpu_feature64(el0) >= 1)
-#define cpu_has_el1_32    (boot_cpu_feature64(el1) == 2)
-#define cpu_has_el1_64    (boot_cpu_feature64(el1) >= 1)
-#define cpu_has_el2_32    (boot_cpu_feature64(el2) == 2)
-#define cpu_has_el2_64    (boot_cpu_feature64(el2) >= 1)
-#define cpu_has_el3_32    (boot_cpu_feature64(el3) == 2)
-#define cpu_has_el3_64    (boot_cpu_feature64(el3) >= 1)
-#define cpu_has_fp        (boot_cpu_feature64(fp) < 8)
-#define cpu_has_simd      (boot_cpu_feature64(simd) < 8)
-#define cpu_has_gicv3     (boot_cpu_feature64(gic) == 1)
+#define cpu_has_el0_32    (boot_cpu_feature(pfr64, el0) == 2)
+#define cpu_has_el0_64    (boot_cpu_feature(pfr64, el0) >= 1)
+#define cpu_has_el1_32    (boot_cpu_feature(pfr64, el1) == 2)
+#define cpu_has_el1_64    (boot_cpu_feature(pfr64, el1) >= 1)
+#define cpu_has_el2_32    (boot_cpu_feature(pfr64, el2) == 2)
+#define cpu_has_el2_64    (boot_cpu_feature(pfr64, el2) >= 1)
+#define cpu_has_el3_32    (boot_cpu_feature(pfr64, el3) == 2)
+#define cpu_has_el3_64    (boot_cpu_feature(pfr64, el3) >= 1)
+#define cpu_has_fp        (boot_cpu_feature(pfr64, fp) < 8)
+#define cpu_has_simd      (boot_cpu_feature(pfr64, simd) < 8)
+#define cpu_has_gicv3     (boot_cpu_feature(pfr64, gic) == 1)
 #endif
 
-#define cpu_feature32(c, feat)    ((c)->pfr32.feat)
-#define boot_cpu_feature32(feat)  (boot_cpu_data.pfr32.feat)
-
-#define cpu_has_arm       (boot_cpu_feature32(arm) == 1)
-#define cpu_has_thumb     (boot_cpu_feature32(thumb) >= 1)
-#define cpu_has_thumb2    (boot_cpu_feature32(thumb) >= 3)
-#define cpu_has_jazelle   (boot_cpu_feature32(jazelle) > 0)
-#define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
+#define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
+#define cpu_has_thumb     (boot_cpu_feature(pfr32, thumb) >= 1)
+#define cpu_has_thumb2    (boot_cpu_feature(pfr32, thumb) >= 3)
+#define cpu_has_jazelle   (boot_cpu_feature(pfr32, jazelle) > 0)
+#define cpu_has_thumbee   (boot_cpu_feature(pfr32, thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
 #ifdef CONFIG_ARM_32
-#define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
+#define cpu_has_gentimer  (boot_cpu_feature(pfr32, gentimer) == 1)
 #else
 #define cpu_has_gentimer  (1)
 #endif
-#define cpu_has_security  (boot_cpu_feature32(security) > 0)
+#define cpu_has_security  (boot_cpu_feature(pfr32, security) > 0)
 
 #define ARM64_WORKAROUND_CLEAN_CACHE    0
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE 1
-- 
2.24.3 (Apple Git-128)
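
As a quick illustration of the new interface (a minimal sketch, not
code from the patch: the function and message strings below are made
up, while boot_cpu_feature() and the pfr64/pfr32 groupings and fields
are exactly as in the header above), callers now name the ID register
grouping explicitly instead of it being baked into the macro:

    #include <asm/cpufeature.h>

    /* Probe two different ID register groupings with the one macro. */
    static void __init example_feature_report(void)
    {
        /* ID_AA64PFR0_EL1.FP; previously boot_cpu_feature64(fp). */
        if ( boot_cpu_feature(pfr64, fp) < 8 )
            printk("FP implemented, field value %d\n",
                   boot_cpu_feature(pfr64, fp));

        /* ID_PFR0.State1 (Thumb); previously boot_cpu_feature32(thumb). */
        if ( boot_cpu_feature(pfr32, thumb) >= 1 )
            printk("Thumb instruction set supported\n");
    }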

From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 02/15] xen/arm: Add detection of Armv8.1-LSE atomic instructions
Date: Wed, 11 Nov 2020 21:51:50 +0000
Message-Id: <20201111215203.80336-3-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

Use the new infrastructure for detecting CPU features in other ID
registers to detect the presence of Armv8.1-LSE atomic instructions,
as reported by ID_AA64ISAR0_EL1.Atomic.

While we're here, print detection of these instructions in setup.c's
processor_id().

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/setup.c             |  5 +++--
 xen/include/asm-arm/cpufeature.h | 10 +++++++++-
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5121f06fc5..138e1957c5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -128,10 +128,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_lse_atomics ? " LSE-Atomics" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index f9281ea343..2366926e82 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -15,6 +15,7 @@
 #define cpu_has_fp        (boot_cpu_feature(pfr64, fp) < 8)
 #define cpu_has_simd      (boot_cpu_feature(pfr64, simd) < 8)
 #define cpu_has_gicv3     (boot_cpu_feature(pfr64, gic) == 1)
+#define cpu_has_lse_atomics (boot_cpu_feature(isa64, atomic) == 2)
 #endif
 
 #define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
@@ -187,8 +188,15 @@ struct cpuinfo_arm {
         };
     } mm64;
 
-    struct {
+    union {
         uint64_t bits[2];
+        struct {
+            unsigned long __res0 : 20;
+            unsigned long atomic : 4;
+            unsigned long __res1 : 40;
+
+            unsigned long __res2 : 64;
+        };
     } isa64;
 
 #endif
-- 
2.24.3 (Apple Git-128)
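
For reference, the isa64 bitfield above mirrors the architected layout
of ID_AA64ISAR0_EL1, whose Atomic field sits at bits [23:20] and reads
as 2 when the Armv8.1-LSE atomic instructions are implemented. A
standalone sketch of the same check, reading the register directly
(illustrative only; the helper name is made up and not part of the
series):

    #include <stdint.h>

    /* Extract ID_AA64ISAR0_EL1.Atomic (bits [23:20]) on AArch64. */
    static inline unsigned int aa64isar0_atomic_field(void)
    {
        uint64_t isar0;

        __asm__ volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r" (isar0));
        return (isar0 >> 20) & 0xf;   /* 2 => LSE atomics present */
    }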

From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 03/15] xen/arm: Add ARM64_HAS_LSE_ATOMICS hwcap
Date: Wed, 11 Nov 2020 21:51:51 +0000
Message-Id: <20201111215203.80336-4-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

This patch introduces the ARM64_HAS_LSE_ATOMICS hwcap.

While doing this, CONFIG_ARM64_LSE_ATOMICS is added to control whether
the hwcap is actually detected and set at runtime. Without this
Kconfig being set we will always fall back on LL/SC atomics using
Armv8.0 exclusive accesses.

Note this patch does not actually add the ALTERNATIVE() switching
based on the hwcap being detected and set; that comes later in the
series.
Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/Kconfig             | 11 +++++++++++
 xen/arch/arm/Makefile            |  1 +
 xen/arch/arm/lse.c               | 13 +++++++++++++
 xen/include/asm-arm/cpufeature.h |  3 ++-
 4 files changed, 27 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/lse.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..febc41e492 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -78,6 +78,17 @@ config SBSA_VUART_CONSOLE
 	  Allows a guest to use SBSA Generic UART as a console. The
 	  SBSA Generic UART implements a subset of ARM PL011 UART.
 
+config ARM64_LSE_ATOMICS
+	bool "Armv8.1-LSE Atomics"
+	depends on ARM_64 && HAS_ALTERNATIVE
+	default y
+	---help---
+	  When set, dynamically patch Xen at runtime to use Armv8.1-LSE
+	  atomics when supported by the system.
+
+	  When unset, or when Armv8.1-LSE atomics are not supported by the
+	  system, fallback on LL/SC atomics using Armv8.0 exclusive accesses.
+
 config ARM_SSBD
 	bool "Speculative Store Bypass Disable" if EXPERT
 	depends on HAS_ALTERNATIVE
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e68bb..cadd0ad253 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -63,6 +63,7 @@ obj-y += vsmc.o
 obj-y += vpsci.o
 obj-y += vuart.o
 extra-y += $(TARGET_SUBARCH)/head.o
+obj-$(CONFIG_ARM64_LSE_ATOMICS) += lse.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/lse.c b/xen/arch/arm/lse.c
new file mode 100644
index 0000000000..8274dac671
--- /dev/null
+++ b/xen/arch/arm/lse.c
@@ -0,0 +1,13 @@
+
+#include <xen/init.h>
+#include <asm/cpufeature.h>
+
+static int __init update_lse_caps(void)
+{
+    if ( cpu_has_lse_atomics )
+        cpus_set_cap(ARM64_HAS_LSE_ATOMICS);
+
+    return 0;
+}
+
+__initcall(update_lse_caps);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 2366926e82..48c172ee29 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -42,8 +42,9 @@
 #define ARM_SSBD 7
 #define ARM_SMCCC_1_1 8
 #define ARM64_WORKAROUND_AT_SPECULATE 9
+#define ARM64_HAS_LSE_ATOMICS 10
 
-#define ARM_NCAPS           10
+#define ARM_NCAPS           11
 
 #ifndef __ASSEMBLY__
 
-- 
2.24.3 (Apple Git-128)
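
To make the intended use of the new capability bit concrete, here is a
sketch of how a caller can dispatch on it once update_lse_caps() has
run, using Xen's existing cpus_have_cap() helper (the function and
message strings are illustrative, not taken from the series):

    #include <xen/lib.h>
    #include <asm/cpufeature.h>

    /* Query the capability bit set by update_lse_caps() above. */
    static void __init report_lse_choice(void)
    {
        if ( cpus_have_cap(ARM64_HAS_LSE_ATOMICS) )
            printk("atomics: patching in Armv8.1-LSE instructions\n");
        else
            printk("atomics: keeping Armv8.0 LL/SC (ldxr/stxr) loops\n");
    }

The ALTERNATIVE() switching mentioned above would then follow the same
pattern as existing capabilities such as ARM_SSBD: call sites are
patched once at boot according to whether the capability bit was set.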

From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 04/15] xen/arm: Delete Xen atomics helpers
Date: Wed, 11 Nov 2020 21:51:52 +0000
Message-Id: <20201111215203.80336-5-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

To maintain clean diffs and dissectability, let's delete the existing
Xen atomics helpers before pulling in the Linux versions.
Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/atomic.h  | 175 ---------------------
 xen/include/asm-arm/arm32/cmpxchg.h | 229 ----------------------------
 xen/include/asm-arm/arm64/atomic.h  | 148 ------------------
 xen/include/asm-arm/arm64/cmpxchg.h | 183 ----------------------
 4 files changed, 735 deletions(-)
 delete mode 100644 xen/include/asm-arm/arm32/atomic.h
 delete mode 100644 xen/include/asm-arm/arm32/cmpxchg.h
 delete mode 100644 xen/include/asm-arm/arm64/atomic.h
 delete mode 100644 xen/include/asm-arm/arm64/cmpxchg.h

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
deleted file mode 100644
index 2832a72792..0000000000
--- a/xen/include/asm-arm/arm32/atomic.h
+++ /dev/null
@@ -1,175 +0,0 @@
-/*
- * arch/arm/include/asm/atomic.h
- *
- * Copyright (C) 1996 Russell King.
- * Copyright (C) 2002 Deep Blue Solutions Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-#ifndef __ARCH_ARM_ARM32_ATOMIC__
-#define __ARCH_ARM_ARM32_ATOMIC__
-
-/*
- * ARMv6 UP and SMP safe atomic ops. We use load exclusive and
- * store exclusive to ensure that these are atomic. We may loop
- * to ensure that the update happens.
- */
-static inline void atomic_add(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    prefetchw(&v->counter);
-    __asm__ __volatile__("@ atomic_add\n"
-"1: ldrex   %0, [%3]\n"
-"   add     %0, %0, %4\n"
-"   strex   %1, %0, [%3]\n"
-"   teq     %1, #0\n"
-"   bne     1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (i)
-    : "cc");
-}
-
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    smp_mb();
-    prefetchw(&v->counter);
-
-    __asm__ __volatile__("@ atomic_add_return\n"
-"1: ldrex   %0, [%3]\n"
-"   add     %0, %0, %4\n"
-"   strex   %1, %0, [%3]\n"
-"   teq     %1, #0\n"
-"   bne     1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (i)
-    : "cc");
-
-    smp_mb();
-
-    return result;
-}
-
-static inline void atomic_sub(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    prefetchw(&v->counter);
-    __asm__ __volatile__("@ atomic_sub\n"
-"1: ldrex   %0, [%3]\n"
-"   sub     %0, %0, %4\n"
-"   strex   %1, %0, [%3]\n"
-"   teq     %1, #0\n"
-"   bne     1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (i)
-    : "cc");
-}
-
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    smp_mb();
-    prefetchw(&v->counter);
-
-    __asm__ __volatile__("@ atomic_sub_return\n"
-"1: ldrex   %0, [%3]\n"
-"   sub     %0, %0, %4\n"
-"   strex   %1, %0, [%3]\n"
-"   teq     %1, #0\n"
-"   bne     1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (i)
-    : "cc");
-
-    smp_mb();
-
-    return result;
-}
-
-static inline void atomic_and(int m, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    prefetchw(&v->counter);
-    __asm__ __volatile__("@ atomic_and\n"
-"1: ldrex   %0, [%3]\n"
-"   and     %0, %0, %4\n"
-"   strex   %1, %0, [%3]\n"
-"   teq     %1, #0\n"
-"   bne     1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (m)
-    : "cc");
-}
-
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
-{
-    int oldval;
-    unsigned long res;
-
-    smp_mb();
-    prefetchw(&ptr->counter);
-
-    do {
-        __asm__ __volatile__("@ atomic_cmpxchg\n"
-        "ldrex      %1, [%3]\n"
-        "mov        %0, #0\n"
-        "teq        %1, %4\n"
-        "strexeq    %0, %5, [%3]\n"
-        : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
-        : "r" (&ptr->counter), "Ir" (old), "r" (new)
-        : "cc");
-    } while (res);
-
-    smp_mb();
-
-    return oldval;
-}
-
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
-{
-    int oldval, newval;
-    unsigned long tmp;
-
-    smp_mb();
-    prefetchw(&v->counter);
-
-    __asm__ __volatile__ ("@ atomic_add_unless\n"
-"1: ldrex   %0, [%4]\n"
-"   teq     %0, %5\n"
-"   beq     2f\n"
-"   add     %1, %0, %6\n"
-"   strex   %2, %1, [%4]\n"
-"   teq     %2, #0\n"
-"   bne     1b\n"
-"2:"
-    : "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "r" (u), "r" (a)
-    : "cc");
-
-    if (oldval != u)
-        smp_mb();
-
-    return oldval;
-}
-
-#endif /* __ARCH_ARM_ARM32_ATOMIC__ */
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
deleted file mode 100644
index b0bd1d8b68..0000000000
--- a/xen/include/asm-arm/arm32/cmpxchg.h
+++ /dev/null
@@ -1,229 +0,0 @@
-#ifndef __ASM_ARM32_CMPXCHG_H
-#define __ASM_ARM32_CMPXCHG_H
-
-#include <xen/prefetch.h>
-
-extern void __bad_xchg(volatile void *, int);
-
-static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
-{
-    unsigned long ret;
-    unsigned int tmp;
-
-    smp_mb();
-    prefetchw((const void *)ptr);
-
-    switch (size) {
-    case 1:
-        asm volatile("@ __xchg1\n"
-        "1: ldrexb  %0, [%3]\n"
-        "   strexb  %1, %2, [%3]\n"
-        "   teq     %1, #0\n"
-        "   bne     1b"
-            : "=&r" (ret), "=&r" (tmp)
-            : "r" (x), "r" (ptr)
-            : "memory", "cc");
-        break;
-    case 4:
-        asm volatile("@ __xchg4\n"
-        "1: ldrex   %0, [%3]\n"
-        "   strex   %1, %2, [%3]\n"
-        "   teq     %1, #0\n"
-        "   bne     1b"
-            : "=&r" (ret), "=&r" (tmp)
-            : "r" (x), "r" (ptr)
-            : "memory", "cc");
-        break;
-    default:
-        __bad_xchg(ptr, size), ret = 0;
-        break;
-    }
-    smp_mb();
-
-    return ret;
-}
-
-#define xchg(ptr,x) \
-    ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-
-/*
- * Atomic compare and exchange. Compare OLD with MEM, if identical,
- * store NEW in MEM. Return the initial value in MEM. Success is
- * indicated by comparing RETURN with OLD.
- */
-
-extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
-
-#define __CMPXCHG_CASE(sz, name)                                \
-static inline bool __cmpxchg_case_##name(volatile void *ptr,    \
-                                         unsigned long *old,    \
-                                         unsigned long new,     \
-                                         bool timeout,          \
-                                         unsigned int max_try)  \
-{                                                               \
-    unsigned long oldval;                                       \
-    unsigned long res;                                          \
-                                                                \
-    do {                                                        \
-        asm volatile("@ __cmpxchg_case_" #name "\n"             \
-        "   ldrex" #sz " %1, [%2]\n"                            \
-        "   mov %0, #0\n"                                       \
-        "   teq %1, %3\n"                                       \
-        "   strex" #sz "eq %0, %4, [%2]\n"                      \
-        : "=&r" (res), "=&r" (oldval)                           \
-        : "r" (ptr), "Ir" (*old), "r" (new)                     \
-        : "memory", "cc");                                      \
-                                                                \
-        if (!res)                                               \
-            break;                                              \
-    } while (!timeout || ((--max_try) > 0));                    \
-                                                                \
-    *old = oldval;                                              \
-                                                                \
-    return !res;                                                \
-}
-
-__CMPXCHG_CASE(b, 1)
-__CMPXCHG_CASE(h, 2)
-__CMPXCHG_CASE( , 4)
-
-static inline bool __cmpxchg_case_8(volatile uint64_t *ptr,
-                                    uint64_t *old,
-                                    uint64_t new,
-                                    bool timeout,
-                                    unsigned int max_try)
-{
-    uint64_t oldval;
-    uint64_t res;
-
-    do {
-        asm volatile(
-        "   ldrexd      %1, %H1, [%3]\n"
-        "   teq         %1, %4\n"
-        "   teqeq       %H1, %H4\n"
-        "   movne       %0, #0\n"
-        "   movne       %H0, #0\n"
-        "   bne         2f\n"
-        "   strexd      %0, %5, %H5, [%3]\n"
-        "2:"
-        : "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
-        : "r" (ptr), "r" (*old), "r" (new)
-        : "memory", "cc");
-        if (!res)
-            break;
-    } while (!timeout || ((--max_try) > 0));
-
-    *old = oldval;
-
-    return !res;
-}
-
-static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
-                                        unsigned long new, int size,
-                                        bool timeout, unsigned int max_try)
-{
-    prefetchw((const void *)ptr);
-
-    switch (size) {
-    case 1:
-        return __cmpxchg_case_1(ptr, old, new, timeout, max_try);
-    case 2:
-        return __cmpxchg_case_2(ptr, old, new, timeout, max_try);
-    case 4:
-        return __cmpxchg_case_4(ptr, old, new, timeout, max_try);
-    default:
-        return __bad_cmpxchg(ptr, size);
-    }
-
-    ASSERT_UNREACHABLE();
-}
-
-static always_inline unsigned long __cmpxchg(volatile void *ptr,
-                                             unsigned long old,
-                                             unsigned long new,
-                                             int size)
-{
-    smp_mb();
-    if (!__int_cmpxchg(ptr, &old, new, size, false, 0))
-        ASSERT_UNREACHABLE();
-    smp_mb();
-
-    return old;
-}
-
-/*
- * The helper may fail to update the memory if the action takes too long.
- *
- * @old: On call the value pointed contains the expected old value. It will be
- * updated to the actual old value.
- * @max_try: Maximum number of iterations
- *
- * The helper will return true when the update has succeeded (i.e no
- * timeout) and false if the update has failed.
- */
-static always_inline bool __cmpxchg_timeout(volatile void *ptr,
-                                            unsigned long *old,
-                                            unsigned long new,
-                                            int size,
-                                            unsigned int max_try)
-{
-    bool ret;
-
-    smp_mb();
-    ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
-    smp_mb();
-
-    return ret;
-}
-
-/*
- * The helper may fail to update the memory if the action takes too long.
- *
- * @old: On call the value pointed contains the expected old value. It will be
- * updated to the actual old value.
- * @max_try: Maximum number of iterations
- *
- * The helper will return true when the update has succeeded (i.e no
- * timeout) and false if the update has failed.
- */
-static always_inline bool __cmpxchg64_timeout(volatile uint64_t *ptr,
-                                              uint64_t *old,
-                                              uint64_t new,
-                                              unsigned int max_try)
-{
-    bool ret;
-
-    smp_mb();
-    ret = __cmpxchg_case_8(ptr, old, new, true, max_try);
-    smp_mb();
-
-    return ret;
-}
-
-#define cmpxchg(ptr,o,n)                                    \
-    ((__typeof__(*(ptr)))__cmpxchg((ptr),                   \
-                                   (unsigned long)(o),      \
-                                   (unsigned long)(n),      \
-                                   sizeof(*(ptr))))
-
-static inline uint64_t cmpxchg64(volatile uint64_t *ptr,
-                                 uint64_t old,
-                                 uint64_t new)
-{
-    smp_mb();
-    if (!__cmpxchg_case_8(ptr, &old, new, false, 0))
-        ASSERT_UNREACHABLE();
-    smp_mb();
-
-    return old;
-}
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
deleted file mode 100644
index 2d42567866..0000000000
--- a/xen/include/asm-arm/arm64/atomic.h
+++ /dev/null
@@ -1,148 +0,0 @@
-/*
- * Based on arch/arm64/include/asm/atomic.h
- * which in turn is
- * Based on arch/arm/include/asm/atomic.h
- *
- * Copyright (C) 1996 Russell King.
- * Copyright (C) 2002 Deep Blue Solutions Ltd.
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-#ifndef __ARCH_ARM_ARM64_ATOMIC
-#define __ARCH_ARM_ARM64_ATOMIC
-
-/*
- * AArch64 UP and SMP safe atomic ops. We use load exclusive and
- * store exclusive to ensure that these are atomic. We may loop
- * to ensure that the update happens.
- */
-static inline void atomic_add(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_add\n"
-"1: ldxr    %w0, %2\n"
-"   add     %w0, %w0, %w3\n"
-"   stxr    %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (i));
-}
-
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_add_return\n"
-"1: ldxr    %w0, %2\n"
-"   add     %w0, %w0, %w3\n"
-"   stlxr   %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (i)
-    : "memory");
-
-    smp_mb();
-    return result;
-}
-
-static inline void atomic_sub(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_sub\n"
-"1: ldxr    %w0, %2\n"
-"   sub     %w0, %w0, %w3\n"
-"   stxr    %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (i));
-}
-
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_sub_return\n"
-"1: ldxr    %w0, %2\n"
-"   sub     %w0, %w0, %w3\n"
-"   stlxr   %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (i)
-    : "memory");
-
-    smp_mb();
-    return result;
-}
-
-static inline void atomic_and(int m, atomic_t *v)
-{
-    unsigned long tmp;
-    int result;
-
-    asm volatile("// atomic_and\n"
-"1: ldxr    %w0, %2\n"
-"   and     %w0, %w0, %w3\n"
-"   stxr    %w1, %w0, %2\n"
-"   cbnz    %w1, 1b"
-    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-    : "Ir" (m));
-}
-
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
-{
-    unsigned long tmp;
-    int oldval;
-
-    smp_mb();
-
-    asm volatile("// atomic_cmpxchg\n"
-"1: ldxr    %w1, %2\n"
-"   cmp     %w1, %w3\n"
-"   b.ne    2f\n"
-"   stxr    %w0, %w4, %2\n"
-"   cbnz    %w0, 1b\n"
-"2:"
-    : "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter)
-    : "Ir" (old), "r" (new)
-    : "cc");
-
-    smp_mb();
-    return oldval;
-}
-
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
-{
-    int c, old;
-
-    c = atomic_read(v);
-    while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
-        c = old;
-    return c;
-}
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
deleted file mode 100644
index 10e4edc022..0000000000
--- a/xen/include/asm-arm/arm64/cmpxchg.h
+++ /dev/null
@@ -1,183 +0,0 @@
-#ifndef __ASM_ARM64_CMPXCHG_H
-#define __ASM_ARM64_CMPXCHG_H
-
-extern void __bad_xchg(volatile void *, int);
-
-static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
-{
-    unsigned long ret, tmp;
-
-    switch (size) {
-    case 1:
-        asm volatile("// __xchg1\n"
-        "1: ldxrb   %w0, %2\n"
-        "   stlxrb  %w1, %w3, %2\n"
-        "   cbnz    %w1, 1b\n"
-            : "=&r" (ret), "=&r" (tmp), "+Q" (*(u8 *)ptr)
-            : "r" (x)
-            : "memory");
-        break;
-    case 2:
-        asm volatile("// __xchg2\n"
-        "1: ldxrh   %w0, %2\n"
-        "   stlxrh  %w1, %w3, %2\n"
-        "   cbnz    %w1, 1b\n"
-            : "=&r" (ret), "=&r" (tmp), "+Q" (*(u16 *)ptr)
-            : "r" (x)
-            : "memory");
-        break;
-    case 4:
-        asm volatile("// __xchg4\n"
-        "1: ldxr    %w0, %2\n"
-        "   stlxr   %w1, %w3, %2\n"
-        "   cbnz    %w1, 1b\n"
-            : "=&r" (ret), "=&r" (tmp), "+Q" (*(u32 *)ptr)
-            : "r" (x)
-            : "memory");
-        break;
-    case 8:
-        asm volatile("// __xchg8\n"
-        "1: ldxr    %0, %2\n"
-        "   stlxr   %w1, %3, %2\n"
-        "   cbnz    %w1, 1b\n"
-            : "=&r" (ret), "=&r" (tmp), "+Q" (*(u64 *)ptr)
-            : "r" (x)
-            : "memory");
-        break;
-    default:
-        __bad_xchg(ptr, size), ret = 0;
-        break;
-    }
-
-    smp_mb();
-    return ret;
-}
-
-#define xchg(ptr,x) \
-({ \
-    __typeof__(*(ptr)) __ret; \
-    __ret = (__typeof__(*(ptr))) \
-        __xchg((unsigned long)(x), (ptr), sizeof(*(ptr))); \
-    __ret; \
-})
-
-extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
-
-#define __CMPXCHG_CASE(w, sz, name)                             \
-static inline bool __cmpxchg_case_##name(volatile void *ptr,    \
-                                         unsigned long *old,    \
-                                         unsigned long new,     \
-                                         bool timeout,          \
-                                         unsigned int max_try)  \
-{                                                               \
-    unsigned long oldval;                                       \
-    unsigned long res;                                          \
-                                                                \
-    do {                                                        \
-        asm volatile("// __cmpxchg_case_" #name "\n"            \
-        "   ldxr" #sz " %" #w "1, %2\n"                         \
-        "   mov %w0, #0\n"                                      \
-        "   cmp %" #w "1, %" #w "3\n"                           \
-        "   b.ne 1f\n"                                          \
-        "   stxr" #sz " %w0, %" #w "4, %2\n"                    \
-        "1:\n"                                                  \
-        : "=&r" (res), "=&r" (oldval),                          \
-          "+Q" (*(unsigned long *)ptr)                          \
-        : "Ir" (*old), "r" (new)                                \
-        : "cc");                                                \
-                                                                \
-        if (!res)                                               \
-            break;                                              \
-    } while (!timeout || ((--max_try) > 0));                    \
-                                                                \
-    *old = oldval;                                              \
-                                                                \
-    return !res;                                                \
-}
-
-__CMPXCHG_CASE(w, b, 1)
-__CMPXCHG_CASE(w, h, 2)
-__CMPXCHG_CASE(w,  , 4)
-__CMPXCHG_CASE( ,  , 8)
-
-static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
-                                        unsigned long new, int size,
-                                        bool timeout, unsigned int max_try)
-{
-    switch (size) {
-    case 1:
-        return __cmpxchg_case_1(ptr, old, new, timeout, max_try);
-    case 2:
-        return __cmpxchg_case_2(ptr, old, new, timeout, max_try);
-    case 4:
-        return __cmpxchg_case_4(ptr, old, new, timeout, max_try);
-    case 8:
-        return __cmpxchg_case_8(ptr, old, new, timeout, max_try);
-    default:
-        return __bad_cmpxchg(ptr, size);
-    }
-
-    ASSERT_UNREACHABLE();
-}
-
-static always_inline unsigned long __cmpxchg(volatile void *ptr,
-                                             unsigned long old,
-                                             unsigned long new,
-                                             int size)
-{
-    smp_mb();
-    if (!__int_cmpxchg(ptr, &old, new, size, false, 0))
-        ASSERT_UNREACHABLE();
-    smp_mb();
-
-    return old;
-}
-
-/*
- * The helper may fail to update the memory if the action takes too long.
- *
- * @old: On call the value pointed contains the expected old value. It will be
- * updated to the actual old value.
- * @max_try: Maximum number of iterations
- *
- * The helper will return true when the update has succeeded (i.e no
- * timeout) and false if the update has failed.
- */
-static always_inline bool __cmpxchg_timeout(volatile void *ptr,
-                                            unsigned long *old,
-                                            unsigned long new,
-                                            int size,
-                                            unsigned int max_try)
-{
-    bool ret;
-
-    smp_mb();
-    ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
-    smp_mb();
-
-    return ret;
-}
-
-#define cmpxchg(ptr, o, n) \
-({ \
-    __typeof__(*(ptr)) __ret; \
-    __ret = (__typeof__(*(ptr))) \
-        __cmpxchg((ptr), (unsigned long)(o), (unsigned long)(n), \
-                  sizeof(*(ptr))); \
-    __ret; \
-})
-
-#define cmpxchg64(ptr, o, n) cmpxchg(ptr, o, n)
-
-#define __cmpxchg64_timeout(ptr, old, new, max_try) \
-    __cmpxchg_timeout(ptr, old, new, 8, max_try)
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
-- 
2.24.3 (Apple Git-128)

From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 05/15] xen/arm: pull in Linux atomics helpers and dependencies
Date: Wed, 11 Nov 2020 21:51:53 +0000
Message-Id: <20201111215203.80336-6-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

This patch pulls in Linux's atomics helpers for arm32 and arm64, and
their dependencies, as-is.

Note that Linux's arm32 atomics helpers use the READ_ONCE() and
WRITE_ONCE() macros defined in <asm-generic/rwonce.h>, while Linux's
arm64 atomics helpers use __READ_ONCE() and __WRITE_ONCE(). The only
difference is that the __* versions skip checking whether the object
being accessed is the same size as a native C type (e.g. char, int,
long, etc). Given the arm32 helpers are using the macros to access an
atomic_t's counter member, which is an int, we can skip this check by
using the __* versions like the arm64 code does.

I mention this here because it forms the first part of my
justification for *not* copying Linux's <linux/compiler_types.h> to
Xen; the size check described above is performed by __native_word()
defined in that header.

The second part of my justification may be more contentious; as
you'll see in the next patch, I intend to drop the
__unqual_scalar_typeof() calls in __READ_ONCE() and __WRITE_ONCE().
This is because the pointer to the atomic_t's counter member is
always a basic (int *) so we don't need this, and dropping it means
we can avoid having to port Linux's <linux/compiler_types.h>.

If people are against this approach, please bear in mind that the
current version of __unqual_scalar_typeof() in
<linux/compiler_types.h> was actually the reason for Linux upgrading
the minimum GCC version required to 4.9 earlier this year, so that
they can guarantee C11 _Generic support [1]. So if we do want to take
Linux's <linux/compiler_types.h> we'll either need to:

    A) bump up the minimum required version of GCC to 4.9 to match
       that required by Linux; in the Xen README we currently state
       GCC 4.8 for Arm and GCC 4.1.2_20070115 for x86.

or:

    B) mix-and-match an older version of Linux's
       <linux/compiler_types.h> with the other headers taken from a
       newer version of Linux.

Thoughts?

[1] https://lkml.org/lkml/2020/7/8/1308
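
(For the curious, a much simplified sketch of the _Generic trick
involved; this is illustrative only and reduced from Linux's real
__unqual_scalar_typeof(), which enumerates every scalar type:

    /* Strip qualifiers from an int lvalue: _Generic performs lvalue
     * conversion on its controlling expression, so a const/volatile
     * int matches the plain "int" association, and the selected
     * expression, (int)0, has an unqualified type. */
    #define __unqual_int_typeof(x) \
        __typeof__(_Generic((x), int: (int)0, default: (x)))

    static int load_once_demo(const volatile int *p)
    {
        __unqual_int_typeof(*p) val = *p; /* val is a plain int */
        return val;
    }

GCC only supports C11 _Generic from 4.9 onwards, hence the version
bump referenced above.)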
(identity @gmail.com) Content-Type: text/plain; charset="utf-8" From: Ash Wilding

This patch pulls in Linux's atomics helpers for arm32 and arm64, and their dependencies, as-is.

Note that Linux's arm32 atomics helpers use the READ_ONCE() and WRITE_ONCE() macros defined in <asm-generic/rwonce.h>, while Linux's arm64 atomics helpers use __READ_ONCE() and __WRITE_ONCE(). The only difference is that the __* versions skip checking whether the object being accessed is the same size as a native C type (e.g. char, int, long). Given that the arm32 helpers only use the macros to access an atomic_t's counter member, which is an int, we can skip this check by using the __* versions like the arm64 code does.

I mention this here because it forms the first part of my justification for *not* copying Linux's <linux/compiler_types.h> to Xen; the size check described above is performed by __native_word(), defined in that header.

The second part of my justification may be more contentious: as you'll see in the next patch, I intend to drop the __unqual_scalar_typeof() calls in __READ_ONCE() and __WRITE_ONCE(). This is because the pointer to the atomic_t's counter member is always a plain (int *), so we don't need them, and dropping them means we can also avoid porting Linux's <linux/compiler_types.h>. If people are against this approach, please bear in mind that the current version of __unqual_scalar_typeof() in <linux/compiler_types.h> was actually the reason Linux raised its minimum required GCC version to 4.9 earlier this year, so that C11 _Generic support can be guaranteed [1]. So if we do want to take Linux's <linux/compiler_types.h> we'll either need to:

A) bump the minimum required GCC version up to 4.9 to match Linux; the Xen README currently states GCC 4.8 for Arm and GCC 4.1.2_20070115 for x86;

or:

B) mix-and-match an older version of Linux's <linux/compiler_types.h> with the other headers taken from a newer version of Linux.

Thoughts?

[1] https://lkml.org/lkml/2020/7/8/1308

Signed-off-by: Ash Wilding --- xen/include/asm-arm/arm32/atomic.h | 507 +++++++++++++++++++++++ xen/include/asm-arm/arm32/cmpxchg.h | 279 +++++++++++++ xen/include/asm-arm/arm64/atomic.h | 228 ++++++++++ xen/include/asm-arm/arm64/atomic_ll_sc.h | 353 ++++++++++++++++ xen/include/asm-arm/arm64/atomic_lse.h | 419 +++++++++++++++++++ xen/include/asm-arm/arm64/cmpxchg.h | 285 +++++++++++++ xen/include/asm-arm/arm64/lse.h | 48 +++ xen/include/xen/rwonce.h | 90 ++++ 8 files changed, 2209 insertions(+) create mode 100644 xen/include/asm-arm/arm32/atomic.h create mode 100644 xen/include/asm-arm/arm32/cmpxchg.h create mode 100644 xen/include/asm-arm/arm64/atomic.h create mode 100644 xen/include/asm-arm/arm64/atomic_ll_sc.h create mode 100644 xen/include/asm-arm/arm64/atomic_lse.h create mode 100644 xen/include/asm-arm/arm64/cmpxchg.h create mode 100644 xen/include/asm-arm/arm64/lse.h create mode 100644 xen/include/xen/rwonce.h diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32= /atomic.h new file mode 100644 index 0000000000..ac6338dd9b --- /dev/null +++ b/xen/include/asm-arm/arm32/atomic.h @@ -0,0 +1,507 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * arch/arm/include/asm/atomic.h + * + * Copyright (C) 1996 Russell King. + * Copyright (C) 2002 Deep Blue Solutions Ltd. + */ +#ifndef __ASM_ARM_ATOMIC_H +#define __ASM_ARM_ATOMIC_H + +#include +#include +#include +#include +#include +#include + +#ifdef __KERNEL__ + +/* + * On ARM, ordinary assignment (str instruction) doesn't clear the local + * strex/ldrex monitor on some implementations.
The reason we can use it f= or + * atomic_set() is the clrex or dummy strex done on every exception return. + */ +#define atomic_read(v) READ_ONCE((v)->counter) +#define atomic_set(v,i) WRITE_ONCE(((v)->counter), (i)) + +#if __LINUX_ARM_ARCH__ >=3D 6 + +/* + * ARMv6 UP and SMP safe atomic ops. We use load exclusive and + * store exclusive to ensure that these are atomic. We may loop + * to ensure that the update happens. + */ + +#define ATOMIC_OP(op, c_op, asm_op) \ +static inline void atomic_##op(int i, atomic_t *v) \ +{ \ + unsigned long tmp; \ + int result; \ + \ + prefetchw(&v->counter); \ + __asm__ __volatile__("@ atomic_" #op "\n" \ +"1: ldrex %0, [%3]\n" \ +" " #asm_op " %0, %0, %4\n" \ +" strex %1, %0, [%3]\n" \ +" teq %1, #0\n" \ +" bne 1b" \ + : "=3D&r" (result), "=3D&r" (tmp), "+Qo" (v->counter) \ + : "r" (&v->counter), "Ir" (i) \ + : "cc"); \ +} \ + +#define ATOMIC_OP_RETURN(op, c_op, asm_op) \ +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ +{ \ + unsigned long tmp; \ + int result; \ + \ + prefetchw(&v->counter); \ + \ + __asm__ __volatile__("@ atomic_" #op "_return\n" \ +"1: ldrex %0, [%3]\n" \ +" " #asm_op " %0, %0, %4\n" \ +" strex %1, %0, [%3]\n" \ +" teq %1, #0\n" \ +" bne 1b" \ + : "=3D&r" (result), "=3D&r" (tmp), "+Qo" (v->counter) \ + : "r" (&v->counter), "Ir" (i) \ + : "cc"); \ + \ + return result; \ +} + +#define ATOMIC_FETCH_OP(op, c_op, asm_op) \ +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ +{ \ + unsigned long tmp; \ + int result, val; \ + \ + prefetchw(&v->counter); \ + \ + __asm__ __volatile__("@ atomic_fetch_" #op "\n" \ +"1: ldrex %0, [%4]\n" \ +" " #asm_op " %1, %0, %5\n" \ +" strex %2, %1, [%4]\n" \ +" teq %2, #0\n" \ +" bne 1b" \ + : "=3D&r" (result), "=3D&r" (val), "=3D&r" (tmp), "+Qo" (v->counter) \ + : "r" (&v->counter), "Ir" (i) \ + : "cc"); \ + \ + return result; \ +} + +#define atomic_add_return_relaxed atomic_add_return_relaxed +#define atomic_sub_return_relaxed atomic_sub_return_relaxed +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed + +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed + +static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new) +{ + int oldval; + unsigned long res; + + prefetchw(&ptr->counter); + + do { + __asm__ __volatile__("@ atomic_cmpxchg\n" + "ldrex %1, [%3]\n" + "mov %0, #0\n" + "teq %1, %4\n" + "strexeq %0, %5, [%3]\n" + : "=3D&r" (res), "=3D&r" (oldval), "+Qo" (ptr->counter) + : "r" (&ptr->counter), "Ir" (old), "r" (new) + : "cc"); + } while (res); + + return oldval; +} +#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed + +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + int oldval, newval; + unsigned long tmp; + + smp_mb(); + prefetchw(&v->counter); + + __asm__ __volatile__ ("@ atomic_add_unless\n" +"1: ldrex %0, [%4]\n" +" teq %0, %5\n" +" beq 2f\n" +" add %1, %0, %6\n" +" strex %2, %1, [%4]\n" +" teq %2, #0\n" +" bne 1b\n" +"2:" + : "=3D&r" (oldval), "=3D&r" (newval), "=3D&r" (tmp), "+Qo" (v->counter) + : "r" (&v->counter), "r" (u), "r" (a) + : "cc"); + + if (oldval !=3D u) + smp_mb(); + + return oldval; +} +#define atomic_fetch_add_unless atomic_fetch_add_unless + +#else /* ARM_ARCH_6 */ + +#ifdef CONFIG_SMP +#error SMP not supported on pre-ARMv6 CPUs +#endif + +#define 
ATOMIC_OP(op, c_op, asm_op) \ +static inline void atomic_##op(int i, atomic_t *v) \ +{ \ + unsigned long flags; \ + \ + raw_local_irq_save(flags); \ + v->counter c_op i; \ + raw_local_irq_restore(flags); \ +} \ + +#define ATOMIC_OP_RETURN(op, c_op, asm_op) \ +static inline int atomic_##op##_return(int i, atomic_t *v) \ +{ \ + unsigned long flags; \ + int val; \ + \ + raw_local_irq_save(flags); \ + v->counter c_op i; \ + val =3D v->counter; \ + raw_local_irq_restore(flags); \ + \ + return val; \ +} + +#define ATOMIC_FETCH_OP(op, c_op, asm_op) \ +static inline int atomic_fetch_##op(int i, atomic_t *v) \ +{ \ + unsigned long flags; \ + int val; \ + \ + raw_local_irq_save(flags); \ + val =3D v->counter; \ + v->counter c_op i; \ + raw_local_irq_restore(flags); \ + \ + return val; \ +} + +static inline int atomic_cmpxchg(atomic_t *v, int old, int new) +{ + int ret; + unsigned long flags; + + raw_local_irq_save(flags); + ret =3D v->counter; + if (likely(ret =3D=3D old)) + v->counter =3D new; + raw_local_irq_restore(flags); + + return ret; +} + +#define atomic_fetch_andnot atomic_fetch_andnot + +#endif /* __LINUX_ARM_ARCH__ */ + +#define ATOMIC_OPS(op, c_op, asm_op) \ + ATOMIC_OP(op, c_op, asm_op) \ + ATOMIC_OP_RETURN(op, c_op, asm_op) \ + ATOMIC_FETCH_OP(op, c_op, asm_op) + +ATOMIC_OPS(add, +=3D, add) +ATOMIC_OPS(sub, -=3D, sub) + +#define atomic_andnot atomic_andnot + +#undef ATOMIC_OPS +#define ATOMIC_OPS(op, c_op, asm_op) \ + ATOMIC_OP(op, c_op, asm_op) \ + ATOMIC_FETCH_OP(op, c_op, asm_op) + +ATOMIC_OPS(and, &=3D, and) +ATOMIC_OPS(andnot, &=3D ~, bic) +ATOMIC_OPS(or, |=3D, orr) +ATOMIC_OPS(xor, ^=3D, eor) + +#undef ATOMIC_OPS +#undef ATOMIC_FETCH_OP +#undef ATOMIC_OP_RETURN +#undef ATOMIC_OP + +#define atomic_xchg(v, new) (xchg(&((v)->counter), new)) + +#ifndef CONFIG_GENERIC_ATOMIC64 +typedef struct { + s64 counter; +} atomic64_t; + +#define ATOMIC64_INIT(i) { (i) } + +#ifdef CONFIG_ARM_LPAE +static inline s64 atomic64_read(const atomic64_t *v) +{ + s64 result; + + __asm__ __volatile__("@ atomic64_read\n" +" ldrd %0, %H0, [%1]" + : "=3D&r" (result) + : "r" (&v->counter), "Qo" (v->counter) + ); + + return result; +} + +static inline void atomic64_set(atomic64_t *v, s64 i) +{ + __asm__ __volatile__("@ atomic64_set\n" +" strd %2, %H2, [%1]" + : "=3DQo" (v->counter) + : "r" (&v->counter), "r" (i) + ); +} +#else +static inline s64 atomic64_read(const atomic64_t *v) +{ + s64 result; + + __asm__ __volatile__("@ atomic64_read\n" +" ldrexd %0, %H0, [%1]" + : "=3D&r" (result) + : "r" (&v->counter), "Qo" (v->counter) + ); + + return result; +} + +static inline void atomic64_set(atomic64_t *v, s64 i) +{ + s64 tmp; + + prefetchw(&v->counter); + __asm__ __volatile__("@ atomic64_set\n" +"1: ldrexd %0, %H0, [%2]\n" +" strexd %0, %3, %H3, [%2]\n" +" teq %0, #0\n" +" bne 1b" + : "=3D&r" (tmp), "=3DQo" (v->counter) + : "r" (&v->counter), "r" (i) + : "cc"); +} +#endif + +#define ATOMIC64_OP(op, op1, op2) \ +static inline void atomic64_##op(s64 i, atomic64_t *v) \ +{ \ + s64 result; \ + unsigned long tmp; \ + \ + prefetchw(&v->counter); \ + __asm__ __volatile__("@ atomic64_" #op "\n" \ +"1: ldrexd %0, %H0, [%3]\n" \ +" " #op1 " %Q0, %Q0, %Q4\n" \ +" " #op2 " %R0, %R0, %R4\n" \ +" strexd %1, %0, %H0, [%3]\n" \ +" teq %1, #0\n" \ +" bne 1b" \ + : "=3D&r" (result), "=3D&r" (tmp), "+Qo" (v->counter) \ + : "r" (&v->counter), "r" (i) \ + : "cc"); \ +} \ + +#define ATOMIC64_OP_RETURN(op, op1, op2) \ +static inline s64 \ +atomic64_##op##_return_relaxed(s64 i, atomic64_t *v) \ +{ \ + s64 result; \ + unsigned long tmp; \ 
+ \ + prefetchw(&v->counter); \ + \ + __asm__ __volatile__("@ atomic64_" #op "_return\n" \ +"1: ldrexd %0, %H0, [%3]\n" \ +" " #op1 " %Q0, %Q0, %Q4\n" \ +" " #op2 " %R0, %R0, %R4\n" \ +" strexd %1, %0, %H0, [%3]\n" \ +" teq %1, #0\n" \ +" bne 1b" \ + : "=3D&r" (result), "=3D&r" (tmp), "+Qo" (v->counter) \ + : "r" (&v->counter), "r" (i) \ + : "cc"); \ + \ + return result; \ +} + +#define ATOMIC64_FETCH_OP(op, op1, op2) \ +static inline s64 \ +atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v) \ +{ \ + s64 result, val; \ + unsigned long tmp; \ + \ + prefetchw(&v->counter); \ + \ + __asm__ __volatile__("@ atomic64_fetch_" #op "\n" \ +"1: ldrexd %0, %H0, [%4]\n" \ +" " #op1 " %Q1, %Q0, %Q5\n" \ +" " #op2 " %R1, %R0, %R5\n" \ +" strexd %2, %1, %H1, [%4]\n" \ +" teq %2, #0\n" \ +" bne 1b" \ + : "=3D&r" (result), "=3D&r" (val), "=3D&r" (tmp), "+Qo" (v->counter) \ + : "r" (&v->counter), "r" (i) \ + : "cc"); \ + \ + return result; \ +} + +#define ATOMIC64_OPS(op, op1, op2) \ + ATOMIC64_OP(op, op1, op2) \ + ATOMIC64_OP_RETURN(op, op1, op2) \ + ATOMIC64_FETCH_OP(op, op1, op2) + +ATOMIC64_OPS(add, adds, adc) +ATOMIC64_OPS(sub, subs, sbc) + +#define atomic64_add_return_relaxed atomic64_add_return_relaxed +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed + +#undef ATOMIC64_OPS +#define ATOMIC64_OPS(op, op1, op2) \ + ATOMIC64_OP(op, op1, op2) \ + ATOMIC64_FETCH_OP(op, op1, op2) + +#define atomic64_andnot atomic64_andnot + +ATOMIC64_OPS(and, and, and) +ATOMIC64_OPS(andnot, bic, bic) +ATOMIC64_OPS(or, orr, orr) +ATOMIC64_OPS(xor, eor, eor) + +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed + +#undef ATOMIC64_OPS +#undef ATOMIC64_FETCH_OP +#undef ATOMIC64_OP_RETURN +#undef ATOMIC64_OP + +static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 n= ew) +{ + s64 oldval; + unsigned long res; + + prefetchw(&ptr->counter); + + do { + __asm__ __volatile__("@ atomic64_cmpxchg\n" + "ldrexd %1, %H1, [%3]\n" + "mov %0, #0\n" + "teq %1, %4\n" + "teqeq %H1, %H4\n" + "strexdeq %0, %5, %H5, [%3]" + : "=3D&r" (res), "=3D&r" (oldval), "+Qo" (ptr->counter) + : "r" (&ptr->counter), "r" (old), "r" (new) + : "cc"); + } while (res); + + return oldval; +} +#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed + +static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new) +{ + s64 result; + unsigned long tmp; + + prefetchw(&ptr->counter); + + __asm__ __volatile__("@ atomic64_xchg\n" +"1: ldrexd %0, %H0, [%3]\n" +" strexd %1, %4, %H4, [%3]\n" +" teq %1, #0\n" +" bne 1b" + : "=3D&r" (result), "=3D&r" (tmp), "+Qo" (ptr->counter) + : "r" (&ptr->counter), "r" (new) + : "cc"); + + return result; +} +#define atomic64_xchg_relaxed atomic64_xchg_relaxed + +static inline s64 atomic64_dec_if_positive(atomic64_t *v) +{ + s64 result; + unsigned long tmp; + + smp_mb(); + prefetchw(&v->counter); + + __asm__ __volatile__("@ atomic64_dec_if_positive\n" +"1: ldrexd %0, %H0, [%3]\n" +" subs %Q0, %Q0, #1\n" +" sbc %R0, %R0, #0\n" +" teq %R0, #0\n" +" bmi 2f\n" +" strexd %1, %0, %H0, [%3]\n" +" teq %1, #0\n" +" bne 1b\n" +"2:" + : "=3D&r" (result), "=3D&r" (tmp), "+Qo" (v->counter) + : "r" (&v->counter) + : "cc"); + + smp_mb(); + + return result; +} +#define atomic64_dec_if_positive 
atomic64_dec_if_positive + +static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + s64 oldval, newval; + unsigned long tmp; + + smp_mb(); + prefetchw(&v->counter); + + __asm__ __volatile__("@ atomic64_add_unless\n" +"1: ldrexd %0, %H0, [%4]\n" +" teq %0, %5\n" +" teqeq %H0, %H5\n" +" beq 2f\n" +" adds %Q1, %Q0, %Q6\n" +" adc %R1, %R0, %R6\n" +" strexd %2, %1, %H1, [%4]\n" +" teq %2, #0\n" +" bne 1b\n" +"2:" + : "=3D&r" (oldval), "=3D&r" (newval), "=3D&r" (tmp), "+Qo" (v->counter) + : "r" (&v->counter), "r" (u), "r" (a) + : "cc"); + + if (oldval !=3D u) + smp_mb(); + + return oldval; +} +#define atomic64_fetch_add_unless atomic64_fetch_add_unless + +#endif /* !CONFIG_GENERIC_ATOMIC64 */ +#endif +#endif \ No newline at end of file diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm3= 2/cmpxchg.h new file mode 100644 index 0000000000..638ae84afb --- /dev/null +++ b/xen/include/asm-arm/arm32/cmpxchg.h @@ -0,0 +1,279 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_ARM_CMPXCHG_H +#define __ASM_ARM_CMPXCHG_H + +#include +#include +#include + +#if defined(CONFIG_CPU_SA1100) || defined(CONFIG_CPU_SA110) +/* + * On the StrongARM, "swp" is terminally broken since it bypasses the + * cache totally. This means that the cache becomes inconsistent, and, + * since we use normal loads/stores as well, this is really bad. + * Typically, this causes oopsen in filp_close, but could have other, + * more disastrous effects. There are two work-arounds: + * 1. Disable interrupts and emulate the atomic swap + * 2. Clean the cache, perform atomic swap, flush the cache + * + * We choose (1) since its the "easiest" to achieve here and is not + * dependent on the processor type. + * + * NOTE that this solution won't work on an SMP system, so explcitly + * forbid it here. 
+ */ +#define swp_is_buggy +#endif + +static inline unsigned long __xchg(unsigned long x, volatile void *ptr, in= t size) +{ + extern void __bad_xchg(volatile void *, int); + unsigned long ret; +#ifdef swp_is_buggy + unsigned long flags; +#endif +#if __LINUX_ARM_ARCH__ >=3D 6 + unsigned int tmp; +#endif + + prefetchw((const void *)ptr); + + switch (size) { +#if __LINUX_ARM_ARCH__ >=3D 6 +#ifndef CONFIG_CPU_V6 /* MIN ARCH >=3D V6K */ + case 1: + asm volatile("@ __xchg1\n" + "1: ldrexb %0, [%3]\n" + " strexb %1, %2, [%3]\n" + " teq %1, #0\n" + " bne 1b" + : "=3D&r" (ret), "=3D&r" (tmp) + : "r" (x), "r" (ptr) + : "memory", "cc"); + break; + case 2: + asm volatile("@ __xchg2\n" + "1: ldrexh %0, [%3]\n" + " strexh %1, %2, [%3]\n" + " teq %1, #0\n" + " bne 1b" + : "=3D&r" (ret), "=3D&r" (tmp) + : "r" (x), "r" (ptr) + : "memory", "cc"); + break; +#endif + case 4: + asm volatile("@ __xchg4\n" + "1: ldrex %0, [%3]\n" + " strex %1, %2, [%3]\n" + " teq %1, #0\n" + " bne 1b" + : "=3D&r" (ret), "=3D&r" (tmp) + : "r" (x), "r" (ptr) + : "memory", "cc"); + break; +#elif defined(swp_is_buggy) +#ifdef CONFIG_SMP +#error SMP is not supported on this platform +#endif + case 1: + raw_local_irq_save(flags); + ret =3D *(volatile unsigned char *)ptr; + *(volatile unsigned char *)ptr =3D x; + raw_local_irq_restore(flags); + break; + + case 4: + raw_local_irq_save(flags); + ret =3D *(volatile unsigned long *)ptr; + *(volatile unsigned long *)ptr =3D x; + raw_local_irq_restore(flags); + break; +#else + case 1: + asm volatile("@ __xchg1\n" + " swpb %0, %1, [%2]" + : "=3D&r" (ret) + : "r" (x), "r" (ptr) + : "memory", "cc"); + break; + case 4: + asm volatile("@ __xchg4\n" + " swp %0, %1, [%2]" + : "=3D&r" (ret) + : "r" (x), "r" (ptr) + : "memory", "cc"); + break; +#endif + default: + /* Cause a link-time error, the xchg() size is not supported */ + __bad_xchg(ptr, size), ret =3D 0; + break; + } + + return ret; +} + +#define xchg_relaxed(ptr, x) ({ \ + (__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr), \ + sizeof(*(ptr))); \ +}) + +#include + +#if __LINUX_ARM_ARCH__ < 6 +/* min ARCH < ARMv6 */ + +#ifdef CONFIG_SMP +#error "SMP is not supported on this platform" +#endif + +#define xchg xchg_relaxed + +/* + * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always ma= ke + * them available. + */ +#define cmpxchg_local(ptr, o, n) ({ \ + (__typeof(*ptr))__cmpxchg_local_generic((ptr), \ + (unsigned long)(o), \ + (unsigned long)(n), \ + sizeof(*(ptr))); \ +}) + +#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (= n)) + +#include + +#else /* min ARCH >=3D ARMv6 */ + +extern void __bad_cmpxchg(volatile void *ptr, int size); + +/* + * cmpxchg only support 32-bits operands on ARMv6. 
+ */ + +static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long ol= d, + unsigned long new, int size) +{ + unsigned long oldval, res; + + prefetchw((const void *)ptr); + + switch (size) { +#ifndef CONFIG_CPU_V6 /* min ARCH >=3D ARMv6K */ + case 1: + do { + asm volatile("@ __cmpxchg1\n" + " ldrexb %1, [%2]\n" + " mov %0, #0\n" + " teq %1, %3\n" + " strexbeq %0, %4, [%2]\n" + : "=3D&r" (res), "=3D&r" (oldval) + : "r" (ptr), "Ir" (old), "r" (new) + : "memory", "cc"); + } while (res); + break; + case 2: + do { + asm volatile("@ __cmpxchg1\n" + " ldrexh %1, [%2]\n" + " mov %0, #0\n" + " teq %1, %3\n" + " strexheq %0, %4, [%2]\n" + : "=3D&r" (res), "=3D&r" (oldval) + : "r" (ptr), "Ir" (old), "r" (new) + : "memory", "cc"); + } while (res); + break; +#endif + case 4: + do { + asm volatile("@ __cmpxchg4\n" + " ldrex %1, [%2]\n" + " mov %0, #0\n" + " teq %1, %3\n" + " strexeq %0, %4, [%2]\n" + : "=3D&r" (res), "=3D&r" (oldval) + : "r" (ptr), "Ir" (old), "r" (new) + : "memory", "cc"); + } while (res); + break; + default: + __bad_cmpxchg(ptr, size); + oldval =3D 0; + } + + return oldval; +} + +#define cmpxchg_relaxed(ptr,o,n) ({ \ + (__typeof__(*(ptr)))__cmpxchg((ptr), \ + (unsigned long)(o), \ + (unsigned long)(n), \ + sizeof(*(ptr))); \ +}) + +static inline unsigned long __cmpxchg_local(volatile void *ptr, + unsigned long old, + unsigned long new, int size) +{ + unsigned long ret; + + switch (size) { +#ifdef CONFIG_CPU_V6 /* min ARCH =3D=3D ARMv6 */ + case 1: + case 2: + ret =3D __cmpxchg_local_generic(ptr, old, new, size); + break; +#endif + default: + ret =3D __cmpxchg(ptr, old, new, size); + } + + return ret; +} + +#define cmpxchg_local(ptr, o, n) ({ \ + (__typeof(*ptr))__cmpxchg_local((ptr), \ + (unsigned long)(o), \ + (unsigned long)(n), \ + sizeof(*(ptr))); \ +}) + +static inline unsigned long long __cmpxchg64(unsigned long long *ptr, + unsigned long long old, + unsigned long long new) +{ + unsigned long long oldval; + unsigned long res; + + prefetchw(ptr); + + __asm__ __volatile__( +"1: ldrexd %1, %H1, [%3]\n" +" teq %1, %4\n" +" teqeq %H1, %H4\n" +" bne 2f\n" +" strexd %0, %5, %H5, [%3]\n" +" teq %0, #0\n" +" bne 1b\n" +"2:" + : "=3D&r" (res), "=3D&r" (oldval), "+Qo" (*ptr) + : "r" (ptr), "r" (old), "r" (new) + : "cc"); + + return oldval; +} + +#define cmpxchg64_relaxed(ptr, o, n) ({ \ + (__typeof__(*(ptr)))__cmpxchg64((ptr), \ + (unsigned long long)(o), \ + (unsigned long long)(n)); \ +}) + +#define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n)) + +#endif /* __LINUX_ARM_ARCH__ >=3D 6 */ + +#endif /* __ASM_ARM_CMPXCHG_H */ \ No newline at end of file diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64= /atomic.h new file mode 100644 index 0000000000..a2eab9f091 --- /dev/null +++ b/xen/include/asm-arm/arm64/atomic.h @@ -0,0 +1,228 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Based on arch/arm/include/asm/atomic.h + * + * Copyright (C) 1996 Russell King. + * Copyright (C) 2002 Deep Blue Solutions Ltd. + * Copyright (C) 2012 ARM Ltd. 
+ */ +#ifndef __ASM_ATOMIC_H +#define __ASM_ATOMIC_H + +#include +#include + +#include +#include +#include + +#define ATOMIC_OP(op) \ +static inline void arch_##op(int i, atomic_t *v) \ +{ \ + __lse_ll_sc_body(op, i, v); \ +} + +ATOMIC_OP(atomic_andnot) +ATOMIC_OP(atomic_or) +ATOMIC_OP(atomic_xor) +ATOMIC_OP(atomic_add) +ATOMIC_OP(atomic_and) +ATOMIC_OP(atomic_sub) + +#undef ATOMIC_OP + +#define ATOMIC_FETCH_OP(name, op) \ +static inline int arch_##op##name(int i, atomic_t *v) \ +{ \ + return __lse_ll_sc_body(op##name, i, v); \ +} + +#define ATOMIC_FETCH_OPS(op) \ + ATOMIC_FETCH_OP(_relaxed, op) \ + ATOMIC_FETCH_OP(_acquire, op) \ + ATOMIC_FETCH_OP(_release, op) \ + ATOMIC_FETCH_OP( , op) + +ATOMIC_FETCH_OPS(atomic_fetch_andnot) +ATOMIC_FETCH_OPS(atomic_fetch_or) +ATOMIC_FETCH_OPS(atomic_fetch_xor) +ATOMIC_FETCH_OPS(atomic_fetch_add) +ATOMIC_FETCH_OPS(atomic_fetch_and) +ATOMIC_FETCH_OPS(atomic_fetch_sub) +ATOMIC_FETCH_OPS(atomic_add_return) +ATOMIC_FETCH_OPS(atomic_sub_return) + +#undef ATOMIC_FETCH_OP +#undef ATOMIC_FETCH_OPS + +#define ATOMIC64_OP(op) \ +static inline void arch_##op(long i, atomic64_t *v) \ +{ \ + __lse_ll_sc_body(op, i, v); \ +} + +ATOMIC64_OP(atomic64_andnot) +ATOMIC64_OP(atomic64_or) +ATOMIC64_OP(atomic64_xor) +ATOMIC64_OP(atomic64_add) +ATOMIC64_OP(atomic64_and) +ATOMIC64_OP(atomic64_sub) + +#undef ATOMIC64_OP + +#define ATOMIC64_FETCH_OP(name, op) \ +static inline long arch_##op##name(long i, atomic64_t *v) \ +{ \ + return __lse_ll_sc_body(op##name, i, v); \ +} + +#define ATOMIC64_FETCH_OPS(op) \ + ATOMIC64_FETCH_OP(_relaxed, op) \ + ATOMIC64_FETCH_OP(_acquire, op) \ + ATOMIC64_FETCH_OP(_release, op) \ + ATOMIC64_FETCH_OP( , op) + +ATOMIC64_FETCH_OPS(atomic64_fetch_andnot) +ATOMIC64_FETCH_OPS(atomic64_fetch_or) +ATOMIC64_FETCH_OPS(atomic64_fetch_xor) +ATOMIC64_FETCH_OPS(atomic64_fetch_add) +ATOMIC64_FETCH_OPS(atomic64_fetch_and) +ATOMIC64_FETCH_OPS(atomic64_fetch_sub) +ATOMIC64_FETCH_OPS(atomic64_add_return) +ATOMIC64_FETCH_OPS(atomic64_sub_return) + +#undef ATOMIC64_FETCH_OP +#undef ATOMIC64_FETCH_OPS + +static inline long arch_atomic64_dec_if_positive(atomic64_t *v) +{ + return __lse_ll_sc_body(atomic64_dec_if_positive, v); +} + +#define arch_atomic_read(v) __READ_ONCE((v)->counter) +#define arch_atomic_set(v, i) __WRITE_ONCE(((v)->counter), (i)) + +#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed +#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire +#define arch_atomic_add_return_release arch_atomic_add_return_release +#define arch_atomic_add_return arch_atomic_add_return + +#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed +#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire +#define arch_atomic_sub_return_release arch_atomic_sub_return_release +#define arch_atomic_sub_return arch_atomic_sub_return + +#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed +#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire +#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release +#define arch_atomic_fetch_add arch_atomic_fetch_add + +#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed +#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire +#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release +#define arch_atomic_fetch_sub arch_atomic_fetch_sub + +#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed +#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire +#define 
arch_atomic_fetch_and_release arch_atomic_fetch_and_release +#define arch_atomic_fetch_and arch_atomic_fetch_and + +#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed +#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire +#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release +#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot + +#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed +#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire +#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release +#define arch_atomic_fetch_or arch_atomic_fetch_or + +#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed +#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire +#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release +#define arch_atomic_fetch_xor arch_atomic_fetch_xor + +#define arch_atomic_xchg_relaxed(v, new) \ + arch_xchg_relaxed(&((v)->counter), (new)) +#define arch_atomic_xchg_acquire(v, new) \ + arch_xchg_acquire(&((v)->counter), (new)) +#define arch_atomic_xchg_release(v, new) \ + arch_xchg_release(&((v)->counter), (new)) +#define arch_atomic_xchg(v, new) \ + arch_xchg(&((v)->counter), (new)) + +#define arch_atomic_cmpxchg_relaxed(v, old, new) \ + arch_cmpxchg_relaxed(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg_acquire(v, old, new) \ + arch_cmpxchg_acquire(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg_release(v, old, new) \ + arch_cmpxchg_release(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg(v, old, new) \ + arch_cmpxchg(&((v)->counter), (old), (new)) + +#define arch_atomic_andnot arch_atomic_andnot + +/* + * 64-bit arch_atomic operations. + */ +#define ATOMIC64_INIT ATOMIC_INIT +#define arch_atomic64_read arch_atomic_read +#define arch_atomic64_set arch_atomic_set + +#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed +#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire +#define arch_atomic64_add_return_release arch_atomic64_add_return_release +#define arch_atomic64_add_return arch_atomic64_add_return + +#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed +#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire +#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release +#define arch_atomic64_sub_return arch_atomic64_sub_return + +#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed +#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire +#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release +#define arch_atomic64_fetch_add arch_atomic64_fetch_add + +#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed +#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire +#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release +#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub + +#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed +#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire +#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release +#define arch_atomic64_fetch_and arch_atomic64_fetch_and + +#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_rela= xed +#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acqu= ire +#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_rele= ase +#define 
arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot + +#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed +#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire +#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release +#define arch_atomic64_fetch_or arch_atomic64_fetch_or + +#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed +#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire +#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release +#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor + +#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed +#define arch_atomic64_xchg_acquire arch_atomic_xchg_acquire +#define arch_atomic64_xchg_release arch_atomic_xchg_release +#define arch_atomic64_xchg arch_atomic_xchg + +#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed +#define arch_atomic64_cmpxchg_acquire arch_atomic_cmpxchg_acquire +#define arch_atomic64_cmpxchg_release arch_atomic_cmpxchg_release +#define arch_atomic64_cmpxchg arch_atomic_cmpxchg + +#define arch_atomic64_andnot arch_atomic64_andnot + +#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive + +#define ARCH_ATOMIC + +#endif /* __ASM_ATOMIC_H */ \ No newline at end of file diff --git a/xen/include/asm-arm/arm64/atomic_ll_sc.h b/xen/include/asm-arm= /arm64/atomic_ll_sc.h new file mode 100644 index 0000000000..e1009c0f94 --- /dev/null +++ b/xen/include/asm-arm/arm64/atomic_ll_sc.h @@ -0,0 +1,353 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Based on arch/arm/include/asm/atomic.h + * + * Copyright (C) 1996 Russell King. + * Copyright (C) 2002 Deep Blue Solutions Ltd. + * Copyright (C) 2012 ARM Ltd. + */ + +#ifndef __ASM_ATOMIC_LL_SC_H +#define __ASM_ATOMIC_LL_SC_H + +#include + +#ifdef CONFIG_ARM64_LSE_ATOMICS +#define __LL_SC_FALLBACK(asm_ops) \ +" b 3f\n" \ +" .subsection 1\n" \ +"3:\n" \ +asm_ops "\n" \ +" b 4f\n" \ +" .previous\n" \ +"4:\n" +#else +#define __LL_SC_FALLBACK(asm_ops) asm_ops +#endif + +#ifndef CONFIG_CC_HAS_K_CONSTRAINT +#define K +#endif + +/* + * AArch64 UP and SMP safe atomic ops. We use load exclusive and + * store exclusive to ensure that these are atomic. We may loop + * to ensure that the update happens. 
+ */ + +#define ATOMIC_OP(op, asm_op, constraint) \ +static inline void \ +__ll_sc_atomic_##op(int i, atomic_t *v) \ +{ \ + unsigned long tmp; \ + int result; \ + \ + asm volatile("// atomic_" #op "\n" \ + __LL_SC_FALLBACK( \ +" prfm pstl1strm, %2\n" \ +"1: ldxr %w0, %2\n" \ +" " #asm_op " %w0, %w0, %w3\n" \ +" stxr %w1, %w0, %2\n" \ +" cbnz %w1, 1b\n") \ + : "=3D&r" (result), "=3D&r" (tmp), "+Q" (v->counter) \ + : __stringify(constraint) "r" (i)); \ +} + +#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\ +static inline int \ +__ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \ +{ \ + unsigned long tmp; \ + int result; \ + \ + asm volatile("// atomic_" #op "_return" #name "\n" \ + __LL_SC_FALLBACK( \ +" prfm pstl1strm, %2\n" \ +"1: ld" #acq "xr %w0, %2\n" \ +" " #asm_op " %w0, %w0, %w3\n" \ +" st" #rel "xr %w1, %w0, %2\n" \ +" cbnz %w1, 1b\n" \ +" " #mb ) \ + : "=3D&r" (result), "=3D&r" (tmp), "+Q" (v->counter) \ + : __stringify(constraint) "r" (i) \ + : cl); \ + \ + return result; \ +} + +#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint) \ +static inline int \ +__ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \ +{ \ + unsigned long tmp; \ + int val, result; \ + \ + asm volatile("// atomic_fetch_" #op #name "\n" \ + __LL_SC_FALLBACK( \ +" prfm pstl1strm, %3\n" \ +"1: ld" #acq "xr %w0, %3\n" \ +" " #asm_op " %w1, %w0, %w4\n" \ +" st" #rel "xr %w2, %w1, %3\n" \ +" cbnz %w2, 1b\n" \ +" " #mb ) \ + : "=3D&r" (result), "=3D&r" (val), "=3D&r" (tmp), "+Q" (v->counter) \ + : __stringify(constraint) "r" (i) \ + : cl); \ + \ + return result; \ +} + +#define ATOMIC_OPS(...) \ + ATOMIC_OP(__VA_ARGS__) \ + ATOMIC_OP_RETURN( , dmb ish, , l, "memory", __VA_ARGS__)\ + ATOMIC_OP_RETURN(_relaxed, , , , , __VA_ARGS__)\ + ATOMIC_OP_RETURN(_acquire, , a, , "memory", __VA_ARGS__)\ + ATOMIC_OP_RETURN(_release, , , l, "memory", __VA_ARGS__)\ + ATOMIC_FETCH_OP ( , dmb ish, , l, "memory", __VA_ARGS__)\ + ATOMIC_FETCH_OP (_relaxed, , , , , __VA_ARGS__)\ + ATOMIC_FETCH_OP (_acquire, , a, , "memory", __VA_ARGS__)\ + ATOMIC_FETCH_OP (_release, , , l, "memory", __VA_ARGS__) + +ATOMIC_OPS(add, add, I) +ATOMIC_OPS(sub, sub, J) + +#undef ATOMIC_OPS +#define ATOMIC_OPS(...) \ + ATOMIC_OP(__VA_ARGS__) \ + ATOMIC_FETCH_OP ( , dmb ish, , l, "memory", __VA_ARGS__)\ + ATOMIC_FETCH_OP (_relaxed, , , , , __VA_ARGS__)\ + ATOMIC_FETCH_OP (_acquire, , a, , "memory", __VA_ARGS__)\ + ATOMIC_FETCH_OP (_release, , , l, "memory", __VA_ARGS__) + +ATOMIC_OPS(and, and, K) +ATOMIC_OPS(or, orr, K) +ATOMIC_OPS(xor, eor, K) +/* + * GAS converts the mysterious and undocumented BIC (immediate) alias to + * an AND (immediate) instruction with the immediate inverted. We don't + * have a constraint for this, so fall back to register. 
+ */ +ATOMIC_OPS(andnot, bic, ) + +#undef ATOMIC_OPS +#undef ATOMIC_FETCH_OP +#undef ATOMIC_OP_RETURN +#undef ATOMIC_OP + +#define ATOMIC64_OP(op, asm_op, constraint) \ +static inline void \ +__ll_sc_atomic64_##op(s64 i, atomic64_t *v) \ +{ \ + s64 result; \ + unsigned long tmp; \ + \ + asm volatile("// atomic64_" #op "\n" \ + __LL_SC_FALLBACK( \ +" prfm pstl1strm, %2\n" \ +"1: ldxr %0, %2\n" \ +" " #asm_op " %0, %0, %3\n" \ +" stxr %w1, %0, %2\n" \ +" cbnz %w1, 1b") \ + : "=3D&r" (result), "=3D&r" (tmp), "+Q" (v->counter) \ + : __stringify(constraint) "r" (i)); \ +} + +#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\ +static inline long \ +__ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \ +{ \ + s64 result; \ + unsigned long tmp; \ + \ + asm volatile("// atomic64_" #op "_return" #name "\n" \ + __LL_SC_FALLBACK( \ +" prfm pstl1strm, %2\n" \ +"1: ld" #acq "xr %0, %2\n" \ +" " #asm_op " %0, %0, %3\n" \ +" st" #rel "xr %w1, %0, %2\n" \ +" cbnz %w1, 1b\n" \ +" " #mb ) \ + : "=3D&r" (result), "=3D&r" (tmp), "+Q" (v->counter) \ + : __stringify(constraint) "r" (i) \ + : cl); \ + \ + return result; \ +} + +#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\ +static inline long \ +__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \ +{ \ + s64 result, val; \ + unsigned long tmp; \ + \ + asm volatile("// atomic64_fetch_" #op #name "\n" \ + __LL_SC_FALLBACK( \ +" prfm pstl1strm, %3\n" \ +"1: ld" #acq "xr %0, %3\n" \ +" " #asm_op " %1, %0, %4\n" \ +" st" #rel "xr %w2, %1, %3\n" \ +" cbnz %w2, 1b\n" \ +" " #mb ) \ + : "=3D&r" (result), "=3D&r" (val), "=3D&r" (tmp), "+Q" (v->counter) \ + : __stringify(constraint) "r" (i) \ + : cl); \ + \ + return result; \ +} + +#define ATOMIC64_OPS(...) \ + ATOMIC64_OP(__VA_ARGS__) \ + ATOMIC64_OP_RETURN(, dmb ish, , l, "memory", __VA_ARGS__) \ + ATOMIC64_OP_RETURN(_relaxed,, , , , __VA_ARGS__) \ + ATOMIC64_OP_RETURN(_acquire,, a, , "memory", __VA_ARGS__) \ + ATOMIC64_OP_RETURN(_release,, , l, "memory", __VA_ARGS__) \ + ATOMIC64_FETCH_OP (, dmb ish, , l, "memory", __VA_ARGS__) \ + ATOMIC64_FETCH_OP (_relaxed,, , , , __VA_ARGS__) \ + ATOMIC64_FETCH_OP (_acquire,, a, , "memory", __VA_ARGS__) \ + ATOMIC64_FETCH_OP (_release,, , l, "memory", __VA_ARGS__) + +ATOMIC64_OPS(add, add, I) +ATOMIC64_OPS(sub, sub, J) + +#undef ATOMIC64_OPS +#define ATOMIC64_OPS(...) \ + ATOMIC64_OP(__VA_ARGS__) \ + ATOMIC64_FETCH_OP (, dmb ish, , l, "memory", __VA_ARGS__) \ + ATOMIC64_FETCH_OP (_relaxed,, , , , __VA_ARGS__) \ + ATOMIC64_FETCH_OP (_acquire,, a, , "memory", __VA_ARGS__) \ + ATOMIC64_FETCH_OP (_release,, , l, "memory", __VA_ARGS__) + +ATOMIC64_OPS(and, and, L) +ATOMIC64_OPS(or, orr, L) +ATOMIC64_OPS(xor, eor, L) +/* + * GAS converts the mysterious and undocumented BIC (immediate) alias to + * an AND (immediate) instruction with the immediate inverted. We don't + * have a constraint for this, so fall back to register. 
+ */ +ATOMIC64_OPS(andnot, bic, ) + +#undef ATOMIC64_OPS +#undef ATOMIC64_FETCH_OP +#undef ATOMIC64_OP_RETURN +#undef ATOMIC64_OP + +static inline s64 +__ll_sc_atomic64_dec_if_positive(atomic64_t *v) +{ + s64 result; + unsigned long tmp; + + asm volatile("// atomic64_dec_if_positive\n" + __LL_SC_FALLBACK( +" prfm pstl1strm, %2\n" +"1: ldxr %0, %2\n" +" subs %0, %0, #1\n" +" b.lt 2f\n" +" stlxr %w1, %0, %2\n" +" cbnz %w1, 1b\n" +" dmb ish\n" +"2:") + : "=3D&r" (result), "=3D&r" (tmp), "+Q" (v->counter) + : + : "cc", "memory"); + + return result; +} + +#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint) \ +static inline u##sz \ +__ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \ + unsigned long old, \ + u##sz new) \ +{ \ + unsigned long tmp; \ + u##sz oldval; \ + \ + /* \ + * Sub-word sizes require explicit casting so that the compare \ + * part of the cmpxchg doesn't end up interpreting non-zero \ + * upper bits of the register containing "old". \ + */ \ + if (sz < 32) \ + old =3D (u##sz)old; \ + \ + asm volatile( \ + __LL_SC_FALLBACK( \ + " prfm pstl1strm, %[v]\n" \ + "1: ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n" \ + " eor %" #w "[tmp], %" #w "[oldval], %" #w "[old]\n" \ + " cbnz %" #w "[tmp], 2f\n" \ + " st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n" \ + " cbnz %w[tmp], 1b\n" \ + " " #mb "\n" \ + "2:") \ + : [tmp] "=3D&r" (tmp), [oldval] "=3D&r" (oldval), \ + [v] "+Q" (*(u##sz *)ptr) \ + : [old] __stringify(constraint) "r" (old), [new] "r" (new) \ + : cl); \ + \ + return oldval; \ +} + +/* + * Earlier versions of GCC (no later than 8.1.0) appear to incorrectly + * handle the 'K' constraint for the value 4294967295 - thus we use no + * constraint for 32 bit operations. + */ +__CMPXCHG_CASE(w, b, , 8, , , , , K) +__CMPXCHG_CASE(w, h, , 16, , , , , K) +__CMPXCHG_CASE(w, , , 32, , , , , K) +__CMPXCHG_CASE( , , , 64, , , , , L) +__CMPXCHG_CASE(w, b, acq_, 8, , a, , "memory", K) +__CMPXCHG_CASE(w, h, acq_, 16, , a, , "memory", K) +__CMPXCHG_CASE(w, , acq_, 32, , a, , "memory", K) +__CMPXCHG_CASE( , , acq_, 64, , a, , "memory", L) +__CMPXCHG_CASE(w, b, rel_, 8, , , l, "memory", K) +__CMPXCHG_CASE(w, h, rel_, 16, , , l, "memory", K) +__CMPXCHG_CASE(w, , rel_, 32, , , l, "memory", K) +__CMPXCHG_CASE( , , rel_, 64, , , l, "memory", L) +__CMPXCHG_CASE(w, b, mb_, 8, dmb ish, , l, "memory", K) +__CMPXCHG_CASE(w, h, mb_, 16, dmb ish, , l, "memory", K) +__CMPXCHG_CASE(w, , mb_, 32, dmb ish, , l, "memory", K) +__CMPXCHG_CASE( , , mb_, 64, dmb ish, , l, "memory", L) + +#undef __CMPXCHG_CASE + +#define __CMPXCHG_DBL(name, mb, rel, cl) \ +static inline long \ +__ll_sc__cmpxchg_double##name(unsigned long old1, \ + unsigned long old2, \ + unsigned long new1, \ + unsigned long new2, \ + volatile void *ptr) \ +{ \ + unsigned long tmp, ret; \ + \ + asm volatile("// __cmpxchg_double" #name "\n" \ + __LL_SC_FALLBACK( \ + " prfm pstl1strm, %2\n" \ + "1: ldxp %0, %1, %2\n" \ + " eor %0, %0, %3\n" \ + " eor %1, %1, %4\n" \ + " orr %1, %0, %1\n" \ + " cbnz %1, 2f\n" \ + " st" #rel "xp %w0, %5, %6, %2\n" \ + " cbnz %w0, 1b\n" \ + " " #mb "\n" \ + "2:") \ + : "=3D&r" (tmp), "=3D&r" (ret), "+Q" (*(unsigned long *)ptr) \ + : "r" (old1), "r" (old2), "r" (new1), "r" (new2) \ + : cl); \ + \ + return ret; \ +} + +__CMPXCHG_DBL( , , , ) +__CMPXCHG_DBL(_mb, dmb ish, l, "memory") + +#undef __CMPXCHG_DBL +#undef K + +#endif /* __ASM_ATOMIC_LL_SC_H */ \ No newline at end of file diff --git a/xen/include/asm-arm/arm64/atomic_lse.h b/xen/include/asm-arm/a= rm64/atomic_lse.h new file mode 100644 
index 0000000000..b3b0d43a7d --- /dev/null +++ b/xen/include/asm-arm/arm64/atomic_lse.h @@ -0,0 +1,419 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Based on arch/arm/include/asm/atomic.h + * + * Copyright (C) 1996 Russell King. + * Copyright (C) 2002 Deep Blue Solutions Ltd. + * Copyright (C) 2012 ARM Ltd. + */ + +#ifndef __ASM_ATOMIC_LSE_H +#define __ASM_ATOMIC_LSE_H + +#define ATOMIC_OP(op, asm_op) \ +static inline void __lse_atomic_##op(int i, atomic_t *v) \ +{ \ + asm volatile( \ + __LSE_PREAMBLE \ +" " #asm_op " %w[i], %[v]\n" \ + : [i] "+r" (i), [v] "+Q" (v->counter) \ + : "r" (v)); \ +} + +ATOMIC_OP(andnot, stclr) +ATOMIC_OP(or, stset) +ATOMIC_OP(xor, steor) +ATOMIC_OP(add, stadd) + +#undef ATOMIC_OP + +#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \ +static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \ +{ \ + asm volatile( \ + __LSE_PREAMBLE \ +" " #asm_op #mb " %w[i], %w[i], %[v]" \ + : [i] "+r" (i), [v] "+Q" (v->counter) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +#define ATOMIC_FETCH_OPS(op, asm_op) \ + ATOMIC_FETCH_OP(_relaxed, , op, asm_op) \ + ATOMIC_FETCH_OP(_acquire, a, op, asm_op, "memory") \ + ATOMIC_FETCH_OP(_release, l, op, asm_op, "memory") \ + ATOMIC_FETCH_OP( , al, op, asm_op, "memory") + +ATOMIC_FETCH_OPS(andnot, ldclr) +ATOMIC_FETCH_OPS(or, ldset) +ATOMIC_FETCH_OPS(xor, ldeor) +ATOMIC_FETCH_OPS(add, ldadd) + +#undef ATOMIC_FETCH_OP +#undef ATOMIC_FETCH_OPS + +#define ATOMIC_OP_ADD_RETURN(name, mb, cl...) \ +static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \ +{ \ + u32 tmp; \ + \ + asm volatile( \ + __LSE_PREAMBLE \ + " ldadd" #mb " %w[i], %w[tmp], %[v]\n" \ + " add %w[i], %w[i], %w[tmp]" \ + : [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=3D&r" (tmp) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +ATOMIC_OP_ADD_RETURN(_relaxed, ) +ATOMIC_OP_ADD_RETURN(_acquire, a, "memory") +ATOMIC_OP_ADD_RETURN(_release, l, "memory") +ATOMIC_OP_ADD_RETURN( , al, "memory") + +#undef ATOMIC_OP_ADD_RETURN + +static inline void __lse_atomic_and(int i, atomic_t *v) +{ + asm volatile( + __LSE_PREAMBLE + " mvn %w[i], %w[i]\n" + " stclr %w[i], %[v]" + : [i] "+&r" (i), [v] "+Q" (v->counter) + : "r" (v)); +} + +#define ATOMIC_FETCH_OP_AND(name, mb, cl...) \ +static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v) \ +{ \ + asm volatile( \ + __LSE_PREAMBLE \ + " mvn %w[i], %w[i]\n" \ + " ldclr" #mb " %w[i], %w[i], %[v]" \ + : [i] "+&r" (i), [v] "+Q" (v->counter) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +ATOMIC_FETCH_OP_AND(_relaxed, ) +ATOMIC_FETCH_OP_AND(_acquire, a, "memory") +ATOMIC_FETCH_OP_AND(_release, l, "memory") +ATOMIC_FETCH_OP_AND( , al, "memory") + +#undef ATOMIC_FETCH_OP_AND + +static inline void __lse_atomic_sub(int i, atomic_t *v) +{ + asm volatile( + __LSE_PREAMBLE + " neg %w[i], %w[i]\n" + " stadd %w[i], %[v]" + : [i] "+&r" (i), [v] "+Q" (v->counter) + : "r" (v)); +} + +#define ATOMIC_OP_SUB_RETURN(name, mb, cl...) \ +static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \ +{ \ + u32 tmp; \ + \ + asm volatile( \ + __LSE_PREAMBLE \ + " neg %w[i], %w[i]\n" \ + " ldadd" #mb " %w[i], %w[tmp], %[v]\n" \ + " add %w[i], %w[i], %w[tmp]" \ + : [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=3D&r" (tmp) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +ATOMIC_OP_SUB_RETURN(_relaxed, ) +ATOMIC_OP_SUB_RETURN(_acquire, a, "memory") +ATOMIC_OP_SUB_RETURN(_release, l, "memory") +ATOMIC_OP_SUB_RETURN( , al, "memory") + +#undef ATOMIC_OP_SUB_RETURN + +#define ATOMIC_FETCH_OP_SUB(name, mb, cl...) 
\ +static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \ +{ \ + asm volatile( \ + __LSE_PREAMBLE \ + " neg %w[i], %w[i]\n" \ + " ldadd" #mb " %w[i], %w[i], %[v]" \ + : [i] "+&r" (i), [v] "+Q" (v->counter) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +ATOMIC_FETCH_OP_SUB(_relaxed, ) +ATOMIC_FETCH_OP_SUB(_acquire, a, "memory") +ATOMIC_FETCH_OP_SUB(_release, l, "memory") +ATOMIC_FETCH_OP_SUB( , al, "memory") + +#undef ATOMIC_FETCH_OP_SUB + +#define ATOMIC64_OP(op, asm_op) \ +static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \ +{ \ + asm volatile( \ + __LSE_PREAMBLE \ +" " #asm_op " %[i], %[v]\n" \ + : [i] "+r" (i), [v] "+Q" (v->counter) \ + : "r" (v)); \ +} + +ATOMIC64_OP(andnot, stclr) +ATOMIC64_OP(or, stset) +ATOMIC64_OP(xor, steor) +ATOMIC64_OP(add, stadd) + +#undef ATOMIC64_OP + +#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \ +static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\ +{ \ + asm volatile( \ + __LSE_PREAMBLE \ +" " #asm_op #mb " %[i], %[i], %[v]" \ + : [i] "+r" (i), [v] "+Q" (v->counter) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +#define ATOMIC64_FETCH_OPS(op, asm_op) \ + ATOMIC64_FETCH_OP(_relaxed, , op, asm_op) \ + ATOMIC64_FETCH_OP(_acquire, a, op, asm_op, "memory") \ + ATOMIC64_FETCH_OP(_release, l, op, asm_op, "memory") \ + ATOMIC64_FETCH_OP( , al, op, asm_op, "memory") + +ATOMIC64_FETCH_OPS(andnot, ldclr) +ATOMIC64_FETCH_OPS(or, ldset) +ATOMIC64_FETCH_OPS(xor, ldeor) +ATOMIC64_FETCH_OPS(add, ldadd) + +#undef ATOMIC64_FETCH_OP +#undef ATOMIC64_FETCH_OPS + +#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \ +static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\ +{ \ + unsigned long tmp; \ + \ + asm volatile( \ + __LSE_PREAMBLE \ + " ldadd" #mb " %[i], %x[tmp], %[v]\n" \ + " add %[i], %[i], %x[tmp]" \ + : [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=3D&r" (tmp) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +ATOMIC64_OP_ADD_RETURN(_relaxed, ) +ATOMIC64_OP_ADD_RETURN(_acquire, a, "memory") +ATOMIC64_OP_ADD_RETURN(_release, l, "memory") +ATOMIC64_OP_ADD_RETURN( , al, "memory") + +#undef ATOMIC64_OP_ADD_RETURN + +static inline void __lse_atomic64_and(s64 i, atomic64_t *v) +{ + asm volatile( + __LSE_PREAMBLE + " mvn %[i], %[i]\n" + " stclr %[i], %[v]" + : [i] "+&r" (i), [v] "+Q" (v->counter) + : "r" (v)); +} + +#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \ +static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \ +{ \ + asm volatile( \ + __LSE_PREAMBLE \ + " mvn %[i], %[i]\n" \ + " ldclr" #mb " %[i], %[i], %[v]" \ + : [i] "+&r" (i), [v] "+Q" (v->counter) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +ATOMIC64_FETCH_OP_AND(_relaxed, ) +ATOMIC64_FETCH_OP_AND(_acquire, a, "memory") +ATOMIC64_FETCH_OP_AND(_release, l, "memory") +ATOMIC64_FETCH_OP_AND( , al, "memory") + +#undef ATOMIC64_FETCH_OP_AND + +static inline void __lse_atomic64_sub(s64 i, atomic64_t *v) +{ + asm volatile( + __LSE_PREAMBLE + " neg %[i], %[i]\n" + " stadd %[i], %[v]" + : [i] "+&r" (i), [v] "+Q" (v->counter) + : "r" (v)); +} + +#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) 
\ +static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \ +{ \ + unsigned long tmp; \ + \ + asm volatile( \ + __LSE_PREAMBLE \ + " neg %[i], %[i]\n" \ + " ldadd" #mb " %[i], %x[tmp], %[v]\n" \ + " add %[i], %[i], %x[tmp]" \ + : [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=3D&r" (tmp) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +ATOMIC64_OP_SUB_RETURN(_relaxed, ) +ATOMIC64_OP_SUB_RETURN(_acquire, a, "memory") +ATOMIC64_OP_SUB_RETURN(_release, l, "memory") +ATOMIC64_OP_SUB_RETURN( , al, "memory") + +#undef ATOMIC64_OP_SUB_RETURN + +#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \ +static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \ +{ \ + asm volatile( \ + __LSE_PREAMBLE \ + " neg %[i], %[i]\n" \ + " ldadd" #mb " %[i], %[i], %[v]" \ + : [i] "+&r" (i), [v] "+Q" (v->counter) \ + : "r" (v) \ + : cl); \ + \ + return i; \ +} + +ATOMIC64_FETCH_OP_SUB(_relaxed, ) +ATOMIC64_FETCH_OP_SUB(_acquire, a, "memory") +ATOMIC64_FETCH_OP_SUB(_release, l, "memory") +ATOMIC64_FETCH_OP_SUB( , al, "memory") + +#undef ATOMIC64_FETCH_OP_SUB + +static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v) +{ + unsigned long tmp; + + asm volatile( + __LSE_PREAMBLE + "1: ldr %x[tmp], %[v]\n" + " subs %[ret], %x[tmp], #1\n" + " b.lt 2f\n" + " casal %x[tmp], %[ret], %[v]\n" + " sub %x[tmp], %x[tmp], #1\n" + " sub %x[tmp], %x[tmp], %[ret]\n" + " cbnz %x[tmp], 1b\n" + "2:" + : [ret] "+&r" (v), [v] "+Q" (v->counter), [tmp] "=3D&r" (tmp) + : + : "cc", "memory"); + + return (long)v; +} + +#define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...) \ +static __always_inline u##sz \ +__lse__cmpxchg_case_##name##sz(volatile void *ptr, \ + u##sz old, \ + u##sz new) \ +{ \ + register unsigned long x0 asm ("x0") =3D (unsigned long)ptr; \ + register u##sz x1 asm ("x1") =3D old; \ + register u##sz x2 asm ("x2") =3D new; \ + unsigned long tmp; \ + \ + asm volatile( \ + __LSE_PREAMBLE \ + " mov %" #w "[tmp], %" #w "[old]\n" \ + " cas" #mb #sfx "\t%" #w "[tmp], %" #w "[new], %[v]\n" \ + " mov %" #w "[ret], %" #w "[tmp]" \ + : [ret] "+r" (x0), [v] "+Q" (*(unsigned long *)ptr), \ + [tmp] "=3D&r" (tmp) \ + : [old] "r" (x1), [new] "r" (x2) \ + : cl); \ + \ + return x0; \ +} + +__CMPXCHG_CASE(w, b, , 8, ) +__CMPXCHG_CASE(w, h, , 16, ) +__CMPXCHG_CASE(w, , , 32, ) +__CMPXCHG_CASE(x, , , 64, ) +__CMPXCHG_CASE(w, b, acq_, 8, a, "memory") +__CMPXCHG_CASE(w, h, acq_, 16, a, "memory") +__CMPXCHG_CASE(w, , acq_, 32, a, "memory") +__CMPXCHG_CASE(x, , acq_, 64, a, "memory") +__CMPXCHG_CASE(w, b, rel_, 8, l, "memory") +__CMPXCHG_CASE(w, h, rel_, 16, l, "memory") +__CMPXCHG_CASE(w, , rel_, 32, l, "memory") +__CMPXCHG_CASE(x, , rel_, 64, l, "memory") +__CMPXCHG_CASE(w, b, mb_, 8, al, "memory") +__CMPXCHG_CASE(w, h, mb_, 16, al, "memory") +__CMPXCHG_CASE(w, , mb_, 32, al, "memory") +__CMPXCHG_CASE(x, , mb_, 64, al, "memory") + +#undef __CMPXCHG_CASE + +#define __CMPXCHG_DBL(name, mb, cl...) 
\ +static __always_inline long \ +__lse__cmpxchg_double##name(unsigned long old1, \ + unsigned long old2, \ + unsigned long new1, \ + unsigned long new2, \ + volatile void *ptr) \ +{ \ + unsigned long oldval1 =3D old1; \ + unsigned long oldval2 =3D old2; \ + register unsigned long x0 asm ("x0") =3D old1; \ + register unsigned long x1 asm ("x1") =3D old2; \ + register unsigned long x2 asm ("x2") =3D new1; \ + register unsigned long x3 asm ("x3") =3D new2; \ + register unsigned long x4 asm ("x4") =3D (unsigned long)ptr; \ + \ + asm volatile( \ + __LSE_PREAMBLE \ + " casp" #mb "\t%[old1], %[old2], %[new1], %[new2], %[v]\n"\ + " eor %[old1], %[old1], %[oldval1]\n" \ + " eor %[old2], %[old2], %[oldval2]\n" \ + " orr %[old1], %[old1], %[old2]" \ + : [old1] "+&r" (x0), [old2] "+&r" (x1), \ + [v] "+Q" (*(unsigned long *)ptr) \ + : [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4), \ + [oldval1] "r" (oldval1), [oldval2] "r" (oldval2) \ + : cl); \ + \ + return x0; \ +} + +__CMPXCHG_DBL( , ) +__CMPXCHG_DBL(_mb, al, "memory") + +#undef __CMPXCHG_DBL + +#endif /* __ASM_ATOMIC_LSE_H */ \ No newline at end of file diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm6= 4/cmpxchg.h new file mode 100644 index 0000000000..c51388216e --- /dev/null +++ b/xen/include/asm-arm/arm64/cmpxchg.h @@ -0,0 +1,285 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Based on arch/arm/include/asm/cmpxchg.h + * + * Copyright (C) 2012 ARM Ltd. + */ +#ifndef __ASM_CMPXCHG_H +#define __ASM_CMPXCHG_H + +#include +#include + +#include +#include + +/* + * We need separate acquire parameters for ll/sc and lse, since the full + * barrier case is generated as release+dmb for the former and + * acquire+release for the latter. + */ +#define __XCHG_CASE(w, sfx, name, sz, mb, nop_lse, acq, acq_lse, rel, cl) \ +static inline u##sz __xchg_case_##name##sz(u##sz x, volatile void *ptr) \ +{ \ + u##sz ret; \ + unsigned long tmp; \ + \ + asm volatile(ARM64_LSE_ATOMIC_INSN( \ + /* LL/SC */ \ + " prfm pstl1strm, %2\n" \ + "1: ld" #acq "xr" #sfx "\t%" #w "0, %2\n" \ + " st" #rel "xr" #sfx "\t%w1, %" #w "3, %2\n" \ + " cbnz %w1, 1b\n" \ + " " #mb, \ + /* LSE atomics */ \ + " swp" #acq_lse #rel #sfx "\t%" #w "3, %" #w "0, %2\n" \ + __nops(3) \ + " " #nop_lse) \ + : "=3D&r" (ret), "=3D&r" (tmp), "+Q" (*(u##sz *)ptr) \ + : "r" (x) \ + : cl); \ + \ + return ret; \ +} + +__XCHG_CASE(w, b, , 8, , , , , , ) +__XCHG_CASE(w, h, , 16, , , , , , ) +__XCHG_CASE(w, , , 32, , , , , , ) +__XCHG_CASE( , , , 64, , , , , , ) +__XCHG_CASE(w, b, acq_, 8, , , a, a, , "memory") +__XCHG_CASE(w, h, acq_, 16, , , a, a, , "memory") +__XCHG_CASE(w, , acq_, 32, , , a, a, , "memory") +__XCHG_CASE( , , acq_, 64, , , a, a, , "memory") +__XCHG_CASE(w, b, rel_, 8, , , , , l, "memory") +__XCHG_CASE(w, h, rel_, 16, , , , , l, "memory") +__XCHG_CASE(w, , rel_, 32, , , , , l, "memory") +__XCHG_CASE( , , rel_, 64, , , , , l, "memory") +__XCHG_CASE(w, b, mb_, 8, dmb ish, nop, , a, l, "memory") +__XCHG_CASE(w, h, mb_, 16, dmb ish, nop, , a, l, "memory") +__XCHG_CASE(w, , mb_, 32, dmb ish, nop, , a, l, "memory") +__XCHG_CASE( , , mb_, 64, dmb ish, nop, , a, l, "memory") + +#undef __XCHG_CASE + +#define __XCHG_GEN(sfx) \ +static __always_inline unsigned long __xchg##sfx(unsigned long x, \ + volatile void *ptr, \ + int size) \ +{ \ + switch (size) { \ + case 1: \ + return __xchg_case##sfx##_8(x, ptr); \ + case 2: \ + return __xchg_case##sfx##_16(x, ptr); \ + case 4: \ + return __xchg_case##sfx##_32(x, ptr); \ + case 8: \ + return __xchg_case##sfx##_64(x, ptr); \ + 
default: \ + BUILD_BUG(); \ + } \ + \ + unreachable(); \ +} + +__XCHG_GEN() +__XCHG_GEN(_acq) +__XCHG_GEN(_rel) +__XCHG_GEN(_mb) + +#undef __XCHG_GEN + +#define __xchg_wrapper(sfx, ptr, x) \ +({ \ + __typeof__(*(ptr)) __ret; \ + __ret =3D (__typeof__(*(ptr))) \ + __xchg##sfx((unsigned long)(x), (ptr), sizeof(*(ptr))); \ + __ret; \ +}) + +/* xchg */ +#define arch_xchg_relaxed(...) __xchg_wrapper( , __VA_ARGS__) +#define arch_xchg_acquire(...) __xchg_wrapper(_acq, __VA_ARGS__) +#define arch_xchg_release(...) __xchg_wrapper(_rel, __VA_ARGS__) +#define arch_xchg(...) __xchg_wrapper( _mb, __VA_ARGS__) + +#define __CMPXCHG_CASE(name, sz) \ +static inline u##sz __cmpxchg_case_##name##sz(volatile void *ptr, \ + u##sz old, \ + u##sz new) \ +{ \ + return __lse_ll_sc_body(_cmpxchg_case_##name##sz, \ + ptr, old, new); \ +} + +__CMPXCHG_CASE( , 8) +__CMPXCHG_CASE( , 16) +__CMPXCHG_CASE( , 32) +__CMPXCHG_CASE( , 64) +__CMPXCHG_CASE(acq_, 8) +__CMPXCHG_CASE(acq_, 16) +__CMPXCHG_CASE(acq_, 32) +__CMPXCHG_CASE(acq_, 64) +__CMPXCHG_CASE(rel_, 8) +__CMPXCHG_CASE(rel_, 16) +__CMPXCHG_CASE(rel_, 32) +__CMPXCHG_CASE(rel_, 64) +__CMPXCHG_CASE(mb_, 8) +__CMPXCHG_CASE(mb_, 16) +__CMPXCHG_CASE(mb_, 32) +__CMPXCHG_CASE(mb_, 64) + +#undef __CMPXCHG_CASE + +#define __CMPXCHG_DBL(name) \ +static inline long __cmpxchg_double##name(unsigned long old1, \ + unsigned long old2, \ + unsigned long new1, \ + unsigned long new2, \ + volatile void *ptr) \ +{ \ + return __lse_ll_sc_body(_cmpxchg_double##name, \ + old1, old2, new1, new2, ptr); \ +} + +__CMPXCHG_DBL( ) +__CMPXCHG_DBL(_mb) + +#undef __CMPXCHG_DBL + +#define __CMPXCHG_GEN(sfx) \ +static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr, \ + unsigned long old, \ + unsigned long new, \ + int size) \ +{ \ + switch (size) { \ + case 1: \ + return __cmpxchg_case##sfx##_8(ptr, old, new); \ + case 2: \ + return __cmpxchg_case##sfx##_16(ptr, old, new); \ + case 4: \ + return __cmpxchg_case##sfx##_32(ptr, old, new); \ + case 8: \ + return __cmpxchg_case##sfx##_64(ptr, old, new); \ + default: \ + BUILD_BUG(); \ + } \ + \ + unreachable(); \ +} + +__CMPXCHG_GEN() +__CMPXCHG_GEN(_acq) +__CMPXCHG_GEN(_rel) +__CMPXCHG_GEN(_mb) + +#undef __CMPXCHG_GEN + +#define __cmpxchg_wrapper(sfx, ptr, o, n) \ +({ \ + __typeof__(*(ptr)) __ret; \ + __ret =3D (__typeof__(*(ptr))) \ + __cmpxchg##sfx((ptr), (unsigned long)(o), \ + (unsigned long)(n), sizeof(*(ptr))); \ + __ret; \ +}) + +/* cmpxchg */ +#define arch_cmpxchg_relaxed(...) __cmpxchg_wrapper( , __VA_ARGS__) +#define arch_cmpxchg_acquire(...) __cmpxchg_wrapper(_acq, __VA_ARGS__) +#define arch_cmpxchg_release(...) __cmpxchg_wrapper(_rel, __VA_ARGS__) +#define arch_cmpxchg(...) 
__cmpxchg_wrapper( _mb, __VA_ARGS__) +#define arch_cmpxchg_local arch_cmpxchg_relaxed + +/* cmpxchg64 */ +#define arch_cmpxchg64_relaxed arch_cmpxchg_relaxed +#define arch_cmpxchg64_acquire arch_cmpxchg_acquire +#define arch_cmpxchg64_release arch_cmpxchg_release +#define arch_cmpxchg64 arch_cmpxchg +#define arch_cmpxchg64_local arch_cmpxchg_local + +/* cmpxchg_double */ +#define system_has_cmpxchg_double() 1 + +#define __cmpxchg_double_check(ptr1, ptr2) \ +({ \ + if (sizeof(*(ptr1)) !=3D 8) \ + BUILD_BUG(); \ + VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) !=3D 1); \ +}) + +#define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \ +({ \ + int __ret; \ + __cmpxchg_double_check(ptr1, ptr2); \ + __ret =3D !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2), \ + (unsigned long)(n1), (unsigned long)(n2), \ + ptr1); \ + __ret; \ +}) + +#define arch_cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \ +({ \ + int __ret; \ + __cmpxchg_double_check(ptr1, ptr2); \ + __ret =3D !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2), \ + (unsigned long)(n1), (unsigned long)(n2), \ + ptr1); \ + __ret; \ +}) + +#define __CMPWAIT_CASE(w, sfx, sz) \ +static inline void __cmpwait_case_##sz(volatile void *ptr, \ + unsigned long val) \ +{ \ + unsigned long tmp; \ + \ + asm volatile( \ + " sevl\n" \ + " wfe\n" \ + " ldxr" #sfx "\t%" #w "[tmp], %[v]\n" \ + " eor %" #w "[tmp], %" #w "[tmp], %" #w "[val]\n" \ + " cbnz %" #w "[tmp], 1f\n" \ + " wfe\n" \ + "1:" \ + : [tmp] "=3D&r" (tmp), [v] "+Q" (*(unsigned long *)ptr) \ + : [val] "r" (val)); \ +} + +__CMPWAIT_CASE(w, b, 8); +__CMPWAIT_CASE(w, h, 16); +__CMPWAIT_CASE(w, , 32); +__CMPWAIT_CASE( , , 64); + +#undef __CMPWAIT_CASE + +#define __CMPWAIT_GEN(sfx) \ +static __always_inline void __cmpwait##sfx(volatile void *ptr, \ + unsigned long val, \ + int size) \ +{ \ + switch (size) { \ + case 1: \ + return __cmpwait_case##sfx##_8(ptr, (u8)val); \ + case 2: \ + return __cmpwait_case##sfx##_16(ptr, (u16)val); \ + case 4: \ + return __cmpwait_case##sfx##_32(ptr, val); \ + case 8: \ + return __cmpwait_case##sfx##_64(ptr, val); \ + default: \ + BUILD_BUG(); \ + } \ + \ + unreachable(); \ +} + +__CMPWAIT_GEN() + +#undef __CMPWAIT_GEN + +#define __cmpwait_relaxed(ptr, val) \ + __cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr))) + +#endif /* __ASM_CMPXCHG_H */ \ No newline at end of file diff --git a/xen/include/asm-arm/arm64/lse.h b/xen/include/asm-arm/arm64/ls= e.h new file mode 100644 index 0000000000..704be3e4e4 --- /dev/null +++ b/xen/include/asm-arm/arm64/lse.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_LSE_H +#define __ASM_LSE_H + +#include + +#ifdef CONFIG_ARM64_LSE_ATOMICS + +#define __LSE_PREAMBLE ".arch_extension lse\n" + +#include +#include +#include +#include +#include +#include +#include + +extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS]; +extern struct static_key_false arm64_const_caps_ready; + +static inline bool system_uses_lse_atomics(void) +{ + return (static_branch_likely(&arm64_const_caps_ready)) && + static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]); +} + +#define __lse_ll_sc_body(op, ...) \ +({ \ + system_uses_lse_atomics() ? 
+		__lse_##op(__VA_ARGS__) : \
+		__ll_sc_##op(__VA_ARGS__); \
+})
+
+/* In-line patching at runtime */
+#define ARM64_LSE_ATOMIC_INSN(llsc, lse) \
+	ALTERNATIVE(llsc, __LSE_PREAMBLE lse, ARM64_HAS_LSE_ATOMICS)
+
+#else	/* CONFIG_ARM64_LSE_ATOMICS */
+
+static inline bool system_uses_lse_atomics(void) { return false; }
+
+#define __lse_ll_sc_body(op, ...)	__ll_sc_##op(__VA_ARGS__)
+
+#define ARM64_LSE_ATOMIC_INSN(llsc, lse)	llsc
+
+#endif	/* CONFIG_ARM64_LSE_ATOMICS */
+#endif	/* __ASM_LSE_H */
\ No newline at end of file
diff --git a/xen/include/xen/rwonce.h b/xen/include/xen/rwonce.h
new file mode 100644
index 0000000000..6b47392d1c
--- /dev/null
+++ b/xen/include/xen/rwonce.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Prevent the compiler from merging or refetching reads or writes. The
+ * compiler is also forbidden from reordering successive instances of
+ * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
+ * particular ordering. One way to make the compiler aware of ordering is to
+ * put the two invocations of READ_ONCE or WRITE_ONCE in different C
+ * statements.
+ *
+ * These two macros will also work on aggregate data types like structs or
+ * unions.
+ *
+ * Their two major use cases are: (1) Mediating communication between
+ * process-level code and irq/NMI handlers, all running on the same CPU,
+ * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
+ * mutilate accesses that either do not require ordering or that interact
+ * with an explicit memory barrier or atomic instruction that provides the
+ * required ordering.
+ */
+#ifndef __ASM_GENERIC_RWONCE_H
+#define __ASM_GENERIC_RWONCE_H
+
+#ifndef __ASSEMBLY__
+
+#include
+#include
+#include
+
+/*
+ * Yes, this permits 64-bit accesses on 32-bit architectures. These will
+ * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
+ * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
+ * (e.g. a virtual address) and a strong prevailing wind.
+ */
+#define compiletime_assert_rwonce_type(t) \
+	compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
+		"Unsupported access size for {READ,WRITE}_ONCE().")
+
+/*
+ * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
+ * atomicity. Note that this may result in tears!
+ */
+#ifndef __READ_ONCE
+#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
+#endif
+
+#define READ_ONCE(x) \
+({ \
+	compiletime_assert_rwonce_type(x); \
+	__READ_ONCE(x); \
+})
+
+#define __WRITE_ONCE(x, val) \
+do { \
+	*(volatile typeof(x) *)&(x) = (val); \
+} while (0)
+
+#define WRITE_ONCE(x, val) \
+do { \
+	compiletime_assert_rwonce_type(x); \
+	__WRITE_ONCE(x, val); \
+} while (0)
+
+static __no_sanitize_or_inline
+unsigned long __read_once_word_nocheck(const void *addr)
+{
+	return __READ_ONCE(*(unsigned long *)addr);
+}
+
+/*
+ * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
+ * word from memory atomically but without telling KASAN/KCSAN. This is
+ * usually used by unwinding code when walking the stack of a running process.
+ */
+#define READ_ONCE_NOCHECK(x) \
+({ \
+	compiletime_assert(sizeof(x) == sizeof(unsigned long), \
+		"Unsupported access size for READ_ONCE_NOCHECK()."); \
+	(typeof(x))__read_once_word_nocheck(&(x)); \
+})
+
+static __no_kasan_or_inline
+unsigned long read_word_at_a_time(const void *addr)
+{
+	kasan_check_read(addr, 1);
+	return *(unsigned long *)addr;
+}
+
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_GENERIC_RWONCE_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)
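A quick orientation note on the wrappers in the patch above: the switch on sizeof(*(ptr)) is resolved entirely at compile time, so a cmpxchg on a 4-byte object compiles straight down to the 32-bit case helper with no runtime dispatch. A minimal caller sketch, assuming the ported arch_cmpxchg() above is in scope (take_slot() and the SLOT_* constants are made-up names for illustration only):

	/* Claim a slot by moving it from FREE to BUSY exactly once. */
	#define SLOT_FREE 0u
	#define SLOT_BUSY 1u

	static inline bool take_slot(volatile unsigned int *slot)
	{
		/*
		 * sizeof(*slot) == 4, so the generic __cmpxchg() wrapper
		 * collapses to the 32-bit __cmpxchg_case helper at compile
		 * time; the _mb variant gives full-barrier semantics.
		 */
		return arch_cmpxchg(slot, SLOT_FREE, SLOT_BUSY) == SLOT_FREE;
	}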
From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 06/15] xen: port Linux to Xen
Date: Wed, 11 Nov 2020 21:51:54 +0000
Message-Id: <20201111215203.80336-7-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>

From: Ash Wilding <ash.j.wilding@gmail.com>

- Drop kasan related helpers.
- Drop READ_ONCE() and WRITE_ONCE(); the __* versions are fine for now
  as the only callers in Xen are the arm32 atomics helpers, which are
  always accessing an atomic_t's counter member, which is an int. This
  means we can swap the arm32 atomics helpers over to using the __*
  versions like the arm64 code does, removing a dependency on for
  __native_word().

- Relax __unqual_scalar_typeof() in __READ_ONCE() to just typeof().
  Similarly to above, the only callers in Xen are the arm32/arm64
  atomics helpers, which are always accessing an atomic_t's counter
  member as a regular (int *) which doesn't need unqual'ing. This means
  we can remove the other dependency on.

Please see previous patch in the series for expanded rationale on why
not having to port to Xen makes life easier.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/xen/rwonce.h | 79 +++-------------------------------------
 1 file changed, 5 insertions(+), 74 deletions(-)

diff --git a/xen/include/xen/rwonce.h b/xen/include/xen/rwonce.h
index 6b47392d1c..d001e7e41e 100644
--- a/xen/include/xen/rwonce.h
+++ b/xen/include/xen/rwonce.h
@@ -1,90 +1,21 @@
-/* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Prevent the compiler from merging or refetching reads or writes. The
- * compiler is also forbidden from reordering successive instances of
- * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
- * particular ordering. One way to make the compiler aware of ordering is to
- * put the two invocations of READ_ONCE or WRITE_ONCE in different C
- * statements.
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- * These two macros will also work on aggregate data types like structs or
- * unions.
- *
- * Their two major use cases are: (1) Mediating communication between
- * process-level code and irq/NMI handlers, all running on the same CPU,
- * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
- * mutilate accesses that either do not require ordering or that interact
- * with an explicit memory barrier or atomic instruction that provides the
- * required ordering.
+ * SPDX-License-Identifier: GPL-2.0
  */
+
 #ifndef __ASM_GENERIC_RWONCE_H
 #define __ASM_GENERIC_RWONCE_H
 
 #ifndef __ASSEMBLY__
 
-#include
-#include
-#include
-
-/*
- * Yes, this permits 64-bit accesses on 32-bit architectures. These will
- * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
- * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
- * (e.g. a virtual address) and a strong prevailing wind.
- */
-#define compiletime_assert_rwonce_type(t) \
-	compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
-		"Unsupported access size for {READ,WRITE}_ONCE().")
-
-/*
- * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
- * atomicity. Note that this may result in tears!
- */
-#ifndef __READ_ONCE
-#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
-#endif
-
-#define READ_ONCE(x) \
-({ \
-	compiletime_assert_rwonce_type(x); \
-	__READ_ONCE(x); \
-})
+#define __READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))
 
 #define __WRITE_ONCE(x, val) \
 do { \
 	*(volatile typeof(x) *)&(x) = (val); \
 } while (0)
 
-#define WRITE_ONCE(x, val) \
-do { \
-	compiletime_assert_rwonce_type(x); \
-	__WRITE_ONCE(x, val); \
-} while (0)
-
-static __no_sanitize_or_inline
-unsigned long __read_once_word_nocheck(const void *addr)
-{
-	return __READ_ONCE(*(unsigned long *)addr);
-}
-
-/*
- * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
- * word from memory atomically but without telling KASAN/KCSAN. This is
- * usually used by unwinding code when walking the stack of a running process.
- */
-#define READ_ONCE_NOCHECK(x) \
-({ \
-	compiletime_assert(sizeof(x) == sizeof(unsigned long), \
-		"Unsupported access size for READ_ONCE_NOCHECK()."); \
-	(typeof(x))__read_once_word_nocheck(&(x)); \
-})
-
-static __no_kasan_or_inline
-unsigned long read_word_at_a_time(const void *addr)
-{
-	kasan_check_read(addr, 1);
-	return *(unsigned long *)addr;
-}
 
 #endif /* __ASSEMBLY__ */
-#endif /* __ASM_GENERIC_RWONCE_H */
\ No newline at end of file
+#endif /* __ASM_GENERIC_RWONCE_H */
-- 
2.24.3 (Apple Git-128)
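For context on what the surviving __READ_ONCE() buys even after the type checking is dropped: the volatile cast forces a fresh load on every use, which is exactly what polling loops need. A contrived, self-contained sketch restating the macro under a different name (the flag and wait loop are hypothetical, not part of this series):

	static int ready;	/* written by another CPU or an IRQ handler */

	#define read_once_sketch(x)	(*(const volatile typeof(x) *)&(x))

	static void wait_for_ready(void)
	{
		/*
		 * A plain "while (!ready)" may be compiled into a single
		 * load followed by an infinite loop on the cached value;
		 * the volatile access forces a reload each iteration.
		 */
		while (!read_once_sketch(ready))
			;
	}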
From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 07/15] xen/arm: prepare existing Xen headers for Linux atomics
Date: Wed, 11 Nov 2020 21:51:55 +0000
Message-Id: <20201111215203.80336-8-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>

From: Ash Wilding <ash.j.wilding@gmail.com>

This small patch helps prepare and the arm32/arm64 specific system.h
headers to play nicely with the Linux atomics helpers:

- We don't need the indirection around atomic_add_unless() anymore so
  let's just pull up the old Xen arm64 definition into here and use it
  for both arm32 and arm64.

- We don't need an atomic_xchg() in as the arm32/arm64 specific
  cmpxchg.h from Linux defines it for us.

- We drop the includes of and from as they're not needed.

- We swap out the include of the arm32/arm64 specific cmpxchg.h in the
  arm32/arm64 specific system.h and instead make them include atomic.h;
  this works around build issues from cmpxchg.h trying to use the
  __lse_ll_sc_body() macro before it's ready.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/system.h |  2 +-
 xen/include/asm-arm/arm64/system.h |  2 +-
 xen/include/asm-arm/atomic.h       | 15 +++++++++++----
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
index ab57abfbc5..88798d11db 100644
--- a/xen/include/asm-arm/arm32/system.h
+++ b/xen/include/asm-arm/arm32/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM32_SYSTEM_H
 #define __ASM_ARM32_SYSTEM_H
 
-#include
+#include
 
 #define local_irq_disable() asm volatile ( "cpsid i @ local_irq_disable\n" : : : "cc" )
 #define local_irq_enable()  asm volatile ( "cpsie i @ local_irq_enable\n" : : : "cc" )
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index 2e36573ac6..dfbbe4b87d 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM64_SYSTEM_H
 #define __ASM_ARM64_SYSTEM_H
 
-#include
+#include
 
 /* Uses uimm4 as a bitmask to select the clearing of one or more of
  * the DAIF exception mask bits:
diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index ac2798d095..866f54d03c 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -2,8 +2,6 @@
 #define __ARCH_ARM_ATOMIC__
 
 #include
-#include
-#include
 
 #define build_atomic_read(name, size, width, type) \
 static inline type name(const volatile type *addr) \
@@ -220,10 +218,19 @@ static inline int atomic_add_negative(int i, atomic_t *v)
 
 static inline int atomic_add_unless(atomic_t *v, int a, int u)
 {
-    return __atomic_add_unless(v, a, u);
+    int c, old;
+
+    c = atomic_read(v);
+    while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
+        c = old;
+
+    return c;
 }
 
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+    return cmpxchg(&((v)->counter), (old), (new));
+}
 
 #endif /* __ARCH_ARM_ATOMIC__ */
 /*
-- 
2.24.3 (Apple Git-128)
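The open-coded loop in atomic_add_unless() above is the classic cmpxchg retry pattern: read, attempt the update, and go around again if another CPU got there first. One common use is refcounting where zero means "object is dying"; a hedged sketch of such a caller (get_ref_unless_zero() is a hypothetical name, not from this series):

	/* Take a reference unless the count has already dropped to zero. */
	static inline bool get_ref_unless_zero(atomic_t *refcnt)
	{
		/*
		 * atomic_add_unless() returns the value it observed before
		 * any update, so it performed the increment iff that value
		 * was not 0.
		 */
		return atomic_add_unless(refcnt, 1, 0) != 0;
	}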
From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 08/15] xen/arm64: port Linux's arm64 atomic_ll_sc.h to Xen
Date: Wed, 11 Nov 2020 21:51:56 +0000
Message-Id: <20201111215203.80336-9-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>

From: Ash Wilding <ash.j.wilding@gmail.com>

Most of the "work" here is simply deleting the atomic64_t helper
definitions as we don't have an atomic64_t type in Xen.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 134 +----------------------
 1 file changed, 6 insertions(+), 128 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic_ll_sc.h b/xen/include/asm-arm/arm64/atomic_ll_sc.h
index e1009c0f94..20b0cb174e 100644
--- a/xen/include/asm-arm/arm64/atomic_ll_sc.h
+++ b/xen/include/asm-arm/arm64/atomic_ll_sc.h
@@ -1,16 +1,16 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
 */
 
-#ifndef __ASM_ATOMIC_LL_SC_H
-#define __ASM_ATOMIC_LL_SC_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_LL_SC_H
+#define __ASM_ARM_ARM64_ATOMIC_LL_SC_H
 
-#include
+#include
 
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 #define __LL_SC_FALLBACK(asm_ops) \
@@ -134,128 +134,6 @@ ATOMIC_OPS(andnot, bic, )
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define ATOMIC64_OP(op, asm_op, constraint) \
-static inline void \
-__ll_sc_atomic64_##op(s64 i, atomic64_t *v) \
-{ \
-	s64 result; \
-	unsigned long tmp; \
-	\
-	asm volatile("// atomic64_" #op "\n" \
-	__LL_SC_FALLBACK( \
-"	prfm	pstl1strm, %2\n" \
-"1:	ldxr	%0, %2\n" \
-"	" #asm_op "	%0, %0, %3\n" \
-"	stxr	%w1, %0, %2\n" \
-"	cbnz	%w1, 1b") \
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
-	: __stringify(constraint) "r" (i)); \
-}
-
-#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
-static inline long \
-__ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \
-{ \
-	s64 result; \
-	unsigned long tmp; \
-	\
-	asm volatile("// atomic64_" #op "_return" #name "\n" \
-	__LL_SC_FALLBACK( \
-"	prfm	pstl1strm, %2\n" \
-"1:	ld" #acq "xr	%0, %2\n" \
-"	" #asm_op "	%0, %0, %3\n" \
-"	st" #rel "xr	%w1, %0, %2\n" \
-"	cbnz	%w1, 1b\n" \
-"	" #mb ) \
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
-	: __stringify(constraint) "r" (i) \
-	: cl); \
-	\
-	return result; \
-}
-
-#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
-static inline long \
-__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \
-{ \
-	s64 result, val; \
-	unsigned long tmp; \
-	\
-	asm volatile("// atomic64_fetch_" #op #name "\n" \
-	__LL_SC_FALLBACK( \
-"	prfm	pstl1strm, %3\n" \
-"1:	ld" #acq "xr	%0, %3\n" \
-"	" #asm_op "	%1, %0, %4\n" \
-"	st" #rel "xr	%w2, %1, %3\n" \
-"	cbnz	%w2, 1b\n" \
-"	" #mb ) \
-	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
-	: __stringify(constraint) "r" (i) \
-	: cl); \
-	\
-	return result; \
-}
-
-#define ATOMIC64_OPS(...) \
-	ATOMIC64_OP(__VA_ARGS__) \
-	ATOMIC64_OP_RETURN(, dmb ish, , l, "memory", __VA_ARGS__) \
-	ATOMIC64_OP_RETURN(_relaxed,, , , , __VA_ARGS__) \
-	ATOMIC64_OP_RETURN(_acquire,, a, , "memory", __VA_ARGS__) \
-	ATOMIC64_OP_RETURN(_release,, , l, "memory", __VA_ARGS__) \
-	ATOMIC64_FETCH_OP (, dmb ish, , l, "memory", __VA_ARGS__) \
-	ATOMIC64_FETCH_OP (_relaxed,, , , , __VA_ARGS__) \
-	ATOMIC64_FETCH_OP (_acquire,, a, , "memory", __VA_ARGS__) \
-	ATOMIC64_FETCH_OP (_release,, , l, "memory", __VA_ARGS__)
-
-ATOMIC64_OPS(add, add, I)
-ATOMIC64_OPS(sub, sub, J)
-
-#undef ATOMIC64_OPS
-#define ATOMIC64_OPS(...) \
-	ATOMIC64_OP(__VA_ARGS__) \
-	ATOMIC64_FETCH_OP (, dmb ish, , l, "memory", __VA_ARGS__) \
-	ATOMIC64_FETCH_OP (_relaxed,, , , , __VA_ARGS__) \
-	ATOMIC64_FETCH_OP (_acquire,, a, , "memory", __VA_ARGS__) \
-	ATOMIC64_FETCH_OP (_release,, , l, "memory", __VA_ARGS__)
-
-ATOMIC64_OPS(and, and, L)
-ATOMIC64_OPS(or, orr, L)
-ATOMIC64_OPS(xor, eor, L)
-/*
- * GAS converts the mysterious and undocumented BIC (immediate) alias to
- * an AND (immediate) instruction with the immediate inverted. We don't
- * have a constraint for this, so fall back to register.
- */
-ATOMIC64_OPS(andnot, bic, )
-
-#undef ATOMIC64_OPS
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_OP_RETURN
-#undef ATOMIC64_OP
-
-static inline s64
-__ll_sc_atomic64_dec_if_positive(atomic64_t *v)
-{
-	s64 result;
-	unsigned long tmp;
-
-	asm volatile("// atomic64_dec_if_positive\n"
-	__LL_SC_FALLBACK(
-"	prfm	pstl1strm, %2\n"
-"1:	ldxr	%0, %2\n"
-"	subs	%0, %0, #1\n"
-"	b.lt	2f\n"
-"	stlxr	%w1, %0, %2\n"
-"	cbnz	%w1, 1b\n"
-"	dmb	ish\n"
-"2:")
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	:
-	: "cc", "memory");
-
-	return result;
-}
-
 #define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint) \
 static inline u##sz \
 __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \
@@ -350,4 +228,4 @@ __CMPXCHG_DBL(_mb, dmb ish, l, "memory")
 #undef __CMPXCHG_DBL
 #undef K
 
-#endif /* __ASM_ATOMIC_LL_SC_H */
\ No newline at end of file
+#endif /* __ASM_ARM_ARM64_ATOMIC_LL_SC_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)
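For readers who do not speak AArch64 assembly: the ldxr/stxr pairs above implement load-linked/store-conditional, i.e. the exclusive store only succeeds if no other agent touched the location since the load, and the cbnz retries on failure. The same shape expressed in portable C11 atomics, purely to illustrate the semantics (this is not how Xen implements it):

	#include <stdatomic.h>

	/*
	 * What an LL/SC fetch-add loop computes: retry the read-modify-
	 * write until no other CPU raced with us in between.
	 */
	static int fetch_add_llsc_style(_Atomic int *v, int i)
	{
		int old = atomic_load_explicit(v, memory_order_relaxed);

		while (!atomic_compare_exchange_weak_explicit(
			       v, &old, old + i,
			       memory_order_relaxed, memory_order_relaxed))
			;	/* 'old' has been refreshed; try again */
		return old;
	}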
From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 09/15] xen/arm64: port Linux's arm64 atomic_lse.h to Xen
Date: Wed, 11 Nov 2020 21:51:57 +0000
Message-Id: <20201111215203.80336-10-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>

From: Ash Wilding <ash.j.wilding@gmail.com>

As with the LL/SC atomics helpers, most of the "work" here is simply
deleting the atomic64_t helper definitions as we don't have an
atomic64_t type in Xen.

We do also need to s/__always_inline/always_inline/ to match the
qualifier name used by Xen.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/atomic_lse.h | 189 ++-----------------------
 1 file changed, 8 insertions(+), 181 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic_lse.h b/xen/include/asm-arm/arm64/atomic_lse.h
index b3b0d43a7d..81613f7250 100644
--- a/xen/include/asm-arm/arm64/atomic_lse.h
+++ b/xen/include/asm-arm/arm64/atomic_lse.h
@@ -1,14 +1,15 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
+
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
 
-#ifndef __ASM_ATOMIC_LSE_H
-#define __ASM_ATOMIC_LSE_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_LSE_H
+#define __ASM_ARM_ARM64_ATOMIC_LSE_H
 
 #define ATOMIC_OP(op, asm_op) \
 static inline void __lse_atomic_##op(int i, atomic_t *v) \
@@ -163,182 +164,8 @@ ATOMIC_FETCH_OP_SUB(        , al, "memory")
 
 #undef ATOMIC_FETCH_OP_SUB
 
-#define ATOMIC64_OP(op, asm_op) \
-static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \
-{ \
-	asm volatile( \
-	__LSE_PREAMBLE \
-"	" #asm_op "	%[i], %[v]\n" \
-	: [i] "+r" (i), [v] "+Q" (v->counter) \
-	: "r" (v)); \
-}
-
-ATOMIC64_OP(andnot, stclr)
-ATOMIC64_OP(or, stset)
-ATOMIC64_OP(xor, steor)
-ATOMIC64_OP(add, stadd)
-
-#undef ATOMIC64_OP
-
-#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
-static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
-{ \
-	asm volatile( \
-	__LSE_PREAMBLE \
-"	" #asm_op #mb "	%[i], %[i], %[v]" \
-	: [i] "+r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-#define ATOMIC64_FETCH_OPS(op, asm_op) \
-	ATOMIC64_FETCH_OP(_relaxed,   , op, asm_op) \
-	ATOMIC64_FETCH_OP(_acquire,  a, op, asm_op, "memory") \
-	ATOMIC64_FETCH_OP(_release,  l, op, asm_op, "memory") \
-	ATOMIC64_FETCH_OP(        , al, op, asm_op, "memory")
-
-ATOMIC64_FETCH_OPS(andnot, ldclr)
-ATOMIC64_FETCH_OPS(or, ldset)
-ATOMIC64_FETCH_OPS(xor, ldeor)
-ATOMIC64_FETCH_OPS(add, ldadd)
-
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_FETCH_OPS
-
-#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \
-static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
-{ \
-	unsigned long tmp; \
-	\
-	asm volatile( \
-	__LSE_PREAMBLE \
-	"	ldadd" #mb "	%[i], %x[tmp], %[v]\n" \
-	"	add	%[i], %[i], %x[tmp]" \
-	: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-ATOMIC64_OP_ADD_RETURN(_relaxed,   )
-ATOMIC64_OP_ADD_RETURN(_acquire,  a, "memory")
-ATOMIC64_OP_ADD_RETURN(_release,  l, "memory")
-ATOMIC64_OP_ADD_RETURN(        , al, "memory")
-
-#undef ATOMIC64_OP_ADD_RETURN
-
-static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
-{
-	asm volatile(
-	__LSE_PREAMBLE
-	"	mvn	%[i], %[i]\n"
-	"	stclr	%[i], %[v]"
-	: [i] "+&r" (i), [v] "+Q" (v->counter)
-	: "r" (v));
-}
-
-#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
-static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
-{ \
-	asm volatile( \
-	__LSE_PREAMBLE \
-	"	mvn	%[i], %[i]\n" \
-	"	ldclr" #mb "	%[i], %[i], %[v]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-ATOMIC64_FETCH_OP_AND(_relaxed,   )
-ATOMIC64_FETCH_OP_AND(_acquire,  a, "memory")
-ATOMIC64_FETCH_OP_AND(_release,  l, "memory")
-ATOMIC64_FETCH_OP_AND(        , al, "memory")
-
-#undef ATOMIC64_FETCH_OP_AND
-
-static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
-{
-	asm volatile(
-	__LSE_PREAMBLE
-	"	neg	%[i], %[i]\n"
-	"	stadd	%[i], %[v]"
-	: [i] "+&r" (i), [v] "+Q" (v->counter)
-	: "r" (v));
-}
-
-#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \
-static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \
-{ \
-	unsigned long tmp; \
-	\
-	asm volatile( \
-	__LSE_PREAMBLE \
-	"	neg	%[i], %[i]\n" \
-	"	ldadd" #mb "	%[i], %x[tmp], %[v]\n" \
-	"	add	%[i], %[i], %x[tmp]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-ATOMIC64_OP_SUB_RETURN(_relaxed,   )
-ATOMIC64_OP_SUB_RETURN(_acquire,  a, "memory")
-ATOMIC64_OP_SUB_RETURN(_release,  l, "memory")
-ATOMIC64_OP_SUB_RETURN(        , al, "memory")
-
-#undef ATOMIC64_OP_SUB_RETURN
-
-#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \
-static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
-{ \
-	asm volatile( \
-	__LSE_PREAMBLE \
-	"	neg	%[i], %[i]\n" \
-	"	ldadd" #mb "	%[i], %[i], %[v]" \
-	: [i] "+&r" (i), [v] "+Q" (v->counter) \
-	: "r" (v) \
-	: cl); \
-	\
-	return i; \
-}
-
-ATOMIC64_FETCH_OP_SUB(_relaxed,   )
-ATOMIC64_FETCH_OP_SUB(_acquire,  a, "memory")
-ATOMIC64_FETCH_OP_SUB(_release,  l, "memory")
-ATOMIC64_FETCH_OP_SUB(        , al, "memory")
-
-#undef ATOMIC64_FETCH_OP_SUB
-
-static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
-{
-	unsigned long tmp;
-
-	asm volatile(
-	__LSE_PREAMBLE
-	"1:	ldr	%x[tmp], %[v]\n"
-	"	subs	%[ret], %x[tmp], #1\n"
-	"	b.lt	2f\n"
-	"	casal	%x[tmp], %[ret], %[v]\n"
-	"	sub	%x[tmp], %x[tmp], #1\n"
-	"	sub	%x[tmp], %x[tmp], %[ret]\n"
-	"	cbnz	%x[tmp], 1b\n"
-	"2:"
-	: [ret] "+&r" (v), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)
-	:
-	: "cc", "memory");
-
-	return (long)v;
-}
-
 #define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...) \
-static __always_inline u##sz \
+static always_inline u##sz \
 __lse__cmpxchg_case_##name##sz(volatile void *ptr, \
 			       u##sz old, \
 			       u##sz new) \
@@ -381,7 +208,7 @@ __CMPXCHG_CASE(x,  , mb_, 64, al, "memory")
 #undef __CMPXCHG_CASE
 
 #define __CMPXCHG_DBL(name, mb, cl...) \
-static __always_inline long \
+static always_inline long \
 __lse__cmpxchg_double##name(unsigned long old1, \
 			    unsigned long old2, \
 			    unsigned long new1, \
@@ -416,4 +243,4 @@ __CMPXCHG_DBL(_mb, al, "memory")
 
 #undef __CMPXCHG_DBL
 
-#endif /* __ASM_ATOMIC_LSE_H */
\ No newline at end of file
+#endif /* __ASM_ARM_ARM64_ATOMIC_LSE_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)
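Taken together, patches 08 and 09 give every primitive two backends, and the __lse_ll_sc_body() macro from lse.h picks one per call based on a boot-time capability check. A stripped-down model of that dispatch shape (the extern backends are stand-ins; the real header uses static branches patched at boot so the untaken path costs nothing):

	extern int __lse_op_model(int x);	/* assumed LSE backend */
	extern int __ll_sc_op_model(int x);	/* assumed LL/SC backend */
	extern bool system_uses_lse_atomics(void);

	/*
	 * Same shape as __lse_ll_sc_body(op, ...): exactly one backend
	 * is evaluated, chosen by the runtime capability check.
	 */
	#define lse_ll_sc_body_model(x) \
		(system_uses_lse_atomics() ? __lse_op_model(x) \
					   : __ll_sc_op_model(x))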
From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 10/15] xen/arm64: port Linux's arm64 cmpxchg.h to Xen
Date: Wed, 11 Nov 2020 21:51:58 +0000
Message-Id: <20201111215203.80336-11-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>

From: Ash Wilding <ash.j.wilding@gmail.com>

- s/arch_xchg/xchg/
- s/arch_cmpxchg/cmpxchg/

- Replace calls to BUILD_BUG() with calls to __bad_cmpxchg() as we
  don't currently have a BUILD_BUG() macro in Xen and this will
  equivalently cause a link-time error.

- Replace calls to VM_BUG_ON() with BUG_ON() as we don't currently
  have a VM_BUG_ON() macro in Xen.

- Pull in the timeout variants of cmpxchg from the original Xen arm64
  cmpxchg.h as these are required for guest atomics and are not
  provided by Linux. Note these are always using LL/SC so we should
  ideally write LSE versions at some point.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/cmpxchg.h | 165 ++++++++++++++++++++++------
 1 file changed, 131 insertions(+), 34 deletions(-)

diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
index c51388216e..a5282cf66e 100644
--- a/xen/include/asm-arm/arm64/cmpxchg.h
+++ b/xen/include/asm-arm/arm64/cmpxchg.h
@@ -1,17 +1,16 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Based on arch/arm/include/asm/cmpxchg.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_CMPXCHG_H
-#define __ASM_CMPXCHG_H
+#ifndef __ASM_ARM_ARM64_CMPXCHG_H
+#define __ASM_ARM_ARM64_CMPXCHG_H
 
-#include
-#include
+#include
+#include "lse.h"
 
-#include
-#include
+extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
 
 /*
  * We need separate acquire parameters for ll/sc and lse, since the full
@@ -33,7 +32,9 @@ static inline u##sz __xchg_case_##name##sz(u##sz x, volatile void *ptr) \
 	"	" #mb, \
 	/* LSE atomics */ \
 	"	swp" #acq_lse #rel #sfx "\t%" #w "3, %" #w "0, %2\n" \
-	__nops(3) \
+	"nop\n" \
+	"nop\n" \
+	"nop\n" \
 	"	" #nop_lse) \
 	: "=&r" (ret), "=&r" (tmp), "+Q" (*(u##sz *)ptr) \
 	: "r" (x) \
@@ -62,7 +63,7 @@ __XCHG_CASE( ,  ,  mb_, 64, dmb ish, nop,  , a, l, "memory")
 #undef __XCHG_CASE
 
 #define __XCHG_GEN(sfx) \
-static __always_inline unsigned long __xchg##sfx(unsigned long x, \
+static always_inline unsigned long __xchg##sfx(unsigned long x, \
 					volatile void *ptr, \
 					int size) \
 { \
@@ -76,7 +77,7 @@ static __always_inline unsigned long __xchg##sfx(unsigned long x, \
 	case 8: \
 		return __xchg_case##sfx##_64(x, ptr); \
 	default: \
-		BUILD_BUG(); \
+		return __bad_cmpxchg(ptr, size); \
 	} \
 	\
 	unreachable(); \
@@ -98,10 +99,10 @@ __XCHG_GEN(_mb)
 })
 
 /* xchg */
-#define arch_xchg_relaxed(...)	__xchg_wrapper(    , __VA_ARGS__)
-#define arch_xchg_acquire(...)	__xchg_wrapper(_acq, __VA_ARGS__)
-#define arch_xchg_release(...)	__xchg_wrapper(_rel, __VA_ARGS__)
-#define arch_xchg(...)		__xchg_wrapper( _mb, __VA_ARGS__)
+#define xchg_relaxed(...)	__xchg_wrapper(    , __VA_ARGS__)
+#define xchg_acquire(...)	__xchg_wrapper(_acq, __VA_ARGS__)
+#define xchg_release(...)	__xchg_wrapper(_rel, __VA_ARGS__)
+#define xchg(...)		__xchg_wrapper( _mb, __VA_ARGS__)
 
 #define __CMPXCHG_CASE(name, sz) \
 static inline u##sz __cmpxchg_case_##name##sz(volatile void *ptr, \
@@ -148,7 +149,7 @@ __CMPXCHG_DBL(_mb)
 #undef __CMPXCHG_DBL
 
 #define __CMPXCHG_GEN(sfx) \
-static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
+static always_inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
 					unsigned long old, \
 					unsigned long new, \
 					int size) \
@@ -163,7 +164,7 @@ static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
 	case 8: \
 		return __cmpxchg_case##sfx##_64(ptr, old, new); \
 	default: \
-		BUILD_BUG(); \
+		return __bad_cmpxchg(ptr, size); \
 	} \
 	\
 	unreachable(); \
@@ -186,18 +187,18 @@ __CMPXCHG_GEN(_mb)
 })
 
 /* cmpxchg */
-#define arch_cmpxchg_relaxed(...)	__cmpxchg_wrapper( , __VA_ARGS__)
-#define arch_cmpxchg_acquire(...)	__cmpxchg_wrapper(_acq, __VA_ARGS__)
-#define arch_cmpxchg_release(...)	__cmpxchg_wrapper(_rel, __VA_ARGS__)
-#define arch_cmpxchg(...)		__cmpxchg_wrapper( _mb, __VA_ARGS__)
-#define arch_cmpxchg_local		arch_cmpxchg_relaxed
+#define cmpxchg_relaxed(...)	__cmpxchg_wrapper( , __VA_ARGS__)
+#define cmpxchg_acquire(...)	__cmpxchg_wrapper(_acq, __VA_ARGS__)
+#define cmpxchg_release(...)	__cmpxchg_wrapper(_rel, __VA_ARGS__)
+#define cmpxchg(...)		__cmpxchg_wrapper( _mb, __VA_ARGS__)
+#define cmpxchg_local		cmpxchg_relaxed
 
 /* cmpxchg64 */
-#define arch_cmpxchg64_relaxed		arch_cmpxchg_relaxed
-#define arch_cmpxchg64_acquire		arch_cmpxchg_acquire
-#define arch_cmpxchg64_release		arch_cmpxchg_release
-#define arch_cmpxchg64			arch_cmpxchg
-#define arch_cmpxchg64_local		arch_cmpxchg_local
+#define cmpxchg64_relaxed		cmpxchg_relaxed
+#define cmpxchg64_acquire		cmpxchg_acquire
+#define cmpxchg64_release		cmpxchg_release
+#define cmpxchg64			cmpxchg
+#define cmpxchg64_local			cmpxchg_local
 
 /* cmpxchg_double */
 #define system_has_cmpxchg_double()	1
@@ -205,11 +206,11 @@ __CMPXCHG_GEN(_mb)
 #define __cmpxchg_double_check(ptr1, ptr2) \
 ({ \
 	if (sizeof(*(ptr1)) != 8) \
-		BUILD_BUG(); \
-	VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1); \
+		__bad_cmpxchg(ptr1, sizeof(*(ptr1))); \
+	BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1); \
 })
 
-#define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \
+#define cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \
 ({ \
 	int __ret; \
 	__cmpxchg_double_check(ptr1, ptr2); \
@@ -219,7 +220,7 @@ __CMPXCHG_GEN(_mb)
 	__ret; \
 })
 
-#define arch_cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \
+#define cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \
 ({ \
 	int __ret; \
 	__cmpxchg_double_check(ptr1, ptr2); \
@@ -255,7 +256,7 @@ __CMPWAIT_CASE( ,  , 64);
 #undef __CMPWAIT_CASE
 
 #define __CMPWAIT_GEN(sfx) \
-static __always_inline void __cmpwait##sfx(volatile void *ptr, \
+static always_inline void __cmpwait##sfx(volatile void *ptr, \
 					   unsigned long val, \
 					   int size) \
 { \
@@ -269,7 +270,7 @@ static __always_inline void __cmpwait##sfx(volatile void *ptr, \
 	case 8: \
 		return __cmpwait_case##sfx##_64(ptr, val); \
 	default: \
-		BUILD_BUG(); \
+		__bad_cmpxchg(ptr, size); \
 	} \
 	\
 	unreachable(); \
@@ -282,4 +283,100 @@ __CMPWAIT_GEN()
 #define __cmpwait_relaxed(ptr, val) \
 	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
 
-#endif /* __ASM_CMPXCHG_H */
\ No newline at end of file
+/*
+ * This code is from the original Xen arm64 cmpxchg.h, from before the
+ * Linux 5.10-rc2 atomics helpers were ported over.
The only changes + * here are renaming the macros and functions to explicitly use + * "timeout" in their names so that they don't clash with the above. + * + * We need this here for guest atomics (the only user of the timeout + * variants). + */ + +#define __CMPXCHG_TIMEOUT_CASE(w, sz, name) \ +static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr, \ + unsigned long *old, \ + unsigned long new, \ + bool timeout, \ + unsigned int max_try) \ +{ \ + unsigned long oldval; \ + unsigned long res; \ + \ + do { \ + asm volatile("// __cmpxchg_timeout_case_" #name "\n" \ + " ldxr" #sz " %" #w "1, %2\n" \ + " mov %w0, #0\n" \ + " cmp %" #w "1, %" #w "3\n" \ + " b.ne 1f\n" \ + " stxr" #sz " %w0, %" #w "4, %2\n" \ + "1:\n" \ + : "=3D&r" (res), "=3D&r" (oldval), = \ + "+Q" (*(unsigned long *)ptr) \ + : "Ir" (*old), "r" (new) \ + : "cc"); \ + \ + if (!res) \ + break; \ + } while (!timeout || ((--max_try) > 0)); \ + \ + *old =3D oldval; \ + \ + return !res; \ +} + +__CMPXCHG_TIMEOUT_CASE(w, b, 1) +__CMPXCHG_TIMEOUT_CASE(w, h, 2) +__CMPXCHG_TIMEOUT_CASE(w, , 4) +__CMPXCHG_TIMEOUT_CASE( , , 8) + +static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long = *old, + unsigned long new, int size, + bool timeout, unsigned int max_try) +{ + switch (size) { + case 1: + return __cmpxchg_timeout_case_1(ptr, old, new, timeout, ma= x_try); + case 2: + return __cmpxchg_timeout_case_2(ptr, old, new, timeout, ma= x_try); + case 4: + return __cmpxchg_timeout_case_4(ptr, old, new, timeout, ma= x_try); + case 8: + return __cmpxchg_timeout_case_8(ptr, old, new, timeout, ma= x_try); + default: + return __bad_cmpxchg(ptr, size); + } + + ASSERT_UNREACHABLE(); +} + +/* + * The helper may fail to update the memory if the action takes too long. + * + * @old: On call the value pointed contains the expected old value. It wil= l be + * updated to the actual old value. + * @max_try: Maximum number of iterations + * + * The helper will return true when the update has succeeded (i.e no + * timeout) and false if the update has failed. 
+ */
+static always_inline bool __cmpxchg_timeout(volatile void *ptr,
+                                            unsigned long *old,
+                                            unsigned long new,
+                                            int size,
+                                            unsigned int max_try)
+{
+    bool ret;
+
+    smp_mb();
+    ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+    smp_mb();
+
+    return ret;
+}
+
+#define __cmpxchg64_timeout(ptr, old, new, max_try) \
+    __cmpxchg_timeout(ptr, old, new, 8, max_try)
+
+#endif /* __ASM_ARM_ARM64_CMPXCHG_H */
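For reference, a caller-side sketch of the timeout helper (the names
try_update_u64() and the retry bound are illustrative; the real
consumer is Xen's guest-atomics emulation):

    /*
     * Bounded compare-and-swap: gives up, returning false, if another
     * CPU keeps winning the exclusive monitor for max_try iterations.
     */
    static bool try_update_u64(volatile uint64_t *p, uint64_t expected,
                               uint64_t val)
    {
        unsigned long old = expected;

        return __cmpxchg64_timeout(p, &old, val, 100 /* max_try */);
    }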
-- 
2.24.3 (Apple Git-128)

From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 11/15] xen/arm64: port Linux's arm64 atomic.h to Xen
Date: Wed, 11 Nov 2020 21:51:59 +0000
Message-Id: <20201111215203.80336-12-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

- Drop atomic64_t helper declarations as we don't currently have an
  atomic64_t in Xen.

- Drop arch_* prefixes.

- Swap include of to just .
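With the arch_ prefixes dropped, the ATOMIC_OP() generator in the diff
below expands to plain, unprefixed helpers; e.g. ATOMIC_OP(atomic_add)
becomes, in sketch form:

    static inline void atomic_add(int i, atomic_t *v)
    {
        /* Dispatches to the LSE or LL/SC implementation at runtime. */
        __lse_ll_sc_body(atomic_add, i, v);
    }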
Signed-off-by: Ash Wilding
---
 xen/include/asm-arm/arm64/atomic.h | 256 ++++++++---------------------
 1 file changed, 73 insertions(+), 183 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
index a2eab9f091..b695cc6e09 100644
--- a/xen/include/asm-arm/arm64/atomic.h
+++ b/xen/include/asm-arm/arm64/atomic.h
@@ -1,23 +1,23 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
+
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_ATOMIC_H
-#define __ASM_ATOMIC_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_H
+#define __ASM_ARM_ARM64_ATOMIC_H
 
-#include
-#include
+#include
+#include
 
-#include
-#include
-#include
+#include "lse.h"
+#include "cmpxchg.h"
 
 #define ATOMIC_OP(op) \
-static inline void arch_##op(int i, atomic_t *v) \
+static inline void op(int i, atomic_t *v) \
 { \
     __lse_ll_sc_body(op, i, v); \
 }
@@ -32,7 +32,7 @@ ATOMIC_OP(atomic_sub)
 #undef ATOMIC_OP
 
 #define ATOMIC_FETCH_OP(name, op) \
-static inline int arch_##op##name(int i, atomic_t *v) \
+static inline int op##name(int i, atomic_t *v) \
 { \
     return __lse_ll_sc_body(op##name, i, v); \
 }
@@ -54,175 +54,65 @@ ATOMIC_FETCH_OPS(atomic_sub_return)
 
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_FETCH_OPS
-
-#define ATOMIC64_OP(op) \
-static inline void arch_##op(long i, atomic64_t *v) \
-{ \
-    __lse_ll_sc_body(op, i, v); \
-}
-
-ATOMIC64_OP(atomic64_andnot)
-ATOMIC64_OP(atomic64_or)
-ATOMIC64_OP(atomic64_xor)
-ATOMIC64_OP(atomic64_add)
-ATOMIC64_OP(atomic64_and)
-ATOMIC64_OP(atomic64_sub)
-
-#undef ATOMIC64_OP
-
-#define ATOMIC64_FETCH_OP(name, op) \
-static inline long arch_##op##name(long i, atomic64_t *v) \
-{ \
-    return __lse_ll_sc_body(op##name, i, v); \
-}
-
-#define ATOMIC64_FETCH_OPS(op) \
-    ATOMIC64_FETCH_OP(_relaxed, op) \
-    ATOMIC64_FETCH_OP(_acquire, op) \
-    ATOMIC64_FETCH_OP(_release, op) \
-    ATOMIC64_FETCH_OP( , op)
-
-ATOMIC64_FETCH_OPS(atomic64_fetch_andnot)
-ATOMIC64_FETCH_OPS(atomic64_fetch_or)
-ATOMIC64_FETCH_OPS(atomic64_fetch_xor)
-ATOMIC64_FETCH_OPS(atomic64_fetch_add)
-ATOMIC64_FETCH_OPS(atomic64_fetch_and)
-ATOMIC64_FETCH_OPS(atomic64_fetch_sub)
-ATOMIC64_FETCH_OPS(atomic64_add_return)
-ATOMIC64_FETCH_OPS(atomic64_sub_return)
-
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_FETCH_OPS
-
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
-{
-    return __lse_ll_sc_body(atomic64_dec_if_positive, v);
-}
-
-#define arch_atomic_read(v) __READ_ONCE((v)->counter)
-#define arch_atomic_set(v, i) __WRITE_ONCE(((v)->counter), (i))
-
-#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed
-#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire
-#define arch_atomic_add_return_release arch_atomic_add_return_release
-#define arch_atomic_add_return arch_atomic_add_return
-
-#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire
-#define arch_atomic_sub_return_release arch_atomic_sub_return_release
-#define arch_atomic_sub_return arch_atomic_sub_return
-
-#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release
-#define arch_atomic_fetch_add arch_atomic_fetch_add
-
-#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release
-#define arch_atomic_fetch_sub arch_atomic_fetch_sub
-
-#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release
-#define arch_atomic_fetch_and arch_atomic_fetch_and
-
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-
-#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release
-#define arch_atomic_fetch_or arch_atomic_fetch_or
-
-#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
-#define arch_atomic_fetch_xor arch_atomic_fetch_xor
-
-#define arch_atomic_xchg_relaxed(v, new) \
-    arch_xchg_relaxed(&((v)->counter), (new))
-#define arch_atomic_xchg_acquire(v, new) \
-    arch_xchg_acquire(&((v)->counter), (new))
-#define arch_atomic_xchg_release(v, new) \
-    arch_xchg_release(&((v)->counter), (new))
-#define arch_atomic_xchg(v, new) \
-    arch_xchg(&((v)->counter), (new))
-
-#define arch_atomic_cmpxchg_relaxed(v, old, new) \
-    arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_acquire(v, old, new) \
-    arch_cmpxchg_acquire(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_release(v, old, new) \
-    arch_cmpxchg_release(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg(v, old, new) \
-    arch_cmpxchg(&((v)->counter), (old), (new))
-
-#define arch_atomic_andnot arch_atomic_andnot
-
-/*
- * 64-bit arch_atomic operations.
- */
-#define ATOMIC64_INIT ATOMIC_INIT
-#define arch_atomic64_read arch_atomic_read
-#define arch_atomic64_set arch_atomic_set
-
-#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
-#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire
-#define arch_atomic64_add_return_release arch_atomic64_add_return_release
-#define arch_atomic64_add_return arch_atomic64_add_return
-
-#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
-#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
-#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release
-#define arch_atomic64_sub_return arch_atomic64_sub_return
-
-#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
-#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
-#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release
-#define arch_atomic64_fetch_add arch_atomic64_fetch_add
-
-#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
-#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
-#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
-#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
-
-#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
-#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
-#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release
-#define arch_atomic64_fetch_and arch_atomic64_fetch_and
-
-#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
-#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
-
-#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
-#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
-#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release
-#define arch_atomic64_fetch_or arch_atomic64_fetch_or
-
-#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
-#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
-#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
-#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
-
-#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic64_xchg_acquire arch_atomic_xchg_acquire
-#define arch_atomic64_xchg_release arch_atomic_xchg_release
-#define arch_atomic64_xchg arch_atomic_xchg
-
-#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#define arch_atomic64_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic64_cmpxchg_release arch_atomic_cmpxchg_release
-#define arch_atomic64_cmpxchg arch_atomic_cmpxchg
-
-#define arch_atomic64_andnot arch_atomic64_andnot
-
-#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
-
-#define ARCH_ATOMIC
-
-#endif /* __ASM_ATOMIC_H */
\ No newline at end of file
+#define atomic_read(v) __READ_ONCE((v)->counter)
+#define atomic_set(v, i) __WRITE_ONCE(((v)->counter), (i))
+
+#define atomic_add_return_relaxed atomic_add_return_relaxed
+#define atomic_add_return_acquire atomic_add_return_acquire
+#define atomic_add_return_release atomic_add_return_release
+#define atomic_add_return atomic_add_return
+
+#define atomic_sub_return_relaxed atomic_sub_return_relaxed
+#define atomic_sub_return_acquire atomic_sub_return_acquire
+#define atomic_sub_return_release atomic_sub_return_release
+#define atomic_sub_return atomic_sub_return
+
+#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
+#define atomic_fetch_add_acquire atomic_fetch_add_acquire
+#define atomic_fetch_add_release atomic_fetch_add_release
+#define atomic_fetch_add atomic_fetch_add
+
+#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
+#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire
+#define atomic_fetch_sub_release atomic_fetch_sub_release
+#define atomic_fetch_sub atomic_fetch_sub
+
+#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
+#define atomic_fetch_and_acquire atomic_fetch_and_acquire
+#define atomic_fetch_and_release atomic_fetch_and_release
+#define atomic_fetch_and atomic_fetch_and
+
+#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
+#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire
+#define atomic_fetch_andnot_release atomic_fetch_andnot_release
+#define atomic_fetch_andnot atomic_fetch_andnot
+
+#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed
+#define atomic_fetch_or_acquire atomic_fetch_or_acquire
+#define atomic_fetch_or_release atomic_fetch_or_release
+#define atomic_fetch_or atomic_fetch_or
+
+#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
+#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire
+#define atomic_fetch_xor_release atomic_fetch_xor_release
+#define atomic_fetch_xor atomic_fetch_xor
+
+#define atomic_xchg_relaxed(v, new) \
+    xchg_relaxed(&((v)->counter), (new))
+#define atomic_xchg_acquire(v, new) \
+    xchg_acquire(&((v)->counter), (new))
+#define atomic_xchg_release(v, new) \
+    xchg_release(&((v)->counter), (new))
+#define atomic_xchg(v, new) \
+    xchg(&((v)->counter), (new))
+
+#define atomic_cmpxchg_relaxed(v, old, new) \
+    cmpxchg_relaxed(&((v)->counter), (old), (new))
+#define atomic_cmpxchg_acquire(v, old, new) \
+    cmpxchg_acquire(&((v)->counter), (old), (new))
+#define atomic_cmpxchg_release(v, old, new) \
+    cmpxchg_release(&((v)->counter), (old), (new))
+
+#define atomic_andnot atomic_andnot
+
+#endif /* __ASM_ARM_ARM64_ATOMIC_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)
From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 12/15] xen/arm64: port Linux's arm64 lse.h to Xen
Date: Wed, 11 Nov 2020 21:52:00 +0000
Message-Id: <20201111215203.80336-13-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

This just involves making system_uses_lse_atomics() call
cpus_have_cap() instead of directly looking up in cpu_hwcap_keys.

I'm not 100% sure whether this is a valid transformation until I do a
run test.
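For reference, the dispatch macro this feeds into reads roughly as
follows in the Linux 5.10 source being ported (sketch, not part of
this diff):

    #define __lse_ll_sc_body(op, ...)           \
    ({                                          \
        system_uses_lse_atomics() ?             \
            __lse_##op(__VA_ARGS__) :           \
            __ll_sc_##op(__VA_ARGS__);          \
    })

so switching to cpus_have_cap(ARM64_HAS_LSE_ATOMICS) only changes how
the true/false decision is made, not which implementation is selected.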
Signed-off-by: Ash Wilding
---
 xen/include/asm-arm/arm64/lse.h | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/xen/include/asm-arm/arm64/lse.h b/xen/include/asm-arm/arm64/lse.h
index 704be3e4e4..847727f219 100644
--- a/xen/include/asm-arm/arm64/lse.h
+++ b/xen/include/asm-arm/arm64/lse.h
@@ -1,28 +1,28 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_LSE_H
-#define __ASM_LSE_H
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+#ifndef __ASM_ARM_ARM64_LSE_H
+#define __ASM_ARM_ARM64_LSE_H
 
-#include
+#include "atomic_ll_sc.h"
 
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 
 #define __LSE_PREAMBLE ".arch_extension lse\n"
 
-#include
-#include
-#include
-#include
+#include
+#include
+#include
+
 #include
-#include
-#include
 
-extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
-extern struct static_key_false arm64_const_caps_ready;
+#include "atomic_lse.h"
 
 static inline bool system_uses_lse_atomics(void)
 {
-    return (static_branch_likely(&arm64_const_caps_ready)) &&
-        static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]);
+    return cpus_have_cap(ARM64_HAS_LSE_ATOMICS);
 }
 
 #define __lse_ll_sc_body(op, ...) \
@@ -45,4 +45,4 @@ static inline bool system_uses_lse_atomics(void) { return false; }
 #define ARM64_LSE_ATOMIC_INSN(llsc, lse) llsc
 
 #endif /* CONFIG_ARM64_LSE_ATOMICS */
-#endif /* __ASM_LSE_H */
\ No newline at end of file
+#endif /* __ASM_ARM_ARM64_LSE_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)
From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 13/15] xen/arm32: port Linux's arm32 atomic.h to Xen
Date: Wed, 11 Nov 2020 21:52:01 +0000
Message-Id: <20201111215203.80336-14-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

- Drop arch_* prefixes.

- Redirect include of to , and swap usage of READ_ONCE()/WRITE_ONCE()
  to the __* versions accordingly. As discussed earlier in the series,
  we can do this because we're accessing an atomic_t's counter member,
  which is an int, so the extra checks performed by
  READ_ONCE()/WRITE_ONCE() are redundant; this also matches the Linux
  arm64 code, which already uses the __* variants.

- Drop support for pre-Armv7 systems.

- Drop atomic64_t helper definitions as we don't currently have an
  atomic64_t in Xen.

- Add explicit strict variants of atomic_{add,sub}_return(), as Linux
  does not define these for arm32 and they're needed for Xen. These
  strict variants are just wrappers that sandwich a call to the
  relaxed variant between two smp_mb()s.

Signed-off-by: Ash Wilding
---
 xen/include/asm-arm/arm32/atomic.h | 357 +++++----------------------
 1 file changed, 28 insertions(+), 329 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index ac6338dd9b..2d8cd3c586 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -1,31 +1,26 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- * Copyright (C) 1996 Russell King.
- * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_ARM_ATOMIC_H
-#define __ASM_ARM_ATOMIC_H
+#ifndef __ASM_ARM_ARM32_ATOMIC_H
+#define __ASM_ARM_ARM32_ATOMIC_H
 
-#include
-#include
-#include
-#include
-#include
-#include
-
-#ifdef __KERNEL__
+#include
+#include
+#include
+#include "system.h"
+#include "cmpxchg.h"
 
 /*
  * On ARM, ordinary assignment (str instruction) doesn't clear the local
  * strex/ldrex monitor on some implementations. The reason we can use it for
  * atomic_set() is the clrex or dummy strex done on every exception return.
  */
-#define atomic_read(v) READ_ONCE((v)->counter)
-#define atomic_set(v,i) WRITE_ONCE(((v)->counter), (i))
-
-#if __LINUX_ARM_ARCH__ >= 6
+#define atomic_read(v) __READ_ONCE((v)->counter)
+#define atomic_set(v,i) __WRITE_ONCE(((v)->counter), (i))
 
 /*
  * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
@@ -153,68 +148,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #define atomic_fetch_add_unless atomic_fetch_add_unless
 
-#else /* ARM_ARCH_6 */
-
-#ifdef CONFIG_SMP
-#error SMP not supported on pre-ARMv6 CPUs
-#endif
-
-#define ATOMIC_OP(op, c_op, asm_op) \
-static inline void atomic_##op(int i, atomic_t *v) \
-{ \
-    unsigned long flags; \
- \
-    raw_local_irq_save(flags); \
-    v->counter c_op i; \
-    raw_local_irq_restore(flags); \
-} \
-
-#define ATOMIC_OP_RETURN(op, c_op, asm_op) \
-static inline int atomic_##op##_return(int i, atomic_t *v) \
-{ \
-    unsigned long flags; \
-    int val; \
- \
-    raw_local_irq_save(flags); \
-    v->counter c_op i; \
-    val = v->counter; \
-    raw_local_irq_restore(flags); \
- \
-    return val; \
-}
-
-#define ATOMIC_FETCH_OP(op, c_op, asm_op) \
-static inline int atomic_fetch_##op(int i, atomic_t *v) \
-{ \
-    unsigned long flags; \
-    int val; \
- \
-    raw_local_irq_save(flags); \
-    val = v->counter; \
-    v->counter c_op i; \
-    raw_local_irq_restore(flags); \
- \
-    return val; \
-}
-
-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
-{
-    int ret;
-    unsigned long flags;
-
-    raw_local_irq_save(flags);
-    ret = v->counter;
-    if (likely(ret == old))
-        v->counter = new;
-    raw_local_irq_restore(flags);
-
-    return ret;
-}
-
-#define atomic_fetch_andnot atomic_fetch_andnot
-
-#endif /* __LINUX_ARM_ARCH__ */
-
 #define ATOMIC_OPS(op, c_op, asm_op) \
     ATOMIC_OP(op, c_op, asm_op) \
     ATOMIC_OP_RETURN(op, c_op, asm_op) \
@@ -242,266 +175,32 @@ ATOMIC_OPS(xor, ^=, eor)
 
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 
-#ifndef CONFIG_GENERIC_ATOMIC64
-typedef struct {
-    s64 counter;
-} atomic64_t;
+/*
+ * Linux doesn't define strict atomic_add_return() or atomic_sub_return()
+ * for /arch/arm -- Let's manually define these for Xen.
+ */
 
-#define ATOMIC64_INIT(i) { (i) }
-
-#ifdef CONFIG_ARM_LPAE
-static inline s64 atomic64_read(const atomic64_t *v)
-{
-    s64 result;
-
-    __asm__ __volatile__("@ atomic64_read\n"
-"   ldrd    %0, %H0, [%1]"
-    : "=&r" (result)
-    : "r" (&v->counter), "Qo" (v->counter)
-    );
-
-    return result;
-}
-
-static inline void atomic64_set(atomic64_t *v, s64 i)
-{
-    __asm__ __volatile__("@ atomic64_set\n"
"   strd    %2, %H2, [%1]"
-    : "=Qo" (v->counter)
-    : "r" (&v->counter), "r" (i)
-    );
-}
-#else
-static inline s64 atomic64_read(const atomic64_t *v)
-{
-    s64 result;
-
-    __asm__ __volatile__("@ atomic64_read\n"
-"   ldrexd  %0, %H0, [%1]"
-    : "=&r" (result)
-    : "r" (&v->counter), "Qo" (v->counter)
-    );
-
-    return result;
-}
-
-static inline void atomic64_set(atomic64_t *v, s64 i)
+static inline int atomic_add_return(int i, atomic_t *v)
 {
-    s64 tmp;
-
-    prefetchw(&v->counter);
-    __asm__ __volatile__("@ atomic64_set\n"
-"1: ldrexd  %0, %H0, [%2]\n"
-"   strexd  %0, %3, %H3, [%2]\n"
-"   teq %0, #0\n"
-"   bne 1b"
-    : "=&r" (tmp), "=Qo" (v->counter)
-    : "r" (&v->counter), "r" (i)
-    : "cc");
-}
-#endif
-
-#define ATOMIC64_OP(op, op1, op2) \
-static inline void atomic64_##op(s64 i, atomic64_t *v) \
-{ \
-    s64 result; \
-    unsigned long tmp; \
- \
-    prefetchw(&v->counter); \
-    __asm__ __volatile__("@ atomic64_" #op "\n" \
-"1: ldrexd  %0, %H0, [%3]\n" \
-"   " #op1 " %Q0, %Q0, %Q4\n" \
-"   " #op2 " %R0, %R0, %R4\n" \
-"   strexd  %1, %0, %H0, [%3]\n" \
-"   teq %1, #0\n" \
-"   bne 1b" \
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter) \
-    : "r" (&v->counter), "r" (i) \
-    : "cc"); \
-} \
-
-#define ATOMIC64_OP_RETURN(op, op1, op2) \
-static inline s64 \
-atomic64_##op##_return_relaxed(s64 i, atomic64_t *v) \
-{ \
-    s64 result; \
-    unsigned long tmp; \
- \
-    prefetchw(&v->counter); \
- \
-    __asm__ __volatile__("@ atomic64_" #op "_return\n" \
-"1: ldrexd  %0, %H0, [%3]\n" \
-"   " #op1 " %Q0, %Q0, %Q4\n" \
-"   " #op2 " %R0, %R0, %R4\n" \
-"   strexd  %1, %0, %H0, [%3]\n" \
-"   teq %1, #0\n" \
-"   bne 1b" \
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter) \
-    : "r" (&v->counter), "r" (i) \
-    : "cc"); \
- \
-    return result; \
-}
-
-#define ATOMIC64_FETCH_OP(op, op1, op2) \
-static inline s64 \
-atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v) \
-{ \
-    s64 result, val; \
-    unsigned long tmp; \
- \
-    prefetchw(&v->counter); \
- \
-    __asm__ __volatile__("@ atomic64_fetch_" #op "\n" \
-"1: ldrexd  %0, %H0, [%4]\n" \
-"   " #op1 " %Q1, %Q0, %Q5\n" \
-"   " #op2 " %R1, %R0, %R5\n" \
-"   strexd  %2, %1, %H1, [%4]\n" \
-"   teq %2, #0\n" \
-"   bne 1b" \
-    : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Qo" (v->counter) \
-    : "r" (&v->counter), "r" (i) \
-    : "cc"); \
- \
-    return result; \
-}
-
-#define ATOMIC64_OPS(op, op1, op2) \
-    ATOMIC64_OP(op, op1, op2) \
-    ATOMIC64_OP_RETURN(op, op1, op2) \
-    ATOMIC64_FETCH_OP(op, op1, op2)
-
-ATOMIC64_OPS(add, adds, adc)
-ATOMIC64_OPS(sub, subs, sbc)
-
-#define atomic64_add_return_relaxed atomic64_add_return_relaxed
-#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
-#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
-#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
-
-#undef ATOMIC64_OPS
-#define ATOMIC64_OPS(op, op1, op2) \
-    ATOMIC64_OP(op, op1, op2) \
-    ATOMIC64_FETCH_OP(op, op1, op2)
-
-#define atomic64_andnot atomic64_andnot
-
-ATOMIC64_OPS(and, and, and)
-ATOMIC64_OPS(andnot, bic, bic)
-ATOMIC64_OPS(or, orr, orr)
-ATOMIC64_OPS(xor, eor, eor)
-
-#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
-#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
-#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
-#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
-
-#undef ATOMIC64_OPS
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_OP_RETURN
-#undef ATOMIC64_OP
-
-static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
-{
-    s64 oldval;
-    unsigned long res;
-
-    prefetchw(&ptr->counter);
-
-    do {
-        __asm__ __volatile__("@ atomic64_cmpxchg\n"
-        "ldrexd     %1, %H1, [%3]\n"
-        "mov        %0, #0\n"
-        "teq        %1, %4\n"
-        "teqeq      %H1, %H4\n"
-        "strexdeq   %0, %5, %H5, [%3]"
-        : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
-        : "r" (&ptr->counter), "r" (old), "r" (new)
-        : "cc");
-    } while (res);
-
-    return oldval;
-}
-#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed
-
-static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
-{
-    s64 result;
-    unsigned long tmp;
-
-    prefetchw(&ptr->counter);
-
-    __asm__ __volatile__("@ atomic64_xchg\n"
-"1: ldrexd  %0, %H0, [%3]\n"
-"   strexd  %1, %4, %H4, [%3]\n"
-"   teq %1, #0\n"
-"   bne 1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (ptr->counter)
-    : "r" (&ptr->counter), "r" (new)
-    : "cc");
-
-    return result;
-}
-#define atomic64_xchg_relaxed atomic64_xchg_relaxed
-
-static inline s64 atomic64_dec_if_positive(atomic64_t *v)
-{
-    s64 result;
-    unsigned long tmp;
+    int ret;
 
     smp_mb();
-    prefetchw(&v->counter);
-
-    __asm__ __volatile__("@ atomic64_dec_if_positive\n"
-"1: ldrexd  %0, %H0, [%3]\n"
-"   subs    %Q0, %Q0, #1\n"
-"   sbc %R0, %R0, #0\n"
-"   teq %R0, #0\n"
-"   bmi 2f\n"
-"   strexd  %1, %0, %H0, [%3]\n"
-"   teq %1, #0\n"
-"   bne 1b\n"
-"2:"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter)
-    : "cc");
-
+    ret = atomic_add_return_relaxed(i, v);
     smp_mb();
 
-    return result;
+    return ret;
 }
-#define atomic64_dec_if_positive atomic64_dec_if_positive
+
+static inline int atomic_fetch_add(int i, atomic_t *v)
+{
+    int ret;
+
+    smp_mb();
+    ret = atomic_fetch_add_relaxed(i, v);
+    smp_mb();
+
+    return ret;
+}
 
-static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+static inline int atomic_sub_return(int i, atomic_t *v)
 {
-    s64 oldval, newval;
-    unsigned long tmp;
+    int ret;
 
     smp_mb();
-    prefetchw(&v->counter);
-
-    __asm__ __volatile__("@ atomic64_add_unless\n"
-"1: ldrexd  %0, %H0, [%4]\n"
-"   teq %0, %5\n"
-"   teqeq   %H0, %H5\n"
-"   beq 2f\n"
-"   adds    %Q1, %Q0, %Q6\n"
-"   adc %R1, %R0, %R6\n"
-"   strexd  %2, %1, %H1, [%4]\n"
-"   teq %2, #0\n"
-"   bne 1b\n"
-"2:"
-    : "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "r" (u), "r" (a)
-    : "cc");
-
-    if (oldval != u)
-        smp_mb();
+    ret = atomic_sub_return_relaxed(i, v);
+    smp_mb();
 
-    return oldval;
+    return ret;
 }
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
 
-#endif /* !CONFIG_GENERIC_ATOMIC64 */
-#endif
-#endif
\ No newline at end of file
+#endif /* __ASM_ARM_ARM32_ATOMIC_H */
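Note on the strict variants: the fetch_ forms return the value the
counter held *before* the operation, while the _return forms return
the value *after* it, so a strict atomic_fetch_add has to wrap
atomic_fetch_add_relaxed() rather than reuse atomic_add_return(). A
worked example, assuming v.counter == 1:

    atomic_fetch_add(2, &v);   /* returns 1, v.counter becomes 3 */
    atomic_add_return(2, &v);  /* returns 3, v.counter becomes 3 */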
-- 
2.24.3 (Apple Git-128)

From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding, julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 14/15] xen/arm32: port Linux's arm32 cmpxchg.h to Xen
Date: Wed, 11 Nov 2020 21:52:02 +0000
Message-Id: <20201111215203.80336-15-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>

- Drop support for pre-Armv7 systems, including the workarounds for
  the swp instruction being broken on StrongARM.

- Drop the _local variants, as they have no callers in Xen.

- Keep the compiler happy by fixing __cmpxchg64()'s ptr arg to be
  volatile, and by casting ptr to (const void *) in the call to
  prefetchw().

- Add explicit strict variants of xchg(), cmpxchg(), and cmpxchg64(),
  as the Linux arm32 cmpxchg.h doesn't define these and they're
  needed for Xen. These strict variants are just wrappers that
  sandwich a call to the relaxed variant between two smp_mb()s.

- Pull in the timeout variants of cmpxchg from the original Xen arm32
  cmpxchg.h, as these are required for guest atomics and are not
  provided by Linux.
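A sketch of the qualifier issue the prefetchw() cast avoids (assuming
prefetchw() takes a const void *, as the call sites in the diff below
imply; demo_prefetch() is an illustrative name):

    static inline void demo_prefetch(volatile unsigned long long *p)
    {
        /* Without the cast, the implicit conversion discards the
         * volatile qualifier and warns, which -Werror builds turn
         * into a hard error. */
        prefetchw((const void *)p);
    }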
Signed-off-by: Ash Wilding
---
 xen/include/asm-arm/arm32/cmpxchg.h | 322 ++++++++++++++++------------
 1 file changed, 188 insertions(+), 134 deletions(-)

diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
index 638ae84afb..d7189984d0 100644
--- a/xen/include/asm-arm/arm32/cmpxchg.h
+++ b/xen/include/asm-arm/arm32/cmpxchg.h
@@ -1,46 +1,24 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_ARM_CMPXCHG_H
-#define __ASM_ARM_CMPXCHG_H
-
-#include
-#include
-#include
-
-#if defined(CONFIG_CPU_SA1100) || defined(CONFIG_CPU_SA110)
 /*
- * On the StrongARM, "swp" is terminally broken since it bypasses the
- * cache totally.  This means that the cache becomes inconsistent, and,
- * since we use normal loads/stores as well, this is really bad.
- * Typically, this causes oopsen in filp_close, but could have other,
- * more disastrous effects.  There are two work-arounds:
- *  1. Disable interrupts and emulate the atomic swap
- *  2. Clean the cache, perform atomic swap, flush the cache
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- * We choose (1) since its the "easiest" to achieve here and is not
- * dependent on the processor type.
- *
- * NOTE that this solution won't work on an SMP system, so explcitly
- * forbid it here.
+ * SPDX-License-Identifier: GPL-2.0
  */
-#define swp_is_buggy
-#endif
+#ifndef __ASM_ARM_ARM32_CMPXCHG_H
+#define __ASM_ARM_ARM32_CMPXCHG_H
+
+#include
+#include
+
+extern void __bad_cmpxchg(volatile void *ptr, int size);
 
 static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
 {
-    extern void __bad_xchg(volatile void *, int);
     unsigned long ret;
-#ifdef swp_is_buggy
-    unsigned long flags;
-#endif
-#if __LINUX_ARM_ARCH__ >= 6
     unsigned int tmp;
-#endif
 
     prefetchw((const void *)ptr);
 
     switch (size) {
-#if __LINUX_ARM_ARCH__ >= 6
-#ifndef CONFIG_CPU_V6   /* MIN ARCH >= V6K */
     case 1:
         asm volatile("@ __xchg1\n"
         "1: ldrexb  %0, [%3]\n"
@@ -61,7 +39,6 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
         : "r" (x), "r" (ptr)
         : "memory", "cc");
         break;
-#endif
     case 4:
         asm volatile("@ __xchg4\n"
         "1: ldrex   %0, [%3]\n"
@@ -72,42 +49,10 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
         : "r" (x), "r" (ptr)
         : "memory", "cc");
         break;
-#elif defined(swp_is_buggy)
-#ifdef CONFIG_SMP
-#error SMP is not supported on this platform
-#endif
-    case 1:
-        raw_local_irq_save(flags);
-        ret = *(volatile unsigned char *)ptr;
-        *(volatile unsigned char *)ptr = x;
-        raw_local_irq_restore(flags);
-        break;
 
-    case 4:
-        raw_local_irq_save(flags);
-        ret = *(volatile unsigned long *)ptr;
-        *(volatile unsigned long *)ptr = x;
-        raw_local_irq_restore(flags);
-        break;
-#else
-    case 1:
-        asm volatile("@ __xchg1\n"
-        "   swpb    %0, %1, [%2]"
-        : "=&r" (ret)
-        : "r" (x), "r" (ptr)
-        : "memory", "cc");
-        break;
-    case 4:
-        asm volatile("@ __xchg4\n"
-        "   swp %0, %1, [%2]"
-        : "=&r" (ret)
-        : "r" (x), "r" (ptr)
-        : "memory", "cc");
-        break;
-#endif
     default:
-        /* Cause a link-time error, the xchg() size is not supported */
-        __bad_xchg(ptr, size), ret = 0;
+        /* Cause a link-time error, the size is not supported */
+        __bad_cmpxchg(ptr, size), ret = 0;
         break;
     }
 
@@ -119,40 +64,6 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
             sizeof(*(ptr))); \
 })
 
-#include
-
-#if __LINUX_ARM_ARCH__ < 6
-/* min ARCH < ARMv6 */
-
-#ifdef CONFIG_SMP
-#error "SMP is not supported on this platform"
-#endif
-
-#define xchg xchg_relaxed
-
-/*
- * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
- * them available.
- */
-#define cmpxchg_local(ptr, o, n) ({ \
-    (__typeof(*ptr))__cmpxchg_local_generic((ptr), \
-                    (unsigned long)(o), \
-                    (unsigned long)(n), \
-                    sizeof(*(ptr))); \
-})
-
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
-
-#include
-
-#else   /* min ARCH >= ARMv6 */
-
-extern void __bad_cmpxchg(volatile void *ptr, int size);
-
-/*
- * cmpxchg only support 32-bits operands on ARMv6.
- */
-
 static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
                       unsigned long new, int size)
 {
@@ -161,7 +72,6 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
     prefetchw((const void *)ptr);
 
     switch (size) {
-#ifndef CONFIG_CPU_V6   /* min ARCH >= ARMv6K */
     case 1:
         do {
             asm volatile("@ __cmpxchg1\n"
@@ -186,7 +96,6 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
             : "memory", "cc");
         } while (res);
         break;
-#endif
     case 4:
         do {
             asm volatile("@ __cmpxchg4\n"
@@ -199,6 +108,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
             : "memory", "cc");
         } while (res);
         break;
+
     default:
         __bad_cmpxchg(ptr, size);
         oldval = 0;
@@ -214,41 +124,14 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
                 sizeof(*(ptr))); \
 })
 
-static inline unsigned long __cmpxchg_local(volatile void *ptr,
-                        unsigned long old,
-                        unsigned long new, int size)
-{
-    unsigned long ret;
-
-    switch (size) {
-#ifdef CONFIG_CPU_V6    /* min ARCH == ARMv6 */
-    case 1:
-    case 2:
-        ret = __cmpxchg_local_generic(ptr, old, new, size);
-        break;
-#endif
-    default:
-        ret = __cmpxchg(ptr, old, new, size);
-    }
-
-    return ret;
-}
-
-#define cmpxchg_local(ptr, o, n) ({ \
-    (__typeof(*ptr))__cmpxchg_local((ptr), \
-                    (unsigned long)(o), \
-                    (unsigned long)(n), \
-                    sizeof(*(ptr))); \
-})
-
-static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
+static inline unsigned long long __cmpxchg64(volatile unsigned long long *ptr,
                          unsigned long long old,
                          unsigned long long new)
 {
     unsigned long long oldval;
     unsigned long res;
 
-    prefetchw(ptr);
+    prefetchw((const void *)ptr);
 
     __asm__ __volatile__(
     "1: ldrexd      %1, %H1, [%3]\n"
@@ -272,8 +155,179 @@ static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
                     (unsigned long long)(n)); \
 })
 
-#define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n))
 
-#endif  /* __LINUX_ARM_ARCH__ >= 6 */
+/*
+ * Linux doesn't provide strict versions of xchg(), cmpxchg(), and cmpxchg64(),
+ * so manually define these for Xen as smp_mb() wrappers around the relaxed
+ * variants.
+ */
 
-#endif /* __ASM_ARM_CMPXCHG_H */
\ No newline at end of file
+#define xchg(ptr, x) ({ \
+    long ret; \
+    smp_mb(); \
+    ret = xchg_relaxed(ptr, x); \
+    smp_mb(); \
+    ret; \
+})
+
+#define cmpxchg(ptr, o, n) ({ \
+    long ret; \
+    smp_mb(); \
+    ret = cmpxchg_relaxed(ptr, o, n); \
+    smp_mb(); \
+    ret; \
+})
+
+#define cmpxchg64(ptr, o, n) ({ \
+    long long ret; \
+    smp_mb(); \
+    ret = cmpxchg64_relaxed(ptr, o, n); \
+    smp_mb(); \
+    ret; \
+})
+
+/*
+ * This code is from the original Xen arm32 cmpxchg.h, from before the
+ * Linux 5.10-rc2 atomics helpers were ported over. The only changes
+ * here are renaming the macros and functions to explicitly use
+ * "timeout" in their names so that they don't clash with the above.
+ *
+ * We need this here for guest atomics (the only user of the timeout
+ * variants).
+ */
+
+#define __CMPXCHG_TIMEOUT_CASE(sz, name)                              \
+static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr,  \
+                                                 unsigned long *old,  \
+                                                 unsigned long new,   \
+                                                 bool timeout,        \
+                                                 unsigned int max_try) \
+{                                                                     \
+    unsigned long oldval;                                             \
+    unsigned long res;                                                \
+                                                                      \
+    do {                                                              \
+        asm volatile("@ __cmpxchg_timeout_case_" #name "\n"           \
+        "    ldrex" #sz "    %1, [%2]\n"                              \
+        "    mov    %0, #0\n"                                         \
+        "    teq    %1, %3\n"                                         \
+        "    strex" #sz "eq %0, %4, [%2]\n"                           \
+        : "=&r" (res), "=&r" (oldval)                                 \
+        : "r" (ptr), "Ir" (*old), "r" (new)                           \
+        : "memory", "cc");                                            \
+                                                                      \
+        if (!res)                                                     \
+            break;                                                    \
+    } while (!timeout || ((--max_try) > 0));                          \
+                                                                      \
+    *old = oldval;                                                    \
+                                                                      \
+    return !res;                                                      \
+}
+
+__CMPXCHG_TIMEOUT_CASE(b, 1)
+__CMPXCHG_TIMEOUT_CASE(h, 2)
+__CMPXCHG_TIMEOUT_CASE( , 4)
+
+static inline bool __cmpxchg_timeout_case_8(volatile uint64_t *ptr,
+                                            uint64_t *old,
+                                            uint64_t new,
+                                            bool timeout,
+                                            unsigned int max_try)
+{
+    uint64_t oldval;
+    uint64_t res;
+
+    do {
+        asm volatile(
+        "    ldrexd        %1, %H1, [%3]\n"
+        "    teq        %1, %4\n"
+        "    teqeq        %H1, %H4\n"
+        "    movne        %0, #0\n"
+        "    movne        %H0, #0\n"
+        "    bne        2f\n"
+        "    strexd        %0, %5, %H5, [%3]\n"
+        "2:"
+        : "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
+        : "r" (ptr), "r" (*old), "r" (new)
+        : "memory", "cc");
+        if (!res)
+            break;
+    } while (!timeout || ((--max_try) > 0));
+
+    *old = oldval;
+
+    return !res;
+}
+
+static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
+                                        unsigned long new, int size,
+                                        bool timeout, unsigned int max_try)
+{
+    prefetchw((const void *)ptr);
+
+    switch (size) {
+    case 1:
+        return __cmpxchg_timeout_case_1(ptr, old, new, timeout, max_try);
+    case 2:
+        return __cmpxchg_timeout_case_2(ptr, old, new, timeout, max_try);
+    case 4:
+        return __cmpxchg_timeout_case_4(ptr, old, new, timeout, max_try);
+    default:
+        __bad_cmpxchg(ptr, size);
+        return false;
+    }
+
+    ASSERT_UNREACHABLE();
+}
+
+/*
+ * The helper may fail to update the memory if the action takes too long.
+ *
+ * @old: On call the value pointed contains the expected old value. It will be
+ * updated to the actual old value.
+ * @max_try: Maximum number of iterations
+ *
+ * The helper will return true when the update has succeeded (i.e no
+ * timeout) and false if the update has failed.
+ */
+static always_inline bool __cmpxchg_timeout(volatile void *ptr,
+                                            unsigned long *old,
+                                            unsigned long new,
+                                            int size,
+                                            unsigned int max_try)
+{
+    bool ret;
+
+    smp_mb();
+    ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+    smp_mb();
+
+    return ret;
+}
+
+/*
+ * The helper may fail to update the memory if the action takes too long.
+ *
+ * @old: On call the value pointed contains the expected old value. It will be
+ * updated to the actual old value.
+ * @max_try: Maximum number of iterations
+ *
+ * The helper will return true when the update has succeeded (i.e no
+ * timeout) and false if the update has failed.
+ */
+static always_inline bool __cmpxchg64_timeout(volatile uint64_t *ptr,
+                                              uint64_t *old,
+                                              uint64_t new,
+                                              unsigned int max_try)
+{
+    bool ret;
+
+    smp_mb();
+    ret = __cmpxchg_timeout_case_8(ptr, old, new, true, max_try);
+    smp_mb();
+
+    return ret;
+}
+
+#endif /* __ASM_ARM_ARM32_CMPXCHG_H */
-- 
2.24.3 (Apple Git-128)
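[As an aside, the contract of the timeout helpers above is easiest to see
from a caller's point of view. A minimal sketch, not part of the patch:
demo_try_set() is a made-up name, the 0 -> 1 transition and the retry
budget of 16 are arbitrary, and it assumes the header above is in scope:

    /* Try to swing *p from 0 to 1, bounded so contention cannot stall us. */
    static bool demo_try_set(volatile uint32_t *p)
    {
        unsigned long old = 0;          /* value we expect to find in *p */

        if ( __cmpxchg_timeout(p, &old, 1, sizeof(*p), 16) )
            return true;                /* store hit memory within 16 tries */

        /* On failure, 'old' now holds the value actually observed in *p. */
        return false;
    }

The bounded LDREX/STREX loop is the point of these variants: an unbounded
cmpxchg() retry loop on memory shared with a guest could, in principle, be
kept failing indefinitely by that guest hammering the same location, which
is why guest atomics need the timeout form.]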
From nobody Fri Apr 19 03:27:38 2024
From: Ash Wilding <ash.j.wilding@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>, julien@xen.org,
    bertrand.marquis@arm.com, rahul.singh@arm.com
Subject: [RFC PATCH v2 15/15] xen/arm: remove dependency on gcc built-in
    __sync_fetch_and_add()
Date: Wed, 11 Nov 2020 21:52:03 +0000
Message-Id: <20201111215203.80336-16-ash.j.wilding@gmail.com>
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>

From: Ash Wilding <ash.j.wilding@gmail.com>

Now that we
have explicit implementations of LL/SC and LSE atomics helpers after
porting Linux's versions to Xen, we can drop the reference to gcc's
built-in __sync_fetch_and_add().

This requires some fudging using container_of() because the users of
__sync_fetch_and_add(), namely xen/spinlock.c, expect ptr to point
directly at the u32 being modified, while the atomics helpers expect
ptr to point at an atomic_t and then access that atomic_t's counter
member. By using container_of() we can create a "fake" (atomic_t *)
pointer and pass that to the atomic_fetch_add() we ported from Linux.

NOTE: spinlock.c uses u32 for the value being added, while the atomics
helpers use int for their counter member. This shouldn't actually
matter, because we do the addition in assembly and the compiler isn't
smart enough to detect potential signed integer overflow in inline
assembly, but I thought it worth calling out in the commit message.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/system.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 65d5c8e423..0326e3ade4 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -58,7 +58,14 @@ static inline int local_abort_is_enabled(void)
     return !(flags & PSR_ABT_MASK);
 }
 
-#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+#define arch_fetch_and_add(ptr, x) ({                                  \
+    int ret;                                                           \
+                                                                       \
+    atomic_t *tmp = container_of((int *)(ptr), atomic_t, counter);     \
+    ret = atomic_fetch_add(x, tmp);                                    \
+                                                                       \
+    ret;                                                               \
+})
 
 extern struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next);
 
-- 
2.24.3 (Apple Git-128)
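[As an aside, the container_of() fudge can be demonstrated in isolation
with plain C. A minimal standalone sketch, not part of the patch:
container_of() is spelled out, and a GCC __atomic builtin stands in for
the ported LL/SC helper; all names here are illustrative:

    #include <stdio.h>
    #include <stddef.h>

    /* Spelled-out container_of(), as found in the Linux/Xen headers. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    typedef struct { int counter; } atomic_t;

    /* Stand-in for the ported helper; the real one is LL/SC assembly. */
    static int atomic_fetch_add(int i, atomic_t *v)
    {
        return __atomic_fetch_add(&v->counter, i, __ATOMIC_SEQ_CST);
    }

    int main(void)
    {
        unsigned int val = 41;  /* spinlock.c hands over a bare u32... */

        /* ...so disguise its address as an atomic_t's counter member. */
        atomic_t *fake = container_of((int *)&val, atomic_t, counter);
        int old = atomic_fetch_add(1, fake);

        printf("old=%d new=%u\n", old, val);  /* prints old=41 new=42 */
        return 0;
    }

Because counter is the first (and only) member, the container_of() here
compiles down to a plain cast; its job is to satisfy the helper's
(atomic_t *) signature without touching the spinlock.c call sites.]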