From: Sean Christopherson
Date: Wed, 24 Aug 2022 03:21:13 +0000
Message-Id: <20220824032115.3563686-5-seanjc@google.com>
In-Reply-To: <20220824032115.3563686-1-seanjc@google.com>
References: <20220824032115.3563686-1-seanjc@google.com>
Subject: [PATCH v4 4/6] tools: Add atomic_test_and_set_bit()
To: Paolo Bonzini, Marc Zyngier, Anup Patel, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Nathan Chancellor, Nick Desaulniers
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Oliver Upton,
    Atish Patra, David Hildenbrand, Tom Rix, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    llvm@lists.linux.dev, linux-kernel@vger.kernel.org, Colton Lewis,
    Peter Gonda, Andrew Jones, Sean Christopherson

From: Peter Gonda

Add x86 and generic implementations of atomic_test_and_set_bit() to allow
KVM selftests to atomically manage bitmaps.
Note, the generic version is taken from arch_test_and_set_bit() as of
commit 415d83249709 ("locking/atomic: Make test_and_*_bit() ordered on
failure").

Signed-off-by: Peter Gonda
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 tools/arch/x86/include/asm/atomic.h    |  6 ++++++
 tools/include/asm-generic/atomic-gcc.h | 12 ++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/tools/arch/x86/include/asm/atomic.h b/tools/arch/x86/include/asm/atomic.h
index 1f5e26aae9fc..01cc27ec4520 100644
--- a/tools/arch/x86/include/asm/atomic.h
+++ b/tools/arch/x86/include/asm/atomic.h
@@ -8,6 +8,7 @@
 
 #define LOCK_PREFIX "\n\tlock; "
 
+#include <asm/asm.h>
 #include <asm/cmpxchg.h>
 
 /*
@@ -70,4 +71,9 @@ static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 	return cmpxchg(&v->counter, old, new);
 }
 
+static inline int atomic_test_and_set_bit(long nr, unsigned long *addr)
+{
+	GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(bts), *addr, "Ir", nr, "%0", "c");
+}
+
 #endif /* _TOOLS_LINUX_ASM_X86_ATOMIC_H */
diff --git a/tools/include/asm-generic/atomic-gcc.h b/tools/include/asm-generic/atomic-gcc.h
index 4c1966f7c77a..6daa68bf5b9e 100644
--- a/tools/include/asm-generic/atomic-gcc.h
+++ b/tools/include/asm-generic/atomic-gcc.h
@@ -4,6 +4,7 @@
 
 #include <linux/compiler.h>
 #include <linux/types.h>
+#include <linux/bitops.h>
 
 /*
  * Atomic operations that C can't guarantee us.  Useful for
@@ -69,4 +70,15 @@ static inline int atomic_cmpxchg(atomic_t *v, int oldval, int newval)
 	return cmpxchg(&(v)->counter, oldval, newval);
 }
 
+static inline int atomic_test_and_set_bit(long nr, unsigned long *addr)
+{
+	unsigned long mask = BIT_MASK(nr);
+	long old;
+
+	addr += BIT_WORD(nr);
+
+	old = __sync_fetch_and_or(addr, mask);
+	return !!(old & mask);
+}
+
 #endif /* __TOOLS_ASM_GENERIC_ATOMIC_H */
-- 
2.37.1.595.g718a3a8f04-goog