From nobody Tue Feb 10 18:36:13 2026
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AB63D1EF01 for ; Fri, 27 Dec 2024 01:10:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F2151C4CED2; Fri, 27 Dec 2024 01:10:41 +0000 (UTC)
From: guoren@kernel.org
To: paul.walmsley@sifive.com, palmer@dabbelt.com, guoren@kernel.org,
	bjorn@rivosinc.com, conor@kernel.org, leobras@redhat.com,
	peterz@infradead.org, parri.andrea@gmail.com, will@kernel.org,
	longman@redhat.com, boqun.feng@gmail.com, arnd@arndb.de,
	alexghiti@rivosinc.com, ajones@ventanamicro.com,
	rkrcmar@ventanamicro.com, atishp@rivosinc.com
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Guo Ren
Subject: [RFC PATCH V2 4/4] RISC-V: paravirt: Enable virt_spin_lock() when
 PARAVIRT_SPINLOCKS is disabled
Date: Thu, 26 Dec 2024 20:10:11 -0500
Message-Id: <20241227011011.2331381-5-guoren@kernel.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20241227011011.2331381-1-guoren@kernel.org>
References: <20241227011011.2331381-1-guoren@kernel.org>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Guo Ren

VM guests should fall back to a Test-and-Set spinlock when
PARAVIRT_SPINLOCKS is disabled, because fair locks have horrible lock
'holder' preemption issues. The virt_spin_lock_key shortcuts
queued_spin_lock_slowpath() so that virt_spin_lock() can hijack it;
ref: commit 43b3f02899f7 ("locking/qspinlock/x86: Fix performance
regression under unaccelerated VMs").

Add a static key controlling whether virt_spin_lock() should be
called. Add nopvspin support, as on x86.
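[Editor's note, not part of the patch: the Test-and-Set fallback described
above can be modeled in stand-alone C11 with a hypothetical `toy_qspinlock`
type and `<stdatomic.h>` builtins in place of the kernel's
smp_cond_load_relaxed()/atomic_cmpxchg() primitives; this is a sketch of the
idea, not the kernel implementation.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-alone model of the fallback: poll until the lock
 * word reads 0, then try to claim it with a compare-and-swap. This
 * mirrors the loop the patch adds to asm/qspinlock.h, using C11
 * atomics instead of the kernel's primitives. */
struct toy_qspinlock {
	atomic_int val;
};

#define TOY_LOCKED 1

static bool toy_test_and_set_lock(struct toy_qspinlock *lock)
{
	int expected;

	do {
		/* wait until the lock looks free (relaxed polling) */
		while (atomic_load_explicit(&lock->val,
					    memory_order_relaxed) != 0)
			;
		expected = 0;
		/* acquire semantics on success, like atomic_cmpxchg() */
	} while (!atomic_compare_exchange_strong_explicit(&lock->val,
			&expected, TOY_LOCKED,
			memory_order_acquire, memory_order_relaxed));

	return true;
}

static void toy_unlock(struct toy_qspinlock *lock)
{
	/* release store, like the qspinlock unlock fast path */
	atomic_store_explicit(&lock->val, 0, memory_order_release);
}
```

Because the lock is unfair (any waiter can win the cmpxchg), it avoids the
lock-holder preemption problem that queued (FIFO) locks suffer in VMs.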
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 .../admin-guide/kernel-parameters.txt  |  2 +-
 arch/riscv/include/asm/qspinlock.h     | 24 +++++++++++++++
 arch/riscv/include/asm/sbi.h           | 16 ++++++++++
 arch/riscv/kernel/qspinlock_paravirt.c |  3 ++
 arch/riscv/kernel/sbi.c                |  2 +-
 arch/riscv/kernel/setup.c              | 29 +++++++++++++++++++
 6 files changed, 74 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3872bc6ec49d..541202ff0f36 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4048,7 +4048,7 @@
 			as generic guest with no PV drivers. Currently support
 			XEN HVM, KVM, HYPER_V and VMWARE guest.
 
-	nopvspin	[X86,XEN,KVM,EARLY]
+	nopvspin	[X86,RISCV,XEN,KVM,EARLY]
 			Disables the qspinlock slow path using PV optimizations
 			which allow the hypervisor to 'idle' the guest on lock
 			contention.
diff --git a/arch/riscv/include/asm/qspinlock.h b/arch/riscv/include/asm/qspinlock.h
index 1d9f32334ff1..4a62dcb43617 100644
--- a/arch/riscv/include/asm/qspinlock.h
+++ b/arch/riscv/include/asm/qspinlock.h
@@ -14,6 +14,8 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD		(1 << 15)
 
+extern bool nopvspin;
+
 void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
 void __pv_init_lock_hash(void);
 void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
@@ -31,5 +33,27 @@ static inline void queued_spin_unlock(struct qspinlock *lock)
 #endif /* CONFIG_PARAVIRT_SPINLOCKS */
 
 #include
+#include
+
+/*
+ * KVM guests fall back to a Test-and-Set spinlock, because fair locks
+ * have horrible lock 'holder' preemption issues. The
+ * test_and_set_spinlock_key shortcuts queued_spin_lock_slowpath() and
+ * allows virt_spin_lock() to hijack it.
+ */
+DECLARE_STATIC_KEY_FALSE(test_and_set_spinlock_key);
+
+#define virt_spin_lock test_and_set_spinlock
+static inline bool test_and_set_spinlock(struct qspinlock *lock)
+{
+	if (!static_branch_likely(&test_and_set_spinlock_key))
+		return false;
+
+	do {
+		smp_cond_load_relaxed((s32 *)&lock->val, VAL == 0);
+	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);
+
+	return true;
+}
 
 #endif /* _ASM_RISCV_QSPINLOCK_H */
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 03e719f076ad..13669c29ead3 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -56,6 +56,21 @@ enum sbi_ext_base_fid {
 	SBI_EXT_BASE_GET_MIMPID,
 };
 
+enum sbi_ext_base_impl_id {
+	SBI_EXT_BASE_IMPL_ID_BBL = 0,
+	SBI_EXT_BASE_IMPL_ID_OPENSBI,
+	SBI_EXT_BASE_IMPL_ID_XVISOR,
+	SBI_EXT_BASE_IMPL_ID_KVM,
+	SBI_EXT_BASE_IMPL_ID_RUSTSBI,
+	SBI_EXT_BASE_IMPL_ID_DIOSIX,
+	SBI_EXT_BASE_IMPL_ID_COFFER,
+	SBI_EXT_BASE_IMPL_ID_XEN,
+	SBI_EXT_BASE_IMPL_ID_POLARFIRE,
+	SBI_EXT_BASE_IMPL_ID_COREBOOT,
+	SBI_EXT_BASE_IMPL_ID_OREBOOT,
+	SBI_EXT_BASE_IMPL_ID_BHYVE,
+};
+
 enum sbi_ext_time_fid {
 	SBI_EXT_TIME_SET_TIMER = 0,
 };
@@ -449,6 +464,7 @@ static inline int sbi_console_getchar(void) { return -ENOENT; }
 long sbi_get_mvendorid(void);
 long sbi_get_marchid(void);
 long sbi_get_mimpid(void);
+long sbi_get_firmware_id(void);
 void sbi_set_timer(uint64_t stime_value);
 void sbi_shutdown(void);
 void sbi_send_ipi(unsigned int cpu);
diff --git a/arch/riscv/kernel/qspinlock_paravirt.c b/arch/riscv/kernel/qspinlock_paravirt.c
index 781ed2190334..0f2e50754689 100644
--- a/arch/riscv/kernel/qspinlock_paravirt.c
+++ b/arch/riscv/kernel/qspinlock_paravirt.c
@@ -58,6 +58,9 @@ void __init pv_qspinlock_init(void)
 	if (!sbi_probe_extension(SBI_EXT_PVLOCK))
 		return;
 
+	if (nopvspin)
+		return;
+
 	pr_info("PV qspinlocks enabled\n");
 	__pv_init_lock_hash();
 
diff --git a/arch/riscv/kernel/sbi.c b/arch/riscv/kernel/sbi.c
index 1989b8cade1b..2cbacc345e5d 100644
--- a/arch/riscv/kernel/sbi.c
+++ b/arch/riscv/kernel/sbi.c
@@ -488,7 +488,7 @@ static inline long sbi_get_spec_version(void)
 	return __sbi_base_ecall(SBI_EXT_BASE_GET_SPEC_VERSION);
 }
 
-static inline long sbi_get_firmware_id(void)
+long sbi_get_firmware_id(void)
 {
 	return __sbi_base_ecall(SBI_EXT_BASE_GET_IMP_ID);
 }
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index 8b51ff5c7300..bb76133e684f 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -249,6 +249,32 @@ DEFINE_STATIC_KEY_TRUE(qspinlock_key);
 EXPORT_SYMBOL(qspinlock_key);
 #endif
 
+#ifdef CONFIG_QUEUED_SPINLOCKS
+DEFINE_STATIC_KEY_FALSE(test_and_set_spinlock_key);
+
+static bool __init virt_spin_lock_init(void)
+{
+	if (sbi_get_firmware_id() == SBI_EXT_BASE_IMPL_ID_KVM &&
+	    !IS_ENABLED(CONFIG_PARAVIRT_SPINLOCKS))
+		goto out;
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+	if (sbi_probe_extension(SBI_EXT_PVLOCK) && nopvspin)
+		goto out;
+#endif
+
+	return false;
+out:
+	static_branch_enable(&test_and_set_spinlock_key);
+	return true;
+}
+#else
+static bool __init virt_spin_lock_init(void)
+{
+	return false;
+}
+#endif
+
 static void __init riscv_spinlock_init(void)
 {
 	char *using_ext = NULL;
@@ -274,6 +300,9 @@ static void __init riscv_spinlock_init(void)
 	}
 #endif
 
+	if (virt_spin_lock_init())
+		using_ext = "using test and set";
+
 	if (!using_ext)
 		pr_err("Queued spinlock without Zabha or Ziccrse");
 	else
-- 
2.40.1
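[Editor's note, not part of the patch: the gating decision the setup.c hunk
makes in virt_spin_lock_init() can be restated as a small pure predicate.
The function name `want_test_and_set` and its boolean parameters are
hypothetical stand-ins for the SBI firmware-ID probe, the Kconfig option,
the SBI_EXT_PVLOCK probe, and the nopvspin command-line flag.]

```c
#include <stdbool.h>

/* Sketch of virt_spin_lock_init()'s decision: take the test-and-set
 * fallback when (a) we are a KVM guest built without
 * CONFIG_PARAVIRT_SPINLOCKS, or (b) PV spinlocks are built in and the
 * SBI PVLOCK extension exists, but the user passed nopvspin. */
static bool want_test_and_set(bool is_kvm_guest, bool have_pv_spinlocks,
			      bool sbi_has_pvlock, bool nopvspin)
{
	if (is_kvm_guest && !have_pv_spinlocks)
		return true;

	if (have_pv_spinlocks && sbi_has_pvlock && nopvspin)
		return true;

	return false;
}
```

When the predicate is false, riscv_spinlock_init() proceeds to the queued
(Zabha/Ziccrse) or ticket paths as before.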