From nobody Fri Dec 19 15:38:54 2025
From: Lyude Paul
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
 Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Subject: [PATCH v16 01/17] preempt: Introduce HARDIRQ_DISABLE_BITS
Date: Mon, 15 Dec 2025 12:57:48 -0500
Message-ID: <20251215175806.102713-2-lyude@redhat.com>
In-Reply-To: <20251215175806.102713-1-lyude@redhat.com>
References: <20251215175806.102713-1-lyude@redhat.com>

From: Boqun Feng

In order to support preempt_disable()-like interrupt disabling, that is,
using part of preempt_count() to track the interrupt-disable nesting
level, change the preempt_count() layout to contain an 8-bit
HARDIRQ_DISABLE count.

Note that HARDIRQ_BITS and NMI_BITS are each reduced by 1 as a result,
which lowers the maximum hardirq and NMI nesting levels.

Signed-off-by: Boqun Feng
Signed-off-by: Lyude Paul
---
V14:
* Fix HARDIRQ_DISABLE_MASK definition

 include/linux/preempt.h | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index d964f965c8ffc..f07e7f37f3ca5 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -17,6 +17,7 @@
  *
  * - bits 0-7 are the preemption count (max preemption depth: 256)
  * - bits 8-15 are the softirq count (max # of softirqs: 256)
+ * - bits 16-23 are the hardirq disable count (max # of hardirq disable: 256)
  *
  * The hardirq count could in theory be the same as the number of
  * interrupts in the system, but we run all interrupt handlers with
@@ -26,29 +27,34 @@
  *
  * PREEMPT_MASK:          0x000000ff
  * SOFTIRQ_MASK:          0x0000ff00
- * HARDIRQ_MASK:          0x000f0000
- *     NMI_MASK:          0x00f00000
+ * HARDIRQ_DISABLE_MASK:  0x00ff0000
+ * HARDIRQ_MASK:          0x07000000
+ *     NMI_MASK:          0x38000000
  * PREEMPT_NEED_RESCHED:  0x80000000
  */
 #define PREEMPT_BITS	8
 #define SOFTIRQ_BITS	8
-#define HARDIRQ_BITS	4
-#define NMI_BITS	4
+#define HARDIRQ_DISABLE_BITS	8
+#define HARDIRQ_BITS	3
+#define NMI_BITS	3
 
 #define PREEMPT_SHIFT	0
 #define SOFTIRQ_SHIFT	(PREEMPT_SHIFT + PREEMPT_BITS)
-#define HARDIRQ_SHIFT	(SOFTIRQ_SHIFT + SOFTIRQ_BITS)
+#define HARDIRQ_DISABLE_SHIFT	(SOFTIRQ_SHIFT + SOFTIRQ_BITS)
+#define HARDIRQ_SHIFT	(HARDIRQ_DISABLE_SHIFT + HARDIRQ_DISABLE_BITS)
 #define NMI_SHIFT	(HARDIRQ_SHIFT + HARDIRQ_BITS)
 
 #define __IRQ_MASK(x)	((1UL << (x))-1)
 
 #define PREEMPT_MASK	(__IRQ_MASK(PREEMPT_BITS) << PREEMPT_SHIFT)
 #define SOFTIRQ_MASK	(__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT)
+#define HARDIRQ_DISABLE_MASK	(__IRQ_MASK(HARDIRQ_DISABLE_BITS) << HARDIRQ_DISABLE_SHIFT)
 #define HARDIRQ_MASK	(__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT)
 #define NMI_MASK	(__IRQ_MASK(NMI_BITS) << NMI_SHIFT)
 
 #define PREEMPT_OFFSET	(1UL << PREEMPT_SHIFT)
 #define SOFTIRQ_OFFSET	(1UL << SOFTIRQ_SHIFT)
+#define HARDIRQ_DISABLE_OFFSET	(1UL << HARDIRQ_DISABLE_SHIFT)
 #define HARDIRQ_OFFSET	(1UL << HARDIRQ_SHIFT)
 #define NMI_OFFSET	(1UL << NMI_SHIFT)
 
-- 
2.52.0
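
For illustration only (not part of the patch above): with this layout, the
hardirq-disable nesting depth can be read back from preempt_count() using just
the new mask and shift. The helper below is a hypothetical sketch; a later
patch in this series adds the real accessor (hardirq_disable_count()).

	/* Sketch: current hardirq-disable nesting depth on this CPU. */
	static inline unsigned int hardirq_disable_depth(void)
	{
		return (preempt_count() & HARDIRQ_DISABLE_MASK) >> HARDIRQ_DISABLE_SHIFT;
	}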
From nobody Fri Dec 19 15:38:54 2025
From: Lyude Paul
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Joel Fernandes
Subject: [PATCH v16 02/17] preempt: Track NMI nesting to separate per-CPU counter
Date: Mon, 15 Dec 2025 12:57:49 -0500
Message-ID: <20251215175806.102713-3-lyude@redhat.com>
In-Reply-To: <20251215175806.102713-1-lyude@redhat.com>
References: <20251215175806.102713-1-lyude@redhat.com>

From: Joel Fernandes

Move NMI nesting tracking from the preempt_count bits to a separate
per-CPU counter (nmi_nesting). This frees up the NMI bits in
preempt_count, allowing those bits to be repurposed for other uses. It
also has the benefit of tracking more than 16 levels of nesting if there
is ever a need.

This removes the need for multiple NMI bits in preempt_count: reduce
NMI_BITS from 3 to 1, and use it only to detect whether we are in an NMI.

Suggested-by: Boqun Feng
Signed-off-by: Joel Fernandes
Signed-off-by: Lyude Paul
---
 include/linux/hardirq.h | 16 ++++++++++++----
 include/linux/preempt.h | 13 +++++++++----
 kernel/softirq.c        |  2 ++
 3 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index d57cab4d4c06f..cc06bda52c3e5 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -10,6 +10,8 @@
 #include
 #include
 
+DECLARE_PER_CPU(unsigned int, nmi_nesting);
+
 extern void synchronize_irq(unsigned int irq);
 extern bool synchronize_hardirq(unsigned int irq);
 
@@ -102,14 +104,16 @@ void irq_exit_rcu(void);
  */
 
 /*
- * nmi_enter() can nest up to 15 times; see NMI_BITS.
+ * nmi_enter() can nest - nesting is tracked in a per-CPU counter.
  */
 #define __nmi_enter()						\
 	do {							\
 		lockdep_off();					\
 		arch_nmi_enter();				\
-		BUG_ON(in_nmi() == NMI_MASK);			\
-		__preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET);	\
+		BUG_ON(__this_cpu_read(nmi_nesting) == UINT_MAX);	\
+		__this_cpu_inc(nmi_nesting);			\
+		__preempt_count_add(HARDIRQ_OFFSET);		\
+		preempt_count_set(preempt_count() | NMI_MASK);	\
 	} while (0)
 
 #define nmi_enter()						\
@@ -124,8 +128,12 @@ void irq_exit_rcu(void);
 
 #define __nmi_exit()						\
 	do {							\
+		unsigned int nesting;				\
 		BUG_ON(!in_nmi());				\
-		__preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET);	\
+		__preempt_count_sub(HARDIRQ_OFFSET);		\
+		nesting = __this_cpu_dec_return(nmi_nesting);	\
+		if (!nesting)					\
+			__preempt_count_sub(NMI_OFFSET);	\
 		arch_nmi_exit();				\
 		lockdep_on();					\
 	} while (0)
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index f07e7f37f3ca5..e2d3079d3f5f1 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -18,6 +18,8 @@
  * - bits 0-7 are the preemption count (max preemption depth: 256)
  * - bits 8-15 are the softirq count (max # of softirqs: 256)
  * - bits 16-23 are the hardirq disable count (max # of hardirq disable: 256)
+ * - bits 24-27 are the hardirq count (max # of hardirqs: 16)
+ * - bit 28 is the NMI flag (no nesting count, tracked separately)
  *
  * The hardirq count could in theory be the same as the number of
  * interrupts in the system, but we run all interrupt handlers with
@@ -25,18 +27,21 @@
  * there are a few palaeontologic drivers which reenable interrupts in
  * the handler, so we need more than one bit here.
  *
+ * NMI nesting depth is tracked in a separate per-CPU variable
+ * (nmi_nesting) to save bits in preempt_count.
+ *
  * PREEMPT_MASK:          0x000000ff
  * SOFTIRQ_MASK:          0x0000ff00
  * HARDIRQ_DISABLE_MASK:  0x00ff0000
- * HARDIRQ_MASK:          0x07000000
- * NMI_MASK:              0x38000000
+ * HARDIRQ_MASK:          0x0f000000
+ * NMI_MASK:              0x10000000
  * PREEMPT_NEED_RESCHED:  0x80000000
  */
 #define PREEMPT_BITS	8
 #define SOFTIRQ_BITS	8
 #define HARDIRQ_DISABLE_BITS	8
-#define HARDIRQ_BITS	3
-#define NMI_BITS	3
+#define HARDIRQ_BITS	4
+#define NMI_BITS	1
 
 #define PREEMPT_SHIFT	0
 #define SOFTIRQ_SHIFT	(PREEMPT_SHIFT + PREEMPT_BITS)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 77198911b8dd4..af47ea23aba3b 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -88,6 +88,8 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
 EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
 #endif
 
+DEFINE_PER_CPU(unsigned int, nmi_nesting);
+
 /*
  * SOFTIRQ_OFFSET usage:
  *
-- 
2.52.0
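
For illustration only (not part of the patch above): a minimal sketch of how
nesting behaves with the per-CPU counter, assuming it runs inside an
nmi_enter()/nmi_exit() pair. in_nmi() remains a simple flag check (NMI_MASK is
now a single bit), while the actual depth lives in nmi_nesting.

	/* Sketch: assumes we are currently between nmi_enter() and nmi_exit(). */
	static void nmi_nesting_example(void)
	{
		unsigned int depth = __this_cpu_read(nmi_nesting);

		WARN_ON_ONCE(!in_nmi());	/* NMI_MASK was set by __nmi_enter()        */
		WARN_ON_ONCE(depth == 0);	/* depth is tracked outside preempt_count() */
	}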
From nobody Fri Dec 19 15:38:54 2025
From: Lyude Paul
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Subject: [PATCH v16 03/17] preempt: Introduce __preempt_count_{sub, add}_return()
Date: Mon, 15 Dec 2025 12:57:50 -0500
Message-ID: <20251215175806.102713-4-lyude@redhat.com>
In-Reply-To: <20251215175806.102713-1-lyude@redhat.com>
References: <20251215175806.102713-1-lyude@redhat.com>

From: Boqun Feng

In order to use preempt_count() to track the interrupt-disable nesting
level, introduce __preempt_count_{add,sub}_return(). As their names
suggest, these primitives return the new value of preempt_count() after
changing it. The following example shows their usage in
local_interrupt_disable():

	// increase the HARDIRQ_DISABLE count
	new_count = __preempt_count_add_return(HARDIRQ_DISABLE_OFFSET);

	// if it's the first-time increment, then disable the interrupt
	// at hardware level.
	if ((new_count & HARDIRQ_DISABLE_MASK) == HARDIRQ_DISABLE_OFFSET) {
		local_irq_save(flags);
		raw_cpu_write(local_interrupt_disable_state.flags, flags);
	}

Having these primitives avoids an extra read of preempt_count() after
changing it on certain architectures.
Signed-off-by: Boqun Feng
---
V10:
* Add commit message I forgot
* Rebase against latest pcpu_hot changes
V11:
* Remove CONFIG_PROFILE_ALL_BRANCHES workaround from
  __preempt_count_add_return()

 arch/arm64/include/asm/preempt.h | 18 ++++++++++++++++++
 arch/s390/include/asm/preempt.h  | 10 ++++++++++
 arch/x86/include/asm/preempt.h   | 10 ++++++++++
 include/asm-generic/preempt.h    | 14 ++++++++++++++
 4 files changed, 52 insertions(+)

diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index 932ea4b620428..0dd8221d1bef7 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -55,6 +55,24 @@ static inline void __preempt_count_sub(int val)
 	WRITE_ONCE(current_thread_info()->preempt.count, pc);
 }
 
+static inline int __preempt_count_add_return(int val)
+{
+	u32 pc = READ_ONCE(current_thread_info()->preempt.count);
+	pc += val;
+	WRITE_ONCE(current_thread_info()->preempt.count, pc);
+
+	return pc;
+}
+
+static inline int __preempt_count_sub_return(int val)
+{
+	u32 pc = READ_ONCE(current_thread_info()->preempt.count);
+	pc -= val;
+	WRITE_ONCE(current_thread_info()->preempt.count, pc);
+
+	return pc;
+}
+
 static inline bool __preempt_count_dec_and_test(void)
 {
 	struct thread_info *ti = current_thread_info();
diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
index 6ccd033acfe52..5ae366e26c57d 100644
--- a/arch/s390/include/asm/preempt.h
+++ b/arch/s390/include/asm/preempt.h
@@ -98,6 +98,16 @@ static __always_inline bool should_resched(int preempt_offset)
 	return unlikely(READ_ONCE(get_lowcore()->preempt_count) == preempt_offset);
 }
 
+static __always_inline int __preempt_count_add_return(int val)
+{
+	return val + __atomic_add(val, &get_lowcore()->preempt_count);
+}
+
+static __always_inline int __preempt_count_sub_return(int val)
+{
+	return __preempt_count_add_return(-val);
+}
+
 #define init_task_preempt_count(p)	do { } while (0)
 /* Deferred to CPU bringup time */
 #define init_idle_preempt_count(p, cpu)	do { } while (0)
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 578441db09f0b..1220656f3370b 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -85,6 +85,16 @@ static __always_inline void __preempt_count_sub(int val)
 	raw_cpu_add_4(__preempt_count, -val);
 }
 
+static __always_inline int __preempt_count_add_return(int val)
+{
+	return raw_cpu_add_return_4(__preempt_count, val);
+}
+
+static __always_inline int __preempt_count_sub_return(int val)
+{
+	return raw_cpu_add_return_4(__preempt_count, -val);
+}
+
 /*
  * Because we keep PREEMPT_NEED_RESCHED set when we do _not_ need to reschedule
  * a decrement which hits zero means we have no preempt_count and should
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index 51f8f3881523a..c8683c046615d 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -59,6 +59,20 @@ static __always_inline void __preempt_count_sub(int val)
 	*preempt_count_ptr() -= val;
 }
 
+static __always_inline int __preempt_count_add_return(int val)
+{
+	*preempt_count_ptr() += val;
+
+	return *preempt_count_ptr();
+}
+
+static __always_inline int __preempt_count_sub_return(int val)
+{
+	*preempt_count_ptr() -= val;
+
+	return *preempt_count_ptr();
+}
+
 static __always_inline bool __preempt_count_dec_and_test(void)
 {
 	/*
-- 
2.52.0
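
For illustration only (not part of the patch above): the point of the _return
variants is to fold the update and the subsequent read into one operation.
A sketch of the two patterns, using the HARDIRQ_DISABLE_OFFSET introduced
earlier in this series (new_count is assumed to be declared by the caller):

	/* Without the new primitive: modify, then read the counter again. */
	__preempt_count_add(HARDIRQ_DISABLE_OFFSET);
	new_count = preempt_count();

	/* With this patch: a single read-modify-write yields the new value. */
	new_count = __preempt_count_add_return(HARDIRQ_DISABLE_OFFSET);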
From nobody Fri Dec 19 15:38:54 2025
From: Lyude Paul
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Subject: [PATCH v16 04/17] openrisc: Include <linux/cpumask.h> in smp.h
Date: Mon, 15 Dec 2025 12:57:51 -0500
Message-ID: <20251215175806.102713-5-lyude@redhat.com>
In-Reply-To: <20251215175806.102713-1-lyude@redhat.com>
References: <20251215175806.102713-1-lyude@redhat.com>

While OpenRISC currently doesn't fail to build upstream, it appears that
pulling asm/smp.h in from the right headers is enough to break that -
primarily because OpenRISC's asm/smp.h doesn't actually provide any
definition for struct cpumask. That means the only reason the kernel
builds at all is that we've been lucky enough that every spot including
asm/smp.h already has a definition of struct cpumask pulled in.

This became evident when trying to work on a patch series for adding
ref-counted interrupt enable/disables to the kernel, where introducing a
new interrupt_rc.h header suddenly introduced a build error on OpenRISC:

  In file included from include/linux/interrupt_rc.h:17,
                   from include/linux/spinlock.h:60,
                   from include/linux/mmzone.h:8,
                   from include/linux/gfp.h:7,
                   from include/linux/mm.h:7,
                   from arch/openrisc/include/asm/pgalloc.h:20,
                   from arch/openrisc/include/asm/io.h:18,
                   from include/linux/io.h:12,
                   from drivers/irqchip/irq-ompic.c:61:
  arch/openrisc/include/asm/smp.h:21:59: warning: 'struct cpumask' declared
  inside parameter list will not be visible outside of this definition or
  declaration
     21 | extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
        |                                                            ^~~~~~~
  arch/openrisc/include/asm/smp.h:23:54: warning: 'struct cpumask' declared
  inside parameter list will not be visible outside of this definition or
  declaration
     23 | extern void set_smp_cross_call(void (*)(const struct cpumask *, unsigned int));
        |                                                       ^~~~~~~
  drivers/irqchip/irq-ompic.c: In function 'ompic_of_init':
  >> drivers/irqchip/irq-ompic.c:191:28: error: passing argument 1 of
  'set_smp_cross_call' from incompatible pointer type
  [-Werror=incompatible-pointer-types]
    191 |         set_smp_cross_call(ompic_raise_softirq);
        |                            ^~~~~~~~~~~~~~~~~~~
        |                            |
        |                            void (*)(const struct cpumask *, unsigned int)
  arch/openrisc/include/asm/smp.h:23:32: note: expected 'void (*)(const
  struct cpumask *, unsigned int)' but argument is of type 'void (*)(const
  struct cpumask *, unsigned int)'
     23 | extern void set_smp_cross_call(void (*)(const struct cpumask *, unsigned int));

To fix this, let's take an example from the smp.h headers of other
architectures (x86, hexagon, arm64, probably more): just include
linux/cpumask.h at the top.
Signed-off-by: Lyude Paul
Acked-by: Stafford Horne
---
 arch/openrisc/include/asm/smp.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/openrisc/include/asm/smp.h b/arch/openrisc/include/asm/smp.h
index e21d2f12b5b67..0327d8cdae2d0 100644
--- a/arch/openrisc/include/asm/smp.h
+++ b/arch/openrisc/include/asm/smp.h
@@ -9,6 +9,8 @@
 #ifndef __ASM_OPENRISC_SMP_H
 #define __ASM_OPENRISC_SMP_H
 
+#include <linux/cpumask.h>
+
 #include
 #include
 
-- 
2.52.0
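
For illustration only (not part of the patch above): the warnings quoted in the
commit message come from a C scoping rule. A struct tag that first appears
inside a prototype's parameter list has prototype scope, so it names a type
distinct from the real struct cpumask. A reduced, hypothetical sketch:

	/* No struct cpumask definition or forward declaration in scope here. */
	extern void set_smp_cross_call(void (*)(const struct cpumask *, unsigned int));
	/*
	 * 'struct cpumask' above is scoped to that one prototype, so a caller
	 * passing a function typed against the struct cpumask from
	 * <linux/cpumask.h> hits -Wincompatible-pointer-types, even though the
	 * two types print identically. Including <linux/cpumask.h> first (as
	 * this patch does) makes both declarations refer to the same type.
	 */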
From nobody Fri Dec 19 15:38:54 2025
From: Lyude Paul
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Subject: [PATCH v16 05/17] irq & spin_lock: Add counted interrupt disabling/enabling
Date: Mon, 15 Dec 2025 12:57:52 -0500
Message-ID: <20251215175806.102713-6-lyude@redhat.com>
In-Reply-To: <20251215175806.102713-1-lyude@redhat.com>
References: <20251215175806.102713-1-lyude@redhat.com>

From: Boqun Feng

Currently, nested interrupt disabling and enabling is provided by the
_irqsave() and _irqrestore() APIs, which are relatively unsafe. For
example:

	spin_lock_irqsave(l1, flag1);
	spin_lock_irqsave(l2, flag2);
	spin_unlock_irqrestore(l1, flags1);
	// accesses to interrupt-disable protected data will cause races.

This is even easier to trigger with guard facilities:

	unsigned long flag2;

	scoped_guard(spin_lock_irqsave, l1) {
		spin_lock_irqsave(l2, flag2);
	}
	// l2 locked but interrupts are enabled.
	spin_unlock_irqrestore(l2, flag2);

(Hand-to-hand locking critical sections are not uncommon for a
fine-grained lock design.)

Because of this unsafety, Rust cannot easily wrap interrupt-disabling
locks in a safe API, which complicates the design.

To resolve this, introduce a new set of interrupt disabling APIs:

*	local_interrupt_disable();
*	local_interrupt_enable();

They work like local_irq_save() and local_irq_restore() except that
1) the outermost local_interrupt_disable() call saves the interrupt
state into a percpu variable, so that the outermost
local_interrupt_enable() can restore the state, and 2) a percpu counter
records the nesting level of these calls, so that interrupts are not
accidentally enabled inside the outermost critical section.

Also add the corresponding spin_lock primitives, spin_lock_irq_disable()
and spin_unlock_irq_enable(). As a result, code such as:

	spin_lock_irq_disable(l1);
	spin_lock_irq_disable(l2);
	spin_unlock_irq_enable(l1);
	// Interrupts are still disabled.
	spin_unlock_irq_enable(l2);

doesn't have the issue that interrupts are accidentally enabled.

This also makes Rust wrappers for interrupt-disabling locks easier to
design.

Signed-off-by: Boqun Feng
Signed-off-by: Lyude Paul
---
V10:
* Add missing __raw_spin_lock_irq_disable() definition in spinlock.c
V11:
* Move definition of spin_trylock_irq_disable() into this commit
* Get rid of leftover space
* Remove unneeded preempt_disable()/preempt_enable()
V12:
* Move local_interrupt_enable()/local_interrupt_disable() out of
  include/linux/spinlock.h, into include/linux/irqflags.h
V14:
* Move local_interrupt_enable()/disable() again, this time into its own
  header, interrupt_rc.h, in order to fix a hexagon-specific build issue
  caught by the CKI bot. The reason this is needed is that on most
  architectures, irqflags.h ends up pulling in a header that
  provides a definition for the raw_smp_processor_id() function, which we
  depend on like so:

    local_interrupt_disable() → raw_cpu_write() → raw_smp_processor_id()

  Unfortunately, hexagon appears to be one such architecture which does
  not pull that header in by default here - causing kernel builds to fail
  and claim that raw_smp_processor_id() is undefined:

    In file included from kernel/sched/rq-offsets.c:5:
    In file included from kernel/sched/sched.h:8:
    In file included from include/linux/sched/affinity.h:1:
    In file included from include/linux/sched.h:37:
    In file included from include/linux/spinlock.h:59:
    >> include/linux/irqflags.h:277:3: error: call to undeclared function
    'raw_smp_processor_id'; ISO C99 and later do not support implicit
    function declarations [-Wimplicit-function-declaration]
      277 |   raw_cpu_write(local_interrupt_disable_state.flags, flags);
          |   ^
    include/linux/percpu-defs.h:413:34: note: expanded from macro 'raw_cpu_write'

  While including the missing header there does fix the build on hexagon,
  it ends up breaking the build on x86_64:

    In file included from kernel/sched/rq-offsets.c:5:
    In file included from kernel/sched/sched.h:8:
    In file included from ./include/linux/sched/affinity.h:1:
    In file included from ./include/linux/sched.h:13:
    In file included from ./arch/x86/include/asm/processor.h:25:
    In file included from ./arch/x86/include/asm/special_insns.h:10:
    In file included from ./include/linux/irqflags.h:22:
    In file included from ./arch/x86/include/asm/smp.h:6:
    In file included from ./include/linux/thread_info.h:60:
    In file included from ./arch/x86/include/asm/thread_info.h:59:
    ./arch/x86/include/asm/cpufeature.h:110:40: error: use of undeclared
    identifier 'boot_cpu_data'
      [cap_byte] "i" (&((const char *)boot_cpu_data.x86_capability)[bit >> 3])
                                      ^

  While boot_cpu_data is defined in <asm/processor.h>, it's not possible
  for us to include that header in irqflags.h because we're already inside
  of it. As a result, I just concluded there's no reasonable way of having
  these functions in irqflags.h because of how many low level ASM headers
  depend on it. So, we go with the solution of simply giving ourselves our
  own header file.
V15:
* Fix build error on CONFIG_SMP=n reported by Kernel CI

 include/linux/interrupt_rc.h     | 63 ++++++++++++++++++++++++++++++++
 include/linux/preempt.h          |  4 ++
 include/linux/spinlock.h         | 25 +++++++++++++
 include/linux/spinlock_api_smp.h | 27 ++++++++++++++
 include/linux/spinlock_api_up.h  |  8 ++++
 include/linux/spinlock_rt.h      | 15 ++++++++
 kernel/locking/spinlock.c        | 29 +++++++++++++++
 kernel/softirq.c                 |  3 ++
 8 files changed, 174 insertions(+)
 create mode 100644 include/linux/interrupt_rc.h

diff --git a/include/linux/interrupt_rc.h b/include/linux/interrupt_rc.h
new file mode 100644
index 0000000000000..d6d05498731b2
--- /dev/null
+++ b/include/linux/interrupt_rc.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * include/linux/interrupt_rc.h - refcounted local processor interrupt
+ * management.
+ *
+ * Since the implementation of this API currently depends on
+ * local_irq_save()/local_irq_restore(), we split this into its own header to
+ * make it easier to include without hitting circular header dependencies.
+ */
+
+#ifndef __LINUX_INTERRUPT_RC_H
+#define __LINUX_INTERRUPT_RC_H
+
+#include
+#include
+#ifdef CONFIG_SMP
+#include
+#endif
+
+/* Per-cpu interrupt disabling state for local_interrupt_{disable,enable}() */
+struct interrupt_disable_state {
+	unsigned long flags;
+};
+
+DECLARE_PER_CPU(struct interrupt_disable_state, local_interrupt_disable_state);
+
+static inline void local_interrupt_disable(void)
+{
+	unsigned long flags;
+	int new_count;
+
+	new_count = hardirq_disable_enter();
+
+	if ((new_count & HARDIRQ_DISABLE_MASK) == HARDIRQ_DISABLE_OFFSET) {
+		local_irq_save(flags);
+		raw_cpu_write(local_interrupt_disable_state.flags, flags);
+	}
+}
+
+static inline void local_interrupt_enable(void)
+{
+	int new_count;
+
+	new_count = hardirq_disable_exit();
+
+	if ((new_count & HARDIRQ_DISABLE_MASK) == 0) {
+		unsigned long flags;
+
+		flags = raw_cpu_read(local_interrupt_disable_state.flags);
+		local_irq_restore(flags);
+		/*
+		 * TODO: re-read preempt count can be avoided, but it needs
+		 * should_resched() taking another parameter as the current
+		 * preempt count
+		 */
+#ifdef PREEMPTION
+		if (should_resched(0))
+			__preempt_schedule();
+#endif
+	}
+}
+
+#endif /* !__LINUX_INTERRUPT_RC_H */
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index e2d3079d3f5f1..33fc4c814a9f0 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -151,6 +151,10 @@ static __always_inline unsigned char interrupt_context_level(void)
 #define in_softirq()		(softirq_count())
 #define in_interrupt()		(irq_count())
 
+#define hardirq_disable_count()	((preempt_count() & HARDIRQ_DISABLE_MASK) >> HARDIRQ_DISABLE_SHIFT)
+#define hardirq_disable_enter()	__preempt_count_add_return(HARDIRQ_DISABLE_OFFSET)
+#define hardirq_disable_exit()	__preempt_count_sub_return(HARDIRQ_DISABLE_OFFSET)
+
 /*
  * The preempt_count offset after preempt_disable();
  */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index d3561c4a080e2..bbbee61c6f5df 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -57,6 +57,7 @@
 #include
 #include
 #include
+#include <linux/interrupt_rc.h>
 #include
 #include
 #include
@@ -272,9 +273,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 #endif
 
 #define raw_spin_lock_irq(lock)		_raw_spin_lock_irq(lock)
+#define raw_spin_lock_irq_disable(lock)	_raw_spin_lock_irq_disable(lock)
 #define raw_spin_lock_bh(lock)		_raw_spin_lock_bh(lock)
 #define raw_spin_unlock(lock)		_raw_spin_unlock(lock)
 #define raw_spin_unlock_irq(lock)	_raw_spin_unlock_irq(lock)
+#define raw_spin_unlock_irq_enable(lock)	_raw_spin_unlock_irq_enable(lock)
 
 #define raw_spin_unlock_irqrestore(lock, flags)		\
 	do {						\
@@ -300,6 +303,13 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 		1 : ({ local_irq_restore(flags); 0; }); \
 })
 
+#define raw_spin_trylock_irq_disable(lock) \
+({ \
+	local_interrupt_disable(); \
+	raw_spin_trylock(lock) ? \
+		1 : ({ local_interrupt_enable(); 0; }); \
+})
+
 #ifndef CONFIG_PREEMPT_RT
 /* Include rwlock functions for !RT */
 #include
@@ -376,6 +386,11 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
 	raw_spin_lock_irq(&lock->rlock);
 }
 
+static __always_inline void spin_lock_irq_disable(spinlock_t *lock)
+{
+	raw_spin_lock_irq_disable(&lock->rlock);
+}
+
 #define spin_lock_irqsave(lock, flags)				\
 	do {							\
 		raw_spin_lock_irqsave(spinlock_check(lock), flags); \
@@ -401,6 +416,11 @@ static __always_inline void spin_unlock_irq(spinlock_t *lock)
 	raw_spin_unlock_irq(&lock->rlock);
 }
 
+static __always_inline void spin_unlock_irq_enable(spinlock_t *lock)
+{
+	raw_spin_unlock_irq_enable(&lock->rlock);
+}
+
 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 {
 	raw_spin_unlock_irqrestore(&lock->rlock, flags);
@@ -421,6 +441,11 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+static __always_inline int spin_trylock_irq_disable(spinlock_t *lock)
+{
+	return raw_spin_trylock_irq_disable(&lock->rlock);
+}
+
 /**
  * spin_is_locked() - Check whether a spinlock is locked.
  * @lock: Pointer to the spinlock.
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index 9ecb0ab504e32..92532103b9eaa 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -28,6 +28,8 @@ _raw_spin_lock_nest_lock(raw_spinlock_t *lock, struct lockdep_map *map)
 void __lockfunc _raw_spin_lock_bh(raw_spinlock_t *lock)	__acquires(lock);
 void __lockfunc _raw_spin_lock_irq(raw_spinlock_t *lock)	__acquires(lock);
+void __lockfunc _raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+							__acquires(lock);
 
 unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock)
 							__acquires(lock);
@@ -39,6 +41,7 @@ int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock);
 void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock)		__releases(lock);
 void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)	__releases(lock);
 void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock)	__releases(lock);
+void __lockfunc _raw_spin_unlock_irq_enable(raw_spinlock_t *lock)	__releases(lock);
 void __lockfunc
 _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 							__releases(lock);
@@ -55,6 +58,11 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 #define _raw_spin_lock_irq(lock) __raw_spin_lock_irq(lock)
 #endif
 
+/* Use the same config as spin_lock_irq() temporarily. */
+#ifdef CONFIG_INLINE_SPIN_LOCK_IRQ
+#define _raw_spin_lock_irq_disable(lock) __raw_spin_lock_irq_disable(lock)
+#endif
+
 #ifdef CONFIG_INLINE_SPIN_LOCK_IRQSAVE
 #define _raw_spin_lock_irqsave(lock) __raw_spin_lock_irqsave(lock)
 #endif
@@ -79,6 +87,11 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 #define _raw_spin_unlock_irq(lock) __raw_spin_unlock_irq(lock)
 #endif
 
+/* Use the same config as spin_unlock_irq() temporarily. */
+#ifdef CONFIG_INLINE_SPIN_UNLOCK_IRQ
+#define _raw_spin_unlock_irq_enable(lock) __raw_spin_unlock_irq_enable(lock)
+#endif
+
 #ifdef CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE
 #define _raw_spin_unlock_irqrestore(lock, flags) __raw_spin_unlock_irqrestore(lock, flags)
 #endif
@@ -120,6 +133,13 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
 	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
 }
 
+static inline void __raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	local_interrupt_disable();
+	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
+}
+
 static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
 {
 	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
@@ -160,6 +180,13 @@ static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
 	preempt_enable();
 }
 
+static inline void __raw_spin_unlock_irq_enable(raw_spinlock_t *lock)
+{
+	spin_release(&lock->dep_map, _RET_IP_);
+	do_raw_spin_unlock(lock);
+	local_interrupt_enable();
+}
+
 static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
 {
 	spin_release(&lock->dep_map, _RET_IP_);
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index 819aeba1c87e6..d02a73671713b 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -36,6 +36,9 @@
 #define __LOCK_IRQ(lock) \
   do { local_irq_disable(); __LOCK(lock); } while (0)
 
+#define __LOCK_IRQ_DISABLE(lock) \
+  do { local_interrupt_disable(); __LOCK(lock); } while (0)
+
 #define __LOCK_IRQSAVE(lock, flags) \
   do { local_irq_save(flags); __LOCK(lock); } while (0)
 
@@ -52,6 +55,9 @@
 #define __UNLOCK_IRQ(lock) \
   do { local_irq_enable(); __UNLOCK(lock); } while (0)
 
+#define __UNLOCK_IRQ_ENABLE(lock) \
+  do { __UNLOCK(lock); local_interrupt_enable(); } while (0)
+
 #define __UNLOCK_IRQRESTORE(lock, flags) \
   do { local_irq_restore(flags); __UNLOCK(lock); } while (0)
 
@@ -64,6 +70,7 @@
 #define _raw_read_lock_bh(lock)			__LOCK_BH(lock)
 #define _raw_write_lock_bh(lock)		__LOCK_BH(lock)
 #define _raw_spin_lock_irq(lock)		__LOCK_IRQ(lock)
+#define _raw_spin_lock_irq_disable(lock)	__LOCK_IRQ_DISABLE(lock)
 #define _raw_read_lock_irq(lock)		__LOCK_IRQ(lock)
 #define _raw_write_lock_irq(lock)		__LOCK_IRQ(lock)
 #define _raw_spin_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
@@ -80,6 +87,7 @@
 #define _raw_write_unlock_bh(lock)		__UNLOCK_BH(lock)
 #define _raw_read_unlock_bh(lock)		__UNLOCK_BH(lock)
 #define _raw_spin_unlock_irq(lock)		__UNLOCK_IRQ(lock)
+#define _raw_spin_unlock_irq_enable(lock)	__UNLOCK_IRQ_ENABLE(lock)
 #define _raw_read_unlock_irq(lock)		__UNLOCK_IRQ(lock)
 #define _raw_write_unlock_irq(lock)		__UNLOCK_IRQ(lock)
 #define _raw_spin_unlock_irqrestore(lock, flags) \
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index f6499c37157df..074182f7cfeea 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -93,6 +93,11 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
 	rt_spin_lock(lock);
 }
 
+static __always_inline void spin_lock_irq_disable(spinlock_t *lock)
+{
+	rt_spin_lock(lock);
+}
+
 #define spin_lock_irqsave(lock, flags)			\
 	do {						\
 		typecheck(unsigned long, flags);	\
@@ -116,12 +121,22 @@ static __always_inline void spin_unlock_irq(spinlock_t *lock)
 	rt_spin_unlock(lock);
 }
 
+static __always_inline void spin_unlock_irq_enable(spinlock_t *lock)
+{
+	rt_spin_unlock(lock);
+}
+
 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
 						   unsigned long flags)
 {
 	rt_spin_unlock(lock);
 }
 
+static __always_inline int spin_trylock_irq_disable(spinlock_t *lock)
+{
+	return rt_spin_trylock(lock);
+}
+
 #define spin_trylock(lock)				\
 	__cond_lock(lock, rt_spin_trylock(lock))
 
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index 7685defd7c526..da54b220b5a45 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -125,6 +125,19 @@ static void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock)	\
  */
 BUILD_LOCK_OPS(spin, raw_spinlock);
 
+/* No rwlock_t variants for now, so just build this function by hand */
+static void __lockfunc __raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	for (;;) {
+		local_interrupt_disable();
+		if (likely(do_raw_spin_trylock(lock)))
+			break;
+		local_interrupt_enable();
+
+		arch_spin_relax(&lock->raw_lock);
+	}
+}
+
 #ifndef CONFIG_PREEMPT_RT
 BUILD_LOCK_OPS(read, rwlock);
 BUILD_LOCK_OPS(write, rwlock);
@@ -172,6 +185,14 @@ noinline void __lockfunc _raw_spin_lock_irq(raw_spinlock_t *lock)
 EXPORT_SYMBOL(_raw_spin_lock_irq);
 #endif
 
+#ifndef CONFIG_INLINE_SPIN_LOCK_IRQ
+noinline void __lockfunc _raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	__raw_spin_lock_irq_disable(lock);
+}
+EXPORT_SYMBOL_GPL(_raw_spin_lock_irq_disable);
+#endif
+
 #ifndef CONFIG_INLINE_SPIN_LOCK_BH
 noinline void __lockfunc _raw_spin_lock_bh(raw_spinlock_t *lock)
 {
@@ -204,6 +225,14 @@ noinline void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock)
 EXPORT_SYMBOL(_raw_spin_unlock_irq);
 #endif
 
+#ifndef CONFIG_INLINE_SPIN_UNLOCK_IRQ
+noinline void __lockfunc _raw_spin_unlock_irq_enable(raw_spinlock_t *lock)
+{
+	__raw_spin_unlock_irq_enable(lock);
+}
+EXPORT_SYMBOL_GPL(_raw_spin_unlock_irq_enable);
+#endif
+
 #ifndef CONFIG_INLINE_SPIN_UNLOCK_BH
 noinline void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)
 {
diff --git a/kernel/softirq.c b/kernel/softirq.c
index af47ea23aba3b..b681545eabbbe 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -88,6 +88,9 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
 EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
 #endif
 
+DEFINE_PER_CPU(struct interrupt_disable_state, local_interrupt_disable_state);
+EXPORT_PER_CPU_SYMBOL_GPL(local_interrupt_disable_state);
+
 DEFINE_PER_CPU(unsigned int, nmi_nesting);
 
 /*
-- 
2.52.0
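
For illustration only (not part of the patch above): a minimal usage sketch of
the new counted API, mirroring the example in the commit message (l1 and l2
are hypothetical spinlock_t locks initialized elsewhere).

	spin_lock_irq_disable(&l1);	/* disable count 0 -> 1, IRQs turned off   */
	spin_lock_irq_disable(&l2);	/* disable count 1 -> 2, IRQs stay off     */
	spin_unlock_irq_enable(&l1);	/* disable count 2 -> 1, IRQs STILL off    */
	/* data protected by interrupt disabling is still safe to touch here */
	spin_unlock_irq_enable(&l2);	/* disable count 1 -> 0, IRQ state restored */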
From nobody Fri Dec 19 15:38:54 2025
From: Lyude Paul
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Subject: [PATCH v16 06/17] irq: Add KUnit test for refcounted interrupt enable/disable
Date: Mon, 15 Dec 2025 12:57:53 -0500
Message-ID: <20251215175806.102713-7-lyude@redhat.com>
In-Reply-To: <20251215175806.102713-1-lyude@redhat.com>
References: <20251215175806.102713-1-lyude@redhat.com>

While making changes to the refcounted interrupt patch series, at some
point I broke something on my local branch and ended up writing some
KUnit tests for refcounted interrupts as a result. So, let's include
these tests now that we have refcounted interrupts.
Signed-off-by: Lyude Paul
---
V13:
* Add missing MODULE_DESCRIPTION/MODULE_LICENSE lines
* Switch from kunit_test_suites(…) to kunit_test_suite(…)

 kernel/irq/Makefile                  |   1 +
 kernel/irq/refcount_interrupt_test.c | 109 +++++++++++++++++++++++++++
 2 files changed, 110 insertions(+)
 create mode 100644 kernel/irq/refcount_interrupt_test.c

diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index 6ab3a40556670..7b5bb5510b110 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_SMP) += affinity.o
 obj-$(CONFIG_GENERIC_IRQ_DEBUGFS) += debugfs.o
 obj-$(CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR) += matrix.o
 obj-$(CONFIG_IRQ_KUNIT_TEST) += irq_test.o
+obj-$(CONFIG_KUNIT) += refcount_interrupt_test.o
diff --git a/kernel/irq/refcount_interrupt_test.c b/kernel/irq/refcount_interrupt_test.c
new file mode 100644
index 0000000000000..b4f224595f261
--- /dev/null
+++ b/kernel/irq/refcount_interrupt_test.c
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for refcounted interrupt enable/disables.
+ */
+
+#include
+#include
+
+#define TEST_IRQ_ON()	KUNIT_EXPECT_FALSE(test, irqs_disabled())
+#define TEST_IRQ_OFF()	KUNIT_EXPECT_TRUE(test, irqs_disabled())
+
+/* ===== Test cases ===== */
+static void test_single_irq_change(struct kunit *test)
+{
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+	local_interrupt_enable();
+}
+
+static void test_nested_irq_change(struct kunit *test)
+{
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+
+	local_interrupt_enable();
+	TEST_IRQ_OFF();
+	local_interrupt_enable();
+	TEST_IRQ_OFF();
+	local_interrupt_enable();
+	TEST_IRQ_ON();
+}
+
+static void test_multiple_irq_change(struct kunit *test)
+{
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+
+	local_interrupt_enable();
+	TEST_IRQ_OFF();
+	local_interrupt_enable();
+	TEST_IRQ_ON();
+
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+	local_interrupt_enable();
+	TEST_IRQ_ON();
+}
+
+static void test_irq_save(struct kunit *test)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	TEST_IRQ_OFF();
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+	local_interrupt_enable();
+	TEST_IRQ_OFF();
+	local_irq_restore(flags);
+	TEST_IRQ_ON();
+
+	local_interrupt_disable();
+	TEST_IRQ_OFF();
+	local_irq_save(flags);
+	TEST_IRQ_OFF();
+	local_irq_restore(flags);
+	TEST_IRQ_OFF();
+	local_interrupt_enable();
+	TEST_IRQ_ON();
+}
+
+static struct kunit_case test_cases[] = {
+	KUNIT_CASE(test_single_irq_change),
+	KUNIT_CASE(test_nested_irq_change),
+	KUNIT_CASE(test_multiple_irq_change),
+	KUNIT_CASE(test_irq_save),
+	{},
+};
+
+/* init and exit are the same */
+static int test_init(struct kunit *test)
+{
+	TEST_IRQ_ON();
+
+	return 0;
+}
+
+static void test_exit(struct kunit *test)
+{
+	TEST_IRQ_ON();
+}
+
+static struct kunit_suite refcount_interrupt_test_suite = {
+	.name = "refcount_interrupt",
+	.test_cases = test_cases,
+	.init = test_init,
+	.exit = test_exit,
+};
+
+kunit_test_suite(refcount_interrupt_test_suite);
+MODULE_AUTHOR("Lyude Paul ");
+MODULE_DESCRIPTION("Refcounted interrupt unit test suite");
+MODULE_LICENSE("GPL");
-- 
2.52.0
smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E664335087 for ; Mon, 15 Dec 2025 18:03:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821841; cv=none; b=MZiHBCi/XR1U0/MdbB5nIcbW5X+1ILRs3CEkdyt6YPMGjpA8aTE4vtmjuxR4asRtBLhisWWM+uFDQY8AviFEkf8jnaLnveGU3/0T0sWGZ39EcaW2SoJxq8bAvU9zoSpHjoC+5/5sh8NIHKtIqeODAz3SSFOGX/6/kXTmjBWgzK8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821841; c=relaxed/simple; bh=anCh6UPk2spDyl3U5N4dyIzHhrOXEnplUkyYMq+t8a4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=KnFrfXUV0cST9QGx1Lr1jpF9XPCufiQRNlH2Df5D3iVuMB5TeeIE8f4Bd0bCW1JWOwCDm1Fq1ELKS5F6l0md3fHQIHrHHZhpUQoGfVWqX4MAVCkjfQqDChgy74mgHXQlipfacWkmvpamKQVKLHyeYG76owmwJj4rvRkof/PfiwQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=MBmxnahD; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="MBmxnahD" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821835; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=wrQU7FpJZvGeRAZBJWQRrpXho13xVbhZAhFSmMJ/JPY=; b=MBmxnahDYQbWlnAY218dGlTM7FVQ5t+iifRIL/T3b6N8ZjDIPkTF7MmgXUxKxlRJo1P2gB 55/UrmD05uj4+t/X1UUCAiHQ7xJmuFEyY/9W5hfjAxFTNQQak3QKCt5442fikmQwhDO6fG 5nN1DiGAqor+8qGRH5acPJWJhfhw3Js= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-258-4oKqGulKM96Jg22ph0Je4w-1; Mon, 15 Dec 2025 13:02:59 -0500 X-MC-Unique: 4oKqGulKM96Jg22ph0Je4w-1 X-Mimecast-MFC-AGG-ID: 4oKqGulKM96Jg22ph0Je4w_1765821764 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id C2DF018D953D; Mon, 15 Dec 2025 17:58:47 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id DA719180045B; Mon, 15 Dec 2025 17:58:43 +0000 (UTC) From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 07/17] rust: Introduce interrupt module Date: Mon, 15 Dec 2025 12:57:54 -0500 
Message-ID: <20251215175806.102713-8-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 This introduces a module for dealing with interrupt-disabled contexts, including the ability to enable and disable interrupts along with the ability to annotate functions as expecting that IRQs are already disabled on the local CPU. [Boqun: This is based on Lyude's work on interrupt disable abstraction, I port to the new local_interrupt_disable() mechanism to make it work as a guard type. I cannot even take the credit of this design, since Lyude also brought up the same idea in zulip. Anyway, this is only for POC purpose, and of course all bugs are mine] Signed-off-by: Lyude Paul Co-developed-by: Boqun Feng Signed-off-by: Boqun Feng Reviewed-by: Benno Lossin Reviewed-by: Andreas Hindborg --- V10: * Fix documentation typos V11: * Get rid of unneeded `use bindings;` * Move ASSUME_DISABLED into assume_disabled() * Confirm using lockdep_assert_irqs_disabled() that local interrupts are in fact disabled when LocalInterruptDisabled::assume_disabled() is called. rust/helpers/helpers.c | 1 + rust/helpers/interrupt.c | 18 +++++++++ rust/helpers/sync.c | 5 +++ rust/kernel/interrupt.rs | 86 ++++++++++++++++++++++++++++++++++++++++ rust/kernel/lib.rs | 1 + 5 files changed, 111 insertions(+) create mode 100644 rust/helpers/interrupt.c create mode 100644 rust/kernel/interrupt.rs diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c index 551da6c9b5064..01ffade6c0389 100644 --- a/rust/helpers/helpers.c +++ b/rust/helpers/helpers.c @@ -29,6 +29,7 @@ #include "err.c" #include "irq.c" #include "fs.c" +#include "interrupt.c" #include "io.c" #include "jump_label.c" #include "kunit.c" diff --git a/rust/helpers/interrupt.c b/rust/helpers/interrupt.c new file mode 100644 index 0000000000000..f2380dd461ca5 --- /dev/null +++ b/rust/helpers/interrupt.c @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include + +void rust_helper_local_interrupt_disable(void) +{ + local_interrupt_disable(); +} + +void rust_helper_local_interrupt_enable(void) +{ + local_interrupt_enable(); +} + +bool rust_helper_irqs_disabled(void) +{ + return irqs_disabled(); +} diff --git a/rust/helpers/sync.c b/rust/helpers/sync.c index ff7e68b488101..45b2f519f4e2e 100644 --- a/rust/helpers/sync.c +++ b/rust/helpers/sync.c @@ -11,3 +11,8 @@ void rust_helper_lockdep_unregister_key(struct lock_class= _key *k) { lockdep_unregister_key(k); } + +void rust_helper_lockdep_assert_irqs_disabled(void) +{ + lockdep_assert_irqs_disabled(); +} diff --git a/rust/kernel/interrupt.rs b/rust/kernel/interrupt.rs new file mode 100644 index 0000000000000..6c8d2f58bca70 --- /dev/null +++ b/rust/kernel/interrupt.rs @@ -0,0 +1,86 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Interrupt controls +//! +//! This module allows Rust code to annotate areas of code where local pro= cessor interrupts should +//! be disabled, along with actually disabling local processor interrupts. +//! +//! # =E2=9A=A0=EF=B8=8F Warning! =E2=9A=A0=EF=B8=8F +//! +//! The usage of this module can be more complicated than meets the eye, e= specially surrounding +//! [preemptible kernels]. It's recommended to take care when using the fu= nctions and types defined +//! 
here and familiarize yourself with the various documentation we have b= efore using them, along +//! with the various documents we link to here. +//! +//! # Reading material +//! +//! - [Software interrupts and realtime (LWN)](https://lwn.net/Articles/52= 0076) +//! +//! [preemptible kernels]: https://www.kernel.org/doc/html/latest/locking/= preempt-locking.html + +use kernel::types::NotThreadSafe; + +/// A guard that represents local processor interrupt disablement on preem= ptible kernels. +/// +/// [`LocalInterruptDisabled`] is a guard type that represents that local = processor interrupts have +/// been disabled on a preemptible kernel. +/// +/// Certain functions take an immutable reference of [`LocalInterruptDisab= led`] in order to require +/// that they may only be run in local-interrupt-disabled contexts on pree= mptible kernels. +/// +/// This is a marker type; it has no size, and is simply used as a compile= -time guarantee that local +/// processor interrupts are disabled on preemptible kernels. Note that no= guarantees about the +/// state of interrupts are made by this type on non-preemptible kernels. +/// +/// # Invariants +/// +/// Local processor interrupts are disabled on preemptible kernels for as = long as an object of this +/// type exists. +pub struct LocalInterruptDisabled(NotThreadSafe); + +/// Disable local processor interrupts on a preemptible kernel. +/// +/// This function disables local processor interrupts on a preemptible ker= nel, and returns a +/// [`LocalInterruptDisabled`] token as proof of this. On non-preemptible = kernels, this function is +/// a no-op. +/// +/// **Usage of this function is discouraged** unless you are absolutely su= re you know what you are +/// doing, as kernel interfaces for rust that deal with interrupt state wi= ll typically handle local +/// processor interrupt state management on their own and managing this by= hand is quite error +/// prone. +pub fn local_interrupt_disable() -> LocalInterruptDisabled { + // SAFETY: It's always safe to call `local_interrupt_disable()`. + unsafe { bindings::local_interrupt_disable() }; + + LocalInterruptDisabled(NotThreadSafe) +} + +impl Drop for LocalInterruptDisabled { + fn drop(&mut self) { + // SAFETY: Per type invariants, a `local_interrupt_disable()` must= be called to create this + // object, hence call the corresponding `local_interrupt_enable()`= is safe. + unsafe { bindings::local_interrupt_enable() }; + } +} + +impl LocalInterruptDisabled { + /// Assume that local processor interrupts are disabled on preemptible= kernels. + /// + /// This can be used for annotating code that is known to be run in co= ntexts where local + /// processor interrupts are disabled on preemptible kernels. It makes= no changes to the local + /// interrupt state on its own. + /// + /// # Safety + /// + /// For the whole life `'a`, local interrupts must be disabled on pree= mptible kernels. This + /// could be a context like for example, an interrupt handler. 
+ pub unsafe fn assume_disabled<'a>() -> &'a LocalInterruptDisabled { + const ASSUME_DISABLED: &LocalInterruptDisabled =3D &LocalInterrupt= Disabled(NotThreadSafe); + + // Confirm they're actually disabled if lockdep is available + // SAFETY: It's always safe to call `lockdep_assert_irqs_disabled(= )` + unsafe { bindings::lockdep_assert_irqs_disabled() }; + + ASSUME_DISABLED + } +} diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs index 235d0d8b1eff2..6fb8f92dfa8e8 100644 --- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -96,6 +96,7 @@ pub mod fs; pub mod id_pool; pub mod init; +pub mod interrupt; pub mod io; pub mod ioctl; pub mod iov; --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8CAB032B98F for ; Mon, 15 Dec 2025 18:03:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821800; cv=none; b=cxdRcV5P09QofFTGz1vryMl4k6Wr+Dhc7TCkRLwwW/2+2fUXLeP8cb2iUWlnbFynrNsIF4Tx6UgS6F4egvCy5SrHjjLfj+b/sbFZyIQM/RmarkkvyKG1T5dT8yBrLcRUUYwb29yHAEdmjqgtGI76fIv++mu3HT5mumk1+EYjjgE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821800; c=relaxed/simple; bh=Z0piJ2uMHqJvLowyoCX36ouM9fy0bcmA2oM/iZMlUr0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=CijPCV9RDg2roZTGrMtL6n6LZb99TJeFtV37p2z8IA5kRBGd+Msg9oxCFdUU5WC5hM1F+5uosANVXy9efvdzoohBhEq7gnZhHmVdFTnOroEEZBNyY4BgH4d/6nTG+mn4V4sHoULquvntOsjXaTQcXQfiCOuLKUEigSSGotv1aec= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=BBnGs9p4; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="BBnGs9p4" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821795; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ctsqKiNbu9tw0Co3r3NlUOa+U4FB32iqwclFHrHc7RQ=; b=BBnGs9p4bXUyNy1JI/17OyfdLvAWbgyPZ7r/ggJH9ffGi6JJteGrLgcojgQC1o8o8UXmuo 2t2Hy8B9AFdrKZl7t+QZbd0HXgbuK29/0HUB0gxcjr7yBPOzcgsQ+5py6w3RKtWQ/pIZ67 jD3xbKHumOiq40Ki6GvblTTA4vwRXoI= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-670-LiClZ0w8NYC8xnaXAsfRqw-1; Mon, 15 Dec 2025 13:02:52 -0500 X-MC-Unique: LiClZ0w8NYC8xnaXAsfRqw-1 X-Mimecast-MFC-AGG-ID: LiClZ0w8NYC8xnaXAsfRqw_1765821765 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 
server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 3AA691975ACD; Mon, 15 Dec 2025 17:58:53 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id F1D25180045B; Mon, 15 Dec 2025 17:58:47 +0000 (UTC) From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 08/17] rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers Date: Mon, 15 Dec 2025 12:57:55 -0500 Message-ID: <20251215175806.102713-9-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" From: Boqun Feng spin_lock_irq_disable() and spin_unlock_irq_enable() are inline functions, to use them in Rust, helpers are introduced. This is for interrupt disabling lock abstraction in Rust. Signed-off-by: Boqun Feng Reviewed-by: Andreas Hindborg Signed-off-by: Lyude Paul --- rust/helpers/spinlock.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/rust/helpers/spinlock.c b/rust/helpers/spinlock.c index 42c4bf01a23e4..d4e61057c2a7a 100644 --- a/rust/helpers/spinlock.c +++ b/rust/helpers/spinlock.c @@ -35,3 +35,18 @@ void rust_helper_spin_assert_is_held(spinlock_t *lock) { lockdep_assert_held(lock); } + +void rust_helper_spin_lock_irq_disable(spinlock_t *lock) +{ + spin_lock_irq_disable(lock); +} + +void rust_helper_spin_unlock_irq_enable(spinlock_t *lock) +{ + spin_unlock_irq_enable(lock); +} + +int rust_helper_spin_trylock_irq_disable(spinlock_t *lock) +{ + return spin_trylock_irq_disable(lock); +} --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B823E32692D for ; Mon, 15 Dec 2025 18:03:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821796; cv=none; b=bSKcFVxmtvsSSDDPzbav1tA58/j8zhM20GC2UalzeAdva4xY1yXMU6OLuZNyCWgkidHJdrpfpcBdI6beeHZtK1xroxMqVBuo1gbQup53WX0IppAg36LyoEtKIIaTegBO4rz5uSH2zMYgZaGeJRj8fwcvxp2gFTJYRZxQIr/o33U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821796; c=relaxed/simple; bh=4SgfVkac/hJiLSRjFfbJfSEO3yBJn6RjkimufUhTRVA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=XIAnL4H54evhOXzL+eKen3qdY/HVTXYdBeHJQPLUNjP8d3LwjCJj9vfF5tcHq3GJGK1HLTCL8TSxTtqQ9QbgSmYkLHedDlgnL3V41iwscn9wIX6GuXdWAzYhhFbBqup6quXYt9MjubwCs30jA/WvxWtNUr5lOPYyseSH9H0u+p4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; 
dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=epn+/lA8; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="epn+/lA8" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821792; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=1Zl1iP7E/6Abrc+BzKXKfVMXH7mQZ0iUlUkUiMg3Iso=; b=epn+/lA8xZgtrlyYK+NkWP3xdLElKsQHw4obXhNceQ+y7JmMNUB65QPOw214QGjbuHzV7+ Y+dnSWaz5VfCRzqu0ZWkGyEcM4ouuar1NgbLWFaJi6H1ZYU5sLIQhZdc0cOCart0Q7auAe uieAWdJYUO6vkR0D5EQ41ZmA3XzdnWc= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-164-RCEqVCE6M4GKJTrbNg03zA-1; Mon, 15 Dec 2025 13:03:01 -0500 X-MC-Unique: RCEqVCE6M4GKJTrbNg03zA-1 X-Mimecast-MFC-AGG-ID: RCEqVCE6M4GKJTrbNg03zA_1765821769 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id D385B1964CD2; Mon, 15 Dec 2025 17:58:57 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 9D2D1180044F; Mon, 15 Dec 2025 17:58:53 +0000 (UTC) From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 09/17] rust: sync: Add SpinLockIrq Date: Mon, 15 Dec 2025 12:57:56 -0500 Message-ID: <20251215175806.102713-10-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" A variant of SpinLock that is expected to be used in noirq contexts, so lock() will disable interrupts and unlock() (i.e. `Guard::drop()` will undo the interrupt disable. 
[Boqun: Port to use spin_lock_irq_disable() and spin_unlock_irq_enable()] Signed-off-by: Lyude Paul Co-developed-by: Boqun Feng Signed-off-by: Boqun Feng Reviewed-by: Andreas Hindborg --- V10: * Also add support to GlobalLock * Documentation fixes from Dirk V11: * Add unit test requested by Daniel Almeida V14: - Improve rustdoc for SpinLockIrqBackend rust/kernel/sync.rs | 4 +- rust/kernel/sync/lock/global.rs | 3 + rust/kernel/sync/lock/spinlock.rs | 229 ++++++++++++++++++++++++++++++ 3 files changed, 235 insertions(+), 1 deletion(-) diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs index c94753d6413e2..847edd943c457 100644 --- a/rust/kernel/sync.rs +++ b/rust/kernel/sync.rs @@ -26,7 +26,9 @@ pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult}; pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBac= kend, GlobalLockedBy}; pub use lock::mutex::{new_mutex, Mutex, MutexGuard}; -pub use lock::spinlock::{new_spinlock, SpinLock, SpinLockGuard}; +pub use lock::spinlock::{ + new_spinlock, new_spinlock_irq, SpinLock, SpinLockGuard, SpinLockIrq, = SpinLockIrqGuard, +}; pub use locked_by::LockedBy; pub use refcount::Refcount; =20 diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global= .rs index eab48108a4aeb..7030a47bc0ad1 100644 --- a/rust/kernel/sync/lock/global.rs +++ b/rust/kernel/sync/lock/global.rs @@ -302,4 +302,7 @@ macro_rules! global_lock_inner { (backend SpinLock) =3D> { $crate::sync::lock::spinlock::SpinLockBackend }; + (backend SpinLockIrq) =3D> { + $crate::sync::lock::spinlock::SpinLockIrqBackend + }; } diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spin= lock.rs index d7be38ccbdc7d..3fdfb0a8a0ab1 100644 --- a/rust/kernel/sync/lock/spinlock.rs +++ b/rust/kernel/sync/lock/spinlock.rs @@ -3,6 +3,7 @@ //! A kernel spinlock. //! //! This module allows Rust code to use the kernel's `spinlock_t`. +use crate::prelude::*; =20 /// Creates a [`SpinLock`] initialiser with the given name and a newly-cre= ated lock class. /// @@ -139,3 +140,231 @@ unsafe fn assert_is_held(ptr: *mut Self::State) { unsafe { bindings::spin_assert_is_held(ptr) } } } + +/// Creates a [`SpinLockIrq`] initialiser with the given name and a newly-= created lock class. +/// +/// It uses the name if one is given, otherwise it generates one based on = the file name and line +/// number. +#[macro_export] +macro_rules! new_spinlock_irq { + ($inner:expr $(, $name:literal)? $(,)?) =3D> { + $crate::sync::SpinLockIrq::new( + $inner, $crate::optional_name!($($name)?), $crate::static_lock= _class!()) + }; +} +pub use new_spinlock_irq; + +/// A spinlock that may be acquired when local processor interrupts are di= sabled. +/// +/// This is a version of [`SpinLock`] that can only be used in contexts wh= ere interrupts for the +/// local CPU are disabled. It can be acquired in two ways: +/// +/// - Using [`lock()`] like any other type of lock, in which case the bind= ings will modify the +/// interrupt state to ensure that local processor interrupts remain dis= abled for at least as long +/// as the [`SpinLockIrqGuard`] exists. +/// - Using [`lock_with()`] in contexts where a [`LocalInterruptDisabled`]= token is present and +/// local processor interrupts are already known to be disabled, in whic= h case the local interrupt +/// state will not be touched. This method should be preferred if a [`Lo= calInterruptDisabled`] +/// token is present in the scope. +/// +/// For more info on spinlocks, see [`SpinLock`]. 
For more information on = interrupts, +/// [see the interrupt module](kernel::interrupt). +/// +/// # Examples +/// +/// The following example shows how to declare, allocate initialise and ac= cess a struct (`Example`) +/// that contains an inner struct (`Inner`) that is protected by a spinloc= k that requires local +/// processor interrupts to be disabled. +/// +/// ``` +/// use kernel::sync::{new_spinlock_irq, SpinLockIrq}; +/// +/// struct Inner { +/// a: u32, +/// b: u32, +/// } +/// +/// #[pin_data] +/// struct Example { +/// #[pin] +/// c: SpinLockIrq, +/// #[pin] +/// d: SpinLockIrq, +/// } +/// +/// impl Example { +/// fn new() -> impl PinInit { +/// pin_init!(Self { +/// c <- new_spinlock_irq!(Inner { a: 0, b: 10 }), +/// d <- new_spinlock_irq!(Inner { a: 20, b: 30 }), +/// }) +/// } +/// } +/// +/// // Allocate a boxed `Example` +/// let e =3D KBox::pin_init(Example::new(), GFP_KERNEL)?; +/// +/// // Accessing an `Example` from a context where interrupts may not be d= isabled already. +/// let c_guard =3D e.c.lock(); // interrupts are disabled now, +1 interru= pt disable refcount +/// let d_guard =3D e.d.lock(); // no interrupt state change, +1 interrupt= disable refcount +/// +/// assert_eq!(c_guard.a, 0); +/// assert_eq!(c_guard.b, 10); +/// assert_eq!(d_guard.a, 20); +/// assert_eq!(d_guard.b, 30); +/// +/// drop(c_guard); // Dropping c_guard will not re-enable interrupts just = yet, since d_guard is +/// // still in scope. +/// drop(d_guard); // Last interrupt disable reference dropped here, so in= terrupts are re-enabled +/// // now +/// # Ok::<(), Error>(()) +/// ``` +/// +/// [`lock()`]: SpinLockIrq::lock +/// [`lock_with()`]: SpinLockIrq::lock_with +pub type SpinLockIrq =3D super::Lock; + +/// A kernel `spinlock_t` lock backend that can only be acquired in interr= upt disabled contexts. +pub struct SpinLockIrqBackend; + +/// A [`Guard`] acquired from locking a [`SpinLockIrq`] using [`lock()`]. +/// +/// This is simply a type alias for a [`Guard`] returned from locking a [`= SpinLockIrq`] using +/// [`lock_with()`]. It will unlock the [`SpinLockIrq`] and decrement the = local processor's +/// interrupt disablement refcount upon being dropped. +/// +/// [`Guard`]: super::Guard +/// [`lock()`]: SpinLockIrq::lock +/// [`lock_with()`]: SpinLockIrq::lock_with +pub type SpinLockIrqGuard<'a, T> =3D super::Guard<'a, T, SpinLockIrqBacken= d>; + +// SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclus= ion. `relock` uses the +// default implementation that always calls the same locking method. +unsafe impl super::Backend for SpinLockIrqBackend { + type State =3D bindings::spinlock_t; + type GuardState =3D (); + + unsafe fn init( + ptr: *mut Self::State, + name: *const crate::ffi::c_char, + key: *mut bindings::lock_class_key, + ) { + // SAFETY: The safety requirements ensure that `ptr` is valid for = writes, and `name` and + // `key` are valid for read indefinitely. + unsafe { bindings::__spin_lock_init(ptr, name, key) } + } + + unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState { + // SAFETY: The safety requirements of this function ensure that `p= tr` points to valid + // memory, and that it has been initialised before. + unsafe { bindings::spin_lock_irq_disable(ptr) } + } + + unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardStat= e) { + // SAFETY: The safety requirements of this function ensure that `p= tr` is valid and that the + // caller is the owner of the spinlock. 
+ unsafe { bindings::spin_unlock_irq_enable(ptr) } + } + + unsafe fn try_lock(ptr: *mut Self::State) -> Option { + // SAFETY: The `ptr` pointer is guaranteed to be valid and initial= ized before use. + let result =3D unsafe { bindings::spin_trylock_irq_disable(ptr) }; + + if result !=3D 0 { + Some(()) + } else { + None + } + } + + unsafe fn assert_is_held(ptr: *mut Self::State) { + // SAFETY: The `ptr` pointer is guaranteed to be valid and initial= ized before use. + unsafe { bindings::spin_assert_is_held(ptr) } + } +} + +#[kunit_tests(rust_spinlock_irq_condvar)] +mod tests { + use super::*; + use crate::{ + sync::*, + workqueue::{self, impl_has_work, new_work, Work, WorkItem}, + }; + + struct TestState { + value: u32, + waiter_ready: bool, + } + + #[pin_data] + struct Test { + #[pin] + state: SpinLockIrq, + + #[pin] + state_changed: CondVar, + + #[pin] + waiter_state_changed: CondVar, + + #[pin] + wait_work: Work, + } + + impl_has_work! { + impl HasWork for Test { self.wait_work } + } + + impl Test { + pub(crate) fn new() -> Result> { + Arc::try_pin_init( + try_pin_init!( + Self { + state <- new_spinlock_irq!(TestState { + value: 1, + waiter_ready: false + }), + state_changed <- new_condvar!(), + waiter_state_changed <- new_condvar!(), + wait_work <- new_work!("IrqCondvarTest::wait_work") + } + ), + GFP_KERNEL, + ) + } + } + + impl WorkItem for Test { + type Pointer =3D Arc; + + fn run(this: Arc) { + // Wait for the test to be ready to wait for us + let mut state =3D this.state.lock(); + + while !state.waiter_ready { + this.waiter_state_changed.wait(&mut state); + } + + // Deliver the exciting value update our test has been waiting= for + state.value +=3D 1; + this.state_changed.notify_sync(); + } + } + + #[test] + fn spinlock_irq_condvar() -> Result { + let testdata =3D Test::new()?; + + let _ =3D workqueue::system().enqueue(testdata.clone()); + + // Let the updater know when we're ready to wait + let mut state =3D testdata.state.lock(); + state.waiter_ready =3D true; + testdata.waiter_state_changed.notify_sync(); + + // Wait for the exciting value update + testdata.state_changed.wait(&mut state); + assert_eq!(state.value, 2); + Ok(()) + } +} --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CB59D32572F for ; Mon, 15 Dec 2025 18:03:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821792; cv=none; b=p2Z11/fheX/8hMa+1st5Q12tczifoqOPPjFBoCd/nHLi+1/+f9nh4YQLrTymc64bILNJE/Ozm2MHMRpRlfi6A7lG36roVhn406toKZB+j/OqzcPJVaYa4+cwHCMMrMLUxdbamr3PY8sZVS5HWVq+8Zj9GZYkLX/vKme1ax4eqY8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821792; c=relaxed/simple; bh=ADeFYuSNlzs3Kdd/Em0QJ4o2W4/9jIJKwe2bImwATRI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Wc/jPwWGFTiepvR65PnM4nYBH/MrhAbLE2yHb3ydtsIjW5cjFdvanfLTZaLPFtGWW2EOOApdprgVg9roCZ2R0FGY6sAq22DqkyZgyjDRu3uUkjtG3CI/ErOSy2GOSY3afmtC4kjUzsifl0Tem/wzUf0do4Y8KHd3cZNj0zq5REM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=KBjHJoDG; 
arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="KBjHJoDG" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821789; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Io++pEAe2a4SEwOOe98W65FM6g3XqgvlabnuToFJ384=; b=KBjHJoDGQYSAtkDM7EpFnGR/TmTQBr/FLfiBXSALgKXel+mm0jLTOQJPK8xdzUbHcoRrTh mfb9+zx7XMdhznwB5y7DWQ3LM4A7gzG3JdLKICpsBrSZ0hvEzplpDnw6WGCdGFi4W9YCdt 6jjKSRjPZl2M4Ap+fLOLJeHVEu7novE= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-544-hHObrJwvN0Sc0cq3PVVmKw-1; Mon, 15 Dec 2025 13:02:56 -0500 X-MC-Unique: hHObrJwvN0Sc0cq3PVVmKw-1 X-Mimecast-MFC-AGG-ID: hHObrJwvN0Sc0cq3PVVmKw_1765821767 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E71561975AF1; Mon, 15 Dec 2025 17:59:01 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 137EB180044F; Mon, 15 Dec 2025 17:58:57 +0000 (UTC) From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 10/17] rust: sync: Introduce lock::Backend::Context Date: Mon, 15 Dec 2025 12:57:57 -0500 Message-ID: <20251215175806.102713-11-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" Now that we've introduced an `InterruptDisabled` token for marking contexts in which IRQs are disabled, we can have a way to avoid `SpinLockIrq` disabling interrupts if the interrupts have already been disabled. Basically, a `SpinLockIrq` should work like a `SpinLock` if interrupts are disabled. So a function: (&'a SpinLockIrq, &'a InterruptDisabled) -> Guard<'a, .., SpinLockBackend> makes senses. Note that due to `Guard` and `InterruptDisabled` having the same lifetime, interrupts cannot be enabled while the Guard exists. Add a `lock_with()` interface for `Lock`, and an associate type of `Backend` to describe the context. 
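Concretely, the intended call shape is sketched below. This is illustration only: in this patch `lock_with()` is still stubbed out with `todo!()` and is filled in by the `BackendInContext` patch later in the series, and the token type used here is the `LocalInterruptDisabled` guard from the interrupt module.

    use kernel::interrupt::local_interrupt_disable;
    use kernel::sync::SpinLockIrq;

    // Sketch only: the guard returned by lock_with() borrows the token, so the
    // token (and therefore the interrupt-disabled section) must outlive every
    // access to the protected data.
    fn update(counter: &SpinLockIrq<u32>) {
        let irq_off = local_interrupt_disable(); // owned token: +1 on the disable refcount
        {
            let mut guard = counter.lock_with(&irq_off); // borrows `irq_off`
            *guard += 1;
            // `drop(irq_off)` here would not compile: the guard still borrows
            // the token, so interrupts cannot be re-enabled while the data is
            // borrowed.
        }
        drop(irq_off); // guard is gone; dropping the token may re-enable interrupts
    }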
Signed-off-by: Lyude Paul Co-developed-by: Boqun Feng Signed-off-by: Boqun Feng --- V10: - Fix typos - Dirk rust/kernel/sync/lock.rs | 12 +++++++++++- rust/kernel/sync/lock/mutex.rs | 1 + rust/kernel/sync/lock/spinlock.rs | 4 +++- 3 files changed, 15 insertions(+), 2 deletions(-) diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs index 46a57d1fc309d..bf2d94c1999bd 100644 --- a/rust/kernel/sync/lock.rs +++ b/rust/kernel/sync/lock.rs @@ -44,6 +44,9 @@ pub unsafe trait Backend { /// [`unlock`]: Backend::unlock type GuardState; =20 + /// The context which can be provided to acquire the lock with a diffe= rent backend. + type Context<'a>; + /// Initialises the lock. /// /// # Safety @@ -168,8 +171,15 @@ pub unsafe fn from_raw<'a>(ptr: *mut B::State) -> &'a = Self { } =20 impl Lock { + /// Acquires the lock with the given context and gives the caller acce= ss to the data protected + /// by it. + pub fn lock_with<'a>(&'a self, _context: B::Context<'a>) -> Guard<'a, = T, B> { + todo!() + } + /// Acquires the lock and gives the caller access to the data protecte= d by it. - pub fn lock(&self) -> Guard<'_, T, B> { + #[inline] + pub fn lock<'a>(&'a self) -> Guard<'a, T, B> { // SAFETY: The constructor of the type calls `init`, so the existe= nce of the object proves // that `init` was called. let state =3D unsafe { B::lock(self.state.get()) }; diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs index 581cee7ab842a..be1e2e18cf42d 100644 --- a/rust/kernel/sync/lock/mutex.rs +++ b/rust/kernel/sync/lock/mutex.rs @@ -101,6 +101,7 @@ macro_rules! new_mutex { unsafe impl super::Backend for MutexBackend { type State =3D bindings::mutex; type GuardState =3D (); + type Context<'a> =3D (); =20 unsafe fn init( ptr: *mut Self::State, diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spin= lock.rs index 3fdfb0a8a0ab1..70d19a2636afe 100644 --- a/rust/kernel/sync/lock/spinlock.rs +++ b/rust/kernel/sync/lock/spinlock.rs @@ -3,7 +3,7 @@ //! A kernel spinlock. //! //! This module allows Rust code to use the kernel's `spinlock_t`. -use crate::prelude::*; +use crate::{interrupt::LocalInterruptDisabled, prelude::*}; =20 /// Creates a [`SpinLock`] initialiser with the given name and a newly-cre= ated lock class. /// @@ -101,6 +101,7 @@ macro_rules! new_spinlock { unsafe impl super::Backend for SpinLockBackend { type State =3D bindings::spinlock_t; type GuardState =3D (); + type Context<'a> =3D (); =20 unsafe fn init( ptr: *mut Self::State, @@ -243,6 +244,7 @@ macro_rules! 
new_spinlock_irq { unsafe impl super::Backend for SpinLockIrqBackend { type State =3D bindings::spinlock_t; type GuardState =3D (); + type Context<'a> =3D &'a LocalInterruptDisabled; =20 unsafe fn init( ptr: *mut Self::State, --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 052B63385A0 for ; Mon, 15 Dec 2025 18:05:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821910; cv=none; b=hKhiFfEZx8QAUhx+AhkMqi/c7+bc+QgDK955+XNls1Gj6lPpcIc0P8PzW6sAxCJilMSL6yOMQUe8YyYEv6R64Sf5SznPhyBCFGH3NiP9xTJ66WI0Y3qPgMwUv5SVZAxhujyZAMvMEqV/MvJegKYHVDeykiRV538its9GxYMBWxo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821910; c=relaxed/simple; bh=7kYeJC+PKDePcSPxfUhPBrOdJchIEOEf5xR2h1Ww+5g=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Ralu3CKu3D4Gek2I96T1UMyu+pLGPuMh7ci3tKn0ieN+vhsGU4dGIYnFwnDd5JXikzZ102aS1OnaHS+QVlOpMzS9PcASXbDIIo0FFw2zA7QlhvOOPUjB9uWBgfvxWKLTPeJQh1RLJfjGgKEJgtk4wfqKxo84h3R5cqE+Hu2EFN8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=Bq6m3Apx; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Bq6m3Apx" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821908; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ti5msfkMSTNbClRocDHXSKKV1CsLVFfaetmLCgzkRhQ=; b=Bq6m3ApxLFwya3p2nukIwzaZpxtIpjk7CbXDLeZ81P16yjtfKOCLBz/zAbbp7FThNfhdwH 10Cf+Uwy8iDsVeLVCy8yJ/P4ztgGtoCAu8UnzTPgBXe9t2EimoT7X6e2iroj/lIF7JpezU h1Lei7nd73ql2FvzCX4PgoWCxy340R4= Received: from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-116-Dy_KiyviNZyqxd70-o1WfA-1; Mon, 15 Dec 2025 13:02:55 -0500 X-MC-Unique: Dy_KiyviNZyqxd70-o1WfA-1 X-Mimecast-MFC-AGG-ID: Dy_KiyviNZyqxd70-o1WfA_1765821768 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 0D444183459C; Mon, 15 Dec 2025 17:59:07 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 4F69E180045B; Mon, 15 Dec 2025 17:59:02 +0000 (UTC) From: Lyude Paul 
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 11/17] rust: sync: lock: Add `Backend::BackendInContext` Date: Mon, 15 Dec 2025 12:57:58 -0500 Message-ID: <20251215175806.102713-12-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" From: Boqun Feng `SpinLockIrq` and `SpinLock` use the exact same underlying C structure, with the only real difference being that the former uses the irq_disable() and irq_enable() variants for locking/unlocking. These variants can introduce some minor overhead in contexts where we already know that local processor interrupts are disabled, and as such we want a way to be able to skip modifying processor interrupt state in said contexts in order to avoid some overhead - just like the current C API allows us to do. So, `BackendInContext` allows us to cast a lock into it's contextless version for situations where we already have whatever guarantees would be provided by `Backend::Context` in place. In some hacked-together benchmarks we ran, most of the time this did actually seem to lead to a noticeable difference in overhead: From an aarch64 VM running on a MacBook M4: lock() when irq is disabled, 100 times cost Delta { nanos: 500 } lock_with() when irq is disabled, 100 times cost Delta { nanos: 292 } lock() when irq is enabled, 100 times cost Delta { nanos: 834 } lock() when irq is disabled, 100 times cost Delta { nanos: 459 } lock_with() when irq is disabled, 100 times cost Delta { nanos: 291 } lock() when irq is enabled, 100 times cost Delta { nanos: 709 } From an x86_64 VM (qemu/kvm) running on a i7-13700H lock() when irq is disabled, 100 times cost Delta { nanos: 1002 } lock_with() when irq is disabled, 100 times cost Delta { nanos: 729 } lock() when irq is enabled, 100 times cost Delta { nanos: 1516 } lock() when irq is disabled, 100 times cost Delta { nanos: 754 } lock_with() when irq is disabled, 100 times cost Delta { nanos: 966 } lock() when irq is enabled, 100 times cost Delta { nanos: 1227 } (note that there were some runs on x86_64 where lock() on irq disabled vs. lock_with() on irq disabled had equivalent benchmarks, but it very much appeared to be a minority of test runs. While it's not clear how this affects real-world workloads yet, let's add this for the time being so we can find out. 
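As a rough illustration of where the cheaper path is meant to be used, here is a hypothetical helper (sketch only, not part of the patch) that combines `LocalInterruptDisabled::assume_disabled()` from the interrupt module with `lock_with()`:

    use kernel::interrupt::LocalInterruptDisabled;
    use kernel::sync::SpinLockIrq;

    // Sketch only: a caller already running with interrupts off takes the
    // lock_with() path that the benchmarks above compare against plain lock().
    fn read_stat(stats: &SpinLockIrq<u64>) -> u64 {
        // SAFETY: sketch assumption - this helper is only called from contexts
        // (e.g. a hard irq handler) where local interrupts stay disabled for
        // the duration of the call; lockdep asserts this when available.
        let irq_off = unsafe { LocalInterruptDisabled::assume_disabled() };

        // Uses the contextless SpinLockBackend via Backend::BackendInContext,
        // so no extra interrupt-disable refcount traffic on this path.
        *stats.lock_with(irq_off)
    }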
Signed-off-by: Boqun Feng Co-developed-by: Lyude Paul Signed-off-by: Lyude Paul --- V10: * Fix typos - Dirk/Lyude * Since we're adding support for context locks to GlobalLock as well, let's also make sure to cover try_lock while we're at it and add try_lock_with * Add a private function as_lock_in_context() for handling casting from a Lock to Lock so we don't have to duplicate safety comments V11: * Fix clippy::ref_as_ptr error in Lock::as_lock_in_context() V14: * Add benchmark results, rewrite commit message rust/kernel/sync/lock.rs | 61 ++++++++++++++++++++++++++++++- rust/kernel/sync/lock/mutex.rs | 1 + rust/kernel/sync/lock/spinlock.rs | 41 +++++++++++++++++++++ 3 files changed, 101 insertions(+), 2 deletions(-) diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs index bf2d94c1999bd..938ffe1bac06c 100644 --- a/rust/kernel/sync/lock.rs +++ b/rust/kernel/sync/lock.rs @@ -30,10 +30,15 @@ /// is owned, that is, between calls to [`lock`] and [`unlock`]. /// - Implementers must also ensure that [`relock`] uses the same locking = method as the original /// lock operation. +/// - Implementers must ensure if [`BackendInContext`] is a [`Backend`], i= t's safe to acquire the +/// lock under the [`Context`], the [`State`] of two backends must be th= e same. /// /// [`lock`]: Backend::lock /// [`unlock`]: Backend::unlock /// [`relock`]: Backend::relock +/// [`BackendInContext`]: Backend::BackendInContext +/// [`Context`]: Backend::Context +/// [`State`]: Backend::State pub unsafe trait Backend { /// The state required by the lock. type State; @@ -47,6 +52,9 @@ pub unsafe trait Backend { /// The context which can be provided to acquire the lock with a diffe= rent backend. type Context<'a>; =20 + /// The alternative backend we can use if a [`Context`](Backend::Conte= xt) is provided. + type BackendInContext: Sized; + /// Initialises the lock. /// /// # Safety @@ -171,10 +179,59 @@ pub unsafe fn from_raw<'a>(ptr: *mut B::State) -> &'a= Self { } =20 impl Lock { + /// Casts the lock as a `Lock`. + fn as_lock_in_context<'a>( + &'a self, + _context: B::Context<'a>, + ) -> &'a Lock + where + B::BackendInContext: Backend, + { + // SAFETY: + // - Per the safety guarantee of `Backend`, if `B::BackendInContex= t` and `B` should + // have the same state, the layout of the lock is the same so it= 's safe to convert one to + // another. + // - The caller provided `B::Context<'a>`, so it is safe to recast= and return this lock. + unsafe { &*(core::ptr::from_ref(self) as *const _) } + } + /// Acquires the lock with the given context and gives the caller acce= ss to the data protected /// by it. - pub fn lock_with<'a>(&'a self, _context: B::Context<'a>) -> Guard<'a, = T, B> { - todo!() + pub fn lock_with<'a>(&'a self, context: B::Context<'a>) -> Guard<'a, T= , B::BackendInContext> + where + B::BackendInContext: Backend, + { + let lock =3D self.as_lock_in_context(context); + + // SAFETY: The constructor of the type calls `init`, so the existe= nce of the object proves + // that `init` was called. Plus the safety guarantee of `Backend` = guarantees that `B::State` + // is the same as `B::BackendInContext::State`, also it's safe to = call another backend + // because there is `B::Context<'a>`. + let state =3D unsafe { B::BackendInContext::lock(lock.state.get())= }; + + // SAFETY: The lock was just acquired. + unsafe { Guard::new(lock, state) } + } + + /// Tries to acquire the lock with the given context. + /// + /// Returns a guard that can be used to access the data protected by t= he lock if successful. 
+ pub fn try_lock_with<'a>( + &'a self, + context: B::Context<'a>, + ) -> Option> + where + B::BackendInContext: Backend, + { + let lock =3D self.as_lock_in_context(context); + + // SAFETY: The constructor of the type calls `init`, so the existe= nce of the object proves + // that `init` was called. Plus the safety guarantee of `Backend` = guarantees that `B::State` + // is the same as `B::BackendInContext::State`, also it's safe to = call another backend + // because there is `B::Context<'a>`. + unsafe { + B::BackendInContext::try_lock(lock.state.get()).map(|state| Gu= ard::new(lock, state)) + } } =20 /// Acquires the lock and gives the caller access to the data protecte= d by it. diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs index be1e2e18cf42d..662a530750703 100644 --- a/rust/kernel/sync/lock/mutex.rs +++ b/rust/kernel/sync/lock/mutex.rs @@ -102,6 +102,7 @@ unsafe impl super::Backend for MutexBackend { type State =3D bindings::mutex; type GuardState =3D (); type Context<'a> =3D (); + type BackendInContext =3D (); =20 unsafe fn init( ptr: *mut Self::State, diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spin= lock.rs index 70d19a2636afe..81384ea239955 100644 --- a/rust/kernel/sync/lock/spinlock.rs +++ b/rust/kernel/sync/lock/spinlock.rs @@ -102,6 +102,7 @@ unsafe impl super::Backend for SpinLockBackend { type State =3D bindings::spinlock_t; type GuardState =3D (); type Context<'a> =3D (); + type BackendInContext =3D (); =20 unsafe fn init( ptr: *mut Self::State, @@ -221,6 +222,45 @@ macro_rules! new_spinlock_irq { /// # Ok::<(), Error>(()) /// ``` /// +/// The next example demonstrates locking a [`SpinLockIrq`] using [`lock_w= ith()`] in a function +/// which can only be called when local processor interrupts are already d= isabled. +/// +/// ``` +/// use kernel::sync::{new_spinlock_irq, SpinLockIrq}; +/// use kernel::interrupt::*; +/// +/// struct Inner { +/// a: u32, +/// } +/// +/// #[pin_data] +/// struct Example { +/// #[pin] +/// inner: SpinLockIrq, +/// } +/// +/// impl Example { +/// fn new() -> impl PinInit { +/// pin_init!(Self { +/// inner <- new_spinlock_irq!(Inner { a: 20 }), +/// }) +/// } +/// } +/// +/// // Accessing an `Example` from a function that can only be called in n= o-interrupt contexts. 
+/// fn noirq_work(e: &Example, interrupt_disabled: &LocalInterruptDisabled= ) { +/// // Because we know interrupts are disabled from interrupt_disable,= we can skip toggling +/// // interrupt state using lock_with() and the provided token +/// assert_eq!(e.inner.lock_with(interrupt_disabled).a, 20); +/// } +/// +/// # let e =3D KBox::pin_init(Example::new(), GFP_KERNEL)?; +/// # let interrupt_guard =3D local_interrupt_disable(); +/// # noirq_work(&e, &interrupt_guard); +/// # +/// # Ok::<(), Error>(()) +/// ``` +/// /// [`lock()`]: SpinLockIrq::lock /// [`lock_with()`]: SpinLockIrq::lock_with pub type SpinLockIrq =3D super::Lock; @@ -245,6 +285,7 @@ unsafe impl super::Backend for SpinLockIrqBackend { type State =3D bindings::spinlock_t; type GuardState =3D (); type Context<'a> =3D &'a LocalInterruptDisabled; + type BackendInContext =3D SpinLockBackend; =20 unsafe fn init( ptr: *mut Self::State, --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 57E4A335087 for ; Mon, 15 Dec 2025 18:03:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821833; cv=none; b=iYFmHdIdPpHZQe8QzIADHToaKLR82zOCUyf7lwwOJDiYz7+yrTYNA7dsb1CpvT9kCkANR86KAgTWfuXrgMjt95gHKbKlQI3fXD+WsZCU4Z68peZsp0wyvlPEY9EfsCs6HT5oNzNFWz9qVCus8jpzIRyff5fwjvTMCjQoGILH+XM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821833; c=relaxed/simple; bh=0EDtGy67sbcMtonwjGxPUyf4fnAu6cVCFLi68oUxz6c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=qCqN7Ib+7KegMjmiNmNX0iQvhcJhoy66zpvaxSqU6uwCug+kb3XrwMxg5cAikC6ntbYBB1XGJSUjANQE/kEmQb+uTLeX9ugwtlf6FVmsHTjILf22hD8ELELxe4h/izKO5cg6OarmcWH4V/VyPuvWfcVITzX4sYh7V00hIvbL9rQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=BoWKmGL1; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="BoWKmGL1" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821827; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=NPXbpVkoQILJpCk9IhXa/hmpE7g50auFpY6RHrWLkQE=; b=BoWKmGL1u6VnHP5vBXEqrXn4bxpxFnkKnqY6esoULlTPeOwD1JasWHB4Ye7/+wcaol9Liv AYOwik4+7Gimw4H/YShWJOfLjUoemF7IN4XtBe7KlMpBx7iIaF/IYJphlT6+r+gdZD1Z2a ilGWNrnEvBEZFVeglQ2VJ4BXaWl5IiE= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-507-ZtwmeZ5MMDCsUmMMwVvWFw-1; Mon, 15 Dec 2025 13:03:16 -0500 X-MC-Unique: ZtwmeZ5MMDCsUmMMwVvWFw-1 X-Mimecast-MFC-AGG-ID: 
ZtwmeZ5MMDCsUmMMwVvWFw_1765821768 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 757C618868BB; Mon, 15 Dec 2025 17:59:11 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 3E524180044F; Mon, 15 Dec 2025 17:59:07 +0000 (UTC) From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 12/17] rust: sync: lock/global: Rename B to G in trait bounds Date: Mon, 15 Dec 2025 12:57:59 -0500 Message-ID: <20251215175806.102713-13-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" Due to the introduction of Backend::BackendInContext, if we want to be able support Lock types with a Context we need to be able to handle the fact that the Backend for a returned Guard may not exactly match the Backend for the lock. Before we add this though, rename B to G in all of our trait bounds to make sure things don't become more difficult to understand once we add a Backend bound. There should be no functional changes in this patch. Signed-off-by: Lyude Paul --- rust/kernel/sync/atomic.rs | 2 +- rust/kernel/sync/lock/global.rs | 58 ++++++++++++++++----------------- 2 files changed, 30 insertions(+), 30 deletions(-) diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index 3afc376be42d9..c07a53f8360b4 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -21,8 +21,8 @@ mod predefine; =20 pub use internal::AtomicImpl; -pub use ordering::{Acquire, Full, Relaxed, Release}; pub(crate) use internal::{AtomicArithmeticOps, AtomicBasicOps, AtomicExcha= ngeOps}; +pub use ordering::{Acquire, Full, Relaxed, Release}; =20 use crate::build_error; use internal::AtomicRepr; diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global= .rs index 7030a47bc0ad1..06d62ad02f90d 100644 --- a/rust/kernel/sync/lock/global.rs +++ b/rust/kernel/sync/lock/global.rs @@ -33,18 +33,18 @@ pub trait GlobalLockBackend { /// Type used for global locks. /// /// See [`global_lock!`] for examples. -pub struct GlobalLock { - inner: Lock, +pub struct GlobalLock { + inner: Lock, } =20 -impl GlobalLock { +impl GlobalLock { /// Creates a global lock. /// /// # Safety /// /// * Before any other method on this lock is called, [`Self::init`] m= ust be called. - /// * The type `B` must not be used with any other lock. - pub const unsafe fn new(data: B::Item) -> Self { + /// * The type `G` must not be used with any other lock. 
+ pub const unsafe fn new(data: G::Item) -> Self { Self { inner: Lock { state: Opaque::uninit(), @@ -68,23 +68,23 @@ pub unsafe fn init(&'static self) { // `init` before using any other methods. As `init` can only be ca= lled once, all other // uses of this lock must happen after this call. unsafe { - B::Backend::init( + G::Backend::init( self.inner.state.get(), - B::NAME.as_char_ptr(), - B::get_lock_class().as_ptr(), + G::NAME.as_char_ptr(), + G::get_lock_class().as_ptr(), ) } } =20 /// Lock this global lock. - pub fn lock(&'static self) -> GlobalGuard { + pub fn lock(&'static self) -> GlobalGuard { GlobalGuard { inner: self.inner.lock(), } } =20 /// Try to lock this global lock. - pub fn try_lock(&'static self) -> Option> { + pub fn try_lock(&'static self) -> Option> { Some(GlobalGuard { inner: self.inner.try_lock()?, }) @@ -94,21 +94,21 @@ pub fn try_lock(&'static self) -> Option= > { /// A guard for a [`GlobalLock`]. /// /// See [`global_lock!`] for examples. -pub struct GlobalGuard { - inner: Guard<'static, B::Item, B::Backend>, +pub struct GlobalGuard { + inner: Guard<'static, G::Item, G::Backend>, } =20 -impl core::ops::Deref for GlobalGuard { - type Target =3D B::Item; +impl core::ops::Deref for GlobalGuard { + type Target =3D G::Item; =20 fn deref(&self) -> &Self::Target { &self.inner } } =20 -impl core::ops::DerefMut for GlobalGuard +impl core::ops::DerefMut for GlobalGuard where - B::Item: Unpin, + G::Item: Unpin, { fn deref_mut(&mut self) -> &mut Self::Target { &mut self.inner @@ -118,33 +118,33 @@ fn deref_mut(&mut self) -> &mut Self::Target { /// A version of [`LockedBy`] for a [`GlobalLock`]. /// /// See [`global_lock!`] for examples. -pub struct GlobalLockedBy { - _backend: PhantomData, +pub struct GlobalLockedBy { + _backend: PhantomData, value: UnsafeCell, } =20 // SAFETY: The same thread-safety rules as `LockedBy` apply to `GlobalLock= edBy`. -unsafe impl Send for GlobalLockedBy +unsafe impl Send for GlobalLockedBy where T: ?Sized, - B: GlobalLockBackend, - LockedBy: Send, + G: GlobalLockBackend, + LockedBy: Send, { } =20 // SAFETY: The same thread-safety rules as `LockedBy` apply to `GlobalLock= edBy`. -unsafe impl Sync for GlobalLockedBy +unsafe impl Sync for GlobalLockedBy where T: ?Sized, - B: GlobalLockBackend, - LockedBy: Sync, + G: GlobalLockBackend, + LockedBy: Sync, { } =20 -impl GlobalLockedBy { +impl GlobalLockedBy { /// Create a new [`GlobalLockedBy`]. /// - /// The provided value will be protected by the global lock indicated = by `B`. + /// The provided value will be protected by the global lock indicated = by `G`. pub fn new(val: T) -> Self { Self { value: UnsafeCell::new(val), @@ -153,11 +153,11 @@ pub fn new(val: T) -> Self { } } =20 -impl GlobalLockedBy { +impl GlobalLockedBy { /// Access the value immutably. /// /// The caller must prove shared access to the lock. - pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard) -> &'a T { + pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard) -> &'a T { // SAFETY: The lock is globally unique, so there can only be one g= uard. unsafe { &*self.value.get() } } @@ -165,7 +165,7 @@ pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard)= -> &'a T { /// Access the value mutably. /// /// The caller must prove shared exclusive to the lock. - pub fn as_mut<'a>(&'a self, _guard: &'a mut GlobalGuard) -> &'a mut= T { + pub fn as_mut<'a>(&'a self, _guard: &'a mut GlobalGuard) -> &'a mut= T { // SAFETY: The lock is globally unique, so there can only be one g= uard. 
unsafe { &mut *self.value.get() } } --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4E93523D2B4 for ; Mon, 15 Dec 2025 18:02:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821777; cv=none; b=jKTw9IPhHv8CDfeiQSMo5+v8xuDwMUsnxp6dheJKqgZWe4bc7kUDzDjhuhgTA2vi1AiSCDeImUtESlEeFY9ao99fBugbq380bGli2NcOoiAAGcI15RG+0u87NyAYpoqY2XHfLUqRcyiHkgKCeRGhfSoWf4/EJug1ZO9wuz0yc08= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821777; c=relaxed/simple; bh=V+CYd43G0Rxq8MaOFhL0sj51fwZ35kG9ivvYSUylXvA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=jBB8EqVwb868L9OVmQ9Zah4PYVNadGIl1RVHogQ5MCIL3AzO25VqqQ83CvRm+U37hQCfUHoCl+GCy+l+a84Uwg8aO0yam+riWFDOGR6XoIwBLcGhm2k4YLwwO3L6nkCIC94MMS3RPhueHkroJCLE7Y/24Av4xDMXaVU4oe1KrWU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=eNZBGVov; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="eNZBGVov" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821775; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=TjqIueDDstKW3uLG/uaM+O+TnIyJC2fCUs0r35KoPk0=; b=eNZBGVovPjta9jAg8M2jPl+T1OAewY7Y2GTLk5iHsL2x/6IQO2LQk/Q7Krhh9KZiYnLjj3 0P/9/dSdp3Koy1wzqu2GOLDutvFYrS6L4Zp4gn9G7RNfItkOtGyuL6WnzWUA8Cmv2Fe29S xOiwqn2zQfmQ2D+Ma8vOL8HnN5dB6C0= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-460-74QAVwWBMRmgYe5elRjSAg-1; Mon, 15 Dec 2025 13:02:50 -0500 X-MC-Unique: 74QAVwWBMRmgYe5elRjSAg-1 X-Mimecast-MFC-AGG-ID: 74QAVwWBMRmgYe5elRjSAg_1765821764 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 7504B1886248; Mon, 15 Dec 2025 17:59:16 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id CF4B9180044F; Mon, 15 Dec 2025 17:59:11 +0000 (UTC) From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , 
=?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 13/17] rust: sync: Add a lifetime parameter to lock::global::GlobalGuard Date: Mon, 15 Dec 2025 12:58:00 -0500 Message-ID: <20251215175806.102713-14-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" While a GlobalLock is always going to be static, in the case of locks with explicit backend contexts the GlobalGuard will not be 'static and will instead share the lifetime of the context. So, add a lifetime parameter to GlobalGuard to allow for this so we can implement GlobalGuard support for SpinlockIrq. Signed-off-by: Lyude Paul --- rust/kernel/sync/lock/global.rs | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global= .rs index 06d62ad02f90d..be17a30c66bf8 100644 --- a/rust/kernel/sync/lock/global.rs +++ b/rust/kernel/sync/lock/global.rs @@ -77,14 +77,14 @@ pub unsafe fn init(&'static self) { } =20 /// Lock this global lock. - pub fn lock(&'static self) -> GlobalGuard { + pub fn lock(&'static self) -> GlobalGuard<'static, G> { GlobalGuard { inner: self.inner.lock(), } } =20 /// Try to lock this global lock. - pub fn try_lock(&'static self) -> Option> { + pub fn try_lock(&'static self) -> Option> { Some(GlobalGuard { inner: self.inner.try_lock()?, }) @@ -94,11 +94,11 @@ pub fn try_lock(&'static self) -> Option= > { /// A guard for a [`GlobalLock`]. /// /// See [`global_lock!`] for examples. -pub struct GlobalGuard { - inner: Guard<'static, G::Item, G::Backend>, +pub struct GlobalGuard<'a, G: GlobalLockBackend> { + inner: Guard<'a, G::Item, G::Backend>, } =20 -impl core::ops::Deref for GlobalGuard { +impl<'a, G: GlobalLockBackend> core::ops::Deref for GlobalGuard<'a, G> { type Target =3D G::Item; =20 fn deref(&self) -> &Self::Target { @@ -106,7 +106,7 @@ fn deref(&self) -> &Self::Target { } } =20 -impl core::ops::DerefMut for GlobalGuard +impl<'a, G: GlobalLockBackend> core::ops::DerefMut for GlobalGuard<'a, G> where G::Item: Unpin, { @@ -157,7 +157,7 @@ impl GlobalLockedBy { /// Access the value immutably. /// /// The caller must prove shared access to the lock. - pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard) -> &'a T { + pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard<'_, G>) -> &'a T { // SAFETY: The lock is globally unique, so there can only be one g= uard. unsafe { &*self.value.get() } } @@ -165,7 +165,7 @@ pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard)= -> &'a T { /// Access the value mutably. /// /// The caller must prove shared exclusive to the lock. - pub fn as_mut<'a>(&'a self, _guard: &'a mut GlobalGuard) -> &'a mut= T { + pub fn as_mut<'a>(&'a self, _guard: &'a mut GlobalGuard<'_, G>) -> &'a= mut T { // SAFETY: The lock is globally unique, so there can only be one g= uard. unsafe { &mut *self.value.get() } } @@ -235,7 +235,7 @@ pub fn get_mut(&mut self) -> &mut T { /// /// Increment the counter in this instance. /// /// /// /// The caller must hold the `MY_MUTEX` mutex. 
-/// fn increment(&self, guard: &mut GlobalGuard) -> u32 { +/// fn increment(&self, guard: &mut GlobalGuard<'_, MY_MUTEX>) -> u32 { /// let my_counter =3D self.my_counter.as_mut(guard); /// *my_counter +=3D 1; /// *my_counter --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A675D32695F for ; Mon, 15 Dec 2025 18:03:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821817; cv=none; b=AbqnR4CEiFAfNNllkWIn67bq5n+TCLE/KVsWc7wtjxmtaB0P/IwaMyi4wjzujzkXShhh4XalNQFu58+t+vYMoyMxx1ywwkLA7k32ksd/YDcdJZshTKEVFgpW29GkaJJtlHMgYQ097RuqHkRBNGlQtYGSMAlptQBmEOSowxqYI/A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821817; c=relaxed/simple; bh=nCFXarMFa586dGrcNAs0jKPT2h0FKQUvMN9FU4ejyb4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uyl3kQKIAmX6LbtwsbNi1iBbcIFDAT3rvef5d3hknGXxFnnPV0OixEAgfXeTz+vQkxmv8kvOLBd4GtLKBJF++z2Mjxy1NOMBkwnOWxJ1Ba9MU4HcFA+xLI/UaTJtc8L6/WMNIHTph1pE12JB38X7ctw3fnM/PCM0rD09tDEAfmk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=UK5kxe7V; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="UK5kxe7V" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821814; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=tH4YX3T1TRr9YuPAznqrV8AK+8NwB79iZ5r++1p2W4k=; b=UK5kxe7V4QucNTePUbQCdx/9hvvarD/W+FHRqglg0Ck8lvIYq5hOqauKb3AWCGYi3dzyLf hak5L98bBp0DNUaE4+sWFX9TqAQy3pl+Ax/4QSqOqsrELAJlaaO5rVcya1Bd0K9pGJ1vMy GqQD4gTHrxRsvEoFzWPEriuU3aW2UBQ= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-676-yOSt0VlgMPyPrDxjYn_9RQ-1; Mon, 15 Dec 2025 13:02:57 -0500 X-MC-Unique: yOSt0VlgMPyPrDxjYn_9RQ-1 X-Mimecast-MFC-AGG-ID: yOSt0VlgMPyPrDxjYn_9RQ_1765821775 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 2FE6D188282A; Mon, 15 Dec 2025 17:59:21 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id D3A52180044F; Mon, 15 Dec 2025 17:59:16 +0000 (UTC) From: Lyude Paul 
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 14/17] rust: sync: Expose lock::Backend Date: Mon, 15 Dec 2025 12:58:01 -0500 Message-ID: <20251215175806.102713-15-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" Due to the addition of sync::lock::Backend::Context, lock guards can be returned with a different Backend than their respective lock. Since we'll be adding a trait bound for Backend to GlobalGuard in order to support this, users will need to be able to directly refer to Backend so that they can use it in trait bounds. So, let's make this easier for users and expose Backend in sync. Signed-off-by: Lyude Paul --- rust/kernel/sync.rs | 1 + 1 file changed, 1 insertion(+) diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs index 847edd943c457..e4604d21c884a 100644 --- a/rust/kernel/sync.rs +++ b/rust/kernel/sync.rs @@ -29,6 +29,7 @@ pub use lock::spinlock::{ new_spinlock, new_spinlock_irq, SpinLock, SpinLockGuard, SpinLockIrq, = SpinLockIrqGuard, }; +pub use lock::Backend; pub use locked_by::LockedBy; pub use refcount::Refcount; =20 --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D07DE2EC571 for ; Mon, 15 Dec 2025 18:03:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821816; cv=none; b=gk8rw4CABupG7YxcYnYXUz2QJuuKjr2wzzlH4Wfbj7Ah/ACUxCbbXM2oAssU0xyU1HxQl648oWGwBXIVSE4QANqDSuXLsfcJk3bY17j1ygc4rTqpdAQgqjGDLw/CdXbF3VSvGmNIj7Hl9DDiWfo+RZciOL64jeEZSiAFg0Y3bTc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821816; c=relaxed/simple; bh=XiUW8T1Lpn3qyU4RkhUWB1Srg6orqf5+qUJH6iElQ5g=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=O2MSpyZDCaaaimNVweMuE/l6tTCuGlHpYXk6aoaNZb+3og6GACkX2tLDyMKUsgNCgezICX1NosvNuypW6tGq3MzohXuZeaZ2oM9oMeMsw679Ui8KNLdrlNRihOr0LVC7x/0z0GeeCYCS+3C+ZC0KYNj920awHyk7fsMhCBnoxEs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=QRQMd2Sx; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="QRQMd2Sx" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821813; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=qG5xhx6G54OCEaEgkaeoMu+x4F1ntx63gzJNN7RdCWQ=; b=QRQMd2Sx4wno3ekRkfj+FtAKFRsXnieQTGtq5WEKTECRNhx7u6HajPzz4R5tNe3J+QPfiL uAX3FeUO0EBdpE/syTWXjK5VAODx+yvqCQHPTmbG9R/qnbsbmv5aPjXpWMdp6CLux5JC7L gao/aVyPkDhNRjBWNrtXuLegz/+r5Pc= Received: from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-696-K42_WJ3rMGmEdvkuoMFciQ-1; Mon, 15 Dec 2025 13:02:59 -0500 X-MC-Unique: K42_WJ3rMGmEdvkuoMFciQ-1 X-Mimecast-MFC-AGG-ID: K42_WJ3rMGmEdvkuoMFciQ_1765821774 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E488E1860164; Mon, 15 Dec 2025 17:59:25 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 8C99A1800579; Mon, 15 Dec 2025 17:59:21 +0000 (UTC) From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 15/17] rust: sync: lock/global: Add Backend parameter to GlobalGuard Date: Mon, 15 Dec 2025 12:58:02 -0500 Message-ID: <20251215175806.102713-16-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" Due to the introduction of sync::lock::Backend::Context, it's now possible for normal locks to return a Guard with a different Backend than their respective lock (e.g. Backend::BackendInContext). We want to be able to support global locks with contexts as well, so add a trait bound to explicitly specify which Backend is in use for a GlobalGuard. Signed-off-by: Lyude Paul --- rust/kernel/sync/lock/global.rs | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global= .rs index be17a30c66bf8..64fc7e7a4b282 100644 --- a/rust/kernel/sync/lock/global.rs +++ b/rust/kernel/sync/lock/global.rs @@ -77,14 +77,14 @@ pub unsafe fn init(&'static self) { } =20 /// Lock this global lock. - pub fn lock(&'static self) -> GlobalGuard<'static, G> { + pub fn lock(&'static self) -> GlobalGuard<'static, G, G::Backend> { GlobalGuard { inner: self.inner.lock(), } } =20 /// Try to lock this global lock. 
- pub fn try_lock(&'static self) -> Option> { + pub fn try_lock(&'static self) -> Option> { Some(GlobalGuard { inner: self.inner.try_lock()?, }) @@ -94,11 +94,11 @@ pub fn try_lock(&'static self) -> Option> { /// A guard for a [`GlobalLock`]. /// /// See [`global_lock!`] for examples. -pub struct GlobalGuard<'a, G: GlobalLockBackend> { - inner: Guard<'a, G::Item, G::Backend>, +pub struct GlobalGuard<'a, G: GlobalLockBackend, B: Backend> { + inner: Guard<'a, G::Item, B>, } =20 -impl<'a, G: GlobalLockBackend> core::ops::Deref for GlobalGuard<'a, G> { +impl<'a, G: GlobalLockBackend, B: Backend> core::ops::Deref for GlobalGuar= d<'a, G, B> { type Target =3D G::Item; =20 fn deref(&self) -> &Self::Target { @@ -106,7 +106,7 @@ fn deref(&self) -> &Self::Target { } } =20 -impl<'a, G: GlobalLockBackend> core::ops::DerefMut for GlobalGuard<'a, G> +impl<'a, G: GlobalLockBackend, B: Backend> core::ops::DerefMut for GlobalG= uard<'a, G, B> where G::Item: Unpin, { @@ -157,7 +157,7 @@ impl GlobalLockedBy { /// Access the value immutably. /// /// The caller must prove shared access to the lock. - pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard<'_, G>) -> &'a T { + pub fn as_ref<'a, B: Backend>(&'a self, _guard: &'a GlobalGuard<'_, G,= B>) -> &'a T { // SAFETY: The lock is globally unique, so there can only be one g= uard. unsafe { &*self.value.get() } } @@ -165,7 +165,7 @@ pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard<'_,= G>) -> &'a T { /// Access the value mutably. /// /// The caller must prove shared exclusive to the lock. - pub fn as_mut<'a>(&'a self, _guard: &'a mut GlobalGuard<'_, G>) -> &'a= mut T { + pub fn as_mut<'a, B: Backend>(&'a self, _guard: &'a mut GlobalGuard<'_= , G, B>) -> &'a mut T { // SAFETY: The lock is globally unique, so there can only be one g= uard. unsafe { &mut *self.value.get() } } @@ -219,7 +219,7 @@ pub fn get_mut(&mut self) -> &mut T { /// ``` /// # mod ex { /// # use kernel::prelude::*; -/// use kernel::sync::{GlobalGuard, GlobalLockedBy}; +/// use kernel::sync::{Backend, GlobalGuard, GlobalLockedBy}; /// /// kernel::sync::global_lock! { /// // SAFETY: Initialized in module initializer before first use. @@ -235,7 +235,7 @@ pub fn get_mut(&mut self) -> &mut T { /// /// Increment the counter in this instance. /// /// /// /// The caller must hold the `MY_MUTEX` mutex. 
-/// fn increment(&self, guard: &mut GlobalGuard<'_, MY_MUTEX>) -> u32 { +/// fn increment(&self, guard: &mut GlobalGuard<'_, MY_MUT= EX, B>) -> u32 { /// let my_counter =3D self.my_counter.as_mut(guard); /// *my_counter +=3D 1; /// *my_counter --=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 19188325709 for ; Mon, 15 Dec 2025 18:04:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821868; cv=none; b=lUZYrCAXagQpz0oBDUlcG2ND3vYELaR7UM8arW3kXD2gNdj5NM+hKArBEDTw8AkRfClXLwiYExwaTZxbuCWmrCGy6n5+0HCh3IM+jFhyozi3HRSS5aGnD2B/08JwQzZ6oYAPxBrI4ud2Ay0J1flW7WJuopEtb74oWPMUYev4PZA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821868; c=relaxed/simple; bh=86k3OgDjSeb7bg0evk6lxfbo5SCeIjmcFgI9djfHaUM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ozD0ITwzhwWIjTcetl/Z8h47jC261itb0QOZPA6ieC1+1njhylKUBQtRIsVglWvIz2Cpy0NJxVkTniNL2YBGDLev7yYyWK7iyKhzEIm4bHctsfhrvAheNoZcE1fUMrObkCrmBNjEpA0uhZRp2t9qtfCbNZB2CF6/CHb0Kq5e9R8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=iWWu+ZTN; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="iWWu+ZTN" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821865; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ebwyIU06MuM/rohj935yndHBMzVxl0C8QUy2GpvD5Aw=; b=iWWu+ZTN9CWVPDN2RBz+8QWLFbLtguIdLgRyie0oaeqCUkNpMu8E/vKOZI6m+dC4vReVL9 IggWhF/W71RYOTwLT6eOQWcCXSwv6XlrDP2O2sk4OKqkEXYycung4kQIEheWULSdNoPWXN Qf/766GmWV8Y9DnRMYYaQpfmhxBjYOs= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-245-VoZ9gbt4O1inDiqOuGXWkw-1; Mon, 15 Dec 2025 13:02:55 -0500 X-MC-Unique: VoZ9gbt4O1inDiqOuGXWkw-1 X-Mimecast-MFC-AGG-ID: VoZ9gbt4O1inDiqOuGXWkw_1765821773 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id ADFEE188EA49; Mon, 15 Dec 2025 17:59:30 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 47E39180045B; Mon, 15 Dec 2025 17:59:26 +0000 (UTC) 
From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 16/17] rust: sync: lock/global: Add BackendInContext support to GlobalLock Date: Mon, 15 Dec 2025 12:58:03 -0500 Message-ID: <20251215175806.102713-17-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" Now that we have the ability to provide an explicit lifetime for a GlobalGuard and an explicit Backend for a GlobalGuard, we can finally implement lock_with() and try_lock_with(). Signed-off-by: Lyude Paul --- rust/kernel/sync/lock/global.rs | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global= .rs index 64fc7e7a4b282..7aee9b25baefc 100644 --- a/rust/kernel/sync/lock/global.rs +++ b/rust/kernel/sync/lock/global.rs @@ -89,6 +89,34 @@ pub fn try_lock(&'static self) -> Option> { inner: self.inner.try_lock()?, }) } + + /// Lock this global lock with the provided `context`. + pub fn lock_with<'a, B>( + &'static self, + context: ::Context<'a>, + ) -> GlobalGuard<'a, G, B> + where + G::Backend: Backend, + B: Backend, + { + GlobalGuard { + inner: self.inner.lock_with(context), + } + } + + /// Try to lock this global lock with the provided `context`. + pub fn try_lock_with<'a, B>( + &'static self, + context: ::Context<'a>, + ) -> Option> + where + G::Backend: Backend, + B: Backend, + { + Some(GlobalGuard { + inner: self.inner.try_lock_with(context)?, + }) + } } =20 /// A guard for a [`GlobalLock`]. 
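As a usage illustration (not part of the patch itself): the sketch below shows how the two new methods might be called once a context-bearing backend is in play. `MY_LOCK` is a hypothetical global lock declared elsewhere with `global_lock!` that protects a `u32`, and `Token<'a>` is a stand-in name for its backend's `Context<'a>` type (for a SpinLockIrq-style backend, a proof that local interrupts are disabled); how such a token is obtained is backend-specific and assumed here.

// Sketch only. `MY_LOCK` and `Token<'a>` are hypothetical names standing
// in for a `global_lock!` declaration and its backend's `Context<'a>` type.
fn bump(token: Token<'_>) -> u32 {
    // lock_with() ties the guard to the context's lifetime instead of
    // requiring 'static, and the guard's Backend is the in-context backend.
    let mut guard = MY_LOCK.lock_with(token);
    *guard += 1;
    *guard
}

try_lock_with() has the same shape but returns an Option, mirroring the existing try_lock().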
--=20 2.52.0 From nobody Fri Dec 19 15:38:54 2025 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 527412EC571 for ; Mon, 15 Dec 2025 18:03:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821811; cv=none; b=MSAlVDjE40KN5yVz/bojGvd+JdHOKoBTveOiRT5GtlEtYa8xbs93UftNiA84DPsjy1gOrIlC58IScjjK6vSQ8Tc9RxyvyzaIowTJZoidzh/YOqhpj1jXtR4ydgleLMDKdh5TFWnrmd4dLM8IpvDapBwdmEXs+j7lYWwgVDLsq/k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765821811; c=relaxed/simple; bh=fPxAdolV1q9+D0A+THWo+HP/AlBcsMa5ra6MfSfe9+U=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lwGHj5lDBreI/GeMO83BtsMG0IF0WpVZikl6W1nuALRTihT1FTov7/1o+rkrgyrfgjosmRrAq4YGeRfCpidvuPIjX+b0Ffv4isYmYy7lj6yzLGGoywiuoBo5zrm7++wYVRZNj38e1r7sGYgn2MsYd4iP6q1RKWxuHWAhimU5r3A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=HR2Mz9gM; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="HR2Mz9gM" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1765821806; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=hPS5Nw8KLa1+bD9ZupFJcOjHObvSWqJUha8JDoIkYoY=; b=HR2Mz9gMN12MHKyHYpJ+hTKLSGT9Cv2kVuKoz1qXm4Cyj141RWn0xWrEnBwJRVCblr0dAZ 1+n67Q0WlAhopFwJmqEVNXr0sGSLIRFEuVuVl5KUz2oBdnTbS/tKvXv03DjUw6Cc6V3JL3 rtcixLsHwJMe/PQUuWNFFgnIWs0H8To= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-654-1LkcYrb2NnOG6FNbIq1MJQ-1; Mon, 15 Dec 2025 13:02:55 -0500 X-MC-Unique: 1LkcYrb2NnOG6FNbIq1MJQ-1 X-Mimecast-MFC-AGG-ID: 1LkcYrb2NnOG6FNbIq1MJQ_1765821773 Received: from mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.111]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E5F3B189C312; Mon, 15 Dec 2025 17:59:35 +0000 (UTC) Received: from chopper.lan (unknown [10.22.81.30]) by mx-prod-int-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 105FE180044F; Mon, 15 Dec 2025 17:59:30 +0000 (UTC) From: Lyude Paul To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner Cc: Boqun Feng , Daniel Almeida , Miguel Ojeda , Alex Gaynor , Gary Guo , =?UTF-8?q?Bj=C3=B6rn=20Roy=20Baron?= , Benno Lossin , Andreas Hindborg 
, Alice Ryhl , Trevor Gross , Danilo Krummrich , Andrew Morton , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long Subject: [PATCH v16 17/17] locking: Switch to _irq_{disable,enable}() variants in cleanup guards Date: Mon, 15 Dec 2025 12:58:04 -0500 Message-ID: <20251215175806.102713-18-lyude@redhat.com> In-Reply-To: <20251215175806.102713-1-lyude@redhat.com> References: <20251215175806.102713-1-lyude@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.111 Content-Type: text/plain; charset="utf-8" From: Boqun Feng The semantics of various irq disabling guards match what *_irq_{disable,enable}() provide, i.e. the interrupt disabling is properly nested, therefore it's OK to switch to use *_irq_{disable,enable}() primitives. Signed-off-by: Boqun Feng --- V10: * Add PREEMPT_RT build fix from Guangbo Cui include/linux/spinlock.h | 26 ++++++++++++-------------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index bbbee61c6f5df..72bb6ae5319c7 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -568,10 +568,10 @@ DEFINE_LOCK_GUARD_1(raw_spinlock_nested, raw_spinlock= _t, raw_spin_unlock(_T->lock)) =20 DEFINE_LOCK_GUARD_1(raw_spinlock_irq, raw_spinlock_t, - raw_spin_lock_irq(_T->lock), - raw_spin_unlock_irq(_T->lock)) + raw_spin_lock_irq_disable(_T->lock), + raw_spin_unlock_irq_enable(_T->lock)) =20 -DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irq, _try, raw_spin_trylock_irq(_T->= lock)) +DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irq, _try, raw_spin_trylock_irq_disa= ble(_T->lock)) =20 DEFINE_LOCK_GUARD_1(raw_spinlock_bh, raw_spinlock_t, raw_spin_lock_bh(_T->lock), @@ -580,12 +580,11 @@ DEFINE_LOCK_GUARD_1(raw_spinlock_bh, raw_spinlock_t, DEFINE_LOCK_GUARD_1_COND(raw_spinlock_bh, _try, raw_spin_trylock_bh(_T->lo= ck)) =20 DEFINE_LOCK_GUARD_1(raw_spinlock_irqsave, raw_spinlock_t, - raw_spin_lock_irqsave(_T->lock, _T->flags), - raw_spin_unlock_irqrestore(_T->lock, _T->flags), - unsigned long flags) + raw_spin_lock_irq_disable(_T->lock), + raw_spin_unlock_irq_enable(_T->lock)) =20 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irqsave, _try, - raw_spin_trylock_irqsave(_T->lock, _T->flags)) + raw_spin_trylock_irq_disable(_T->lock)) =20 DEFINE_LOCK_GUARD_1(spinlock, spinlock_t, spin_lock(_T->lock), @@ -594,11 +593,11 @@ DEFINE_LOCK_GUARD_1(spinlock, spinlock_t, DEFINE_LOCK_GUARD_1_COND(spinlock, _try, spin_trylock(_T->lock)) =20 DEFINE_LOCK_GUARD_1(spinlock_irq, spinlock_t, - spin_lock_irq(_T->lock), - spin_unlock_irq(_T->lock)) + spin_lock_irq_disable(_T->lock), + spin_unlock_irq_enable(_T->lock)) =20 DEFINE_LOCK_GUARD_1_COND(spinlock_irq, _try, - spin_trylock_irq(_T->lock)) + spin_trylock_irq_disable(_T->lock)) =20 DEFINE_LOCK_GUARD_1(spinlock_bh, spinlock_t, spin_lock_bh(_T->lock), @@ -608,12 +607,11 @@ DEFINE_LOCK_GUARD_1_COND(spinlock_bh, _try, spin_trylock_bh(_T->lock)) =20 DEFINE_LOCK_GUARD_1(spinlock_irqsave, spinlock_t, - spin_lock_irqsave(_T->lock, _T->flags), - spin_unlock_irqrestore(_T->lock, _T->flags), - unsigned long flags) + spin_lock_irq_disable(_T->lock), + spin_unlock_irq_enable(_T->lock)) =20 DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _try, - spin_trylock_irqsave(_T->lock, _T->flags)) + spin_trylock_irq_disable(_T->lock)) =20 DEFINE_LOCK_GUARD_1(read_lock, rwlock_t, read_lock(_T->lock), --=20 2.52.0
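Stepping back from the C-side change above to the Rust side of the series: after patches 12-16, the `global_lock!` doc-test touched by the hunks earlier in this series ends up with roughly the shape sketched below. This is reassembled from the fragments visible in the diffs rather than copied verbatim; in particular the exact `global_lock!` invocation (lock kind and initializer) and the surrounding item names should be read as assumptions.

use kernel::prelude::*;
use kernel::sync::{Backend, GlobalGuard, GlobalLockedBy};

kernel::sync::global_lock! {
    // SAFETY: Assumed to be initialized in the module initializer before
    // first use, as in the existing doc-test.
    unsafe(uninit) static MY_MUTEX: Mutex<u32> = 0;
}

struct MyStruct {
    my_counter: GlobalLockedBy<u32, MY_MUTEX>,
}

impl MyStruct {
    /// Increment the counter in this instance.
    ///
    /// The caller must hold the `MY_MUTEX` mutex.
    fn increment<B: Backend>(&self, guard: &mut GlobalGuard<'_, MY_MUTEX, B>) -> u32 {
        // A guard for MY_MUTEX with any Backend B proves exclusive access,
        // so as_mut() is generic over the guard's backend.
        let my_counter = self.my_counter.as_mut(guard);
        *my_counter += 1;
        *my_counter
    }
}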