From: "Zhang, Qiang1"
To: "paulmck@kernel.org"
CC: Joel Fernandes, "frederic@kernel.org", "quic_neeraju@quicinc.com",
 "rcu@vger.kernel.org", "linux-kernel@vger.kernel.org"
Subject: RE: [PATCH v2] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
Date: Wed, 30 Nov 2022 00:26:37 +0000
References: <24EC376D-B542-4E3C-BC10-3E81F2F2F49C@joelfernandes.org>
 <20221129151810.GY4001@paulmck-ThinkPad-P17-Gen-1>
In-Reply-To: <20221129151810.GY4001@paulmck-ThinkPad-P17-Gen-1>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Nov 29, 2022 at 06:25:04AM +0000, Zhang, Qiang1 wrote:
> > On Nov 28, 2022, at 11:54 PM, Zhang, Qiang1 wrote:
> > 
> > On Mon, Nov 28, 2022 at 10:34:28PM +0800, Zqiang wrote:
> >> Currently, rcu_tasks_rude_wait_gp() is invoked to wait for one rude
> >> RCU-Tasks grace period.  If __num_online_cpus == 1, it returns
> >> directly, indicating the end of the rude RCU-Tasks grace period.
> >> Suppose the system has two CPUs, and consider the following scenario:
> >> 
> >> CPU0                                   CPU1 (going offline)
> >>                                        migration/1 task:
> >>                                        cpu_stopper_thread
> >>                                         -> take_cpu_down
> >>                                            -> _cpu_disable
> >>                                               (dec __num_online_cpus)
> >>                                            -> cpuhp_invoke_callback
> >>                                                 preempt_disable
> >>                                                 access old_data0
> >>            task1
> >> del old_data0                                   .....
> >> synchronize_rcu_tasks_rude()
> >> task1 schedule out
> >> ....
> >> task2 schedule in
> >> rcu_tasks_rude_wait_gp()
> >>   -> __num_online_cpus == 1
> >>      -> return
> >> ....
> >> task1 schedule in
> >> -> free old_data0
> >> preempt_enable
> >> 
> >> When CPU1 decrements __num_online_cpus down to one, CPU1 has not yet
> >> finished going offline: the stop-machine task (migration/1) is still
> >> running on CPU1 and may still be accessing 'old_data0', but
> >> 'old_data0' has already been freed on CPU0.
> >> 
> >> This commit adds cpus_read_lock()/cpus_read_unlock() protection around
> >> the access to the __num_online_cpus variable, ensuring that any CPU in
> >> the process of going offline has completed going offline.
> >> 
> >> Signed-off-by: Zqiang
> >> 
> >> First, good eyes and good catch!!!
> >> 
> >> The purpose of that check for num_online_cpus() is not performance
> >> on single-CPU systems, but rather correct operation during early boot.
> >> So a simpler way to make that work is to check for RCU_SCHEDULER_RUNNING,
> >> for example, as follows:
> >> 
> >> 	if (rcu_scheduler_active != RCU_SCHEDULER_RUNNING &&
> >> 	    num_online_cpus() <= 1)
> >> 		return;  // Early boot fastpath for only one CPU.
> > 
> > Hi Paul
> > 
> > During system startup, RCU_SCHEDULER_RUNNING is set only after the
> > other CPUs have been started:
> > 
> > CPU0                                    CPU1
> > 
> > if (rcu_scheduler_active !=
> >     RCU_SCHEDULER_RUNNING &&
> >     __num_online_cpus == 1)
> >         return;                         inc __num_online_cpus
> >                                         (__num_online_cpus == 2)
> > 
> > CPU0 did not notice CPU1's update of the __num_online_cpus variable in
> > time.  Can we move rcu_set_runtime_mode() before smp_init()?
> > Any thoughts?
> >
> >Is anyone expected to do an rcu-tasks operation before the scheduler is
> >running?
> 
> Not sure if such a scenario exists.
> 
> >Typically this requires the tasks to context switch, which is a
> >scheduler operation.
> >
> >If the scheduler is not yet running, then I don't think missing an
> >update to __num_online_cpus matters, since no one does a tasks-RCU
> >synchronize.
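
For reference, the boot ordering behind the question above about moving
rcu_set_runtime_mode() before smp_init() looks roughly like this.  This is a
simplified sketch of init/main.c (circa v6.1), not a patch; elided steps are
omitted and exact placement may differ by kernel version:

/* Simplified sketch of kernel_init_freeable() in init/main.c (not a patch). */
static noinline void __init kernel_init_freeable(void)
{
	/*
	 * The scheduler is already initialized by this point, so the
	 * kernel_init task can context-switch, and therefore could in
	 * principle already do a tasks-RCU synchronize.
	 */
	smp_init();		/* Bring up secondary CPUs; each one
				 * increments __num_online_cpus as it
				 * comes online. */
	sched_init_smp();
	/* ... */
	do_basic_setup();	/* Runs initcalls.  rcu_set_runtime_mode() is
				 * a core_initcall, so rcu_scheduler_active
				 * becomes RCU_SCHEDULER_RUNNING only here,
				 * after smp_init() has already run. */
	/* ... */
}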
> 
> Hi Joel
> 
> After the kernel_init task runs, but before smp_init() is called to start
> the other CPUs, the scheduler has already been initialized, so task
> context switching can occur.
> 
>Good catch, thank you both.  For some reason, I was thinking that the
>additional CPUs did not come online until later.
>
>So how about this?
>
>	if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE)
>		return;  // Early boot fastpath.

If we use RCU_SCHEDULER_INACTIVE for the check, can we make the following
changes?

--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -562,8 +562,9 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 static void synchronize_rcu_tasks_generic(struct rcu_tasks *rtp)
 {
 	/* Complain if the scheduler has not started.  */
-	WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
-		  "synchronize_rcu_tasks called too soon");
+	if (WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
+		      "synchronize_rcu_tasks called too soon"))
+		return;
 
 	// If the grace-period kthread is running, use it.
 	if (READ_ONCE(rtp->kthread_ptr)) {
@@ -1066,9 +1066,6 @@ static void rcu_tasks_be_rude(struct work_struct *work)
 // Wait for one rude RCU-tasks grace period.
 static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
 {
-	if (num_online_cpus() <= 1)
-		return;	// Fastpath for only one CPU.
-
 	rtp->n_ipis += cpumask_weight(cpu_online_mask);
 	schedule_on_each_cpu(rcu_tasks_be_rude);
 }

Thanks
Zqiang

>
>If this condition is true, there is only one CPU and no scheduler,
>thus no preemption.
>
>						Thanx, Paul
>
> Thanks
> Zqiang
> 
> >
> >Or did I miss something?
> >
> >Thanks.
> >
> > Thanks
> > Zqiang
> > 
> >> 
> >> This works because rcu_scheduler_active is set to RCU_SCHEDULER_RUNNING
> >> long before it is possible to offline CPUs.
> >> 
> >> Yes, schedule_on_each_cpu() does do cpus_read_lock(), again, good eyes,
> >> and it also unnecessarily does the schedule_work_on() on the current CPU,
> >> but the code calling synchronize_rcu_tasks_rude() is on high-overhead
> >> code paths, so this overhead is down in the noise.
> >> 
> >> Until further notice, anyway.
> >> 
> >> So simplicity is much more important than performance in this code.
> >> So just adding the check for RCU_SCHEDULER_RUNNING should fix this,
> >> unless I am missing something (always possible!).
> >> 
> >> 						Thanx, Paul
> >> 
> >> ---
> >>  kernel/rcu/tasks.h | 20 ++++++++++++++++++--
> >>  1 file changed, 18 insertions(+), 2 deletions(-)
> >> 
> >> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> >> index 4a991311be9b..08e72c6462d8 100644
> >> --- a/kernel/rcu/tasks.h
> >> +++ b/kernel/rcu/tasks.h
> >> @@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
> >>  {
> >>  }
> >> 
> >> +static DEFINE_PER_CPU(struct work_struct, rude_work);
> >> +
> >>  // Wait for one rude RCU-tasks grace period.
> >>  static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
> >>  {
> >> +	int cpu;
> >> +	struct work_struct *work;
> >> +
> >> +	cpus_read_lock();
> >>  	if (num_online_cpus() <= 1)
> >> -		return;	// Fastpath for only one CPU.
> >> +		goto end;	// Fastpath for only one CPU.
> >> 
> >>  	rtp->n_ipis += cpumask_weight(cpu_online_mask);
> >> -	schedule_on_each_cpu(rcu_tasks_be_rude);
> >> +	for_each_online_cpu(cpu) {
> >> +		work = per_cpu_ptr(&rude_work, cpu);
> >> +		INIT_WORK(work, rcu_tasks_be_rude);
> >> +		schedule_work_on(cpu, work);
> >> +	}
> >> +
> >> +	for_each_online_cpu(cpu)
> >> +		flush_work(per_cpu_ptr(&rude_work, cpu));
> >> +
> >> +end:
> >> +	cpus_read_unlock();
> >>  }
> >> 
> >>  void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);
> >> -- 
> >> 2.25.1
> >> 
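
To make concrete why the too-early return in rcu_tasks_rude_wait_gp()
matters, here is a minimal sketch of the updater/reader pattern from the
scenario at the top of this thread.  It is illustrative only: struct foo,
gp, update(), reader(), and do_something() are hypothetical names, not code
in the tree.

#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Illustrative sketch; all names here are hypothetical. */
struct foo { int data; };

static struct foo __rcu *gp;

static void do_something(struct foo *p)
{
	/* ... read p->data ... */
}

/* Updater, playing the role of task1 on CPU0 in the scenario. */
static void update(struct foo *new)
{
	struct foo *old = rcu_replace_pointer(gp, new, true);	/* "del old_data0" */

	/*
	 * Must not return until every CPU has passed through a context
	 * switch (or been IPIed), including a CPU whose offline-path
	 * stop-machine task still has preemption disabled.
	 */
	synchronize_rcu_tasks_rude();
	kfree(old);						/* "free old_data0" */
}

/* Reader, playing the role of migration/1 on CPU1: a rude tasks-RCU
 * read-side section is simply a preemption-disabled region. */
static void reader(void)
{
	struct foo *p;

	preempt_disable();
	p = rcu_dereference_sched(gp);	/* may still observe old_data0 */
	if (p)
		do_something(p);	/* use-after-free if the GP returned early */
	preempt_enable();
}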