From: Menglong Dong
To: peterz@infradead.org, ast@kernel.org
Cc: mingo@redhat.com, paulmck@kernel.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@fomichev.me, haoluo@google.com, jolsa@kernel.org, tzimmermann@suse.de, simona.vetter@ffwll.ch, jani.nikula@intel.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v5 1/4] arch: add the macro COMPILE_OFFSETS to all the asm-offsets.c
Date: Wed, 17 Sep 2025 14:09:13 +0800
Message-ID: <20250917060916.462278-2-dongml2@chinatelecom.cn>
In-Reply-To: <20250917060916.462278-1-dongml2@chinatelecom.cn>
References: <20250917060916.462278-1-dongml2@chinatelecom.cn>

The header include/generated/asm-offsets.h is generated by Kbuild during
compilation from arch/$(SRCARCH)/kernel/asm-offsets.c. When we want to
generate another, similar offset header file, a circular dependency can
arise.

For example, suppose we want to generate an offset file
include/generated/test.h that is included by include/sched/sched.h. If we
generate asm-offsets.h first, the build fails, as include/sched/sched.h is
included by asm-offsets.c and include/generated/test.h doesn't exist yet.
If we generate test.h first, it can't succeed either, as
include/generated/asm-offsets.h is included by it.

On x86_64, the macro COMPILE_OFFSETS is used to avoid such a circular
dependency: we can generate asm-offsets.h first, and if COMPILE_OFFSETS is
defined, we don't include "generated/test.h". So define the macro
COMPILE_OFFSETS in all the asm-offsets.c files for this purpose.
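To make the mechanism concrete, a minimal sketch of the guard pattern (the
test.h name is the hypothetical example from above, not part of this patch):

  /* A header that wants a generated offsets file, e.g. the test.h example: */
  #ifndef COMPILE_OFFSETS
  #include <generated/test.h>	/* skipped while the offsets are being generated */
  #endif

  /*
   * Every offsets generator (asm-offsets.c here, rq-offsets.c later in
   * this series) defines the guard before any #include:
   */
  #define COMPILE_OFFSETS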
Signed-off-by: Menglong Dong --- arch/alpha/kernel/asm-offsets.c | 1 + arch/arc/kernel/asm-offsets.c | 1 + arch/arm/kernel/asm-offsets.c | 2 ++ arch/arm64/kernel/asm-offsets.c | 1 + arch/csky/kernel/asm-offsets.c | 1 + arch/hexagon/kernel/asm-offsets.c | 1 + arch/loongarch/kernel/asm-offsets.c | 2 ++ arch/m68k/kernel/asm-offsets.c | 1 + arch/microblaze/kernel/asm-offsets.c | 1 + arch/mips/kernel/asm-offsets.c | 2 ++ arch/nios2/kernel/asm-offsets.c | 1 + arch/openrisc/kernel/asm-offsets.c | 1 + arch/parisc/kernel/asm-offsets.c | 1 + arch/powerpc/kernel/asm-offsets.c | 1 + arch/riscv/kernel/asm-offsets.c | 1 + arch/s390/kernel/asm-offsets.c | 1 + arch/sh/kernel/asm-offsets.c | 1 + arch/sparc/kernel/asm-offsets.c | 1 + arch/um/kernel/asm-offsets.c | 2 ++ arch/xtensa/kernel/asm-offsets.c | 1 + 20 files changed, 24 insertions(+) diff --git a/arch/alpha/kernel/asm-offsets.c b/arch/alpha/kernel/asm-offset= s.c index e9dad60b147f..1ebb05890499 100644 --- a/arch/alpha/kernel/asm-offsets.c +++ b/arch/alpha/kernel/asm-offsets.c @@ -4,6 +4,7 @@ * This code generates raw asm output which is post-processed to extract * and format the required data. */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/arc/kernel/asm-offsets.c b/arch/arc/kernel/asm-offsets.c index f77deb799175..2978da85fcb6 100644 --- a/arch/arc/kernel/asm-offsets.c +++ b/arch/arc/kernel/asm-offsets.c @@ -2,6 +2,7 @@ /* * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.c= om) */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c index 123f4a8ef446..2101938d27fc 100644 --- a/arch/arm/kernel/asm-offsets.c +++ b/arch/arm/kernel/asm-offsets.c @@ -7,6 +7,8 @@ * This code generates raw asm output which is post-processed to extract * and format the required data. */ +#define COMPILE_OFFSETS + #include #include #include diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offset= s.c index 30d4bbe68661..b6367ff3a49c 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -6,6 +6,7 @@ * 2001-2002 Keith Owens * Copyright (C) 2012 ARM Ltd. */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/csky/kernel/asm-offsets.c b/arch/csky/kernel/asm-offsets.c index d1e903579473..5525c8e7e1d9 100644 --- a/arch/csky/kernel/asm-offsets.c +++ b/arch/csky/kernel/asm-offsets.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 // Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd. +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/hexagon/kernel/asm-offsets.c b/arch/hexagon/kernel/asm-of= fsets.c index 03a7063f9456..50eea9fa6f13 100644 --- a/arch/hexagon/kernel/asm-offsets.c +++ b/arch/hexagon/kernel/asm-offsets.c @@ -8,6 +8,7 @@ * * Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/as= m-offsets.c index db1e4bb26b6a..3017c7157600 100644 --- a/arch/loongarch/kernel/asm-offsets.c +++ b/arch/loongarch/kernel/asm-offsets.c @@ -4,6 +4,8 @@ * * Copyright (C) 2020-2022 Loongson Technology Corporation Limited */ +#define COMPILE_OFFSETS + #include #include #include diff --git a/arch/m68k/kernel/asm-offsets.c b/arch/m68k/kernel/asm-offsets.c index 906d73230537..67a1990f9d74 100644 --- a/arch/m68k/kernel/asm-offsets.c +++ b/arch/m68k/kernel/asm-offsets.c @@ -9,6 +9,7 @@ * #defines from the assembly-language output. 
*/ =20 +#define COMPILE_OFFSETS #define ASM_OFFSETS_C =20 #include diff --git a/arch/microblaze/kernel/asm-offsets.c b/arch/microblaze/kernel/= asm-offsets.c index 104c3ac5f30c..b4b67d58e7f6 100644 --- a/arch/microblaze/kernel/asm-offsets.c +++ b/arch/microblaze/kernel/asm-offsets.c @@ -7,6 +7,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c index 1e29efcba46e..5debd9a3854a 100644 --- a/arch/mips/kernel/asm-offsets.c +++ b/arch/mips/kernel/asm-offsets.c @@ -9,6 +9,8 @@ * Kevin Kissell, kevink@mips.com and Carsten Langgaard, carstenl@mips.com * Copyright (C) 2000 MIPS Technologies, Inc. */ +#define COMPILE_OFFSETS + #include #include #include diff --git a/arch/nios2/kernel/asm-offsets.c b/arch/nios2/kernel/asm-offset= s.c index e3d9b7b6fb48..88190b503ce5 100644 --- a/arch/nios2/kernel/asm-offsets.c +++ b/arch/nios2/kernel/asm-offsets.c @@ -2,6 +2,7 @@ /* * Copyright (C) 2011 Tobias Klauser */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/openrisc/kernel/asm-offsets.c b/arch/openrisc/kernel/asm-= offsets.c index 710651d5aaae..3cc826f2216b 100644 --- a/arch/openrisc/kernel/asm-offsets.c +++ b/arch/openrisc/kernel/asm-offsets.c @@ -18,6 +18,7 @@ * compile this file to assembler, and then extract the * #defines from the assembly-language output. */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/parisc/kernel/asm-offsets.c b/arch/parisc/kernel/asm-offs= ets.c index 757816a7bd4b..9abfe65492c6 100644 --- a/arch/parisc/kernel/asm-offsets.c +++ b/arch/parisc/kernel/asm-offsets.c @@ -13,6 +13,7 @@ * Copyright (C) 2002 Randolph Chung * Copyright (C) 2003 James Bottomley */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-of= fsets.c index b3048f6d3822..a4bc80b30410 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -8,6 +8,7 @@ * compile this file to assembler, and then extract the * #defines from the assembly-language output. */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offset= s.c index 6e8c0d6feae9..7d42d3b8a32a 100644 --- a/arch/riscv/kernel/asm-offsets.c +++ b/arch/riscv/kernel/asm-offsets.c @@ -3,6 +3,7 @@ * Copyright (C) 2012 Regents of the University of California * Copyright (C) 2017 SiFive */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/s390/kernel/asm-offsets.c b/arch/s390/kernel/asm-offsets.c index 95ecad9c7d7d..a8915663e917 100644 --- a/arch/s390/kernel/asm-offsets.c +++ b/arch/s390/kernel/asm-offsets.c @@ -4,6 +4,7 @@ * This code generates raw asm output which is post-processed to extract * and format the required data. */ +#define COMPILE_OFFSETS =20 #include #include diff --git a/arch/sh/kernel/asm-offsets.c b/arch/sh/kernel/asm-offsets.c index a0322e832845..429b6a763146 100644 --- a/arch/sh/kernel/asm-offsets.c +++ b/arch/sh/kernel/asm-offsets.c @@ -8,6 +8,7 @@ * compile this file to assembler, and then extract the * #defines from the assembly-language output. 
 */
+#define COMPILE_OFFSETS
 
 #include
 #include
diff --git a/arch/sparc/kernel/asm-offsets.c b/arch/sparc/kernel/asm-offsets.c
index 3d9b9855dce9..6e660bde48dd 100644
--- a/arch/sparc/kernel/asm-offsets.c
+++ b/arch/sparc/kernel/asm-offsets.c
@@ -10,6 +10,7 @@
  *
  * On sparc, thread_info data is static and TI_XXX offsets are computed by hand.
  */
+#define COMPILE_OFFSETS
 
 #include
 #include
diff --git a/arch/um/kernel/asm-offsets.c b/arch/um/kernel/asm-offsets.c
index 1fb12235ab9c..a69873aa697f 100644
--- a/arch/um/kernel/asm-offsets.c
+++ b/arch/um/kernel/asm-offsets.c
@@ -1 +1,3 @@
+#define COMPILE_OFFSETS
+
 #include
diff --git a/arch/xtensa/kernel/asm-offsets.c b/arch/xtensa/kernel/asm-offsets.c
index da38de20ae59..cfbced95e944 100644
--- a/arch/xtensa/kernel/asm-offsets.c
+++ b/arch/xtensa/kernel/asm-offsets.c
@@ -11,6 +11,7 @@
  *
  * Chris Zankel
  */
+#define COMPILE_OFFSETS
 
 #include
 #include
-- 
2.51.0
From: Menglong Dong
To: peterz@infradead.org, ast@kernel.org
Cc: mingo@redhat.com, paulmck@kernel.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@fomichev.me, haoluo@google.com, jolsa@kernel.org, tzimmermann@suse.de, simona.vetter@ffwll.ch, jani.nikula@intel.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v5 2/4] rcu: replace preempt.h with sched.h in include/linux/rcupdate.h
Date: Wed, 17 Sep 2025 14:09:14 +0800
Message-ID: <20250917060916.462278-3-dongml2@chinatelecom.cn>
In-Reply-To: <20250917060916.462278-1-dongml2@chinatelecom.cn>
References: <20250917060916.462278-1-dongml2@chinatelecom.cn>

In the next commit, we will move the definitions of migrate_enable() and
migrate_disable() to linux/sched.h. However, migrate_enable/migrate_disable
are used by commit 1b93c03fb319 ("rcu: add rcu_read_lock_dont_migrate()")
in the bpf-next tree. To avoid a potential compile error, replace
linux/preempt.h with linux/sched.h in include/linux/rcupdate.h.
Signed-off-by: Menglong Dong
---
 include/linux/rcupdate.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 120536f4c6eb..8f346c847ee5 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -24,7 +24,7 @@
 #include
 #include
 #include
-#include <linux/preempt.h>
+#include <linux/sched.h>
 #include
 #include
 #include
-- 
2.51.0
From: Menglong Dong
To: peterz@infradead.org, ast@kernel.org
Cc: mingo@redhat.com, paulmck@kernel.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@fomichev.me, haoluo@google.com, jolsa@kernel.org, tzimmermann@suse.de, simona.vetter@ffwll.ch, jani.nikula@intel.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v5 3/4] sched: make migrate_enable/migrate_disable inline
Date: Wed, 17 Sep 2025 14:09:15 +0800
Message-ID: <20250917060916.462278-4-dongml2@chinatelecom.cn>
In-Reply-To: <20250917060916.462278-1-dongml2@chinatelecom.cn>
References: <20250917060916.462278-1-dongml2@chinatelecom.cn>

For now, migrate_enable and migrate_disable are global functions, which
makes them hotspots in some cases. Take BPF for example: the calls to
migrate_enable and migrate_disable in the BPF trampoline can introduce
significant overhead. The following is the 'perf top' output of the FENTRY
benchmark (./tools/testing/selftests/bpf/bench trig-fentry):

  54.63% bpf_prog_2dcccf652aac1793_bench_trigger_fentry [k] bpf_prog_2dcccf652aac1793_bench_trigger_fentry
  10.43% [kernel]                [k] migrate_enable
  10.07% bpf_trampoline_6442517037 [k] bpf_trampoline_6442517037
   8.06% [kernel]                [k] __bpf_prog_exit_recur
   4.11% libc.so.6               [.] syscall
   2.15% [kernel]                [k] entry_SYSCALL_64
   1.48% [kernel]                [k] memchr_inv
   1.32% [kernel]                [k] fput
   1.16% [kernel]                [k] _copy_to_user
   0.73% [kernel]                [k] bpf_prog_test_run_raw_tp

So in this commit, we make migrate_enable/migrate_disable inline to obtain
better performance.

The struct rq is defined internally in kernel/sched/sched.h, and the field
"nr_pinned" is accessed in migrate_enable/migrate_disable, which makes it
hard to inline them.
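For reference, the offset is exported at build time with the usual
asm-offsets technique; a rough sketch, assuming the standard DEFINE()
helper from include/linux/kbuild.h (the numeric value below is
illustrative only):

  /* kernel/sched/rq-offsets.c, added below, emits a marker via DEFINE(): */
  #define DEFINE(sym, val) \
          asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))

  /*
   * Compiling it to assembly leaves a line roughly like
   *     .ascii "->RQ_nr_pinned $2944 offsetof(struct rq, nr_pinned)"
   * in rq-offsets.s, which the filechk_offsets rule (see the Kbuild hunk
   * below) turns into
   *     #define RQ_nr_pinned 2944
   * in include/generated/rq-offsets.h.
   */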
Alexei Starovoitov suggested generating the offset of "nr_pinned" in [1],
so we can define migrate_enable/migrate_disable in include/linux/sched.h
and access "this_rq()->nr_pinned" as "(void *)this_rq() + RQ_nr_pinned".
The offset of "nr_pinned" is generated in include/generated/rq-offsets.h
by kernel/sched/rq-offsets.c.

Generally speaking, we move the definitions of migrate_enable and
migrate_disable from kernel/sched/core.c to include/linux/sched.h. The
call to __set_cpus_allowed_ptr() is left in ___migrate_enable().

"struct rq" is not available in include/linux/sched.h, so we can't access
"runqueues" with this_cpu_ptr(), as compilation would fail in
this_cpu_ptr() -> raw_cpu_ptr() -> __verify_pcpu_ptr():

  typeof((ptr) + 0)

So we introduce this_rq_raw() and access the runqueues with
arch_raw_cpu_ptr()/PERCPU_PTR() directly.

The variable "runqueues" is not visible to kernel modules, and exporting
it is not a good idea. As Peter Zijlstra advised in [2], we also define
and export migrate_enable/migrate_disable in kernel/sched/core.c, and use
those for the modules.

Before this patch, the performance of BPF FENTRY is:

  fentry : 113.030 ± 0.149M/s
  fentry : 112.501 ± 0.187M/s
  fentry : 112.828 ± 0.267M/s
  fentry : 115.287 ± 0.241M/s

After this patch, the performance of BPF FENTRY increases to:

  fentry : 143.644 ± 0.670M/s
  fentry : 149.764 ± 0.362M/s
  fentry : 149.642 ± 0.156M/s
  fentry : 145.263 ± 0.221M/s

Link: https://lore.kernel.org/bpf/CAADnVQ+5sEDKHdsJY5ZsfGDO_1SEhhQWHrt2SMBG5SYyQ+jt7w@mail.gmail.com/ [1]
Link: https://lore.kernel.org/all/20250819123214.GH4067720@noisy.programming.kicks-ass.net/ [2]
Signed-off-by: Menglong Dong
---
v5:
- fix the comment style problem in include/linux/sched.h
v4:
- rename CREATE_MIGRATE_DISABLE to INSTANTIATE_EXPORTED_MIGRATE_DISABLE
- add documentation for INSTANTIATE_EXPORTED_MIGRATE_DISABLE
v3:
- don't export runqueues; define migrate_enable and migrate_disable in
  kernel/sched/core.c and use them for kernel modules instead
- define the macro this_rq_pinned()
- add some comments for this_rq_raw()
v2:
- use PERCPU_PTR() for this_rq_raw() if !CONFIG_SMP
---
 Kbuild                    |  13 ++++-
 include/linux/preempt.h   |   3 -
 include/linux/sched.h     | 114 ++++++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c     |   1 +
 kernel/sched/core.c       |  63 +++++----------------
 kernel/sched/rq-offsets.c |  12 ++++
 6 files changed, 153 insertions(+), 53 deletions(-)
 create mode 100644 kernel/sched/rq-offsets.c

diff --git a/Kbuild b/Kbuild
index f327ca86990c..13324b4bbe23 100644
--- a/Kbuild
+++ b/Kbuild
@@ -34,13 +34,24 @@ arch/$(SRCARCH)/kernel/asm-offsets.s: $(timeconst-file) $(bounds-file)
 $(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
 	$(call filechk,offsets,__ASM_OFFSETS_H__)
 
+# Generate rq-offsets.h
+
+rq-offsets-file := include/generated/rq-offsets.h
+
+targets += kernel/sched/rq-offsets.s
+
+kernel/sched/rq-offsets.s: $(offsets-file)
+
+$(rq-offsets-file): kernel/sched/rq-offsets.s FORCE
+	$(call filechk,offsets,__RQ_OFFSETS_H__)
+
 # Check for missing system calls
 
 quiet_cmd_syscalls = CALL $<
       cmd_syscalls = $(CONFIG_SHELL) $< $(CC) $(c_flags) $(missing_syscalls_flags)
 
 PHONY += missing-syscalls
-missing-syscalls: scripts/checksyscalls.sh $(offsets-file)
+missing-syscalls: scripts/checksyscalls.sh $(rq-offsets-file)
 	$(call cmd,syscalls)
 
 # Check the manual modification of atomic headers
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 1fad1c8a4c76..92237c319035 100644
---
a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -424,8 +424,6 @@ static inline void preempt_notifier_init(struct preempt= _notifier *notifier, * work-conserving schedulers. * */ -extern void migrate_disable(void); -extern void migrate_enable(void); =20 /** * preempt_disable_nested - Disable preemption inside a normally preempt d= isabled section @@ -471,7 +469,6 @@ static __always_inline void preempt_enable_nested(void) =20 DEFINE_LOCK_GUARD_0(preempt, preempt_disable(), preempt_enable()) DEFINE_LOCK_GUARD_0(preempt_notrace, preempt_disable_notrace(), preempt_en= able_notrace()) -DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable()) =20 #ifdef CONFIG_PREEMPT_DYNAMIC =20 diff --git a/include/linux/sched.h b/include/linux/sched.h index 644a01bdae70..2a1efccda2e2 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -49,6 +49,9 @@ #include #include #include +#ifndef COMPILE_OFFSETS +#include +#endif =20 /* task_struct member predeclarations (sorted alphabetically): */ struct audit_context; @@ -2317,4 +2320,115 @@ static __always_inline void alloc_tag_restore(struc= t alloc_tag *tag, struct allo #define alloc_tag_restore(_tag, _old) do {} while (0) #endif =20 +#ifndef MODULE +#ifndef COMPILE_OFFSETS + +extern void ___migrate_enable(void); + +struct rq; +DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues); + +/* + * The "struct rq" is not available here, so we can't access the + * "runqueues" with this_cpu_ptr(), as the compilation will fail in + * this_cpu_ptr() -> raw_cpu_ptr() -> __verify_pcpu_ptr(): + * typeof((ptr) + 0) + * + * So use arch_raw_cpu_ptr()/PERCPU_PTR() directly here. + */ +#ifdef CONFIG_SMP +#define this_rq_raw() arch_raw_cpu_ptr(&runqueues) +#else +#define this_rq_raw() PERCPU_PTR(&runqueues) +#endif +#define this_rq_pinned() (*(unsigned int *)((void *)this_rq_raw() + RQ_nr_= pinned)) + +static inline void __migrate_enable(void) +{ + struct task_struct *p =3D current; + +#ifdef CONFIG_DEBUG_PREEMPT + /* + * Check both overflow from migrate_disable() and superfluous + * migrate_enable(). + */ + if (WARN_ON_ONCE((s16)p->migration_disabled <=3D 0)) + return; +#endif + + if (p->migration_disabled > 1) { + p->migration_disabled--; + return; + } + + /* + * Ensure stop_task runs either before or after this, and that + * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule(). + */ + guard(preempt)(); + if (unlikely(p->cpus_ptr !=3D &p->cpus_mask)) + ___migrate_enable(); + /* + * Mustn't clear migration_disabled() until cpus_ptr points back at the + * regular cpus_mask, otherwise things that race (eg. + * select_fallback_rq) get confused. + */ + barrier(); + p->migration_disabled =3D 0; + this_rq_pinned()--; +} + +static inline void __migrate_disable(void) +{ + struct task_struct *p =3D current; + + if (p->migration_disabled) { +#ifdef CONFIG_DEBUG_PREEMPT + /* + *Warn about overflow half-way through the range. + */ + WARN_ON_ONCE((s16)p->migration_disabled < 0); +#endif + p->migration_disabled++; + return; + } + + guard(preempt)(); + this_rq_pinned()++; + p->migration_disabled =3D 1; +} +#else /* !COMPILE_OFFSETS */ +static inline void __migrate_disable(void) { } +static inline void __migrate_enable(void) { } +#endif /* !COMPILE_OFFSETS */ + +/* + * The variable "runqueues" is not visible in the kernel modules, and expo= rt + * it is not a good idea. As Peter Zijlstra advised, define and export + * migrate_enable/migrate_disable in kernel/sched/core.c too, and use + * them for the modules. 
The macro "INSTANTIATE_EXPORTED_MIGRATE_DISABLE" + * will be defined in kernel/sched/core.c. + */ +#ifndef INSTANTIATE_EXPORTED_MIGRATE_DISABLE +static inline void migrate_disable(void) +{ + __migrate_disable(); +} + +static inline void migrate_enable(void) +{ + __migrate_enable(); +} +#else /* INSTANTIATE_EXPORTED_MIGRATE_DISABLE */ +extern void migrate_disable(void); +extern void migrate_enable(void); +#endif /* INSTANTIATE_EXPORTED_MIGRATE_DISABLE */ + +#else /* MODULE */ +extern void migrate_disable(void); +extern void migrate_enable(void); +#endif /* MODULE */ + +DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable()) + #endif diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 9fb1f957a093..8340cecd1b35 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -23859,6 +23859,7 @@ int bpf_check_attach_target(struct bpf_verifier_log= *log, BTF_SET_START(btf_id_deny) BTF_ID_UNUSED #ifdef CONFIG_SMP +BTF_ID(func, ___migrate_enable) BTF_ID(func, migrate_disable) BTF_ID(func, migrate_enable) #endif diff --git a/kernel/sched/core.c b/kernel/sched/core.c index da2062de97a2..fa437bedf8a8 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -7,6 +7,8 @@ * Copyright (C) 1991-2002 Linus Torvalds * Copyright (C) 1998-2024 Ingo Molnar, Red Hat */ +#define INSTANTIATE_EXPORTED_MIGRATE_DISABLE +#include #include #include #include @@ -2381,28 +2383,7 @@ static void migrate_disable_switch(struct rq *rq, st= ruct task_struct *p) __do_set_cpus_allowed(p, &ac); } =20 -void migrate_disable(void) -{ - struct task_struct *p =3D current; - - if (p->migration_disabled) { -#ifdef CONFIG_DEBUG_PREEMPT - /* - *Warn about overflow half-way through the range. - */ - WARN_ON_ONCE((s16)p->migration_disabled < 0); -#endif - p->migration_disabled++; - return; - } - - guard(preempt)(); - this_rq()->nr_pinned++; - p->migration_disabled =3D 1; -} -EXPORT_SYMBOL_GPL(migrate_disable); - -void migrate_enable(void) +void ___migrate_enable(void) { struct task_struct *p =3D current; struct affinity_context ac =3D { @@ -2410,35 +2391,19 @@ void migrate_enable(void) .flags =3D SCA_MIGRATE_ENABLE, }; =20 -#ifdef CONFIG_DEBUG_PREEMPT - /* - * Check both overflow from migrate_disable() and superfluous - * migrate_enable(). - */ - if (WARN_ON_ONCE((s16)p->migration_disabled <=3D 0)) - return; -#endif + __set_cpus_allowed_ptr(p, &ac); +} +EXPORT_SYMBOL_GPL(___migrate_enable); =20 - if (p->migration_disabled > 1) { - p->migration_disabled--; - return; - } +void migrate_disable(void) +{ + __migrate_disable(); +} +EXPORT_SYMBOL_GPL(migrate_disable); =20 - /* - * Ensure stop_task runs either before or after this, and that - * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule(). - */ - guard(preempt)(); - if (p->cpus_ptr !=3D &p->cpus_mask) - __set_cpus_allowed_ptr(p, &ac); - /* - * Mustn't clear migration_disabled() until cpus_ptr points back at the - * regular cpus_mask, otherwise things that race (eg. - * select_fallback_rq) get confused. 
-	 */
-	barrier();
-	p->migration_disabled = 0;
-	this_rq()->nr_pinned--;
+void migrate_enable(void)
+{
+	__migrate_enable();
 }
 EXPORT_SYMBOL_GPL(migrate_enable);
 
diff --git a/kernel/sched/rq-offsets.c b/kernel/sched/rq-offsets.c
new file mode 100644
index 000000000000..a23747bbe25b
--- /dev/null
+++ b/kernel/sched/rq-offsets.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+#define COMPILE_OFFSETS
+#include
+#include
+#include "sched.h"
+
+int main(void)
+{
+	DEFINE(RQ_nr_pinned, offsetof(struct rq, nr_pinned));
+
+	return 0;
+}
-- 
2.51.0
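Illustrative aside (not part of the patch): with the change above, built-in
code that includes linux/sched.h gets the inline fast path, while modules
keep calling the exported migrate_disable()/migrate_enable(). A minimal
sketch of a built-in caller:

  #include <linux/sched.h>

  static void pin_to_this_cpu_example(void)
  {
  	migrate_disable();	/* inline: bumps this_rq_pinned(), no call */
  	/* ... per-CPU work that must not migrate ... */
  	migrate_enable();	/* inline: calls ___migrate_enable() only when
  				 * cpus_ptr was changed while pinned */
  }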
From: Menglong Dong
To: peterz@infradead.org, ast@kernel.org
Cc: mingo@redhat.com, paulmck@kernel.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@fomichev.me, haoluo@google.com, jolsa@kernel.org, tzimmermann@suse.de, simona.vetter@ffwll.ch, jani.nikula@intel.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v5 4/4] sched: fix some typos in include/linux/preempt.h
Date: Wed, 17 Sep 2025 14:09:16 +0800
Message-ID: <20250917060916.462278-5-dongml2@chinatelecom.cn>
In-Reply-To: <20250917060916.462278-1-dongml2@chinatelecom.cn>
References: <20250917060916.462278-1-dongml2@chinatelecom.cn>

There are some typos in the migrate-related comments in
include/linux/preempt.h:

  elegible -> eligible
  it's -> its
  migirate_disable -> migrate_disable
  abritrary -> arbitrary

Just fix them.

Signed-off-by: Menglong Dong
---
 include/linux/preempt.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 92237c319035..102202185d7a 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -372,7 +372,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
 /*
  * Migrate-Disable and why it is undesired.
  *
- * When a preempted task becomes elegible to run under the ideal model (IOW it
+ * When a preempted task becomes eligible to run under the ideal model (IOW it
 * becomes one of the M highest priority tasks), it might still have to wait
 * for the preemptee's migrate_disable() section to complete. Thereby suffering
 * a reduction in bandwidth in the exact duration of the migrate_disable()
@@ -387,7 +387,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
 * - a lower priority tasks; which under preempt_disable() could've instantly
 * migrated away when another CPU becomes available, is now constrained
 * by the ability to push the higher priority task away, which might itself be
- * in a migrate_disable() section, reducing it's available bandwidth.
+ * in a migrate_disable() section, reducing its available bandwidth.
 *
 * IOW it trades latency / moves the interference term, but it stays in the
 * system, and as long as it remains unbounded, the system is not fully
@@ -399,7 +399,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
 * PREEMPT_RT breaks a number of assumptions traditionally held. By forcing a
 * number of primitives into becoming preemptible, they would also allow
 * migration. This turns out to break a bunch of per-cpu usage. To this end,
- * all these primitives employ migirate_disable() to restore this implicit
+ * all these primitives employ migrate_disable() to restore this implicit
 * assumption.
 *
 * This is a 'temporary' work-around at best. The correct solution is getting
@@ -407,7 +407,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
 * per-cpu locking or short preempt-disable regions.
 *
 * The end goal must be to get rid of migrate_disable(), alternatively we need
- * a schedulability theory that does not depend on abritrary migration.
+ * a schedulability theory that does not depend on arbitrary migration.
 *
 *
 * Notes on the implementation.
-- 
2.51.0