From nobody Thu Apr 9 08:09:43 2026
From: Randy Dunlap <rdunlap@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: Randy Dunlap, Thomas Gleixner, Peter Zijlstra, Andrew Morton
Subject: [PATCH v2] smp: add missing kernel-doc comments
Date: Mon, 9 Mar 2026 23:17:26 -0700
Message-ID: <20260310061726.1153764-1-rdunlap@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add missing kernel-doc comments and rearrange the order of others to
prevent all kernel-doc warnings.
- add function Returns: sections or format existing comments as kernel-doc
- add missing function parameter comments
- use "/**" for smp_call_function_any() and on_each_cpu_cond_mask()
- correct the commented function name for on_each_cpu_cond_mask()
- use correct format for function short descriptions
- add all kernel-doc comments for smp_call_on_cpu()
- remove kernel-doc comments for raw_smp_processor_id() since there is
  no prototype for it here (other than !SMP)
- in smp.h, rearrange some lines so that the kernel-doc comments for
  smp_processor_id() are immediately before the macro (to prevent
  kernel-doc warnings)
- remove "Returns" from smp_call_function() since it doesn't return a value

Signed-off-by: Randy Dunlap
---
v2: clean up more kernel-doc comments based on viewing the html output

Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Andrew Morton

 include/linux/smp.h |   38 +++++++++++++++++++++-----------------
 kernel/smp.c        |   38 +++++++++++++++++++++++++------------
 2 files changed, 46 insertions(+), 30 deletions(-)

--- linux-next-20260309.orig/kernel/smp.c
+++ linux-next-20260309/kernel/smp.c
@@ -215,7 +215,7 @@ static atomic_t n_csd_lock_stuck;
 /**
  * csd_lock_is_stuck - Has a CSD-lock acquisition been stuck too long?
  *
- * Returns @true if a CSD-lock acquisition is stuck and has been stuck
+ * Returns: @true if a CSD-lock acquisition is stuck and has been stuck
  * long enough for a "non-responsive CSD lock" message to be printed.
  */
 bool csd_lock_is_stuck(void)
@@ -625,13 +625,14 @@ void flush_smp_call_function_queue(void)
 	local_irq_restore(flags);
 }
 
-/*
+/**
  * smp_call_function_single - Run a function on a specific CPU
+ * @cpu: Specific target CPU for this function.
  * @func: The function to run. This must be fast and non-blocking.
  * @info: An arbitrary pointer to pass to the function.
  * @wait: If true, wait until function has completed on other CPUs.
  *
- * Returns 0 on success, else a negative status code.
+ * Returns: %0 on success, else a negative status code.
  */
 int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 			     int wait)
@@ -738,18 +739,18 @@ out:
 }
 EXPORT_SYMBOL_GPL(smp_call_function_single_async);
 
-/*
+/**
  * smp_call_function_any - Run a function on any of the given cpus
  * @mask: The mask of cpus it can run on.
  * @func: The function to run. This must be fast and non-blocking.
  * @info: An arbitrary pointer to pass to the function.
  * @wait: If true, wait until function has completed.
  *
- * Returns 0 on success, else a negative status code (if no cpus were online).
- *
  * Selection preference:
  *	1) current cpu if in @mask
  *	2) nearest cpu in @mask, based on NUMA topology
+ *
+ * Returns: %0 on success, else a negative status code (if no cpus were online).
  */
 int smp_call_function_any(const struct cpumask *mask,
 			  smp_call_func_t func, void *info, int wait)
@@ -880,7 +881,7 @@ static void smp_call_function_many_cond(
 }
 
 /**
- * smp_call_function_many(): Run a function on a set of CPUs.
+ * smp_call_function_many() - Run a function on a set of CPUs.
  * @mask: The set of cpus to run on (only runs on online subset).
  * @func: The function to run. This must be fast and non-blocking.
  * @info: An arbitrary pointer to pass to the function.
@@ -902,14 +903,12 @@ void smp_call_function_many(const struct
 EXPORT_SYMBOL(smp_call_function_many);
 
 /**
- * smp_call_function(): Run a function on all other CPUs.
+ * smp_call_function() - Run a function on all other CPUs.
  * @func: The function to run. This must be fast and non-blocking.
  * @info: An arbitrary pointer to pass to the function.
  * @wait: If true, wait (atomically) until function has completed
  *        on other CPUs.
  *
- * Returns 0.
- *
  * If @wait is true, then returns once @func has returned; otherwise
  * it returns just before the target cpu calls @func.
  *
@@ -1009,8 +1008,8 @@ void __init smp_init(void)
 	smp_cpus_done(setup_max_cpus);
 }
 
-/*
- * on_each_cpu_cond(): Call a function on each processor for which
+/**
+ * on_each_cpu_cond_mask() - Call a function on each processor for which
  * the supplied function cond_func returns true, optionally waiting
  * for all the required CPUs to finish. This may include the local
  * processor.
@@ -1024,6 +1023,7 @@ void __init smp_init(void)
  * @info: An arbitrary pointer to pass to both functions.
  * @wait: If true, wait (atomically) until function has
  *        completed on other CPUs.
+ * @mask: The set of cpus to run on (only runs on online subset).
  *
  * Preemption is disabled to protect against CPUs going offline but not online.
  * CPUs going online during the call will not be seen or sent an IPI.
@@ -1095,7 +1095,7 @@ EXPORT_SYMBOL_GPL(wake_up_all_idle_cpus)
  * scheduled, for any of the CPUs in the @mask. It does not guarantee
  * correctness as it only provides a racy snapshot.
  *
- * Returns true if there is a pending IPI scheduled and false otherwise.
+ * Returns: true if there is a pending IPI scheduled and false otherwise.
  */
 bool cpus_peek_for_pending_ipi(const struct cpumask *mask)
 {
@@ -1145,6 +1145,18 @@ static void smp_call_on_cpu_callback(str
 	complete(&sscs->done);
 }
 
+/**
+ * smp_call_on_cpu() - Call a function on a specific CPU and wait
+ *		       for it to return.
+ * @cpu: The CPU to run on.
+ * @func: The function to run
+ * @par: An arbitrary pointer parameter for @func.
+ * @phys: If @true, force to run on physical @cpu. See
+ *	  &struct smp_call_on_cpu_struct for more info.
+ *
+ * Returns: %-ENXIO if the @cpu is invalid; otherwise the return value
+ *	    from @func.
+ */
 int smp_call_on_cpu(unsigned int cpu, int (*func)(void *), void *par, bool phys)
 {
 	struct smp_call_on_cpu_struct sscs = {
--- linux-next-20260309.orig/include/linux/smp.h
+++ linux-next-20260309/include/linux/smp.h
@@ -73,7 +73,7 @@ static inline void on_each_cpu(smp_call_
 }
 
 /**
- * on_each_cpu_mask(): Run a function on processors specified by
+ * on_each_cpu_mask() - Run a function on processors specified by
  * cpumask, which may include the local processor.
  * @mask: The set of cpus to run on (only runs on online subset).
  * @func: The function to run. This must be fast and non-blocking.
@@ -239,13 +239,30 @@ static inline int get_boot_cpu_id(void)
 
 #endif /* !SMP */
 
-/**
+/*
  * raw_smp_processor_id() - get the current (unstable) CPU id
  *
- * For then you know what you are doing and need an unstable
+ * raw_smp_processor_id() is arch-specific/arch-defined and
+ * may be a macro or a static inline function.
+ *
+ * For when you know what you are doing and need an unstable
  * CPU id.
  */
 
+/*
+ * Allow the architecture to differentiate between a stable and unstable read.
+ * For example, x86 uses an IRQ-safe asm-volatile read for the unstable but a
+ * regular asm read for the stable.
+ */
+#ifndef __smp_processor_id
+#define __smp_processor_id() raw_smp_processor_id()
+#endif
+
+#ifdef CONFIG_DEBUG_PREEMPT
+  extern unsigned int debug_smp_processor_id(void);
+# define smp_processor_id() debug_smp_processor_id()
+
+#else
 /**
  * smp_processor_id() - get the current (stable) CPU id
  *
@@ -258,23 +275,10 @@ static inline int get_boot_cpu_id(void)
  *  - preemption is disabled;
  *  - the task is CPU affine.
  *
- * When CONFIG_DEBUG_PREEMPT; we verify these assumption and WARN
+ * When CONFIG_DEBUG_PREEMPT=y, we verify these assumptions and WARN
  * when smp_processor_id() is used when the CPU id is not stable.
  */
 
-/*
- * Allow the architecture to differentiate between a stable and unstable read.
- * For example, x86 uses an IRQ-safe asm-volatile read for the unstable but a
- * regular asm read for the stable.
- */
-#ifndef __smp_processor_id
-#define __smp_processor_id() raw_smp_processor_id()
-#endif
-
-#ifdef CONFIG_DEBUG_PREEMPT
-  extern unsigned int debug_smp_processor_id(void);
-# define smp_processor_id() debug_smp_processor_id()
-#else
 # define smp_processor_id() __smp_processor_id()
 #endif