From: Xi Ruoyao
To: Huacai Chen, WANG Xuerui, Hengqi Chen
Cc: Mingcong Bai, Zixing Liu, Xi Ruoyao, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Tiezhu Yang, George Guo, Chenghao Duan, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v3] LoongArch: Fix calling smp_processor_id() in preemptible code
Date: Tue, 3 Mar 2026 16:43:14 +0800
Message-ID: <20260303084314.628311-1-xry111@xry111.site>

Fix the warning:

  BUG: using smp_processor_id() in preemptible [00000000] code: systemd/1
  caller is larch_insn_text_copy+0x40/0xf0

Simply changing it to raw_smp_processor_id() is not enough: if preemption and a CPU hotplug event happen after raw_smp_processor_id() but before stop_machine(), the CPU on which raw_smp_processor_id() ran may be offline by the time stop_machine() runs, and then no CPU will actually run copy_to_kernel_nofault() in text_copy_cb(). Thus guard the larch_insn_text_copy() calls with cpus_read_lock() and change stop_machine() to stop_machine_cpuslocked() to prevent this.

I've considered moving the locks inside larch_insn_text_copy(), but doing so is not an easy change.
In bpf_arch_text_poke() the memcpy() call obviously must be guarded by text_mutex, so the acquisition of text_mutex has to stay outside larch_insn_text_copy(). But throughout the kernel, mutexes are always acquired after cpus_read_lock(), so we cannot put cpus_read_lock() inside larch_insn_text_copy() while leaving the text_mutex acquisition outside (or we'd risk a deadlock due to an inconsistent lock acquisition order). So let's fix the bug first and leave the possible refactor as future work.

Fixes: 9fbd18cf4c69 ("LoongArch: BPF: Add dynamic code modification support")
Signed-off-by: Xi Ruoyao
---
Changes since v2:

- Include the annotation of v1 in the commit message.

Changes from v1 to v2:

- Add lockdep_assert_cpus_held() with a comment to prevent people from
  calling larch_insn_text_copy() without acquiring the cpus lock in the
  future.

[v1]: https://lore.kernel.org/20260225141404.178823-2-xry111@xry111.site
[v2]: https://lore.kernel.org/20260227080351.663693-1-xry111@xry111.site

 arch/loongarch/kernel/inst.c | 10 ++++++++--
 arch/loongarch/net/bpf_jit.c |  6 ++++++
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/loongarch/kernel/inst.c b/arch/loongarch/kernel/inst.c
index bf037f0c6b26..7545ae3c796e 100644
--- a/arch/loongarch/kernel/inst.c
+++ b/arch/loongarch/kernel/inst.c
@@ -263,14 +263,20 @@ int larch_insn_text_copy(void *dst, void *src, size_t len)
 		.dst = dst,
 		.src = src,
 		.len = len,
-		.cpu = smp_processor_id(),
+		.cpu = raw_smp_processor_id(),
 	};
 
+	/*
+	 * Ensure copy.cpu won't be hot removed before stop_machine. If
+	 * it's removed nobody will really update the text.
+	 */
+	lockdep_assert_cpus_held();
+
 	start = round_down((size_t)dst, PAGE_SIZE);
 	end = round_up((size_t)dst + len, PAGE_SIZE);
 
 	set_memory_rw(start, (end - start) / PAGE_SIZE);
-	ret = stop_machine(text_copy_cb, &copy, cpu_online_mask);
+	ret = stop_machine_cpuslocked(text_copy_cb, &copy, cpu_online_mask);
 	set_memory_rox(start, (end - start) / PAGE_SIZE);
 
 	return ret;
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index 3bd89f55960d..e8e0ad34928c 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1379,9 +1379,11 @@ void *bpf_arch_text_copy(void *dst, void *src, size_t len)
 {
 	int ret;
 
+	cpus_read_lock();
 	mutex_lock(&text_mutex);
 	ret = larch_insn_text_copy(dst, src, len);
 	mutex_unlock(&text_mutex);
+	cpus_read_unlock();
 
 	return ret ? ERR_PTR(-EINVAL) : dst;
 }
@@ -1429,10 +1431,12 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
 	if (ret)
 		return ret;
 
+	cpus_read_lock();
 	mutex_lock(&text_mutex);
 	if (memcmp(ip, new_insns, LOONGARCH_LONG_JUMP_NBYTES))
 		ret = larch_insn_text_copy(ip, new_insns, LOONGARCH_LONG_JUMP_NBYTES);
 	mutex_unlock(&text_mutex);
+	cpus_read_unlock();
 
 	return ret;
 }
@@ -1450,10 +1454,12 @@ int bpf_arch_text_invalidate(void *dst, size_t len)
 	for (i = 0; i < (len / sizeof(u32)); i++)
 		inst[i] = INSN_BREAK;
 
+	cpus_read_lock();
 	mutex_lock(&text_mutex);
 	if (larch_insn_text_copy(dst, inst, len))
 		ret = -EINVAL;
 	mutex_unlock(&text_mutex);
+	cpus_read_unlock();
 
 	kvfree(inst);
 
-- 
2.53.0