From: Jiri Olsa <jolsa@kernel.org>
To: Oleg Nesterov, Peter Zijlstra, Andrii Nakryiko
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org, x86@kernel.org,
    Song Liu, Yonghong Song, John Fastabend, Hao Luo,
    Steven Rostedt, Masami Hiramatsu, Alan Maguire,
    David Laight, Thomas Weißschuh, Ingo Molnar
Subject: [PATCH perf/core 04/22] uprobes: Add uprobe_write function
Date: Mon, 21 Apr 2025 23:44:04 +0200
Message-ID: <20250421214423.393661-5-jolsa@kernel.org>
In-Reply-To: <20250421214423.393661-1-jolsa@kernel.org>
References: <20250421214423.393661-1-jolsa@kernel.org>

Add a uprobe_write function that does what uprobe_write_opcode did so
far, but lets the caller pass a verify callback that checks the memory
location before the opcode is written. Following changes will use it to
implement specific checking logic for instruction updates.

uprobe_write_opcode now calls uprobe_write with verify_opcode as the
verify callback.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/uprobes.h |  5 +++++
 kernel/events/uprobes.c | 14 ++++++++++----
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index d3496f7bc583..09fe93816173 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -187,6 +187,9 @@ struct uprobes_state {
 	struct xol_area		*xol_area;
 };
 
+typedef int (*uprobe_write_verify_t)(struct page *page, unsigned long vaddr,
+				     uprobe_opcode_t *opcode);
+
 extern void __init uprobes_init(void);
 extern int set_swbp(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
 extern int set_orig_insn(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
@@ -195,6 +198,8 @@ extern bool is_trap_insn(uprobe_opcode_t *insn);
 extern unsigned long uprobe_get_swbp_addr(struct pt_regs *regs);
 extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
 extern int uprobe_write_opcode(struct vm_area_struct *vma, unsigned long vaddr, uprobe_opcode_t opcode);
+extern int uprobe_write(struct vm_area_struct *vma, const unsigned long opcode_vaddr,
+			uprobe_opcode_t opcode, uprobe_write_verify_t verify);
 extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
 extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 8b31340ed1c3..3c5dc86bfe65 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -399,7 +399,7 @@ static bool orig_page_is_identical(struct vm_area_struct *vma,
 	return identical;
 }
 
-static int __uprobe_write_opcode(struct vm_area_struct *vma,
+static int __uprobe_write(struct vm_area_struct *vma,
 		struct folio_walk *fw, struct folio *folio,
 		unsigned long opcode_vaddr, uprobe_opcode_t opcode)
 {
@@ -488,6 +488,12 @@ static int __uprobe_write_opcode(struct vm_area_struct *vma,
  */
 int uprobe_write_opcode(struct vm_area_struct *vma, const unsigned long opcode_vaddr,
 		uprobe_opcode_t opcode)
+{
+	return uprobe_write(vma, opcode_vaddr, opcode, verify_opcode);
+}
+
+int uprobe_write(struct vm_area_struct *vma, const unsigned long opcode_vaddr,
+		uprobe_opcode_t opcode, uprobe_write_verify_t verify)
 {
 	const unsigned long vaddr = opcode_vaddr & PAGE_MASK;
 	struct mm_struct *mm = vma->vm_mm;
@@ -508,7 +514,7 @@ int uprobe_write_opcode(struct vm_area_struct *vma, const unsigned long opcode_v
 	 * page that we can safely modify. Use FOLL_WRITE to trigger a write
 	 * fault if required. When unregistering, we might be lucky and the
	 * anon page is already gone. So defer write faults until really
-	 * required. Use FOLL_SPLIT_PMD, because __uprobe_write_opcode()
+	 * required. Use FOLL_SPLIT_PMD, because __uprobe_write()
 	 * cannot deal with PMDs yet.
 	 */
 	if (is_register)
@@ -520,7 +526,7 @@ int uprobe_write_opcode(struct vm_area_struct *vma, const unsigned long opcode_v
 		goto out;
 	folio = page_folio(page);
 
-	ret = verify_opcode(page, opcode_vaddr, &opcode);
+	ret = verify(page, opcode_vaddr, &opcode);
 	if (ret <= 0) {
 		folio_put(folio);
 		goto out;
 	}
@@ -548,7 +554,7 @@ int uprobe_write_opcode(struct vm_area_struct *vma, const unsigned long opcode_v
 	/* Walk the page tables again, to perform the actual update. */
 	if (folio_walk_start(&fw, vma, vaddr, 0)) {
 		if (fw.page == page)
-			ret = __uprobe_write_opcode(vma, &fw, folio, opcode_vaddr, opcode);
+			ret = __uprobe_write(vma, &fw, folio, opcode_vaddr, opcode);
 		folio_walk_end(&fw, vma);
 	}
 
-- 
2.49.0