From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    David Hildenbrand, Andrew Morton, Andrii Nakryiko, Matthew Wilcox,
    Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
    "Liang, Kan", Tong Tiangen
Subject: [PATCH v3 1/3] kernel/events/uprobes: pass VMA instead of MM to remove_breakpoint()
Date: Fri, 21 Mar 2025 12:37:11 +0100
Message-ID: <20250321113713.204682-2-david@redhat.com>
In-Reply-To: <20250321113713.204682-1-david@redhat.com>
References: <20250321113713.204682-1-david@redhat.com>

... and remove the "MM" argument from install_breakpoint(), because it
can easily be derived from the VMA.
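As a self-contained illustration of the refactoring pattern (a sketch with
stand-in types, not the kernel code itself): any function that is handed a
VMA can derive the owning mm via vma->vm_mm, so passing both invites the
two arguments to disagree:

	/* Stand-in types, for illustration only. */
	struct mm_struct { unsigned long flags; };
	struct vm_area_struct { struct mm_struct *vm_mm; };

	/* Before: the caller must pass an (mm, vma) pair that agrees. */
	static int touch_old(struct mm_struct *mm, struct vm_area_struct *vma)
	{
		mm->flags |= 1UL;	/* silently wrong if mm != vma->vm_mm */
		return 0;
	}

	/* After: only the VMA is passed; the mm cannot go out of sync. */
	static int touch_new(struct vm_area_struct *vma)
	{
		struct mm_struct *mm = vma->vm_mm;

		mm->flags |= 1UL;
		return 0;
	}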
Acked-by: Oleg Nesterov
Signed-off-by: David Hildenbrand
Acked-by: Peter Zijlstra (Intel)
---
 kernel/events/uprobes.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 5d6f3d9d29f44..259038d099819 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1134,10 +1134,10 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
 	return ret;
 }
 
-static int
-install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
-			struct vm_area_struct *vma, unsigned long vaddr)
+static int install_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
+			      unsigned long vaddr)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	bool first_uprobe;
 	int ret;
 
@@ -1162,9 +1162,11 @@ install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
 	return ret;
 }
 
-static int
-remove_breakpoint(struct uprobe *uprobe, struct mm_struct *mm, unsigned long vaddr)
+static int remove_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
+			     unsigned long vaddr)
 {
+	struct mm_struct *mm = vma->vm_mm;
+
 	set_bit(MMF_RECALC_UPROBES, &mm->flags);
 	return set_orig_insn(&uprobe->arch, mm, vaddr);
 }
@@ -1296,10 +1298,10 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 		if (is_register) {
 			/* consult only the "caller", new consumer. */
 			if (consumer_filter(new, mm))
-				err = install_breakpoint(uprobe, mm, vma, info->vaddr);
+				err = install_breakpoint(uprobe, vma, info->vaddr);
 		} else if (test_bit(MMF_HAS_UPROBES, &mm->flags)) {
 			if (!filter_chain(uprobe, mm))
-				err |= remove_breakpoint(uprobe, mm, info->vaddr);
+				err |= remove_breakpoint(uprobe, vma, info->vaddr);
 		}
 
 unlock:
@@ -1472,7 +1474,7 @@ static int unapply_uprobe(struct uprobe *uprobe, struct mm_struct *mm)
 			continue;
 
 		vaddr = offset_to_vaddr(vma, uprobe->offset);
-		err |= remove_breakpoint(uprobe, mm, vaddr);
+		err |= remove_breakpoint(uprobe, vma, vaddr);
 	}
 	mmap_read_unlock(mm);
 
@@ -1610,7 +1612,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
 		if (!fatal_signal_pending(current) &&
 		    filter_chain(uprobe, vma->vm_mm)) {
 			unsigned long vaddr = offset_to_vaddr(vma, uprobe->offset);
-			install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
+			install_breakpoint(uprobe, vma, vaddr);
 		}
 		put_uprobe(uprobe);
 	}
-- 
2.48.1

From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    David Hildenbrand, Andrew Morton, Andrii Nakryiko, Matthew Wilcox,
    Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
    "Liang, Kan", Tong Tiangen
Subject: [PATCH v3 2/3] kernel/events/uprobes: pass VMA to set_swbp(), set_orig_insn() and uprobe_write_opcode()
Date: Fri, 21 Mar 2025 12:37:12 +0100
Message-ID: <20250321113713.204682-3-david@redhat.com>
In-Reply-To: <20250321113713.204682-1-david@redhat.com>
References: <20250321113713.204682-1-david@redhat.com>

We already have the VMA, no need to look it up using
get_user_page_vma_remote(). We can now switch to
get_user_pages_remote().
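For reference, the shape of the two GUP calls (lifted from the uprobes.c
hunk below; a fragment for orientation, not standalone code):

	/* Before: GUP variant that also returns the VMA, which we already have. */
	old_page = get_user_page_vma_remote(mm, vaddr, gup_flags, &vma);
	if (IS_ERR(old_page))
		return PTR_ERR(old_page);

	/* After: plain single-page GUP; the VMA comes from the caller. */
	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, NULL);
	if (ret != 1)
		return ret;	/* 0 or a negative errno */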
Acked-by: Oleg Nesterov
Signed-off-by: David Hildenbrand
Acked-by: Peter Zijlstra (Intel)
---
 arch/arm/probes/uprobes/core.c |  4 ++--
 include/linux/uprobes.h        |  6 +++---
 kernel/events/uprobes.c        | 33 +++++++++++++++++----------------
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/arch/arm/probes/uprobes/core.c b/arch/arm/probes/uprobes/core.c
index f5f790c6e5f89..885e0c5e8c20d 100644
--- a/arch/arm/probes/uprobes/core.c
+++ b/arch/arm/probes/uprobes/core.c
@@ -26,10 +26,10 @@ bool is_swbp_insn(uprobe_opcode_t *insn)
 		(UPROBE_SWBP_ARM_INSN & 0x0fffffff);
 }
 
-int set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm,
+int set_swbp(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 	     unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr,
+	return uprobe_write_opcode(auprobe, vma, vaddr,
 		   __opcode_to_mem_arm(auprobe->bpinsn));
 }
 
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index b1df7d792fa16..288a42cc40baa 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -185,13 +185,13 @@ struct uprobes_state {
 };
 
 extern void __init uprobes_init(void);
-extern int set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
-extern int set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
+extern int set_swbp(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
+extern int set_orig_insn(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
 extern bool is_swbp_insn(uprobe_opcode_t *insn);
 extern bool is_trap_insn(uprobe_opcode_t *insn);
 extern unsigned long uprobe_get_swbp_addr(struct pt_regs *regs);
 extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
-extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr, uprobe_opcode_t);
+extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma, unsigned long vaddr, uprobe_opcode_t);
 extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
 extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 259038d099819..ac17c16f65d63 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -474,19 +474,19 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
  *
  * uprobe_write_opcode - write the opcode at a given virtual address.
  * @auprobe: arch specific probepoint information.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @vaddr: the virtual address to store the opcode.
  * @opcode: opcode to be written at @vaddr.
  *
  * Called with mm->mmap_lock held for read or write.
  * Return 0 (success) or a negative errno.
  */
-int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
-			unsigned long vaddr, uprobe_opcode_t opcode)
+int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
+			unsigned long vaddr, uprobe_opcode_t opcode)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	struct uprobe *uprobe;
 	struct page *old_page, *new_page;
-	struct vm_area_struct *vma;
 	int ret, is_register, ref_ctr_updated = 0;
 	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
@@ -498,9 +498,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (is_register)
 		gup_flags |= FOLL_SPLIT_PMD;
 	/* Read the page with vaddr into memory */
-	old_page = get_user_page_vma_remote(mm, vaddr, gup_flags, &vma);
-	if (IS_ERR(old_page))
-		return PTR_ERR(old_page);
+	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, NULL);
+	if (ret != 1)
+		return ret;
 
 	ret = verify_opcode(old_page, vaddr, &opcode);
 	if (ret <= 0)
@@ -590,30 +590,31 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 /**
  * set_swbp - store breakpoint at a given address.
  * @auprobe: arch specific probepoint information.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @vaddr: the virtual address to insert the opcode.
 *
 * For mm @mm, store the breakpoint instruction at @vaddr.
 * Return 0 (success) or a negative errno.
 */
-int __weak set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
+int __weak set_swbp(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
+		unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr, UPROBE_SWBP_INSN);
+	return uprobe_write_opcode(auprobe, vma, vaddr, UPROBE_SWBP_INSN);
 }
 
 /**
  * set_orig_insn - Restore the original instruction.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @auprobe: arch specific probepoint information.
  * @vaddr: the virtual address to insert the opcode.
  *
  * For mm @mm, restore the original opcode (opcode) at @vaddr.
  * Return 0 (success) or a negative errno.
  */
-int __weak
-set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
+int __weak set_orig_insn(struct arch_uprobe *auprobe,
+		struct vm_area_struct *vma, unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr,
+	return uprobe_write_opcode(auprobe, vma, vaddr,
 			*(uprobe_opcode_t *)&auprobe->insn);
 }
 
@@ -1153,7 +1154,7 @@ static int install_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
 	if (first_uprobe)
 		set_bit(MMF_HAS_UPROBES, &mm->flags);
 
-	ret = set_swbp(&uprobe->arch, mm, vaddr);
+	ret = set_swbp(&uprobe->arch, vma, vaddr);
 	if (!ret)
 		clear_bit(MMF_RECALC_UPROBES, &mm->flags);
 	else if (first_uprobe)
@@ -1168,7 +1169,7 @@ static int remove_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 
 	set_bit(MMF_RECALC_UPROBES, &mm->flags);
-	return set_orig_insn(&uprobe->arch, mm, vaddr);
+	return set_orig_insn(&uprobe->arch, vma, vaddr);
 }
 
 struct map_info {
-- 
2.48.1

From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    David Hildenbrand, Andrew Morton, Andrii Nakryiko, Matthew Wilcox,
    Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
    "Liang, Kan", Tong Tiangen
Subject: [PATCH v3 3/3] kernel/events/uprobes: uprobe_write_opcode() rewrite
Date: Fri, 21 Mar 2025 12:37:13 +0100
Message-ID: <20250321113713.204682-4-david@redhat.com>
In-Reply-To: <20250321113713.204682-1-david@redhat.com>
References: <20250321113713.204682-1-david@redhat.com>

uprobe_write_opcode() does some pretty low-level things that really, it
shouldn't be doing: for example, manually breaking COW by allocating
anonymous folios and replacing mapped pages.

Further, it does seem to do some shaky things: for example, writing to
possibly COW-shared anonymous pages or zapping anonymous pages that
might be pinned. We're also not taking care of uffd, uffd-wp,
softdirty ... although these are rather corner cases here.

Let's just get it right like ordinary ptrace writes would.

Let's rewrite the code, leaving COW-breaking to core-MM, triggered by
FOLL_FORCE|FOLL_WRITE (note that the code was already using FOLL_FORCE).
We'll use GUP to look up and fault in the page and break COW if
required. Then, we'll walk the page tables using a folio_walk to perform
our page modification atomically, by temporarily unmapping the PTE and
flushing the TLB. Likely, we could avoid the temporary unmap in case we
can just atomically write the instruction, but that will be a separate
project.

Unfortunately, we still have to implement the zapping logic manually,
because we only want to zap in specific circumstances (e.g., page
content identical).

Note that we can now handle large folios (compound pages) and the shared
zeropage just fine, so drop these checks.
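The heart of that approach, excerpted from __uprobe_write_opcode() in the
diff below (a fragment for orientation, not standalone code):

	/* Temporarily unmap the page and flush the TLB, so nobody can ... */
	flush_cache_page(vma, vaddr, pte_pfn(fw->pte));
	fw->pte = ptep_clear_flush(vma, vaddr, fw->ptep);
	/* ... observe the instruction bytes while we rewrite them. */
	copy_to_page(fw->page, opcode_vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);

	/* If the page cannot be zapped: publish the write, then remap dirty. */
	smp_wmb();
	set_pte_at(vma->vm_mm, vaddr, fw->ptep, pte_mkdirty(fw->pte));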
Acked-by: Oleg Nesterov
Signed-off-by: David Hildenbrand
Acked-by: Peter Zijlstra (Intel)
---
 kernel/events/uprobes.c | 312 ++++++++++++++++++++--------------------
 1 file changed, 158 insertions(+), 154 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index ac17c16f65d63..f098e8a4f24ee 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -29,6 +29,7 @@
 #include <linux/workqueue.h>
 #include <linux/srcu.h>
 #include <linux/oom.h>          /* check_stable_address_space */
+#include <linux/pagewalk.h>
 
 #include <linux/uprobes.h>
 
@@ -151,91 +152,6 @@ static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
 	return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
 }
 
-/**
- * __replace_page - replace page in vma by new page.
- * based on replace_page in mm/ksm.c
- *
- * @vma:      vma that holds the pte pointing to page
- * @addr:     address the old @page is mapped at
- * @old_page: the page we are replacing by new_page
- * @new_page: the modified page we replace page by
- *
- * If @new_page is NULL, only unmap @old_page.
- *
- * Returns 0 on success, negative error code otherwise.
- */
-static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
-			struct page *old_page, struct page *new_page)
-{
-	struct folio *old_folio = page_folio(old_page);
-	struct folio *new_folio;
-	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
-	int err;
-	struct mmu_notifier_range range;
-	pte_t pte;
-
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
-				addr + PAGE_SIZE);
-
-	if (new_page) {
-		new_folio = page_folio(new_page);
-		err = mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL);
-		if (err)
-			return err;
-	}
-
-	/* For folio_free_swap() below */
-	folio_lock(old_folio);
-
-	mmu_notifier_invalidate_range_start(&range);
-	err = -EAGAIN;
-	if (!page_vma_mapped_walk(&pvmw))
-		goto unlock;
-	VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
-	pte = ptep_get(pvmw.pte);
-
-	/*
-	 * Handle PFN swap PTES, such as device-exclusive ones, that actually
-	 * map pages: simply trigger GUP again to fix it up.
-	 */
-	if (unlikely(!pte_present(pte))) {
-		page_vma_mapped_walk_done(&pvmw);
-		goto unlock;
-	}
-
-	if (new_page) {
-		folio_get(new_folio);
-		folio_add_new_anon_rmap(new_folio, vma, addr, RMAP_EXCLUSIVE);
-		folio_add_lru_vma(new_folio, vma);
-	} else
-		/* no new page, just dec_mm_counter for old_page */
-		dec_mm_counter(mm, MM_ANONPAGES);
-
-	if (!folio_test_anon(old_folio)) {
-		dec_mm_counter(mm, mm_counter_file(old_folio));
-		inc_mm_counter(mm, MM_ANONPAGES);
-	}
-
-	flush_cache_page(vma, addr, pte_pfn(pte));
-	ptep_clear_flush(vma, addr, pvmw.pte);
-	if (new_page)
-		set_pte_at(mm, addr, pvmw.pte,
-			   mk_pte(new_page, vma->vm_page_prot));
-
-	folio_remove_rmap_pte(old_folio, old_page, vma);
-	if (!folio_mapped(old_folio))
-		folio_free_swap(old_folio);
-	page_vma_mapped_walk_done(&pvmw);
-	folio_put(old_folio);
-
-	err = 0;
- unlock:
-	mmu_notifier_invalidate_range_end(&range);
-	folio_unlock(old_folio);
-	return err;
-}
-
 /**
  * is_swbp_insn - check if instruction is breakpoint instruction.
  * @insn: instruction to be checked.
@@ -463,6 +379,95 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
 	return ret;
 }
 
+static bool orig_page_is_identical(struct vm_area_struct *vma,
+		unsigned long vaddr, struct page *page, bool *pmd_mappable)
+{
+	const pgoff_t index = vaddr_to_offset(vma, vaddr) >> PAGE_SHIFT;
+	struct folio *orig_folio = filemap_get_folio(vma->vm_file->f_mapping,
+						     index);
+	struct page *orig_page;
+	bool identical;
+
+	if (IS_ERR(orig_folio))
+		return false;
+	orig_page = folio_file_page(orig_folio, index);
+
+	*pmd_mappable = folio_test_pmd_mappable(orig_folio);
+	identical = folio_test_uptodate(orig_folio) &&
+		    pages_identical(page, orig_page);
+	folio_put(orig_folio);
+	return identical;
+}
+
+static int __uprobe_write_opcode(struct vm_area_struct *vma,
+		struct folio_walk *fw, struct folio *folio,
+		unsigned long opcode_vaddr, uprobe_opcode_t opcode)
+{
+	const unsigned long vaddr = opcode_vaddr & PAGE_MASK;
+	const bool is_register = !!is_swbp_insn(&opcode);
+	bool pmd_mappable;
+
+	/* For now, we'll only handle PTE-mapped folios. */
+	if (fw->level != FW_LEVEL_PTE)
+		return -EFAULT;
+
+	/*
+	 * See can_follow_write_pte(): we'd actually prefer a writable PTE here,
+	 * but the VMA might not be writable.
+	 */
+	if (!pte_write(fw->pte)) {
+		if (!PageAnonExclusive(fw->page))
+			return -EFAULT;
+		if (unlikely(userfaultfd_pte_wp(vma, fw->pte)))
+			return -EFAULT;
+		/* SOFTDIRTY is handled via pte_mkdirty() below. */
+	}
+
+	/*
+	 * We'll temporarily unmap the page and flush the TLB, such that we can
+	 * modify the page atomically.
+	 */
+	flush_cache_page(vma, vaddr, pte_pfn(fw->pte));
+	fw->pte = ptep_clear_flush(vma, vaddr, fw->ptep);
+	copy_to_page(fw->page, opcode_vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
+
+	/*
+	 * When unregistering, we may only zap a PTE if uffd is disabled and
+	 * there are no unexpected folio references ...
+	 */
+	if (is_register || userfaultfd_missing(vma) ||
+	    (folio_ref_count(folio) != folio_mapcount(folio) + 1 +
+	     folio_test_swapcache(folio) * folio_nr_pages(folio)))
+		goto remap;
+
+	/*
+	 * ... and the mapped page is identical to the original page that
+	 * would get faulted in on next access.
+	 */
+	if (!orig_page_is_identical(vma, vaddr, fw->page, &pmd_mappable))
+		goto remap;
+
+	dec_mm_counter(vma->vm_mm, MM_ANONPAGES);
+	folio_remove_rmap_pte(folio, fw->page, vma);
+	if (!folio_mapped(folio) && folio_test_swapcache(folio) &&
+	    folio_trylock(folio)) {
+		folio_free_swap(folio);
+		folio_unlock(folio);
+	}
+	folio_put(folio);
+
+	return pmd_mappable;
+remap:
+	/*
+	 * Make sure that our copy_to_page() changes become visible before the
+	 * set_pte_at() write.
+	 */
+	smp_wmb();
+	/* We modified the page. Make sure to mark the PTE dirty. */
+	set_pte_at(vma->vm_mm, vaddr, fw->ptep, pte_mkdirty(fw->pte));
+	return 0;
+}
+
 /*
  * NOTE:
  * Expect the breakpoint instruction to be the smallest size instruction for
@@ -475,116 +480,115 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
  * uprobe_write_opcode - write the opcode at a given virtual address.
  * @auprobe: arch specific probepoint information.
  * @vma: the probed virtual memory area.
- * @vaddr: the virtual address to store the opcode.
- * @opcode: opcode to be written at @vaddr.
+ * @opcode_vaddr: the virtual address to store the opcode.
+ * @opcode: opcode to be written at @opcode_vaddr.
  *
  * Called with mm->mmap_lock held for read or write.
  * Return 0 (success) or a negative errno.
  */
 int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
-		unsigned long vaddr, uprobe_opcode_t opcode)
+		const unsigned long opcode_vaddr, uprobe_opcode_t opcode)
 {
+	const unsigned long vaddr = opcode_vaddr & PAGE_MASK;
 	struct mm_struct *mm = vma->vm_mm;
 	struct uprobe *uprobe;
-	struct page *old_page, *new_page;
 	int ret, is_register, ref_ctr_updated = 0;
-	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
+	struct mmu_notifier_range range;
+	struct folio_walk fw;
+	struct folio *folio;
+	struct page *page;
 
 	is_register = is_swbp_insn(&opcode);
 	uprobe = container_of(auprobe, struct uprobe, arch);
 
-retry:
+	if (WARN_ON_ONCE(!is_cow_mapping(vma->vm_flags)))
+		return -EINVAL;
+
+	/*
+	 * When registering, we have to break COW to get an exclusive anonymous
+	 * page that we can safely modify. Use FOLL_WRITE to trigger a write
+	 * fault if required. When unregistering, we might be lucky and the
+	 * anon page is already gone. So defer write faults until really
+	 * required. Use FOLL_SPLIT_PMD, because __uprobe_write_opcode()
+	 * cannot deal with PMDs yet.
+	 */
 	if (is_register)
-		gup_flags |= FOLL_SPLIT_PMD;
-	/* Read the page with vaddr into memory */
-	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, NULL);
-	if (ret != 1)
-		return ret;
+		gup_flags |= FOLL_WRITE | FOLL_SPLIT_PMD;
 
-	ret = verify_opcode(old_page, vaddr, &opcode);
+retry:
+	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &page, NULL);
 	if (ret <= 0)
-		goto put_old;
-
-	if (is_zero_page(old_page)) {
-		ret = -EINVAL;
-		goto put_old;
-	}
+		goto out;
+	folio = page_folio(page);
 
-	if (WARN(!is_register && PageCompound(old_page),
-		 "uprobe unregister should never work on compound page\n")) {
-		ret = -EINVAL;
-		goto put_old;
+	ret = verify_opcode(page, opcode_vaddr, &opcode);
+	if (ret <= 0) {
+		folio_put(folio);
+		goto out;
 	}
 
 	/* We are going to replace instruction, update ref_ctr. */
 	if (!ref_ctr_updated && uprobe->ref_ctr_offset) {
 		ret = update_ref_ctr(uprobe, mm, is_register ? 1 : -1);
-		if (ret)
-			goto put_old;
+		if (ret) {
+			folio_put(folio);
+			goto out;
+		}
 
 		ref_ctr_updated = 1;
 	}
 
 	ret = 0;
-	if (!is_register && !PageAnon(old_page))
-		goto put_old;
-
-	ret = anon_vma_prepare(vma);
-	if (ret)
-		goto put_old;
-
-	ret = -ENOMEM;
-	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
-	if (!new_page)
-		goto put_old;
-
-	__SetPageUptodate(new_page);
-	copy_highpage(new_page, old_page);
-	copy_to_page(new_page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
+	if (unlikely(!folio_test_anon(folio))) {
+		VM_WARN_ON_ONCE(is_register);
+		folio_put(folio);
+		goto out;
+	}
 
 	if (!is_register) {
-		struct page *orig_page;
-		pgoff_t index;
-
-		VM_BUG_ON_PAGE(!PageAnon(old_page), old_page);
-
-		index = vaddr_to_offset(vma, vaddr & PAGE_MASK) >> PAGE_SHIFT;
-		orig_page = find_get_page(vma->vm_file->f_inode->i_mapping,
-					  index);
-
-		if (orig_page) {
-			if (PageUptodate(orig_page) &&
-			    pages_identical(new_page, orig_page)) {
-				/* let go new_page */
-				put_page(new_page);
-				new_page = NULL;
-
-				if (PageCompound(orig_page))
-					orig_page_huge = true;
-			}
-			put_page(orig_page);
-		}
+		/*
+		 * In the common case, we'll be able to zap the page when
+		 * unregistering. So trigger MMU notifiers now, as we won't
+		 * be able to do it under PTL.
+		 */
+		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
+					vaddr, vaddr + PAGE_SIZE);
+		mmu_notifier_invalidate_range_start(&range);
+	}
+
+	ret = -EAGAIN;
+	/* Walk the page tables again, to perform the actual update. */
+	if (folio_walk_start(&fw, vma, vaddr, 0)) {
+		if (fw.page == page)
+			ret = __uprobe_write_opcode(vma, &fw, folio, opcode_vaddr, opcode);
+		folio_walk_end(&fw, vma);
 	}
 
-	ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page);
-	if (new_page)
-		put_page(new_page);
-put_old:
-	put_page(old_page);
+	if (!is_register)
+		mmu_notifier_invalidate_range_end(&range);
 
-	if (unlikely(ret == -EAGAIN))
+	folio_put(folio);
+	switch (ret) {
+	case -EFAULT:
+		gup_flags |= FOLL_WRITE | FOLL_SPLIT_PMD;
+		fallthrough;
+	case -EAGAIN:
 		goto retry;
+	default:
+		break;
+	}
 
+out:
 	/* Revert back reference counter if instruction update failed. */
-	if (ret && is_register && ref_ctr_updated)
+	if (ret < 0 && is_register && ref_ctr_updated)
 		update_ref_ctr(uprobe, mm, -1);
 
 	/* try collapse pmd for compound page */
-	if (!ret && orig_page_huge)
+	if (ret > 0)
 		collapse_pte_mapped_thp(mm, vaddr, false);
 
-	return ret;
+	return ret < 0 ? ret : 0;
 }
 
 /**
-- 
2.48.1