From nobody Thu Apr 2 14:06:40 2026
Date: Fri, 27 Mar 2026 13:54:52 -0700
In-Reply-To: <20260327205457.604224-1-surenb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260327205457.604224-1-surenb@google.com>
X-Mailer: git-send-email 2.53.0.1018.g2bb0e51243-goog
Message-ID: <20260327205457.604224-2-surenb@google.com>
Subject: [PATCH v6 1/6] mm/vma: cleanup error handling path in vma_expand()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, david@kernel.org, ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net, ying.huang@linux.alibaba.com, apopple@nvidia.com, ljs@kernel.org, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev, vbabka@suse.cz, jannh@google.com, rppt@kernel.org, mhocko@suse.com, pfalcato@suse.de, kees@kernel.org, maddy@linux.ibm.com, npiggin@gmail.com, mpe@ellerman.id.au, chleroy@kernel.org, borntraeger@linux.ibm.com, frankja@linux.ibm.com, imbrenda@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gerald.schaefer@linux.ibm.com, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, surenb@google.com, Lorenzo Stoakes
Content-Type: text/plain; charset="utf-8"

vma_expand() error handling is a bit confusing with "if (ret) return ret;"
mixed with "if (!ret && ...) ret = ...;". Simplify the code to check for
errors and return immediately after an operation that might fail. This also
makes later changes to this function more readable.

Change the variable name for storing the error code from "ret" to "err".

No functional change intended.

Suggested-by: Jann Horn
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Liam R. Howlett
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Barry Song
---
 mm/vma.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index a43f3c5d4b3d..ba78ab1f397a 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1170,7 +1170,7 @@ int vma_expand(struct vma_merge_struct *vmg)
 	vma_flags_t sticky_flags = vma_flags_and_mask(&vmg->vma_flags,
						      VMA_STICKY_FLAGS);
 	vma_flags_t target_sticky;
-	int ret = 0;
+	int err = 0;
 
 	mmap_assert_write_locked(vmg->mm);
 	vma_start_write(target);
@@ -1200,12 +1200,16 @@ int vma_expand(struct vma_merge_struct *vmg)
	 * Note that, by convention, callers ignore OOM for this case, so
	 * we don't need to account for vmg->give_up_on_mm here.
	 */
-	if (remove_next)
-		ret = dup_anon_vma(target, next, &anon_dup);
-	if (!ret && vmg->copied_from)
-		ret = dup_anon_vma(target, vmg->copied_from, &anon_dup);
-	if (ret)
-		return ret;
+	if (remove_next) {
+		err = dup_anon_vma(target, next, &anon_dup);
+		if (err)
+			return err;
+	}
+	if (vmg->copied_from) {
+		err = dup_anon_vma(target, vmg->copied_from, &anon_dup);
+		if (err)
+			return err;
+	}
 
 	if (remove_next) {
		vma_flags_t next_sticky;
-- 
2.53.0.1018.g2bb0e51243-goog
From nobody Thu Apr 2 14:06:40 2026
Date: Fri, 27 Mar 2026 13:54:53 -0700
In-Reply-To: <20260327205457.604224-1-surenb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260327205457.604224-1-surenb@google.com>
X-Mailer: git-send-email 2.53.0.1018.g2bb0e51243-goog
Message-ID: <20260327205457.604224-3-surenb@google.com>
Subject: [PATCH v6 2/6] mm: use vma_start_write_killable() in mm syscalls
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, david@kernel.org, ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net, ying.huang@linux.alibaba.com, apopple@nvidia.com, ljs@kernel.org, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev, vbabka@suse.cz, jannh@google.com, rppt@kernel.org, mhocko@suse.com, pfalcato@suse.de, kees@kernel.org, maddy@linux.ibm.com, npiggin@gmail.com, mpe@ellerman.id.au, chleroy@kernel.org, borntraeger@linux.ibm.com, frankja@linux.ibm.com, imbrenda@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gerald.schaefer@linux.ibm.com, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, surenb@google.com
Content-Type: text/plain; charset="utf-8"

Replace vma_start_write() with vma_start_write_killable() in syscalls,
improving reaction time to a kill signal.

In a number of places we now lock the VMA earlier than before, to avoid
doing work and undoing it later if a fatal signal is pending. This is
safe because the moved calls happen within sections where we already
hold the mmap_write_lock, so the moves do not change the locking order
relative to other kernel locks.

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
---
 mm/madvise.c   | 13 ++++++++++---
 mm/memory.c    |  2 ++
 mm/mempolicy.c | 11 +++++++++--
 mm/mlock.c     | 30 ++++++++++++++++++++++++------
 mm/mprotect.c  | 25 +++++++++++++++++--------
 mm/mremap.c    |  8 +++++---
 mm/mseal.c     | 24 +++++++++++++++++++-----
 7 files changed, 86 insertions(+), 27 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 69708e953cf5..f2c7b0512cdf 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -172,10 +172,17 @@ static int madvise_update_vma(vm_flags_t new_flags,
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
 
-	madv_behavior->vma = vma;
+	/*
+	 * If a new vma was created during vma_modify_XXX, the resulting
+	 * vma is already locked. Skip re-locking new vma in this case.
+	 */
+	if (vma == madv_behavior->vma) {
+		if (vma_start_write_killable(vma))
+			return -EINTR;
+	} else {
+		madv_behavior->vma = vma;
+	}
 
-	/* vm_flags is protected by the mmap_lock held in write mode. */
-	vma_start_write(vma);
 	vma->flags = new_vma_flags;
 	if (set_new_anon_name)
 		return replace_anon_vma_name(vma, anon_name);
diff --git a/mm/memory.c b/mm/memory.c
index e44469f9cf65..9f99ec634831 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -366,6 +366,8 @@ void free_pgd_range(struct mmu_gather *tlb,
  * page tables that should be removed. This can differ from the vma mappings on
  * some archs that may have mappings that need to be removed outside the vmas.
  * Note that the prev->vm_end and next->vm_start are often used.
+ * We don't use vma_start_write_killable() because page tables should be freed
+ * even if the task is being killed.
  *
  * The vma_end differs from the pg_end when a dup_mmap() failed and the tree has
  * unrelated data to the mm_struct being torn down.
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index fd08771e2057..c38a90487531 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1784,7 +1784,8 @@ SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, le
 		return -EINVAL;
 	if (end == start)
 		return 0;
-	mmap_write_lock(mm);
+	if (mmap_write_lock_killable(mm))
+		return -EINTR;
 	prev = vma_prev(&vmi);
 	for_each_vma_range(vmi, vma, end) {
 		/*
@@ -1801,13 +1802,19 @@ SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, le
 			err = -EOPNOTSUPP;
 			break;
 		}
+		/*
+		 * Lock the VMA early to avoid extra work if fatal signal
+		 * is pending.
+		 */
+		err = vma_start_write_killable(vma);
+		if (err)
+			break;
 		new = mpol_dup(old);
 		if (IS_ERR(new)) {
 			err = PTR_ERR(new);
 			break;
 		}
 
-		vma_start_write(vma);
 		new->home_node = home_node;
 		err = mbind_range(&vmi, vma, &prev, start, end, new);
 		mpol_put(new);
diff --git a/mm/mlock.c b/mm/mlock.c
index 8c227fefa2df..2ed454db7cf7 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -419,8 +419,10 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
  *
  * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED;
  * called for munlock() and munlockall(), to clear VM_LOCKED from @vma.
+ *
+ * Return: 0 on success, -EINTR if fatal signal is pending.
  */
-static void mlock_vma_pages_range(struct vm_area_struct *vma,
+static int mlock_vma_pages_range(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end, vma_flags_t *new_vma_flags)
 {
@@ -442,7 +444,9 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
	 */
 	if (vma_flags_test(new_vma_flags, VMA_LOCKED_BIT))
 		vma_flags_set(new_vma_flags, VMA_IO_BIT);
-	vma_start_write(vma);
+	if (vma_start_write_killable(vma))
+		return -EINTR;
+
 	vma_flags_reset_once(vma, new_vma_flags);
 
 	lru_add_drain();
@@ -453,6 +457,7 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
 		vma_flags_clear(new_vma_flags, VMA_IO_BIT);
 		vma_flags_reset_once(vma, new_vma_flags);
 	}
+	return 0;
 }
 
 /*
@@ -506,11 +511,15 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
	 */
 	if (vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT) &&
	    vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT)) {
+		ret = vma_start_write_killable(vma);
+		if (ret)
+			goto out;
 		/* mm->locked_vm is fine as nr_pages == 0 */
 		/* No work to do, and mlocking twice would be wrong */
-		vma_start_write(vma);
 		vma->flags = new_vma_flags;
 	} else {
-		mlock_vma_pages_range(vma, start, end, &new_vma_flags);
+		ret = mlock_vma_pages_range(vma, start, end, &new_vma_flags);
+		if (ret)
+			mm->locked_vm -= nr_pages;
 	}
 out:
 	*prev = vma;
@@ -739,9 +748,18 @@ static int apply_mlockall_flags(int flags)
 
 		error = mlock_fixup(&vmi, vma, &prev, vma->vm_start,
				    vma->vm_end, newflags);
-		/* Ignore errors, but prev needs fixing up. */
-		if (error)
+		if (error) {
+			/*
+			 * If we failed due to a pending fatal signal, return
+			 * now. If we locked the vma before signal arrived, it
+			 * will be unlocked when we drop mmap_write_lock.
+			 */
+			if (fatal_signal_pending(current))
+				return -EINTR;
+
+			/* Ignore errors, but prev needs fixing up. */
 			prev = vma;
+		}
 		cond_resched();
 	}
 out:
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 110d47a36d4b..d6227877465f 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -700,6 +700,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	const vma_flags_t old_vma_flags = READ_ONCE(vma->flags);
 	vma_flags_t new_vma_flags = legacy_to_vma_flags(newflags);
 	long nrpages = (end - start) >> PAGE_SHIFT;
+	struct vm_area_struct *new_vma;
 	unsigned int mm_cp_flags = 0;
 	unsigned long charged = 0;
 	int error;
@@ -756,19 +757,27 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 		vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT);
 	}
 
-	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &new_vma_flags);
-	if (IS_ERR(vma)) {
-		error = PTR_ERR(vma);
+	new_vma = vma_modify_flags(vmi, *pprev, vma, start, end,
+				   &new_vma_flags);
+	if (IS_ERR(new_vma)) {
+		error = PTR_ERR(new_vma);
 		goto fail;
 	}
 
-	*pprev = vma;
-
 	/*
-	 * vm_flags and vm_page_prot are protected by the mmap_lock
-	 * held in write mode.
+	 * If a new vma was created during vma_modify_flags, the resulting
+	 * vma is already locked. Skip re-locking new vma in this case.
	 */
-	vma_start_write(vma);
+	if (new_vma == vma) {
+		error = vma_start_write_killable(vma);
+		if (error)
+			goto fail;
+	} else {
+		vma = new_vma;
+	}
+
+	*pprev = vma;
+
 	vma_flags_reset_once(vma, &new_vma_flags);
 	if (vma_wants_manual_pte_write_upgrade(vma))
 		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
diff --git a/mm/mremap.c b/mm/mremap.c
index e9c8b1d05832..0860102bddab 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -1348,6 +1348,11 @@ static unsigned long move_vma(struct vma_remap_struct *vrm)
 	if (err)
 		return err;
 
+	/* We don't want racing faults. */
+	err = vma_start_write_killable(vrm->vma);
+	if (err)
+		return err;
+
 	/*
	 * If accounted, determine the number of bytes the operation will
	 * charge.
@@ -1355,9 +1360,6 @@ static unsigned long move_vma(struct vma_remap_struct *vrm)
 	if (!vrm_calc_charge(vrm))
 		return -ENOMEM;
 
-	/* We don't want racing faults. */
-	vma_start_write(vrm->vma);
-
 	/* Perform copy step. */
 	err = copy_vma_and_data(vrm, &new_vma);
 	/*
diff --git a/mm/mseal.c b/mm/mseal.c
index 603df53ad267..1ea19fd3d384 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -70,14 +70,28 @@ static int mseal_apply(struct mm_struct *mm,
 
 		if (!vma_test(vma, VMA_SEALED_BIT)) {
 			vma_flags_t vma_flags = vma->flags;
+			struct vm_area_struct *new_vma;
 
 			vma_flags_set(&vma_flags, VMA_SEALED_BIT);
 
-			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
-					       curr_end, &vma_flags);
-			if (IS_ERR(vma))
-				return PTR_ERR(vma);
-			vma_start_write(vma);
+			new_vma = vma_modify_flags(&vmi, prev, vma, curr_start,
+						   curr_end, &vma_flags);
+			if (IS_ERR(new_vma))
+				return PTR_ERR(new_vma);
+
+			/*
+			 * If a new vma was created during vma_modify_flags,
+			 * the resulting vma is already locked.
+			 * Skip re-locking new vma in this case.
+			 */
+			if (new_vma == vma) {
+				int err = vma_start_write_killable(vma);
+				if (err)
+					return err;
+			} else {
+				vma = new_vma;
+			}
+
 			vma_set_flags(vma, VMA_SEALED_BIT);
 		}
 
-- 
2.53.0.1018.g2bb0e51243-goog
From nobody Thu Apr 2 14:06:40 2026
Date: Fri, 27 Mar 2026 13:54:54 -0700
In-Reply-To: <20260327205457.604224-1-surenb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260327205457.604224-1-surenb@google.com>
X-Mailer: git-send-email 2.53.0.1018.g2bb0e51243-goog
Message-ID: <20260327205457.604224-4-surenb@google.com>
Subject: [PATCH v6 3/6] mm/khugepaged: use vma_start_write_killable() in collapse_huge_page()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, david@kernel.org, ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net, ying.huang@linux.alibaba.com, apopple@nvidia.com, ljs@kernel.org, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev, vbabka@suse.cz, jannh@google.com, rppt@kernel.org, mhocko@suse.com, pfalcato@suse.de, kees@kernel.org, maddy@linux.ibm.com, npiggin@gmail.com, mpe@ellerman.id.au, chleroy@kernel.org, borntraeger@linux.ibm.com, frankja@linux.ibm.com, imbrenda@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gerald.schaefer@linux.ibm.com, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, surenb@google.com
Content-Type: text/plain; charset="utf-8"

Replace vma_start_write() with vma_start_write_killable() in
collapse_huge_page(), improving reaction time to a kill signal.

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 mm/khugepaged.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d06d84219e1b..a1825a4dec8b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1166,7 +1166,10 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
 	if (result != SCAN_SUCCEED)
 		goto out_up_write;
 	/* check if the pmd is still valid */
-	vma_start_write(vma);
+	if (vma_start_write_killable(vma)) {
+		result = SCAN_FAIL;
+		goto out_up_write;
+	}
 	result = check_pmd_still_valid(mm, address, pmd);
 	if (result != SCAN_SUCCEED)
 		goto out_up_write;
-- 
2.53.0.1018.g2bb0e51243-goog
From nobody Thu Apr 2 14:06:40 2026
Date: Fri, 27 Mar 2026 13:54:55 -0700
In-Reply-To: <20260327205457.604224-1-surenb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260327205457.604224-1-surenb@google.com>
X-Mailer: git-send-email 2.53.0.1018.g2bb0e51243-goog
Message-ID: <20260327205457.604224-5-surenb@google.com>
Subject: [PATCH v6 4/6] mm/vma: use vma_start_write_killable() in vma operations
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, david@kernel.org, ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net, ying.huang@linux.alibaba.com, apopple@nvidia.com, ljs@kernel.org, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev, vbabka@suse.cz, jannh@google.com, rppt@kernel.org, mhocko@suse.com, pfalcato@suse.de, kees@kernel.org, maddy@linux.ibm.com, npiggin@gmail.com, mpe@ellerman.id.au, chleroy@kernel.org, borntraeger@linux.ibm.com, frankja@linux.ibm.com, imbrenda@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gerald.schaefer@linux.ibm.com, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, surenb@google.com

Replace vma_start_write() with vma_start_write_killable() in the VMA
operations, improving reaction time to a pending kill signal.

To propagate errors from vma_merge_existing_range() and vma_expand(),
we fake an ENOMEM error when we fail due to a pending fatal signal.
This is a temporary workaround; fixing it properly requires some
refactoring and will be done separately.

In a number of places we now lock the VMA earlier than before, to avoid
doing work and then undoing it if a fatal signal is pending. This is
safe because these earlier lock acquisitions happen within sections
that already hold the mmap_write_lock, so they do not change the
locking order relative to other kernel locks.

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
---
 mm/vma.c      | 146 ++++++++++++++++++++++++++++++++++++++------------
 mm/vma_exec.c |   6 ++-
 2 files changed, 117 insertions(+), 35 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index ba78ab1f397a..cc382217f730 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -524,6 +524,21 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		new->vm_pgoff += ((addr - vma->vm_start) >> PAGE_SHIFT);
 	}
 
+	/*
+	 * Lock VMAs before cloning to avoid extra work if fatal signal
+	 * is pending.
+	 */
+	err = vma_start_write_killable(vma);
+	if (err)
+		goto out_free_vma;
+	/*
+	 * Locking a new detached VMA will always succeed but it's just a
+	 * detail of the current implementation, so handle it all the same.
+	 */
+	err = vma_start_write_killable(new);
+	if (err)
+		goto out_free_vma;
+
 	err = -ENOMEM;
 	vma_iter_config(vmi, new->vm_start, new->vm_end);
 	if (vma_iter_prealloc(vmi, new))
@@ -543,9 +558,6 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	if (new->vm_ops && new->vm_ops->open)
 		new->vm_ops->open(new);
 
-	vma_start_write(vma);
-	vma_start_write(new);
-
 	init_vma_prep(&vp, vma);
 	vp.insert = new;
 	vma_prepare(&vp);
@@ -900,12 +912,22 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
 	}
 
 	/* No matter what happens, we will be adjusting middle. */
-	vma_start_write(middle);
+	err = vma_start_write_killable(middle);
+	if (err) {
+		/* Ensure error propagates. */
+		vmg->give_up_on_oom = false;
+		goto abort;
+	}
 
 	if (merge_right) {
 		vma_flags_t next_sticky;
 
-		vma_start_write(next);
+		err = vma_start_write_killable(next);
+		if (err) {
+			/* Ensure error propagates. */
+			vmg->give_up_on_oom = false;
+			goto abort;
+		}
 		vmg->target = next;
 		next_sticky = vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS);
 		vma_flags_set_mask(&sticky_flags, next_sticky);
@@ -914,7 +936,12 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
 	if (merge_left) {
 		vma_flags_t prev_sticky;
 
-		vma_start_write(prev);
+		err = vma_start_write_killable(prev);
+		if (err) {
+			/* Ensure error propagates. */
+			vmg->give_up_on_oom = false;
+			goto abort;
+		}
 		vmg->target = prev;
 
 		prev_sticky = vma_flags_and_mask(&prev->flags, VMA_STICKY_FLAGS);
@@ -1170,10 +1197,18 @@ int vma_expand(struct vma_merge_struct *vmg)
 	vma_flags_t sticky_flags = vma_flags_and_mask(&vmg->vma_flags,
 						      VMA_STICKY_FLAGS);
 	vma_flags_t target_sticky;
-	int err = 0;
+	int err;
 
 	mmap_assert_write_locked(vmg->mm);
-	vma_start_write(target);
+	err = vma_start_write_killable(target);
+	if (err) {
+		/*
+		 * Override VMA_MERGE_NOMERGE to prevent callers from
+		 * falling back to a new VMA allocation.
+		 */
+		vmg->state = VMA_MERGE_ERROR_NOMEM;
+		return err;
+	}
 
 	target_sticky = vma_flags_and_mask(&target->flags, VMA_STICKY_FLAGS);
 
@@ -1201,6 +1236,19 @@ int vma_expand(struct vma_merge_struct *vmg)
 	 * we don't need to account for vmg->give_up_on_mm here.
 	 */
 	if (remove_next) {
+		/*
+		 * Lock the VMA early to avoid extra work if fatal signal
+		 * is pending.
+		 */
+		err = vma_start_write_killable(next);
+		if (err) {
+			/*
+			 * Override VMA_MERGE_NOMERGE to prevent callers from
+			 * falling back to a new VMA allocation.
+			 */
+			vmg->state = VMA_MERGE_ERROR_NOMEM;
+			return err;
+		}
 		err = dup_anon_vma(target, next, &anon_dup);
 		if (err)
 			return err;
@@ -1214,7 +1262,6 @@ int vma_expand(struct vma_merge_struct *vmg)
 	if (remove_next) {
 		vma_flags_t next_sticky;
 
-		vma_start_write(next);
 		vmg->__remove_next = true;
 
 		next_sticky = vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS);
@@ -1252,9 +1299,14 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	       unsigned long start, unsigned long end, pgoff_t pgoff)
 {
 	struct vma_prepare vp;
+	int err;
 
 	WARN_ON((vma->vm_start != start) && (vma->vm_end != end));
 
+	err = vma_start_write_killable(vma);
+	if (err)
+		return err;
+
 	if (vma->vm_start < start)
 		vma_iter_config(vmi, vma->vm_start, start);
 	else
@@ -1263,8 +1315,6 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	if (vma_iter_prealloc(vmi, NULL))
 		return -ENOMEM;
 
-	vma_start_write(vma);
-
 	init_vma_prep(&vp, vma);
 	vma_prepare(&vp);
 	vma_adjust_trans_huge(vma, start, end, NULL);
@@ -1453,7 +1503,9 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 			if (error)
 				goto end_split_failed;
 		}
-		vma_start_write(next);
+		error = vma_start_write_killable(next);
+		if (error)
+			goto munmap_gather_failed;
 		mas_set(mas_detach, vms->vma_count++);
 		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
 		if (error)
@@ -1848,12 +1900,16 @@ static void vma_link_file(struct vm_area_struct *vma, bool hold_rmap_lock)
 static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	VMA_ITERATOR(vmi, mm, 0);
+	int err;
+
+	err = vma_start_write_killable(vma);
+	if (err)
+		return err;
 
 	vma_iter_config(&vmi, vma->vm_start, vma->vm_end);
 	if (vma_iter_prealloc(&vmi, vma))
 		return -ENOMEM;
 
-	vma_start_write(vma);
 	vma_iter_store_new(&vmi, vma);
 	vma_link_file(vma, /* hold_rmap_lock= */false);
 	mm->map_count++;
@@ -2239,9 +2295,8 @@ int mm_take_all_locks(struct mm_struct *mm)
 	 * is reached.
 	 */
 	for_each_vma(vmi, vma) {
-		if (signal_pending(current))
+		if (signal_pending(current) || vma_start_write_killable(vma))
 			goto out_unlock;
-		vma_start_write(vma);
 	}
 
 	vma_iter_init(&vmi, mm, 0);
@@ -2540,8 +2595,8 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap,
 			  struct mmap_action *action)
 {
 	struct vma_iterator *vmi = map->vmi;
-	int error = 0;
 	struct vm_area_struct *vma;
+	int error;
 
 	/*
 	 * Determine the object being mapped and call the appropriate
@@ -2552,6 +2607,14 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap,
 	if (!vma)
 		return -ENOMEM;
 
+	/*
+	 * Lock the VMA early to avoid extra work if fatal signal
+	 * is pending.
+	 */
+	error = vma_start_write_killable(vma);
+	if (error)
+		goto free_vma;
+
 	vma_iter_config(vmi, map->addr, map->end);
 	vma_set_range(vma, map->addr, map->end, map->pgoff);
 	vma->flags = map->vma_flags;
@@ -2582,8 +2645,6 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap,
 	WARN_ON_ONCE(!arch_validate_flags(map->vm_flags));
 #endif
 
-	/* Lock the VMA since it is modified after insertion into VMA tree */
-	vma_start_write(vma);
 	vma_iter_store_new(vmi, vma);
 	map->mm->map_count++;
 	vma_link_file(vma, action->hide_from_rmap_until_complete);
@@ -2878,6 +2939,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		 unsigned long addr, unsigned long len, vma_flags_t vma_flags)
 {
 	struct mm_struct *mm = current->mm;
+	int err;
 
 	/*
 	 * Check against address space limits by the changed size
@@ -2910,24 +2972,33 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 		if (vma_merge_new_range(&vmg))
 			goto out;
-		else if (vmg_nomem(&vmg))
+		if (vmg_nomem(&vmg)) {
+			err = -ENOMEM;
 			goto unacct_fail;
+		}
 	}
 
 	if (vma)
 		vma_iter_next_range(vmi);
 	/* create a vma struct for an anonymous mapping */
 	vma = vm_area_alloc(mm);
-	if (!vma)
+	if (!vma) {
+		err = -ENOMEM;
 		goto unacct_fail;
+	}
 
 	vma_set_anonymous(vma);
 	vma_set_range(vma, addr, addr + len, addr >> PAGE_SHIFT);
 	vma->flags = vma_flags;
 	vma->vm_page_prot = vm_get_page_prot(vma_flags_to_legacy(vma_flags));
-	vma_start_write(vma);
-	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
+	if (vma_start_write_killable(vma)) {
+		err = -EINTR;
+		goto vma_lock_fail;
+	}
+	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL)) {
+		err = -ENOMEM;
 		goto mas_store_fail;
+	}
 
 	mm->map_count++;
 	validate_mm(mm);
@@ -2942,10 +3013,11 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	return 0;
 
 mas_store_fail:
+vma_lock_fail:
 	vm_area_free(vma);
 unacct_fail:
 	vm_unacct_memory(len >> PAGE_SHIFT);
-	return -ENOMEM;
+	return err;
 }
 
 /**
@@ -3112,8 +3184,8 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 	struct mm_struct *mm = vma->vm_mm;
 	struct vm_area_struct *next;
 	unsigned long gap_addr;
-	int error = 0;
 	VMA_ITERATOR(vmi, mm, vma->vm_start);
+	int error;
 
 	if (!vma_test(vma, VMA_GROWSUP_BIT))
 		return -EFAULT;
@@ -3149,12 +3221,14 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 
 	/* We must make sure the anon_vma is allocated. */
 	if (unlikely(anon_vma_prepare(vma))) {
-		vma_iter_free(&vmi);
-		return -ENOMEM;
+		error = -ENOMEM;
+		goto vma_prep_fail;
 	}
 
 	/* Lock the VMA before expanding to prevent concurrent page faults */
-	vma_start_write(vma);
+	error = vma_start_write_killable(vma);
+	if (error)
+		goto vma_lock_fail;
 	/* We update the anon VMA tree. */
 	anon_vma_lock_write(vma->anon_vma);
 
@@ -3183,8 +3257,10 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 		}
 	}
 	anon_vma_unlock_write(vma->anon_vma);
-	vma_iter_free(&vmi);
 	validate_mm(mm);
+vma_lock_fail:
+vma_prep_fail:
+	vma_iter_free(&vmi);
 	return error;
 }
 #endif /* CONFIG_STACK_GROWSUP */
@@ -3197,8 +3273,8 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct vm_area_struct *prev;
-	int error = 0;
 	VMA_ITERATOR(vmi, mm, vma->vm_start);
+	int error;
 
 	if (!vma_test(vma, VMA_GROWSDOWN_BIT))
 		return -EFAULT;
@@ -3228,12 +3304,14 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 
 	/* We must make sure the anon_vma is allocated. */
 	if (unlikely(anon_vma_prepare(vma))) {
-		vma_iter_free(&vmi);
-		return -ENOMEM;
+		error = -ENOMEM;
+		goto vma_prep_fail;
 	}
 
 	/* Lock the VMA before expanding to prevent concurrent page faults */
-	vma_start_write(vma);
+	error = vma_start_write_killable(vma);
+	if (error)
+		goto vma_lock_fail;
 	/* We update the anon VMA tree. */
 	anon_vma_lock_write(vma->anon_vma);
 
@@ -3263,8 +3341,10 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 		}
 	}
 	anon_vma_unlock_write(vma->anon_vma);
-	vma_iter_free(&vmi);
 	validate_mm(mm);
+vma_lock_fail:
+vma_prep_fail:
+	vma_iter_free(&vmi);
 	return error;
 }
 
diff --git a/mm/vma_exec.c b/mm/vma_exec.c
index 5cee8b7efa0f..8ddcc791d828 100644
--- a/mm/vma_exec.c
+++ b/mm/vma_exec.c
@@ -41,6 +41,7 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
 	struct vm_area_struct *next;
 	struct mmu_gather tlb;
 	PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length);
+	int err;
 
 	BUG_ON(new_start > new_end);
 
@@ -56,8 +57,9 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
 	 * cover the whole range: [new_start, old_end)
 	 */
 	vmg.target = vma;
-	if (vma_expand(&vmg))
-		return -ENOMEM;
+	err = vma_expand(&vmg);
+	if (err)
+		return err;
 
 	/*
	 * move the page tables downwards, on failure we rely on
-- 
2.53.0.1018.g2bb0e51243-goog
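[Editorial aside: the error-propagation scheme above — a killable lock failure in vma_expand() surfaced to callers as an ENOMEM-style merge state so they do not fall back to allocating a fresh VMA — can be sketched as a userspace analogue. All names below are illustrative stand-ins, not the kernel API:]

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stands in for fatal_signal_pending(current). */
static bool fatal_signal_pending_flag;

/*
 * Analogue of vma_start_write_killable(): instead of waiting
 * unconditionally for the lock, fail with -EINTR when a fatal
 * signal is pending.
 */
static int start_write_killable(int *lock_held)
{
	if (fatal_signal_pending_flag)
		return -EINTR;
	*lock_held = 1;
	return 0;
}

/*
 * Analogue of the patched vma_expand(): a lock failure is recorded
 * as an ENOMEM-style state (the VMA_MERGE_ERROR_NOMEM analogue)
 * before the error is returned, so callers see a "merge failed,
 * do not retry with a new allocation" outcome.
 */
static int expand(int *lock_held, int *state)
{
	int err = start_write_killable(lock_held);

	if (err) {
		*state = -ENOMEM;	/* fake ENOMEM, as the commit message says */
		return err;
	}
	return 0;
}
```

This is only a model of the control flow; the real code additionally has to undo partially completed work (see the `vma_lock_fail:` labels in the diff).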
From nobody Thu Apr 2 14:06:40 2026
Date: Fri, 27 Mar 2026 13:54:56 -0700
In-Reply-To: <20260327205457.604224-1-surenb@google.com>
References: <20260327205457.604224-1-surenb@google.com>
Message-ID: <20260327205457.604224-6-surenb@google.com>
Subject: [PATCH v6 5/6] mm: use vma_start_write_killable() in process_vma_walk_lock()
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, david@kernel.org, ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net, ying.huang@linux.alibaba.com, apopple@nvidia.com, ljs@kernel.org, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev, vbabka@suse.cz, jannh@google.com, rppt@kernel.org, mhocko@suse.com, pfalcato@suse.de, kees@kernel.org, maddy@linux.ibm.com, npiggin@gmail.com, mpe@ellerman.id.au, chleroy@kernel.org, borntraeger@linux.ibm.com, frankja@linux.ibm.com, imbrenda@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gerald.schaefer@linux.ibm.com, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, surenb@google.com

Replace vma_start_write() with vma_start_write_killable() when
process_vma_walk_lock() is used with the PGWALK_WRLOCK option. Adjust
its direct and indirect users to check for a possible error and handle
it, making sure EINTR is propagated rather than ignored. When
queue_pages_range() fails, check whether it failed due to a fatal
signal or for some other reason, and return the appropriate error.

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
---
 fs/proc/task_mmu.c | 12 ++++++------
 mm/mempolicy.c     | 10 +++++++++-
 mm/pagewalk.c      | 22 +++++++++++++++-------
 3 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e091931d7ca1..33e5094a7842 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1774,15 +1774,15 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 	struct vm_area_struct *vma;
 	enum clear_refs_types type;
 	int itype;
-	int rv;
+	int err;
 
 	if (count > sizeof(buffer) - 1)
 		count = sizeof(buffer) - 1;
 	if (copy_from_user(buffer, buf, count))
 		return -EFAULT;
-	rv = kstrtoint(strstrip(buffer), 10, &itype);
-	if (rv < 0)
-		return rv;
+	err = kstrtoint(strstrip(buffer), 10, &itype);
+	if (err)
+		return err;
 	type = (enum clear_refs_types)itype;
 	if (type < CLEAR_REFS_ALL || type >= CLEAR_REFS_LAST)
 		return -EINVAL;
@@ -1824,7 +1824,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 				0, mm, 0, -1UL);
 			mmu_notifier_invalidate_range_start(&range);
 		}
-		walk_page_range(mm, 0, -1, &clear_refs_walk_ops, &cp);
+		err = walk_page_range(mm, 0, -1, &clear_refs_walk_ops, &cp);
 		if (type == CLEAR_REFS_SOFT_DIRTY) {
 			mmu_notifier_invalidate_range_end(&range);
 			flush_tlb_mm(mm);
@@ -1837,7 +1837,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 	}
 	put_task_struct(task);
 
-	return count;
+	return err ? : count;
 }
 
 const struct file_operations proc_clear_refs_operations = {
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index c38a90487531..51f298cfc33b 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -969,6 +969,7 @@ static const struct mm_walk_ops queue_pages_lock_vma_walk_ops = {
 * (a hugetlbfs page or a transparent huge page being counted as 1).
 * -EIO - a misplaced page found, when MPOL_MF_STRICT specified without MOVEs.
 * -EFAULT - a hole in the memory range, when MPOL_MF_DISCONTIG_OK unspecified.
+ * -EINTR - walk got terminated due to pending fatal signal.
 */
 static long
 queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
@@ -1545,7 +1546,14 @@ static long do_mbind(unsigned long start, unsigned long len,
 			  flags | MPOL_MF_INVERT | MPOL_MF_WRLOCK, &pagelist);
 
 	if (nr_failed < 0) {
-		err = nr_failed;
+		/*
+		 * queue_pages_range() might override the original error with -EFAULT.
+		 * Confirm that fatal signals are still treated correctly.
+		 */
+		if (fatal_signal_pending(current))
+			err = -EINTR;
+		else
+			err = nr_failed;
 		nr_failed = 0;
 	} else {
 		vma_iter_init(&vmi, mm, start);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 3ae2586ff45b..eca7bc711617 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -443,14 +443,13 @@ static inline void process_mm_walk_lock(struct mm_struct *mm,
 		mmap_assert_write_locked(mm);
 }
 
-static inline void process_vma_walk_lock(struct vm_area_struct *vma,
-					 enum page_walk_lock walk_lock)
+static int process_vma_walk_lock(struct vm_area_struct *vma,
+				 enum page_walk_lock walk_lock)
 {
 #ifdef CONFIG_PER_VMA_LOCK
 	switch (walk_lock) {
 	case PGWALK_WRLOCK:
-		vma_start_write(vma);
-		break;
+		return vma_start_write_killable(vma);
 	case PGWALK_WRLOCK_VERIFY:
 		vma_assert_write_locked(vma);
 		break;
@@ -462,6 +461,7 @@ static inline void process_vma_walk_lock(struct vm_area_struct *vma,
 		break;
 	}
 #endif
+	return 0;
 }
 
 /*
@@ -505,7 +505,9 @@ int walk_page_range_mm_unsafe(struct mm_struct *mm, unsigned long start,
 			if (ops->pte_hole)
 				err = ops->pte_hole(start, next, -1, &walk);
 		} else { /* inside vma */
-			process_vma_walk_lock(vma, ops->walk_lock);
+			err = process_vma_walk_lock(vma, ops->walk_lock);
+			if (err)
+				break;
 			walk.vma = vma;
 			next = min(end, vma->vm_end);
 			vma = find_vma(mm, vma->vm_end);
@@ -722,6 +724,7 @@ int walk_page_range_vma_unsafe(struct vm_area_struct *vma, unsigned long start,
 		.vma		= vma,
 		.private	= private,
 	};
+	int err;
 
 	if (start >= end || !walk.mm)
 		return -EINVAL;
@@ -729,7 +732,9 @@ int walk_page_range_vma_unsafe(struct vm_area_struct *vma, unsigned long start,
 		return -EINVAL;
 
 	process_mm_walk_lock(walk.mm, ops->walk_lock);
-	process_vma_walk_lock(vma, ops->walk_lock);
+	err = process_vma_walk_lock(vma, ops->walk_lock);
+	if (err)
+		return err;
 	return __walk_page_range(start, end, &walk);
 }
 
@@ -752,6 +757,7 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		.vma		= vma,
 		.private	= private,
 	};
+	int err;
 
 	if (!walk.mm)
 		return -EINVAL;
@@ -759,7 +765,9 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		return -EINVAL;
 
 	process_mm_walk_lock(walk.mm, ops->walk_lock);
-	process_vma_walk_lock(vma, ops->walk_lock);
+	err = process_vma_walk_lock(vma, ops->walk_lock);
+	if (err)
+		return err;
 	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
 }
 
-- 
2.53.0.1018.g2bb0e51243-goog
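[Editorial aside: the page-walk change above turns a void locking helper into one that returns an error, and every walk loop must now break out on failure instead of pressing on. A minimal userspace sketch of that control flow, with all names invented for illustration:]

```c
#include <assert.h>
#include <errno.h>

/* Illustrative model of the pagewalk change; not the kernel API. */
enum walk_lock_mode { WRLOCK, WRLOCK_VERIFY };

/* Stands in for a pending fatal signal making the killable lock fail. */
static int lock_should_fail;

/* Analogue of the patched process_vma_walk_lock(): now returns an error. */
static int process_walk_lock(enum walk_lock_mode mode)
{
	if (mode == WRLOCK && lock_should_fail)
		return -EINTR;	/* vma_start_write_killable() failed */
	return 0;
}

/*
 * Analogue of walk_page_range_mm_unsafe(): visit n "VMAs", but stop
 * and report the first locking error instead of ignoring it.
 */
static int walk_range(int n, int *visited)
{
	int err = 0;
	int i;

	*visited = 0;
	for (i = 0; i < n; i++) {
		err = process_walk_lock(WRLOCK);
		if (err)
			break;	/* mirrors the new `if (err) break;` in the diff */
		(*visited)++;
	}
	return err;
}
```

The design point the patch makes is visible here: once the helper can fail, *every* caller must either propagate the error or break out, which is why the diff touches walk_page_range_vma_unsafe() and walk_page_vma() as well.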
From nobody Thu Apr 2 14:06:40 2026
Date: Fri, 27 Mar 2026 13:54:57 -0700
In-Reply-To: <20260327205457.604224-1-surenb@google.com>
References: <20260327205457.604224-1-surenb@google.com>
Message-ID: <20260327205457.604224-7-surenb@google.com>
Subject: [PATCH v6 6/6] KVM: PPC: use vma_start_write_killable() in kvmppc_memslot_page_merge()
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, david@kernel.org, ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net, ying.huang@linux.alibaba.com, apopple@nvidia.com, ljs@kernel.org, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev, vbabka@suse.cz, jannh@google.com, rppt@kernel.org, mhocko@suse.com, pfalcato@suse.de, kees@kernel.org, maddy@linux.ibm.com, npiggin@gmail.com, mpe@ellerman.id.au, chleroy@kernel.org, borntraeger@linux.ibm.com, frankja@linux.ibm.com, imbrenda@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gerald.schaefer@linux.ibm.com, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, surenb@google.com, "Ritesh Harjani (IBM)"

Replace vma_start_write() with vma_start_write_killable() in
kvmppc_memslot_page_merge(), improving reaction time to a pending
kill signal.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Lorenzo Stoakes (Oracle)
---
 arch/powerpc/kvm/book3s_hv_uvmem.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 5fbb95d90e99..0a28b48a46b8 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -410,7 +410,10 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
 			ret = H_STATE;
 			break;
 		}
-		vma_start_write(vma);
+		if (vma_start_write_killable(vma)) {
+			ret = H_STATE;
+			break;
+		}
 		/* Copy vm_flags to avoid partial modifications in ksm_madvise */
 		vm_flags = vma->vm_flags;
 		ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
-- 
2.53.0.1018.g2bb0e51243-goog
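[Editorial aside: this last patch shows the simplest consumer pattern — a killable-lock failure is folded into the status code the surrounding loop already uses for VMA errors (H_STATE). A toy sketch; the status values and helper names below are placeholders, not the real PPC hypervisor definitions:]

```c
#include <assert.h>
#include <errno.h>

/* Placeholder status codes; the real H_STATE value lives in the
 * powerpc hypervisor headers and is not reproduced here. */
#define H_SUCCESS 0
#define H_STATE   1

/* Stands in for a pending fatal signal. */
static int lock_fails;

/* Stub analogue of vma_start_write_killable(). */
static int start_write_killable_stub(void)
{
	return lock_fails ? -EINTR : 0;
}

/*
 * Mirrors the patched kvmppc_memslot_page_merge() step: a failed
 * killable lock is reported with the same H_STATE status the loop
 * already uses when no suitable VMA is found, so callers need no
 * new error path.
 */
static int page_merge_step(void)
{
	if (start_write_killable_stub())
		return H_STATE;
	return H_SUCCESS;
}
```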