From: Dev Jain <dev.jain@arm.com>
To: catalin.marinas@arm.com, will@kernel.org
Cc: ryan.roberts@arm.com, rppt@kernel.org, shijie@os.amperecomputing.com,
    yang@os.amperecomputing.com, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, Dev Jain <dev.jain@arm.com>
Subject: [PATCH 1/2] arm64/pageattr: Propagate return value from __change_memory_common
Date: Wed, 12 Nov 2025 11:57:15 +0530
Message-Id: <20251112062716.64801-2-dev.jain@arm.com>
In-Reply-To: <20251112062716.64801-1-dev.jain@arm.com>
References: <20251112062716.64801-1-dev.jain@arm.com>

The rodata=on security measure requires that any code path which does
vmalloc -> set_memory_ro/set_memory_rox must protect the linear map
alias too. Therefore, if such a call fails, we must abort set_memory_*
and the caller must take appropriate action; currently we are
suppressing the error, and there is a real chance of such an error
arising after commit a166563e7ec3 ("arm64: mm: support large block
mapping when rodata=full"). Therefore, propagate any error to the
caller.

Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
... reviewed-by tag as it's not on a separate line (I haven't checked the ...)

v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
I have dropped stable since there was no real chance of failure there.
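For illustration only (not part of this patch): a minimal caller-side sketch
of the pattern the propagated error enables, assuming a hypothetical helper
alloc_ro_buffer(). Once set_memory_ro() can return an error, such a caller
must check it and back out rather than use a mapping whose linear map alias
may still be writable.

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical caller: allocate a buffer whose vmalloc alias and linear
 * map alias must both become read-only, or not be handed out at all.
 */
static void *alloc_ro_buffer(size_t size)
{
	int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	void *p = vmalloc(size);

	if (!p)
		return NULL;

	/* Have vfree() reset the direct map permissions on free. */
	set_vm_flush_reset_perms(p);

	/* ... fill in the contents while the buffer is still writable ... */

	if (set_memory_ro((unsigned long)p, npages)) {
		/*
		 * The linear map alias could not be made read-only; free the
		 * mapping instead of using it half-protected.
		 */
		vfree(p);
		return NULL;
	}

	return p;
}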
 arch/arm64/mm/pageattr.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;
 
 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}
 
-- 
2.30.2

From: Dev Jain <dev.jain@arm.com>
To: catalin.marinas@arm.com, will@kernel.org
Cc: ryan.roberts@arm.com, rppt@kernel.org, shijie@os.amperecomputing.com,
    yang@os.amperecomputing.com, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, Dev Jain <dev.jain@arm.com>
Subject: [PATCH 2/2] arm64/mm: Document why linear map split failure upon vm_reset_perms is not problematic
Date: Wed, 12 Nov 2025 11:57:16 +0530
Message-Id: <20251112062716.64801-3-dev.jain@arm.com>
In-Reply-To: <20251112062716.64801-1-dev.jain@arm.com>
References: <20251112062716.64801-1-dev.jain@arm.com>

Consider the following code path:

(1) vmalloc -> (2) set_vm_flush_reset_perms -> (3) set_memory_ro/set_memory_rox ->
.... (4) use the mapping .... -> (5) vfree -> (6) vm_reset_perms ->
(7) set_area_direct_map.

Or, it may happen that we encounter failure at (3) and directly jump to
(5). In both cases, (7) may fail due to a linear map split failure. But
we care about its success *only* for the region which was successfully
changed by (3). Such a region is guaranteed to be pte-mapped. The TL;DR
is that (7) will surely succeed for the regions we care about.

Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/mm/pageattr.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index b4ea86cd3a71..dc05f06a47f2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -185,6 +185,15 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 */
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
+		/*
+		 * Note: One may wonder what happens if the calls to
+		 * set_area_direct_map() in vm_reset_perms() fail due to ENOMEM
+		 * on linear map split failure. Observe that we care about those
+		 * calls to succeed *only* for the region whose permissions
+		 * are not default. Such a region is guaranteed to be
+		 * pte-mapped, because the below call can change those
+		 * permissions to non-default only after splitting that region.
+		 */
 		for (i = 0; i < area->nr_pages; i++) {
 			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
-- 
2.30.2
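For reference, a hedged sketch of the (1)-(7) path described in the commit
message above. The caller and its sizes are hypothetical, and the step
annotations mirror the commit message rather than the exact vfree()
internals; it shows how a failure at (3) jumps straight to (5), and why (7)
only matters for pages that (3) already split down to ptes.

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <linux/vmalloc.h>

/* Hypothetical user of the (1)-(7) path described in the commit message. */
static void lifecycle_example(size_t size)
{
	int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	void *buf = vmalloc(size);				/* (1) */

	if (!buf)
		return;

	set_vm_flush_reset_perms(buf);				/* (2) */

	if (set_memory_rox((unsigned long)buf, npages))		/* (3) */
		goto out;	/* failure at (3): jump straight to (5) */

	/* (4) use the mapping */

out:
	/*
	 * (5) vfree() -> (6) vm_reset_perms() -> (7) set_area_direct_map().
	 * (7) may hit a linear map split failure, but only the pages whose
	 * permissions (3) actually changed need to be reset, and those are
	 * already pte-mapped, so resetting them requires no further split.
	 */
	vfree(buf);
}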