From nobody Thu Apr 2 09:20:47 2026
From: "Kalyazin, Nikita"
To: "kvm@vger.kernel.org" , "linux-doc@vger.kernel.org" ,
"linux-kernel@vger.kernel.org" , "linux-arm-kernel@lists.infradead.org" , "kvmarm@lists.linux.dev" , "linux-fsdevel@vger.kernel.org" , "linux-mm@kvack.org" , "bpf@vger.kernel.org" , "linux-kselftest@vger.kernel.org" , "kernel@xen0n.name" , "linux-riscv@lists.infradead.org" , "linux-s390@vger.kernel.org" , "loongarch@lists.linux.dev" , "linux-pm@vger.kernel.org" CC: "pbonzini@redhat.com" , "corbet@lwn.net" , "maz@kernel.org" , "oupton@kernel.org" , "joey.gouly@arm.com" , "suzuki.poulose@arm.com" , "yuzenghui@huawei.com" , "catalin.marinas@arm.com" , "will@kernel.org" , "seanjc@google.com" , "tglx@kernel.org" , "mingo@redhat.com" , "bp@alien8.de" , "dave.hansen@linux.intel.com" , "x86@kernel.org" , "hpa@zytor.com" , "luto@kernel.org" , "peterz@infradead.org" , "willy@infradead.org" , "akpm@linux-foundation.org" , "david@kernel.org" , "lorenzo.stoakes@oracle.com" , "vbabka@kernel.org" , "rppt@kernel.org" , "surenb@google.com" , "mhocko@suse.com" , "ast@kernel.org" , "daniel@iogearbox.net" , "andrii@kernel.org" , "martin.lau@linux.dev" , "eddyz87@gmail.com" , "song@kernel.org" , "yonghong.song@linux.dev" , "john.fastabend@gmail.com" , "kpsingh@kernel.org" , "sdf@fomichev.me" , "haoluo@google.com" , "jolsa@kernel.org" , "jgg@ziepe.ca" , "jhubbard@nvidia.com" , "peterx@redhat.com" , "jannh@google.com" , "pfalcato@suse.de" , "skhan@linuxfoundation.org" , "riel@surriel.com" , "ryan.roberts@arm.com" , "jgross@suse.com" , "yu-cheng.yu@intel.com" , "kas@kernel.org" , "coxu@redhat.com" , "kevin.brodsky@arm.com" , "ackerleytng@google.com" , "yosry@kernel.org" , "ajones@ventanamicro.com" , "maobibo@loongson.cn" , "tabba@google.com" , "prsampat@amd.com" , "wu.fei9@sanechips.com.cn" , "mlevitsk@redhat.com" , "jmattson@google.com" , "jthoughton@google.com" , "agordeev@linux.ibm.com" , "alex@ghiti.fr" , "aou@eecs.berkeley.edu" , "borntraeger@linux.ibm.com" , "chenhuacai@kernel.org" , "dev.jain@arm.com" , "gor@linux.ibm.com" , "hca@linux.ibm.com" , "palmer@dabbelt.com" , 
"pjw@kernel.org" , "shijie@os.amperecomputing.com" , "svens@linux.ibm.com" , "thuth@redhat.com" , "wyihan@google.com" , "yang@os.amperecomputing.com" , "Jonathan.Cameron@huawei.com" , "Liam.Howlett@oracle.com" , "urezki@gmail.com" , "zhengqi.arch@bytedance.com" , "gerald.schaefer@linux.ibm.com" , "jiayuan.chen@shopee.com" , "lenb@kernel.org" , "osalvador@suse.de" , "pavel@kernel.org" , "rafael@kernel.org" , "vannapurve@google.com" , "jackmanb@google.com" , "aneesh.kumar@kernel.org" , "patrick.roy@linux.dev" , "Thomson, Jack" , "Itazuri, Takahiro" , "Manwaring, Derek" , "Kalyazin, Nikita" Subject: [PATCH v11 01/16] set_memory: set_direct_map_* to take address Thread-Topic: [PATCH v11 01/16] set_memory: set_direct_map_* to take address Thread-Index: AQHcthfYKKVWyGEnck+/HNsWWreYfw== Date: Tue, 17 Mar 2026 14:10:44 +0000 Message-ID: <20260317141031.514-2-kalyazin@amazon.com> References: <20260317141031.514-1-kalyazin@amazon.com> In-Reply-To: <20260317141031.514-1-kalyazin@amazon.com> Accept-Language: en-GB, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: Content-Transfer-Encoding: quoted-printable Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Nikita Kalyazin This is to avoid excessive conversions folio->page->address when adding helpers on top of set_direct_map_valid_noflush() in the next patch. 
Acked-by: David Hildenbrand (Arm)
Signed-off-by: Nikita Kalyazin
---
 arch/arm64/include/asm/set_memory.h     |  7 ++++---
 arch/arm64/mm/pageattr.c                | 19 +++++++++----------
 arch/loongarch/include/asm/set_memory.h |  7 ++++---
 arch/loongarch/mm/pageattr.c            | 25 +++++++++++--------------
 arch/riscv/include/asm/set_memory.h     |  7 ++++---
 arch/riscv/mm/pageattr.c                | 17 +++++++++--------
 arch/s390/include/asm/set_memory.h      |  7 ++++---
 arch/s390/mm/pageattr.c                 | 13 +++++++------
 arch/x86/include/asm/set_memory.h       |  7 ++++---
 arch/x86/mm/pat/set_memory.c            | 23 ++++++++++++-----------
 include/linux/set_memory.h              |  9 +++++----
 kernel/power/snapshot.c                 |  4 ++--
 mm/execmem.c                            |  6 ++++--
 mm/secretmem.c                          |  6 +++---
 mm/vmalloc.c                            | 11 +++++++----
 15 files changed, 89 insertions(+), 79 deletions(-)

diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h
index 90f61b17275e..c71a2a6812c4 100644
--- a/arch/arm64/include/asm/set_memory.h
+++ b/arch/arm64/include/asm/set_memory.h
@@ -11,9 +11,10 @@ bool can_set_direct_map(void);
 
 int set_memory_valid(unsigned long addr, int numpages, int enable);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid);
 bool kernel_page_present(struct page *page);
 
 int set_memory_encrypted(unsigned long addr, int numpages);
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 358d1dc9a576..5aff94e1f8b2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -245,7 +245,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
                                 __pgprot(PTE_VALID));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
         pgprot_t clear_mask = __pgprot(PTE_VALID);
         pgprot_t set_mask = __pgprot(0);
@@ -253,11 +253,11 @@ int set_direct_map_invalid_noflush(struct page *page)
         if (!can_set_direct_map())
                 return 0;
 
-        return update_range_prot((unsigned long)page_address(page),
-                                 PAGE_SIZE, set_mask, clear_mask);
+        return update_range_prot((unsigned long)addr, PAGE_SIZE, set_mask,
+                                 clear_mask);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
         pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
         pgprot_t clear_mask = __pgprot(PTE_RDONLY);
@@ -265,8 +265,8 @@ int set_direct_map_default_noflush(struct page *page)
         if (!can_set_direct_map())
                 return 0;
 
-        return update_range_prot((unsigned long)page_address(page),
-                                 PAGE_SIZE, set_mask, clear_mask);
+        return update_range_prot((unsigned long)addr, PAGE_SIZE, set_mask,
+                                 clear_mask);
 }
 
 static int __set_memory_enc_dec(unsigned long addr,
@@ -349,14 +349,13 @@ int realm_register_memory_enc_ops(void)
         return arm64_mem_crypt_ops_register(&realm_crypt_ops);
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid)
 {
-        unsigned long addr = (unsigned long)page_address(page);
-
         if (!can_set_direct_map())
                 return 0;
 
-        return set_memory_valid(addr, nr, valid);
+        return set_memory_valid((unsigned long)addr, numpages, valid);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/loongarch/include/asm/set_memory.h b/arch/loongarch/include/asm/set_memory.h
index 55dfaefd02c8..5e9b67b2fea1 100644
--- a/arch/loongarch/include/asm/set_memory.h
+++ b/arch/loongarch/include/asm/set_memory.h
@@ -15,8 +15,9 @@ int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 
 bool kernel_page_present(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid);
 
 #endif /* _ASM_LOONGARCH_SET_MEMORY_H */
diff --git a/arch/loongarch/mm/pageattr.c b/arch/loongarch/mm/pageattr.c
index f5e910b68229..9e08905d3624 100644
--- a/arch/loongarch/mm/pageattr.c
+++ b/arch/loongarch/mm/pageattr.c
@@ -198,32 +198,29 @@ bool kernel_page_present(struct page *page)
         return pte_present(ptep_get(pte));
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
-        unsigned long addr = (unsigned long)page_address(page);
-
-        if (addr < vm_map_base)
+        if ((unsigned long)addr < vm_map_base)
                 return 0;
 
-        return __set_memory(addr, 1, PAGE_KERNEL, __pgprot(0));
+        return __set_memory((unsigned long)addr, 1, PAGE_KERNEL, __pgprot(0));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
-        unsigned long addr = (unsigned long)page_address(page);
-
-        if (addr < vm_map_base)
+        if ((unsigned long)addr < vm_map_base)
                 return 0;
 
-        return __set_memory(addr, 1, __pgprot(0), __pgprot(_PAGE_PRESENT | _PAGE_VALID));
+        return __set_memory((unsigned long)addr, 1, __pgprot(0),
+                            __pgprot(_PAGE_PRESENT | _PAGE_VALID));
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid)
 {
-        unsigned long addr = (unsigned long)page_address(page);
         pgprot_t set, clear;
 
-        if (addr < vm_map_base)
+        if ((unsigned long)addr < vm_map_base)
                 return 0;
 
         if (valid) {
@@ -234,5 +231,5 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
                 clear = __pgprot(_PAGE_PRESENT | _PAGE_VALID);
         }
 
-        return __set_memory(addr, 1, set, clear);
+        return __set_memory((unsigned long)addr, 1, set, clear);
 }
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 87389e93325a..a87eabd7fc78 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -40,9 +40,10 @@ static inline int set_kernel_memory(char *startp, char *endp,
 }
 #endif
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid);
 bool kernel_page_present(struct page *page);
 
 #endif /* __ASSEMBLER__ */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 3f76db3d2769..0a457177a88c 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -374,19 +374,20 @@ int set_memory_nx(unsigned long addr, int numpages)
         return __set_memory(addr, numpages, __pgprot(0), __pgprot(_PAGE_EXEC));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
-        return __set_memory((unsigned long)page_address(page), 1,
-                            __pgprot(0), __pgprot(_PAGE_PRESENT));
+        return __set_memory((unsigned long)addr, 1, __pgprot(0),
+                            __pgprot(_PAGE_PRESENT));
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
-        return __set_memory((unsigned long)page_address(page), 1,
-                            PAGE_KERNEL, __pgprot(_PAGE_EXEC));
+        return __set_memory((unsigned long)addr, 1, PAGE_KERNEL,
+                            __pgprot(_PAGE_EXEC));
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid)
 {
         pgprot_t set, clear;
 
@@ -398,7 +399,7 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
                 clear = __pgprot(_PAGE_PRESENT);
         }
 
-        return __set_memory((unsigned long)page_address(page), nr, set, clear);
+        return __set_memory((unsigned long)addr, numpages, set, clear);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h
index 94092f4ae764..3e43c3c96e67 100644
--- a/arch/s390/include/asm/set_memory.h
+++ b/arch/s390/include/asm/set_memory.h
@@ -60,9 +60,10 @@ __SET_MEMORY_FUNC(set_memory_rox, SET_MEMORY_RO | SET_MEMORY_X)
 __SET_MEMORY_FUNC(set_memory_rwnx, SET_MEMORY_RW | SET_MEMORY_NX)
 __SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K)
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid);
 bool kernel_page_present(struct page *page);
 
 #endif
diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
index bb29c38ae624..8e90ff5cf50d 100644
--- a/arch/s390/mm/pageattr.c
+++ b/arch/s390/mm/pageattr.c
@@ -383,17 +383,18 @@ int __set_memory(unsigned long addr, unsigned long numpages, unsigned long flags
         return rc;
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
-        return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_INV);
+        return __set_memory((unsigned long)addr, 1, SET_MEMORY_INV);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
-        return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_DEF);
+        return __set_memory((unsigned long)addr, 1, SET_MEMORY_DEF);
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid)
 {
         unsigned long flags;
 
@@ -402,7 +403,7 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
         else
                 flags = SET_MEMORY_INV;
 
-        return __set_memory((unsigned long)page_to_virt(page), nr, flags);
+        return __set_memory((unsigned long)addr, numpages, flags);
 }
 
 bool kernel_page_present(struct page *page)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4362c26aa992..b6a4173ff249 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -86,9 +86,10 @@ int set_pages_wb(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid);
 bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40581a720fe8..6aea1f470fd5 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2587,9 +2587,9 @@ int set_pages_rw(struct page *page, int numpages)
         return set_memory_rw(addr, numpages);
 }
 
-static int __set_pages_p(struct page *page, int numpages)
+static int __set_pages_p(const void *addr, int numpages)
 {
-        unsigned long tempaddr = (unsigned long) page_address(page);
+        unsigned long tempaddr = (unsigned long)addr;
         struct cpa_data cpa = { .vaddr = &tempaddr,
                                 .pgd = NULL,
                                 .numpages = numpages,
@@ -2606,9 +2606,9 @@ static int __set_pages_p(struct page *page, int numpages)
         return __change_page_attr_set_clr(&cpa, 1);
 }
 
-static int __set_pages_np(struct page *page, int numpages)
+static int __set_pages_np(const void *addr, int numpages)
 {
-        unsigned long tempaddr = (unsigned long) page_address(page);
+        unsigned long tempaddr = (unsigned long)addr;
         struct cpa_data cpa = { .vaddr = &tempaddr,
                                 .pgd = NULL,
                                 .numpages = numpages,
@@ -2625,22 +2625,23 @@ static int __set_pages_np(struct page *page, int numpages)
         return __change_page_attr_set_clr(&cpa, 1);
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
-        return __set_pages_np(page, 1);
+        return __set_pages_np(addr, 1);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
-        return __set_pages_p(page, 1);
+        return __set_pages_p(addr, 1);
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+                                 bool valid)
 {
         if (valid)
-                return __set_pages_p(page, nr);
+                return __set_pages_p(addr, numpages);
 
-        return __set_pages_np(page, nr);
+        return __set_pages_np(addr, numpages);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index 3030d9245f5a..1a2563f525fc 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -25,17 +25,18 @@ static inline int set_memory_rox(unsigned long addr, int numpages)
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(const void *addr)
 {
         return 0;
 }
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(const void *addr)
 {
         return 0;
 }
 
-static inline int set_direct_map_valid_noflush(struct page *page,
-                                               unsigned nr, bool valid)
+static inline int set_direct_map_valid_noflush(const void *addr,
+                                               unsigned long numpages,
+                                               bool valid)
 {
         return 0;
 }
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 6e1321837c66..6eddfb22c0ff 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -88,7 +88,7 @@ static inline int hibernate_restore_unprotect_page(void *page_address) {return 0
 static inline void hibernate_map_page(struct page *page)
 {
         if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
-                int ret = set_direct_map_default_noflush(page);
+                int ret = set_direct_map_default_noflush(page_address(page));
 
                 if (ret)
                         pr_warn_once("Failed to remap page\n");
@@ -101,7 +101,7 @@ static inline void hibernate_unmap_page(struct page *page)
 {
         if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
                 unsigned long addr = (unsigned long)page_address(page);
-                int ret = set_direct_map_invalid_noflush(page);
+                int ret = set_direct_map_invalid_noflush(page_address(page));
 
                 if (ret)
                         pr_warn_once("Failed to remap page\n");
diff --git a/mm/execmem.c b/mm/execmem.c
index 810a4ba9c924..220298ec87c8 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -119,7 +119,8 @@ static int execmem_set_direct_map_valid(struct vm_struct *vm, bool valid)
         int err = 0;
 
         for (int i = 0; i < vm->nr_pages; i += nr) {
-                err = set_direct_map_valid_noflush(vm->pages[i], nr, valid);
+                err = set_direct_map_valid_noflush(page_address(vm->pages[i]),
+                                                   nr, valid);
                 if (err)
                         goto err_restore;
                 updated += nr;
@@ -129,7 +130,8 @@ static int execmem_set_direct_map_valid(struct vm_struct *vm, bool valid)
 
 err_restore:
         for (int i = 0; i < updated; i += nr)
-                set_direct_map_valid_noflush(vm->pages[i], nr, !valid);
+                set_direct_map_valid_noflush(page_address(vm->pages[i]), nr,
+                                             !valid);
 
         return err;
 }
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 11a779c812a7..fd29b33c6764 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -72,7 +72,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
                         goto out;
                 }
 
-                err = set_direct_map_invalid_noflush(folio_page(folio, 0));
+                err = set_direct_map_invalid_noflush(folio_address(folio));
                 if (err) {
                         folio_put(folio);
                         ret = vmf_error(err);
@@ -87,7 +87,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
                          * already happened when we marked the page invalid
                          * which guarantees that this call won't fail
                          */
-                        set_direct_map_default_noflush(folio_page(folio, 0));
+                        set_direct_map_default_noflush(folio_address(folio));
                         folio_put(folio);
                         if (err == -EEXIST)
                                 goto retry;
@@ -151,7 +151,7 @@ static int secretmem_migrate_folio(struct address_space *mapping,
 
 static void secretmem_free_folio(struct folio *folio)
 {
-        set_direct_map_default_noflush(folio_page(folio, 0));
+        set_direct_map_default_noflush(folio_address(folio));
         folio_zero_segment(folio, 0, folio_size(folio));
 }
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 61caa55a4402..8822f73957d9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3342,14 +3342,17 @@ struct vm_struct *remove_vm_area(const void *addr)
 }
 
 static inline void set_area_direct_map(const struct vm_struct *area,
-                                       int (*set_direct_map)(struct page *page))
+                                       int (*set_direct_map)(const void *addr))
 {
         int i;
 
         /* HUGE_VMALLOC passes small pages to set_direct_map */
-        for (i = 0; i < area->nr_pages; i++)
-                if (page_address(area->pages[i]))
-                        set_direct_map(area->pages[i]);
+        for (i = 0; i < area->nr_pages; i++) {
+                const void *addr = page_address(area->pages[i]);
+
+                if (addr)
+                        set_direct_map(addr);
+        }
 }
 
 /*
-- 
2.50.1