From: Huacai Chen
To: Arnd Bergmann, Huacai Chen
Cc: loongarch@lists.linux.dev, linux-arch@vger.kernel.org, Xuefeng Li, Guo Ren, Xuerui Wang, Jiaxun Yang, linux-kernel@vger.kernel.org, loongson-kernel@lists.loongnix.cn, Huacai Chen, Jiantao Shan
Subject: [PATCH V2] LoongArch: Remove shm_align_mask and use SHMLBA instead
Date: Tue, 29 Aug 2023 10:58:41 +0800
Message-Id: <20230829025841.1435746-1-chenhuacai@loongson.cn>
X-Mailer: git-send-email 2.39.3

Both shm_align_mask and SHMLBA are meant to avoid cache aliasing, but they
are inconsistent: shm_align_mask is (PAGE_SIZE - 1) while SHMLBA is SZ_64K,
and PAGE_SIZE is not always equal to SZ_64K. This may cause problems when
shmat() is called twice. Fix this problem by removing shm_align_mask and
using SHMLBA (strictly speaking, SHMLBA - 1) instead.

Reported-by: Jiantao Shan
Signed-off-by: Huacai Chen
---
V2: Define SHM_ALIGN_MASK.

 arch/loongarch/mm/cache.c |  1 -
 arch/loongarch/mm/mmap.c  | 13 ++++++-------
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/loongarch/mm/cache.c b/arch/loongarch/mm/cache.c
index 72685a48eaf0..6be04d36ca07 100644
--- a/arch/loongarch/mm/cache.c
+++ b/arch/loongarch/mm/cache.c
@@ -156,7 +156,6 @@ void cpu_cache_init(void)
 
 	current_cpu_data.cache_leaves_present = leaf;
 	current_cpu_data.options |= LOONGARCH_CPU_PREFETCH;
-	shm_align_mask = PAGE_SIZE - 1;
 }
 
 static const pgprot_t protection_map[16] = {
diff --git a/arch/loongarch/mm/mmap.c b/arch/loongarch/mm/mmap.c
index fbe1a4856fc4..a9630a81b38a 100644
--- a/arch/loongarch/mm/mmap.c
+++ b/arch/loongarch/mm/mmap.c
@@ -8,12 +8,11 @@
 #include
 #include
 
-unsigned long shm_align_mask = PAGE_SIZE - 1;	/* Sane caches */
-EXPORT_SYMBOL(shm_align_mask);
+#define SHM_ALIGN_MASK	(SHMLBA - 1)
 
-#define COLOUR_ALIGN(addr, pgoff)				\
-	((((addr) + shm_align_mask) & ~shm_align_mask) +	\
-	 (((pgoff) << PAGE_SHIFT) & shm_align_mask))
+#define COLOUR_ALIGN(addr, pgoff)	\
+	((((addr) + SHM_ALIGN_MASK) & ~SHM_ALIGN_MASK)	\
+	 + (((pgoff) << PAGE_SHIFT) & SHM_ALIGN_MASK))
 
 enum mmap_allocation_direction {UP, DOWN};
 
@@ -40,7 +39,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	 * cache aliasing constraints.
 	 */
 	if ((flags & MAP_SHARED) &&
-	    ((addr - (pgoff << PAGE_SHIFT)) & shm_align_mask))
+	    ((addr - (pgoff << PAGE_SHIFT)) & SHM_ALIGN_MASK))
 		return -EINVAL;
 	return addr;
 }
@@ -63,7 +62,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	}
 
 	info.length = len;
-	info.align_mask = do_color_align ? (PAGE_MASK & shm_align_mask) : 0;
+	info.align_mask = do_color_align ? (PAGE_MASK & SHM_ALIGN_MASK) : 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
 
 	if (dir == DOWN) {
-- 
2.39.3