From: Jesse Taube
To: linux-riscv@lists.infradead.org
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Conor Dooley, Rob Herring, Krzysztof Kozlowski, Clément Léger,
	Evan Green, Andrew Jones, Jesse Taube, Charlie Jenkins, Xiao Wang,
	Andy Chiu, Eric Biggers, Greentime Hu, Björn Töpel, Heiko Stuebner,
	Costa Shulyupin, Andrew Morton, Baoquan He, Anup Patel, Zong Li,
	Sami Tolvanen, Ben Dooks, Alexandre Ghiti, "Gustavo A. R. Silva",
	Erick Archer, Joel Granados, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
	Conor Dooley
Subject: [PATCH v7 5/8] RISC-V: Replace RISCV_MISALIGNED with RISCV_SCALAR_MISALIGNED
Date: Fri, 26 Jul 2024 12:37:16 -0400
Message-ID: <20240726163719.1667923-6-jesse@rivosinc.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240726163719.1667923-1-jesse@rivosinc.com>
References: <20240726163719.1667923-1-jesse@rivosinc.com>

Replace RISCV_MISALIGNED with RISCV_SCALAR_MISALIGNED to allow for the
addition of RISCV_VECTOR_MISALIGNED in a later patch.

Signed-off-by: Jesse Taube
Reviewed-by: Conor Dooley
Reviewed-by: Charlie Jenkins
Reviewed-by: Evan Green
---
V2 -> V3:
 - New patch
V3 -> V4:
 - No changes
V4 -> V5:
 - No changes
V5 -> V6:
 - fix accidental moving of check_unaligned_access_emulated_all_cpus out of the #ifdef
V6 -> V7:
 - No changes
---
 arch/riscv/Kconfig                    | 6 +++---
 arch/riscv/include/asm/cpufeature.h   | 2 +-
 arch/riscv/include/asm/entry-common.h | 2 +-
 arch/riscv/kernel/Makefile            | 4 ++--
 arch/riscv/kernel/fpu.S               | 4 ++--
 5 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index b94176e25be1..34d24242e37a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -717,7 +717,7 @@ config THREAD_SIZE_ORDER
 	  Specify the Pages of thread stack size (from 4KB to 64KB), which also
 	  affects irq stack size, which is equal to thread stack size.
 
-config RISCV_MISALIGNED
+config RISCV_SCALAR_MISALIGNED
 	bool
 	select SYSCTL_ARCH_UNALIGN_ALLOW
 	help
@@ -734,7 +734,7 @@ choice
 
 config RISCV_PROBE_UNALIGNED_ACCESS
 	bool "Probe for hardware unaligned access support"
-	select RISCV_MISALIGNED
+	select RISCV_SCALAR_MISALIGNED
 	help
 	  During boot, the kernel will run a series of tests to determine the
 	  speed of unaligned accesses. This probing will dynamically determine
@@ -745,7 +745,7 @@ config RISCV_PROBE_UNALIGNED_ACCESS
 
 config RISCV_EMULATED_UNALIGNED_ACCESS
 	bool "Emulate unaligned access where system support is missing"
-	select RISCV_MISALIGNED
+	select RISCV_SCALAR_MISALIGNED
 	help
 	  If unaligned memory accesses trap into the kernel as they are not
 	  supported by the system, the kernel will emulate the unaligned
diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 3b24342c7d2a..4ade9f87fc55 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -34,7 +34,7 @@ extern struct riscv_isainfo hart_isa[NR_CPUS];
 
 void riscv_user_isa_enable(void);
 
-#if defined(CONFIG_RISCV_MISALIGNED)
+#if defined(CONFIG_RISCV_SCALAR_MISALIGNED)
 bool check_unaligned_access_emulated_all_cpus(void);
 void check_unaligned_access_emulated(struct work_struct *work __always_unused);
 void unaligned_emulation_finish(void);
diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h
index 2293e535f865..0a4e3544c877 100644
--- a/arch/riscv/include/asm/entry-common.h
+++ b/arch/riscv/include/asm/entry-common.h
@@ -25,7 +25,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 void handle_page_fault(struct pt_regs *regs);
 void handle_break(struct pt_regs *regs);
 
-#ifdef CONFIG_RISCV_MISALIGNED
+#ifdef CONFIG_RISCV_SCALAR_MISALIGNED
 int handle_misaligned_load(struct pt_regs *regs);
 int handle_misaligned_store(struct pt_regs *regs);
 #else
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 5b243d46f4b1..8d4e7d40e42f 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -62,8 +62,8 @@ obj-y += probes/
 obj-y += tests/
 obj-$(CONFIG_MMU) += vdso.o vdso/
 
-obj-$(CONFIG_RISCV_MISALIGNED) += traps_misaligned.o
-obj-$(CONFIG_RISCV_MISALIGNED) += unaligned_access_speed.o
+obj-$(CONFIG_RISCV_SCALAR_MISALIGNED) += traps_misaligned.o
+obj-$(CONFIG_RISCV_SCALAR_MISALIGNED) += unaligned_access_speed.o
 obj-$(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS) += copy-unaligned.o
 
 obj-$(CONFIG_FPU) += fpu.o
diff --git a/arch/riscv/kernel/fpu.S b/arch/riscv/kernel/fpu.S
index 327cf527dd7e..f74f6b60e347 100644
--- a/arch/riscv/kernel/fpu.S
+++ b/arch/riscv/kernel/fpu.S
@@ -170,7 +170,7 @@ SYM_FUNC_END(__fstate_restore)
 	__access_func(f31)
 
 
-#ifdef CONFIG_RISCV_MISALIGNED
+#ifdef CONFIG_RISCV_SCALAR_MISALIGNED
 
 /*
  * Disable compressed instructions set to keep a constant offset between FP
@@ -224,4 +224,4 @@ SYM_FUNC_START(get_f64_reg)
 	fp_access_epilogue
 SYM_FUNC_END(get_f64_reg)
 
-#endif /* CONFIG_RISCV_MISALIGNED */
+#endif /* CONFIG_RISCV_SCALAR_MISALIGNED */
-- 
2.45.2
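
Not part of the patch above, just a rough sketch of the intent stated in the
commit message: with the scalar option under its own symbol, code can be
guarded on CONFIG_RISCV_SCALAR_MISALIGNED and, once a later patch adds it, on
CONFIG_RISCV_VECTOR_MISALIGNED independently. The helper names and the
CONFIG_RISCV_VECTOR_MISALIGNED symbol below are assumptions for illustration,
not code from this series.

#include <linux/kconfig.h>
#include <linux/types.h>

/* Sketch only: helper names are hypothetical, not taken from this series. */
static inline bool has_scalar_misaligned_handling(void)
{
	/* IS_ENABLED() folds to 1 at compile time when the symbol is set. */
	return IS_ENABLED(CONFIG_RISCV_SCALAR_MISALIGNED);
}

static inline bool has_vector_misaligned_handling(void)
{
	/*
	 * CONFIG_RISCV_VECTOR_MISALIGNED does not exist at this point in the
	 * series; it is the option the commit message says a later patch adds.
	 * IS_ENABLED() simply evaluates to 0 for an undefined symbol.
	 */
	return IS_ENABLED(CONFIG_RISCV_VECTOR_MISALIGNED);
}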