From nobody Mon Feb 9 17:07:20 2026
From: Charlie Jenkins <charlie@rivosinc.com>
Date: Fri, 01 Mar 2024 17:45:32 -0800
Subject: [PATCH v6 1/4] riscv: lib: Introduce has_fast_unaligned_accesses() function
Message-Id: <20240301-disable_misaligned_probe_config-v6-1-612ebd69f430@rivosinc.com>
References: <20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com>
In-Reply-To: <20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Jisheng Zhang, Evan Green,
    Clément Léger, Eric Biggers, Elliot Berman, Charles Lohr, Conor Dooley
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Charlie Jenkins <charlie@rivosinc.com>

Create has_fast_unaligned_accesses() to avoid needing to explicitly check
the fast_misaligned_access_speed_key static key.
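
For orientation, a minimal sketch of the static-key wrapper pattern this
introduces; example_key and has_example_feature() are hypothetical names,
not identifiers from this patch:

  #include <linux/jump_label.h>

  /* The .c file that owns the key defines it once... */
  DEFINE_STATIC_KEY_FALSE(example_key);

  /* ...while the header only exposes a helper, so callers never name
   * the key directly but still get the runtime-patched inline branch. */
  DECLARE_STATIC_KEY_FALSE(example_key);

  static __always_inline bool has_example_feature(void)
  {
          return static_branch_likely(&example_key);
  }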
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Reviewed-by: Evan Green
---
 arch/riscv/include/asm/cpufeature.h | 11 ++++++++---
 arch/riscv/kernel/cpufeature.c      |  6 +++---
 arch/riscv/lib/csum.c               |  7 ++-----
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 5a626ed2c47a..466e1f591919 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright 2022-2023 Rivos, Inc
+ * Copyright 2022-2024 Rivos, Inc
  */
 
 #ifndef _ASM_CPUFEATURE_H
@@ -53,6 +53,13 @@ static inline bool check_unaligned_access_emulated(int cpu)
 static inline void unaligned_emulation_finish(void) {}
 #endif
 
+DECLARE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
+
+static __always_inline bool has_fast_unaligned_accesses(void)
+{
+	return static_branch_likely(&fast_unaligned_access_speed_key);
+}
+
 unsigned long riscv_get_elf_hwcap(void);
 
 struct riscv_isa_ext_data {
@@ -135,6 +142,4 @@ static __always_inline bool riscv_cpu_has_extension_unlikely(int cpu, const unsi
 	return __riscv_isa_extension_available(hart_isa[cpu].isa, ext);
 }
 
-DECLARE_STATIC_KEY_FALSE(fast_misaligned_access_speed_key);
-
 #endif
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 89920f84d0a3..7878cddccc0d 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -810,14 +810,14 @@ static void check_unaligned_access_nonboot_cpu(void *param)
 	check_unaligned_access(pages[cpu]);
 }
 
-DEFINE_STATIC_KEY_FALSE(fast_misaligned_access_speed_key);
+DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
 
 static void modify_unaligned_access_branches(cpumask_t *mask, int weight)
 {
 	if (cpumask_weight(mask) == weight)
-		static_branch_enable_cpuslocked(&fast_misaligned_access_speed_key);
+		static_branch_enable_cpuslocked(&fast_unaligned_access_speed_key);
 	else
-		static_branch_disable_cpuslocked(&fast_misaligned_access_speed_key);
+		static_branch_disable_cpuslocked(&fast_unaligned_access_speed_key);
 }
 
 static void set_unaligned_access_static_branches_except_cpu(int cpu)
diff --git a/arch/riscv/lib/csum.c b/arch/riscv/lib/csum.c
index af3df5274ccb..7178e0acfa22 100644
--- a/arch/riscv/lib/csum.c
+++ b/arch/riscv/lib/csum.c
@@ -3,7 +3,7 @@
  * Checksum library
  *
  * Influenced by arch/arm64/lib/csum.c
- * Copyright (C) 2023 Rivos Inc.
+ * Copyright (C) 2023-2024 Rivos Inc.
  */
 #include
 #include
@@ -318,10 +318,7 @@ unsigned int do_csum(const unsigned char *buff, int len)
	 * branches. The largest chunk of overlap was delegated into the
	 * do_csum_common function.
	 */
-	if (static_branch_likely(&fast_misaligned_access_speed_key))
-		return do_csum_no_alignment(buff, len);
-
-	if (((unsigned long)buff & OFFSET_MASK) == 0)
+	if (has_fast_unaligned_accesses() || (((unsigned long)buff & OFFSET_MASK) == 0))
 		return do_csum_no_alignment(buff, len);
 
 	return do_csum_with_alignment(buff, len);
-- 
2.43.2
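
Condensed before/after view of the csum.c control flow changed above
(identifiers taken from the patch, surrounding function body elided):

  /* Before: two separate early returns into the same fast path. */
  if (static_branch_likely(&fast_misaligned_access_speed_key))
          return do_csum_no_alignment(buff, len);
  if (((unsigned long)buff & OFFSET_MASK) == 0)
          return do_csum_no_alignment(buff, len);
  return do_csum_with_alignment(buff, len);

  /* After: one condition and one call site, same behavior. */
  if (has_fast_unaligned_accesses() || (((unsigned long)buff & OFFSET_MASK) == 0))
          return do_csum_no_alignment(buff, len);
  return do_csum_with_alignment(buff, len);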
From nobody Mon Feb 9 17:07:20 2026
From: Charlie Jenkins <charlie@rivosinc.com>
Date: Fri, 01 Mar 2024 17:45:33 -0800
Subject: [PATCH v6 2/4] riscv: Only check online cpus for emulated accesses
Message-Id: <20240301-disable_misaligned_probe_config-v6-2-612ebd69f430@rivosinc.com>
References: <20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com>
In-Reply-To: <20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Jisheng Zhang, Evan Green,
    Clément Léger, Eric Biggers, Elliot Berman, Charles Lohr, Conor Dooley
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Charlie Jenkins <charlie@rivosinc.com>

The unaligned access checker only sets valid values for online cpus.
Check for these values on online cpus rather than on present cpus.

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Fixes: 71c54b3d169d ("riscv: report misaligned accesses emulation to hwprobe")
Reviewed-by: Conor Dooley
---
 arch/riscv/kernel/traps_misaligned.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 8ded225e8c5b..c2ed4e689bf9 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -632,7 +632,7 @@ void unaligned_emulation_finish(void)
	 * accesses emulated since tasks requesting such control can run on any
	 * CPU.
	 */
-	for_each_present_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		if (per_cpu(misaligned_access_speed, cpu) !=
 		    RISCV_HWPROBE_MISALIGNED_EMULATED) {
 			return;
-- 
2.43.2
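
The distinction matters because a CPU can be present without ever having
been brought online: such a CPU never ran the emulation probe, so its
per-cpu misaligned_access_speed still holds the zero-initialized
RISCV_HWPROBE_MISALIGNED_UNKNOWN value. Annotated shape of the fixed loop
(code as in the hunk above, comments added):

  for_each_online_cpu(cpu) {
          /* Only CPUs that were onlined had misaligned_access_speed
           * filled in; a present-but-offline CPU would still hold
           * UNKNOWN and wrongly veto PR_UNALIGN support here. */
          if (per_cpu(misaligned_access_speed, cpu) !=
              RISCV_HWPROBE_MISALIGNED_EMULATED) {
                  return;
          }
  }
  unaligned_ctl = true;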
From nobody Mon Feb 9 17:07:20 2026
From: Charlie Jenkins <charlie@rivosinc.com>
Date: Fri, 01 Mar 2024 17:45:34 -0800
Subject: [PATCH v6 3/4] riscv: Decouple emulated unaligned accesses from access speed
Message-Id: <20240301-disable_misaligned_probe_config-v6-3-612ebd69f430@rivosinc.com>
References: <20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com>
In-Reply-To: <20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Jisheng Zhang, Evan Green,
    Clément Léger, Eric Biggers, Elliot Berman, Charles Lohr, Conor Dooley
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Charlie Jenkins <charlie@rivosinc.com>

Detecting if a system traps into the kernel on an unaligned access can
be performed separately from checking the speed of unaligned accesses.
This decoupling will make it possible to selectively enable or disable
each of these checks as is done in the following patch.
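
The resulting boot flow, lifted from the cpufeature.c hunk below with
explanatory comments added:

  static int check_unaligned_access_all_cpus(void)
  {
          /* Step 1: do unaligned accesses trap and get emulated? */
          bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();

          /* Step 2: only measure access speed when the accesses are
           * real, i.e. at least one CPU handles them in hardware. */
          if (!all_cpus_emulated)
                  return check_unaligned_access_speed_all_cpus();

          return 0;
  }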
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
 arch/riscv/include/asm/cpufeature.h  |  2 +-
 arch/riscv/kernel/cpufeature.c       | 25 +++++++++++++++++++++----
 arch/riscv/kernel/traps_misaligned.c | 20 +++++++-------------
 3 files changed, 29 insertions(+), 18 deletions(-)

diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 466e1f591919..6fec91845aa0 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -37,7 +37,7 @@ void riscv_user_isa_enable(void);
 
 #ifdef CONFIG_RISCV_MISALIGNED
 bool unaligned_ctl_available(void);
-bool check_unaligned_access_emulated(int cpu);
+bool check_unaligned_access_emulated_all_cpus(void);
 void unaligned_emulation_finish(void);
 #else
 static inline bool unaligned_ctl_available(void)
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 7878cddccc0d..abb3a2f53106 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -719,7 +719,8 @@ static int check_unaligned_access(void *param)
 	void *src;
 	long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
 
-	if (check_unaligned_access_emulated(cpu))
+	if (IS_ENABLED(CONFIG_RISCV_MISALIGNED) &&
+	    per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
 		return 0;
 
 	/* Make an unaligned destination buffer. */
@@ -896,8 +897,8 @@ static int riscv_offline_cpu(unsigned int cpu)
 	return 0;
 }
 
-/* Measure unaligned access on all CPUs present at boot in parallel. */
-static int check_unaligned_access_all_cpus(void)
+/* Measure unaligned access speed on all CPUs present at boot in parallel. */
+static int check_unaligned_access_speed_all_cpus(void)
 {
 	unsigned int cpu;
 	unsigned int cpu_count = num_possible_cpus();
@@ -935,7 +936,6 @@ static int check_unaligned_access_all_cpus(void)
 			  riscv_online_cpu, riscv_offline_cpu);
 
 out:
-	unaligned_emulation_finish();
 	for_each_cpu(cpu, cpu_online_mask) {
 		if (bufs[cpu])
 			__free_pages(bufs[cpu], MISALIGNED_BUFFER_ORDER);
@@ -945,6 +945,23 @@ static int check_unaligned_access_all_cpus(void)
 	return 0;
 }
 
+#ifdef CONFIG_RISCV_MISALIGNED
+static int check_unaligned_access_all_cpus(void)
+{
+	bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
+
+	if (!all_cpus_emulated)
+		return check_unaligned_access_speed_all_cpus();
+
+	return 0;
+}
+#else
+static int check_unaligned_access_all_cpus(void)
+{
+	return check_unaligned_access_speed_all_cpus();
+}
+#endif
+
 arch_initcall(check_unaligned_access_all_cpus);
 
 void riscv_user_isa_enable(void)
diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index c2ed4e689bf9..1e3cec3f5d93 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -596,7 +596,7 @@ int handle_misaligned_store(struct pt_regs *regs)
 	return 0;
 }
 
-bool check_unaligned_access_emulated(int cpu)
+static bool check_unaligned_access_emulated(int cpu)
 {
 	long *mas_ptr = per_cpu_ptr(&misaligned_access_speed, cpu);
 	unsigned long tmp_var, tmp_val;
@@ -623,22 +623,16 @@ bool check_unaligned_access_emulated(int cpu)
 	return misaligned_emu_detected;
 }
 
-void unaligned_emulation_finish(void)
+bool check_unaligned_access_emulated_all_cpus(void)
 {
 	int cpu;
 
-	/*
-	 * We can only support PR_UNALIGN controls if all CPUs have misaligned
-	 * accesses emulated since tasks requesting such control can run on any
-	 * CPU.
-	 */
-	for_each_online_cpu(cpu) {
-		if (per_cpu(misaligned_access_speed, cpu) !=
-		    RISCV_HWPROBE_MISALIGNED_EMULATED) {
-			return;
-		}
-	}
+	for_each_online_cpu(cpu)
+		if (!check_unaligned_access_emulated(cpu))
+			return false;
+
 	unaligned_ctl = true;
+	return true;
 }
 
 bool unaligned_ctl_available(void)
-- 
2.43.2
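
unaligned_ctl in turn gates the PR_UNALIGN prctl pair on riscv. A
userspace-side view of what the all-CPUs contract enables (standard
prctl interface, shown for context only; not part of this series):

  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          /* Ask for SIGBUS instead of in-kernel emulation; the kernel
           * can only honor this when every CPU emulates misaligned
           * accesses, which is exactly what unaligned_ctl records. */
          if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS) != 0)
                  perror("PR_SET_UNALIGN");
          return 0;
  }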
From nobody Mon Feb 9 17:07:20 2026
From: Charlie Jenkins <charlie@rivosinc.com>
Date: Fri, 01 Mar 2024 17:45:35 -0800
Subject: [PATCH v6 4/4] riscv: Set unaligned access speed at compile time
Message-Id: <20240301-disable_misaligned_probe_config-v6-4-612ebd69f430@rivosinc.com>
References: <20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com>
In-Reply-To: <20240301-disable_misaligned_probe_config-v6-0-612ebd69f430@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Jisheng Zhang, Evan Green,
    Clément Léger, Eric Biggers, Elliot Berman, Charles Lohr, Conor Dooley
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Charlie Jenkins <charlie@rivosinc.com>

Introduce Kconfig options to set the kernel unaligned access support.
These options provide a non-portable alternative to the runtime
unaligned access probe. To support this, the unaligned access probing
code is moved into its own file and gated behind a new
RISCV_PROBE_UNALIGNED_ACCESS option.

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
 arch/riscv/Kconfig                         |  58 ++++--
 arch/riscv/include/asm/cpufeature.h        |  26 +--
 arch/riscv/kernel/Makefile                 |   4 +-
 arch/riscv/kernel/cpufeature.c             | 272 --------------------------
 arch/riscv/kernel/sys_hwprobe.c            |  20 +++
 arch/riscv/kernel/traps_misaligned.c       |   2 +
 arch/riscv/kernel/unaligned_access_speed.c | 282 +++++++++++++++++++++++++++
 7 files changed, 368 insertions(+), 296 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index bffbd869a068..60b6de35599d 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -688,27 +688,61 @@ config THREAD_SIZE_ORDER
 	  affects irq stack size, which is equal to thread stack size.
 
 config RISCV_MISALIGNED
-	bool "Support misaligned load/store traps for kernel and userspace"
+	bool
 	select SYSCTL_ARCH_UNALIGN_ALLOW
-	default y
 	help
-	  Say Y here if you want the kernel to embed support for misaligned
-	  load/store for both kernel and userspace. When disable, misaligned
-	  accesses will generate SIGBUS in userspace and panic in kernel.
+	  Embed support for misaligned load/store for both kernel and userspace.
+	  When disabled, misaligned accesses will generate SIGBUS in userspace
+	  and panic in kernel.
+
+choice
+	prompt "Unaligned Accesses Support"
+	default RISCV_PROBE_UNALIGNED_ACCESS
+	help
+	  This selects the hardware support for unaligned accesses. This
+	  information is used by the kernel to perform optimizations. It is also
+	  exposed to user space via the hwprobe syscall. The hardware will be
+	  probed at boot by default.
+
+config RISCV_PROBE_UNALIGNED_ACCESS
+	bool "Probe for hardware unaligned access support"
+	select RISCV_MISALIGNED
+	help
+	  During boot, the kernel will run a series of tests to determine the
+	  speed of unaligned accesses. This probing will dynamically determine
+	  the speed of unaligned accesses on the boot hardware. The kernel will
+	  also check if unaligned memory accesses will trap into the kernel and
+	  handle such traps accordingly.
+
+config RISCV_EMULATED_UNALIGNED_ACCESS
+	bool "Assume the system expects emulated unaligned memory accesses"
+	select RISCV_MISALIGNED
+	help
+	  Assume that the system expects unaligned memory accesses to be
+	  emulated. The kernel will check if unaligned memory accesses will
+	  trap into the kernel and handle such traps accordingly.
+
+config RISCV_SLOW_UNALIGNED_ACCESS
+	bool "Assume the system supports slow unaligned memory accesses"
+	depends on NONPORTABLE
+	help
+	  Assume that the system supports slow unaligned memory accesses. The
+	  kernel may not be able to run at all on systems that do not support
+	  unaligned memory accesses.
 
 config RISCV_EFFICIENT_UNALIGNED_ACCESS
-	bool "Assume the CPU supports fast unaligned memory accesses"
+	bool "Assume the system supports fast unaligned memory accesses"
 	depends on NONPORTABLE
 	select DCACHE_WORD_ACCESS if MMU
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	help
-	  Say Y here if you want the kernel to assume that the CPU supports
-	  efficient unaligned memory accesses. When enabled, this option
-	  improves the performance of the kernel on such CPUs. However, the
-	  kernel will run much more slowly, or will not be able to run at all,
-	  on CPUs that do not support efficient unaligned memory accesses.
+	  Assume that the system supports fast unaligned memory accesses. When
+	  enabled, this option improves the performance of the kernel on such
+	  systems. However, the kernel will run much more slowly, or will not
+	  be able to run at all, on systems that do not support efficient
+	  unaligned memory accesses.
 
-	  If unsure what to do here, say N.
+endchoice
 
 endmenu # "Platform type"
 
diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 6fec91845aa0..357972cd8f82 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -28,37 +28,41 @@ struct riscv_isainfo {
 
 DECLARE_PER_CPU(struct riscv_cpuinfo, riscv_cpuinfo);
 
-DECLARE_PER_CPU(long, misaligned_access_speed);
-
 /* Per-cpu ISA extensions. */
 extern struct riscv_isainfo hart_isa[NR_CPUS];
 
 void riscv_user_isa_enable(void);
 
-#ifdef CONFIG_RISCV_MISALIGNED
-bool unaligned_ctl_available(void);
+#if defined(CONFIG_RISCV_MISALIGNED)
 bool check_unaligned_access_emulated_all_cpus(void);
 void unaligned_emulation_finish(void);
+bool unaligned_ctl_available(void);
+DECLARE_PER_CPU(long, misaligned_access_speed);
 #else
 static inline bool unaligned_ctl_available(void)
 {
 	return false;
 }
-
-static inline bool check_unaligned_access_emulated(int cpu)
-{
-	return false;
-}
-
-static inline void unaligned_emulation_finish(void) {}
 #endif
 
+#if defined(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS)
 DECLARE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
 
 static __always_inline bool has_fast_unaligned_accesses(void)
 {
 	return static_branch_likely(&fast_unaligned_access_speed_key);
 }
+#elif defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+static __always_inline bool has_fast_unaligned_accesses(void)
+{
+	return true;
+}
+#else
+static __always_inline bool has_fast_unaligned_accesses(void)
+{
+	return false;
+}
+#endif
 
 unsigned long riscv_get_elf_hwcap(void);
 
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index f71910718053..c8085126a6f9 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -38,7 +38,6 @@ extra-y += vmlinux.lds
 obj-y += head.o
 obj-y += soc.o
 obj-$(CONFIG_RISCV_ALTERNATIVE) += alternative.o
-obj-y += copy-unaligned.o
 obj-y += cpu.o
 obj-y += cpufeature.o
 obj-y += entry.o
@@ -62,6 +61,9 @@ obj-y += tests/
 obj-$(CONFIG_MMU) += vdso.o vdso/
 
 obj-$(CONFIG_RISCV_MISALIGNED) += traps_misaligned.o
+obj-$(CONFIG_RISCV_MISALIGNED) += unaligned_access_speed.o
+obj-$(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS) += copy-unaligned.o
+
 obj-$(CONFIG_FPU) += fpu.o
 obj-$(CONFIG_RISCV_ISA_V) += vector.o
 obj-$(CONFIG_RISCV_ISA_V) += kernel_mode_vector.o
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index abb3a2f53106..319670af5704 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -21,20 +20,12 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 
-#include "copy-unaligned.h"
-
 #define NUM_ALPHA_EXTS ('z' - 'a' + 1)
 
-#define MISALIGNED_ACCESS_JIFFIES_LG2 1
-#define MISALIGNED_BUFFER_SIZE 0x4000
-#define MISALIGNED_BUFFER_ORDER get_order(MISALIGNED_BUFFER_SIZE)
-#define MISALIGNED_COPY_SIZE ((MISALIGNED_BUFFER_SIZE / 2) - 0x80)
-
 unsigned long elf_hwcap __read_mostly;
 
 /* Host ISA bitmap */
@@ -43,11 +34,6 @@ static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly;
 /* Per-cpu ISA extensions. */
 struct riscv_isainfo hart_isa[NR_CPUS];
 
-/* Performance information */
-DEFINE_PER_CPU(long, misaligned_access_speed);
-
-static cpumask_t fast_misaligned_access;
-
 /**
  * riscv_isa_extension_base() - Get base extension word
  *
@@ -706,264 +692,6 @@ unsigned long riscv_get_elf_hwcap(void)
 	return hwcap;
 }
 
-static int check_unaligned_access(void *param)
-{
-	int cpu = smp_processor_id();
-	u64 start_cycles, end_cycles;
-	u64 word_cycles;
-	u64 byte_cycles;
-	int ratio;
-	unsigned long start_jiffies, now;
-	struct page *page = param;
-	void *dst;
-	void *src;
-	long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
-
-	if (IS_ENABLED(CONFIG_RISCV_MISALIGNED) &&
-	    per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
-		return 0;
-
-	/* Make an unaligned destination buffer. */
-	dst = (void *)((unsigned long)page_address(page) | 0x1);
-	/* Unalign src as well, but differently (off by 1 + 2 = 3). */
-	src = dst + (MISALIGNED_BUFFER_SIZE / 2);
-	src += 2;
-	word_cycles = -1ULL;
-	/* Do a warmup. */
-	__riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
-	preempt_disable();
-	start_jiffies = jiffies;
-	while ((now = jiffies) == start_jiffies)
-		cpu_relax();
-
-	/*
-	 * For a fixed amount of time, repeatedly try the function, and take
-	 * the best time in cycles as the measurement.
-	 */
-	while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
-		start_cycles = get_cycles64();
-		/* Ensure the CSR read can't reorder WRT to the copy. */
-		mb();
-		__riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
-		/* Ensure the copy ends before the end time is snapped. */
-		mb();
-		end_cycles = get_cycles64();
-		if ((end_cycles - start_cycles) < word_cycles)
-			word_cycles = end_cycles - start_cycles;
-	}
-
-	byte_cycles = -1ULL;
-	__riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
-	start_jiffies = jiffies;
-	while ((now = jiffies) == start_jiffies)
-		cpu_relax();
-
-	while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
-		start_cycles = get_cycles64();
-		mb();
-		__riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
-		mb();
-		end_cycles = get_cycles64();
-		if ((end_cycles - start_cycles) < byte_cycles)
-			byte_cycles = end_cycles - start_cycles;
-	}
-
-	preempt_enable();
-
-	/* Don't divide by zero. */
-	if (!word_cycles || !byte_cycles) {
-		pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned access speed\n",
-			cpu);
-
-		return 0;
-	}
-
-	if (word_cycles < byte_cycles)
-		speed = RISCV_HWPROBE_MISALIGNED_FAST;
-
-	ratio = div_u64((byte_cycles * 100), word_cycles);
-	pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
-		cpu,
-		ratio / 100,
-		ratio % 100,
-		(speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
-
-	per_cpu(misaligned_access_speed, cpu) = speed;
-
-	/*
-	 * Set the value of fast_misaligned_access of a CPU. These operations
-	 * are atomic to avoid race conditions.
-	 */
-	if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
-		cpumask_set_cpu(cpu, &fast_misaligned_access);
-	else
-		cpumask_clear_cpu(cpu, &fast_misaligned_access);
-
-	return 0;
-}
-
-static void check_unaligned_access_nonboot_cpu(void *param)
-{
-	unsigned int cpu = smp_processor_id();
-	struct page **pages = param;
-
-	if (smp_processor_id() != 0)
-		check_unaligned_access(pages[cpu]);
-}
-
-DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
-
-static void modify_unaligned_access_branches(cpumask_t *mask, int weight)
-{
-	if (cpumask_weight(mask) == weight)
-		static_branch_enable_cpuslocked(&fast_unaligned_access_speed_key);
-	else
-		static_branch_disable_cpuslocked(&fast_unaligned_access_speed_key);
-}
-
-static void set_unaligned_access_static_branches_except_cpu(int cpu)
-{
-	/*
-	 * Same as set_unaligned_access_static_branches, except excludes the
-	 * given CPU from the result. When a CPU is hotplugged into an offline
-	 * state, this function is called before the CPU is set to offline in
-	 * the cpumask, and thus the CPU needs to be explicitly excluded.
-	 */
-
-	cpumask_t fast_except_me;
-
-	cpumask_and(&fast_except_me, &fast_misaligned_access, cpu_online_mask);
-	cpumask_clear_cpu(cpu, &fast_except_me);
-
-	modify_unaligned_access_branches(&fast_except_me, num_online_cpus() - 1);
-}
-
-static void set_unaligned_access_static_branches(void)
-{
-	/*
-	 * This will be called after check_unaligned_access_all_cpus so the
-	 * result of unaligned access speed for all CPUs will be available.
-	 *
-	 * To avoid the number of online cpus changing between reading
-	 * cpu_online_mask and calling num_online_cpus, cpus_read_lock must be
-	 * held before calling this function.
-	 */
-
-	cpumask_t fast_and_online;
-
-	cpumask_and(&fast_and_online, &fast_misaligned_access, cpu_online_mask);
-
-	modify_unaligned_access_branches(&fast_and_online, num_online_cpus());
-}
-
-static int lock_and_set_unaligned_access_static_branch(void)
-{
-	cpus_read_lock();
-	set_unaligned_access_static_branches();
-	cpus_read_unlock();
-
-	return 0;
-}
-
-arch_initcall_sync(lock_and_set_unaligned_access_static_branch);
-
-static int riscv_online_cpu(unsigned int cpu)
-{
-	static struct page *buf;
-
-	/* We are already set since the last check */
-	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
-		goto exit;
-
-	buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
-	if (!buf) {
-		pr_warn("Allocation failure, not measuring misaligned performance\n");
-		return -ENOMEM;
-	}
-
-	check_unaligned_access(buf);
-	__free_pages(buf, MISALIGNED_BUFFER_ORDER);
-
-exit:
-	set_unaligned_access_static_branches();
-
-	return 0;
-}
-
-static int riscv_offline_cpu(unsigned int cpu)
-{
-	set_unaligned_access_static_branches_except_cpu(cpu);
-
-	return 0;
-}
-
-/* Measure unaligned access speed on all CPUs present at boot in parallel. */
-static int check_unaligned_access_speed_all_cpus(void)
-{
-	unsigned int cpu;
-	unsigned int cpu_count = num_possible_cpus();
-	struct page **bufs = kzalloc(cpu_count * sizeof(struct page *),
-				     GFP_KERNEL);
-
-	if (!bufs) {
-		pr_warn("Allocation failure, not measuring misaligned performance\n");
-		return 0;
-	}
-
-	/*
-	 * Allocate separate buffers for each CPU so there's no fighting over
-	 * cache lines.
-	 */
-	for_each_cpu(cpu, cpu_online_mask) {
-		bufs[cpu] = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
-		if (!bufs[cpu]) {
-			pr_warn("Allocation failure, not measuring misaligned performance\n");
-			goto out;
-		}
-	}
-
-	/* Check everybody except 0, who stays behind to tend jiffies. */
-	on_each_cpu(check_unaligned_access_nonboot_cpu, bufs, 1);
-
-	/* Check core 0. */
-	smp_call_on_cpu(0, check_unaligned_access, bufs[0], true);
-
-	/*
-	 * Setup hotplug callbacks for any new CPUs that come online or go
-	 * offline.
-	 */
-	cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
-				  riscv_online_cpu, riscv_offline_cpu);
-
-out:
-	for_each_cpu(cpu, cpu_online_mask) {
-		if (bufs[cpu])
-			__free_pages(bufs[cpu], MISALIGNED_BUFFER_ORDER);
-	}
-
-	kfree(bufs);
-	return 0;
-}
-
-#ifdef CONFIG_RISCV_MISALIGNED
-static int check_unaligned_access_all_cpus(void)
-{
-	bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
-
-	if (!all_cpus_emulated)
-		return check_unaligned_access_speed_all_cpus();
-
-	return 0;
-}
-#else
-static int check_unaligned_access_all_cpus(void)
-{
-	return check_unaligned_access_speed_all_cpus();
-}
-#endif
-
-arch_initcall(check_unaligned_access_all_cpus);
-
 void riscv_user_isa_enable(void)
 {
 	if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_ZICBOZ))
diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
index a7c56b41efd2..dad02f5faec3 100644
--- a/arch/riscv/kernel/sys_hwprobe.c
+++ b/arch/riscv/kernel/sys_hwprobe.c
@@ -147,8 +147,9 @@ static bool hwprobe_ext0_has(const struct cpumask *cpus, unsigned long ext)
 	return (pair.value & ext);
 }
 
+#if defined(CONFIG_RISCV_PROBE_UNALIGNED_ACCESS)
 static u64 hwprobe_misaligned(const struct cpumask *cpus)
 {
 	int cpu;
 	u64 perf = -1ULL;
 
@@ -169,6 +170,25 @@ static u64 hwprobe_misaligned(const struct cpumask *cpus)
 
 	return perf;
 }
+#elif defined(CONFIG_RISCV_EMULATED_UNALIGNED_ACCESS)
+static u64 hwprobe_misaligned(const struct cpumask *cpus)
+{
+	if (unaligned_ctl_available())
+		return RISCV_HWPROBE_MISALIGNED_EMULATED;
+	else
+		return RISCV_HWPROBE_MISALIGNED_SLOW;
+}
+#elif defined(CONFIG_RISCV_SLOW_UNALIGNED_ACCESS)
+static u64 hwprobe_misaligned(const struct cpumask *cpus)
+{
+	return RISCV_HWPROBE_MISALIGNED_SLOW;
+}
+#elif defined(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS)
+static u64 hwprobe_misaligned(const struct cpumask *cpus)
+{
+	return RISCV_HWPROBE_MISALIGNED_FAST;
+}
+#endif
 
 static void hwprobe_one_pair(struct riscv_hwprobe *pair,
 			     const struct cpumask *cpus)
diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 1e3cec3f5d93..6153158adcb7 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -413,7 +413,9 @@ int handle_misaligned_load(struct pt_regs *regs)
 
 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
 
+#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
 	*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_EMULATED;
+#endif
 
 	if (!unaligned_enabled)
 		return -1;
diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
new file mode 100644
index 000000000000..52264ea4f0bd
--- /dev/null
+++ b/arch/riscv/kernel/unaligned_access_speed.c
@@ -0,0 +1,282 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2024 Rivos Inc.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "copy-unaligned.h"
+
+#define MISALIGNED_ACCESS_JIFFIES_LG2 1
+#define MISALIGNED_BUFFER_SIZE 0x4000
+#define MISALIGNED_BUFFER_ORDER get_order(MISALIGNED_BUFFER_SIZE)
+#define MISALIGNED_COPY_SIZE ((MISALIGNED_BUFFER_SIZE / 2) - 0x80)
+
+DEFINE_PER_CPU(long, misaligned_access_speed);
+
+#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
+static cpumask_t fast_misaligned_access;
+static int check_unaligned_access(void *param)
+{
+	int cpu = smp_processor_id();
+	u64 start_cycles, end_cycles;
+	u64 word_cycles;
+	u64 byte_cycles;
+	int ratio;
+	unsigned long start_jiffies, now;
+	struct page *page = param;
+	void *dst;
+	void *src;
+	long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
+
+	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+		return 0;
+
+	/* Make an unaligned destination buffer. */
+	dst = (void *)((unsigned long)page_address(page) | 0x1);
+	/* Unalign src as well, but differently (off by 1 + 2 = 3). */
+	src = dst + (MISALIGNED_BUFFER_SIZE / 2);
+	src += 2;
+	word_cycles = -1ULL;
+	/* Do a warmup. */
+	__riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+	preempt_disable();
+	start_jiffies = jiffies;
+	while ((now = jiffies) == start_jiffies)
+		cpu_relax();
+
+	/*
+	 * For a fixed amount of time, repeatedly try the function, and take
+	 * the best time in cycles as the measurement.
+	 */
+	while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
+		start_cycles = get_cycles64();
+		/* Ensure the CSR read can't reorder WRT to the copy. */
+		mb();
+		__riscv_copy_words_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+		/* Ensure the copy ends before the end time is snapped. */
+		mb();
+		end_cycles = get_cycles64();
+		if ((end_cycles - start_cycles) < word_cycles)
+			word_cycles = end_cycles - start_cycles;
+	}
+
+	byte_cycles = -1ULL;
+	__riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+	start_jiffies = jiffies;
+	while ((now = jiffies) == start_jiffies)
+		cpu_relax();
+
+	while (time_before(jiffies, now + (1 << MISALIGNED_ACCESS_JIFFIES_LG2))) {
+		start_cycles = get_cycles64();
+		mb();
+		__riscv_copy_bytes_unaligned(dst, src, MISALIGNED_COPY_SIZE);
+		mb();
+		end_cycles = get_cycles64();
+		if ((end_cycles - start_cycles) < byte_cycles)
+			byte_cycles = end_cycles - start_cycles;
+	}
+
+	preempt_enable();
+
+	/* Don't divide by zero. */
+	if (!word_cycles || !byte_cycles) {
+		pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned access speed\n",
+			cpu);
+
+		return 0;
+	}
+
+	if (word_cycles < byte_cycles)
+		speed = RISCV_HWPROBE_MISALIGNED_FAST;
+
+	ratio = div_u64((byte_cycles * 100), word_cycles);
+	pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
+		cpu,
+		ratio / 100,
+		ratio % 100,
+		(speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
+
+	per_cpu(misaligned_access_speed, cpu) = speed;
+
+	/*
+	 * Set the value of fast_misaligned_access of a CPU. These operations
+	 * are atomic to avoid race conditions.
+	 */
+	if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
+		cpumask_set_cpu(cpu, &fast_misaligned_access);
+	else
+		cpumask_clear_cpu(cpu, &fast_misaligned_access);
+
+	return 0;
+}
+
+static void check_unaligned_access_nonboot_cpu(void *param)
+{
+	unsigned int cpu = smp_processor_id();
+	struct page **pages = param;
+
+	if (smp_processor_id() != 0)
+		check_unaligned_access(pages[cpu]);
+}
+
+DEFINE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);
+
+static void modify_unaligned_access_branches(cpumask_t *mask, int weight)
+{
+	if (cpumask_weight(mask) == weight)
+		static_branch_enable_cpuslocked(&fast_unaligned_access_speed_key);
+	else
+		static_branch_disable_cpuslocked(&fast_unaligned_access_speed_key);
+}
+
+static void set_unaligned_access_static_branches_except_cpu(int cpu)
+{
+	/*
+	 * Same as set_unaligned_access_static_branches, except excludes the
+	 * given CPU from the result. When a CPU is hotplugged into an offline
+	 * state, this function is called before the CPU is set to offline in
+	 * the cpumask, and thus the CPU needs to be explicitly excluded.
+	 */
+
+	cpumask_t fast_except_me;
+
+	cpumask_and(&fast_except_me, &fast_misaligned_access, cpu_online_mask);
+	cpumask_clear_cpu(cpu, &fast_except_me);
+
+	modify_unaligned_access_branches(&fast_except_me, num_online_cpus() - 1);
+}
+
+static void set_unaligned_access_static_branches(void)
+{
+	/*
+	 * This will be called after check_unaligned_access_all_cpus so the
+	 * result of unaligned access speed for all CPUs will be available.
+	 *
+	 * To avoid the number of online cpus changing between reading
+	 * cpu_online_mask and calling num_online_cpus, cpus_read_lock must be
+	 * held before calling this function.
+	 */
+
+	cpumask_t fast_and_online;
+
+	cpumask_and(&fast_and_online, &fast_misaligned_access, cpu_online_mask);
+
+	modify_unaligned_access_branches(&fast_and_online, num_online_cpus());
+}
+
+static int lock_and_set_unaligned_access_static_branch(void)
+{
+	cpus_read_lock();
+	set_unaligned_access_static_branches();
+	cpus_read_unlock();
+
+	return 0;
+}
+
+arch_initcall_sync(lock_and_set_unaligned_access_static_branch);
+
+static int riscv_online_cpu(unsigned int cpu)
+{
+	static struct page *buf;
+
+	/* We are already set since the last check */
+	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+		goto exit;
+
+	buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+	if (!buf) {
+		pr_warn("Allocation failure, not measuring misaligned performance\n");
+		return -ENOMEM;
+	}
+
+	check_unaligned_access(buf);
+	__free_pages(buf, MISALIGNED_BUFFER_ORDER);
+
+exit:
+	set_unaligned_access_static_branches();
+
+	return 0;
+}
+
+static int riscv_offline_cpu(unsigned int cpu)
+{
+	set_unaligned_access_static_branches_except_cpu(cpu);
+
+	return 0;
+}
+
+/* Measure unaligned access speed on all CPUs present at boot in parallel. */
+static int check_unaligned_access_speed_all_cpus(void)
+{
+	unsigned int cpu;
+	unsigned int cpu_count = num_possible_cpus();
+	struct page **bufs = kzalloc(cpu_count * sizeof(struct page *),
+				     GFP_KERNEL);
+
+	if (!bufs) {
+		pr_warn("Allocation failure, not measuring misaligned performance\n");
+		return 0;
+	}
+
+	/*
+	 * Allocate separate buffers for each CPU so there's no fighting over
+	 * cache lines.
+	 */
+	for_each_cpu(cpu, cpu_online_mask) {
+		bufs[cpu] = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+		if (!bufs[cpu]) {
+			pr_warn("Allocation failure, not measuring misaligned performance\n");
+			goto out;
+		}
+	}
+
+	/* Check everybody except 0, who stays behind to tend jiffies. */
+	on_each_cpu(check_unaligned_access_nonboot_cpu, bufs, 1);
+
+	/* Check core 0. */
+	smp_call_on_cpu(0, check_unaligned_access, bufs[0], true);
+
+	/*
+	 * Setup hotplug callbacks for any new CPUs that come online or go
+	 * offline.
+	 */
+	cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
+				  riscv_online_cpu, riscv_offline_cpu);
+
+out:
+	for_each_cpu(cpu, cpu_online_mask) {
+		if (bufs[cpu])
+			__free_pages(bufs[cpu], MISALIGNED_BUFFER_ORDER);
+	}
+
+	kfree(bufs);
+	return 0;
+}
+
+static int check_unaligned_access_all_cpus(void)
+{
+	bool all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
+
+	if (!all_cpus_emulated)
+		return check_unaligned_access_speed_all_cpus();
+
+	return 0;
+}
+#else /* CONFIG_RISCV_PROBE_UNALIGNED_ACCESS */
+static int check_unaligned_access_all_cpus(void)
+{
+	check_unaligned_access_emulated_all_cpus();
+
+	return 0;
+}
+#endif
+
+arch_initcall(check_unaligned_access_all_cpus);
-- 
2.43.2
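
For completeness, the value computed by hwprobe_misaligned() is what
userspace sees through the riscv_hwprobe syscall. A minimal query,
assuming kernel headers recent enough to define __NR_riscv_hwprobe and
the RISCV_HWPROBE_* constants:

  #include <stdio.h>
  #include <asm/hwprobe.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
          struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

          /* cpusetsize == 0 and cpus == NULL ask about all online CPUs. */
          if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) == 0)
                  printf("misaligned perf: %lld\n",
                         (long long)(pair.value & RISCV_HWPROBE_MISALIGNED_MASK));
          return 0;
  }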