From: Evan Green
To: Palmer Dabbelt
Cc: Yangyu Chen, Evan Green, Charlie Jenkins, Andrew Jones, Albert Ou,
    Andy Chiu, Clément Léger, Conor Dooley, Costa Shulyupin,
    Jonathan Corbet, Paul Walmsley, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH v2 1/2] RISC-V: hwprobe: Add MISALIGNED_PERF key
Date: Tue, 25 Jun 2024 09:51:20 -0700
Message-Id: <20240625165121.2160354-2-evan@rivosinc.com>
In-Reply-To: <20240625165121.2160354-1-evan@rivosinc.com>
References: <20240625165121.2160354-1-evan@rivosinc.com>

RISCV_HWPROBE_KEY_CPUPERF_0 was mistakenly flagged as a bitmask in
hwprobe_key_is_bitmask(), when in reality it was an enum value. This
causes problems when it is used in conjunction with
RISCV_HWPROBE_WHICH_CPUS, since SLOW, FAST, and EMULATED have values
whose bits overlap with each other. If the caller asked for the set of
CPUs that was SLOW or EMULATED, the returned set would also include
CPUs that were FAST.

Introduce a new hwprobe key, RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF,
which returns the same values in response to a direct query (with no
flags), but is properly handled as an enumerated value. As a result,
SLOW, FAST, and EMULATED are all correctly treated as distinct values
under the new key when queried with the WHICH_CPUS flag.

Leave the old key in place to avoid disturbing applications that may
already rely on it, with or without its broken behavior with respect to
the WHICH_CPUS flag.
Fixes: e178bf146e4b ("RISC-V: hwprobe: Introduce which-cpus flag")
Signed-off-by: Evan Green
Reviewed-by: Charlie Jenkins
Reviewed-by: Andrew Jones
---
Changes in v2:
 - Clarified that the slow/fast distinction refers to misaligned word
   accesses. Previously the text just said misaligned accesses, leaving
   it ambiguous as to which type of access was measured.
 - Removed shifts in values (Andrew)
 - Renamed key to RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF (Palmer)

 Documentation/arch/riscv/hwprobe.rst  | 17 +++++++++++------
 arch/riscv/include/asm/hwprobe.h      |  2 +-
 arch/riscv/include/uapi/asm/hwprobe.h | 13 +++++++------
 arch/riscv/kernel/sys_hwprobe.c       |  1 +
 4 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/Documentation/arch/riscv/hwprobe.rst b/Documentation/arch/riscv/hwprobe.rst
index fc015b452ebf..c9f570b1ab60 100644
--- a/Documentation/arch/riscv/hwprobe.rst
+++ b/Documentation/arch/riscv/hwprobe.rst
@@ -207,8 +207,13 @@ The following keys are defined:
 * :c:macro:`RISCV_HWPROBE_EXT_ZVE64D`: The Vector sub-extension Zve64d is
   supported, as defined by version 1.0 of the RISC-V Vector extension manual.
 
-* :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance
-  information about the selected set of processors.
+* :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: Deprecated. Returns similar values to
+  :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`, but the key was
+  mistakenly classified as a bitmask rather than a value.
+
+* :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`: An enum value describing
+  the performance of misaligned scalar word accesses on the selected set of
+  processors.
 
   * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNKNOWN`: The performance of misaligned
     accesses is unknown.
@@ -217,12 +222,12 @@ The following keys are defined:
     emulated via software, either in or below the kernel. These accesses are
     always extremely slow.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned accesses are slower
-    than equivalent byte accesses. Misaligned accesses may be supported
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned word accesses are
+    slower than equivalent byte accesses. Misaligned accesses may be supported
     directly in hardware, or trapped and emulated by software.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned accesses are faster
-    than equivalent byte accesses.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned word accesses are
+    faster than equivalent byte accesses.
 
   * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNSUPPORTED`: Misaligned accesses are
     not supported at all and will generate a misaligned address fault.
diff --git a/arch/riscv/include/asm/hwprobe.h b/arch/riscv/include/asm/hwprobe.h
index 630507dff5ea..150a9877b0af 100644
--- a/arch/riscv/include/asm/hwprobe.h
+++ b/arch/riscv/include/asm/hwprobe.h
@@ -8,7 +8,7 @@
 
 #include
 
-#define RISCV_HWPROBE_MAX_KEY 6
+#define RISCV_HWPROBE_MAX_KEY 7
 
 static inline bool riscv_hwprobe_key_is_valid(__s64 key)
 {
diff --git a/arch/riscv/include/uapi/asm/hwprobe.h b/arch/riscv/include/uapi/asm/hwprobe.h
index 7b95fadbea2a..22073533cea8 100644
--- a/arch/riscv/include/uapi/asm/hwprobe.h
+++ b/arch/riscv/include/uapi/asm/hwprobe.h
@@ -66,13 +66,14 @@ struct riscv_hwprobe {
 #define RISCV_HWPROBE_EXT_ZVE64F	(1ULL << 40)
 #define RISCV_HWPROBE_EXT_ZVE64D	(1ULL << 41)
 #define RISCV_HWPROBE_KEY_CPUPERF_0	5
-#define  RISCV_HWPROBE_MISALIGNED_UNKNOWN	(0 << 0)
-#define  RISCV_HWPROBE_MISALIGNED_EMULATED	(1 << 0)
-#define  RISCV_HWPROBE_MISALIGNED_SLOW		(2 << 0)
-#define  RISCV_HWPROBE_MISALIGNED_FAST		(3 << 0)
-#define  RISCV_HWPROBE_MISALIGNED_UNSUPPORTED	(4 << 0)
-#define  RISCV_HWPROBE_MISALIGNED_MASK		(7 << 0)
+#define  RISCV_HWPROBE_MISALIGNED_UNKNOWN	0
+#define  RISCV_HWPROBE_MISALIGNED_EMULATED	1
+#define  RISCV_HWPROBE_MISALIGNED_SLOW		2
+#define  RISCV_HWPROBE_MISALIGNED_FAST		3
+#define  RISCV_HWPROBE_MISALIGNED_UNSUPPORTED	4
+#define  RISCV_HWPROBE_MISALIGNED_MASK		7
 #define RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE	6
+#define RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF	7
 /* Increase RISCV_HWPROBE_MAX_KEY when adding items. */
 
 /* Flags */
diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
index 83fcc939df67..991ceba67717 100644
--- a/arch/riscv/kernel/sys_hwprobe.c
+++ b/arch/riscv/kernel/sys_hwprobe.c
@@ -217,6 +217,7 @@ static void hwprobe_one_pair(struct riscv_hwprobe *pair,
 		break;
 
 	case RISCV_HWPROBE_KEY_CPUPERF_0:
+	case RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF:
 		pair->value = hwprobe_misaligned(cpus);
 		break;
 
-- 
2.34.1

From: Evan Green
To: Palmer Dabbelt
Cc: Yangyu Chen, Evan Green, Albert Ou, Alexandre Ghiti, Andrew Jones,
    Andy Chiu, Ben Dooks, Björn Töpel, Charlie Jenkins, Clément Léger,
    Conor Dooley, Costa Shulyupin, Erick Archer, "Gustavo A. R. Silva",
    Jonathan Corbet, Paul Walmsley, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH v2 2/2] RISC-V: hwprobe: Add SCALAR to misaligned perf defines
Date: Tue, 25 Jun 2024 09:51:21 -0700
Message-Id: <20240625165121.2160354-3-evan@rivosinc.com>
In-Reply-To: <20240625165121.2160354-1-evan@rivosinc.com>
References: <20240625165121.2160354-1-evan@rivosinc.com>

In preparation for misaligned vector performance hwprobe keys, rename
the hwprobe key values associated with misaligned scalar accesses to
include the term SCALAR.
Signed-off-by: Evan Green
Reviewed-by: Charlie Jenkins
---
Changes in v2:
 - Added patch to rename misaligned perf key values (Palmer)

 Documentation/arch/riscv/hwprobe.rst       | 20 ++++++++++----------
 arch/riscv/include/uapi/asm/hwprobe.h      | 10 +++++-----
 arch/riscv/kernel/sys_hwprobe.c            | 10 +++++-----
 arch/riscv/kernel/traps_misaligned.c       |  6 +++---
 arch/riscv/kernel/unaligned_access_speed.c | 12 ++++++------
 5 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/Documentation/arch/riscv/hwprobe.rst b/Documentation/arch/riscv/hwprobe.rst
index c9f570b1ab60..83f7f3c1347f 100644
--- a/Documentation/arch/riscv/hwprobe.rst
+++ b/Documentation/arch/riscv/hwprobe.rst
@@ -215,22 +215,22 @@ The following keys are defined:
   the performance of misaligned scalar word accesses on the selected set of
   processors.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNKNOWN`: The performance of misaligned
-    accesses is unknown.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN`: The performance of
+    misaligned accesses is unknown.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_EMULATED`: Misaligned accesses are
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED`: Misaligned accesses are
     emulated via software, either in or below the kernel. These accesses are
     always extremely slow.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned word accesses are
-    slower than equivalent byte accesses. Misaligned accesses may be supported
-    directly in hardware, or trapped and emulated by software.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW`: Misaligned word accesses
+    are slower than equivalent byte accesses. Misaligned accesses may be
+    supported directly in hardware, or trapped and emulated by software.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned word accesses are
-    faster than equivalent byte accesses.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_FAST`: Misaligned word accesses
+    are faster than equivalent byte accesses.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNSUPPORTED`: Misaligned accesses are
-    not supported at all and will generate a misaligned address fault.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_UNSUPPORTED`: Misaligned accesses
+    are not supported at all and will generate a misaligned address fault.
 
 * :c:macro:`RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE`: An unsigned int which
   represents the size of the Zicboz block in bytes.
diff --git a/arch/riscv/include/uapi/asm/hwprobe.h b/arch/riscv/include/uapi/asm/hwprobe.h
index 22073533cea8..e11684d8ae1c 100644
--- a/arch/riscv/include/uapi/asm/hwprobe.h
+++ b/arch/riscv/include/uapi/asm/hwprobe.h
@@ -66,11 +66,11 @@ struct riscv_hwprobe {
 #define RISCV_HWPROBE_EXT_ZVE64F	(1ULL << 40)
 #define RISCV_HWPROBE_EXT_ZVE64D	(1ULL << 41)
 #define RISCV_HWPROBE_KEY_CPUPERF_0	5
-#define  RISCV_HWPROBE_MISALIGNED_UNKNOWN	0
-#define  RISCV_HWPROBE_MISALIGNED_EMULATED	1
-#define  RISCV_HWPROBE_MISALIGNED_SLOW		2
-#define  RISCV_HWPROBE_MISALIGNED_FAST		3
-#define  RISCV_HWPROBE_MISALIGNED_UNSUPPORTED	4
+#define  RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN	0
+#define  RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED	1
+#define  RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW		2
+#define  RISCV_HWPROBE_MISALIGNED_SCALAR_FAST		3
+#define  RISCV_HWPROBE_MISALIGNED_SCALAR_UNSUPPORTED	4
 #define  RISCV_HWPROBE_MISALIGNED_MASK		7
 #define RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE	6
 #define RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF	7
diff --git a/arch/riscv/kernel/sys_hwprobe.c b/arch/riscv/kernel/sys_hwprobe.c
index 991ceba67717..fbf952e7383e 100644
--- a/arch/riscv/kernel/sys_hwprobe.c
+++ b/arch/riscv/kernel/sys_hwprobe.c
@@ -170,13 +170,13 @@ static u64 hwprobe_misaligned(const struct cpumask *cpus)
 			perf = this_perf;
 
 		if (perf != this_perf) {
-			perf = RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+			perf = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 			break;
 		}
 	}
 
 	if (perf == -1ULL)
-		return RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 
 	return perf;
 }
@@ -184,12 +184,12 @@ static u64 hwprobe_misaligned(const struct cpumask *cpus)
 static u64 hwprobe_misaligned(const struct cpumask *cpus)
 {
 	if (IS_ENABLED(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS))
-		return RISCV_HWPROBE_MISALIGNED_FAST;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_FAST;
 
 	if (IS_ENABLED(CONFIG_RISCV_EMULATED_UNALIGNED_ACCESS) && unaligned_ctl_available())
-		return RISCV_HWPROBE_MISALIGNED_EMULATED;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED;
 
-	return RISCV_HWPROBE_MISALIGNED_SLOW;
+	return RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW;
 }
 #endif
 
diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index b62d5a2f4541..192cd5603e95 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -338,7 +338,7 @@ int handle_misaligned_load(struct pt_regs *regs)
 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
 
 #ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
-	*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_EMULATED;
+	*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED;
 #endif
 
 	if (!unaligned_enabled)
@@ -532,13 +532,13 @@ static bool check_unaligned_access_emulated(int cpu)
 	unsigned long tmp_var, tmp_val;
 	bool misaligned_emu_detected;
 
-	*mas_ptr = RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+	*mas_ptr = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 
 	__asm__ __volatile__ (
 		"       "REG_L" %[tmp], 1(%[ptr])\n"
 		: [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory");
 
-	misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_EMULATED);
+	misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED);
 	/*
 	 * If unaligned_ctl is already set, this means that we detected that all
 	 * CPUS uses emulated misaligned access at boot time. If that changed
diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
index a9a6bcb02acf..160628a2116d 100644
--- a/arch/riscv/kernel/unaligned_access_speed.c
+++ b/arch/riscv/kernel/unaligned_access_speed.c
@@ -34,9 +34,9 @@ static int check_unaligned_access(void *param)
 	struct page *page = param;
 	void *dst;
 	void *src;
-	long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
+	long speed = RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW;
 
-	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
 		return 0;
 
 	/* Make an unaligned destination buffer. */
@@ -95,14 +95,14 @@ static int check_unaligned_access(void *param)
 	}
 
 	if (word_cycles < byte_cycles)
-		speed = RISCV_HWPROBE_MISALIGNED_FAST;
+		speed = RISCV_HWPROBE_MISALIGNED_SCALAR_FAST;
 
 	ratio = div_u64((byte_cycles * 100), word_cycles);
 	pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
 		cpu,
 		ratio / 100, ratio % 100,
-		(speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
+		(speed == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST) ? "fast" : "slow");
 
 	per_cpu(misaligned_access_speed, cpu) = speed;
 
@@ -110,7 +110,7 @@ static int check_unaligned_access(void *param)
 	 * Set the value of fast_misaligned_access of a CPU. These operations
 	 * are atomic to avoid race conditions.
 	 */
-	if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
+	if (speed == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST)
 		cpumask_set_cpu(cpu, &fast_misaligned_access);
 	else
 		cpumask_clear_cpu(cpu, &fast_misaligned_access);
@@ -188,7 +188,7 @@ static int riscv_online_cpu(unsigned int cpu)
 	static struct page *buf;
 
 	/* We are already set since the last check */
-	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
 		goto exit;
 
 	buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
-- 
2.34.1