From: Deepak Gupta
Date: Tue, 29 Apr 2025 17:16:31 -0700
Subject: [PATCH v14 14/27] riscv: Implements arch agnostic indirect branch tracking prctls
Message-Id: <20250429-v5_user_cfi_series-v14-14-5239410d012a@rivosinc.com>
References: <20250429-v5_user_cfi_series-v14-0-5239410d012a@rivosinc.com>
In-Reply-To: <20250429-v5_user_cfi_series-v14-0-5239410d012a@rivosinc.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Andrew Morton, "Liam R. Howlett", Vlastimil Babka,
    Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley,
    Rob Herring, Krzysztof Kozlowski, Arnd Bergmann, Christian Brauner,
    Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook, Jonathan Corbet,
    Shuah Khan, Jann Horn, Conor Dooley, Miguel Ojeda, Alex Gaynor, Boqun Feng,
    Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
    Trevor Gross
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-riscv@lists.infradead.org, devicetree@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kselftest@vger.kernel.org, alistair.francis@wdc.com,
    richard.henderson@linaro.org, jim.shu@sifive.com, andybnac@gmail.com,
    kito.cheng@sifive.com, charlie@rivosinc.com, atishp@rivosinc.com,
    evan@rivosinc.com, cleger@rivosinc.com, alexghiti@rivosinc.com,
    samitolvanen@google.com, broonie@kernel.org, rick.p.edgecombe@intel.com,
    rust-for-linux@vger.kernel.org, Zong Li, Deepak Gupta
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: b4 0.13.0

Implement the arch-agnostic indirect branch tracking prctls:
PR_SET_INDIR_BR_LP_STATUS, PR_GET_INDIR_BR_LP_STATUS and
PR_LOCK_INDIR_BR_LP_STATUS.

Indirect branch tracking is disabled on exec; libc is expected to
enable it later via prctl. On exception entry, SR_ELP is now cleared
in sstatus together with SR_SUM and SR_FS_VS.

Reviewed-by: Zong Li
Signed-off-by: Deepak Gupta
---
 arch/riscv/include/asm/usercfi.h | 14 +++++++
 arch/riscv/kernel/entry.S        |  2 +-
 arch/riscv/kernel/process.c      |  5 +++
 arch/riscv/kernel/usercfi.c      | 79 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
index b530ff5baa6e..cea7908cdb3a 100644
--- a/arch/riscv/include/asm/usercfi.h
+++ b/arch/riscv/include/asm/usercfi.h
@@ -16,6 +16,8 @@ struct kernel_clone_args;
 struct cfi_state {
 	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
 	unsigned long ubcfi_locked : 1;
+	unsigned long ufcfi_en : 1; /* Enable for forward cfi. Note that ELP goes in sstatus */
+	unsigned long ufcfi_locked : 1;
 	unsigned long user_shdw_stk; /* Current user shadow stack pointer */
 	unsigned long shdw_stk_base; /* Base address of shadow stack */
 	unsigned long shdw_stk_size; /* size of shadow stack */
@@ -32,6 +34,10 @@ bool is_shstk_locked(struct task_struct *task);
 bool is_shstk_allocated(struct task_struct *task);
 void set_shstk_lock(struct task_struct *task);
 void set_shstk_status(struct task_struct *task, bool enable);
+bool is_indir_lp_enabled(struct task_struct *task);
+bool is_indir_lp_locked(struct task_struct *task);
+void set_indir_lp_status(struct task_struct *task, bool enable);
+void set_indir_lp_lock(struct task_struct *task);
 
 #define PR_SHADOW_STACK_SUPPORTED_STATUS_MASK (PR_SHADOW_STACK_ENABLE)
 
@@ -57,6 +63,14 @@ void set_shstk_status(struct task_struct *task, bool enable);
 
 #define set_shstk_status(task, enable)
 
+#define is_indir_lp_enabled(task) false
+
+#define is_indir_lp_locked(task) false
+
+#define set_indir_lp_status(task, enable)
+
+#define set_indir_lp_lock(task)
+
 #endif /* CONFIG_RISCV_USER_CFI */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index c4bfe2085c41..978115567bca 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -169,7 +169,7 @@ SYM_CODE_START(handle_exception)
 	 * Disable the FPU/Vector to detect illegal usage of floating point
 	 * or vector in kernel space.
 	 */
-	li t0, SR_SUM | SR_FS_VS
+	li t0, SR_SUM | SR_FS_VS | SR_ELP
 
 	REG_L s0, TASK_TI_USER_SP(tp)
 	csrrc s1, CSR_STATUS, t0
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index cd11667593fe..4587201dd81d 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -160,6 +160,11 @@ void start_thread(struct pt_regs *regs, unsigned long pc,
 	set_shstk_status(current, false);
 	set_shstk_base(current, 0, 0);
 	set_active_shstk(current, 0);
+	/*
+	 * disable indirect branch tracking on exec.
+	 * libc will enable it later via prctl.
+	 */
+	set_indir_lp_status(current, false);
 
 #ifdef CONFIG_64BIT
 	regs->status &= ~SR_UXL;
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
index 08620bdae696..2ebe789caa6b 100644
--- a/arch/riscv/kernel/usercfi.c
+++ b/arch/riscv/kernel/usercfi.c
@@ -72,6 +72,35 @@ void set_shstk_lock(struct task_struct *task)
 	task->thread_info.user_cfi_state.ubcfi_locked = 1;
 }
 
+bool is_indir_lp_enabled(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.ufcfi_en;
+}
+
+bool is_indir_lp_locked(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.ufcfi_locked;
+}
+
+void set_indir_lp_status(struct task_struct *task, bool enable)
+{
+	if (!cpu_supports_indirect_br_lp_instr())
+		return;
+
+	task->thread_info.user_cfi_state.ufcfi_en = enable ? 1 : 0;
+
+	if (enable)
+		task->thread.envcfg |= ENVCFG_LPE;
+	else
+		task->thread.envcfg &= ~ENVCFG_LPE;
+
+	csr_write(CSR_ENVCFG, task->thread.envcfg);
+}
+
+void set_indir_lp_lock(struct task_struct *task)
+{
+	task->thread_info.user_cfi_state.ufcfi_locked = 1;
+}
 /*
  * If size is 0, then to be compatible with regular stack we want it to be as big as
  * regular stack. Else PAGE_ALIGN it and return back
@@ -371,3 +400,53 @@ int arch_lock_shadow_stack_status(struct task_struct *task,
 
 	return 0;
 }
+
+int arch_get_indir_br_lp_status(struct task_struct *t, unsigned long __user *status)
+{
+	unsigned long fcfi_status = 0;
+
+	if (!cpu_supports_indirect_br_lp_instr())
+		return -EINVAL;
+
+	/* indirect branch tracking is enabled on the task or not */
+	fcfi_status |= (is_indir_lp_enabled(t) ? PR_INDIR_BR_LP_ENABLE : 0);
+
+	return copy_to_user(status, &fcfi_status, sizeof(fcfi_status)) ? -EFAULT : 0;
+}
+
+int arch_set_indir_br_lp_status(struct task_struct *t, unsigned long status)
+{
+	bool enable_indir_lp = false;
+
+	if (!cpu_supports_indirect_br_lp_instr())
+		return -EINVAL;
+
+	/* indirect branch tracking is locked and further can't be modified by user */
+	if (is_indir_lp_locked(t))
+		return -EINVAL;
+
+	/* Reject unknown flags */
+	if (status & ~PR_INDIR_BR_LP_ENABLE)
+		return -EINVAL;
+
+	enable_indir_lp = (status & PR_INDIR_BR_LP_ENABLE);
+	set_indir_lp_status(t, enable_indir_lp);
+
+	return 0;
+}
+
+int arch_lock_indir_br_lp_status(struct task_struct *task,
+				 unsigned long arg)
+{
+	/*
+	 * If indirect branch tracking is not supported or not enabled on task,
+	 * nothing to lock here
+	 */
+	if (!cpu_supports_indirect_br_lp_instr() ||
+	    !is_indir_lp_enabled(task) || arg != 0)
+		return -EINVAL;
+
+	set_indir_lp_lock(task);
+
+	return 0;
+}
-- 
2.43.0
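
For illustration only (not part of the patch): a minimal userspace sketch of
how these prctls might be exercised. It assumes the PR_SET/GET/LOCK_INDIR_BR_LP_STATUS
and PR_INDIR_BR_LP_ENABLE definitions from the uapi prctl patch earlier in this
series are visible via the installed kernel headers; on a kernel or CPU without
landing-pad (Zicfilp) support the calls simply fail with EINVAL, as in the
arch_* handlers above.

#include <stdio.h>
#include <sys/prctl.h>		/* prctl() */
#include <linux/prctl.h>	/* PR_*_INDIR_BR_LP_STATUS, PR_INDIR_BR_LP_ENABLE */

int main(void)
{
	unsigned long status = 0;

	/* Ask the kernel to enable landing-pad enforcement for this task. */
	if (prctl(PR_SET_INDIR_BR_LP_STATUS, PR_INDIR_BR_LP_ENABLE, 0, 0, 0))
		perror("PR_SET_INDIR_BR_LP_STATUS");

	/* Read the per-task status back into 'status'. */
	if (prctl(PR_GET_INDIR_BR_LP_STATUS, &status, 0, 0, 0) == 0)
		printf("indirect branch tracking: %s\n",
		       (status & PR_INDIR_BR_LP_ENABLE) ? "enabled" : "disabled");

	/* Lock the current state; later PR_SET_INDIR_BR_LP_STATUS calls then fail. */
	if (prctl(PR_LOCK_INDIR_BR_LP_STATUS, 0, 0, 0, 0))
		perror("PR_LOCK_INDIR_BR_LP_STATUS");

	return 0;
}

Per arch_lock_indir_br_lp_status() above, locking only succeeds with a zero
argument and while tracking is enabled, which is why the lock call is ordered
after the enable in this sketch.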